ELK Stack: what it is used for and how to use it for observability

Understanding what is really happening inside a modern application has become increasingly complex. Microservices, cloud environments, and the growing number of physical or virtual servers all contribute to an explosion of technical signals. This distribution makes so-called “traditional” log analysis—based on directly connecting to a single machine—hard to sustain at scale.

It is in this context that the ELK stack has established itself as a technical foundation for analysing, searching, and visualising technical data, particularly logs.
In this article, we answer three key questions:

  • What exactly is the ELK Stack?
  • What is it used for today, especially in observability?
  • How can it be used effectively without managing the underlying infrastructure?

ELK Stack: a clear definition

The ELK Stack is a historical acronym that refers to three components:

  • Elasticsearch: a distributed search and analytics engine;
  • Kibana: a data exploration and visualisation interface;
  • Logstash: a data collection and transformation pipeline, whose role in the stack varies depending on the setup.

At present, Elasticsearch and Kibana form the functional core of the ELK stack, particularly for data analysis and visualisation use cases, once the data has been ingested into Elasticsearch.

The term Elastic Stack is also used, referring more broadly to the entire Elastic ecosystem. In common usage—especially in cloud environments—the ELK Stack generally refers to the combination of a data collection mechanism, often agent-based, with Elasticsearch for storage and analysis, and Kibana for visualisation.

What is the ELK Stack used for?

The ELK Stack is used to centralise, analyse, and exploit technical data coming from systems and applications. It enables large volumes of data to be indexed and analysed across wide time ranges, while correlating information from multiple sources, services, or environments.

This analytical capability makes it a widely adopted tool for understanding application behaviour, diagnosing incidents, investigating anomalies, or exploring operational data. Its main strength lies in the ability to move quickly from raw data to actionable insights, without relying on specialised tools for each individual use case.
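
To make this concrete, here is a minimal sketch using the official Elasticsearch Python client: it indexes one structured log event, then searches it back. The endpoint, credentials, index name, and field names are placeholders chosen for the example, not values prescribed by the stack.

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Endpoint, credentials, index and field names below are placeholders,
# not values imposed by the stack itself.
es = Elasticsearch(
    "https://my-cluster.example.com:9200",
    basic_auth=("elastic", "changeme"),
)

# Index one structured log event; refresh="wait_for" makes it
# immediately searchable, which is convenient for a quick test.
es.index(
    index="app-logs-2024.05.01",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "INFO",
        "service": "checkout",
        "message": "order 42 confirmed",
    },
    refresh="wait_for",
)

# Search it back with a simple full-text query across all daily indices.
resp = es.search(index="app-logs-*", query={"match": {"message": "order"}})
print(resp["hits"]["total"]["value"], "matching events")
```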

ELK Stack and observability: what is the connection?

Observability aims to understand the internal state of a system through its observable signals. Among these signals, logs play a central role, as they describe precisely what an application is doing at a given point in time.

In this context, the ELK Stack provides a particularly well-suited foundation for log-centric observability. Elasticsearch enables large-scale search and correlation of events, while Kibana provides a visual layer that makes analysis and interpretation easier. Together, they make it possible to detect abnormal behaviour, reconstruct the timeline of an incident, and analyse trends over time.
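
As an illustration, the sketch below counts error-level events in five-minute buckets over the last 24 hours using a date_histogram aggregation, the kind of query a Kibana visualisation typically runs behind the scenes. The index pattern and field names (app-logs-*, level, @timestamp) are assumptions about how the logs were ingested.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://my-cluster.example.com:9200",  # placeholder endpoint
    basic_auth=("elastic", "changeme"),     # placeholder credentials
)

# Count ERROR-level events per 5-minute bucket over the last 24 hours:
# a sudden spike in these buckets usually marks the start of an incident.
resp = es.search(
    index="app-logs-*",
    size=0,  # we only want the aggregation, not the individual documents
    query={"bool": {"filter": [
        {"term": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-24h"}}},
    ]}},
    aggs={"errors_over_time": {
        "date_histogram": {"field": "@timestamp", "fixed_interval": "5m"},
    }},
)

for bucket in resp["aggregations"]["errors_over_time"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```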

In an observability approach, the ELK Stack is therefore mainly used as a log analysis foundation, complemented by other signals depending on the needs.

How to use the ELK Stack without managing infrastructure

One of the main barriers to adopting the ELK Stack has long been its operational complexity. Deploying, maintaining, and scaling such a stack requires handling capacity planning, upgrades, security, and backups.

In cloud environments, this operational burden can quickly distract teams from their primary goal: analysing data rather than managing infrastructure. This is why many teams now turn to managed approaches.

Managed approach

In a managed approach, Elasticsearch and Kibana are provided as ready-to-use services. The platform handles the underlying infrastructure and a large part of the day-to-day operations, such as service provisioning, maintenance, backups, and access control according to its own model. This allows teams to focus on usage rather than operations.

In this model, log collection is handled by the platform’s mechanisms. On Clever Cloud, applications and add-ons can expose their logs through drains, which redirect them to a target Elasticsearch instance without deploying any collection tooling inside the PaaS.

On Clever Cloud, it is for example possible to create an Elastic Stack add-on that provides:

  • a managed Elasticsearch service;
  • an associated Kibana instance;
  • built-in security and backup mechanisms;
  • a connection using the host and access credentials provided by the add-on (see the sketch below).
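
To illustrate that last point, here is a minimal connection sketch with the Python client, assuming the add-on exposes its credentials as environment variables; the exact variable names may differ and should be checked in the add-on's dashboard.

```python
import os
from elasticsearch import Elasticsearch

# The add-on injects its connection details into the application's
# environment. The variable names below are indicative only; check the
# add-on dashboard for the exact names exposed to your application.
es = Elasticsearch(
    os.environ["ES_ADDON_URI"],           # assumed variable name
    basic_auth=(
        os.environ["ES_ADDON_USER"],      # assumed variable name
        os.environ["ES_ADDON_PASSWORD"],  # assumed variable name
    ),
)

# Quick connectivity check: print the Elasticsearch version in use.
print(es.info()["version"]["number"])
```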

This approach makes it possible to leverage the ELK Stack without managing low-level infrastructure concerns, while retaining the analytical power of Elasticsearch.

Concrete observability use cases

Application log analysis

Centralising application logs in Elasticsearch makes it possible to quickly search for errors, explore specific events, or filter data using multiple criteria. This capability is essential for understanding the real behaviour of an application in production.
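
For example, the sketch below combines a full-text match with filters on service, log level, and time range; the field names and values are hypothetical and depend on how your logs are structured.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://my-cluster.example.com:9200",  # placeholder endpoint
    basic_auth=("elastic", "changeme"),     # placeholder credentials
)

# Errors from a hypothetical "checkout" service over the last hour
# whose message mentions "timeout".
resp = es.search(
    index="app-logs-*",
    query={"bool": {
        "must": [{"match": {"message": "timeout"}}],
        "filter": [
            {"term": {"service": "checkout"}},
            {"term": {"level": "ERROR"}},
            {"range": {"@timestamp": {"gte": "now-1h"}}},
        ],
    }},
)
print(resp["hits"]["total"]["value"], "matching events")
```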

Incident diagnosis

When an incident occurs, event correlation becomes critical. The ELK Stack allows teams to analyse event timelines, identify the components involved, and better understand root causes, without being limited to a fragmented view of logs.
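
Assuming every service logs a shared correlation identifier (a hypothetical request_id field here), reconstructing the timeline of a single request across components can be as simple as the sketch below.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://my-cluster.example.com:9200",  # placeholder endpoint
    basic_auth=("elastic", "changeme"),     # placeholder credentials
)

# All events sharing a given correlation id, in chronological order,
# regardless of which service or index produced them.
resp = es.search(
    index="app-logs-*",
    query={"term": {"request_id": "3f6c1a2e"}},  # hypothetical id value
    sort=[{"@timestamp": "asc"}],
    size=100,
)

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src["service"], src["level"], src["message"])
```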

Application behaviour monitoring

Over time, analysing indexed data in Elasticsearch helps detect trends, abnormal spikes, or behavioural changes. Kibana dashboards facilitate this analysis and provide a consolidated view tailored to technical teams.

Conclusion

The ELK Stack remains a solid foundation for analysing and exploiting technical data, particularly logs. Its role in observability practices has grown alongside the evolution of cloud-native and distributed architectures.

By relying on the functional core of the ELK Stack—namely Elasticsearch and Kibana—it is possible to build an analysis environment suited to modern needs without necessarily managing the underlying infrastructure. Managed approaches help reduce operational complexity and allow teams to focus on data value.

ELK Stack use cases continue to evolve. Recent work by Elastic on new log management models, such as streams, opens the door to more flexible approaches better suited to current data volumes. These evolutions build on existing foundations without calling into question Elasticsearch’s central role in observability data analysis.

For those looking to explore these use cases in a controlled environment, creating an Elastic Stack add-on on Clever Cloud offers a pragmatic way to approach Elasticsearch-based observability without turning operations into a constraint.
