ELK Stack Setup Example

This example demonstrates how to set up and configure the ELK (Elasticsearch, Logstash, Kibana) stack with the OpenTelemetry Collector.

Prerequisites

  • Docker Desktop installed and running
  • At least 8GB of available RAM (ELK stack is resource-intensive)
  • Ports available: 5601 (Kibana), 9200 (Elasticsearch)

Step 1: Enable ELK Stack Configuration

Update your application configuration (e.g., appsettings.json):

{
  "ObservabilityStack": {
    "Mode": "OtelCollector",
    "CollectorEndpoint": "http://otel-collector:4317"
  },
  "ObservabilityBackend": {
    "EnabledBackends": ["ELK"],
    "ElkStack": {
      "ElasticsearchEndpoint": "http://elasticsearch:9200",
      "KibanaEndpoint": "http://kibana:5601"
    }
  }
}

Step 2: Start the Observability Stack

# Navigate to Docker Compose directory
cd <your-docker-compose-directory>

# Start ELK stack and collector
docker-compose up -d elasticsearch kibana otel-collector
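If your compose file does not yet define these services, a minimal sketch is below. Image tags, the collector config path, and the heap size are assumptions to adapt; note the Elasticsearch exporter ships in the collector's contrib distribution, so the contrib image is required.

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - discovery.type=single-node    # single-node dev cluster
      - xpack.security.enabled=false  # no TLS/auth for local testing only
      - ES_JAVA_OPTS=-Xms2g -Xmx2g    # cap the JVM heap
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.4
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol/config.yaml
    ports:
      - "4317:4317"  # OTLP gRPC
    depends_on:
      - elasticsearch
```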

Step 3: Configure Collector

The collector configuration must define the Elasticsearch exporters and wire them into the traces and logs pipelines (the Elasticsearch exporter is only available in the collector's contrib distribution). Update your OpenTelemetry Collector configuration file:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  elasticsearch/traces:
    endpoints: [http://elasticsearch:9200]
    tls:
      insecure_skip_verify: true
    traces_index: otel_traces_index
    mapping:
      mode: otel

  elasticsearch/logs:
    endpoints: [http://elasticsearch:9200]
    tls:
      insecure_skip_verify: true
    logs_index: otel_logs_index
    mapping:
      mode: otel

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch/traces]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch/logs]

Restart the collector:

docker-compose restart otel-collector

Step 4: Verify Setup

Check Elasticsearch

curl http://localhost:9200/_cluster/health

Expected response (a single-node cluster may report "yellow" instead of "green" when replica shards cannot be assigned):

{
  "cluster_name": "docker-cluster",
  "status": "green",
  "number_of_nodes": 1
}

Check Kibana

  1. Open http://localhost:5601 in your browser
  2. Go to Stack Management > Index Patterns (called Data Views in Kibana 8+)
  3. Create index pattern: otel_*
  4. Select @timestamp as the time field
  5. Save the pattern
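The same index pattern can be created without the UI via Kibana's saved-objects API. The object id otel-pattern below is an arbitrary choice; on Kibana 8+ the data-views API is the preferred equivalent:

```shell
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern/otel-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "otel_*", "timeFieldName": "@timestamp"}}'
```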

Check Collector Logs

docker-compose logs otel-collector | grep elasticsearch

Step 5: Generate Test Data

Start your application and generate some telemetry data:

# Make some API calls to generate traces and logs
curl http://localhost:8081/api/health

Step 6: View Data in Kibana

  1. Go to Discover in Kibana
  2. Select the otel_logs_index or otel_traces_index pattern
  3. View and search your telemetry data

Configuration Examples

Production Configuration

For production, enable security:

exporters:
  elasticsearch/traces:
    endpoints: [https://elasticsearch:9200]
    tls:
      ca_file: /etc/certs/elasticsearch-ca.crt
      cert_file: /etc/certs/elasticsearch-client.crt
      key_file: /etc/certs/elasticsearch-client.key
    user: ${ELASTICSEARCH_USER}
    password: ${ELASTICSEARCH_PASSWORD}

Index Lifecycle Management

Configure index lifecycle in Elasticsearch:

{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0d",
        "actions": {
          "rollover": {
            "max_size": "50GB",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d"
      }
    }
  }
}
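To apply such a policy, save it to a file and register it, then reference it from an index template so new otel_* indices pick it up. The policy and template names here are illustrative:

```shell
# Register the lifecycle policy (body saved as ilm-policy.json)
curl -X PUT "http://localhost:9200/_ilm/policy/otel-ilm-policy" \
  -H "Content-Type: application/json" \
  -d @ilm-policy.json

# Attach it to new otel_* indices via an index template
curl -X PUT "http://localhost:9200/_index_template/otel-template" \
  -H "Content-Type: application/json" \
  -d '{
    "index_patterns": ["otel_*"],
    "template": {
      "settings": { "index.lifecycle.name": "otel-ilm-policy" }
    }
  }'
```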

Troubleshooting

Elasticsearch Not Starting

Problem: Elasticsearch container exits immediately

Solution:

  • Increase Docker memory limit (Settings > Resources > Memory > 8GB)
  • Check logs: docker-compose logs elasticsearch
  • Verify no port conflicts: netstat -an | grep 9200 (on Windows: netstat -an | findstr 9200)

No Data in Kibana

Problem: Data not appearing in Kibana

Solution:

  1. Verify index exists: curl http://localhost:9200/_cat/indices
  2. Check collector logs: docker-compose logs otel-collector
  3. Verify index pattern in Kibana matches actual index names
  4. Check time range in Kibana (default is last 15 minutes)
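A quick way to confirm documents are actually arriving (index names assume the collector configuration shown in Step 3):

```shell
curl -s "http://localhost:9200/otel_logs_index/_count"
curl -s "http://localhost:9200/otel_traces_index/_count"
```

Each call returns a JSON body with a count field; a count of zero means the exporter is not writing to that index.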

High Memory Usage

Problem: Elasticsearch using too much memory

Solution:

  • Lower the Elasticsearch JVM heap via ES_JAVA_OPTS in docker-compose.yml
  • Enable index lifecycle management to delete old data
  • Reduce batch sizes in collector configuration
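For example, capping the heap in docker-compose.yml (the 1 GB value is a starting point, not a recommendation; keep -Xms and -Xmx equal):

```yaml
services:
  elasticsearch:
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"  # fixed 1 GB heap
```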

Use Cases

Centralized Logging

  • Aggregate logs from multiple services
  • Search and filter logs across services
  • Create dashboards for log analysis

Distributed Tracing

  • View request flows across services
  • Identify performance bottlenecks
  • Debug distributed system issues

Security Monitoring

  • Monitor authentication events
  • Track security-related logs
  • Create alerts for suspicious activity

Further Reading