
Multiple Backends Setup Example

This example demonstrates how to configure the OpenTelemetry Collector to export telemetry data to multiple backends simultaneously.

Use Case

Exporting to multiple backends is useful when:

  • You want redundancy (the same data backed up in multiple systems)
  • Different teams use different tools
  • You are migrating from one backend to another
  • Different data types need to go to different backends

Prerequisites

  • Docker Desktop installed and running
  • Sufficient resources (multiple backends increase resource usage)
  • Ports available for all backends

Step 1: Enable Multiple Backends

Update your application configuration (e.g., appsettings.json):

{
  "ObservabilityStack": {
    "Mode": "OtelCollector",
    "CollectorEndpoint": "http://otel-collector:4317"
  },
  "ObservabilityBackend": {
    "EnabledBackends": ["PrometheusGrafana", "Jaeger", "ELK"],
    "PrometheusGrafana": {
      "PrometheusEndpoint": "http://prometheus:9090",
      "GrafanaEndpoint": "http://grafana:3000"
    },
    "Jaeger": {
      "Endpoint": "http://jaeger:4317"
    },
    "ElkStack": {
      "ElasticsearchEndpoint": "http://elasticsearch:9200",
      "KibanaEndpoint": "http://kibana:5601"
    }
  }
}

Step 2: Start All Backends

cd <your-docker-compose-directory>

# Start all observability services
docker-compose up -d \
  otel-collector \
  prometheus \
  grafana \
  jaeger \
  elasticsearch \
  kibana
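
Confirm that all containers came up before moving on (the service names match the docker-compose excerpt above):

# Every listed service should report an "Up" state
docker-compose ps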

Step 3: Configure Collector for Multiple Backends

Update your OpenTelemetry Collector configuration file:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  memory_limiter:
    limit_mib: 400
    spike_limit_mib: 100
    check_interval: 5s

exporters:
  # Prometheus for metrics
  prometheus:
    endpoint: 0.0.0.0:8889

  # Jaeger for traces
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true

  # Elasticsearch for logs and traces
  elasticsearch/traces:
    endpoints: [http://elasticsearch:9200]
    tls:
      insecure_skip_verify: true
    traces_index: otel_traces_index

  elasticsearch/logs:
    endpoints: [http://elasticsearch:9200]
    tls:
      insecure_skip_verify: true
    logs_index: otel_logs_index

service:
  pipelines:
    # Traces to both Jaeger and Elasticsearch
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp/jaeger, elasticsearch/traces]

    # Metrics to Prometheus
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [prometheus]

    # Logs to Elasticsearch
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [elasticsearch/logs]
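
Optionally, validate the configuration before restarting. This sketch assumes the contrib image and a config file named otel-collector-config.yaml in the current directory; adjust both to match your setup:

# Run the collector's built-in "validate" command against the file
docker run --rm \
  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml" \
  otel/opentelemetry-collector-contrib \
  validate --config=/etc/otelcol-contrib/config.yaml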

Restart the collector so it picks up the new configuration:

docker-compose restart otel-collector
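
After the restart, check the collector logs; an unreachable backend typically shows up as connection errors or export retry messages:

docker-compose logs --tail=50 otel-collector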

Step 4: Verify All Backends

Prometheus

curl http://localhost:9091/-/healthy

View metrics: http://localhost:9091

Grafana

curl http://localhost:3000/api/health

Access UI: http://localhost:3000

Jaeger

curl http://localhost:16686/

Access UI: http://localhost:16686

Elasticsearch

curl http://localhost:9200/_cluster/health

Kibana

curl http://localhost:5601/api/status

Access UI: http://localhost:5601
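
Collector

You can also confirm that the collector itself is exposing metrics for Prometheus to scrape, assuming its Prometheus exporter port (8889 in the Step 3 configuration) is published to the host:

curl http://localhost:8889/metrics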

Step 5: Generate Test Data

Start your application and generate telemetry:

# Make API calls
for i in {1..20}; do
  curl http://localhost:8081/api/health
  sleep 0.5
done
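
To confirm that traces from these calls are reaching Jaeger, you can also query its API for the list of reporting services (the service name depends on your application's OpenTelemetry configuration):

curl http://localhost:16686/api/services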

Step 6: View Data in Each Backend

Prometheus/Grafana

  1. Open Grafana: http://localhost:3000
  2. View metrics dashboards
  3. Query metrics in Prometheus: http://localhost:9091

Jaeger

  1. Open Jaeger: http://localhost:16686
  2. Search for traces
  3. View trace details and spans

Elasticsearch/Kibana

  1. Open Kibana: http://localhost:5601
  2. Create index patterns: otel_traces_index, otel_logs_index
  3. View traces and logs in Discover
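
If nothing appears in Kibana, check whether the indices exist at all; the pattern below matches the traces_index and logs_index names from Step 3:

curl "http://localhost:9200/_cat/indices/otel_*?v"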

Advanced Configuration

Different Data to Different Backends

Route telemetry to different backends based on the value of an attribute (here, an attribute named backend), using the routing processor:

processors:
  routing:
    from_attribute: backend
    default_exporters: [otlp/jaeger]
    table:
      - value: jaeger
        exporters: [otlp/jaeger]
      - value: elasticsearch
        exporters: [elasticsearch/traces]

service:
  pipelines:
    traces:
      receivers: [otlp]
      # routing is recommended to be the last processor in the pipeline
      processors: [batch, routing]
      exporters: [otlp/jaeger, elasticsearch/traces]

Load Balancing

Listing more than one exporter in a pipeline sends a full copy of the data to each instance, which gives redundancy rather than load distribution:

exporters:
  otlp/jaeger-1:
    endpoint: jaeger-1:4317
  otlp/jaeger-2:
    endpoint: jaeger-2:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger-1, otlp/jaeger-2]
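
To actually split traces across the two Jaeger instances instead of duplicating them, the contrib distribution provides a loadbalancing exporter. A minimal sketch, reusing the same jaeger-1 and jaeger-2 hostnames:

exporters:
  loadbalancing:
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:
          - jaeger-1:4317
          - jaeger-2:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [loadbalancing]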

Conditional Export

Export only a subset of the data. The filter processor drops telemetry that matches the listed conditions, so to keep only production traces you drop everything that is not production:

processors:
  filter:
    traces:
      span:
        - 'attributes["environment"] != "production"'

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, batch]
      exporters: [otlp/jaeger]  # Only production traces reach Jaeger

Performance Considerations

Resource Usage

Multiple backends increase:

  • Memory usage (each exporter keeps its own sending queue and buffers)
  • Network traffic (a full copy of the data is sent to each destination)
  • CPU usage (serialization and export work is repeated per exporter)

Optimization

  1. Batch Processing: Always use the batch processor
  2. Sampling: Reduce data volume with sampling (see the sketch after this list)
  3. Filtering: Drop unnecessary data before export
  4. Resource Limits: Set appropriate memory limits on the collector
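
A minimal sketch of a sampled traces pipeline, assuming the probabilistic_sampler processor from the contrib distribution; the 25% rate is illustrative, not a recommendation:

processors:
  probabilistic_sampler:
    sampling_percentage: 25   # keep roughly 1 in 4 traces
  batch:
    timeout: 1s
    send_batch_size: 512

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, probabilistic_sampler, batch]
      exporters: [otlp/jaeger, elasticsearch/traces]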

Troubleshooting

High Memory Usage

Problem: Collector using too much memory

Solution:

  • Reduce batch sizes
  • Enable sampling
  • Filter data before export
  • Tune the memory_limiter, or give the container more memory (see the sketch after this list)
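
A hedged example of tightening the memory_limiter and batch settings from Step 3; the numbers are illustrative starting points, not recommendations:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 300          # hard memory ceiling for data held by the collector
    spike_limit_mib: 75     # headroom subtracted from limit_mib to get the soft limit
  batch:
    timeout: 1s
    send_batch_size: 512    # smaller batches hold less data in memory at once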

Slow Performance

Problem: Collector is slow

Solution:

  • Check exporter queue depths (one way to do this is sketched after this list)
  • Verify backend connectivity
  • Reduce number of exporters
  • Optimize processors
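
The collector's own telemetry endpoint (port 8888 by default, assuming it is published to the host) exposes per-exporter queue metrics that show whether a backend is falling behind:

# Sending-queue size and capacity per exporter
curl -s http://localhost:8888/metrics | grep otelcol_exporter_queue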

Data Duplication

Problem: Same data in multiple backends

Solution:

  • This is expected behavior when exporting to multiple backends
  • Use filtering to route specific data to specific backends
  • Consider if you really need all data in all backends

Use Cases

Redundancy

  • Backup data in multiple systems
  • Disaster recovery
  • Data archival

Team Preferences

  • Different teams use different tools
  • Gradual migration
  • Tool evaluation

Data Segregation

  • Metrics to Prometheus
  • Traces to Jaeger
  • Logs to Elasticsearch

Further Reading