[exporter/exporterhelper] queue_size and queue_capacity internal metrics are missing the data type attribute #9943

@dloucasfx

Description

Describe the bug
When the same exporter definition is used in pipelines of different data types (for example, the same otlp exporter in both the metrics and logs pipelines), the exact same metric is initialized twice https://github.com/dloucasfx/opentelemetry-collector/blob/main/exporter/exporterhelper/queue_sender.go#L118-L139, which makes it impossible to know which queue the metric is measuring.
We need to add the queue data type as an attribute, which requires expanding the queue_sender struct to expose the data type field.
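A minimal sketch of what that could look like, using the OpenTelemetry Go metric API directly; the queueSender stand-in, its dataType field, and the data_type attribute key are assumptions for illustration, not the actual queue_sender implementation:

```go
package exporterhelper

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// queueSender is a stand-in for exporterhelper's queue_sender struct,
// expanded with the pipeline data type it serves (hypothetical field).
type queueSender struct {
	dataType string // "logs", "metrics", or "traces"
	queue    interface{ Size() int64 }
}

// registerQueueSizeMetric reports the queue size with a data_type
// attribute, so each pipeline's queue produces its own MTS even when
// the same exporter definition is shared across pipelines.
func registerQueueSizeMetric(qs *queueSender) error {
	meter := otel.GetMeterProvider().Meter("exporterhelper")
	gauge, err := meter.Int64ObservableGauge(
		"otelcol_exporter_queue_size",
		metric.WithDescription("Current size of the retry queue (in batches)"),
	)
	if err != nil {
		return err
	}
	_, err = meter.RegisterCallback(func(_ context.Context, o metric.Observer) error {
		// The data_type attribute disambiguates queues that share an
		// exporter name across pipelines.
		o.ObserveInt64(gauge, qs.queue.Size(),
			metric.WithAttributes(attribute.String("data_type", qs.dataType)))
		return nil
	}, gauge)
	return err
}
```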

Steps to reproduce
1- Define an exporter, for example: otlp
2- Use this exporter in both the logs and metrics pipelines
3- Monitor the internal metric otelcol_exporter_queue_size and notice that there is a single MTS, so you can't tell which queue data type it's measuring

```
Metric #0
Descriptor:
     -> Name: otelcol_exporter_queue_size
     -> Description: Current size of the retry queue (in batches)
     -> Unit:
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> exporter: Str(otlp)
     -> service_instance_id: Str(0fafd546-8c21-4bd4-a8c7-0faeec4482df)
     -> service_name: Str(otelcontribcol)
     -> service_version: Str(0.97.0-dev)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-04-11 16:09:44.873 +0000 UTC
Value: 0.000000
```

What did you expect to see?
Two MTSes for otelcol_exporter_queue_size: one with a logs data type attribute and one with a metrics data type attribute.
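For illustration, something like the following, where the data_type attribute key is assumed here (the exact name is up to the implementation):

```
Data point attributes:
     -> data_type: Str(logs)
     -> exporter: Str(otlp)
Value: 0.000000

Data point attributes:
     -> data_type: Str(metrics)
     -> exporter: Str(otlp)
Value: 0.000000
```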

What did you see instead?
A single otelcol_exporter_queue_size MTS, with no way to tell which queue data type it's measuring.

What version did you use?
v0.98.0

What config did you use?

```yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      filesystem:
      memory:
  tcplog:
    listen_address: "0.0.0.0:54525"
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: [ "127.0.0.1:8899" ]
          metric_relabel_configs:
            - source_labels: [ __name__ ]
              regex: 'otelcol_rpc_.*'
              action: drop
            - source_labels: [ __name__ ]
              regex: 'otelcol_http_.*'
              action: drop
            - source_labels: [ __name__ ]
              regex: 'otelcol_processor_batch_.*'
              action: drop

processors:
  batch:
  resourcedetection/default:
    detectors: [system, ecs, ec2, azure]
    override: false

exporters:
  otlp:
    endpoint: 127.0.0.1:4317
    tls:
      insecure: true

service:
  telemetry:
    logs:
      level: "debug"
    metrics:
      address: "127.0.0.1:8899"
  pipelines:
    metrics/default:
      receivers: [prometheus/internal]
      processors: [batch, resourcedetection/default]
      exporters: [otlp]
    logs/default:
      receivers: [ tcplog ]
      processors: [ batch, resourcedetection/default ]
      exporters: [ otlp ]
```

Metadata

Labels: bug, collector-telemetry


Status: Done
