Component(s)
receiver/prometheusremotewrite
What happened?
Description
Hi! I'm trying to forward metrics from Prometheus to the OpenTelemetry Collector using the prometheusremotewrite receiver. Prometheus receives metrics via remote_write from several sources (mainly vmagent and prometheus-agent) and then forwards them to the Otel Collector:
```mermaid
flowchart TD
    vma1(vmagent-01)
    vma2(vmagent-02)
    pa1(prometheus-agent-01)
    prometheus(Prometheus)
    otg(Otel Collector Gateway)
    prwr["Prometheus Remote<br>Write Receiver"]
    vma1 -->|Remote Write| prometheus
    vma2 -->|Remote Write| prometheus
    pa1 -->|Remote Write| prometheus
    prometheus -->|Remote Write| prwr
    prwr <--> otg
```
However, when Prometheus sends these metrics to the Otel Collector, the receiver rejects them with HTTP 400 Bad Request and an unsupported metric type "METRIC_TYPE_UNSPECIFIED" error.
This happens for many different metrics; the rejected type is always METRIC_TYPE_UNSPECIFIED.
Steps to Reproduce
Configuration
Prometheus config:

```yaml
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

remote_write:
  - url: "http://192.168.138.91:32411/api/v1/write"
    name: "Remote write to Otelcol gateway"
    protobuf_message: io.prometheus.write.v2.Request
```

Collector config (relevant part):

```yaml
receivers:
  prometheusremotewrite:
    endpoint: 192.168.138.91:32411
  otlp:
    protocols:
      grpc:
        endpoint: 192.168.138.91:5555
      http:
        endpoint: 192.168.138.91:32412
```

Expected Result
Metrics should be successfully forwarded from Prometheus to the OpenTelemetry Collector, even if the metrics were originally written to Prometheus via remote_write (e.g., by prometheus-agent or vmagent).
Actual Result
The prometheusremotewritereceiver component rejects incoming metrics from Prometheus with METRIC_TYPE_UNSPECIFIED errors.
Example log output from Prometheus:
```
level=ERROR component=remote remote_name="Remote write to Otelcol gateway"
err="server returned HTTP status 400 Bad Request:
unsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"angie_slabs_pages_slots_used\""
```
Let me know if you’d like me to provide full debug logs or a test environment to reproduce the issue.
Collector version
0.131.1
Environment information
Environment
OS: Ubuntu 24.04
Compiler: go version go1.24.4 linux/amd64
OpenTelemetry Collector configuration
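The full Collector configuration was not attached; the receivers block shared above is the relevant part. For orientation only, a minimal end-to-end sketch of how the prometheusremotewrite receiver could be wired into a metrics pipeline is shown below. The debug exporter and the pipeline wiring are assumptions made for illustration, not the reporter's actual setup:

```yaml
receivers:
  prometheusremotewrite:
    endpoint: 192.168.138.91:32411

exporters:
  # Assumed exporter, used here only to make the sketch self-contained.
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [prometheusremotewrite]
      exporters: [debug]
```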
Log output
```
Aug 07 13:33:01 test-monit-01 prometheus[186361]: time=2025-08-07T13:33:01.437+03:00 level=ERROR source=queue_manager.go:1668 msg="non-recoverable error" component=remote remote_name="Remote write to Otelcol gateway" url=http://192.168.138.91:32411/api/v1/write failedSampleCount=3995 failedHistogramCount=0 failedExemplarCount=0 err="server returned HTTP status 400 Bad Request: unsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_mtu_bytes\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_mtu_bytes\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_mtu_bytes\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_mtu_bytes\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_mtu_bytes\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_network_name_assign_type\"\nunsupported metric type \"METRIC_TYPE"
Aug 07 13:35:12 test-monit-01 prometheus[186361]: time=2025-08-07T13:35:12.942+03:00 level=ERROR source=queue_manager.go:1668 msg="non-recoverable error" component=remote remote_name="Remote write to Otelcol gateway" url=http://192.168.138.91:32411/api/v1/write failedSampleCount=3995 failedHistogramCount=0 failedExemplarCount=0 err="server returned HTTP status 400 Bad Request: unsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_filesystem_info\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_filesystem_info\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_filesystem_info\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_filesystem_info\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_filesystem_info\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_flush_requests_time_seconds_total\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_flush_requests_time_seconds_total\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_flush_requests_time_seconds_total\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_flush_requests_time_seconds_total\"\nunsupported metric type \"METRIC_TYPE_UNSPECIFIED\" for metric \"node_disk_flush_requests_time_seconds_total\"\nunsupported metric type \"METRIC_TYPE_UNSPECI"
```
Additional context
In Prometheus, when data is forwarded via remote_write, metric types are not explicitly defined; they have to be inferred from the metric's structure and behavior. This might be the reason the receiver is unable to handle them.
As far as I understand, the OpenTelemetry Collector might expect the metric type to be explicitly set, which Prometheus does not do for remote_write, especially when the metrics were themselves ingested into Prometheus via remote_write (e.g., from prometheus-agent or vmagent) rather than scraped.
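To make that concrete, the sketch below is a purely conceptual illustration (the field names are invented for readability and do not correspond to any real Remote Write payload or receiver format) of the difference between a series Prometheus scraped itself and one that only ever arrived via remote_write:

```yaml
# Conceptual illustration only; not an actual wire format.
scraped_by_prometheus:
  metric: prometheus_http_requests_total
  metadata:
    type: counter        # learned from the "# TYPE" line at scrape time
arrived_via_remote_write:
  metric: node_network_mtu_bytes
  metadata:
    type: unspecified    # no type metadata available, so the receiver rejects the sample
```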