Changed behaviour of resource generation in Prometheus receiver since upgrade to Prometheus 3.X #38097
Comments
We probably want to discuss this. I'm inclined to agree with @bacherfl. That is a big change though, and I apologize for not noticing it. Changing the behavior here would also require a change in the spec, wouldn't it?
Hmmm... Should we only use the `service.name` attribute from the scraped target if `honor_labels` is true?
I've also heard some users complain about not having a …
Yes, we need to evaluate the impact that would have on the …
Part of this behavior is this bug: #37937. As part of fixing that bug, the OTel Collector will not send dotted label names in the near term. However, we will still need to account for the future situation where UTF-8 is functioning and these dotted names are sent. Indeed, I believe the info function will need to be updated to support the original label names.
This PR would fix the problem, I believe: #37938
I think the proposed solution won't work, because then the Prometheus receiver won't be able to scrape endpoints that expose UTF-8 metric names. The global variable makes things difficult for the collector architecture 😬
Ah, this is a bit of a misconception that I have not done a good job of explaining. The global variable is best thought of as a way to gate whether the code is UTF-8 aware, and by default, yes, validity checks will look for UTF-8 validity. However, there are new APIs, `IsValidLegacy()` and `IsValidLegacyMetricName()`, that new code can call if it wants to check whether something is valid under the old rules. So the idea is that a client can extend its code to look for either UTF-8 or legacy validity, and then flip the SDK global flag when it's ready. This is how Prometheus did it: even when the SDK global is set to UTF-8, individual data sources can be set to legacy mode.
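As a rough sketch of that pattern (assuming a version of `github.com/prometheus/common/model` that already ships the legacy helpers mentioned above; exact signatures may differ between releases), a caller can accept either flavour explicitly instead of relying only on the global flag:

```go
package main

import (
	"fmt"
	"unicode/utf8"

	"github.com/prometheus/common/model"
)

// acceptLabelName reports whether a label name is usable by this component:
// names that satisfy the legacy [a-zA-Z_][a-zA-Z0-9_]* rules are always
// accepted, and any non-empty valid UTF-8 name is additionally accepted once
// the component has opted in to UTF-8 support.
func acceptLabelName(name model.LabelName, utf8Allowed bool) bool {
	if name.IsValidLegacy() {
		return true
	}
	return utf8Allowed && len(name) > 0 && utf8.ValidString(string(name))
}

func main() {
	fmt.Println(acceptLabelName("service_name", false)) // true: valid under legacy rules
	fmt.Println(acceptLabelName("service.name", false)) // false: dots require UTF-8 support
	fmt.Println(acceptLabelName("service.name", true))  // true: accepted once UTF-8 is enabled
}
```

Once every code path checks validity this way, flipping the SDK-level flag to UTF-8 no longer changes what an individual component accepts.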
I really need to write that blog post :)
Similar to #2386 for 119. This PR includes a reaction to breaking upstream changes for Prometheus 3.0. See:

* open-telemetry/opentelemetry-collector-contrib#38097
* open-telemetry/opentelemetry-collector-contrib#37937
* open-telemetry/opentelemetry-collector-contrib#38109
There's work going on in Prometheus that is relevant here: prometheus/prometheus#16066. This PR will update the scrape manager to allow escaping options during scrapes. We could roll back to the old behavior with this, but now that the damage is done... should we do it?
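If that lands, individual scrape jobs should be able to opt back into underscore escaping. Purely as a hypothetical sketch (the option name and value below are assumptions based on that PR, not confirmed syntax):

```yaml
scrape_configs:
  - job_name: opentelemetry-collector
    # Hypothetical per-job option from prometheus/prometheus#16066:
    # request underscore-escaped names instead of allow-utf-8.
    metric_name_escaping_scheme: underscores
    static_configs:
      - targets: ["0.0.0.0:8888"]
```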
Component(s)
receiver/prometheus
Describe the issue you're reporting
This affects version 0.120.1
Since the recent upgrade to Prometheus 3.x in our dependencies, I have noticed a change in the OTel resources created by the Prometheus receiver: the default validation scheme used by the receiver is now set to `UTF8`, as described in the Prometheus 3.x migration guide, so labels like `service_name` will now be received as `service.name`. This can potentially interfere with the `service.name` of the resulting OTel resource, which was previously derived from the `job` label of either the metric itself or the scrape config.

One particular example where this is a breaking change is when we use the following config to export the self-monitoring metrics of the collector:
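A minimal sketch of such a setup (the job name, port, and debug exporter here are assumptions, not the exact config from this report):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: opentelemetry-collector
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"]

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [debug]
```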
The scrape config used here scrapes the collector's self-monitoring Prometheus endpoint, which, before the upgrade to Prometheus 3.x, contained e.g. the following metric:
As a result, the resource created from this metric then had the following attributes:
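For illustration only (the metric name, label values, and port below are assumptions, not the report's actual output): with legacy escaping the scraped response carried underscored labels, and the receiver derived `service.name` from the scrape config's `job` name, roughly like this:

```
# Scraped response (legacy escaping):
otelcol_exporter_sent_metric_points{exporter="otlp",service_instance_id="5a3cd0a1",service_name="otelcol-contrib",service_version="0.119.0"} 1234

# Resulting resource attribute:
service.name: opentelemetry-collector   (derived from the job name in the scrape config)
```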
Now, since release `0.120.0`, the Prometheus receiver will set the `Accept` header to `Accept:text/plain;version=1.0.0;escaping=allow-utf-8` when accessing the metrics endpoint, which causes the response of the scrape request to be delivered as follows:

This will result in the following resource:
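Again for illustration only (the values are assumptions): with `escaping=allow-utf-8` the same metric is exposed with quoted UTF-8 label names, and the receiver then takes `service.name` from the scraped label rather than from the `job` name, roughly:

```
# Scraped response (allow-utf-8 escaping):
otelcol_exporter_sent_metric_points{exporter="otlp","service.instance.id"="5a3cd0a1","service.name"="otelcol-contrib","service.version"="0.119.0"} 1234

# Resulting resource attribute:
service.name: otelcol-contrib   (taken from the scraped service.name label)
```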
I'm not sure if this should be considered a bug, as it seems logical to use the `service.name` label instead of the `job` label to create the resource, but since this is a notable change in behaviour, I would like to raise awareness to hopefully prevent confusion as to why generated resources might now be named differently.