[receiver/awscontainerinsight] High cardinality with default configuration #35861


Closed
pjanotti opened this issue Oct 17, 2024 · 3 comments · Fixed by #37697
Labels
- bug: Something isn't working
- never stale: Issues marked with this label will never be staled and automatically removed
- receiver/awscontainerinsight
- waiting-for-code-owners

Comments

pjanotti commented Oct 17, 2024

Component(s)

receiver/awscontainerinsight

What happened?

Description

Enabling the receiver generates high-cardinality metrics because, by default, it adds a Timestamp attribute to the resource of the published metrics.

Steps to Reproduce

Enable the receiver in an AWS ECS EC2 cluster.

Expected Result

No high cardinality metrics from enabling the receiver.

Actual Result

High cardinality metrics.

Collector version

v0.111.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

No response

Log output

No response

Additional context

The Timestamp may be useful for post-processing by some AWS services. However, per the general OTel recommendation, high-cardinality metrics should be avoided in the default configuration, even if a workaround using the resource attribute processor is relatively easy. It seems to me that this attribute should be disabled by default and enabled only via explicit configuration.
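The workaround mentioned above can be sketched as a collector config fragment. The `Timestamp` key comes from the issue description; the pipeline wiring (exporter choice, receiver placement) is illustrative:

```yaml
processors:
  resource:
    attributes:
      # Drop the high-cardinality attribute before metrics are exported.
      - key: Timestamp
        action: delete

service:
  pipelines:
    metrics:
      receivers: [awscontainerinsightreceiver]
      processors: [resource]
      exporters: [awsemf]
```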

@pjanotti added the labels bug (Something isn't working) and needs triage (New item requiring triage) on Oct 17, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


github-actions bot commented Jan 1, 2025

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


pxaws commented Feb 13, 2025

I would suggest defining a config option or environment variable to control the Timestamp field. For AWS Container Insights use cases, we don't have the so-called high-cardinality issue, because Timestamp is converted into a field in EMF logs by the AWS EMF exporter (in this repo). EMF logs have their own way of defining the metrics schema (metric name, dimensions, ...), and Timestamp is used only to obtain the timestamp.
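The suggestion above could take the shape of a receiver-level flag. The option name `add_timestamp` below is purely hypothetical and is not part of the actual receiver configuration:

```yaml
receivers:
  awscontainerinsightreceiver:
    # Hypothetical knob: opt back in to the Timestamp resource attribute
    # for EMF-based pipelines that rely on it.
    add_timestamp: true
```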

dmitryax pushed a commit that referenced this issue Feb 24, 2025
…37697)

#### Description
Per code inspection it looks like `tags` was being used as a convenience
feature to pass the timestamp when converting the metrics to OTLP.
However, the timestamp should not be a resource attribute, because it produces
high-cardinality time series.

This change keeps the current usage of `tags` but ensures that the
timestamp is not added as a resource attribute. Code owners should
consider whether the timestamp should later be passed outside the `tags`
map - a change much larger than the current one.
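The approach described in the PR can be sketched as follows. The helper name, the `timestampKey` constant, and the string-map types are illustrative assumptions for this sketch, not the receiver's actual code:

```go
package main

import "fmt"

// timestampKey is the tag the receiver uses to carry the metric timestamp
// through the tags map (name assumed for illustration).
const timestampKey = "Timestamp"

// tagsToResourceAttributes copies tags into resource attributes while
// skipping the timestamp key, so it never becomes a resource attribute
// and cannot inflate time-series cardinality.
func tagsToResourceAttributes(tags map[string]string) map[string]string {
	attrs := make(map[string]string, len(tags))
	for k, v := range tags {
		if k == timestampKey {
			continue // keep the timestamp out of resource attributes
		}
		attrs[k] = v
	}
	return attrs
}

func main() {
	tags := map[string]string{
		"Timestamp":   "1729180800000",
		"ClusterName": "demo-cluster",
	}
	fmt.Println(tagsToResourceAttributes(tags))
}
```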

#### Link to tracking issue
Fixes #35861

#### Testing
Updated respective tests.

#### Documentation
Changelog added.