GitHub receiver step spans show huge times when step is skipped #39020
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hi @adrielp! Thank you for creating the issue - may I ask you to also include the collector config, so it's easier for others to reproduce?
Reproduces with the following config:

```yaml
receivers:
  github:
    initial_delay: 1s
    collection_interval: 60s
    scrapers:
      scraper:
        github_org: MercuryTechnologies
        auth:
          authenticator: bearertokenauth/github
    webhook:
      secret: "${env:GITHUB_WEBHOOK_SECRET}"
      endpoint: 0.0.0.0:8080
      path: /events
      health_path: /health

processors:
  batch:
    timeout: 10s
    send_batch_size: 1024

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
      "x-honeycomb-dataset": "${env:HONEYCOMB_DATASET}"
    tls:
      insecure: false
  debug: # To use locally for testing/debugging
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200

extensions:
  bearertokenauth/github:
    token: "${env:GITHUB_PAT}"
  health_check:
    endpoint: 0.0.0.0:13133

service:
  extensions: [health_check, bearertokenauth/github]
  pipelines:
    traces:
      receivers:
        - github
      processors:
        - batch
      exporters:
        - otlp
```
Yeah, it's the default behavior for that config with any traces. GitHub does a weird thing when emitting skipped jobs. As soon as I'm back from KubeCon I'll have a fix up for this. I largely wanted to open this up for myself to go fix (though if someone wants to take it on, no objections). @arianvp thanks for providing the config.
Enjoy KubeCon!
Also happens for
Discussed in the CICD SIG today. Requires a fix.
A pull request to fix this issue is open.
…etry#39499)

#### Description
Fixes end span times for jobs when the run is skipped or cancelled. Additionally adds trace testing using the golden package.

#### Link to tracking issue
Fixes open-telemetry#39020
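For context on what "trace testing using the golden package" might look like, here is a minimal Go sketch. The `golden` and `ptracetest` helpers do exist in opentelemetry-collector-contrib, but the test name, the testdata file names, and the `eventToTraces` helper are hypothetical placeholders for illustration, not the receiver's actual test code.

```go
package githubreceiver_test // hypothetical package name for illustration

import (
	"path/filepath"
	"testing"

	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/golden"
	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/ptracetest"
	"github.com/stretchr/testify/require"
	"go.opentelemetry.io/collector/pdata/ptrace"
)

// TestSkippedRunSpans sketches a golden-file comparison: traces built from a
// recorded "skipped" workflow event are compared against an expected traces
// file in testdata. File names and eventToTraces are placeholders.
func TestSkippedRunSpans(t *testing.T) {
	actual := eventToTraces(t, filepath.Join("testdata", "skipped_run_event.json"))

	expected, err := golden.ReadTraces(filepath.Join("testdata", "skipped_run_expected.yaml"))
	require.NoError(t, err)

	// CompareTraces fails with a diff if span structure, attributes, or
	// timestamps differ, which would catch inflated end times like these.
	require.NoError(t, ptracetest.CompareTraces(expected, actual))
}

// eventToTraces is a stand-in for whatever converts a webhook payload into
// ptrace.Traces inside the receiver's tests.
func eventToTraces(t *testing.T, _ string) ptrace.Traces {
	t.Helper()
	return ptrace.NewTraces()
}
```

A regression test along these lines would pin the expected (zero-length) spans for skipped steps so inflated end times cannot silently return.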
This issue is not fixed for me. Maybe some interaction with #39511? It seems to happen to the queued span.
@arianvp can you show me what the start and stop times are on those spans? Please feel free to troubleshoot with me more quickly on Slack.
Component(s)
receiver/github
What happened?
Description
The step spans for skipped runs consistently show times like `4639920h 46m`. I'm mainly opening this up for myself to track and am actively looking at the fix, but wanted to provide transparency on the discovery.

Steps to Reproduce
Any job with skipped steps will show this way.
Expected Result
Expected result is `0` time with the status of skipped.

Actual Result
Actual result is an absurdly large time with a status of skipped, as depicted below.
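As an illustration of how a duration this large can arise, here is a small, self-contained Go sketch. It is not the receiver's code; the `step` struct, its field names, and the guard are assumptions showing how a zero-valued timestamp turns an end-minus-start calculation into a nonsensically huge span, and how clamping skipped steps to zero avoids it.

```go
package main

import (
	"fmt"
	"time"
)

// step mirrors the rough shape of a GitHub Actions step as decoded from a
// webhook payload; the field names are illustrative, not the receiver's types.
type step struct {
	Conclusion  string
	StartedAt   time.Time // left at the zero value when the step never ran
	CompletedAt time.Time
}

// spanDuration returns a naive end-minus-start duration alongside a guarded
// one that collapses skipped steps (or zero-valued timestamps) to zero.
func spanDuration(s step) (naive, guarded time.Duration) {
	naive = s.CompletedAt.Sub(s.StartedAt)
	if s.Conclusion == "skipped" || s.StartedAt.IsZero() || s.CompletedAt.IsZero() {
		return naive, 0
	}
	return naive, naive
}

func main() {
	// A skipped step: the payload carries a completion time but no start time.
	skipped := step{Conclusion: "skipped", CompletedAt: time.Now()}
	naive, guarded := spanDuration(skipped)
	fmt.Printf("naive: %v  guarded: %v\n", naive, guarded)
	// naive comes out as millions of hours (time.Duration saturates) because
	// StartedAt is Go's zero time; guarded is 0, which is what a skipped
	// step's span should report.
}
```

The receiver's actual fix may differ; this only demonstrates the failure mode.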
Collector version
v0.122.0
Environment information
Environment
All
OpenTelemetry Collector configuration
Log output
Additional context
No response