
GitHub receiver step spans show huge times when step is skipped #39020


Closed
adrielp opened this issue Mar 27, 2025 · 10 comments · Fixed by #39499
Labels: bug (Something isn't working), receiver/github

Comments

adrielp (Contributor) commented Mar 27, 2025

Component(s)

receiver/github

What happened?

Description

The step spans for skipped runs consistently show durations like 4639920h 46m. I'm mainly opening this issue for myself to track, and I'm actively looking at the fix, but I wanted to provide transparency on the discovery.

Steps to Reproduce

Any job with skipped steps will reproduce this.

Expected Result

The expected result is a duration of 0 with a status of skipped.

Actual Result

The actual result is an absurdly large duration with a status of skipped, as depicted below.

[Screenshot: step spans with status skipped showing durations like 4639920h 46m]
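
One plausible mechanism, stated here purely as an assumption (nothing in this thread confirms it): when a step never runs, its timestamps may decode to Go's zero-value time.Time, so any duration derived from the span lands in the centuries. A minimal, stdlib-only Go sketch of that failure class:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical skipped step: the payload carries no real
	// timestamps, so the decoded time.Time is left at its zero
	// value (0001-01-01T00:00:00Z). This is an assumption for
	// illustration, not a confirmed reading of the receiver code.
	var startedAt time.Time
	completedAt := time.Now()

	// Deriving a duration from a zero start time yields a value
	// measured in centuries. time.Time.Sub saturates at the maximum
	// time.Duration (~292 years), so this prints
	// "2562047h47m16.854775807s" -- the same class of garbage as
	// the 4639920h 46m shown in the screenshot above.
	fmt.Println(completedAt.Sub(startedAt))
}

The exact figure in the screenshot differs, presumably because the backend derives the duration from the raw span timestamps rather than through Go's saturating Sub, but the failure class is the same.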

Collector version

v0.122.0

Environment information

Environment

All

OpenTelemetry Collector configuration

Log output

Additional context

No response

adrielp added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Mar 27, 2025

Pinging code owners: see Adding Labels via Comments if you do not have permissions to add labels yourself.

bacherfl (Contributor) commented:

Hi @adrielp! Thank you for creating the issue. May I ask you to also include the collector config, so it's easier for others to reproduce?

arianvp commented Apr 1, 2025

Reproduces with the following config:

receivers:
  github:
    initial_delay: 1s
    collection_interval: 60s
    scrapers:
      scraper:
        github_org: MercuryTechnologies
        auth:
          authenticator: bearertokenauth/github
    webhook:
      secret: "${env:GITHUB_WEBHOOK_SECRET}"
      endpoint: 0.0.0.0:8080
      path: /events
      health_path: /health

processors:
  batch:
    timeout: 10s
    send_batch_size: 1024

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
      "x-honeycomb-dataset": "${env:HONEYCOMB_DATASET}"
    tls:
      insecure: false
  debug: # To use locally for testing/debugging
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200
    
extensions:
  bearertokenauth/github:
    token: "${env:GITHUB_PAT}"
  health_check:
    endpoint: 0.0.0.0:13133

service:
  extensions: [health_check, bearertokenauth/github]
  pipelines:
    traces:
      receivers:
        - github
      processors:
        - batch
      exporters:
        - otlp

adrielp (Contributor, Author) commented Apr 1, 2025

Yeah, it's the default behavior for any config with traces. GitHub does a weird thing when emitting skipped jobs. As soon as I'm back from KubeCon I'll have a fix up for this. I largely wanted to open this issue for myself to go fix (though if someone wants to take it on, no objections).

@arianvp thanks for providing the config.

arianvp commented Apr 1, 2025

Enjoy KubeCon!

arianvp commented Apr 10, 2025

Also happens for cancelled jobs

adrielp (Contributor, Author) commented Apr 11, 2025

Discussed in the CI/CD SIG today. Requires a fix.

adrielp (Contributor, Author) commented Apr 18, 2025

A pull request to fix this issue is open.

akshays-19 pushed a commit to akshays-19/opentelemetry-collector-contrib that referenced this issue Apr 23, 2025:

…etry#39499)

Description

Fixes end span times for jobs when run is skipped or cancelled. Additionally adds trace testing using the golden package.

Link to tracking issue

Fixes open-telemetry#39020
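
The commit message doesn't include the change itself; as a rough sketch only (hypothetical helper name and signature, not the actual #39499 diff), a guard of this shape is one way to pin span times for skipped or cancelled steps:

package github // hypothetical package name for this sketch

import "time"

// adjustStepTimes is a hypothetical helper, not the code merged in
// #39499. For skipped or cancelled steps it collapses the span to a
// single instant, so no century-long duration can be derived from a
// zero-value timestamp.
func adjustStepTimes(conclusion string, startedAt, completedAt time.Time) (start, end time.Time) {
	if conclusion == "skipped" || conclusion == "cancelled" {
		switch {
		case !startedAt.IsZero():
			return startedAt, startedAt
		case !completedAt.IsZero():
			return completedAt, completedAt
		default:
			// Neither timestamp is usable; a caller would have to
			// fall back to the enclosing job's times.
			return startedAt, completedAt
		}
	}
	return startedAt, completedAt
}

Collapsing the span to a single instant yields the zero duration with a skipped status that the issue's Expected Result describes.
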
Fiery-Fenix pushed a commit to Fiery-Fenix/opentelemetry-collector-contrib that referenced this issue Apr 24, 2025 (same commit message as above)
arianvp commented Apr 30, 2025

This issue is not fixed for me

[Screenshot: a queued span still showing a huge duration]

Maybe some interaction with #39511? It seems to happen to the queued span.

adrielp (Contributor, Author) commented May 2, 2025

@arianvp -- can you show me what the start and stop times are on those spans? Please feel free to troubleshoot with me more quickly on Slack.
