
fatal error: sync: unlock of unlocked mutex #39106

Closed
lightme16 opened this issue Apr 1, 2025 · 2 comments · Fixed by #39426

Labels: bug, processor/deltatocumulative

Comments

@lightme16

Component(s)

processor/deltatocumulative

What happened?

Description

Hello,

While trying the latest version of the deltatocumulative processor (while working on an unrelated change), I noticed this error under load in one of our environments. I don't have steps to reproduce, but it seems related to the recent refactoring in processor.go.

I'd like to open this issue and, if I gain more insight later, contribute further. I mainly want it on record in case others run into the same problem.

+ /otelcontribcol --version
otelcontribcol version 0.121.2-dev

Collector version

0.121.2-dev

Environment information

amazon linux 2 aarch64

OpenTelemetry Collector configuration

exporters:
    prometheusremotewrite:
        add_metric_suffixes: false
        endpoint: https://some.thanos.endpoint:443/api/v1/write

processors:
    batch/metrics:
        send_batch_max_size: 5000
        send_batch_size: 2000
        timeout: 10s
    deltatocumulative: {}


receivers:
    statsd:
        aggregation_interval: 15s
        enable_metric_type: true
        endpoint: 0.0.0.0:8135
        is_monotonic_counter: true
        timer_histogram_mapping:
            - observer_type: summary
              statsd_type: distribution
              summary:
                percentiles:
                    - 0
                    - 50
                    - 95
                    - 99
                    - 100
service:
    pipelines:
        metrics/statsd:
            exporters:
                - prometheusremotewrite
            processors:
                - deltatocumulative
                - batch/metrics
            receivers:
                - statsd

Log output

fatal error: sync: unlock of unlocked mutex

goroutine 434 [running]:
internal/sync.fatal({0x99625b9?, 0x856c4?})
        runtime/panic.go:1068 +0x20
internal/sync.(*Mutex).unlockSlow(0x40013e2898, 0xffffffff)
        internal/sync/mutex.go:204 +0x38
internal/sync.(*Mutex).Unlock(...)
        internal/sync/mutex.go:198
sync.(*Mutex).Unlock(0x0?)
        sync/mutex.go:65 +0x58
sync.(*Cond).Wait(0x400169c4b8)
        sync/cond.go:70 +0xb8
github.com/puzpuzpuz/xsync/v3.(*MapOf[...]).waitForResize(0xaa88f00)
        github.com/puzpuzpuz/xsync/[email protected]/mapof.go:496 +0xa4
github.com/puzpuzpuz/xsync/v3.(*MapOf[...]).doCompute(0xaa88f00, {{{{{...}}, {0x9ade7a5, 0x51}, {0x987ff58, 0xb}, {0x0, 0x0, 0x0, 0x0, ...}}, ...}, ...}, ...)
        github.com/puzpuzpuz/xsync/[email protected]/mapof.go:367 +0x2b0
github.com/puzpuzpuz/xsync/v3.(*MapOf[...]).LoadAndDelete(0x0?, {{{{{...}}, {0x9ade7a5, 0x51}, {0x987ff58, 0xb}, {0x0, 0x0, 0x0, 0x0, ...}}, ...}, ...})
        github.com/puzpuzpuz/xsync/[email protected]/mapof.go:313 +0x5c
github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor/internal/maps.(*Parallel[...]).LoadAndDelete(0xa9beae0, {{{{{...}}, {0x9ade7a5, 0x51}, {0x987ff58, 0xb}, {0x0, 0x0, 0x0, 0x0, ...}}, ...}, ...})
        github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/internal/maps/map.go:100 +0x4c
github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor.(*Processor).Start.func1.1({{{{{...}}, {0x9ade7a5, 0x51}, {0x987ff58, 0xb}, {0x0, 0x0, 0x0, 0x0, 0x0, ...}}, ...}, ...}, ...)
        github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/processor.go:225 +0x8c
github.com/puzpuzpuz/xsync/v3.(*MapOf[...]).Range(0x40012fdf50?, 0x40012fdf98?)
        github.com/puzpuzpuz/xsync/[email protected]/mapof.go:625 +0x33c
github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor.(*Processor).Start.func1()
        github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/processor.go:223 +0x9c
created by github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor.(*Processor).Start in goroutine 1
        github.com/open-telemetry/opentelemetry-collector-contrib/processor/[email protected]/processor.go:214 +0x64

goroutine 1 [select, 25 minutes]:
go.opentelemetry.io/collector/otelcol.(*Collector).Run(0x4000bd3040, {0xa99b3a8, 0x114698a0})
        go.opentelemetry.io/collector/[email protected]/collector.go:329 +0x298
go.opentelemetry.io/collector/otelcol.NewCommand.func1(0x4000e98908, {0x98641ef?, 0x7?, 0x985863e?})
        go.opentelemetry.io/collector/[email protected]/command.go:39 +0x8c
github.com/spf13/cobra.(*Command).execute(0x4000e98908, {0x4000183c30, 0x2, 0x2})
        github.com/spf13/[email protected]/command.go:1015 +0x828
github.com/spf13/cobra.(*Command).ExecuteC(0x4000e98908)
        github.com/spf13/[email protected]/command.go:1148 +0x350
github.com/spf13/cobra.(*Command).Execute(0x9c20a88?)
        github.com/spf13/[email protected]/command.go:1071 +0x1c
main.runInteractive({0x9c20a88, {{0x989ae27, 0xe}, {0x9a88764, 0x3b}, {0x987ff58, 0xb}}, 0x0, {{{0x0, 0x0, ...}, ...}}, ...})
        github.com/open-telemetry/opentelemetry-collector-contrib/cmd/otelcontribcol/main.go:67 +0x4c
main.run(...)
        github.com/open-telemetry/opentelemetry-collector-contrib/cmd/otelcontribcol/main_others.go:10
main.main()
        github.com/open-telemetry/opentelemetry-collector-contrib/cmd/otelcontribcol/main.go:60 +0x4a0
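
The top frames show where the process dies: sync.Cond.Wait unconditionally unlocks its Locker before parking the goroutine, so if that mutex is not actually held, the internal Unlock aborts the program. A minimal sketch (not the collector's code) that reproduces the same frames:

package main

import "sync"

func main() {
	var mu sync.Mutex
	cond := sync.NewCond(&mu)

	// Wait calls c.L.Unlock() before parking the goroutine. Since mu is not
	// held here, that Unlock dies with the same
	// "fatal error: sync: unlock of unlocked mutex" as in the trace above
	// (sync.(*Cond).Wait -> sync.(*Mutex).Unlock -> unlockSlow -> fatal).
	cond.Wait()
}

In the trace, that Wait sits inside xsync's waitForResize, which suggests the cond and the mutex it was created with have somehow come apart.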

Additional context

No response

lightme16 added the bug and needs triage labels on Apr 1, 2025
github-actions bot commented Apr 1, 2025

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

sh0rez (Member) commented Apr 15, 2025

Possibly related to puzpuzpuz/xsync#145 and this line:

return &Parallel[K, V]{ctx: ctx, elems: *xsync.NewMapOf[K, V]()}

We do copy the map struct once at initialization, but afterwards only refer to the maps.Parallel by pointer. That one-time copy might still be the problem.
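
If that copy is the culprit, the mechanism would be: xsync.MapOf holds a resize mutex together with a sync.Cond created to point at that mutex. Copying the struct by value gives the copy its own mutex, but the copied cond still points at the original's, so waitForResize locks one mutex while Wait unlocks another. A simplified stand-in (mapLike is hypothetical, not the real xsync type):

package main

import "sync"

// mapLike stands in for the resize fields of xsync.MapOf: a mutex plus a
// cond whose Locker points at that mutex.
type mapLike struct {
	resizeMu   sync.Mutex
	resizeCond sync.Cond
}

func newMapLike() *mapLike {
	m := &mapLike{}
	m.resizeCond = *sync.NewCond(&m.resizeMu) // cond.L -> this m's resizeMu
	return m
}

func main() {
	// Copying the struct, as elems: *xsync.NewMapOf[K, V]() does, copies
	// resizeMu, but the copied cond's Locker still points at the
	// original's resizeMu.
	copied := *newMapLike()

	copied.resizeMu.Lock()   // locks the copy's mutex (like waitForResize)...
	copied.resizeCond.Wait() // ...but Wait unlocks the original's, which is
	// not held: fatal error: sync: unlock of unlocked mutex
}

If that is what happens here, keeping a *xsync.MapOf[K, V] in Parallel rather than copying the struct would keep the cond and its mutex paired.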
