Fix DefaultBatcher implementation to handle multiple producers. #12244
@bogdandrutu Would you mind sharing some insight into this issue? I’ve tried reproducing the deadlock using what I believe to be a particularly adverse setup, but haven’t had any success. If you have any thoughts on conditions that might reproduce it, I’d really appreciate it.
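For illustration, a toy version of that kind of adverse setup might look like the sketch below. The `toyBatcher` type, its batch size, and the producer counts are invented stand-ins for this example, not the collector's actual DefaultBatcher:

```go
package main

import (
	"fmt"
	"sync"
)

// toyBatcher is an invented stand-in: it appends items under a lock and
// "flushes" once a size threshold is reached.
type toyBatcher struct {
	mu      sync.Mutex
	items   []int
	maxSize int
	flushed int
}

func (b *toyBatcher) consume(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.items = append(b.items, item)
	if len(b.items) >= b.maxSize {
		b.flushed += len(b.items)
		b.items = b.items[:0]
	}
}

func main() {
	b := &toyBatcher{maxSize: 2} // tiny batch size to force constant flushing
	var wg sync.WaitGroup
	for p := 0; p < 64; p++ { // many concurrent producers
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				b.consume(i)
			}
		}()
	}
	wg.Wait()
	fmt.Println("flushed items:", b.flushed) // 64000 if nothing hangs
}
```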
Hey, the bug is fixed as far as memory corruption is concerned. At this point it is simply useless to have multiple goroutines, since everything in the batcher happens under a lock.
@bogdandrutu Nice! I'll submit a PR to remove the hardcoded worker count of 1.
Not everything, just the batching itself. Sending the data out is still distributed over the workers, right? The batching part, I guess, will be addressed with the sharding that you proposed in #12473.
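As a rough sketch of that split, batch assembly is serialized by a single lock while completed batches are fanned out to a pool of export workers. The types, field names, and channel below are illustrative, not the collector's actual exporterhelper code:

```go
package main

import (
	"fmt"
	"sync"
)

// Illustrative batcher: appending to the current batch happens under one
// mutex, but full batches are handed to a pool of export workers.
type batcher struct {
	mu       sync.Mutex
	pending  []string
	maxBatch int
	ready    chan []string // completed batches, consumed by export workers
}

func (b *batcher) consume(item string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.pending = append(b.pending, item)
	if len(b.pending) >= b.maxBatch {
		batch := b.pending
		b.pending = nil
		b.ready <- batch // hand off to whichever worker is free
	}
}

func main() {
	b := &batcher{maxBatch: 3, ready: make(chan []string, 8)}

	var workers sync.WaitGroup
	for w := 0; w < 4; w++ { // the "sending out" side stays parallel
		workers.Add(1)
		go func(id int) {
			defer workers.Done()
			for batch := range b.ready {
				fmt.Printf("worker %d exporting %d items\n", id, len(batch))
			}
		}(w)
	}

	for i := 0; i < 12; i++ {
		b.consume(fmt.Sprintf("item-%d", i)) // batching itself is serialized
	}
	close(b.ready)
	workers.Wait()
}
```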
Remove hardcoding the number of queue workers to one if batching is enabled. The bug described in open-telemetry#12244 isn't relevant anymore.
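In other words, a guard along the lines of the following can go away once the batcher is safe with multiple consumers. The config types and field names here are assumptions for illustration, not the collector's exact API:

```go
package main

import "fmt"

// Illustrative stand-ins for the queue and batcher configs; the field names
// are assumptions, not the collector's actual configuration structs.
type queueConfig struct{ NumConsumers int }
type batcherConfig struct{ Enabled bool }

// applyLegacyWorkaround shows the kind of guard being removed: when batching
// was enabled, the number of queue consumers was forced to 1 to avoid the
// deadlock described in this issue.
func applyLegacyWorkaround(q *queueConfig, b batcherConfig) {
	if b.Enabled {
		q.NumConsumers = 1 // no longer needed once multiple consumers are safe
	}
}

func main() {
	q := queueConfig{NumConsumers: 10}
	applyLegacyWorkaround(&q, batcherConfig{Enabled: true})
	fmt.Println("consumers:", q.NumConsumers) // prints 1 under the old workaround
}
```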
Right now, if multiple producers are set up to talk to the DefaultBatcher, it gets into a deadlock. Steps to reproduce: remove all the places where this issue is linked (that is, the guards that force only one producer).
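The thread doesn't spell out the exact mechanism, but as a generic illustration of this class of deadlock (not the DefaultBatcher's actual code path), here is a toy program in which a producer blocks on a full channel while holding the lock that the single flusher needs in order to drain it:

```go
package main

import "sync"

// Toy illustration only: producers hold a lock while pushing into a bounded
// channel, and the flusher needs the same lock before it can drain that
// channel. Once the buffer fills while a producer holds the lock, everyone is
// stuck and the Go runtime reports "all goroutines are asleep - deadlock!".
func main() {
	var mu sync.Mutex
	full := make(chan int, 1) // tiny buffer so producers block quickly

	// Single flusher: must take the lock before draining.
	go func() {
		for {
			mu.Lock()
			<-full
			mu.Unlock()
		}
	}()

	// Multiple producers: hold the lock across the channel send.
	var wg sync.WaitGroup
	for p := 0; p < 4; p++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; ; i++ {
				mu.Lock()
				full <- i   // blocks when the buffer is full...
				mu.Unlock() // ...while mu is still held, so the flusher can never drain
			}
		}()
	}
	wg.Wait() // never returns
}
```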