Enhance Kafka exporter to respect max message size #36982
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hey @yurishkuro, I would like to contribute to this issue. Can you please assign it to me?
Hey @yurishkuro, I’m tied up with another project and it’s taking longer than expected. I won't be able to pick this up. Thanks for your understanding.
Hi @yurishkuro, is this task still available? I’d like to give it a try; please assign it to me.
@LZiHaN, are you working on this?
Yes, I'm working on it. |
Hi @yurishkuro , I’m working on implementing this feature and wanted to confirm if the approach I’m considering for message splitting and reassembling is feasible.
Is this approach viable, and would it work seamlessly with the Kafka producer/consumer setup? Or are there any potential issues with storing this information in the headers and reassembling the message on the consumer side? Looking forward to your feedback.
@LZiHaN this is a possible approach, but it would be a breaking change, since a consumer that does not understand this chunking may not be able to reassemble the message. My idea was that we instead split the spans from the payload into multiple payloads, such that each payload fits in MaxMessageSize when serialized. It's not entirely simple to implement, because the payload may contain one huge span, but if we can split it this way then it's a fully backwards-compatible solution.
@yurishkuro this seems to introduce some performance tradeoffs. This marshaller creates a message, which is then exported to Kafka using the sarama client. Regardless of the Kafka broker configuration, the export fails for sizes that exceed the client (exporter) configuration. One solution could be to chunk the received traces based on the configured size (as you suggested); however, it looks like we need to calculate the size and then do the chunking based on the result.
Another approach could be appending spans of a given trace while calculating the size after each append; once the limit is reached, build a new resource packet and start appending to that. For this, a new size-aware SplitTraces could be created. Or maybe there is some other approach; WDYT would be a better implementation? The size check is done here in the sarama client, and its ByteSize function can be used in the Kafka exporter to calculate the size.
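The append-and-measure idea can be sketched roughly as follows. This is a minimal, self-contained sketch: the `Span` type and its `Size` field are stand-ins for real pdata spans and a real serialized-size measurement, which a hypothetical size-aware SplitTraces would have to provide by actually marshaling.

```go
package main

import "fmt"

// Span is a stand-in for a real span; Size approximates its
// serialized footprint (a real implementation would marshal and measure).
type Span struct{ Size int }

// splitBySize greedily packs spans into batches whose estimated
// serialized size stays under maxBytes. A span that alone exceeds
// maxBytes still gets its own batch; callers must handle that case
// separately (e.g. drop it and report an error).
func splitBySize(spans []Span, maxBytes int) [][]Span {
	var batches [][]Span
	var cur []Span
	curSize := 0
	for _, s := range spans {
		if len(cur) > 0 && curSize+s.Size > maxBytes {
			batches = append(batches, cur)
			cur, curSize = nil, 0
		}
		cur = append(cur, s)
		curSize += s.Size
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	batches := splitBySize([]Span{{400}, {300}, {500}, {200}, {900}}, 1000)
	fmt.Println(len(batches)) // 3
}
```

Note that summing per-span estimates ignores per-message envelope overhead, so a real implementation would need a safety margin or a re-measure of the final batch.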
To avoid perf issues the marshaling can be optimistic: try to marshal the whole payload first, and only if the result is larger than the max message size try to chunk the spans. As for the chunking algorithm, you don't need to be stuck in analysis paralysis; just write something that works correctly. However you implement it will be an improvement over the current state, since right now the message just gets dropped.
Btw, binary search sounds like a reasonable approach: keep dividing the spans in half until an acceptable size is produced. Other methods would be hard, since marshalers do not allow concatenation of serialized parts.
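The optimistic-marshal-plus-halving strategy could look roughly like this. This is a self-contained sketch: `marshal` here just concatenates strings as a stand-in for the real trace marshaler, and a single span that still exceeds the limit is surfaced as an error rather than silently dropped.

```go
package main

import (
	"fmt"
	"strings"
)

// marshal stands in for a real marshaler: each "span" is a string
// and the payload is their concatenation.
func marshal(spans []string) []byte {
	return []byte(strings.Join(spans, ""))
}

// chunk marshals optimistically and, only when the result exceeds
// maxBytes, splits the span slice in half and recurses. Since
// marshalers give no way to concatenate serialized parts, each
// half is re-marshaled from scratch.
func chunk(spans []string, maxBytes int) ([][]byte, error) {
	payload := marshal(spans)
	if len(payload) <= maxBytes {
		return [][]byte{payload}, nil
	}
	if len(spans) <= 1 {
		// A single span that is still too large cannot be split
		// further without a breaking protocol change.
		return nil, fmt.Errorf("span exceeds max message size (%d > %d)", len(payload), maxBytes)
	}
	mid := len(spans) / 2
	left, err := chunk(spans[:mid], maxBytes)
	if err != nil {
		return nil, err
	}
	right, err := chunk(spans[mid:], maxBytes)
	if err != nil {
		return nil, err
	}
	return append(left, right...), nil
}

func main() {
	msgs, err := chunk([]string{"aaaa", "bbbb", "cccc", "dddd"}, 8)
	fmt.Println(len(msgs), err) // 2 <nil>
}
```

The common case (payload already fits) costs exactly one marshal, so the happy path pays no overhead; only oversized payloads trigger the O(n log n) re-marshaling.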
We'd definitely benefit from having this issue addressed (with the PR above merged)! One question, though: if a single span has a size greater than the max message size, is there currently no way to handle it other than dropping it (as in the PR above)?
@yurishkuro What is the status of this ticket? It's indeed an issue when combined with the batch processor (which just caps the number of span items, not their size). So currently the only option is to lower that number and configure a good-enough setup, which in turn does not lead to an optimal message size and therefore causes overhead on the Kafka brokers. If the Kafka exporter allowed a split, this would be much more efficient.
There is a PR attached, but I think it's more complex than it needs to be. |
I'll take a look into it and would also like to contribute a good-enough solution. You already mentioned a feasible approach where messages that are too large just get split in half. In the end one needs to find a proper batch configuration (maxBatchSize) so that these cases are reduced as much as possible (by checking the exposed metrics). So trying to get the overall max-message-size utilization as high as possible is scientifically nice, but it's neither pragmatic nor good from a performance point of view.
Hey @jrauschenbusch, there's already a PR open where I am doing the chunking: #37176. I couldn't work on it as I got occupied at work, but I can continue working on it.
Hey @shivanshuraj1333. I've seen the attached PR, but wonder about the statement here:
Imho this change should absolutely not break any existing consumers nor the OTLP specification, so an OTLP message should be consumable as-is without any further modifications on the consumer side. My main intent is to split OTLP traces by splitting the ExportTraceServiceRequest itself. I cannot tell anything about metrics or logs, but I guess the structural nesting is nearly the same in the OTLP format. Jaeger might be different, but there too it should not break existing consumers.
Tricky part is:
I shall revisit my implementation; if you have any comments on the logic, please feel free to add them. Also, if you can come up with an easier implementation, please feel free to raise a PR for it.
Short update: from a quick look at your PR, this is already the design you've chosen, which is totally fine for me.
I would say: good enough 😅 As I've mentioned before, the "normal" situation should not be to have too-large batches all the time. Batches should be configured so that most of the time there is good message utilization (not perfect), with occasional oversized messages, which should then be covered by this feature so as not to lose data. At least that's my opinion. Just a short question: once this PR is finished, is it an enhancement for trace data only? For me that would be fine, but the exporter also seems to cover metrics and logs. The issue should be the same there, right?
You’re right.
There is a difference between lacking optimization and causing data loss. The only time data loss is unavoidable is if a single span is too large for the message size. Everything else the algorithm is supposed to handle correctly (if not very efficiently). |
Ok. So what is the status now? Will this improvement be continued? I'm very interested, because not having it is a major drawback: it requires non-optimal batch sizes to be configured, and even then it's not a really stable setup, since batches can still be too large.
I'm restricted by bandwidth, will try to get the PR merged. |
Bandwidth? You mean you have a bad network connection, or what? 😅 But I rather guess you're currently stuck with other projects, right?
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure which component this issue relates to, please ping the code owners.
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Component(s)
exporter/kafka
Is your feature request related to a problem? Please describe.
The exporter has a config option MaxMessageBytes, but it does not itself respect it: it attempts to send the full serialized payload to the Kafka driver, which may reject it based on this setting.
Describe the solution you'd like
Most payloads can be safely split into chunks of "safe" size that will be accepted by the Kafka driver. For example, in Jaeger integration tests there is a test that writes a trace with 10k spans, which is 3Mb in size when serialized as JSON. The trace can be trivially split into multiple messages that would fit in the default 1Mb size limit.
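For the numbers above, and assuming spans of roughly uniform serialized size (an assumption; real traces can be skewed), a quick back-of-the-envelope check shows what repeated halving costs versus the theoretical minimum number of messages:

```go
package main

import "fmt"

// Hypothetical numbers from the issue: a trace that is ~3 MB
// serialized, against Kafka's default 1 MB message limit.
func main() {
	const totalBytes = 3_000_000
	const maxMessage = 1_000_000

	// Lower bound on messages if spans are roughly uniform in size
	// (ceiling division).
	minMessages := (totalBytes + maxMessage - 1) / maxMessage
	fmt.Println(minMessages) // 3

	// Halving the span slice until each half fits yields 4 chunks
	// of ~750 KB each: one extra message versus the theoretical
	// minimum, at the benefit of a very simple algorithm.
	chunks := 1
	bytesPerChunk := totalBytes
	for bytesPerChunk > maxMessage {
		chunks *= 2
		bytesPerChunk = totalBytes / chunks
	}
	fmt.Println(chunks, bytesPerChunk) // 4 750000
}
```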
Describe alternatives you've considered
No response
Additional context
jaegertracing/jaeger#6437 (comment)