#### Description
Deprecate `topic` and `encoding`, and introduce signal-specific
equivalents:
- `logs::topic`, `metrics::topic`, and `traces::topic`
- `logs::encoding`, `metrics::encoding`, and `traces::encoding`
This enables users to explicitly define a configuration equivalent to
the default configuration, or some variation thereof. It also enables
specifying a different encoding for each signal type, which matters
because some encodings support only a subset of signals.
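
For illustration, a configuration equivalent to the defaults, written out explicitly with the new signal-specific options (the broker address below is illustrative):

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]  # illustrative broker address
    logs:
      topic: otlp_logs
      encoding: otlp_proto
    metrics:
      topic: otlp_metrics
      encoding: otlp_proto
    traces:
      topic: otlp_spans
      encoding: otlp_proto
```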
#### Link to tracking issue
Fixes open-telemetry#35432
#### Testing
Unit tests added.
#### Documentation
Updated README.
---------
Co-authored-by: Antoine Toulme <[email protected]>
The following settings can be optionally configured:

- `resolve_canonical_bootstrap_servers_only` (default = false): Whether to resolve then reverse-lookup broker IPs during startup.
- `client_id` (default = "otel-collector"): The client ID to configure the Kafka client with. The client ID will be used for all produce requests.
- `logs`
  - `topic` (default = otlp_logs): The name of the Kafka topic to which logs will be exported.
  - `encoding` (default = otlp_proto): The encoding for logs. See [Supported encodings](#supported-encodings).
- `metrics`
  - `topic` (default = otlp_metrics): The name of the Kafka topic to which metrics will be exported.
  - `encoding` (default = otlp_proto): The encoding for metrics. See [Supported encodings](#supported-encodings).
- `traces`
  - `topic` (default = otlp_spans): The name of the Kafka topic to which traces will be exported.
  - `encoding` (default = otlp_proto): The encoding for traces. See [Supported encodings](#supported-encodings).
- `topic` (Deprecated in v0.124.0: use `logs::topic`, `metrics::topic`, and `traces::topic`): If specified, this is used as the default topic, but will be overridden by signal-specific configuration. See [Destination Topic](#destination-topic) below for more details.
- `topic_from_attribute` (default = ""): Specify the resource attribute whose value should be used as the message's topic. See [Destination Topic](#destination-topic) below for more details.
- `encoding` (Deprecated in v0.124.0: use `logs::encoding`, `metrics::encoding`, and `traces::encoding`): If specified, this is used as the default encoding, but will be overridden by signal-specific configuration. See [Supported encodings](#supported-encodings) below for more details.
- `include_metadata_keys` (default = []): Specifies a list of metadata keys to propagate as Kafka message headers. If one or more keys aren't found in the metadata, they are ignored.
- `partition_traces_by_id` (default = false): Configures the exporter to include the trace ID as the message key in trace messages sent to Kafka. *Please note:* this setting does not have any effect on Jaeger encodings, since they include the trace ID as the message key by default.
- `partition_metrics_by_resource_attributes` (default = false): Configures the exporter to include the hash of sorted resource attributes as the message partitioning key in metric messages sent to Kafka.
- `partition_logs_by_resource_attributes` (default = false): Configures the exporter to include the hash of sorted resource attributes as the message partitioning key in log messages sent to Kafka.
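
The resource-attribute partitioning described above can be sketched as follows. This is only an illustration of the idea, not the exporter's actual hash function: the point is that equal attribute sets produce the same key regardless of attribute ordering, so data from the same resource lands on the same partition.

```python
import hashlib


def resource_partition_key(attributes: dict) -> bytes:
    """Illustrative partition key: a hash over the sorted resource attributes.

    The real exporter's hashing may differ; this only demonstrates that the
    key is order-independent and stable for a given set of attributes.
    """
    # Sort keys so that {"a": 1, "b": 2} and {"b": 2, "a": 1} hash identically.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).digest()
```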
- `compression` (default = none): The compression used when producing messages to Kafka. The options are `none`, `gzip`, `snappy`, `lz4`, and `zstd`. See the [producer `compression.type` documentation](https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#compression-type).
- `flush_max_messages` (default = 0): The maximum number of messages the producer will send in a single broker request.
### Supported encodings

The Kafka exporter supports encoding extensions, as well as the following built-in encodings.

Available for all signals:

- `otlp_proto`: data is encoded as OTLP Protobuf
- `otlp_json`: data is encoded as OTLP JSON

Available only for traces:

- `jaeger_proto`: the payload is serialized to a single Jaeger proto `Span`, and keyed by TraceID.
- `jaeger_json`: the payload is serialized to a single Jaeger JSON Span using `jsonpb`, and keyed by TraceID.
- `zipkin_proto`: the payload is serialized to Zipkin v2 proto Span.
- `zipkin_json`: the payload is serialized to Zipkin v2 JSON Span.

Available only for logs:

- `raw`: if the log record body is a byte array, it is sent as is. Otherwise, it is serialized to JSON. Resource and record attributes are discarded.
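
The `raw` encoding rule can be sketched as a simple branch. This is an illustrative model of the documented behavior, not the exporter's actual implementation:

```python
import json


def encode_raw(body) -> bytes:
    """Illustrative `raw` log encoding: byte-array bodies pass through
    unchanged; any other body is serialized to JSON. Resource and record
    attributes are not included, matching the documented behavior."""
    if isinstance(body, (bytes, bytearray)):
        return bytes(body)
    return json.dumps(body).encode("utf-8")
```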
### Example configuration
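
A minimal configuration might look like the following (the broker address, `protocol_version`, and topic name are illustrative):

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]
    protocol_version: 2.0.0
    traces:
      topic: otlp_spans
      encoding: otlp_proto
    partition_traces_by_id: true
    compression: gzip
```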
## Destination Topic

The destination topic can be defined in a few different ways and takes priority in the following order:

1. When `topic_from_attribute` is configured, and the corresponding attribute is found on the ingested data, the value of this attribute is used.
2. If a prior component in the collector pipeline sets the topic on the context via the `topic.WithTopic` function (from the `github.com/open-telemetry/opentelemetry-collector-contrib/pkg/kafka/topic` package), the value set in the context is used.
3. Finally, the `<signal>::topic` configuration is used for the signal-specific destination topic. If this is not explicitly configured, the `topic` configuration (deprecated in v0.124.0) is used as a fallback for all signals.
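
The resolution order above can be sketched as a small helper. This is a hypothetical illustration of the documented priority, not the collector's actual code:

```python
def resolve_topic(signal, attr_topic=None, ctx_topic=None, config=None):
    """Resolve the destination topic following the documented priority.

    signal: one of "logs", "metrics", "traces".
    attr_topic: value found via `topic_from_attribute`, if any.
    ctx_topic: topic set on the context by a prior pipeline component.
    config: dict-like exporter configuration.
    """
    config = config or {}
    # 1. Attribute named by `topic_from_attribute`, when present on the data.
    if attr_topic:
        return attr_topic
    # 2. Topic set on the context (topic.WithTopic) by a prior component.
    if ctx_topic:
        return ctx_topic
    # 3. Signal-specific `<signal>::topic`, else the deprecated `topic`,
    #    else the built-in default for the signal.
    defaults = {"logs": "otlp_logs", "metrics": "otlp_metrics", "traces": "otlp_spans"}
    signal_cfg = config.get(signal, {})
    return signal_cfg.get("topic") or config.get("topic") or defaults[signal]
```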