
Commit 7173641

[k8sclusterreceiver] refactor metric units to follow OTel conventions (#26708)

**Description:** Refactor some metric units to follow OTel semantic conventions.
**Link to tracking issue:** #10553

1 parent c9af8e1 · commit 7173641
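The diffs below cover only the generated artifacts: `documentation.md` and the golden test files. The units themselves are declared in the receiver's mdatagen metadata files, which are among the 12 changed files but are not reproduced in this excerpt. A minimal sketch of what one such declaration change presumably looks like (the file path and every field other than `unit` are assumptions for illustration, not taken from this commit):

```yaml
# Hypothetical excerpt: receiver/k8sclusterreceiver/internal/cronjob/metadata.yaml
# (path and surrounding fields are assumed; only the unit value reflects this commit)
metrics:
  k8s.cronjob.active_jobs:
    enabled: true
    description: The number of actively running jobs for a cronjob
    # OTel semantic conventions express dimensionless counts with a UCUM
    # annotation naming the counted entity (e.g. "{job}") instead of "1".
    unit: "{job}"
    gauge:
      value_type: int
```

Regenerating with mdatagen would then propagate the new unit into `documentation.md` and the metric builders, which is what produces the diffs that follow.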

File tree

12 files changed: +146 −119 lines

Lines changed: 27 additions & 0 deletions

@@ -0,0 +1,27 @@
+# Use this changelog template to create an entry for release notes.
+
+# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
+change_type: 'bug_fix'
+
+# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
+component: 'k8sclusterreceiver'
+
+# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
+note: "Change k8scluster receiver metric units to follow otel semantic conventions"
+
+# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
+issues: [10553]
+
+# (Optional) One or more lines of additional information to render under the primary note.
+# These lines will be padded with 2 spaces and then inserted directly into the document.
+# Use pipe (|) for multiline entries.
+subtext:
+
+# If your change doesn't affect end users or the exported elements of any package,
+# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
+# Optional: The change log or logs in which this entry should be included.
+# e.g. '[user]' or '[user, api]'
+# Include 'user' if the change is relevant to end users.
+# Include 'api' if there is a change to a library API.
+# Default: '[user]'
+change_logs: [user]

receiver/k8sclusterreceiver/documentation.md

Lines changed: 22 additions & 22 deletions

@@ -98,39 +98,39 @@ The number of actively running jobs for a cronjob

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {job} | Gauge | Int |

 ### k8s.daemonset.current_scheduled_nodes

 Number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

 ### k8s.daemonset.desired_scheduled_nodes

 Number of nodes that should be running the daemon pod (including nodes currently running the daemon pod)

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

 ### k8s.daemonset.misscheduled_nodes

 Number of nodes that are running the daemon pod, but are not supposed to run the daemon pod

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

 ### k8s.daemonset.ready_nodes

 Number of nodes that should be running the daemon pod and have one or more of the daemon pod running and ready

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

 ### k8s.deployment.available

@@ -154,71 +154,71 @@ Current number of pod replicas managed by this autoscaler.

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.hpa.desired_replicas

 Desired number of pod replicas managed by this autoscaler.

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.hpa.max_replicas

 Maximum number of replicas to which the autoscaler can scale up.

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.hpa.min_replicas

 Minimum number of replicas to which the autoscaler can scale up.

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.job.active_pods

 The number of actively running pods for a job

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.job.desired_successful_pods

 The desired number of successfully finished pods the job should be run with

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.job.failed_pods

 The number of pods which reached phase Failed for a job

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.job.max_parallel_pods

 The max desired number of pods the job should run at any given time

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.job.successful_pods

 The number of pods which reached phase Succeeded for a job

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.namespace.phase

@@ -242,31 +242,31 @@ Total number of available pods (ready for at least minReadySeconds) targeted by

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.replicaset.desired

 Number of desired pods in this replicaset

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.replication_controller.available

 Total number of available pods (ready for at least minReadySeconds) targeted by this replication_controller

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.replication_controller.desired

 Number of desired pods in this replication_controller

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.resource_quota.hard_limit

@@ -302,31 +302,31 @@ The number of pods created by the StatefulSet controller from the StatefulSet ve

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.statefulset.desired_pods

 Number of desired pods in the stateful set (the `spec.replicas` field)

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.statefulset.ready_pods

 Number of pods created by the stateful set that have the `Ready` condition

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### k8s.statefulset.updated_pods

 Number of pods created by the StatefulSet controller from the StatefulSet version

 | Unit | Metric Type | Value Type |
 | ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

 ### openshift.appliedclusterquota.limit

receiver/k8sclusterreceiver/internal/cronjob/testdata/expected.yaml

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ resourceMetrics:
               dataPoints:
                 - asInt: "2"
             name: k8s.cronjob.active_jobs
-            unit: "1"
+            unit: "{job}"
         scope:
           name: otelcol/k8sclusterreceiver
           version: latest

receiver/k8sclusterreceiver/internal/demonset/testdata/expected.yaml

Lines changed: 4 additions & 4 deletions

@@ -21,25 +21,25 @@ resourceMetrics:
               dataPoints:
                 - asInt: "3"
             name: k8s.daemonset.current_scheduled_nodes
-            unit: "1"
+            unit: "{node}"
           - description: Number of nodes that should be running the daemon pod (including nodes currently running the daemon pod)
             gauge:
               dataPoints:
                 - asInt: "5"
             name: k8s.daemonset.desired_scheduled_nodes
-            unit: "1"
+            unit: "{node}"
           - description: Number of nodes that are running the daemon pod, but are not supposed to run the daemon pod
             gauge:
               dataPoints:
                 - asInt: "1"
             name: k8s.daemonset.misscheduled_nodes
-            unit: "1"
+            unit: "{node}"
           - description: Number of nodes that should be running the daemon pod and have one or more of the daemon pod running and ready
             gauge:
               dataPoints:
                 - asInt: "2"
             name: k8s.daemonset.ready_nodes
-            unit: "1"
+            unit: "{node}"
         scope:
           name: otelcol/k8sclusterreceiver
          version: latest

receiver/k8sclusterreceiver/internal/jobs/testdata/expected.yaml

Lines changed: 5 additions & 5 deletions

@@ -21,31 +21,31 @@ resourceMetrics:
               dataPoints:
                 - asInt: "2"
             name: k8s.job.active_pods
-            unit: "1"
+            unit: "{pod}"
           - description: The number of pods which reached phase Failed for a job
             gauge:
               dataPoints:
                 - asInt: "0"
             name: k8s.job.failed_pods
-            unit: "1"
+            unit: "{pod}"
           - description: The number of pods which reached phase Succeeded for a job
             gauge:
               dataPoints:
                 - asInt: "3"
             name: k8s.job.successful_pods
-            unit: "1"
+            unit: "{pod}"
           - description: The desired number of successfully finished pods the job should be run with
             gauge:
               dataPoints:
                 - asInt: "10"
             name: k8s.job.desired_successful_pods
-            unit: "1"
+            unit: "{pod}"
           - description: The max desired number of pods the job should run at any given time
             gauge:
               dataPoints:
                 - asInt: "2"
             name: k8s.job.max_parallel_pods
-            unit: "1"
+            unit: "{pod}"
         scope:
           name: otelcol/k8sclusterreceiver
           version: latest

receiver/k8sclusterreceiver/internal/jobs/testdata/expected_empty.yaml

Lines changed: 3 additions & 3 deletions

@@ -21,19 +21,19 @@ resourceMetrics:
               dataPoints:
                 - asInt: "2"
             name: k8s.job.active_pods
-            unit: "1"
+            unit: "{pod}"
           - description: The number of pods which reached phase Failed for a job
             gauge:
               dataPoints:
                 - asInt: "0"
             name: k8s.job.failed_pods
-            unit: "1"
+            unit: "{pod}"
          - description: The number of pods which reached phase Succeeded for a job
             gauge:
               dataPoints:
                 - asInt: "3"
             name: k8s.job.successful_pods
-            unit: "1"
+            unit: "{pod}"
         scope:
           name: otelcol/k8sclusterreceiver
           version: latest
