Connection refused while scraping kube-scheduler metrics #35959
Labels
question
receiver/prometheus
Stale
Component(s)
receiver/prometheus
Describe the issue you're reporting
I have a 3-node k8s cluster.
I am running the otel collector as a DaemonSet with the following config:
```yaml
extensions:
  # The health_check extension is mandatory for this chart.
  # Without the health_check extension the collector will fail the readiness and liveliness probes.
  # The health_check extension can be modified, but should never be removed.
  health_check: {}
  memory_ballast: {}
  bearertokenauth:
    token: "XXXXXX"

processors:

receivers:

exporters:
  logging: {}
  prometheusremotewrite:
    endpoint: "xxxxxxx"
    resource_to_telemetry_conversion:
      enabled: true
    tls:
      insecure: true
    auth:
      authenticator: bearertokenauth

service:
  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
    logs:
      level: debug
  extensions:
    - health_check
    - bearertokenauth
  pipelines:
```
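For context, the kube-scheduler target comes from a Prometheus-style scrape job; the receivers and pipelines sections are trimmed above, but they look roughly like the sketch below (the job name, relabeling, and TLS settings here are illustrative placeholders, not my exact config):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # Illustrative only: discover kube-scheduler pods in kube-system and
        # scrape their secure metrics port 10259 via the pod IP.
        - job_name: kube-scheduler
          scheme: https
          tls_config:
            insecure_skip_verify: true
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: pod
              namespaces:
                names: [kube-system]
          relabel_configs:
            # Keep only the scheduler pods.
            - source_labels: [__meta_kubernetes_pod_label_component]
              regex: kube-scheduler
              action: keep
            # Point the scrape address at the scheduler's secure port.
            # ($$ escapes $ so the collector does not treat $1 as an env var.)
            - source_labels: [__meta_kubernetes_pod_ip]
              replacement: $$1:10259
              target_label: __address__

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```

This is where the https://100.xx.xx.xx:10259/metrics target in the error comes from: the job resolves the scheduler pods to their pod IPs and scrapes port 10259 on each.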
I get the error below:

```
2024-10-23T12:45:56.402Z debug scrape/scrape.go:1331 Scrape failed {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_pool": "kube-scheduler", "target": "https://100.xx.xx.xx:10259/metrics", "error": "Get \"https://100.xx.xx.xx:10259/metrics\": dial tcp 100.xx.xx.xx:10259: connect: connection refused"}
```
The kube-scheduler runs as three pods in the kube-system namespace, one on each node of the 3-node cluster.
Do I need a k8s Service of type NodePort to get this to work?
I tried logging in to a node and running `curl -kvv https://100.xx.xx.xx:10259/metrics`; I get connection refused, but it does work with
`curl -kvv https://localhost:10259/metrics`
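Since curl answers on localhost but not on the node IP, it looks like the scheduler may only be listening on loopback. If this is a kubeadm-style cluster (an assumption on my part), the scheduler static pod manifest pins the bind address, roughly like this:

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml on each control-plane node
# (kubeadm default shown; assuming this cluster is set up the same way)
spec:
  containers:
    - name: kube-scheduler
      command:
        - kube-scheduler
        - --bind-address=127.0.0.1   # with this flag, :10259 only answers on localhost
```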