k8ssandra-operator: Prometheus scrape: Connection refused
What did you do? I deployed a K8ssandraCluster in a minikube environment where kube-prometheus-stack was already deployed. The goal is to scrape the metrics as described in https://docs.k8ssandra.io/tasks/monitor/prometheus-grafana/#filtering-metrics-with-the-new-metrics-endpoint-from-v150. However, the scrape fails with connection refused:

```
Get "http://172.17.0.9:9000/metrics": dial tcp 172.17.0.9:9000: connect: connection refused
```
Did you expect to see something different? Yes: no scrape errors, and Prometheus receiving the K8ssandra metrics.
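A quick way to narrow this down is to check whether anything is listening on the scrape port at all, as opposed to a timeout or network-policy problem. The sketch below uses bash's `/dev/tcp`; the host and port are placeholders — in this issue you would run it from inside the cluster against `172.17.0.9 9000`:

```shell
#!/usr/bin/env bash
# Distinguish "nothing listening" (connection refused / unreachable)
# from "port open". Substitute the pod IP and metrics port from the
# scrape error (172.17.0.9 and 9000 in this issue).
probe() {
  local host=$1 port=$2
  # /dev/tcp is a bash feature: opening it attempts a TCP connect.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "refused-or-unreachable"
  fi
}
probe 127.0.0.1 1   # port 1 is closed on virtually every machine
```

If the probe reports `refused-or-unreachable` from another pod but `open` from inside the Cassandra container against `localhost:9000`, the endpoint is bound to localhost only.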
Environment

- K8ssandra Operator version: docker.io/k8ssandra/k8ssandra-operator:v1.5.0
- Kubernetes version information:
```yaml
clientVersion:
  buildDate: "2020-08-13T16:12:48Z"
  compiler: gc
  gitCommit: 9f2892aab98fe339f3bd70e3c470144299398ace
  gitTreeState: clean
  gitVersion: v1.18.8
  goVersion: go1.13.15
  major: "1"
  minor: "18"
  platform: linux/amd64
serverVersion:
  buildDate: "2022-01-25T21:19:12Z"
  compiler: gc
  gitCommit: 816c97ab8cff8a1c72eccca1026f7820e93e0d25
  gitTreeState: clean
  gitVersion: v1.23.3
  goVersion: go1.17.6
  major: "1"
  minor: "23"
  platform: linux/amd64
```
- Kubernetes cluster kind: minikube
- Manifests:
```yaml
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  annotations:
    k8ssandra.io/initial-system-replication: '{"dc1":1}'
    meta.helm.sh/release-name: kasope-platform
    meta.helm.sh/release-namespace: platform
  creationTimestamp: '2023-03-27T11:58:53Z'
  finalizers:
    - k8ssandracluster.k8ssandra.io/finalizer
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
spec:
  auth: true
  cassandra:
    config:
      cassandraYaml:
        num_tokens: 8
      jvmOptions:
        gc: G1GC
        heapSize: 1Gi
    datacenters:
      - jmxInitContainerImage:
          name: busybox
          registry: docker.io
          tag: 1.34.1
        metadata:
          name: dc1
          services:
            allPodsService:
              annotations:
                consul.hashicorp.com/service-port: native
            dcService:
              annotations:
                consul.hashicorp.com/service-port: native
        perNodeConfigInitContainerImage: mikefarah/yq:4
        size: 1
        stopped: false
    jmxInitContainerImage:
      name: busybox
      registry: docker.io
      tag: 1.34.1
    perNodeConfigInitContainerImage: mikefarah/yq:4
    resources:
      requests:
        memory: 2Gi
    serverType: cassandra
    serverVersion: 3.11.11
    storageConfig:
      cassandraDataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
    superuserSecretRef:
      name: cluster-superuser
    telemetry:
      mcac:
        enabled: false
      prometheus:
        commonLabels:
          release: monitoring-platform
        enabled: true
  reaper:
    ServiceAccountName: default
    autoScheduling:
      enabled: true
      initialDelayPeriod: PT15S
      percentUnrepairedThreshold: 10
      periodBetweenPolls: PT10M
      repairType: AUTO
      scheduleSpreadPeriod: PT6H
      timeBeforeFirstSchedule: PT5M
    containerImage:
      name: cassandra-reaper
      registry: docker.io
      repository: thelastpickle
      tag: 3.2.1
    deploymentMode: PER_DC
    heapSize: 2Gi
    initContainerImage:
      name: cassandra-reaper
      registry: docker.io
      repository: thelastpickle
      tag: 3.2.1
    keyspace: reaper_db
    secretsProvider: internal
  secretsProvider: internal
```
So the relevant part is:

```yaml
telemetry:
  mcac:
    enabled: false
  prometheus:
    commonLabels:
      release: monitoring-platform
    enabled: true
```
- K8ssandra Operator Logs: no log output points to where the issue is.
Anything else we need to know?: The ServiceMonitor that was created automatically:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    k8ssandra.io/resource-hash: QBE3Hqk8zb8XFEOHl9lpnWDplf3YpRVG47nV09xy5Dw=
  creationTimestamp: '2023-03-27T12:06:48Z'
  generation: 1
  labels:
    app.kubernetes.io/component: telemetry
    app.kubernetes.io/created-by: k8ssandracluster-controller
    app.kubernetes.io/managed-by: k8ssandra-operator
    app.kubernetes.io/part-of: k8ssandra
    k8ssandra.io/cluster-name: cluster
    k8ssandra.io/cluster-namespace: platform
    k8ssandra.io/datacenter: dc1
    release: monitoring-platform
  managedFields:
    - apiVersion: monitoring.coreos.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:k8ssandra.io/resource-hash: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/created-by: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/part-of: {}
            f:k8ssandra.io/cluster-name: {}
            f:k8ssandra.io/cluster-namespace: {}
            f:k8ssandra.io/datacenter: {}
            f:release: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"6968cea7-98b9-4df0-b883-eb0a2f26d601"}: {}
        f:spec:
          .: {}
          f:endpoints: {}
          f:namespaceSelector:
            .: {}
            f:matchNames: {}
          f:selector: {}
          f:targetLabels: {}
      manager: manager
      operation: Update
      time: '2023-03-27T12:06:48Z'
  name: cluster-dc1-cass-servicemonitor
  namespace: platform
  ownerReferences:
    - apiVersion: cassandra.datastax.com/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: CassandraDatacenter
      name: dc1
      uid: 6968cea7-98b9-4df0-b883-eb0a2f26d601
  resourceVersion: '2484'
  uid: 8b04ff14-adb4-4084-b3ea-6ddb3458f20d
  selfLink: >-
    /apis/monitoring.coreos.com/v1/namespaces/platform/servicemonitors/cluster-dc1-cass-servicemonitor
spec:
  endpoints:
    - bearerTokenSecret:
        key: ''
      interval: 15s
      path: /metrics
      port: metrics
      scheme: http
      scrapeTimeout: 15s
  namespaceSelector:
    matchNames:
      - platform
  selector:
    matchExpressions:
      - key: cassandra.datastax.com/cluster
        operator: In
        values:
          - cluster
      - key: cassandra.datastax.com/datacenter
        operator: In
        values:
          - dc1
    matchLabels:
      app.kubernetes.io/managed-by: cass-operator
      cassandra.datastax.com/prom-metrics: 'true'
  targetLabels:
    - cassandra.datastax.com/cluster
    - cassandra.datastax.com/datacenter
```
About this issue
- Original URL
- State: open
- Created a year ago
- Comments: 19 (6 by maintainers)
Try to alter your deployment as follows; otherwise the metrics endpoint will be bound to localhost and not be reachable by Prometheus.
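The suggested snippet itself did not survive the page extraction. A hedged sketch of what such an override could look like: newer k8ssandra-operator releases expose a `telemetry.cassandra.endpoint` block for binding the metrics agent to a non-localhost address — the field names below, and their availability in v1.5.0, are assumptions to verify against your installed CRD:

```yaml
spec:
  cassandra:
    telemetry:
      mcac:
        enabled: false
      prometheus:
        commonLabels:
          release: monitoring-platform
        enabled: true
      cassandra:
        endpoint:
          # Assumed fields: bind the metrics endpoint to all interfaces
          # instead of localhost so the ServiceMonitor scrape can reach it.
          address: 0.0.0.0
          port: "9000"
```

Check `kubectl explain k8ssandracluster.spec.cassandra.telemetry` against your cluster to confirm which fields your operator version actually supports.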