hubble-ui: Hubble UI doesn't work, fresh Cilium 1.12.1 install, "Data stream has failed on the UI backend: EOF"

I reinstalled Cilium in my bare-metal cluster at home today. I installed 1.12.1 and ran cilium hubble enable --ui; everything went well, but when I open http://localhost:12000 in my browser I see this:

[screenshot: Hubble UI page stuck on an empty view]

The page stays like this indefinitely, accumulating more and more GetEvents calls:

[screenshot: browser network tab with accumulating GetEvents requests]

In the browser console I see the following:

[screenshot: browser console errors]

I'm not sure how to proceed with debugging. Any help would be appreciated.
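
As a starting point, the checks below usually narrow this kind of failure down; the namespace, pod labels, deployment names, and container name are the standard chart defaults, not values taken from this report, so treat this as a sketch:

kubectl -n kube-system get pods -l k8s-app=hubble-relay
kubectl -n kube-system get pods -l k8s-app=hubble-ui

# Look for EOF / TLS / peer errors on both ends of the stream
kubectl -n kube-system logs deploy/hubble-relay
kubectl -n kube-system logs deploy/hubble-ui -c backend

# Overall health from the Cilium CLI; the last two lines need the hubble CLI installed locally
cilium status
cilium hubble port-forward &
hubble status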

About this issue

  • State: open
  • Created 2 years ago
  • Reactions: 20
  • Comments: 41 (2 by maintainers)

Most upvoted comments

In my case the same error was happening with the httpV2 metric enabled. Removing that line (kept commented out below) fixed the issue:

    metrics:
      serviceMonitor:
        enabled: true
      enableOpenMetrics: true
      enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
      # - httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction
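
For reference, and only as a sketch mirroring the --set syntax used further down in this thread (not from the original comment), the same metrics list with httpV2 dropped can also be passed on the command line:

--set hubble.metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,port-distribution,icmp,http}"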

I managed to reproduce the issue with the Hubble CLI as well, but could not fix it by disabling TLS (example below). A chart reinstall helped, though (sketched below the snippet).

This did not help:

hubble:
  relay:
    tls:
      server:
        enabled: false
  tls:
    enabled: false
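
For completeness, the chart reinstall mentioned above would look roughly like this; the release name, namespace, and values file are assumptions rather than details from the original comment:

helm -n kube-system uninstall cilium
helm -n kube-system install cilium cilium/cilium -f values.yaml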

The following fixed the issue for me:

hubble:
  relay:
    enabled: true
  ui:
    frontend:
      server:
        ipv6:
          enabled: false
    enabled: true
  metrics:
    enableOpenMetrics: true
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - port-distribution
    - icmp
    - httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction

Alternatively, try setting it directly with --set:

--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}" 

In my case, the following configuration alone was not enough; the communication from hubble-relay to hubble-peer was failing because of Ubuntu’s ufw.

I allowed access from Cilium’s IP CIDR and it worked fine (a ufw sketch follows the snippet below).

hubble:
  relay:
    tls:
      server:
        enabled: false
  tls:
    enabled: false
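
The kind of ufw rule meant above would be along these lines; the CIDR is a placeholder for your cluster's actual Cilium pod/node range, and 4244 is Hubble's default server port, neither taken from the original comment:

# Allow everything from the Cilium pod CIDR (placeholder, adjust to your cluster)
sudo ufw allow from 10.0.0.0/8
# Narrower alternative: only the Hubble server port on each node
sudo ufw allow from 10.0.0.0/8 to any port 4244 proto tcp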

For me it was because my cluster domain is “cluster” (in my setup the cluster domain must not contain a dot), but the Helm chart defaults hubble.peerService.clusterDomain to “cluster.local”.

With Cilium 1.13.3 installed via Helm, setting the correct hubble.peerService.clusterDomain value fixed access to the UI for me, and I didn’t need to disable TLS anywhere.

My Cilium install command:

helm install cilium cilium/cilium --version 1.13.3 \
  --namespace kube-system \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.66.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=20 \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=172.16.66.200 \
  --set k8sServicePort=6443 \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set operator.replicas=1 \
  --set tunnel=disabled \
  --set ipv4NativeRoutingCIDR=10.66.0.0/16 \
  --set autoDirectNodeRoutes=true \
  --set hubble.peerService.clusterDomain=cluster
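
One way to confirm the relay picked up the right peer address after changing the cluster domain; the configmap name and key come from the standard chart layout, so treat this as a sketch rather than something from the original comment:

kubectl -n kube-system get configmap hubble-relay-config -o yaml | grep peer-service
# expected: hubble-peer.kube-system.svc.cluster:443 rather than ...svc.cluster.local:443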

Logs of the backend container in the hubble-ui pod:

level=info msg="running hubble status checker\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble status checker: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="fetching hubble flows: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=error msg="flow error: EOF\n" subsys=ui-backend
level=info msg="hubble status checker: stopped\n" subsys="ui-backend:status-checker"
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=error msg="fetching hubble flows: connecting to hubble-relay (attempt #1) failed: rpc error: code = Canceled desc = context canceled\n" subsys=ui-backend
level=info msg="fetching hubble flows: stream (ui backend <-> hubble-relay) is closed\n" subsys=ui-backend
level=info msg="Get flows request: number:10000  follow:true  blacklist:{source_label:\"reserved:unknown\"  source_label:\"reserved:host\"  source_label:\"k8s:k8s-app=kube-dns\"  source_label:\"reserved:remote-node\"  source_label:\"k8s:app=prometheus\"  source_label:\"reserved:kube-apiserver\"}  blacklist:{destination_label:\"reserved:unknown\"  destination_label:\"reserved:host\"  destination_label:\"reserved:remote-node\"  destination_label:\"k8s:app=prometheus\"  destination_label:\"reserved:kube-apiserver\"}  blacklist:{destination_label:\"k8s:k8s-app=kube-dns\"  destination_port:\"53\"}  blacklist:{source_fqdn:\"*.cluster.local*\"}  blacklist:{destination_fqdn:\"*.cluster.local*\"}  blacklist:{protocol:\"ICMPv4\"}  blacklist:{protocol:\"ICMPv6\"}  whitelist:{source_pod:\"default/\"  event_type:{type:1}  event_type:{type:4}  event_type:{type:129}  reply:false}  whitelist:{destination_pod:\"default/\"  event_type:{type:1}  event_type:{type:4}  event_type:{type:129}  reply:false}" subsys=ui-backend
level=info msg="running hubble status checker\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble status checker: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=error msg="flow error: EOF\n" subsys=ui-backend
level=info msg="hubble status checker: stopped\n" subsys="ui-backend:status-checker"
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=error msg="fetching hubble flows: connecting to hubble-relay (attempt #1) failed: rpc error: code = Canceled desc = context canceled\n" subsys=ui-backend
level=info msg="fetching hubble flows: stream (ui backend <-> hubble-relay) is closed\n" subsys=ui-backend

Hope this helps.

Kind regards,

Pascal