weave: Weave with AWS EKS is not working

What you expected to happen?

EKS pods are able to communicate with each other via the Weave network

What happened?

Deployed the Weave daemonset on a new AWS EKS cluster with an updated CIDR range. Pods get a proper IP from Weave but cannot communicate with each other.

How to reproduce it?

Create an AWS EKS cluster, set the IPALLOC_RANGE environment variable in the Weave daemonset manifest to 172.20.0.0/16 (see the snippet below), and apply the daemonset. Pods are able to get an IP but cannot communicate.
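
For reference, a minimal sketch of the relevant part of the Weave daemonset manifest (only the fields touched here are shown; the container name and env var are the standard weave-net ones):

    spec:
      containers:
        - name: weave
          env:
            - name: IPALLOC_RANGE        # CIDR Weave allocates pod IPs from
              value: "172.20.0.0/16"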

Anything else we need to know?

AWS EKS, no direct internet access; a proxy is used for external access.

Versions:

EKS v1.10

$ weave version
2.3.0
$ docker version
18.03.1-ce
$ uname -a
CentOS 7.4 3.10.0-862.3.2.el7.x86_64
$ kubectl version
1.10.4

Logs:

$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c47e8d7383b6        bridge              bridge              local
9c8e7fa1202a        host                host                local
f8d1debbe4bb        none                null                local
$ sudo docker network inspect f8d1debbe4bb
[
    {
        "Name": "none",
        "Id": "f8d1debbe4bbbfdb5ba3e81a731288b9349c455b64f8287c100b42fead9c2988",
        "Created": "2018-06-21T09:45:41.456125078-04:00",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "a955616beadefd7359a3c1d4cc65ce5971fa1d5c9e7fad164ac860eb7e583218": {
                "Name": "k8s_POD_kube-dns-64b69465b4-mpddk_kube-system_78830a80-7581-11e8-ae49-0a87435622a6_0",
                "EndpointID": "e0d77a64dfe2ba5b592d54deb4d53c51955ddebaf172fef153d25d281679df6a",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Network:

$ ip route
default via 10.182.208.1 dev ens3
10.182.208.0/20 dev ens3 proto kernel scope link src 10.182.211.149
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.20.0.0/16 dev weave proto kernel scope link src 172.20.128.0

$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens3    inet 10.182.211.149/20 brd 10.182.223.255 scope global dynamic ens3\       valid_lft 2315sec preferred_lft 2315sec
3: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
6: weave    inet 172.20.128.0/16 brd 172.20.255.255 scope global weave\       valid_lft forever preferred_lft forever

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 42 (23 by maintainers)

Most upvoted comments

I am able to run Weave on EKS consistently by following the steps below (a command-level sketch follows the list). Can someone please try it and confirm whether it works?

  • eksctl create cluster
  • kubectl delete ds aws-node -n kube-system
  • delete /etc/cni/net.d/10-aws.conflist on each of the nodes
  • edit the instance security group to allow TCP on port 6783 and UDP on ports 6783/6784
  • flush the iptables nat, mangle, and filter tables
  • restart kube-proxy pods
  • apply the weave-net daemonset
  • delete existing pods so they get recreated in the Weave pod CIDR’s address space
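
A rough command-level sketch of the steps above (the label selectors, the kube-dns example, and the Weave install URL are the conventional ones for EKS v1.10; adjust for your environment, and run the node-local steps on every worker):

$ eksctl create cluster
$ kubectl -n kube-system delete ds aws-node
# on each worker node: remove the AWS CNI config and flush iptables
$ sudo rm /etc/cni/net.d/10-aws.conflist
$ sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -F
# in the AWS console or CLI: allow TCP 6783 and UDP 6783/6784 in the node security group
# back on the workstation:
$ kubectl -n kube-system delete pod -l k8s-app=kube-proxy        # restart kube-proxy
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
$ kubectl -n kube-system delete pod -l k8s-app=kube-dns          # example: recreate kube-dns so it gets a Weave IP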

I am able to verify the following scenarios:

  • pod-to-pod connectivity within the same node and across nodes
  • pod-to-node connectivity
  • node-to-pod connectivity
  • pod-to-pod connectivity via a service IP

EDIT: Note that the api-server for your cluster will not be connected to Weave Net (it runs elsewhere, managed by EKS), so it will not be able to connect to pods.

For the most part this works, but a problem comes up when Istio is installed; take a look at https://github.com/istio/istio/issues/16434

It seems that I am having issues with the API server being able to talk to pods in my EKS cluster.

@jwenz723 yes, this does not work. As you figured, the master nodes are not in the Weave overlay, so they cannot connect.

The strange thing is that I am able to access the dashboard (and other services) if I use kubectl port-forward instead of kubectl proxy.

https://kubernetes.io/docs/concepts/architecture/master-node-communication/#master-to-cluster

The API server goes through the kubelet and then to the pod in the case of port-forward.
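
For example, reaching the dashboard through port-forward (the service name and port here assume the stock kubernetes-dashboard in kube-system; if your kubectl cannot port-forward to services, target the dashboard pod instead):

$ kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443/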

@alec-v IMO the Kubernetes control plane/master not being able to reach pod IPs is not necessarily a bad thing from a security perspective, and it makes sense for a hosted Kubernetes solution. But it does seem to have an impact on any extension API that uses the aggregation layer.

Please see this comment; you should be able to use hostNetwork for the metrics-server pod and make it work.
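
A minimal sketch of that workaround, assuming the stock metrics-server deployment in kube-system (hostNetwork puts the pod on the node network, which the API server can reach without the Weave overlay):

$ kubectl -n kube-system patch deployment metrics-server \
    --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'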