calico: Network policy isolation between namespaces does not prevent access

General summary

Isolating pods in one namespace using label-based NetworkPolicies does not prevent those pods from being accessed by pods in another namespace.

What I attempted to do

I set up two namespaces, demo and local, to test whether NetworkPolicy isolation prevents pods in demo from accessing pods in local, and vice versa.
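For completeness, the namespaces themselves are nothing special; a minimal sketch of how they were created:

# Create the two namespaces used throughout this report
kubectl create namespace demo
kubectl create namespace local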

I then set up the following NetworkPolicies:

demo:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: default-deny
  namespace: demo
spec:
  podSelector:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-demo
  namespace: demo
spec:
  podSelector:
    matchLabels:
      environment: demo
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: demo

local:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: default-deny
  namespace: local
spec:
  podSelector:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-local
  namespace: local
spec:
  podSelector:
    matchLabels:
      environment: local
  ingress:
    - from:
      - podSelector:
          matchLabels:
            environment: local

And applied these. I then deployed two pods in each namespace, listening on ports 9090 and 9091, labelling those in ns local with environment: local and those in ns demo with environment: demo. I then attempted to telnet to these from a third, interactive pod in each namespace (labelled environment: local in the local ns and environment: demo in the demo ns).
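Roughly, the set-up looked like this (a sketch only: the pod names, file names and the my-service image are placeholders, not the exact ones used; my-service stands for any image that listens on the given port):

# Apply the policies above (file names are illustrative)
kubectl apply -f demo-policies.yaml
kubectl apply -f local-policies.yaml

# Two service pods per namespace, labelled for that namespace
kubectl run svc-9090 --image=my-service --port=9090 --restart=Never \
  --namespace=local --labels="environment=local"
kubectl run svc-9091 --image=my-service --port=9091 --restart=Never \
  --namespace=local --labels="environment=local"
# ...and the same in ns demo, with --namespace=demo --labels="environment=demo"

# A third, interactive pod in each namespace, carrying the same label
kubectl run interactive --image=busybox --restart=Never \
  --namespace=local --labels="environment=local" -- sleep 3600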

Expected Behavior

Attempting to telnet to pods in ns local from the interactive pod in ns demo should fail, and attempting to telnet to pods in ns demo from the interactive pod in ns local should fail.

Current Behavior

I am able to telnet to all pods in ns local from any and all pods in ns demo, and likewise I can telnet to all pods in ns demo from any and all pods in ns local.
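For example, from the interactive pod in ns demo (the pod name and IP below are placeholders for whatever kubectl reports in your cluster):

# Look up a target pod IP in ns local
kubectl get pods -o wide --namespace=local

# Telnet to it from the interactive pod in ns demo; this should be blocked
# by the default-deny policy in ns local, but the connection is established
kubectl exec -it interactive --namespace=demo -- telnet 192.168.104.17 9090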

Possible Solution

Either some extra steps are needed that are not clearly documented, or, if this is the expected way to provide isolation between namespaces, it is not working.

Steps to Reproduce (for bugs)

  1. Create a local and a demo namespace.
  2. Deploy 2-3 pods in each namespace, one of which is interactive.
  3. Label each pod in local with a label a NetworkPolicy can select on to indicate it belongs to local, and each pod in demo with a similarly appropriate label.
  4. Define a NetworkPolicy to default-deny access to pods in each namespace.
  5. Define a NetworkPolicy to allow access to pods labelled local from other pods labelled local, and to allow access to pods labelled demo from other pods labelled demo.
  6. Telnet to pods in demo from the interactive pod in demo.
  7. Telnet to pods in local from the interactive pod in demo.

Context

I need to isolate pods in each namespace from pods in another namespace. In other words, I want only pods in local to be able to access other pods in local, and NOT access pods in demo. I want only pods in demo to be able to access other pods in demo, and NOT access pods in local. This is an important real-world use case where namespaces are used to swim-lane dev, stage and prod environments and isolate them from one another.

Your Environment

  • Calico version: 2.4
  • Orchestrator version (e.g. kubernetes, mesos, rkt): kubernetes 1.7.0
  • Cloud provider or hardware configuration: AWS EC2
  • OS (e.g. from /etc/os-release): 14.04.5 LTS, Trusty Tahr
  • Kernel (e.g. uname -a): 3.13.0-125-generic #174-Ubuntu

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 2
  • Comments: 17 (7 by maintainers)

Most upvoted comments

If there is something that was missed that is needed to make this work, can we please document it in a step-by-step manner? I doubt I am the only user seeking to make this use case work. Just some clear docs on this would really help.