I’ve been working on setting up a dual-stack-capable Kubernetes cluster (Kubernetes v1.20.0 and Calico v3.17.1), and am running into the following issue: inter-pod communication across different nodes fails over IPv6. If I run `ping -6 $POD2_IP6` from inside a pod on node A to a different pod running on node B, I get no response. IPv6 communication between pods on the same node works fine, and IPv4 works between pods both on the same node and on different nodes.
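To make the failing case concrete, this is roughly how I’m testing (the pod names and IPv6/IPv4 addresses below are placeholders, not my real ones):

```shell
# Look up the dual-stack addresses of the pod on node B.
kubectl get pod pod-b -o jsonpath='{.status.podIPs}'

# From a pod on node A: IPv6 ping to the pod on node B -- no replies.
kubectl exec -it pod-a -- ping -6 -c 3 fd00:10:244:1::5

# The same pair of pods over IPv4 works fine.
kubectl exec -it pod-a -- ping -4 -c 3 10.244.1.5
```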
When I dig down with `tcpdump`, I can see that packets sent from one pod make their way to the physical network interface of the underlying node correctly, but never reach the physical interface of the node where the destination pod lives. However, the same `ping -6` sent from node A directly to the pod IP on node B does result in packets reaching the virtual ethernet interface of the destination pod.
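For reference, the captures above come from something like the following (the interface name `ens3` and the `cali*` veth name are from my VMs and will differ elsewhere; the IPv6 address is a placeholder):

```shell
# On node A: ICMPv6 echo requests from the pod show up on the
# node's physical interface.
tcpdump -i ens3 -n 'icmp6 && ip6 host fd00:10:244:1::5'

# On node B: the same filter on the physical interface captures
# nothing while the ping originates inside a pod on node A...
tcpdump -i ens3 -n 'icmp6 && ip6 host fd00:10:244:1::5'

# ...but packets do appear on the destination pod's veth when the
# ping is sent from node A itself (cali* is Calico's veth naming).
tcpdump -i cali0abc123def4 -n icmp6
```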
This is running in an on-prem OpenStack environment on Ubuntu 18.04 VMs, but I saw similar behaviour when I replicated the setup in an AWS VPC. I’ve mostly been testing with `ping6`, but have seen the same behaviour with other traffic as well, and both with no network policies (the default after Calico installation) and with an “allow all” network policy.
My question: Is this expected behaviour?
As far as I understand, IPv4 works “out of the box” because of the overlay network created by Calico (IPIP in my particular case). For IPv6, my understanding is that the onus is on the underlying network to route packets between nodes correctly. Alternatively, it’s entirely possible I’m just holding IPv6 wrong. For the sake of brevity I’ve omitted a lot of specific details, but I can add them if required.

Any advice or pointers are greatly appreciated!
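To illustrate what I mean: my Calico IP pools look roughly like the following (the CIDRs are example values, not my real ones). The IPv4 pool is tunnelled with IPIP, while the IPv6 pool has no encapsulation, so IPv6 packets between nodes travel natively over the underlying network:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16
  ipipMode: Always      # IPv4 pod traffic is tunnelled between nodes
  natOutgoing: true
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  cidr: fd00:10:244::/64
  ipipMode: Never       # no IPIP for IPv6 -- packets are routed natively
  natOutgoing: false
```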