Most CNI plugins don't support pod-to-pod multicast. One way to fix that is to add an ipvlan network using Multus. How to install Multus and the Whereabouts IPAM plugin in your environment is outside the scope of this gist.
Assuming Multus and Whereabouts are installed, create a NetworkAttachmentDefinition ("NAD") for ipvlan on the main K8s network:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: ipvlan
spec:
config: '{
"cniVersion": "0.4.0",
"type": "ipvlan",
"master": "eth1",
"ipam": {
"type": "whereabouts",
"ipRanges": [
{ "range": "192.0.2.0/24" },
{ "range": "2001:DB8::192.0.2.0/120" }
]
}
}'Here eth1 is the interface to the main K8s network in my
environment, yours may differ.
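One way to find the right master interface is to look up which node interface carries the node's InternalIP (a hypothetical session; `<internal-ip>` is a placeholder):

```
kubectl get nodes -o wide            # note the INTERNAL-IP column
# on a node, find the interface that owns that address:
ip -o addr show | grep <internal-ip>
```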
NOTE: The CIDRs above are documentation ranges from RFC 5737 and RFC 3849. Change them to ranges that fit your network.
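To sanity-check the definition, apply it and read it back (the file name `ipvlan-nad.yaml` is just an example):

```
kubectl create -f ipvlan-nad.yaml
kubectl get network-attachment-definitions ipvlan -o yaml
```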
Then add the Multus annotation to the pods that shall use multicast, and create a route to the ipvlan interface (named `net1` by default by Multus):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multicast
spec:
  selector:
    matchLabels:
      app: multicast
  replicas: 10
  template:
    metadata:
      labels:
        app: multicast
      annotations:
        k8s.v1.cni.cncf.io/networks: default/ipvlan
    spec:
      initContainers:
      - name: route
        image: uablrek/multicast:latest
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        command: ["ip", "route", "add", "224.0.0.0/4", "dev", "net1", "table", "local"]
      containers:
      - name: alpine
        image: uablrek/multicast:latest
        imagePullPolicy: IfNotPresent
        #command: ["/multicast"]
```

Set the route in an initContainer, so the main container can run without privileges (or the NET_ADMIN capability).
With Multus and Whereabouts installed, save the manifest above as `multicast.yaml` and deploy it:
```
kubectl create namespace ns1
kubectl create -f multicast.yaml -n ns1
kubectl create namespace ns2
kubectl create -f multicast.yaml -n ns2
kubectl get pods -n ns1
NAME                        READY   STATUS    RESTARTS   AGE
multicast-9fb558bf9-8nvpt   1/1     Running   0          6s
multicast-9fb558bf9-crpzk   1/1     Running   0          6s
multicast-9fb558bf9-cvzg6   1/1     Running   0          6s
multicast-9fb558bf9-dmcxc   1/1     Running   0          6s
multicast-9fb558bf9-f6k6g   1/1     Running   0          6s
multicast-9fb558bf9-fdg87   1/1     Running   0          6s
multicast-9fb558bf9-knbhx   1/1     Running   0          6s
multicast-9fb558bf9-scjzh   1/1     Running   0          6s
multicast-9fb558bf9-tmxwv   1/1     Running   0          6s
multicast-9fb558bf9-vmlss   1/1     Running   0          6s
kubectl get pods -n ns2
NAME                        READY   STATUS    RESTARTS   AGE
multicast-9fb558bf9-2nfjj   1/1     Running   0          15s
multicast-9fb558bf9-bdvb7   1/1     Running   0          15s
multicast-9fb558bf9-f8x2m   1/1     Running   0          15s
multicast-9fb558bf9-jtf8q   1/1     Running   0          15s
multicast-9fb558bf9-klpq7   1/1     Running   0          15s
multicast-9fb558bf9-n4bbv   1/1     Running   0          15s
multicast-9fb558bf9-pnrh6   1/1     Running   0          15s
multicast-9fb558bf9-t62gm   1/1     Running   0          15s
multicast-9fb558bf9-tdxx9   1/1     Running   0          15s
multicast-9fb558bf9-xl92f   1/1     Running   0          15s
```
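Optionally, check that the initContainer installed the multicast route (pod name taken from the listing above; the image includes iproute2):

```
kubectl exec -n ns1 multicast-9fb558bf9-8nvpt -c alpine -- ip route show table local
```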
```
kubectl exec -n ns1 -c alpine multicast-9fb558bf9-8nvpt -- /multicast ping
kubectl logs -n ns2 -c alpine multicast-9fb558bf9-n4bbv
2024/06/05 10:09:51 13 bytes read from 192.0.2.4:44617
2024/06/05 10:09:51 00000000 68 65 6c 6c 6f 2c 20 77 6f 72 6c 64 0a |hello, world.|
```
A multicast ping is initiated from a pod in ns1, and the logs of a pod in ns2 are checked (all pods receive the message).
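The image also contains tcpdump, so you can watch the multicast packets arrive on the ipvlan interface directly (this needs the NET_RAW capability, which is normally granted by default):

```
kubectl exec -n ns2 multicast-9fb558bf9-n4bbv -c alpine -- tcpdump -ni net1 -c 5 udp
```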
The test program is taken from here with a slight modification in main():
```go
func main() {
	if len(os.Args) > 1 {
		ping(srvAddr)
	} else {
		serveMulticastUDP(srvAddr, msgHandler)
	}
}
```

The test image is created with this Dockerfile:

```Dockerfile
FROM alpine:latest
RUN apk add gcompat iproute2 tcpdump
COPY /multicast /
CMD ["/multicast"]