Whereabouts is an IP Address Management (IPAM) CNI plugin that dynamically assigns IP addresses to pods across all the nodes in a cluster. Compared to other IPAM plugins such as host-local, which only assigns IP addresses to pods on the same node, whereabouts assigns IP addresses cluster-wide.
In this topic, we show how you can attach a secondary network interface to a pod with an IP address assigned from a range that you specify, using whereabouts. For example, you have Antrea or Calico as the primary CNI, a secondary interface created using macvlan or ipvlan, and Whereabouts as the IPAM CNI.
Note: As of v2.5, TKG does not support clusters on AWS or Azure. See the End of Support for TKG Management and Workload Clusters on AWS and Azure in the Tanzu Kubernetes Grid v2.5 Release Notes.
Prerequisites:
- kubectl, installed as described in Install the Tanzu CLI and Kubernetes CLI for Use with Standalone Management Clusters.
- A workload cluster with nodes of size large or extra-large, as described in Predefined Node Sizes.
To install the Whereabouts package on a workload cluster and configure the cluster to use it:
Configure and install Whereabouts.
Create a configuration file that retrieves the Whereabouts parameters.
tanzu package available get whereabouts.tanzu.vmware.com/PACKAGE-VERSION --default-values-file-output FILE-PATH
Note: The namespace in which to deploy the whereabouts components must be kube-system. Any customized namespace installation will fail.
Where PACKAGE-VERSION is the version of the Whereabouts package that you want to install and FILE-PATH is the location to which you want to save the configuration file, for example, whereabouts-data-values.yaml. The command above creates a configuration file named whereabouts-data-values.yaml containing the default values.
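For example, using the package version listed in the next step and the file name used throughout this topic (both illustrative), the command would look similar to:
# Write the package's default values to whereabouts-data-values.yaml
tanzu package available get whereabouts.tanzu.vmware.com/0.6.3+vmware.2-tkg.1 --default-values-file-output whereabouts-data-values.yaml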
Run the tanzu package available list
command to list the available versions of the Whereabouts package, for example:
tanzu package available list whereabouts.tanzu.vmware.com -A
NAME VERSION RELEASED-AT NAMESPACE
whereabouts.tanzu.vmware.com 0.6.3+vmware.2-tkg.1 2023-04-30 18:00:00 +0000 UTC tanzu-package-repo-global
Note: Make sure that your custom image registry can be reached if you are operating in a network-restricted environment.
Run the tanzu package available get
command with --values-schema
to see which field values can be set:
tanzu package available get whereabouts.tanzu.vmware.com/VERSION --values-schema -o FORMAT
Where:
- VERSION is a version listed in the tanzu package available list output.
- FORMAT is either yaml or json.
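For example, with the version listed above and YAML output (the exact fields returned depend on the package version):
# Print the configurable fields of the Whereabouts package as YAML
tanzu package available get whereabouts.tanzu.vmware.com/0.6.3+vmware.2-tkg.1 --values-schema -o yaml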
Populate the whereabouts-data-values.yaml
configuration file with your desired field values.
Remove all comments from the whereabouts-data-values.yaml
file:
yq -i eval '... comments=""' whereabouts-data-values.yaml
Run tanzu package install
to install the package.
tanzu package install whereabouts --package whereabouts.tanzu.vmware.com --version AVAILABLE-PACKAGE-VERSION --values-file whereabouts-data-values.yaml --namespace TARGET-NAMESPACE
Where:
- TARGET-NAMESPACE is the namespace in which you want to install the Whereabouts package, for example, the my-packages or tanzu-cli-managed-packages namespace. If the --namespace flag is not specified, the Tanzu CLI installs the package in the default namespace. The namespace must exist before you run the command; you can create it with, for example, kubectl create namespace my-packages.
- AVAILABLE-PACKAGE-VERSION is the version that you retrieved above, for example 0.6.3+vmware.2-tkg.1.
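For example, assuming you use the my-packages namespace and the version listed above, the command would look similar to:
# Install the Whereabouts package into the my-packages namespace
tanzu package install whereabouts --package whereabouts.tanzu.vmware.com --version 0.6.3+vmware.2-tkg.1 --values-file whereabouts-data-values.yaml --namespace my-packages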
Run tanzu package installed get to check the status of the installed package.
tanzu package installed get whereabouts -o <json|yaml|table>
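If you installed the package into a namespace other than default, add the --namespace flag; for example, assuming the my-packages namespace used above:
# Show the reconciliation status of the Whereabouts package install
tanzu package installed get whereabouts --namespace my-packages -o table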
Create a custom resource definition (CRD) for NetworkAttachmentDefinition that defines the CNI configuration for network interfaces to be used by Multus CNI, with Whereabouts as the IPAM type.
Create a CRD specification. For example, this multus-cni-crd.yaml specifies a NetworkAttachmentDefinition named macvlan-conf that configures a macvlan CNI and has whereabouts as the IPAM type:
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "macvlan",
        "capabilities": { "ips": true },
        "master": "eth0",
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts",
          "range": "192.168.20.0/24",
          "range_start": "192.168.20.10",
          "range_end": "192.168.20.100",
          "gateway": "192.168.20.1"
        }
      } ]
    }'
Create the resource; for example kubectl create -f multus-cni-crd.yaml
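To confirm that the resource exists, you can query the NetworkAttachmentDefinition objects registered by Multus; the resource name below is the one defined by the NetworkAttachmentDefinition CRD:
# List NetworkAttachmentDefinition resources and inspect macvlan-conf
kubectl get network-attachment-definitions.k8s.cni.cncf.io
kubectl describe network-attachment-definitions.k8s.cni.cncf.io macvlan-conf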
Create a pod with the annotation k8s.v1.cni.cncf.io/networks
to specify the additional network to add.
Create the pod specification; for example, my-multi-cni-pod.yaml
:
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: pod0
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: docker.io/kserving/tools:latest
Create the pod. For example, kubectl create -f my-multi-cni-pod.yaml creates the pod pod0.
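You can confirm that the pod is running and note the node it was scheduled to; this is useful later when placing pod1 on the same node and pod2 on a different node:
# Show pod status, pod IP, and the node the pod is running on
kubectl get pod pod0 -o wide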
Once the pod is created, it will have three network interfaces:
- lo, the loopback interface
- eth0, the default pod network managed by the Antrea or Calico CNI
- net1, the new interface created through the annotation k8s.v1.cni.cncf.io/networks: macvlan-conf
Note: The default network gets the name eth0, and additional pod network interfaces get the names net1, net2, and so on.
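As a quick check, you can list the interfaces from inside the pod. This assumes the container image provides the ip utility, as the example image used above does:
# List all network interfaces inside pod0 (expect lo, eth0, and net1)
kubectl exec pod0 -- ip a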
Run kubectl describe pod
on pod0
, and confirm that the annotation k8s.v1.cni.cncf.io/network-status
lists all network interfaces.
For example:
$ kubectl describe pod pod0
Name: pod0
Namespace: default
Priority: 0
Node: tcecluster-md-0-6476897f75-rl9vt/10.170.109.225
Start Time: Thu, 25 August 2022 15:31:20 +0000
Labels: <none>
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"100.96.1.80"
],
"mac": "66:39:dc:63:50:a3",
"default": true,
"dns": {}
},{
"name": "default/macvlan-conf",
"interface": "net1",
"ips": [
"192.168.20.11"
],
"mac": "02:77:cb:a0:60:e3",
"dns": {}
}]
k8s.v1.cni.cncf.io/networks: macvlan-conf
Then run kubectl exec pod0 -- ip a show dev net1 to check that the target interface is up and running with the IP address listed in the annotations above. Repeat this step to validate the configuration for the other pods that you will create in the next step.
Create two additional pods, pod1 on the same node as pod0 and pod2 on a different node. We will use these pods to verify network access within a single node and across nodes in the cluster.
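For example, the following is a minimal sketch of pod1 pinned to the same node as pod0 by setting nodeName; the node name shown is taken from the kubectl describe output above and is illustrative. pod2 can be created the same way with the name pod2 and the name of a different node:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  annotations:
    # Attach the same secondary macvlan network as pod0
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  # Illustrative: pin to the same node as pod0; use the node shown by 'kubectl get pod pod0 -o wide'
  nodeName: tcecluster-md-0-6476897f75-rl9vt
  containers:
  - name: pod1
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: docker.io/kserving/tools:latest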
You can now check network access between pods in the same node and pods across the cluster.
Verify that network access between pods on the same node works. For example, the following command verifies that pod0 can reach pod1 via its assigned IP address.
kubectl exec -it pod0 -- ping -c 3 192.168.20.12
PING 192.168.20.12 (192.168.20.12) 56(84) bytes of data.
64 bytes from 192.168.20.12: icmp_seq=1 ttl=64 time=0.237 ms
64 bytes from 192.168.20.12: icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from 192.168.20.12: icmp_seq=3 ttl=64 time=0.156 ms
--- 192.168.20.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.156/0.202/0.237/0.037 ms
Verify that network access between pods on different nodes works. For example, the following command verifies that pod0 can reach pod2 via its assigned IP address.
kubectl exec -it pod0 -- ping -c 3 192.168.20.13
PING 192.168.20.13 (192.168.20.13) 56(84) bytes of data.
64 bytes from 192.168.20.13: icmp_seq=1 ttl=64 time=0.799 ms
64 bytes from 192.168.20.13: icmp_seq=2 ttl=64 time=0.626 ms
64 bytes from 192.168.20.13: icmp_seq=3 ttl=64 time=0.655 ms
--- 192.168.20.13 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.626/0.693/0.799/0.078 ms