The following procedure describes how to upgrade or roll back the DPDK version on node pools.
Procedure
- Identify the node pool on which the DU application is running, and drain the node pool.
capv@workload-cluster-ph3-master-control-plane-vvnh7 [ ~ ]$ kubectl get nodes -A
NAME                                              STATUS   ROLES           AGE    VERSION
np-n0119-vgkgc-bs6lm                              Ready    <none>          7d1h   v1.26.11+vmware.1
workload-cluster-ph3-master-control-plane-vvnh7   Ready    control-plane   12d    v1.26.11+vmware.1
capv@workload-cluster-ph3-master-control-plane-vvnh7 [ ~ ]$ kubectl drain --ignore-daemonsets --delete-emptydir-data --force -l telco.vmware.com/nodepool=np-n0119
node/np-n0119-vgkgc-bs6lm cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/antrea-agent-vldfp, kube-system/kube-multus-ds-kfmkw, kube-system/kube-proxy-7sh46, kube-system/whereabouts-44smj, tca-system/nodeconfig-daemon-lx7m6, vmware-system-csi/vsphere-csi-node-7v649
evicting pod testnf-n0119/testnf-du-f-59106-rzbul-testnf-du-flexran-759fdc65bc-49tsp
evicting pod kube-system/metrics-server-7ffdb8fdf9-c6clb
evicting pod tanzu-system/secretgen-controller-7b98cf44d9-xxrmp
pod/secretgen-controller-7b98cf44d9-xxrmp evicted
pod/metrics-server-7ffdb8fdf9-c6clb evicted
pod/testnf-du-f-59106-rzbul-testnf-du-flexran-759fdc65bc-49tsp evicted
node/np-n0119-vgkgc-bs6lm drained
capv@workload-cluster-ph3-master-control-plane-vvnh7 [ ~ ]$ kubectl get nodes -A
NAME                                              STATUS                     ROLES           AGE    VERSION
np-n0119-vgkgc-bs6lm                              Ready,SchedulingDisabled   <none>          7d1h   v1.26.11+vmware.1
workload-cluster-ph3-master-control-plane-vvnh7   Ready                      control-plane   12d    v1.26.11+vmware.1
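When several node pools need the same DPDK change, the drain command above can be wrapped in a small loop. This is only a sketch: the pool labels `np-n0119` and `np-n0120` are placeholders, and the script prints the commands by default (set `DRY_RUN=0` to execute them).

```shell
#!/bin/sh
# Print (or, with DRY_RUN=0, execute) the drain command for one
# node-pool label. The pool names below are placeholders.
drain_pool() {
  cmd="kubectl drain --ignore-daemonsets --delete-emptydir-data --force -l telco.vmware.com/nodepool=$1"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

for np in np-n0119 np-n0120; do
  drain_pool "$np"
done
```

Draining pools sequentially rather than in parallel keeps the eviction load on the cluster predictable.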
Upload the following workflow to TCA and use it to perform the drain operation at scale across multiple node pools that require the DPDK upgrade or rollback procedure:

{
  "name": "drain+workflow",
  "version": "1.0",
  "startStepId": "step0",
  "schemaVersion": "3.0",
  "readOnly": false,
  "inputs": {
    "nodepool": { "type": "string", "defaultValue": "np1", "required": false },
    "HOSTNAME": { "type": "string", "description": "Hostname", "defaultValue": "{{vim.k8sCluster.masterNode.ip}}", "required": false },
    "vim": { "type": "vimLocation", "required": false },
    "USER": { "type": "string", "description": "Username", "defaultValue": "{{vim.k8sCluster.masterNode.username}}", "required": false },
    "PWD": { "type": "password", "description": "Password", "defaultValue": "e3t2aW0uazhzQ2x1c3Rlci5tYXN0ZXJOb2RlLnBhc3N3b3JkfX0=", "required": false }
  },
  "outputs": {
    "FINAL_OUTPUT": { "type": "string", "description": "Final Output" }
  },
  "variables": {},
  "hierarchy": [],
  "tags": [],
  "steps": {
    "step0": {
      "type": "K8S",
      "description": "new Step",
      "conditions": [],
      "inBindings": {
        "nodepool": { "type": "string", "exportName": "nodepool" },
        "scope": { "type": "string", "defaultValue": "VIM_RW" },
        "location": { "type": "vimLocation", "exportName": "vim" },
        "script": { "type": "string", "format": "text", "defaultValue": "kubectl drain --ignore-daemonsets --delete-emptydir-data -l telco.vmware.com/nodepool=$TCA_INPUT_nodepool" },
        "target": { "type": "string", "defaultValue": "MANAGEMENT" }
      },
      "outBindings": {
        "FINAL_OUTPUT": { "name": "result", "type": "string" }
      }
    }
  },
  "id": "a9ae006e-4b72-4f64-801e-90ac5c0533ec"
}
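The `PWD` input's `defaultValue` in the workflow above is a base64-encoded string rather than a literal credential. Decoding it, as in this small Python sketch, shows it is simply the TCA template reference for the cluster master-node password:

```python
import base64

# defaultValue of the PWD input, copied from the workflow above
encoded = "e3t2aW0uazhzQ2x1c3Rlci5tYXN0ZXJOb2RlLnBhc3N3b3JkfX0="

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # {{vim.k8sCluster.masterNode.password}}
```

So the template variable is resolved by TCA at run time; no real password is stored in the workflow definition.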
- Perform the DPDK upgrade or rollback procedure.
- Uncordon the node pool.
capv@workload-cluster-ph3-master-control-plane-vvnh7 [ ~ ]$ kubectl uncordon np-n0119-vgkgc-bs6lm
node/np-n0119-vgkgc-bs6lm uncordoned
capv@workload-cluster-ph3-master-control-plane-vvnh7 [ ~ ]$ kubectl get nodes -A
NAME                                              STATUS   ROLES           AGE    VERSION
np-n0119-vgkgc-bs6lm                              Ready    <none>          7d1h   v1.26.11+vmware.1
workload-cluster-ph3-master-control-plane-vvnh7   Ready    control-plane   12d    v1.26.11+vmware.1
Upload the following workflow to TCA and use it to perform the uncordon operation at scale across multiple node pools that require the DPDK upgrade or rollback procedure:

{
  "name": "uncordon workflow",
  "version": "1.0",
  "startStepId": "step0",
  "schemaVersion": "3.0",
  "readOnly": false,
  "inputs": {
    "nodepool": { "type": "string", "defaultValue": "np1", "required": false },
    "HOSTNAME": { "type": "string", "description": "Hostname", "defaultValue": "{{vim.k8sCluster.masterNode.ip}}", "required": false },
    "vim": { "type": "string", "required": false },
    "USER": { "type": "string", "description": "Username", "defaultValue": "{{vim.k8sCluster.masterNode.username}}", "required": false },
    "PWD": { "type": "password", "description": "Password", "defaultValue": "e3t2aW0uazhzQ2x1c3Rlci5tYXN0ZXJOb2RlLnBhc3N3b3JkfX0=", "required": false }
  },
  "outputs": {
    "FINAL_OUTPUT": { "type": "string", "description": "Final Output" }
  },
  "variables": {
    "node": { "type": "string" }
  },
  "hierarchy": [],
  "tags": [],
  "steps": {
    "step0": {
      "type": "K8S",
      "description": "new Step",
      "conditions": [],
      "inBindings": {
        "nodepool": { "type": "string", "exportName": "nodepool" },
        "scope": { "type": "string", "defaultValue": "VIM_RW" },
        "location": { "type": "virtualMachine", "exportName": "vim" },
        "script": { "type": "string", "format": "text", "defaultValue": "kubectl uncordon -l telco.vmware.com/nodepool={{nodepool}}" },
        "target": { "type": "string", "defaultValue": "MANAGEMENT" }
      },
      "outBindings": {
        "node": { "name": "result", "type": "string" }
      }
    }
  },
  "id": "4fa42408-6f0e-4ff9-a2e9-373bddfc8edb"
}
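A flattened JSON definition like the two above is easy to corrupt while copying. Before uploading to TCA, a quick local sanity check catches malformed JSON and a `startStepId` that does not match any step. This is a hedged sketch, not a TCA API: the checks are only the minimal structural invariants visible in the workflows above.

```python
import json

def validate_workflow(raw: str) -> dict:
    """Basic pre-upload sanity checks on a TCA workflow definition string."""
    wf = json.loads(raw)  # raises ValueError/JSONDecodeError if malformed
    for key in ("name", "startStepId", "steps"):
        if key not in wf:
            raise ValueError(f"missing required key: {key}")
    if wf["startStepId"] not in wf["steps"]:
        raise ValueError("startStepId does not match any step id")
    return wf
```

For example, `validate_workflow(open("uncordon-workflow.json").read())` (a hypothetical file name for the saved definition) returns the parsed dictionary on success and raises otherwise.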