Kubernetes clusters use TLS to secure component communications. When you upgrade a TKG 2 cluster, the rolling update process rotates the TLS certificates automatically. If necessary, you can rotate the TLS certificates manually by completing the steps in this topic.
Requirements
These instructions assume advanced knowledge of and experience with TKG cluster administration.

These instructions assume that the TLS certificates have not expired. If the certificates have expired, do not complete these steps.

To perform these steps, SSH to one of the Supervisor nodes. See Connecting to TKG 2 Clusters on Supervisor as a Kubernetes Administrator and System User.
Retrieve the TKG cluster information
export CLUSTER_NAMESPACE="tkg-cluster-ns"

kubectl get clusters -n $CLUSTER_NAMESPACE
NAME          PHASE         AGE   VERSION
tkg-cluster   Provisioned   43h
export CLUSTER_NAME="tkg-cluster"

kubectl get secrets -n $CLUSTER_NAMESPACE $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' | base64 -d > $CLUSTER_NAME-kubeconfig
kubectl get secrets -n $CLUSTER_NAMESPACE $CLUSTER_NAME-ssh -o jsonpath='{.data.ssh-privatekey}' | base64 -d > $CLUSTER_NAME-ssh-privatekey

chmod 600 $CLUSTER_NAME-ssh-privatekey
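The two commands above rely on Kubernetes Secrets storing their data base64-encoded, so piping through `base64 -d` recovers the original bytes. A minimal standalone sketch of that round trip (the sample string is hypothetical, not real cluster data):

```shell
# Hypothetical sample value; the real Secrets carry the kubeconfig and SSH key bytes.
plain='apiVersion: v1'

# Encode as Kubernetes stores it, then decode as the commands above do.
encoded=$(printf '%s' "$plain" | base64)
printf '%s' "$encoded" | base64 -d
```

The `chmod 600` on the private key is required because `ssh` refuses to use a key file that is readable by other users.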
Verify the environment before rotating the certificates
export KUBECONFIG=$CLUSTER_NAME-kubeconfig
kubectl get nodes -o wide
kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' \
  -l node-role.kubernetes.io/master= > nodes
for i in `cat nodes`; do
   printf "\n######\n"
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i hostname
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i sudo kubeadm certs check-expiration
done;
Example output of the preceding commands:
tkg-cluster-control-plane-k8bqh
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 04, 2023 23:00 UTC   363d                                    no
apiserver                  Oct 04, 2023 23:00 UTC   363d            ca                      no
apiserver-etcd-client      Oct 04, 2023 23:00 UTC   363d            etcd-ca                 no
apiserver-kubelet-client   Oct 04, 2023 23:00 UTC   363d            ca                      no
controller-manager.conf    Oct 04, 2023 23:00 UTC   363d                                    no
etcd-healthcheck-client    Oct 04, 2023 23:00 UTC   363d            etcd-ca                 no
etcd-peer                  Oct 04, 2023 23:00 UTC   363d            etcd-ca                 no
etcd-server                Oct 04, 2023 23:00 UTC   363d            etcd-ca                 no
front-proxy-client         Oct 04, 2023 23:00 UTC   363d            front-proxy-ca          no
scheduler.conf             Oct 04, 2023 23:00 UTC   363d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 01, 2032 22:56 UTC   9y              no
etcd-ca                 Oct 01, 2032 22:56 UTC   9y              no
front-proxy-ca          Oct 01, 2032 22:56 UTC   9y              no
Rotate the TLS certificates
unset KUBECONFIG

kubectl config current-context
kubernetes-admin@kubernetes
kubectl get kcp -n $CLUSTER_NAMESPACE $CLUSTER_NAME-control-plane -o jsonpath='{.apiVersion}{"\n"}'
controlplane.cluster.x-k8s.io/v1beta1
kubectl get kcp -n $CLUSTER_NAMESPACE $CLUSTER_NAME-control-plane
NAME                        CLUSTER       INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
tkg-cluster-control-plane   tkg-cluster   true          true                   3          3       3         0             43h   v1.21.6+vmware.1
kubectl patch kcp $CLUSTER_NAME-control-plane -n $CLUSTER_NAMESPACE --type merge -p "{\"spec\":{\"rolloutAfter\":\"`date +'%Y-%m-%dT%TZ'`\"}}"
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/tkg-cluster-control-plane patched
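The patch works by setting `spec.rolloutAfter` on the KubeadmControlPlane object to the current time, which triggers a rolling redeployment of the control plane machines and, with it, reissued certificates. A sketch of how the merge-patch payload is built, using the same date format as the command above (shown here with `-u` so the timestamp is genuinely UTC, matching its `Z` suffix):

```shell
# Build an RFC 3339-style UTC timestamp, e.g. 2022-10-06T18:18:00Z.
ts=$(date -u +'%Y-%m-%dT%TZ')

# The merge-patch body that the kubectl patch command above sends.
payload="{\"spec\":{\"rolloutAfter\":\"$ts\"}}"
echo "$payload"
```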
kubectl get machines -n $CLUSTER_NAMESPACE
NAME                                        CLUSTER       NODENAME                                    PROVIDERID                                       PHASE          AGE   VERSION
tkg-cluster-control-plane-k8bqh             tkg-cluster   tkg-cluster-control-plane-k8bqh             vsphere://420a2e04-cf75-9b43-f5b6-23ec4df612eb   Running        43h   v1.21.6+vmware.1
tkg-cluster-control-plane-l7hwd             tkg-cluster   tkg-cluster-control-plane-l7hwd             vsphere://420a57cd-a1a0-fec6-a741-19909854feb6   Running        43h   v1.21.6+vmware.1
tkg-cluster-control-plane-mm6xj             tkg-cluster   tkg-cluster-control-plane-mm6xj             vsphere://420a67c2-ce1c-aacc-4f4c-0564daad4efa   Running        43h   v1.21.6+vmware.1
tkg-cluster-control-plane-nqdv6             tkg-cluster                                                                                                Provisioning   25s   v1.21.6+vmware.1
tkg-cluster-workers-v8575-59c6645b4-wvnlz   tkg-cluster   tkg-cluster-workers-v8575-59c6645b4-wvnlz   vsphere://420aa071-9ac2-02ea-6530-eb59ceabf87b   Running        43h   v1.21.6+vmware.1
kubectl get machines -n $CLUSTER_NAMESPACE
NAME                                        CLUSTER       NODENAME                                    PROVIDERID                                       PHASE     AGE   VERSION
tkg-cluster-control-plane-m9745             tkg-cluster   tkg-cluster-control-plane-m9745             vsphere://420a5758-50c4-3172-7caf-0bbacaf882d3   Running   17m   v1.21.6+vmware.1
tkg-cluster-control-plane-nqdv6             tkg-cluster   tkg-cluster-control-plane-nqdv6             vsphere://420ad908-00c2-4b9b-74d8-8d197442e767   Running   22m   v1.21.6+vmware.1
tkg-cluster-control-plane-wdmph             tkg-cluster   tkg-cluster-control-plane-wdmph             vsphere://420af38a-f9f8-cb21-e05d-c1bcb6840a93   Running   10m   v1.21.6+vmware.1
tkg-cluster-workers-v8575-59c6645b4-wvnlz   tkg-cluster   tkg-cluster-workers-v8575-59c6645b4-wvnlz   vsphere://420aa071-9ac2-02ea-6530-eb59ceabf87b   Running   43h   v1.21.6+vmware.1
Verify the certificate rotation
export KUBECONFIG=$CLUSTER_NAME-kubeconfig

kubectl get nodes -o wide
NAME                                        STATUS   ROLES                  AGE     VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                 KERNEL-VERSION       CONTAINER-RUNTIME
tkg-cluster-control-plane-m9745             Ready    control-plane,master   15m     v1.21.6+vmware.1   10.244.0.55   <none>        VMware Photon OS/Linux   4.19.198-1.ph3-esx   containerd://1.4.11
tkg-cluster-control-plane-nqdv6             Ready    control-plane,master   21m     v1.21.6+vmware.1   10.244.0.54   <none>        VMware Photon OS/Linux   4.19.198-1.ph3-esx   containerd://1.4.11
tkg-cluster-control-plane-wdmph             Ready    control-plane,master   9m22s   v1.21.6+vmware.1   10.244.0.56   <none>        VMware Photon OS/Linux   4.19.198-1.ph3-esx   containerd://1.4.11
tkg-cluster-workers-v8575-59c6645b4-wvnlz   Ready    <none>                 43h     v1.21.6+vmware.1   10.244.0.51   <none>        VMware Photon OS/Linux   4.19.198-1.ph3-esx   containerd://1.4.11

kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' \
  -l node-role.kubernetes.io/master= > nodes

for i in `cat nodes`; do
   printf "\n######\n"
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i hostname
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i sudo kubeadm certs check-expiration
done;
######
tkg-cluster-control-plane-m9745
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 06, 2023 18:18 UTC   364d                                    no
apiserver                  Oct 06, 2023 18:18 UTC   364d            ca                      no
apiserver-etcd-client      Oct 06, 2023 18:18 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Oct 06, 2023 18:18 UTC   364d            ca                      no
controller-manager.conf    Oct 06, 2023 18:18 UTC   364d                                    no
etcd-healthcheck-client    Oct 06, 2023 18:18 UTC   364d            etcd-ca                 no
etcd-peer                  Oct 06, 2023 18:18 UTC   364d            etcd-ca                 no
etcd-server                Oct 06, 2023 18:18 UTC   364d            etcd-ca                 no
front-proxy-client         Oct 06, 2023 18:18 UTC   364d            front-proxy-ca          no
scheduler.conf             Oct 06, 2023 18:18 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 01, 2032 22:56 UTC   9y              no
etcd-ca                 Oct 01, 2032 22:56 UTC   9y              no
front-proxy-ca          Oct 01, 2032 22:56 UTC   9y              no
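A quick way to confirm the rotation, assuming you saved the check-expiration output to files before and after the patch (the file names here are hypothetical), is to diff the two: the EXPIRES dates of the leaf certificates change, while the ca, etcd-ca, and front-proxy-ca entries keep their original dates. A minimal sketch:

```shell
# Hypothetical one-line excerpts of the saved before/after output.
printf 'admin.conf  Oct 04, 2023 23:00 UTC  363d\n' > before.txt
printf 'admin.conf  Oct 06, 2023 18:18 UTC  364d\n' > after.txt

# A non-empty diff on a leaf-certificate line indicates it was reissued.
if ! diff -q before.txt after.txt > /dev/null; then
  echo "admin.conf was rotated"
fi
```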
Kubelet certificate
You do not need to rotate the kubelet certificate, assuming the rotateCertificates parameter in the kubelet configuration is set to true, which is the default setting.
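That setting lives in /var/lib/kubelet/config.yaml on each node, which is why the verification loop below greps that file. A representative fragment (abridged; a real file contains many more fields):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# With rotateCertificates enabled (the default), the kubelet automatically
# requests a new client certificate from the API server before the current
# one expires.
rotateCertificates: true
```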
kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' \
  -l node-role.kubernetes.io/master!= > workernodes

for i in `cat workernodes`; do
   printf "\n######\n"
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i hostname
   ssh -o "StrictHostKeyChecking=no" -i $CLUSTER_NAME-ssh-privatekey -q vmware-system-user@$i sudo grep rotate /var/lib/kubelet/config.yaml
done;
######
tkg-cluster-workers-v8575-59c6645b4-wvnlz
rotateCertificates: true