This document provides step-by-step instructions for deploying Tanzu Kubernetes Operations on VMware Cloud on AWS.
The scope of the document is limited to providing the deployment steps based on the reference design in VMware Tanzu for Kubernetes Operations on VMware Cloud on AWS Reference Design.
You can use VMware Service Installer for VMware Tanzu to automate this deployment.
VMware Service Installer for Tanzu automates the deployment of the reference designs for Tanzu for Kubernetes Operations. It uses best practices for deploying and configuring the required Tanzu for Kubernetes Operations components.
To use Service Installer to automate this deployment, see Deploying VMware Tanzu for Kubernetes Operations on VMware Cloud on AWS Using Service Installer for VMware Tanzu.
Alternatively, if you decide to manually deploy each component, follow the steps provided in this document.
These instructions assume that you have the following set up:
Software Components | Version |
---|---|
Tanzu Kubernetes Grid | 1.6.0 |
VMware Cloud on AWS SDDC Version | 1.18 and later |
NSX Advanced Load Balancer | 21.1.4 |
To verify the interoperability of other versions and products, see VMware Interoperability Matrix.
Before deploying Tanzu Kubernetes Operations on VMC on AWS, ensure that your environment is set up as described in the following:
Your environment should meet the following general requirements:
Your SDDC has the following objects in place:
NSX Advanced Load Balancer 21.1.4 OVA downloaded from the customer connect portal and readily available for deployment.
A content library to store NSX Advanced Load Balancer Controller and service engine OVA templates.
Create NSX-T logical segments for deploying Tanzu for Kubernetes Operations components as per Network Recommendations defined in the reference architecture.
Ensure that the firewall is set up as described in Firewall Recommendations.
The following table shows sample entries of the resource pools and folders that you should create in your SDDC.
Resource Type | Resource Pool Name | Sample Folder Name |
---|---|---|
NSX Advanced Load Balancer Components | NSX-Advanced Load Balancer | NSX-Advanced Load Balancer-VMS |
TKG Management Components | TKG-Management | TKG-Mgmt-VMS |
TKG Shared Services Components | TKG-Shared-Services | TKG-Shared-Services-VMS |
TKG Workload Components | TKG-Workload | TKG-Workload-VMS |
For the purpose of demonstration, this document uses the following subnet CIDRs for Tanzu for Kubernetes Operations deployment.
Network Type | Segment Name | Gateway CIDR | DHCP Pool | NSX Advanced Load Balancer IP Pool |
---|---|---|---|---|
NSX ALB Mgmt Network | NSX-ALB-Mgmt | 192.168.11.1/27 | 192.168.11.15 - 192.168.11.30 | NA |
TKG Management Network | TKG-Management | 192.168.12.1/24 | 192.168.12.2 - 192.168.12.251 | NA |
TKG Workload Network | TKG-Workload | 192.168.13.1/24 | 192.168.13.2 - 192.168.13.251 | NA |
TKG Cluster VIP Network | TKG-Cluster-VIP | 192.168.14.1/26 | 192.168.14.2 - 192.168.14.30 | 192.168.14.31 - 192.168.14.60 |
TKG Mgmt VIP Network | TKG-Management-VIP | 192.168.15.1/26 | 192.168.15.2 - 192.168.15.30 | 192.168.15.31 - 192.168.15.60 |
TKG Workload VIP Network | TKG-Workload-VIP | 192.168.16.1/26 | 192.168.16.2 - 192.168.16.30 | 192.168.16.31 - 192.168.16.60 |
TKG Shared Services Network | TKG-Shared-Service | 192.168.17.1/24 | 192.168.17.2 - 192.168.17.251 | NA |
The high-level steps for deploying Tanzu for Kubernetes Operations on VMware Cloud on AWS are as follows:
For the purpose of demonstration, this document describes how to deploy NSX Advanced Load Balancer as a cluster of three nodes. After the first node is deployed and configured, two more nodes are deployed to form the cluster.
The following IP addresses are reserved for NSX Advanced Load Balancer:
Controller Node | IP Address | FQDN |
---|---|---|
Node01 (Primary) | 192.168.11.11 | alb01.tanzu.lab |
Node02 (Secondary) | 192.168.11.12 | alb02.tanzu.lab |
Node03 (Secondary) | 192.168.11.13 | alb03.tanzu.lab |
Controller Cluster IP | 192.168.11.10 | alb.tanzu.lab |
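If DNS records for these names are not yet in place, the sample entries can be staged as hosts-file lines for lab use. The sketch below writes to a local hosts.example file purely as an illustration; in practice, append the lines to /etc/hosts on the bootstrap machine as root or, preferably, create proper DNS records:

```shell
# Lab-only fallback: stage the sample controller records from the table above.
# In a real environment, create forward and reverse DNS records instead.
cat >> hosts.example <<'EOF'
192.168.11.10 alb.tanzu.lab
192.168.11.11 alb01.tanzu.lab
192.168.11.12 alb02.tanzu.lab
192.168.11.13 alb03.tanzu.lab
EOF
```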
To deploy NSX Advanced Load Balancer controller nodes:
Log in to the vCenter server from the vSphere client.
Select the cluster where you want to deploy the NSX Advanced Load Balancer controller node.
Right-click the cluster and invoke the Deploy OVF Template wizard.
Follow the wizard to configure the following:
After the controller VM is deployed and powered on, connect to the URL for the node and configure the node for your Tanzu Kubernetes Grid environment as follows:
Create the administrator account by setting the password and optional email address.
Configure System Settings by specifying the backup passphrase and DNS information.
(Optional) Configure Email/SMTP
Configure Multi-Tenant settings as follows:
Click Save to complete the post-deployment configuration wizard.
If you did not select the Setup Cloud After option before saving, the initial configuration wizard exits. The Cloud configuration window does not automatically launch and you are directed to a Dashboard view on the controller.
To configure the No Orchestrator Cloud, navigate to the Infrastructure > Clouds tab.
Click Create and select No Orchestrator from the dropdown list.
Provide a name for the cloud, enable IPv4 DHCP under DHCP settings, and click Save.
After the cloud is created, ensure that the health status of the cloud is green.
Tanzu for Kubernetes Operations is bundled with a license for NSX Advanced Load Balancer Enterprise. To configure licensing, complete the following steps.
Navigate to Administration > Settings > Licensing and click the gear icon to change the license tier.
Select Enterprise as the license type and click Save.
Once the license tier is changed, apply the NSX Advanced Load Balancer Enterprise license key. If you have a license file instead of a license key, apply the license by selecting the Upload a License File option.
Configure NTP settings if you want to use an internal NTP server. To configure NTP settings, complete the following steps.
Navigate to Administration > Settings > DNS/NTP.
Edit the settings using the pencil icon to specify the NTP server that you want to use and save the settings.
In a production environment, VMware recommends that you deploy additional controller nodes and configure the controller cluster for high availability and disaster recovery. Adding two additional nodes to create a 3-node cluster provides node-level redundancy for the controller and also maximizes performance for CPU-intensive analytics functions.
To run a 3-node controller cluster, you deploy the first node, perform the initial configuration, and set the cluster IP address. After that, you deploy and power on two more Controller VMs, but you must not run the initial configuration wizard or change the admin password for these controller VMs. The configuration of the first controller VM is assigned to the two new controller VMs.
Repeat the steps provided in the Deploy NSX Advanced Load Balancer Controller section to deploy additional controllers.
To configure the controller cluster, navigate to the Administration > Controller > Nodes page and click Edit.
Specify the name for the controller cluster and set the Cluster IP. This IP address should be from the NSX Advanced Load Balancer management network.
Under Cluster Nodes, specify the IP addresses of the two additional controllers that you have deployed. Optionally, you can configure the name for the controllers.
Click Save to complete the cluster configuration.
After you click Save, the controller cluster setup starts, and the controller nodes are rebooted in the process. It takes approximately 10-15 minutes for cluster formation to complete.
You are automatically logged out of the controller node where you are currently logged in. On entering the cluster IP address in the browser, you can see details about the cluster formation task.
Note: Once the controller cluster is deployed, you must use the IP address of the controller cluster, not the IP address of the individual controller node, for any further configuration.
Connect to the NSX Advanced Load Balancer controller cluster IP/FQDN and ensure that all controller nodes are in a healthy state.
The first controller of the cluster receives the “Leader” role. The second and third controllers work as “Followers”.
The controller must send a certificate to clients to establish secure communication. This certificate must have a Subject Alternative Name (SAN) that matches the NSX Advanced Load Balancer controller cluster hostname or IP address.
The controller has a default self-signed certificate, but this certificate does not have the correct SAN. You must replace it with a valid or self-signed certificate that has the correct SAN. You can create a self-signed certificate or upload a CA-signed certificate.
For the purpose of the demonstration, this document uses a self-signed certificate.
To replace the default certificate, navigate to the Templates > Security > SSL/TLS Certificate > Create and select Controller Certificate.
In the New Certificate (SSL/TLS) window, enter a name for the certificate and set the type to Self Signed.
Enter the following details:
Click Save to save the certificate.
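For reference, an equivalent self-signed certificate can be produced outside the controller UI, for example to pre-stage or inspect the SAN contents. The sketch below assumes OpenSSL 1.1.1 or later and uses the sample controller cluster FQDN and IP from the table in this document; the controller UI flow above remains the documented path:

```shell
# Generate a self-signed certificate whose SAN matches the controller
# cluster FQDN and IP (sample values from this document).
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout alb-cert.key -out alb-cert.pem \
  -subj "/CN=alb.tanzu.lab" \
  -addext "subjectAltName=DNS:alb.tanzu.lab,IP:192.168.11.10"

# Confirm the SAN entries.
openssl x509 -in alb-cert.pem -noout -ext subjectAltName

# Note the SHA-256 fingerprint (thumbprint) of the certificate.
openssl x509 -in alb-cert.pem -noout -fingerprint -sha256
```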
To change the NSX Advanced Load Balancer portal certificate, navigate to the Administration > Settings > Access Settings page and click the pencil icon to edit the settings.
Under SSL/TLS Certificate, remove the existing default certificates. From the drop-down menu, select the newly created certificate and click Save.
Refresh the controller portal from the browser and accept the newly created self-signed certificate. Ensure that the certificate reflects the updated information in the browser.
After the certificate is created, export the certificate thumbprint. The thumbprint is required later when you configure the Tanzu Kubernetes Grid management cluster. To export the certificate, complete the following steps.
Navigate to the Templates > Security > SSL/TLS Certificate page and export the certificate by clicking Export.
In the Export Certificate page, click Copy to clipboard next to the certificate. Do not copy the key. Save the copied certificate to use later when you enable workload management.
Tanzu for Kubernetes Operations deployment is based on the use of distinct service engine (SE) groups for the Tanzu Kubernetes Grid management and workload clusters. The service engines for the management cluster are deployed in the Tanzu Kubernetes Grid management SE group, and the service engines for Tanzu Kubernetes Grid workload clusters are deployed in the Tanzu Kubernetes Grid workload SE group.
TKG-Mgmt-SEG: Service engines in this SE group host virtual services for the Tanzu Kubernetes Grid management and shared services clusters.
TKG-WLD01-SEG: Service engines in this SE group host virtual services that load balance control plane nodes and virtual services for all load balancer functionalities requested by the workload clusters mapped to this SE group.
Note:
To create and configure a new SE group, complete the following steps.
Go to Infrastructure > Service Engine Group under Cloud Resources and click Create.
Provide a name for the SE group and configure the following settings:
Repeat the steps to create an SE group for the Tanzu Kubernetes Grid workload cluster. You should have created two service engine groups.
As per the reference architecture, Tanzu for Kubernetes Operations deployment makes use of three VIP networks:
Note: You can provision additional VIP networks to separate network traffic between applications deployed in various workload clusters. This is a day-2 operation.
To create and configure the VIP networks, complete the following steps.
Go to the Infrastructure > Networks tab under Cloud Resources and click Create. Check that the VIP networks are being created under the correct cloud.
Provide a name for the VIP network and uncheck the DHCP Enabled and IPv6 Auto-Configuration options.
Click Add Subnet and configure the following:
Click Save to continue.
Click Save again to finish the network configuration.
Repeat the steps to create additional VIP networks.
After configuring the VIP networks, set the default routes for all VIP/data networks. The following table lists the default routes used in the current environment.
Network Name | Gateway Subnet | Next Hop |
---|---|---|
TKG-Cluster-VIP | 0.0.0.0/0 | 192.168.14.1 |
TKG-Management-VIP | 0.0.0.0/0 | 192.168.15.1 |
TKG-Workload-VIP | 0.0.0.0/0 | 192.168.16.1 |
Note: Change the next hop addresses to match your network configuration.
Go to the Routing page and click Create.
Add default routes for the VIP networks.
Repeat the steps to configure additional routing. A total of three default gateways are configured.
IPAM is required to allocate virtual IP addresses when virtual services are created. NSX Advanced Load Balancer provides IPAM services for the Tanzu Kubernetes Grid cluster VIP, management VIP, and workload VIP networks.
To create an IPAM profile, complete the following steps.
Navigate to the Templates > Profiles > IPAM/DNS Profiles page, click Create, and select IPAM Profile.
Create the profile using the values shown in the following table.
Parameter | Value |
---|---|
Name | ALB-TKG-IPAM |
Type | AVI Vantage IPAM |
Cloud for Usable Networks | tkg-vmc |
Usable Networks | TKG-Cluster-VIP TKG-Management-VIP TKG-Workload-VIP |
Click Save to finish the IPAM creation wizard.
To create a DNS profile, click Create again and select DNS Profile.
The newly created IPAM and DNS profiles must be associated with the cloud so they can be leveraged by the NSX Advanced Load Balancer objects created under that cloud.
To assign the IPAM and DNS profile to the cloud, go to the Infrastructure > Cloud page and edit the cloud configuration.
Under IPAM Profile, select the IPAM profile.
Under DNS Profile, select the DNS profile and save the settings.
After configuring the IPAM and DNS profiles, verify that the status of the cloud is green.
Deploying a service engine is a manual process in a VMC on AWS environment because NSX Advanced Load Balancer is deployed in no-orchestrator mode. In this mode, NSX Advanced Load Balancer does not have access to the ESXi management plane, which is required for automated service engine deployment.
To download the service engine image for deployment, navigate to the Infrastructure > Clouds tab, select your cloud, click the download icon, and select type as OVA.
Wait a few minutes for the image generation task to finish. When the task finishes, the resulting image file is downloaded immediately.
You can use the downloaded OVA file directly to create a service engine VM, but bear in mind that this approach requires you to upload the image to vCenter every time you need to create a new service engine VM.
For faster deployment, import the service engine OVA image into the content library and use the “deploy from template” wizard to create new service engine VMs.
Before deploying a service engine VM, you must obtain a cluster UUID and generate an authentication token. A cluster UUID facilitates integrating the service engine with NSX Advanced Load Balancer Controller. Authentication between the two is performed via an authentication token.
To generate a cluster UUID and auth token, navigate to Infrastructure > Clouds and click the key icon in front of the cloud that you have created. This opens a new popup window containing both the cluster UUID and the auth token.
Note: You need a new auth token every time a new Service Engine instance is deployed.
To deploy a service engine VM, log in to the vSphere client and navigate to Menu > Content Library > Your Content Library. Navigate to the Templates tab and select the service engine template, right-click it, and choose New VM from this template.
Follow the VM creation wizard. On the networks page, select the management and data networks for the SE VM.
The Management network label is mapped to the NSX Advanced Load Balancer management logical segment. The remaining network labels (Data Network 1–9) are connected to the front-end virtual service networks or back-end server logical networks as required; leave a label disconnected if it is not needed.
The service engine for the Tanzu Kubernetes Grid management cluster is connected to the following networks:
Provide the cluster UUID and authentication token that you generated earlier on the Customize template page. Configure the service engine VM management network settings as well.
Repeat the steps to deploy an additional service engine VM for the Tanzu Kubernetes Grid management cluster.
By default, service engine VMs are created in the default Service Engine Group.
To map the service engine VMs to the correct Service Engine Group, go to the Infrastructure > Service Engine tab, select your cloud, and click the pencil icon to update the settings and link each service engine to the correct SE group.
Repeat the step for all service engine VMs.
On the Service Engine Group page, you can confirm the association of service engines with Service Engine Groups.
Service engine VMs deployed for Tanzu Kubernetes Grid workload cluster are connected to the following networks:
You need to deploy service engine VMs with the above settings.
After deploying the service engines, edit the service engine VMs and associate them with the TKG-WLD01-SEG Service Engine Group.
The NSX Advanced Load Balancer configuration is complete.
Deployment of the Tanzu Kubernetes Grid management and workload clusters is performed from a bootstrap machine, where you install the Tanzu CLI and kubectl utilities used to create and manage the Tanzu Kubernetes Grid instance. This machine also holds the Tanzu Kubernetes Grid and Kubernetes configuration files for your deployments.
The bootstrap machine runs a local `kind` cluster when the Tanzu Kubernetes Grid management cluster deployment is started. Once the `kind` cluster is fully initialized, its configuration is used to deploy the actual management cluster on the backend infrastructure. After the management cluster is fully configured, the local `kind` cluster is deleted and subsequent configuration is performed via the Tanzu CLI.
To deploy the Tanzu Kubernetes Grid instance, you must first import the supported version of the Kubernetes OVA into your vCenter server and convert the imported OVA into a template. This template is used by the Tanzu Kubernetes Grid installer to deploy the management and workload clusters.
For importing an OVA template in vCenter, see Deploy an OVF or OVA Template.
To learn more about the supported Kubernetes version with Tanzu Kubernetes Grid 1.6.0, see the Tanzu Kubernetes Grid Release Notes.
You can download the supported Kubernetes templates for Tanzu Kubernetes Grid 1.6.0 from the VMware customer connect portal.
Download the following items from the portal:
In the VMC on AWS environment, the bootstrap machine must be a cloud VM, not a local machine, and should meet the following prerequisites:
Note: For the purpose of demonstration, this document uses a CentOS 7 VM deployed in the VMC SDDC and attached to the logical segment designated for the Tanzu Kubernetes Grid management cluster as the bootstrap machine.
To use the Tanzu Kubernetes Grid installation binaries, upload the Tanzu CLI and kubectl binaries to the bootstrap machine using WinSCP or a similar utility and unpack them using system utilities such as `tar`, `unzip`, or `gunzip`.
After you unpack the Tanzu CLI bundle file, a `cli` folder with multiple subfolders and files is created. Use the following commands to install the Tanzu CLI.
[root@tkg-bootstrapper ~]# tar -xvf tanzu-cli-bundle-linux-amd64.tar
[root@tkg-bootstrapper ~]# install cli/core/v0.25.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
At the command line, run the `tanzu version` command to check that the correct version of the Tanzu CLI is properly installed. After you have installed the Tanzu CLI, install the plugins related to Tanzu Kubernetes cluster management and feature operations by running the `tanzu plugin sync` command.
[root@tkg-bootstrapper ~]# tanzu plugin sync
Checking for required plugins...
Installing plugin 'login:v0.25.0'
Installing plugin 'management-cluster:v0.25.0'
Installing plugin 'package:v0.25.0'
Installing plugin 'pinniped-auth:v0.25.0'
Installing plugin 'secret:v0.25.0'
Installing plugin 'telemetry:v0.25.0'
Successfully installed all required plugins
✔ Done
After a successful installation, run the `tanzu plugin list` command to validate that the status of the plugins shows as installed.
Run the following commands to install the `kubectl` utility:
[root@tkg-bootstrapper ~]# gunzip kubectl-linux-v1.23.8+vmware.2.gz
[root@tkg-bootstrapper ~]# mv kubectl-linux-v1.23.8+vmware.2 kubectl
[root@tkg-bootstrapper ~]# chmod +x kubectl
[root@tkg-bootstrapper ~]# mv kubectl /usr/local/bin/
After installing `kubectl`, run the `kubectl version` command to validate that `kubectl` is working and that the version reports as v1.23.8.
An SSH key pair is required for Tanzu CLI to connect to vSphere from the bootstrap machine. The public key part of the generated key is passed during the deployment of the Tanzu Kubernetes Grid management cluster.
To generate a new SSH key pair, execute the `ssh-keygen` command as shown below:
[root@tkg-bootstrapper ~]# ssh-keygen -t rsa -b 4096 -C "[email protected]"
You are prompted to enter the file in which to save the key. Press Enter to accept the default.
Enter and repeat a password for the key pair.
Add the private key to the SSH agent running on your machine and enter the password you created in the previous step.
[root@tkg-bootstrapper ~]# ssh-add ~/.ssh/id_rsa
If the `ssh-add` command fails, execute `eval $(ssh-agent)` and then re-run the `ssh-add` command.
Make a note of the public key in the file $HOME/.ssh/id_rsa.pub. You need it when you create the configuration file for deploying the Tanzu Kubernetes Grid management cluster.
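If you script the bootstrap setup, the same key pair can also be generated non-interactively. The file name tkg-id_rsa below is illustrative, and the passphrase is left empty here, unlike the interactive flow above:

```shell
# Non-interactive key generation: -N "" sets an empty passphrase and
# -f writes to an explicit path instead of prompting.
ssh-keygen -t rsa -b 4096 -C "[email protected]" -N "" -f ./tkg-id_rsa

# The public key to paste into the management cluster configuration:
cat ./tkg-id_rsa.pub
```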
Tanzu Kubernetes Grid uses the following tools from the Carvel open-source project:
ytt - a command-line tool for templating and patching YAML files. You can also use ytt to collect fragments and piles of YAML into modular chunks for easy re-use.
kapp - the application deployment CLI for Kubernetes. It allows you to install, upgrade, and delete multiple Kubernetes resources as one application.
kbld - an image-building and resolution tool.
imgpkg - a tool that enables Kubernetes to store configurations and the associated container images as OCI images, and to transfer these images.
Navigate to the location on your bootstrap machine where you unpacked the Tanzu CLI bundle tar file, change to the `cli` subfolder, and run the following commands to install and verify `ytt`.
[root@tkg-bootstrapper ~]# cd cli
[root@tkg-bootstrapper cli]# gunzip ytt-linux-amd64-v0.41.1+vmware.1.gz
[root@tkg-bootstrapper cli]# chmod +x ytt-linux-amd64-v0.41.1+vmware.1
[root@tkg-bootstrapper cli]# mv ytt-linux-amd64-v0.41.1+vmware.1 /usr/local/bin/ytt
Check the `ytt` version:
[root@tkg-bootstrapper cli]# ytt version
ytt version 0.41.1
Run the following commands to install `kapp`:

[root@tkg-bootstrapper cli]# gunzip kapp-linux-amd64-v0.49.0+vmware.1.gz
[root@tkg-bootstrapper cli]# chmod +x kapp-linux-amd64-v0.49.0+vmware.1
[root@tkg-bootstrapper cli]# mv kapp-linux-amd64-v0.49.0+vmware.1 /usr/local/bin/kapp
Check the `kapp` version:
[root@tkg-bootstrapper cli]# kapp version
kapp version 0.49.0
Run the following commands to install `kbld`:

[root@tkg-bootstrapper cli]# gunzip kbld-linux-amd64-v0.34.0+vmware.1.gz
[root@tkg-bootstrapper cli]# chmod +x kbld-linux-amd64-v0.34.0+vmware.1
[root@tkg-bootstrapper cli]# mv kbld-linux-amd64-v0.34.0+vmware.1 /usr/local/bin/kbld
Check the `kbld` version:
[root@tkg-bootstrapper cli]# kbld version
kbld version 0.34.0
Run the following commands to install `imgpkg`:

[root@tkg-bootstrapper cli]# gunzip imgpkg-linux-amd64-v0.29.0+vmware.1.gz
[root@tkg-bootstrapper cli]# chmod +x imgpkg-linux-amd64-v0.29.0+vmware.1
[root@tkg-bootstrapper cli]# mv imgpkg-linux-amd64-v0.29.0+vmware.1 /usr/local/bin/imgpkg
Check the `imgpkg` version:
[root@tkg-bootstrapper cli]# imgpkg version
imgpkg version 0.29.0
Install `yq`, a lightweight and portable command-line YAML processor. `yq` uses `jq`-like syntax but works with both YAML and JSON files.
[root@tkg-bootstrapper cli]# wget https://github.com/mikefarah/yq/releases/download/v4.24.5/yq_linux_amd64.tar.gz
[root@tkg-bootstrapper cli]# tar -xvf yq_linux_amd64.tar.gz
[root@tkg-bootstrapper cli]# mv yq_linux_amd64 /usr/local/bin/yq
Check the `yq` version:
[root@tkg-bootstrapper ~]# yq --version
yq (https://github.com/mikefarah/yq/) version 4.24.5
You are now ready to deploy the Tanzu Kubernetes Grid management cluster.
The management cluster is a Kubernetes cluster that runs cluster API operations on a specific cloud provider to create and manage workload clusters on that provider. The management cluster is also where you configure the shared and in-cluster services that the workload clusters use.
You can deploy management clusters in two ways:
The Tanzu Kubernetes Grid installer wizard is an easy way to deploy the cluster. The following steps describe the process.
To launch the Tanzu Kubernetes Grid installer wizard, run the following command on the bootstrapper machine:
tanzu management-cluster create --ui --bind <bootstrapper-ip>:<port> --browser none
Access the Tanzu Kubernetes Grid installer wizard by opening a browser and entering http://<bootstrapper-ip>:<port>/
Note: Ensure that the port number that you enter in this command is allowed by the bootstrap machine firewall.
From the Tanzu Kubernetes Grid installation user interface, click Deploy for VMware vSphere.
On the IaaS Provider page, enter the IP/FQDN and credentials of the vCenter server where the Tanzu Kubernetes Grid management cluster is to be deployed and click Connect.
To ignore the vCenter SSL thumbprint, select Disable Verification.
If you are running a vSphere 7.x environment, the Tanzu Kubernetes Grid installer detects it and provides a choice between deploying vSphere with Tanzu (TKGS) or the Tanzu Kubernetes Grid management cluster.
Select the Deploy Tanzu Kubernetes Grid Management Cluster option.
Select the Virtual Datacenter and enter the SSH public key that you generated earlier.
On the Management Cluster Settings page, select the instance type for the control plane node and worker node and provide the following information:

- Control Plane Endpoint: Optional. If left blank, an IP address is assigned from the `TKG-Cluster-VIP` network, which is configured in NSX Advanced Load Balancer. If you need to provide an IP address, pick an unused IP address from the `TKG-Cluster-VIP` static IP pool.

On the NSX Advanced Load Balancer page, provide the following information:
Click Verify Credentials to select or configure the following:
Note: In Tanzu Kubernetes Grid v1.6, you can separate the endpoint VIP network of the cluster from the external IP network of the load balancer and ingress services in the cluster. This feature improves cluster security by letting you expose the endpoint of your management or workload cluster and the cluster's load balancer and ingress services on different networks.
As per the Tanzu for Kubernetes Operations 1.6 Reference Architecture, all control plane endpoints are connected to the Tanzu Kubernetes Grid cluster VIP network, and data plane services are connected to the respective management data VIP network or workload data VIP network.
- Cloud Name: Select `tkg-vmc`.
- Workload Cluster Service Engine Group Name: Select `TKG-WLD01-SEG`.
- Workload Cluster Data Plane VIP Network Name & CIDR: Select `TKG-Workload-VIP` and subnet `192.168.16.0/26`.
- Workload Cluster Control Plane VIP Network Name & CIDR: Select `TKG-Cluster-VIP` and subnet `192.168.14.0/26`.
- Management Cluster Service Engine Group Name: Select `TKG-Mgmt-SEG`.
- Management Cluster Data Plane VIP Network Name & CIDR: Select `TKG-Management-VIP` and subnet `192.168.15.0/26`.
- Management Cluster Control Plane VIP Network Name & CIDR: Select `TKG-Cluster-VIP` and subnet `192.168.14.0/26`.
- Cluster Labels: Optional. Leave the cluster labels section empty to apply the above workload cluster network settings by default. If you specify any labels here, you must specify the same values in the configuration YAML file of the workload cluster. Otherwise, the system places the endpoint VIP of your workload cluster in the Management Cluster Data Plane VIP Network by default.
Note: With the above configuration, all Tanzu workload clusters use `TKG-Cluster-VIP` for the control plane VIP network and `TKG-Workload-VIP` for the data plane network by default. If you want to configure separate VIP networks for workload control plane or data networks, create a custom AKO Deployment Config (ADC) and provide the respective `AVI_LABELS` in the workload cluster configuration file. For more information on network separation and custom ADC creation, see Configure Separate VIP Networks and Service Engine Groups in Different Workload Clusters.
On the Metadata page, you can specify location and labels.
On the Resources page, specify the compute containers for the Tanzu Kubernetes Grid management cluster deployment.
On the Kubernetes Network page, select the network where the control plane and worker nodes are placed during management cluster deployment. Ensure that the network has DHCP service enabled.
If the Tanzu environment is placed behind a proxy, enable the proxy and provide the proxy details.
Note: The procedure shown in this document does not use a proxy to connect to the Internet.
If LDAP is configured in your environment, see Configure Identity Management for instructions on how to integrate an identity management system with Tanzu Kubernetes Grid.
In this example, identity management integration is deactivated.
Select the OS image to use for the management cluster deployment.
Note: This list appears empty if no compatible template is present in your environment.
After you import the correct template and click Refresh, the installer detects the image automatically.
Optional: Select Participate in the Customer Experience Improvement Program.
Click Review Configuration to verify your configuration settings.
Note: Tanzu Kubernetes Grid 1.6 has a known issue where the installer UI populates an empty `AVI_LABELS` entry in the cluster configuration, which leads to management cluster creation failure. It is recommended to export the cluster configuration to a file, delete the empty label, and run the cluster creation command from the CLI instead of deploying the cluster from the UI.
When you click Review Configuration, the installer populates the cluster configuration file, which is located in the ~/.config/tanzu/tkg/clusterconfigs
subdirectory, with the settings that you specified in the interface. You can optionally export a copy of this configuration file by clicking Export Configuration.
Edit the cluster configuration file and remove the empty `AVI_LABELS` entry:
AVI_LABELS: |
'': ''
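One way to remove the empty label from the command line is shown below. This sketch assumes GNU sed and that the label appears exactly in the two-line form shown above; the file name cluster-config.yaml and the surrounding keys in the sample are illustrative only (the installer writes its file under ~/.config/tanzu/tkg/clusterconfigs/):

```shell
# Create an illustrative configuration file containing the empty label.
# In practice, operate on the file the installer generated.
cat > cluster-config.yaml <<'EOF'
CLUSTER_NAME: tkg-mgmt
AVI_LABELS: |
  '': ''
AVI_ENABLE: "true"
EOF

# Delete the AVI_LABELS line and the empty label line after it (GNU sed).
sed -i '/^AVI_LABELS:/,+1d' cluster-config.yaml
cat cluster-config.yaml
```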
Deploy the management cluster from the configuration file by running the following command, replacing the file name with that of your exported configuration file:
tanzu management-cluster create -f t4uv9zk25b.yaml -v 6
When the deployment is started from the UI, the installer wizard displays the deployment logs on the screen.
Deploying the management cluster takes approximately 20-30 minutes to complete. While the management cluster is being deployed, a virtual service is created in NSX Advanced Load Balancer and placed on one of the service engines created in the “TKG-Mgmt-SEG” SE Group.
The installer automatically sets the context to the management cluster so that you can log in to it and perform additional tasks such as verifying health of the management cluster and deploying the workload clusters.
After the Tanzu Kubernetes Grid management cluster deployment, run the following command to verify the health status of the cluster:
tanzu management-cluster get
Ensure that the cluster status reports as `running` and that the values in the `Ready` column for nodes and other components are `True`.
See Examine the Management Cluster Deployment to perform additional health checks.
When deployment is completed successfully, run the following command to install the additional Tanzu plugins:
[root@tkg-bootstrapper ~]# tanzu plugin sync
Checking for required plugins...
Installing plugin 'cluster:v0.25.0'
Installing plugin 'kubernetes-release:v0.25.0'
Successfully installed all required plugins
✔ Done
After the management cluster is deployed, you must register the management cluster with Tanzu Mission Control and other SaaS products. You can deploy the Tanzu Kubernetes clusters and Tanzu packages directly from the Tanzu Mission Control portal. Refer to the Integrate Tanzu Kubernetes Clusters with SaaS Endpoints page for instructions.
Tanzu Kubernetes Grid v1.6.x management clusters with NSX Advanced Load Balancer are deployed with two AKODeploymentConfigs (ADCs):

- install-ako-for-management-cluster: the default configuration for the management cluster.
- install-ako-for-all: the default configuration for all workload clusters. By default, all workload clusters reference this file for their virtual IP networks and service engine (SE) groups. This ADC configuration does not enable NSX L7 Ingress by default.

As per this Tanzu deployment, create two more ADCs:

- tanzu-ako-for-shared: used by the shared services cluster to deploy the virtual services in the TKG Mgmt SE Group and the load balancer applications in the TKG Management VIP Network.
- tanzu-ako-for-workload-L7-ingress: use this ADC only if you want to enable NSX Advanced Load Balancer L7 Ingress on the workload cluster; otherwise, leave the cluster labels empty to apply the network configuration from the default ADC install-ako-for-all.
As per the defined architecture, the shared services cluster uses the same control plane and data plane networks as the management cluster. The shared services cluster control plane endpoint uses the TKG Cluster VIP Network, application load balancing uses the TKG Management Data VIP network, and the virtual services are deployed in the TKG-Mgmt-SEG SE group. This configuration is enforced by creating a custom AKO Deployment Config (ADC) and applying the respective AVI_LABELS while deploying the shared services cluster.
The format of the AKODeploymentConfig YAML file is as follows.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  finalizers:
    - ako-operator.networking.tkg.tanzu.vmware.com
  generation: 2
  name: <Unique name of AKODeploymentConfig>
spec:
  adminCredentialRef:
    name: nsx-alb-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: nsx-alb-controller-ca
    namespace: tkg-system-networking
  cloudName: <NAME OF THE CLOUD in ALB>
  clusterSelector:
    matchLabels:
      <KEY>: <VALUE>
  controlPlaneNetwork:
    cidr: <TKG-Cluster-VIP-CIDR>
    name: <TKG-Cluster-VIP-Network>
  controller: <NSX ALB CONTROLLER IP/FQDN>
  dataNetwork:
    cidr: <TKG-Mgmt-Data-VIP-CIDR>
    name: <TKG-Mgmt-Data-VIP-Name>
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: true
    ingress:
      defaultIngressController: false
      disableIngressClass: true
      nodeNetworkList:
        - networkName: <TKG-Mgmt-Network>
  serviceEngineGroup: <Mgmt-Cluster-SEG>
The sample AKODeploymentConfig with sample values in place is as follows. You should add the corresponding AVI label type=shared while deploying the shared services cluster to enforce this network configuration.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  generation: 2
  name: tanzu-ako-for-shared
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: tkg-vmc
  clusterSelector:
    matchLabels:
      type: shared
  controlPlaneNetwork:
    cidr: 192.168.14.0/26
    name: TKG-Cluster-VIP
  controller: 192.168.11.10
  dataNetwork:
    cidr: 192.168.15.0/26
    name: TKG-Management-VIP
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: true
    ingress:
      defaultIngressController: false
      disableIngressClass: true
      nodeNetworkList:
        - networkName: TKG-Management
  serviceEngineGroup: TKG-Mgmt-SEG
After you have the AKO configuration file ready, use the kubectl command to set the context to the Tanzu Kubernetes Grid management cluster and create the ADC:
# kubectl config use-context tkg149-mgmt-vmc-admin@tkg149-mgmt-vmc
Switched to context "tkg149-mgmt-vmc-admin@tkg149-mgmt-vmc".
# kubectl apply -f ako-shared-services.yaml
akodeploymentconfig.networking.tkg.tanzu.vmware.com/tanzu-ako-for-shared created
Use the following command to list all AKODeploymentConfigs created under the management cluster:
# kubectl get adc
NAME                                 AGE
install-ako-for-all                  21h
install-ako-for-management-cluster   21h
tanzu-ako-for-shared                 113s
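If you want to script this verification, the check can be expressed as a small parse of the listing. This sketch runs against an embedded copy of the output so it works offline; on the live bootstrap machine you would pipe `kubectl get adc` instead:

```shell
# Sketch: confirm that a newly applied ADC appears in the 'kubectl get adc' listing.
# Embedded sample listing stands in for live command output.
adc_listing='NAME                                 AGE
install-ako-for-all                  21h
install-ako-for-management-cluster   21h
tanzu-ako-for-shared                 113s'

if printf '%s\n' "$adc_listing" | awk '{ print $1 }' | grep -qx 'tanzu-ako-for-shared'; then
  found=yes
  echo "ADC tanzu-ako-for-shared is registered"
else
  found=no
fi
```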
VMware recommends using NSX Advanced Load Balancer L7 ingress with NodePortLocal mode for the L7 application load balancing. This is enabled by creating a custom ADC with ingress settings enabled, and then applying the AVI_LABEL while deploying the workload cluster.
As per the defined architecture, the workload cluster control plane endpoint uses the TKG Cluster VIP Network, application load balancing uses the TKG Workload Data VIP network, and the virtual services are deployed in the TKG-WLD01-SEG SE group.
Below are the changes in the ADC ingress section compared to the default ADC:

- disableIngressClass: set to false to enable NSX Advanced Load Balancer L7 Ingress.
- nodeNetworkList: provide the values for the Tanzu Kubernetes Grid workload network name and CIDR.
- serviceType: the L7 ingress type. NodePortLocal is recommended.
- shardVSSize: the virtual service size.
The format of the AKODeploymentConfig YAML file for enabling NSX Advanced Load Balancer L7 Ingress is as follows.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: <unique-name-for-adc>
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: <cloud name configured in nsx alb>
  clusterSelector:
    matchLabels:
      <KEY>: <value>
  controller: <ALB-Controller-IP/FQDN>
  controlPlaneNetwork:
    cidr: <TKG-Cluster-VIP-Network-CIDR>
    name: <TKG-Cluster-VIP-Network-Name>
  dataNetwork:
    cidr: <TKG-Workload-VIP-Network-CIDR>
    name: <TKG-Workload-VIP-Network-Name>
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: false        # required
    ingress:
      disableIngressClass: false         # required
      nodeNetworkList:                   # required
        - networkName: <TKG-Workload-Network>
          cidrs:
            - <TKG-Workload-Network-CIDR>
      serviceType: NodePortLocal         # required
      shardVSSize: MEDIUM                # required
  serviceEngineGroup: <Workload-Cluster-SEG>
The AKODeploymentConfig with sample values in place is as follows. You should add the corresponding AVI label type=wkld01-l7 while deploying the workload cluster to enforce this network configuration.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: tanzu-ako-for-workload-l7-ingress
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: tkg-vmc
  clusterSelector:
    matchLabels:
      type: wkld01-l7
  controlPlaneNetwork:
    cidr: 192.168.14.0/26
    name: TKG-Cluster-VIP
  controller: 192.168.11.10
  dataNetwork:
    cidr: 192.168.16.0/26
    name: TKG-Workload-VIP
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: false
    ingress:
      disableIngressClass: false
      nodeNetworkList:
        - cidrs:
            - 192.168.13.0/24
          networkName: TKG-Workload
      serviceType: NodePortLocal
      shardVSSize: MEDIUM
  serviceEngineGroup: TKG-WLD01-SEG
Use the kubectl command to set the context to the Tanzu Kubernetes Grid management cluster and create the ADC:
# kubectl config use-context tkg149-mgmt-vmc-admin@tkg149-mgmt-vmc
Switched to context "tkg149-mgmt-vmc-admin@tkg149-mgmt-vmc".
# kubectl apply -f workload-adc-l7.yaml
akodeploymentconfig.networking.tkg.tanzu.vmware.com/tanzu-ako-for-workload-l7-ingress created
Use the following command to list all AKODeploymentConfigs created under the management cluster:
# kubectl get adc
NAME                                 AGE
install-ako-for-all                  22h
install-ako-for-management-cluster   22h
tanzu-ako-for-shared                 82m
tanzu-ako-for-workload-l7-ingress    25s
Now that you have successfully created the AKO deployment config, you need to apply the cluster labels while deploying the workload clusters to enable NSX Advanced Load Balancer L7 Ingress with NodePortLocal mode.
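If you provision the workload cluster with the Tanzu CLI instead of Tanzu Mission Control, the same label can be supplied through the cluster configuration file. The following is a minimal, illustrative fragment (the cluster name is hypothetical); the label value must match the clusterSelector of the tanzu-ako-for-workload-l7-ingress ADC:

```yaml
# Fragment of a workload cluster configuration file (CLI deployment, illustrative).
# AVI_LABELS must match the matchLabels of the tanzu-ako-for-workload-l7-ingress ADC.
CLUSTER_NAME: tkg-wld01-vmc
AVI_LABELS: |
    'type': 'wkld01-l7'
```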
A shared services cluster is just a Tanzu Kubernetes Grid workload cluster used for shared services. It can be provisioned using the standard CLI command tanzu cluster create, or through Tanzu Mission Control. Each Tanzu Kubernetes Grid instance can have only one shared services cluster.
Note: This document demonstrates the deployment of shared services and workload clusters through Tanzu Mission Control.
The procedure for deploying a shared services cluster is essentially the same as the procedure for deploying a workload cluster. The only difference is that you add a tanzu-services label to the shared services cluster to indicate its cluster role. This label identifies the shared services cluster to the management cluster and workload clusters. The shared services cluster uses the custom ADC tanzu-ako-for-shared created earlier to apply network settings similar to those of the management cluster. This is enforced by applying the AVI_LABEL type:shared while deploying the shared services cluster.
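If you deploy the shared services cluster from the Tanzu CLI instead of Tanzu Mission Control, the label can be supplied through the cluster configuration file. The following is a minimal, illustrative fragment; the cluster name tkg-ss-vmc is the sample used later in this section, and the label value must match the clusterSelector of the tanzu-ako-for-shared ADC:

```yaml
# Fragment of a cluster configuration file (CLI deployment, illustrative).
# AVI_LABELS must match the matchLabels of the tanzu-ako-for-shared ADC.
CLUSTER_NAME: tkg-ss-vmc
AVI_LABELS: |
    'type': 'shared'
```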
To deploy a shared services cluster, navigate to the Clusters tab and click Create Cluster.
On the Create cluster page, select the Tanzu Kubernetes Grid management cluster that you registered in the previous step and click Continue to create cluster.
Select the provisioner for creating the shared services cluster.
Enter a name for the cluster. Cluster names must be unique within an organization.
Select the cluster group to which you want to attach your cluster. Optionally, enter a description and apply labels.
On the Configure page, specify the following:
Note: This document doesn’t cover using a proxy server with Tanzu Kubernetes Grid. If your environment uses a proxy server to connect to the internet, ensure that the proxy configuration object includes the CIDRs for the pod, ingress, and egress from the workload network of the Supervisor Cluster in the No proxy list, as described here.
Specify the placement containers such as Resource pool, VM Folder and datastore for the shared services cluster.
Select the High Availability mode for the control plane nodes of the workload cluster. For a production deployment, a highly available workload cluster is recommended.
The control plane endpoint and API server port options are retrieved from the management cluster and are not customizable here.
You can optionally define the default node pool for your workload cluster.
Click Create Cluster to start provisioning your workload cluster.
Cluster creation takes roughly 15-20 minutes to complete. After the cluster deployment completes, ensure that the Agent and extensions health shows green.
After the shared services cluster is deployed, run the following commands to apply the labels to the cluster.
Switch to the management cluster context.
kubectl config use-context tkg149-mgmt-vmc-admin@tkg149-mgmt-vmc
Apply the tanzu-services label to update the cluster role.
kubectl label cluster.cluster.x-k8s.io/<shared-services-cluster-name> cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
Example:
kubectl label cluster.cluster.x-k8s.io/tkg-ss-vmc cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
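To confirm the role label landed on the cluster object, you can inspect the cluster labels. This sketch greps an embedded, illustrative sample of `kubectl get clusters.cluster.x-k8s.io --show-labels` output so it runs offline; on the live management cluster you would pipe the real command instead:

```shell
# Sketch: verify the shared services cluster carries the tanzu-services role label.
# Embedded sample listing stands in for live 'kubectl get ... --show-labels' output.
listing='NAME         PHASE         LABELS
tkg-ss-vmc   Provisioned   cluster-role.tkg.tanzu.vmware.com/tanzu-services=,type=shared'

if printf '%s\n' "$listing" | grep -q 'cluster-role\.tkg\.tanzu\.vmware\.com/tanzu-services'; then
  labeled=yes
  echo "tanzu-services label present on tkg-ss-vmc"
else
  labeled=no
fi
```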
The steps for deploying a workload cluster are almost exactly the same as for a shared services cluster, except that the names of the cluster and the placement containers (resource pools, VM folder, network, etc.) are different.
For instructions on enabling Tanzu Observability on your workload cluster, see Set up Tanzu Observability to Monitor Tanzu Kubernetes Clusters.
For instructions on installing Tanzu Service Mesh on your workload cluster, see Onboard a Tanzu Kubernetes Cluster to Tanzu Service Mesh.
For instructions on installing user-managed packages on the Tanzu Kubernetes clusters, see Deploy User-Managed Packages in Workload Clusters.