This document provides step-by-step instructions for deploying Tanzu Kubernetes Operations in an Internet-available vSphere environment backed by NSX-T Data Center networking.
The scope of the document is limited to providing deployment steps based on the reference design in VMware Tanzu for Kubernetes Operations on vSphere with NSX-T. It does not cover deployment procedures for the underlying SDDC components.
You can use VMware Service Installer for VMware Tanzu to automate this deployment.
VMware Service Installer for Tanzu automates the deployment of the reference designs for Tanzu for Kubernetes Operations. It uses best practices for deploying and configuring the required Tanzu for Kubernetes Operations components.
To use Service Installer to automate this deployment, see Deploying VMware Tanzu for Kubernetes Operations on vSphere with NSX-T Using Service Installer for VMware Tanzu.
Alternatively, if you decide to manually deploy each component, follow the steps provided in this document.
The validated Bill of Materials that can be used to install Tanzu Kubernetes Grid on your vSphere with NSX environment is as follows:
Software Components | Version |
---|---|
Tanzu Kubernetes Grid | 1.6.0 |
VMware vSphere ESXi | 7.0 U3d and later |
VMware vCenter (VCSA) | 7.0 U3d and later |
NSX Advanced Load Balancer | 21.1.4 |
VMware NSX-T | 3.1.2 |
For up-to-date information about which software versions can be used together, see the Interoperability Matrix.
Before deploying Tanzu for Kubernetes Operations on vSphere, ensure that your environment is set up as described in the following requirements:
Ensure that the following NSX-T configurations are in place:
Note: The following provides only a high-level overview of the required NSX-T configuration. For more information, see NSX-T Data Center Installation Guide and NSX-T Data Center Product Documentation.
Note: You can also download supported older versions of Kubernetes from VMware Customer Connect and import them to deploy workload clusters on the intended Kubernetes versions.
Resource Pools and VM Folders:
The sample entries of the resource pools and folders that need to be created are as follows.
Resource Type | Sample Resource Pool Name | Sample Folder Name |
---|---|---|
NSX ALB Components | NSX-ALB | NSX-ALB-VMs |
TKG Management components | TKG-Mgmt | TKG-Mgmt-VMs |
TKG Shared Service Components | TKG-Shared-SVC | TKG-SS-VMs |
TKG Workload components | TKG-WLD | TKG-Workload-VMs |
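If you prefer to create the resource pools and folders from the command line instead of the vSphere Client, the following is a minimal sketch using the govc CLI (assuming govc is installed). The datacenter name Tanzu-DC and cluster name Tanzu-CL01 are the values used later in this document's sample configuration; the credentials are placeholders, so adjust all values to match your environment.

## Set connection details for govc (placeholder credentials)
export GOVC_URL=https://tanzu-vc01.tanzu.lab
export GOVC_USERNAME='<vcenter-username>'
export GOVC_PASSWORD='<vcenter-password>'
export GOVC_INSECURE=true

## Create resource pools under the compute cluster
govc pool.create /Tanzu-DC/host/Tanzu-CL01/Resources/NSX-ALB
govc pool.create /Tanzu-DC/host/Tanzu-CL01/Resources/TKG-Mgmt
govc pool.create /Tanzu-DC/host/Tanzu-CL01/Resources/TKG-Shared-SVC
govc pool.create /Tanzu-DC/host/Tanzu-CL01/Resources/TKG-WLD

## Create VM folders
govc folder.create /Tanzu-DC/vm/NSX-ALB-VMs
govc folder.create /Tanzu-DC/vm/TKG-Mgmt-VMs
govc folder.create /Tanzu-DC/vm/TKG-SS-VMs
govc folder.create /Tanzu-DC/vm/TKG-Workload-VMs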
Create separate logical segments in NSX-T for deploying TKO components as per Network Requirements defined in the reference architecture.
Ensure that the firewall is set up as described in Firewall Requirements.
For this demonstration, this document makes use of the following subnet CIDRs for the Tanzu for Kubernetes Operations deployment.
Network Type | Segment Name | Gateway CIDR | DHCP Pool in NSXT | NSX ALB IP Pool |
---|---|---|---|---|
NSX ALB Management Network | alb-mgmt-ls | 172.19.71.1/27 | N/A | 172.19.71.6 - 172.19.71.30 |
TKG Cluster VIP Network | tkg-cluster-vip | 172.19.75.1/26 | N/A | 172.19.75.2 - 172.19.75.60 |
TKG Management Network | tkg-mgmt-ls | 172.19.72.1/27 | 172.19.72.2 - 172.19.72.30 | N/A |
TKG Shared Service Network | tkg-ss-ls | 172.19.73.1/27 | 172.19.73.2 - 172.19.73.30 | N/A |
TKG Workload Network | tkg-workload-ls | 172.19.77.1/24 | 172.19.77.2 - 172.19.77.251 | N/A |
The steps for deploying Tanzu for Kubernetes Operations on vSphere backed by NSX-T are as follows:
As a prerequisite, an NSX-T backed vSphere environment must be configured with at least one tier-0 gateway. A tier-0 gateway performs the functions of a tier-0 logical router. It processes traffic between the logical and physical networks. For more information on creating and configuring a tier-0 gateway, see NSX-T documentation.
This procedure comprises the following tasks:
The tier-1 logical router must be connected to the tier-0 logical router to get the northbound physical router access. The following procedure provides the minimum required configuration to create a tier-1 gateway, which is adequate to successfully deploy the Tanzu for Kubernetes Operations stack. For a more advanced configuration, see NSX-T documentation.
Click Route Advertisement and ensure that the following routes are selected:
Click Save.
Complete the following steps to set the DHCP configuration in the tier-1 gateway:
Next to DHCP Config, click Set.
In the Set DHCP Configuration dialog box, set Type to DHCP Server and select the DHCP profile that you created as part of the prerequisites.
Click Save.
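The same tier-1 gateway configuration can also be applied through the NSX-T Policy API. The following is a minimal sketch rather than a definitive call: the NSX Manager address, credentials, tier-0 gateway path, DHCP profile path, and the route advertisement types are placeholders or assumptions, and should mirror the selections you made in the UI steps above.

## Create or update the tier-1 gateway through the NSX-T Policy API (placeholder values)
curl -k -u 'admin:<nsx-password>' -X PATCH \
  'https://<nsx-manager-fqdn>/policy/api/v1/infra/tier-1s/tanzu-t1-gw' \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "tanzu-t1-gw",
        "tier0_path": "/infra/tier-0s/<tier0-gateway-id>",
        "dhcp_config_paths": ["/infra/dhcp-server-configs/<dhcp-profile-id>"],
        "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_STATIC_ROUTES", "TIER1_LB_VIP", "TIER1_DNS_FORWARDER_IP", "TIER1_IPSEC_LOCAL_ENDPOINT"]
      }'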
VMware NSX provides the option to add two kinds of segments: overlay-backed segments and VLAN-backed segments. Segments are created as part of a transport zone. There are two types of transport zones: VLAN transport zones and overlay transport zones. A segment created in a VLAN transport zone is a VLAN-backed segment and a segment created in an overlay transport zone is an overlay-backed segment.
Create the overlay-backed logical segments as shown in the overlay-backed segments CIDR example above. All these segments will be a part of the same overlay transport zone and they must be connected to the tier-1 gateway.
The following procedure provides required details to create one such network which is required for the Tanzu for Kubernetes Operations deployment:
With admin privileges, log in to NSX Manager.
Select Networking > Segments.
Click ADD SEGMENT and enter a name for the segment. For example, tkg-mgmt-ls
Under Connected Gateway, select the tier-1 gateway that you created earlier.
Under Transport Zone, select a transport zone that will be an overlay transport zone.
Under Subnets, enter the gateway IP address of the subnet in the CIDR format. For example, 172.19.72.1/27
Note: The following step is required only for Tanzu Kubernetes Grid management network, shared services network, and workload network.
Click SET DHCP CONFIG.
Verify that the DHCP Type field is set to Gateway DHCP Server and that the DHCP Profile is set to the profile created while creating the tier-1 gateway.
Click Settings, select Enable DHCP Config, and enter the DHCP range and DNS server information.
Click Options and under Select DHCP Options, select GENERIC OPTIONS.
Click ADD GENERIC OPTION and add NTP servers (42) and Domain Search (119).
Click Save to create the logical segment.
Repeat the preceding steps to create all other required overlay-backed segments. Once completed, you should see an output similar to:
Additionally, you can create the required inventory groups and firewall rules. For more information, see NSX-T Data Center Product Documentation.
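If you want to script segment creation instead of repeating the UI steps, a minimal sketch using the NSX-T Policy API for the tkg-mgmt-ls segment is shown below. The NSX Manager address, credentials, and overlay transport zone path are placeholders; the gateway address and DHCP range come from the subnet CIDR table earlier in this document.

## Create the tkg-mgmt-ls overlay segment attached to the tier-1 gateway (placeholder values)
curl -k -u 'admin:<nsx-password>' -X PATCH \
  'https://<nsx-manager-fqdn>/policy/api/v1/infra/segments/tkg-mgmt-ls' \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "tkg-mgmt-ls",
        "connectivity_path": "/infra/tier-1s/tanzu-t1-gw",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>",
        "subnets": [
          {
            "gateway_address": "172.19.72.1/27",
            "dhcp_ranges": ["172.19.72.2-172.19.72.30"]
          }
        ]
      }'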
NSX Advanced Load Balancer (ALB) is an enterprise-grade integrated load balancer that provides L4-L7 load balancer support. It is recommended for vSphere deployments without NSX-T or when there are unique scaling requirements.
NSX Advanced Load Balancer is deployed in Write Access Mode in the vSphere environment backed by NSX-T. This mode grants the NSX Advanced Load Balancer controllers full write access to vCenter and NSX-T, which allows them to automatically create, modify, and remove service engines (SEs) and other resources as needed to adapt to changing traffic needs.
For a production-grade deployment, it is recommended to deploy 3 instances of the NSX Advanced Load Balancer controller for high availability and resiliency.
The sample IP address and FQDN set for the NSX Advanced Load Balancer controllers is as follows:
Controller Node | IP Address | FQDN |
---|---|---|
Node 1 Primary | 172.19.10.11 | alb01.tanzu.lab |
Node 2 Secondary | 172.19.10.12 | alb02.tanzu.lab |
Node 3 Secondary | 172.19.10.13 | alb03.tanzu.lab |
HA Address | 172.19.10.10 | alb.tanzu.lab |
As part of the prerequisites, you must have the NSX Advanced Load Balancer 21.1.4 OVA downloaded and imported to the content library. Deploy the NSX Advanced Load Balancer under the resource pool NSX-ALB and place it under the folder NSX-ALB-VMs.
To deploy NSX Advanced Load Balancer, complete the following steps.
A new task for creating the virtual machine appears in the Recent Tasks pane. After the task is complete, the NSX Advanced Load Balancer virtual machine is created on the selected resource. Power on the virtual machine and give it a few minutes for the system to boot. Upon successful boot up, navigate to NSX Advanced Load Balancer on your browser.
Note: While the system is booting up, a blank web page or a 503 status code may appear.
Once NSX Advanced Load Balancer is successfully deployed and running, navigate to NSX Advanced Load Balancer on your browser using the URL https://<IP/FQDN> and configure the basic system settings:
Set the admin password and click Create Account.
On the Welcome page, under System Settings, set the backup passphrase, provide DNS information, and then click Next.
Under Email/SMTP, provide email and SMTP information, and then click Next.
Under Multi-Tenant, configure settings as follows and click Save.
If you did not select the Setup Cloud After option before saving, the initial configuration wizard exits. The Cloud configuration window does not automatically launch, and you are directed to a dashboard view on the controller.
To configure NTP, navigate to Administration > Settings > DNS/NTP > Edit.
Add your NTP server details and then click Save.
Note: You may also delete the default NTP servers.
This document focuses on enabling NSX Advanced Load Balancer using the license model: Enterprise License (VMware NSX ALB Enterprise).
To configure licensing, navigate to Administration > Settings > Licensing and click the gear icon to change the license type to Enterprise.
Select Enterprise Tier as the license type and click Save.
Once the license tier is changed, apply the NSX Advanced Load Balancer Enterprise license key. If you have a license file instead of a license key, apply the license by selecting the Upload a License File option.
In a production environment, it is recommended to deploy additional controller nodes and configure the controller cluster for high availability and disaster recovery. Adding 2 additional nodes to create a 3-node cluster provides node-level redundancy for the controller and also maximizes performance for CPU-intensive analytics functions.
To run a 3-node controller cluster, you deploy the first node, perform the initial configuration, and set the cluster IP address. After that, you deploy and power on two more controller VMs, but you must not run the initial configuration wizard or change the admin password for these controller VMs. The configuration of the first controller VM is assigned to the two new controller VMs.
The first controller of the cluster receives the Leader role. The second and third controllers work as Followers.
Complete the following steps to configure NSX Advanced Load Balancer cluster:
Log in to the primary NSX Advanced Load Balancer controller and navigate to Administration > Controller > Nodes, and then click Edit.
Specify Name and Controller Cluster IP, and then click Save. This IP address must be from the NSX ALB management network.
Deploy the 2nd and 3rd NSX Advanced Load Balancer controller nodes by using steps in Deploy NSX Advanced Load Balancer.
Log in to the primary NSX Advanced Load Balancer controller using the Controller Cluster IP/FQDN and navigate to Administration > Controller > Nodes, and then click Edit. The Edit Controller Configuration popup appears.
In the Cluster Nodes field, enter the IP address for the 2nd and 3rd controller, and then click Save.
After you complete these steps, the primary NSX Advanced Load Balancer controller becomes the leader for the cluster and invites the other controllers to the cluster as members.
NSX Advanced Load Balancer then performs a warm reboot of the cluster. This process can take approximately 10-15 minutes. You will be automatically logged out of the controller node where you are currently logged in. On entering the cluster IP address in the browser, you can see details about the cluster formation task.
The configuration of the primary (leader) controller is synchronized to the new member nodes when the cluster comes online following the reboot. Once the cluster is successfully formed, you can see the following status:
Note: In the following tasks, all NSX Advanced Load Balancer configurations are done by connecting to the NSX Advanced Load Balancer Controller Cluster IP/FQDN.
The default system-generated controller certificate used for SSL/TLS connections does not have the required subject alternative name (SAN) entries. Complete the following steps to create a controller certificate:
Log in to the NSX Advanced Load Balancer controller and navigate to Templates > Security > SSL/TLS Certificates.
Click Create and select Controller Certificate. You can either generate a self-signed certificate, generate CSR, or import a certificate. For the purpose of this document, a self-signed certificate will be generated.
Provide all required details as per your infrastructure requirements. In the Subject Alternative Name (SAN) field, provide the IP addresses and FQDNs of all NSX Advanced Load Balancer controllers, including the NSX Advanced Load Balancer cluster IP and FQDN, and then click Save.
Once the certificate is created, capture the certificate contents as this is required while deploying the Tanzu Kubernetes Grid management cluster. To capture the certificate content, click on the Download icon next to the certificate, and then click Copy to clipboard under Certificate.
To replace the certificate, navigate to Administration > Settings > Access Settings, click the pencil icon at the top right to edit the system access settings, replace the SSL/TLS certificate, and then click Save.
Log out and log in to NSX Advanced Load Balancer.
NSX Advanced Load Balancer requires credentials for VMware NSX and vCenter Server to authenticate with these endpoints. These credentials need to be created before configuring the NSX-T cloud.
To create a new credential, navigate to Administration > User Credentials and click Create.
NSX Advanced Load Balancer can be deployed in multiple environments for the same system. Each environment is called a cloud. The following procedure provides steps to create a VMware NSX cloud. As per the architecture, two service engine (SE) groups will be created.
Service Engine Group 1: Service engines associated with this service engine group host:
Service Engine Group 2: Service engines associated with this service engine group host virtual services for all load balancer functionalities requested by Tanzu Kubernetes Grid workload clusters mapped to this SE group.
Note:
For information about mapping a specific service engine group to Tanzu Kubernetes Grid workload cluster, see Configure NSX Advanced Load Balancer in Tanzu Kubernetes Grid Workload Cluster.
The following components are created in NSX Advanced Load Balancer.
Object | Sample Name |
---|---|
NSX Cloud | tanzu-nsx |
Service Engine Group 1 | tkg-mgmt-seg |
Service Engine Group 2 | tkg-wld-seg |
Log in to NSX Advanced Load Balancer and navigate to Infrastructure > Clouds > Create > NSX-T Cloud.
Enter the cloud name and provide an object name prefix. Click CHANGE CREDENTIALS to connect NSX Advanced Load Balancer with VMware NSX.
Specify NSX-T Manager Address and select the NSX-T credential that you created earlier.
Under the Management Network pane, select the following:
Under the Data Networks pane, select the following:
Under vCenter Servers pane, click ADD.
Specify a name for the vCenter server and click CHANGE CREDENTIALS to connect NSX Advanced Load Balancer with the vCenter server.
Select the vCenter server from the drop-down menu and select the vCenter credential that you created earlier.
Select the Content Library where Service Engine templates will be stored by NSX Advanced Load Balancer.
Leave the IPAM/DNS profile section empty as this will be populated later, once you have created the profiles. Click SAVE to finish the NSX-T cloud configuration.
Ensure that the status of the NSX-T cloud is green after creation.
Create a service engine group for Tanzu Kubernetes Grid management clusters:
Enter a name for the Tanzu Kubernetes Grid management service engine group and set the following parameters:
Parameter | Value |
---|---|
High availability mode | N+M |
Memory per Service Engine | 4 GB |
vCPU per Service Engine | 2 |
Use the default values for the rest of the parameters.
For advanced configuration, click on the Advanced tab. Specify the vCenter server endpoint by clicking on the Add vCenter option.
Select the vCenter server from the drop-down menu, select the vSphere cluster and datastore for service engine placement, specify a service engine name prefix, and then click Save.
Repeat the preceding steps to create another service engine group for Tanzu Kubernetes Grid workload clusters. Once complete, there must be two service engine groups created.
As part of the cloud creation, the NSX Advanced Load Balancer management and Tanzu Kubernetes Grid VIP networks have been configured in NSX Advanced Load Balancer. Since DHCP was not selected as the IP address management method in the cloud configuration, you have to specify a pool of IP addresses that can be assigned to the service engine NICs and to the virtual services that will be created later.
To configure IP address pools for the networks, follow this procedure:
Navigate to Infrastructure > Networks and select the cloud that you have created earlier.
Click the edit icon next to the network and configure it as follows. Change the provided details as per your SDDC configuration.
Network Name | DHCP | Subnet | Static IP Pool |
---|---|---|---|
alb-mgmt-ls | No | 172.19.71.0/27 | 172.19.71.6 - 172.19.71.30 |
tkg-cluster-vip | No | 172.19.75.0/26 | 172.19.75.2 - 172.19.75.60 |
The following snippet shows configuring one of the networks. For example: alb-mgmt-ls.
Note: Ensure that the VRF Context for the alb-mgmt-ls network is set to Global.
Edit the tkg-cluster-vip network and configure it as follows. The VRF Context for the VIP network is set to the NSX tier-1 gateway.
Once the networks are configured, the configuration must look like the following image.
Once the networks are configured, set the default routes for the networks by navigating to Infrastructure > Routing.
The default gateway for the alb-mgmt-ls network is set in the global VRF context, and for the tkg-cluster-vip network, the VRF context is set to the NSX tier-1 gateway.
To set the default gateway for the alb-mgmt-ls network, click CREATE under the global VRF context and set the default gateway to the gateway of the NSX Advanced Load Balancer management subnet.
To set the default gateway for the tkg-cluster-vip network, click CREATE under the tier-1 gateway VRF context and set the default gateway to the gateway of the VIP network subnet.
The final configuration is shown below:
At this point, all the required networks related to Tanzu functionality are configured in NSX Advanced Load Balancer. NSX Advanced Load Balancer provides IPAM service for Tanzu Kubernetes Grid cluster VIP network and NSX ALB management network.
Complete the following steps to create an IPAM profile and once created, attach it to the NSX-T cloud created earlier.
Log in to NSX Advanced Load Balancer and navigate to Templates > IPAM/DNS Profiles > Create > IPAM Profile.
Provide the following details, and then click Save.
Parameter | Value |
---|---|
Name | tkg-ipam |
Type | Avi Vantage IPAM |
Cloud for Usable Networks | tanzu-nsx |
Usable Networks | alb-mgmt-ls, tkg-cluster-vip |
Click Create > DNS Profile and provide the domain name.
Attach the IPAM and DNS profiles to the NSX-T cloud.
Edit the tanzu-nsx cloud. Under the IPAM/DNS section, choose the IPAM and DNS profiles created earlier and save the updated configuration.
Under the section DNS Resolvers, click ADD to add the DNS server that NSX-T cloud will use to resolve the hostname or FQDN of the components that will be created later.
Enter a name for the DNS and click ADD under Name Servers to add your infrastructure DNS servers. Optionally, you can specify the TTL for the DNS.
This completes the NSX Advanced Load Balancer configuration. The next step is to deploy and configure a bootstrap machine which will be used to deploy and manage Tanzu Kubernetes clusters.
In Tanzu Kubernetes Grid v1.6, you can configure separate VIP networks for control plane HA and for the application load balancer and ingress services. This feature helps you secure the clusters by letting you expose the endpoint of your management or workload cluster and the load balancer and ingress services in the cluster on different networks.
You can configure the following parameters to expose the endpoints or the services in the management cluster and the workload clusters in different VIP networks.
For the workload cluster:
Where AVI_DATA_NETWORK is the network that you use for load balancing (L4/L7) the applications deployed in the workload cluster, and AVI_CONTROL_PLANE_NETWORK is the network that provides L4 load balancing for the control plane nodes of the workload clusters.
For the management cluster:
Where AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME is the network that you use for load balancing the applications deployed in the management cluster, and AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME is the network that provides L4 load balancing for the control plane nodes of the management cluster.
As per the TKO architecture, the control plane VIPs of the management cluster, the shared services cluster, and the workload clusters are served from the same network. So, the networks AVI_CONTROL_PLANE_NETWORK and AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME point to the same port group/logical segment to implement the TKO architecture.
Note: This document makes use of the single network tkg-cluster-vip for both the control plane VIPs and application load balancing of all TKG clusters.
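For reference, the following sketch shows how these variables look in a cluster configuration file when a single VIP network is used, as in this document; if you want separate networks, point the control plane variables at a different segment and CIDR. The values below simply mirror the sample deployment file shown later in this document.

# Workload and shared services clusters
AVI_DATA_NETWORK: tkg-cluster-vip
AVI_DATA_NETWORK_CIDR: 172.19.75.0/26
AVI_CONTROL_PLANE_NETWORK: tkg-cluster-vip
AVI_CONTROL_PLANE_NETWORK_CIDR: 172.19.75.0/26

# Management cluster
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: tkg-cluster-vip
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 172.19.75.0/26
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: tkg-cluster-vip
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 172.19.75.0/26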
The deployment of the Tanzu Kubernetes Grid management and workload clusters is facilitated by setting up a bootstrap machine where you install the Tanzu CLI and Kubectl utilities which are used to create and manage the Tanzu Kubernetes Grid instance. This machine also keeps the Tanzu Kubernetes Grid and Kubernetes configuration files for your deployments. The bootstrap machine can be a laptop, host, or server running on Linux, macOS, or Windows that you deploy management and workload clusters from.
The bootstrap machine runs a local kind cluster when the Tanzu Kubernetes Grid management cluster deployment is started. Once the kind cluster is fully initialized, the configuration is used to deploy the actual management cluster on the backend infrastructure. After the management cluster is fully configured, the local kind cluster is deleted and subsequent configurations are performed with the Tanzu CLI.
For this deployment, a CentOS 7 based virtual machine is used as the bootstrap machine. For information on how to configure a macOS or Windows machine, see Install the Tanzu CLI and Other Tools.
The bootstrap machine must meet the following prerequisites:
Connected to the Tanzu Kubernetes Grid management network, tkg-mgmt-ls.
To install the Tanzu CLI, Tanzu plugins, and the kubectl utility on the bootstrap machine, follow the instructions below:
Download and unpack the following packages from the VMware Tanzu Kubernetes Grid product download page.
VMware Tanzu CLI 1.6.0 for Linux
kubectl cluster cli v1.23.8 for Linux
Execute the following commands to install Tanzu Kubernetes Grid CLI, Kubectl CLI, and Carvel tools.
## Install Tanzu Kubernetes Grid CLI
tar -zxvf tanzu-cli-bundle-linux-amd64.tar.gz
install cli/core/v0.25.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
chmod +x /usr/local/bin/tanzu
## Verify Tanzu CLI version
[root@tkg160-bootstrap ~]# tanzu version
version: v0.25.0
buildDate: 2022-08-25
sha: 6288c751-dirty
## Install Tanzu Plugins
[root@tkg160-bootstrap ~]# tanzu plugin sync
Checking for required plugins...
Installing plugin 'login:v0.25.0'
Installing plugin 'management-cluster:v0.25.0'
Installing plugin 'package:v0.25.0'
Installing plugin 'pinniped-auth:v0.25.0'
Installing plugin 'secret:v0.25.0'
Installing plugin 'telemetry:v0.25.0'
Successfully installed all required plugins
✔ Done
## Verify the plugins are installed
[root@tkg160-bootstrap ~]# tanzu plugin list
NAME DESCRIPTION SCOPE DISCOVERY VERSION STATUS
login Login to the platform Standalone default v0.25.0 installed
management-cluster Kubernetes management-cluster operations Standalone default v0.25.0 installed
package Tanzu package management Standalone default v0.25.0 installed
pinniped-auth Pinniped authentication operations (usually not directly invoked) Standalone default v0.25.0 installed
secret Tanzu secret management Standalone default v0.25.0 installed
telemetry Configure cluster-wide telemetry settings Standalone default v0.25.0 installed
## Install Kubectl CLI
gunzip kubectl-linux-v1.23.8+vmware.2.gz
mv kubectl-linux-v1.23.8+vmware.2 /usr/local/bin/kubectl
chmod +x /usr/local/bin/kubectl
# Install Carvel tools
##Install ytt
cd ./cli
gunzip ytt-linux-amd64-v0.35.1+vmware.1.gz
chmod +x ytt-linux-amd64-v0.35.1+vmware.1 && mv ./ytt-linux-amd64-v0.35.1+vmware.1 /usr/local/bin/ytt
##Install kapp
gunzip kapp-linux-amd64-v0.49.0+vmware.1.gz
chmod +x kapp-linux-amd64-v0.49.0+vmware.1 && mv ./kapp-linux-amd64-v0.49.0+vmware.1 /usr/local/bin/kapp
##Install kbld
gunzip kbld-linux-amd64-v0.34.0+vmware.1.gz
chmod +x kbld-linux-amd64-v0.34.0+vmware.1 && mv ./kbld-linux-amd64-v0.34.0+vmware.1 /usr/local/bin/kbld
##Install imgpkg
gunzip imgpkg-linux-amd64-v0.29.0+vmware.1.gz
chmod +x imgpkg-linux-amd64-v0.29.0+vmware.1 && mv ./imgpkg-linux-amd64-v0.29.0+vmware.1 /usr/local/bin/imgpkg
Validate Carvel tools installation using the following commands.
ytt version
kapp version
kbld version
imgpkg version
Install yq. yq is a lightweight and portable command-line YAML processor. yq uses jq-like syntax but works with YAML and JSON files.
cd /root
wget https://github.com/mikefarah/yq/releases/download/v4.24.5/yq_linux_amd64.tar.gz
tar -xvf yq_linux_amd64.tar.gz && mv yq_linux_amd64 /usr/local/bin/yq
Install kind.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
Create an SSH key pair.
An SSH key pair is required for Tanzu CLI to connect to vSphere from the bootstrap machine.
The public key part of the generated key is passed during the Tanzu Kubernetes Grid management cluster deployment.
## Generate SSH key pair
## When prompted enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default and provide password
ssh-keygen -t rsa -b 4096 -C "[email protected]"
## Add the private key to the SSH agent running on your machine and enter the password you created in the previous step
ssh-add ~/.ssh/id_rsa
## If the above command fails, execute "eval $(ssh-agent)" and then rerun the command
All required packages are now installed, and the required configurations are in place in the bootstrap virtual machine. The next step is to deploy the Tanzu Kubernetes Grid management cluster.
Before you proceed with the management cluster creation, ensure that the base image template is imported into vSphere and is available as a template. To import a base image template into vSphere:
Go to the Tanzu Kubernetes Grid downloads page and download a Tanzu Kubernetes Grid OVA for the cluster nodes.
Note: Make sure you download the most recent OVA base image templates in the event of security patch releases. You can find updated base image templates that include security patches on the Tanzu Kubernetes Grid product download page.
In the vSphere client, right-click an object in the vCenter Server inventory and select Deploy OVF template.
Select Local file, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
Follow the installer prompts to deploy a VM from the OVA.
Click Finish to deploy the VM. When the OVA deployment finishes, right-click the VM and select Template > Convert to Template.
Note: Do not power on the VM before you convert it to a template.
If using a non-administrator SSO account: In the VMs and Templates view, right-click the new template, select Add Permission, and assign the tkg-user to the template with the TKG role.
For information about how to create the user and role for Tanzu Kubernetes Grid, see Required Permissions for the vSphere Account.
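If you prefer to import the base image from the command line, a minimal sketch with the govc CLI is shown below, reusing the connection variables from the earlier resource pool example. The OVA file name and the template name are hypothetical; use the file that you downloaded from the product page.

## Import the downloaded base image OVA (file name is an example) into the TKG-Mgmt resource pool and folder
export GOVC_DATACENTER=Tanzu-DC
govc import.ova -ds=vsanDatastore -pool=/Tanzu-DC/host/Tanzu-CL01/Resources/TKG-Mgmt -folder=/Tanzu-DC/vm/TKG-Mgmt-VMs -name=photon-3-kube-v1.23.8 ./photon-3-kube-v1.23.8.ova
## Do not power on the VM; convert it to a template directly
govc vm.markastemplate photon-3-kube-v1.23.8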
The management cluster is a Kubernetes cluster that runs Cluster API operations on a specific cloud provider to create and manage workload clusters on that provider.
The management cluster is also where you configure the shared and in-cluster services that the workload clusters use.
You can deploy management clusters in two ways: by using the Tanzu Kubernetes Grid installer interface, or by creating and editing a deployment configuration file and using it with the Tanzu CLI.
The following procedure provides the required steps to deploy Tanzu Kubernetes Grid management cluster using the installer interface.
To launch the UI installer wizard, run the following command on the bootstrap machine:
tanzu management-cluster create --ui --bind <bootstrapper-ip>:<port> --browser none
For example:
tanzu management-cluster create --ui --bind 172.19.10.4:8000 --browser none
Access the Tanzu UI wizard by opening a browser and entering: http://<bootstrapper-ip>:<port>/
On the VMware vSphere tile, click DEPLOY.
In the IaaS Provider section, enter the IP address/FQDN and credentials of the vCenter server where the Tanzu Kubernetes Grid management cluster will be deployed. (Optional) you can skip the vCenter SSL thumbprint verification.
Click CONNECT and select “DEPLOY TKG MANAGEMENT CLUSTER”.
Select the data center and provide the SSH public Key generated while configuring the bootstrap VM.
If you have saved the SSH key in the default location, run the following command in your bootstrap machine to get the SSH public key.
cat /root/.ssh/id_rsa.pub
Click NEXT.
On the Management Cluster Settings section, provide the following details and click Next.
Based on the environment requirements, select appropriate deployment type for the Tanzu Kubernetes Grid management cluster:
Development: Recommended for Dev or POC environments
Production: Recommended for Production environments
It is recommended to set the instance type to Large or above. For the purpose of this document, we will proceed with deployment type Production and instance type Medium.
Management Cluster Name: Name for your management cluster.
On the NSX Advanced Load Balancer section, provide the following information and click Next.
Once these details are provided, click VERIFY CREDENTIALS and choose the following parameters.
Cloud Name: name of the NSX cloud created while configuring NSX Advanced Load Balancer, for example, tanzu-vcenter-01.
Workload Cluster Service Engine Group Name: name of the service engine group created for Tanzu Kubernetes Grid workload clusters, for example, tkg-wld-seg.
Workload Cluster Data Plane VIP Network Name & CIDR: tkg-cluster-vip network and the subnet associated with it.
Workload Cluster Control Plane VIP Network Name & CIDR: tkg-cluster-vip network and the subnet associated with it.
Management Cluster Service Engine Group Name: name of the service engine group created for the Tanzu Kubernetes Grid management cluster, for example, tkg-mgmt-seg.
Management Cluster Data Plane VIP Network Name & CIDR: tkg-cluster-vip network and the subnet associated with it.
Management Cluster Control Plane VIP Network Name & CIDR: tkg-cluster-vip network and the subnet associated with it.
Cluster Labels: To adhere to the architecture, defining a label is mandatory. Provide the required labels, for example, type: management.
Note: Based on your requirements, you may specify multiple labels.
Note: With the above configuration, when Tanzu Kubernetes Grid clusters (shared services or workload) are tagged with the label type: management, the ako pod gets deployed on the cluster, any applications hosted on the cluster that require the load balancing service are exposed through the network tkg-cluster-vip, and the virtual services are placed on the SE group tkg-mgmt-seg.
(Optional) On the Metadata page, you can specify location and labels and click Next.
On the Resources section, specify the resources to be consumed by the Tanzu Kubernetes Grid management cluster and click NEXT.
On the Kubernetes Network section, select the Tanzu Kubernetes Grid management network (tkg-mgmt-ls) where the control plane and worker nodes will be placed during management cluster deployment. Ensure that the network has DHCP service enabled. Optionally, change the pod and service CIDRs.
If the Tanzu environment is placed behind a proxy, enable proxy and provide proxy details:
If you set http-proxy, you must also set https-proxy, and vice-versa.
For the no-proxy section:
For Tanzu Kubernetes Grid management and workload clusters, localhost, 127.0.0.1, the values of CLUSTER_CIDR and SERVICE_CIDR, .svc, and .svc.cluster.local are appended along with the user-specified values.
Ensure that the external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment are reachable by your proxies or add them to TKG_NO_PROXY. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, Harbor, NSX, NSX Advanced Load Balancer, and vCenter.
For vSphere, you must manually add the CIDRs of the Tanzu Kubernetes Grid management network and the cluster VIP network, which includes the IP address of your control plane endpoint, to TKG_NO_PROXY.
(Optional) Specify identity management with OIDC or LDAP. For the purpose of this document, identity management integration is deactivated.
If you would like to enable identity management, see Enable and Configure Identity Management During Management Cluster Deployment section in the Tanzu Kubernetes Grid Integration with Pinniped Deployment Guide.
Select the OS image that will be used for the management cluster deployment.
Note: This list will appear empty if you don't have a compatible template present in your environment. Refer to the steps provided in Import Base Image template for TKG Cluster deployment.
Select “Participate in the Customer Experience Improvement Program”, if you so desire.
Click REVIEW CONFIGURATION.
As of now, it is not possible to deploy a management cluster for the NSX cloud from the Tanzu Kubernetes Grid installer UI because one of the required fields for the NSX cloud is not exposed in the UI; it must be manually inserted into the cluster deployment YAML file.
Click on EXPORT CONFIGURATION to download the deployment yaml file.
Edit the file and insert the key AVI_NSXT_T1LR. The value of this key is the tier-1 gateway where you have connected the tkg-mgmt-ls network. In this example, the value is set to tanzu-t1-gw.
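If you prefer to make this edit from the command line, a minimal sketch using the yq utility installed earlier is shown below; the file name mgmt-cluster-config.yaml is an assumption for the exported configuration file.

## Insert the AVI_NSXT_T1LR key into the exported configuration file
yq e -i '.AVI_NSXT_T1LR = "/infra/tier-1s/tanzu-t1-gw"' mgmt-cluster-config.yaml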
A sample file used for the management cluster deployment is shown below.
AVI_CA_DATA_B64: LS0tLS1CRUdJTiBDRVJUSU....z0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
AVI_CLOUD_NAME: tkg-nsxt
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROL_PLANE_NETWORK: tkg-cluster-vip
AVI_CONTROL_PLANE_NETWORK_CIDR: 172.19.75.0/26
AVI_CONTROLLER: alb.tanzu.lab
AVI_DATA_NETWORK: tkg-cluster-vip
AVI_DATA_NETWORK_CIDR: 172.19.75.0/26
AVI_ENABLE: "true"
AVI_LABELS: |
'type': 'management'
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 172.19.75.0/26
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: tkg-cluster-vip
AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: tkg-mgmt-seg
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 172.19.75.0/26
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: tkg-cluster-vip
AVI_NSXT_T1LR: /infra/tier-1s/tanzu-t1-gw
AVI_PASSWORD: <encoded:Vk13YXJlMSE=>
AVI_SERVICE_ENGINE_GROUP: tkg-wld-seg
AVI_USERNAME: admin
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tkg160-mgmt-nsxt
CLUSTER_PLAN: prod
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: ""
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /Tanzu-DC
VSPHERE_DATASTORE: /Tanzu-DC/datastore/vsanDatastore
VSPHERE_FOLDER: /Tanzu-DC/vm/TKG-Mgmt-VMs
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /Tanzu-DC/network/tkg-mgmt-ls
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_RESOURCE_POOL: /Tanzu-DC/host/Tanzu-CL01/Resources/TKG-Mgmt
VSPHERE_SERVER: tanzu-vc01.tanzu.lab
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3NzaC1yc2....zyr3RvifuokfrQ== manish@vmware
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: [email protected]
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"
CONTROL_PLANE_MACHINE_COUNT: "3"
WORKER_MACHINE_COUNT: "3"
DEPLOY_TKG_ON_VSPHERE7: true
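With the key inserted, start the deployment from the bootstrap machine using the edited file. The file name below is an example; point the command at wherever you saved the exported configuration.

## Deploy the management cluster from the configuration file (optionally increase verbosity with -v)
tanzu management-cluster create --file mgmt-cluster-config.yaml -v 6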
While the cluster is being deployed, you will find that a virtual service is created in NSX Advanced Load Balancer, new service engines are deployed in vCenter by NSX Advanced Load Balancer, and the service engines are mapped to the SE group tkg-mgmt-seg.
The installer automatically sets the context to the Tanzu Kubernetes Grid management cluster on the bootstrap machine. Now, you can access the Tanzu Kubernetes Grid management cluster from the bootstrap machine and perform additional tasks such as verifying the management cluster health, deploying the workload clusters, etc.
To get the status of Tanzu Kubernetes Grid management cluster, run the following command:
tanzu management-cluster get
Use the kubectl get nodes command to get the status of the Tanzu Kubernetes Grid management cluster nodes.
The Tanzu Kubernetes Grid management cluster is successfully deployed and now you can proceed with registering it with Tanzu Mission Control and creating shared services and workload clusters.
If you want to register your management cluster with Tanzu Mission Control, see Register Your Management Cluster with Tanzu Mission Control.
Each Tanzu Kubernetes Grid instance can have only one shared services cluster. Create a shared services cluster if you intend to deploy Harbor.
The procedures for deploying a shared services cluster and a workload cluster are almost the same. A key difference is that you add the tanzu-services label to the shared services cluster as its cluster role. This label identifies the shared services cluster to the management cluster and workload clusters.
Another difference between the shared services cluster and workload clusters is that the shared services cluster is applied with the cluster labels that were defined while deploying the management cluster. This enforces that only the shared services cluster makes use of the Tanzu Kubernetes Grid cluster VIP or data network for application load balancing and that its virtual services are deployed on Service Engine Group 1.
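If the cluster labels are not supplied when the cluster is created in Tanzu Mission Control, they can also be applied afterwards from the management cluster context. The sketch below assumes the label type: management defined earlier and the shared services cluster name used later in this document.

## Apply the cluster label defined during management cluster deployment to the shared services cluster
kubectl label cluster tkg160-sharedsvc-nsx type=management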
After the management cluster is registered with Tanzu Mission Control, the deployment of the Tanzu Kubernetes clusters can be done in just a few clicks. The procedure for creating Tanzu Kubernetes clusters is as follows.
Navigate to the Clusters tab and click Create Cluster.
Under the Create cluster page, select the management cluster which you registered in the previous step and click Continue to create cluster.
Select the provisioner for creating the workload cluster (shared services cluster). Provisioner reflects the vSphere namespaces that you have created and associated with the management cluster.
Enter a name for the cluster and select the cluster group to which you want to attach your cluster. Cluster names must be unique within an organization. For the cluster group, you can optionally enter a description and apply labels.
On the Configure page, specify the following items:
You can optionally specify a proxy configuration to use for this cluster.
Note: The scope of this document doesn’t cover the use of a proxy for Tanzu Kubernetes Grid deployment. If your environment uses a proxy server to connect to the internet, ensure that the proxy configuration object includes the CIDRs for the pod, ingress, and egress from the workload network of the Management Cluster in the No proxy list, as described in Create a Proxy Configuration Object.
Select the resources for backing this cluster. Provide the resource pool, VM folder, and datastore information.
Select the high availability mode for the control plane nodes of the workload cluster. For a production deployment, it is recommended to deploy a highly available workload cluster.
Customize the default node pool for your workload cluster.
If you have deployed your management cluster with AVI_LABELS, you can pass the same labels in key:value format in the shared services cluster configuration to leverage the VIP network and service engine group that you specified in the management cluster configuration. Click CREATE CLUSTER to start provisioning your cluster.
Once the cluster is created, you can check the status from Tanzu Mission Control.
Cluster creation takes approximately 15-20 minutes to complete. After the cluster deployment completes, ensure that agent and extensions health shows green.
Connect to the Tanzu Management Cluster context and apply the following labels.
## Connect to tkg management cluster
kubectl config use-context tkg160-mgmt-nsxt-admin@tkg160-mgmt-nsxt
## verify the shared service cluster creation
tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN TKR
tkg160-sharedsvc-nsx default running 3/3 3/3 v1.23.8+vmware.2 <none> prod v1.23.8---vmware.2-tkg.1
## Add the tanzu-services label to the shared services cluster as its cluster role. In the following command "tkg160-sharedsvc-nsx" is the name of the shared services cluster
kubectl label cluster.cluster.x-k8s.io/tkg160-sharedsvc-nsx cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
Get the admin context of the shared service cluster using the following commands and switch the context to the shared services cluster.
## Use the following command to get the admin context of the shared services cluster. In the following command "tkg160-sharedsvc-nsx" is the name of the shared services cluster
tanzu cluster kubeconfig get tkg160-sharedsvc-nsx --admin
## Use the following command to use the context of shared services cluster
kubectl config use-context tkg160-sharedsvc-nsx-admin@tkg160-sharedsvc-nsx
# Verify that ako pod gets deployed in avi-system namespace
kubectl get pods -n avi-system
NAME READY STATUS RESTARTS AGE
ako-0 1/1 Running 0 41s
Now that the shared services cluster is successfully created, you may proceed with creating the workload cluster.
Complete the following steps to deploy workload clusters from Tanzu Mission Control:
Navigate to the Clusters tab and click Create Cluster.
Under the create cluster page, select the management cluster which you registered in the previous step and click Continue to create cluster.
Select the provisioner for creating the workload cluster. Provisioner reflects the vSphere namespaces that you have created and that are associated with the management cluster.
Enter a name for the cluster and select the cluster group to which you want to attach your cluster. Cluster names must be unique within an organization. For cluster groups, you can optionally enter a description and apply labels.
On the Configure page, specify the following items:
You can optionally specify a proxy configuration to use for this cluster.
Note: The scope of this document doesn’t cover the use of a proxy for Tanzu Kubernetes Grid deployment. If your environment uses a proxy server to connect to the internet, ensure that the proxy configuration object includes the CIDRs for the pod, ingress, and egress from the workload network of the Management Cluster in the No proxy list, as described in Create a Proxy Configuration Object.
Select the resources for backing this cluster. Provide the Resource Pool, VM folder and Datastore information.
Select the high availability mode for the control plane nodes of the workload cluster. For a production deployment, it is recommended to deploy a highly available workload cluster.
(Optional) Customize the default node pool for your workload cluster.
Click CREATE CLUSTER to start provisioning your workload cluster.
You can monitor the workload cluster creation from the Tanzu Mission Control console.
Cluster creation takes approximately 15-20 minutes to complete.
After the cluster deployment completes, ensure that agent and extensions health shows green.
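Alternatively, if you prefer the Tanzu CLI over Tanzu Mission Control for workload cluster creation, a minimal sketch is shown below. The file name workload-cluster-config.yaml is an assumption for a cluster configuration file prepared like the management cluster file; the cluster name is taken from the CLUSTER_NAME value inside it.

## Create the workload cluster from a configuration file on the bootstrap machine
tanzu cluster create --file workload-cluster-config.yaml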
Tanzu Kubernetes Grid v1.6.x management clusters with NSX Advanced Load Balancer are deployed with two AKODeploymentConfigs.
install-ako-for-management-cluster: default configuration for the management cluster.
install-ako-for-all: default configuration for all Tanzu Kubernetes Grid clusters. By default, any clusters that match the cluster labels defined in install-ako-for-all will reference this file for their virtual IP networks, service engine (SE) groups, and L7 ingress. As part of this architecture, only the shared services cluster makes use of the configuration defined in the default AKODeploymentConfig install-ako-for-all.
As per the defined architecture, workload clusters must make use of the service engine group tkg-wld-seg and the VIP network tkg-cluster-vip for application load balancer services.
To achieve this, a new AKODeploymentConfig must be created, and the workload clusters must be tagged with cluster labels that match the AKODeploymentConfig.spec.clusterSelector.matchLabels element in the AKODeploymentConfig file. Once the labels are applied on the workload cluster, the Tanzu Kubernetes Grid management cluster will deploy the AKO pod on the target workload cluster with the configuration defined in the new AKODeploymentConfig.
The format of the AKODeploymentConfig yaml file is as follows.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
generation: 2
name: <Unique name of AKODeploymentConfig>
spec:
adminCredentialRef:
name: avi-controller-credentials
namespace: tkg-system-networking
certificateAuthorityRef:
name: avi-controller-ca
namespace: tkg-system-networking
cloudName: <Name of the Cloud in NSX ALB>
clusterSelector:
matchLabels:
<key>: <value>
controlPlaneNetwork:
cidr: <vip network cidr>
name: <vip network name>
controller: <nsx alb ip/fqdn>
dataNetwork:
cidr: <vip network cidr>
name: <vip network name>
extraConfigs:
cniPlugin: antrea
disableStaticRouteSync: true
l4Config:
autoFQDN: disabled
layer7Only: false
networksConfig:
enableRHI: false
nsxtT1LR: <tier-1 gateway where vip network is connected>
ingress:
defaultIngressController: true
disableIngressClass: false
nodeNetworkList:
- networkName: <tkg workload network>
serviceEngineGroup: <SERVICE ENGINE Group NAME>
The sample AKODeploymentConfig with sample values in place is as follows. As per the following configuration, the Tanzu Kubernetes Grid management cluster will deploy an AKO pod on any workload cluster that matches the label type=workload, and the AKO configuration will be as follows:
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
generation: 2
name: adc-workload
spec:
adminCredentialRef:
name: avi-controller-credentials
namespace: tkg-system-networking
certificateAuthorityRef:
name: avi-controller-ca
namespace: tkg-system-networking
cloudName: tanzu-nsx
clusterSelector:
matchLabels:
type: workload
controlPlaneNetwork:
cidr: 172.19.75.0/26
name: tkg-cluster-vip
controller: alb.tanzu.lab
dataNetwork:
cidr: 172.19.75.0/26
name: tkg-cluster-vip
extraConfigs:
cniPlugin: antrea
disableStaticRouteSync: true
l4Config:
autoFQDN: disabled
layer7Only: false
networksConfig:
enableRHI: false
nsxtT1LR: /infra/tier-1s/tanzu-t1-gw
ingress:
defaultIngressController: true
disableIngressClass: false
nodeNetworkList:
- networkName: tkg-workload-ls
serviceEngineGroup: tkg-wld-seg
Once you have the AKO configuration file ready, use the kubectl command to set the context to the Tanzu Kubernetes Grid management cluster, and then apply the new AKODeploymentConfig:
kubectl apply -f <path_to_akodeploymentconfig.yaml>
Use the following command to list all AKODeploymentConfig created under the management cluster:
kubectl get adc
## or
kubectl get akodeploymentconfig
Now that you have successfully created the AKO deployment config, you need to apply the cluster labels defined in the AKODeploymentConfig to any of the Tanzu Kubernetes Grid workload clusters. Once the labels are applied, Tanzu Kubernetes Grid management cluster will deploy AKO pod on the target workload cluster.
kubectl label cluster <Cluster_Name> <label>
Example: kubectl label cluster tkg160-wld01-nsx type=workload
Now that the Tanzu Kubernetes Grid workload cluster is created and required AKO configurations are applied, use the following command to get the admin context of the Tanzu Kubernetes Grid workload cluster.
tanzu cluster kubeconfig get <cluster-name> --admin
Connect to the Tanzu Kubernetes Grid workload cluster using the kubectl command and run the following commands to check the status of AKO and other components.
Switch context to the workload cluster: kubectl config use-context tkg160-wld01-nsx-admin@tkg160-wld01-nsx
List all nodes with status: kubectl get nodes
Check the status of AKO pod: kubectl get pods -n avi-system
List all pods and their status: kubectl get pods -A
You can see that the workload cluster is successfully deployed and the AKO pod is deployed on the cluster. You can now configure SaaS services for the cluster and deploy user-managed packages on this cluster.
For instructions on enabling Tanzu Observability on your workload cluster, see Set up Tanzu Observability to Monitor a Tanzu Kubernetes Cluster.
For instructions on installing Tanzu Service Mesh on your workload cluster, see Onboard a Tanzu Kubernetes Cluster to Tanzu Service Mesh.
For instructions on installing user-managed packages on the Tanzu Kubernetes clusters, see Deploy User-Managed Packages in Workload Clusters.