This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to vSphere, Amazon Elastic Compute Cloud (Amazon EC2), or Microsoft Azure. The Tanzu Kubernetes Grid installer interface guides you through the deployment of the management cluster and provides different configurations for you to select or reconfigure. If this is the first time that you are deploying a management cluster to a given infrastructure provider, it is recommended that you use the installer interface.
Before you can deploy a management cluster, you must make sure that your environment meets the requirements for the target infrastructure provider.
If you are deploying clusters in an internet-restricted environment, you must also set TKG_CUSTOM_IMAGE_REPOSITORY as an environment variable.
For information about the node instance sizes available on Amazon EC2, for example t3.xlarge, see Amazon EC2 Instance Types.
For information about the node VM sizes available on Azure, for example Standard_D4s_v3, see Sizes for virtual machines in Azure.
Important: The tanzu management-cluster create command takes time to complete. While tanzu management-cluster create is running, do not run additional invocations of tanzu management-cluster create on the same bootstrap machine to deploy multiple management clusters, change context, or edit ~/.kube-tkg/config.
On the machine on which you downloaded and installed the Tanzu CLI, run the tanzu management-cluster create command with the --ui option:
tanzu management-cluster create --ui
The installer interface launches in a browser and takes you through steps to configure the management cluster.
If you are running the Tanzu CLI on a machine without a browser, or if you want to run the interface elsewhere, run tanzu management-cluster create --ui with the --browser none option described in Installer Interface Options below.
The tanzu management-cluster create --ui command saves the settings from your installer input in a cluster configuration file. After you confirm your input values on the last pane of the installer interface, the installer saves them to ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml.
By default, Tanzu Kubernetes Grid saves the kubeconfig for all management clusters in the ~/.kube-tkg/config file. If you want to save the kubeconfig file for your management cluster to a different location, set the KUBECONFIG environment variable before running tanzu management-cluster create.
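For example, a minimal sketch that saves the kubeconfig to a hypothetical location of your choice:
export KUBECONFIG=/tmp/my-mgmt-cluster-kubeconfig
tanzu management-cluster create --ui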
If the prerequisites are met, tanzu management-cluster create --ui launches the Tanzu Kubernetes Grid installer interface. By default, it opens the installer interface locally, at http://127.0.0.1:8080 in your default browser.
The Installer Interface Options section below explains how you can change where the installer interface runs, including running it on a different machine from the machine on which you run the Tanzu CLI.
Click the Deploy button for VMware vSphere, Amazon EC2, or Microsoft Azure.
By default, tanzu management-cluster create --ui opens the installer interface locally, at http://127.0.0.1:8080 in your default browser. You can use the --browser and --bind options to control where the installer interface runs:
--browser specifies the local browser to open the interface in. Set it to none and use --bind to run the interface on a different machine, as described below.
--bind specifies the IP address and port to serve the interface from.
Warning: Serving the installer interface from a non-default IP address and port could expose the tanzu CLI to a potential security risk while the interface is running. VMware recommends passing the --bind option an IP address and port on a secure network.
Use cases for these options include the following:
If the default port 8080 is already in use by another process, use --bind to serve the interface from a different local port.
To run the tanzu CLI and create management clusters on a remote machine, with the installer interface running locally or elsewhere:
On the remote bootstrap machine, run tanzu management-cluster create --ui with the following options and values:
--bind: an IP address and port for the remote machine
--browser: none
tanzu management-cluster create --ui --bind 192.168.1.87:5555 --browser none
On the local UI machine, browse to the remote machine's IP address to access the installer interface.
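With the example options above, the interface would be available at http://192.168.1.87:5555.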
The options to configure the infrastructure provider section of the installer interface depend on which provider you are using.
In the IaaS Provider section, enter the IP address or fully qualified domain name (FQDN) for the vCenter Server instance on which to deploy the management cluster.
Support for IPv6 addresses in Tanzu Kubernetes Grid is limited; see Deploy Clusters on IPv6 (vSphere Only). If you are not deploying to an IPv6-only networking environment, you must provide IPv4 addresses in the following steps.
Enter the vCenter Single Sign On username and password for a user account that has the required privileges for Tanzu Kubernetes Grid operation, and click Connect.
Verify the SSL thumbprint of the vCenter Server certificate and click Continue if it is valid.
For information about how to obtain the vCenter Server certificate thumbprint, see Obtain vSphere Certificate Thumbprints.
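One way to obtain the thumbprint from a shell, assuming OpenSSL is installed and vcenter.example.com stands in for your vCenter Server address:
# Prints the SHA1 fingerprint of the certificate that vCenter Server presents
openssl s_client -connect vcenter.example.com:443 < /dev/null 2> /dev/null | openssl x509 -fingerprint -sha1 -noout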
If you are deploying a management cluster to a vSphere 7 instance, confirm whether or not you want to proceed with the deployment.
On vSphere 7, the vSphere with Tanzu option includes a built-in supervisor cluster that works as a management cluster and provides a better experience than a separate management cluster deployed by Tanzu Kubernetes Grid. Deploying a Tanzu Kubernetes Grid management cluster to vSphere 7 when vSphere with Tanzu is not enabled is supported, but the preferred option is to enable vSphere with Tanzu and use the Supervisor Cluster. VMware Cloud on AWS and Azure VMware Solution do not support a supervisor cluster, so you need to deploy a management cluster. For information, see Add a vSphere 7 Supervisor Cluster as a Management Cluster.
To reflect the recommendation to use vSphere with Tanzu when deploying to vSphere 7, the Tanzu Kubernetes Grid installer detects whether vSphere with Tanzu is enabled on the target instance and asks you to confirm whether to proceed with the deployment or to use the built-in supervisor cluster instead.
Select the datacenter in which to deploy the management cluster from the Datacenter drop-down menu.
Paste the contents of your SSH public key into the text box and click Next.
For the next steps, go to Configure the Management Cluster Settings.
In the IaaS Provider section, select how to provide the credentials for your Amazon EC2 account. You have two options:
Credential Profile (recommended): Select an already existing AWS credential profile. If you select a profile, the access key and session token information configured for your profile are passed to the Installer without displaying actual values in the UI. For information about setting up credential profiles, see Credential Files and Profiles.
One-Time Credentials: Enter AWS account credentials directly in the Access Key ID and Secret Access Key fields for your Amazon EC2 account. Optionally specify an AWS session token in Session Token if your AWS account is configured to require temporary credentials. For more information on acquiring session tokens, see Using temporary credentials with AWS resources.
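If you do not yet have a credential profile for the first option, one way to create one, assuming the AWS CLI is installed and using an illustrative profile name:
# Prompts for an access key ID, secret access key, and default region,
# and stores them in your AWS credentials file under the named profile
aws configure --profile tkg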
In Region, select the AWS region in which to deploy the management cluster.
If you intend to deploy a production management cluster, this region must have at least three availability zones.
In the VPC for AWS section, do one of the following:
To create a new VPC, select Create new VPC on AWS, check that the pre-filled CIDR block is available, and click Next. If the recommended CIDR block is not available, enter a new IP range in CIDR format for the management cluster to use. The recommended CIDR block for VPC CIDR is 10.0.0.0/16.
To use an existing VPC, select Select an existing VPC and select the VPC ID from the drop-down menu. The VPC CIDR block is filled in automatically when you select the VPC. If you are deploying the management cluster in an internet-restricted environment, such as a proxied or air-gapped environment, select the This is not an internet facing VPC check box.
For the next steps, go to Configure the Management Cluster Settings.
IMPORTANT: If this is the first time that you are deploying a management cluster to Azure with a new version of Tanzu Kubernetes Grid, for example v1.4.0, make sure that you have accepted the base image license for that version. For information, see Accept the Base Image License in Prepare to Deploy Management Clusters to Microsoft Azure.
In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. You recorded these values when you registered an Azure app and created a secret for it using the Azure Portal.
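If you use the Azure CLI rather than the Azure Portal, a hedged sketch of commands that produce these values follows; the app name is illustrative, and flags can vary by CLI version:
# Prints appId (Client ID), password (Client Secret), and tenant (Tenant ID)
az ad sp create-for-rbac --name "tkg-mgmt-app" --role Contributor
# Prints the Subscription ID of the current account
az account show --query id --output tsv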
Paste the contents of your SSH public key, such as .ssh/id_rsa.pub, into the text box.
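If you do not already have a key pair, a minimal sketch of generating one and printing the public key to paste; the comment string is a placeholder:
ssh-keygen -t rsa -b 4096 -C "user@example.com"
cat ~/.ssh/id_rsa.pub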
Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.
If you select Select an existing resource group, use the drop-down menu to select the group, then click Next.
If you select Create a new resource group, enter a name for the new resource group and then click Next.
In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button.
If you select Create a new VNET on Azure, use the drop-down menu to select the resource group in which to create the VNET, and provide a name and a CIDR block for the VNET, the control plane subnet, and the worker node subnet.
After configuring these fields, click Next.
If you select Select an existing VNET, use the drop-down menus to select the resource group in which the VNET is located, the VNET name, the control plane and worker node subnets, and then click Next.
To make the management cluster private, as described in Azure Private Clusters, enable the Private Azure Cluster checkbox.
This section applies to all infrastructure providers.
In the Management Cluster Settings section, select the Development or Production tile.
In either of the Development or Production tiles, use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.
Choose the configuration for the control plane node VMs depending on the workloads that the cluster will run. For example, some workloads might require a large compute capacity but relatively little storage, while others might require a large amount of storage and less compute capacity. If you select an instance type in the Production tile, the instance type that you selected is automatically selected for the Worker Node Instance Type. If necessary, you can change this.
If you plan on registering the management cluster with Tanzu Mission Control, ensure that your Tanzu Kubernetes clusters meet the requirements listed in Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in the Tanzu Mission Control documentation.
Optionally enter a name for your management cluster.
If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name. If you do specify a name, that name must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
(Optional) Deselect the Machine Health Checks checkbox if you want to disable MachineHealthCheck. You can enable or disable MachineHealthCheck on clusters after deployment by using the CLI. For instructions, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.
(Optional) Select the Enable Audit Logging checkbox to record requests made to the Kubernetes API server. For more information, see Audit Logging.
Configure additional settings that are specific to your infrastructure provider.
Under Control Plane Endpoint Provider, select Kube-Vip or NSX Advanced Load Balancer to choose the component to use for the control plane API server.
You can also use NSX Advanced Load Balancer as a load balancer for workloads, as configured in the VMware NSX Advanced Load Balancer pane, described below.
To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. For information, see Install NSX Advanced Load Balancer.
Under Control Plane Endpoint, enter a static virtual IP address or FQDN for API requests to the management cluster. This setting is required if you are using Kube-Vip.
Ensure that this IP address is not in your DHCP range, but is in the same subnet as the DHCP range. If you mapped an FQDN to the VIP address, you can specify the FQDN instead of the VIP address. For more information, see Static VIPs and Load Balancers for vSphere.
(Optional) Disable the Bastion Host checkbox if a bastion host already exists in the availability zone(s) in which you are deploying the management cluster.
If you leave this option enabled, Tanzu Kubernetes Grid creates a bastion host for you.
If this is the first time that you are deploying a management cluster to this AWS account, select the Automate creation of AWS CloudFormation Stack checkbox.
This CloudFormation stack creates the identity and access management (IAM) resources that Tanzu Kubernetes Grid needs to deploy and run clusters on Amazon EC2. For more information, see Required IAM Resources in Prepare to Deploy Management Clusters to Amazon EC2.
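If you prefer to create the stack from the CLI before launching the installer, the Tanzu CLI provides an equivalent command; a sketch, assuming your AWS credentials are already set in the environment or a profile:
tanzu management-cluster permissions aws set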
Configure Availability Zones:
From the Availability Zone 1 drop-down menu, select an availability zone for the management cluster. You can select only one availability zone if you selected the Development tile. See the image below.
From the AZ1 Worker Node Instance Type drop-down menu, select the configuration for the worker node VM from the list of instances that are available in this availability zone.
If you selected the Production tile above, use the Availability Zone 2, Availability Zone 3, and AZ Worker Node Instance Type drop-down menus to select three unique availability zones for the management cluster.
Note: In v1.4.0, if you select Production, Tanzu Kubernetes Grid deploys only one worker node, AZ1 Worker Node Instance Type, instead of three worker nodes. This is a known issue; for more information, see Deployment Issues in VMware Tanzu Kubernetes Grid 1.4 Release Notes. The control plane nodes are distributed across Availability Zone 1, Availability Zone 2, and Availability Zone 3.
If you selected an existing VPC in the previous step, use the VPC public subnet and VPC private subnet drop-down menus to select existing subnets on the VPC.
If you are deploying a production management cluster, select subnets for all three availability zones. Public subnets are not available if you selected This is not an internet facing VPC in the previous section. The image below shows the Development tile.
Under Worker Node Instance Type, select the configuration for the worker node VM.
If you are deploying the management cluster to vSphere, go to Configure VMware NSX Advanced Load Balancer.
If you are deploying the management cluster to Amazon EC2 or Azure, go to Configure Metadata.
VMware NSX Advanced Load Balancer provides an L4+L7 load balancing solution for vSphere. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads. To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. For information, see Install NSX Advanced Load Balancer.
In the optional VMware NSX Advanced Load Balancer section, you can configure Tanzu Kubernetes Grid to use NSX Advanced Load Balancer. By default, all workload clusters will use the load balancer.
If you are using LDAP or OIDC identity management, after your management cluster is created you must integrate NSX ALB with your identity provider as described in Add a Load Balancer for an Identity Provider on vSphere.
Paste the contents of the Certificate Authority that is used to generate your Controller Certificate into the Controller Certificate Authority text box and click Verify Credentials.
If you have a self-signed Controller Certificate, the Certificate Authority is the same as the Controller Certificate.
Use the Cloud Name drop-down menu to select the cloud that you created in your NSX Advanced Load Balancer deployment.
Use the Service Engine Group Name drop-down menu to select a Service Engine Group.
For Workload VIP Network Name and Management VIP Network Name, use the drop-down menus to select the name of the network where the load balancer floating IP Pool resides.
The VIP network for NSX Advanced Load Balancer must be present in the same vCenter Server instance as the Kubernetes network that Tanzu Kubernetes Grid uses. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure Service Engines.
You can see the network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface.
For Workload VIP Network CIDR and Management VIP Network CIDR, use the drop-down menu to select the CIDR of the subnet to use for the load balancer VIP.
This comes from one of the VIP Network's configured subnets. You can see the subnet CIDR for a particular network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface.
(Optional) Enter one or more cluster labels to identify clusters on which to selectively enable NSX Advanced Load Balancer or to customize NSX Advanced Load Balancer Settings per group of clusters.
By default, all clusters that you deploy with this management cluster will enable NSX Advanced Load Balancer. All clusters will share the same VMware NSX Advanced Load Balancer Controller, Cloud, Service Engine Group, and VIP Network that you entered previously. This cannot be changed later. To enable the load balancer on only a subset of clusters, or to preserve the ability to customize NSX Advanced Load Balancer settings for a group of clusters, add labels in the format key: value. For example, team: tkg.
Adding labels is useful when you want to enable the load balancer on only selected clusters, or when you want to apply different NSX Advanced Load Balancer settings to different groups of clusters.
NOTE: Labels that you define here will be used to create a label selector. Only workload cluster Cluster objects that have the matching labels will have the load balancer enabled. As a consequence, you are responsible for making sure that the workload cluster's Cluster object has the corresponding labels. For example, if you use team: tkg to enable the load balancer on a workload cluster, you will need to perform the following steps after deployment of the management cluster:
Set kubectl to the management cluster's context, for example:
kubectl config use-context management-cluster@admin
Label the Cluster object of the corresponding workload cluster with the labels that you defined. If you defined multiple key-value pairs, you need to apply all of them, for example:
kubectl label cluster <cluster-name> team=tkg
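If you defined more than one label, a hypothetical invocation that applies two labels at once; the cluster name and the env label are illustrative:
kubectl label cluster my-workload-cluster team=tkg env=staging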
This section applies to all infrastructure providers.
In the Metadata section, optionally provide descriptive information about this management cluster.
Any metadata that you specify here applies to the management cluster and to the Tanzu Kubernetes clusters that it manages, and can be accessed by using the cluster management tool of your choice.
For example, you can apply labels such as release : beta, environment : staging, or environment : production. For more information, see Labels and Selectors in the Kubernetes documentation.
In the Resources section, select vSphere resources for the management cluster to use, and click Next.
If appropriate resources do not already exist in vSphere, without quitting the Tanzu Kubernetes Grid installer, go to vSphere to create them. Then click the refresh button so that the new resources can be selected.
This section applies to all infrastructure providers.
In the Kubernetes Network section, configure the networking for Kubernetes services and click Next.
If the recommended CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are unavailable, update the values under Cluster Service CIDR and Cluster Pod CIDR.
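In the generated cluster configuration file, these settings correspond to the SERVICE_CIDR and CLUSTER_CIDR variables; a sketch showing the default values:
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11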
(Optional) To send outgoing HTTP(S) traffic from the management cluster to a proxy, for example in an internet-restricted environment, toggle Enable Proxy Settings and follow the instructions below to enter your proxy information. Tanzu Kubernetes Grid applies these settings to kubelet, containerd, and the control plane.
You can choose to use one proxy for HTTP traffic and another proxy for HTTPS traffic or to use the same proxy for both HTTP and HTTPS traffic.
To add your HTTP proxy information, under HTTP Proxy URL, enter the URL of the proxy that handles HTTP requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, enter a username and password.
To add your HTTPS proxy information, you can reuse your HTTP proxy settings or, if you want to use a different URL for HTTPS traffic, enter the URL of the proxy that handles HTTPS requests under HTTPS Proxy URL. The URL must start with http://. For example, http://myproxy.com:1234.
Under No proxy, enter a comma-separated list of network CIDRs or hostnames that must bypass the HTTP(S) proxy. If your management cluster runs on the same network as your infrastructure, behind the same proxy, set this to your infrastructure CIDRs or FQDNs so that the management cluster communicates with infrastructure directly.
For vSphere, Tanzu Kubernetes Grid automatically appends localhost, 127.0.0.1, the values of Cluster Pod CIDR and Cluster Service CIDR, .svc, and .svc.cluster.local to the list that you enter in this field.
For Amazon EC2, it appends localhost, 127.0.0.1, your VPC CIDR, Cluster Pod CIDR, and Cluster Service CIDR, .svc, .svc.cluster.local, and 169.254.0.0/16 to the list that you enter in this field.
For Azure, it appends localhost, 127.0.0.1, your VNET CIDR, Cluster Pod CIDR, and Cluster Service CIDR, .svc, .svc.cluster.local, 169.254.0.0/16, and 168.63.129.16 to the list that you enter in this field.
Important: If the management cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment, ensure that those endpoints are reachable by the proxies that you configured above or add them to No proxy. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, Harbor, and in the case of vSphere, NSX-T and NSX Advanced Load Balancer.
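In the cluster configuration file, the proxy settings correspond to the TKG_HTTP_PROXY, TKG_HTTPS_PROXY, and TKG_NO_PROXY variables; a sketch with hypothetical values:
TKG_HTTP_PROXY: http://myproxy.com:1234
TKG_HTTPS_PROXY: http://myproxy.com:1234
TKG_NO_PROXY: 10.0.0.0/16,.internal.example.com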
This section applies to all infrastructure providers. For information about how Tanzu Kubernetes Grid implements identity management, see Prepare External Identity Management.
In the Identity Management section, optionally deselect Enable Identity Management Settings.
You can disable identity management for proof-of-concept deployments, but it is strongly recommended to implement identity management in production deployments. If you disable identity management, you can reenable it later. For instructions on how to reenable identity management, see Enable Identity Management in an Existing Deployment.
If you enable identity management, select OIDC or LDAPS.
Provide details of your OIDC provider account, for example, Okta.
Client ID: the client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options in order to get a client_id and secret.
Client Secret: the secret value that you obtain from your OIDC provider.
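In the cluster configuration file, these OIDC settings correspond to variables like the following; a sketch with hypothetical Okta values:
IDENTITY_MANAGEMENT_TYPE: oidc
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-123456.okta.com
OIDC_IDENTITY_PROVIDER_CLIENT_ID: "<your client_id>"
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: "<your secret>"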
Provide details of your company's LDAPS server. All settings except for LDAPS Endpoint are optional.
Provide the user search attributes.
Provide the group search attributes.
Paste the contents of the LDAPS server CA certificate into the Root CA text box.
(Optional) Verify the LDAP settings. Due to a known issue, the verification uses cn for the user name and ou for the group name. For updates on this known issue, see the VMware Tanzu Kubernetes Grid 1.4 Release Notes.
Note: This check is performed by the LDAP host, not by the management cluster nodes, so your LDAP configuration might work even if the verification fails.
Click Next to go to Select the Base OS Image.
In the OS Image section, use the drop-down menu to select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.
The OS Image drop-down menu includes OS images that meet all of the following criteria: the image is available in your infrastructure provider, and the image is listed in the Tanzu Kubernetes Grid BOM file, in which vSphere, Amazon EC2, and Azure images are identified by their version, id, and sku field values, respectively.
This section applies to all infrastructure providers; however, the functionality described in this section is being rolled out in Tanzu Mission Control.
NOTE: Registering a Tanzu Kubernetes Grid 1.4.0 management cluster is not supported. Tanzu Mission Control does not support cluster lifecycle management of 1.4.0 workload clusters. You can use Tanzu Mission Control with Tanzu Kubernetes Grid 1.4.0 by attaching your workload clusters to Tanzu Mission Control without registering them. For more information about attaching workload clusters to Tanzu Mission Control, see Attach an Existing Cluster.
In the Registration URL field, copy and paste the registration URL you obtained from Tanzu Mission Control.
If the connection is successful, you can review the configuration YAML retrieved from the URL.
This section applies to all infrastructure providers.
In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program.
You can also opt in or out of the program after the deployment of the management cluster. For information about the CEIP, see Manage Participation in CEIP and https://www.vmware.com/solutions/trustvmware/ceip.html.
Click Review Configuration to see the details of the management cluster that you have configured.
The image below shows the configuration for a deployment to vSphere.
When you click Review Configuration, Tanzu Kubernetes Grid populates the cluster configuration file, which is located in the ~/.config/tanzu/tkg/clusterconfigs subdirectory, with the settings that you specified in the interface. You can optionally copy the cluster configuration file to another bootstrap machine without completing the deployment, and deploy the management cluster from that machine. For example, you might do this so that you can deploy the management cluster from a bootstrap machine that does not have a Web browser.
(Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.
Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.
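The copied command typically has a form like the following, pointing at the generated configuration file; the filename is a placeholder:
tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/UNIQUE-ID.yaml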
(Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.
Click Deploy Management Cluster.
Deployment of the management cluster can take several minutes. The first run of tanzu management-cluster create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so they are faster. You can follow the progress of the deployment in the installer interface or in the terminal in which you ran tanzu management-cluster create --ui. If the machine on which you run tanzu management-cluster create shuts down or restarts before the local operations finish, the deployment fails. If you inadvertently close the browser or browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
NOTE: The screen capture below shows the deployment status page in Tanzu Kubernetes Grid v1.4.0.
The installer saves the cluster configuration file in ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml. After the deployment has completed, you can rename the configuration file to something memorable, for example the name that you gave to the management cluster, and save it in a different location for future use.
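For example, to rename and relocate the file; the paths and names are illustrative:
cp ~/.config/tanzu/tkg/clusterconfigs/UNIQUE-ID.yaml ~/tkg-configs/my-mgmt-cluster-config.yaml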
For information about what happened during the deployment of the management cluster and how to connect kubectl to the management cluster, see Examine the Management Cluster Deployment.