This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to vSphere, Amazon Elastic Compute Cloud (Amazon EC2), or Microsoft Azure. The installer interface guides you through the deployment of the management cluster and provides different configurations for you to select or modify. If this is the first time that you are deploying a management cluster to a given infrastructure provider, it is recommended that you use the installer interface.
Before you can deploy a management cluster, you must make sure that your environment meets the requirements for the target infrastructure provider.
- On Amazon EC2, management cluster node VMs use instance types such as `t3.large` or `t3.xlarge`. For more information, see Amazon EC2 Instance Types.
- On Azure, management cluster node VMs use sizes such as `Standard_D2s_v3` or `Standard_D4s_v3`. For more information, see Sizes for virtual machines in Azure.

Set TKG_BOM_CUSTOM_IMAGE_TAG
Before you can deploy a management cluster, you must specify the correct BOM file to use by setting a local environment variable. In the event of a patch release to Tanzu Kubernetes Grid, the BOM file may require an update to match the updated base image files.
Note For more information about recent security patch updates to VMware Tanzu Kubernetes Grid v1.3, see the VMware Tanzu Kubernetes Grid v1.3.1 Release Notes and this Knowledgebase Article.
On the machine where you run the Tanzu CLI, perform the following steps:
1. Remove any existing BOM data.

   ```sh
   rm -rf ~/.tanzu/tkg/bom
   ```

2. Specify the updated BOM to use by setting the following variable.

   ```sh
   export TKG_BOM_CUSTOM_IMAGE_TAG="v1.3.1-patch1"
   ```
3. Run the `tanzu management-cluster create` command with no additional parameters.

   ```sh
   tanzu management-cluster create
   ```

   This command produces an error but results in the BOM files being downloaded to `~/.tanzu/tkg/bom`.
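Putting the steps above together, a minimal end-to-end sketch of the BOM refresh looks like this; the `v1.3.1-patch1` tag is the example value from this topic, and the final `ls` is just an optional sanity check:

```sh
# Remove stale BOM data so the CLI cannot reuse old files.
rm -rf ~/.tanzu/tkg/bom

# Pin the patched BOM tag (example value from this topic).
export TKG_BOM_CUSTOM_IMAGE_TAG="v1.3.1-patch1"

# This run is expected to error out, but it downloads the BOM files first.
tanzu management-cluster create || true

# Optional sanity check: the new BOM files should now be present.
ls ~/.tanzu/tkg/bom
```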
Warning: The `tanzu management-cluster create` command takes time to complete. While `tanzu management-cluster create` is running, do not run additional invocations of `tanzu management-cluster create` on the same bootstrap machine to deploy multiple management clusters, change context, or edit `~/.kube-tkg/config`.
On the machine on which you downloaded and installed the Tanzu CLI, run the `tanzu management-cluster create` command with the `--ui` option.

```sh
tanzu management-cluster create --ui
```
The installer interface launches in a browser and takes you through steps to configure the management cluster.
To run the installer interface somewhere other than your default local browser, run `tanzu management-cluster create --ui` with the `--browser none` option described in Installer Interface Options below.

The `tanzu management-cluster create --ui` command saves the settings from your installer input in a cluster configuration file. After you confirm your input values on the last pane of the installer interface, the installer saves them to `~/.tanzu/tkg/clusterconfigs` with a generated filename of the form `UNIQUE-ID.yaml`.
By default, Tanzu Kubernetes Grid saves the `kubeconfig` for all management clusters in the `~/.kube-tkg/config` file. If you want to save the `kubeconfig` file for your management cluster to a different location, set the `KUBECONFIG` environment variable before running `tanzu management-cluster create`.

```sh
KUBECONFIG=/path/to/mc-kubeconfig.yaml
```
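For example, a sketch of the flow using the hypothetical path above; note that the variable must be exported (or set inline) so that the `tanzu` CLI, a child process, inherits it:

```sh
# Export so that child processes such as the tanzu CLI see the variable.
export KUBECONFIG=/path/to/mc-kubeconfig.yaml
tanzu management-cluster create --ui

# After deployment, kubectl reads the same file by default.
kubectl config get-contexts
```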
If the prerequisites are met, `tanzu management-cluster create --ui` launches the Tanzu Kubernetes Grid installer interface.
By default, `tanzu management-cluster create --ui` opens the installer interface locally, at http://127.0.0.1:8080 in your default browser. The Installer Interface Options section below explains how you can change where the installer interface runs, including running it on a different machine from the `tanzu` CLI.
Click the Deploy button for VMware vSphere, Amazon EC2, or Microsoft Azure.
By default, `tanzu management-cluster create --ui` opens the installer interface locally, at http://127.0.0.1:8080 in your default browser. You can use the `--browser` and `--bind` options to control where the installer interface runs:

- `--browser` specifies the local browser to open the interface in. Supported values are `chrome`, `firefox`, `safari`, `ie`, `edge`, or `none`. Use `none` with `--bind` to run the interface on a different machine, as described below.
- `--bind` specifies the IP address and port to serve the interface from.

Warning: Serving the installer interface from a non-default IP address and port could expose the `tanzu` CLI to a potential security risk while the interface is running. VMware recommends passing in to the `--bind` option an IP and port on a secure network.
Use cases for `--browser` and `--bind` include:

- If another process is already running on http://127.0.0.1:8080, use `--bind` to serve the interface from a different local port.
- If you do not want the installer to open a local browser automatically, use `--browser none`.
- To run the `tanzu` CLI and create management clusters on a remote machine, and run the installer interface locally or elsewhere, run `tanzu management-cluster create --ui` with the following options and values:
  - `--bind`: an IP address and port for the remote machine
  - `--browser`: `none`

For example:

```sh
tanzu management-cluster create --ui --bind 192.168.1.87:5555 --browser none
```
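If the remote machine is reachable over SSH, one way to follow the security recommendation above (a suggestion, not a step from this procedure) is to bind the interface to the loopback address on the remote machine and tunnel to it; `user` and `remote-machine` are placeholders:

```sh
# On the remote machine: serve the interface only on localhost.
tanzu management-cluster create --ui --bind 127.0.0.1:8080 --browser none

# On your local machine: forward a local port to the remote interface over SSH.
ssh -L 8080:127.0.0.1:8080 user@remote-machine

# Then open http://127.0.0.1:8080 in your local browser.
```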
The options to configure the infrastructure provider section of the installer interface depend on which provider you are using.
In the IaaS Provider section, enter the IP address or fully qualified domain name (FQDN) for the vCenter Server instance on which to deploy the management cluster.
Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.
Enter the vCenter Single Sign On account name and password for a user account that has the required privileges for Tanzu Kubernetes Grid operation, and click Connect.
The account name must include the domain, for example `tkg-user@vsphere.local`.
Verify the SSL thumbprint of the vCenter Server certificate and click Continue if it is valid.
For information about how to obtain the vCenter Server certificate thumbprint, see Obtain vSphere Certificate Thumbprints.
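As a quick alternative to the linked procedure, one generic way to print a server certificate thumbprint from the bootstrap machine is with `openssl`; this is an illustration, not the documented procedure, and the vCenter address is a placeholder:

```sh
# Print the SHA-1 fingerprint of the certificate served on vCenter's port 443.
echo | openssl s_client -connect vcenter.example.com:443 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout
```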
If you are deploying a management cluster to a vSphere 7 instance, confirm whether or not you want to proceed with the deployment.
On vSphere 7, the vSphere with Tanzu option includes a built-in Supervisor Cluster that works as a management cluster and provides a better experience than a separate management cluster deployed by Tanzu Kubernetes Grid. Deploying a Tanzu Kubernetes Grid management cluster to vSphere 7 when vSphere with Tanzu is not enabled is supported, but the preferred option is to enable vSphere with Tanzu and use the Supervisor Cluster. VMware Cloud on AWS and Azure VMware Solution do not support a Supervisor Cluster, so you need to deploy a management cluster. For information, see Add a vSphere with Tanzu Supervisor Cluster as a Management Cluster.
To reflect the recommendation to use vSphere with Tanzu when deploying to vSphere 7, the Tanzu Kubernetes Grid installer detects whether vSphere with Tanzu is enabled on the target instance and asks you to confirm whether you want to proceed with deploying a management cluster.
Select the datacenter in which to deploy the management cluster from the Datacenter drop-down menu.
Paste the contents of your SSH public key into the text box and click Next.
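If you do not already have a key pair, you can generate one with standard OpenSSH tooling before pasting the public half into the installer; the comment string and file path here are examples:

```sh
# Generate a 4096-bit RSA key pair; accept or change the default path.
ssh-keygen -t rsa -b 4096 -C "tkg@example.com" -f ~/.ssh/id_rsa

# The *public* key is what you paste into the installer.
cat ~/.ssh/id_rsa.pub
```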
For the next steps, go to Configure the Management Cluster Settings.
If this is the first time that you are deploying a management cluster, select the Automate creation of AWS CloudFormation Stack checkbox, and click Connect.
This CloudFormation stack creates the identity and access management (IAM) resources that Tanzu Kubernetes Grid needs to deploy and run clusters on Amazon EC2. For more information, see Permissions Set by Tanzu Kubernetes Grid in Prepare to Deploy Management Clusters to Amazon EC2.
IMPORTANT: The Automate creation of AWS CloudFormation Stack checkbox replaces the `clusterawsadm` command line utility that existed in Tanzu Kubernetes Grid v1.1.x and earlier. For existing management and Tanzu Kubernetes clusters initially deployed with v1.1.x or earlier, continue to use the CloudFormation stack that was created by running the `clusterawsadm alpha bootstrap create-stack` command.
In the VPC for AWS section, do one of the following:
To create a new VPC, select Create new VPC on AWS, check that the pre-filled CIDR block is available, and click Next. If the recommended CIDR block is not available, enter a new IP range in CIDR format for the management cluster to use. The recommended CIDR block for VPC CIDR is 10.0.0.0/16.
To use an existing VPC, select Select an existing VPC and select the VPC ID from the drop-down menu. The VPC CIDR block is filled in automatically when you select the VPC.
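If you are unsure whether the recommended block overlaps an existing VPC, one way to list the CIDR blocks already in use (assuming the AWS CLI is installed and configured for your account and region) is:

```sh
# List the CIDR blocks of all existing VPCs in the current region.
aws ec2 describe-vpcs --query 'Vpcs[].CidrBlock' --output text
```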
For the next steps, go to Configure the Management Cluster Settings.
IMPORTANT: If this is the first time that you are deploying a management cluster to Azure with a new version of Tanzu Kubernetes Grid, for example v1.3.1, make sure that you have accepted the base image license for that version. For information, see Accept the Base Image License in Prepare to Deploy Management Clusters to Microsoft Azure.
In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID for your Azure account and click Connect. You recorded these values when you registered an Azure app and created a secret for it using the Azure Portal.
Paste the contents of your SSH public key, such as `.ssh/id_rsa.pub`, into the text box.
Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.
If you select Select an existing resource group, use the drop-down menu to select the group, then click Next.
If you select Create a new resource group, enter a name for the new resource group and then click Next.
In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button.
If you select Create a new VNET on Azure, use the drop-down menu to select the resource group in which to create the VNET and provide the following:
- A CIDR block for the VNET. The default is `10.0.0.0/16`.
- A CIDR for the control plane subnet. The default is `10.0.0.0/24`.
- A CIDR for the worker node subnet. The default is `10.0.1.0/24`.

After configuring these fields, click Next.
If you select Select an existing VNET, use the drop-down menus to select the resource group in which the VNET is located, the VNET name, the control plane and worker node subnets, and then click Next.
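To help decide between creating a new VNET and reusing an existing one, you can list the VNETs and address spaces already present (assuming the Azure CLI is installed and you are logged in):

```sh
# List VNET names, resource groups, and address spaces as a table.
az network vnet list \
  --query '[].{name:name, resourceGroup:resourceGroup, addressSpace:addressSpace.addressPrefixes}' \
  --output table
```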
This section applies to all infrastructure providers.
In the Management Cluster Settings section, select the Development or Production tile.
In either of the Development or Production tiles, use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.
Choose the configuration for the control plane node VMs depending on the expected workloads that they will run. For example, some workloads might require a large compute capacity but relatively little storage, while others might require a large amount of storage and less compute capacity. If you select an instance type in the Production tile, the instance type that you selected is automatically selected for the Worker Node Instance Type. If necessary, you can change this.
If you plan on registering the management cluster with Tanzu Mission Control, ensure that your Tanzu Kubernetes clusters meet the requirements listed in Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in the Tanzu Mission Control documentation.
Optionally enter a name for your management cluster.
If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name. If you do specify a name, that name must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
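As an illustration of the naming rule (not part of the installer), a quick shell check that a candidate name uses lowercase DNS-hostname characters and ends with a letter might look like:

```sh
NAME="tkg-mgmt-cluster"   # candidate name; placeholder
# Lowercase letters, digits, and hyphens only; must end with a letter.
if printf '%s' "$NAME" | grep -qE '^([a-z]|[a-z0-9][a-z0-9-]*[a-z])$'; then
  echo "name looks valid"
else
  echo "name violates the naming rule"
fi
```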
Under Worker Node Instance Type, select the configuration for the worker node VM.
Deselect the Machine Health Checks checkbox if you want to disable `MachineHealthCheck`.

`MachineHealthCheck` provides node health monitoring and node auto-repair on the clusters that you deploy with this management cluster. You can enable or disable `MachineHealthCheck` on clusters after deployment by using the CLI. For instructions, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.
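`MachineHealthCheck` is a Cluster API resource, so once the management cluster exists you can also inspect it directly with `kubectl`. A sketch, assuming your kubectl context points at the management cluster and `CLUSTER-NAME` is a placeholder:

```sh
# List MachineHealthCheck objects across all namespaces.
kubectl get machinehealthchecks --all-namespaces

# Show the health conditions and remediation settings for one cluster's check.
kubectl describe machinehealthcheck CLUSTER-NAME -n default
```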
(Azure Only) If you are deploying the management cluster to Azure, click Next.
For the next steps for an Azure deployment, go to Configure Metadata.
(vSphere Only) Under Control Plane Endpoint, enter a static virtual IP address or FQDN for API requests to the management cluster.
Ensure that this IP address is not in your DHCP range, but is in the same subnet as the DHCP range. If you mapped an FQDN to the VIP address, you can specify the FQDN instead of the VIP address. For more information, see Static VIPs and Load Balancers for vSphere.
(Amazon EC2 only) Optionally, deselect the Bastion Host checkbox if a bastion host already exists in the availability zone(s) in which you are deploying the management cluster.
If you leave this option enabled, Tanzu Kubernetes Grid creates a bastion host for you.
(Amazon EC2 only) Configure Availability Zones
From the Availability Zone 1 drop-down menu, select an availability zone for the management cluster. You can select only one availability zone in the Development tile. See the image below.
If you selected the Production tile above, use the Availability Zone 1, Availability Zone 2, and Availability Zone 3 drop-down menus to select three unique availability zones for the management cluster. When Tanzu Kubernetes Grid deploys the management cluster, which includes three control plane nodes, it distributes the control plane nodes across these availability zones.
To complete the configuration of the Management Cluster Settings section, do one of the following:

- If you are deploying to vSphere, click Next to go to Configure VMware NSX Advanced Load Balancer.
- If you are deploying to Amazon EC2, click Next to go to Configure Metadata.
VMware NSX Advanced Load Balancer provides an L4 load balancing solution for vSphere. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads. To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. For information, see Install VMware NSX Advanced Load Balancer on a vSphere Distributed Switch.
In the optional VMware NSX Advanced Load Balancer section, you can configure Tanzu Kubernetes Grid to use NSX Advanced Load Balancer. By default, all workload clusters will use the load balancer.
Use the Cloud Name drop-down menu to select the cloud that you created in your NSX Advanced Load Balancer deployment.
For example, `Default-Cloud`.
Use the Service Engine Group Name drop-down menu to select a Service Engine Group.
For example, `Default-Group`.
For VIP Network Name, use the drop-down menu to select the name of the network where the load balancer floating IP Pool resides.
The VIP network for NSX Advanced Load Balancer must be present in the same vCenter Server instance as the Kubernetes network that Tanzu Kubernetes Grid uses. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure Service Engines. The drop-down menu is present in Tanzu Kubernetes Grid v1.3.1 and later. In v1.3.0, you enter the name manually.
You can see the network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface.
For VIP Network CIDR, use the drop-down menu to select the CIDR of the subnet to use for the load balancer VIP.
This comes from one of the VIP Network’s configured subnets. You can see the subnet CIDR for a particular network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface. The drop-down menu is present in Tanzu Kubernetes Grid v1.3.1 and later. In v1.3.0, you enter the CIDR manually.
Paste the contents of the Certificate Authority that is used to generate your Controller Certificate into the Controller Certificate Authority text box.
If you have a self-signed Controller Certificate, the Certificate Authority is the same as the Controller Certificate.
In this version of Tanzu Kubernetes Grid, skip the Cluster Labels (Optional) field.
This section applies to all infrastructure providers.
In the Metadata section, optionally provide descriptive information about this management cluster.
Any metadata that you specify here applies to the management cluster and to the Tanzu Kubernetes clusters that it manages, and can be accessed by using the cluster management tool of your choice.
Labels: key/value pairs that help you to identify clusters, for example `release : beta`, `environment : staging`, or `environment : production`. For more information, see Labels and Selectors in the Kubernetes documentation.

If you are deploying to vSphere, click Next to go to Configure Resources. If you are deploying to Amazon EC2 or Azure, click Next to go to Configure the Kubernetes Network and Proxies.
In the Resources section, select vSphere resources for the management cluster to use, and click Next.
If appropriate resources do not already exist in vSphere, without quitting the Tanzu Kubernetes Grid installer, go to vSphere to create them. Then click the refresh button so that the new resources can be selected.
This section applies to all infrastructure providers.
In the Kubernetes Network section, configure the networking for Kubernetes services and click Next.
If the recommended CIDR ranges of `100.64.0.0/13` and `100.96.0.0/11` are unavailable, update the values under Cluster Service CIDR and Cluster Pod CIDR.

(Optional) To send outgoing HTTP(S) traffic from the management cluster to a proxy, toggle Enable Proxy Settings and follow the instructions below to enter your proxy information. Tanzu Kubernetes Grid applies these settings to kubelet, containerd, and the control plane.
You can choose to use one proxy for HTTP traffic and another proxy for HTTPS traffic or to use the same proxy for both HTTP and HTTPS traffic.
To add your HTTP proxy information, under HTTP Proxy URL, enter the URL of the proxy that handles HTTP requests. The URL must start with `http://`. For example, `http://myproxy.com:1234`.

To add your HTTPS proxy information, do one of the following:

- To use the same URL for both HTTP and HTTPS traffic, select the option to reuse your HTTP proxy configuration.
- If you want to use a different URL for HTTPS traffic, under HTTPS Proxy URL, enter the URL of the proxy that handles HTTPS requests. The URL must start with `http://`. For example, `http://myproxy.com:1234`.
.Under No proxy, enter a comma-separated list of network CIDRs or hostnames that must bypass the HTTP(S) proxy.
For example, `noproxy.yourdomain.com,192.168.0.0/24`.
- On vSphere, Tanzu Kubernetes Grid automatically appends `localhost`, `127.0.0.1`, the values of Cluster Pod CIDR and Cluster Service CIDR, `.svc`, and `.svc.cluster.local` to the list that you enter in this field.
- On Amazon EC2, Tanzu Kubernetes Grid automatically appends `localhost`, `127.0.0.1`, your VPC CIDR, Cluster Pod CIDR, and Cluster Service CIDR, `.svc`, `.svc.cluster.local`, and `169.254.0.0/16` to the list that you enter in this field.
- On Azure, Tanzu Kubernetes Grid automatically appends `localhost`, `127.0.0.1`, your VNET CIDR, Cluster Pod CIDR, and Cluster Service CIDR, `.svc`, `.svc.cluster.local`, `169.254.0.0/16`, and `168.63.129.16` to the list that you enter in this field.

Important: If the management cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment, ensure that those endpoints are reachable by the proxies that you configured above or add them to No proxy. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, Harbor, and in the case of vSphere, NSX-T and NSX Advanced Load Balancer.
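If you later edit the generated cluster configuration file rather than using the interface, the same proxy settings correspond to configuration variables. A sketch using this section's example values; verify the variable names against the cluster configuration reference for your Tanzu Kubernetes Grid version:

```sh
# Proxy settings as cluster configuration variables (names per the TKG
# configuration reference; confirm for your version before relying on them).
TKG_HTTP_PROXY="http://myproxy.com:1234"
TKG_HTTPS_PROXY="http://myproxy.com:1234"
TKG_NO_PROXY="noproxy.yourdomain.com,192.168.0.0/24"
```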
This section applies to all infrastructure providers. For information about how Tanzu Kubernetes Grid implements identity management, see Enabling Identity Management in Tanzu Kubernetes Grid.
In the Identity Management section, optionally disable Enable Identity Management Settings.
You can disable identity management for proof-of-concept deployments, but it is strongly recommended to implement identity management in production deployments. If you disable identity management, you can reenable it later. For instructions on how to reenable identity management, see Enable Identity Management After Management Cluster Deployment.
If you enable identity management, select OIDC or LDAPS.
OIDC:
Provide details of your OIDC provider account, for example, Okta.

- Client ID: the `client_id` value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options in order to get a `client_id` and `secret`.
- Client Secret: the `secret` value that you obtain from your OIDC provider.
- Scopes: a comma-separated list of additional scopes to request in the token response. For example, `openid,groups,email`.
- Username Claim: the name of your username claim, for example `user_name`, `email`, or `code`.
- Groups Claim: the name of your groups claim, for example `groups`.
.LDAPS:
Provide details of your company's LDAPS server. All settings except for LDAPS Endpoint are optional.

- LDAPS Endpoint: the IP or DNS address of your LDAPS server, in the form `host:port`.
Provide the user search attributes.

- Base DN: the point from which to start the LDAP search, for example `OU=Users,OU=domain,DC=io`.
- Username: the LDAP attribute that contains the user ID, for example `uid, sAMAccountName`.
Provide the group search attributes.

- Base DN: the point from which to start the LDAP search, for example `OU=Groups,OU=domain,DC=io`.
- Name Attribute: the LDAP attribute that holds the name of the group, for example `cn`.
- User Attribute: the attribute of the user record that is used as the value of the group's membership attribute, for example `distinguishedName, dn`.
- Group Attribute: the attribute of the group record that holds the member information, for example `member`.
.Paste the contents of the LDAPS server CA certificate into the Root CA text box.
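Before committing these values, you may want to test them from the bootstrap machine. A sketch using the OpenLDAP client tools (not part of the installer); the host and filter are placeholders, and the base DN reuses the example above:

```sh
# Bind anonymously over LDAPS and search the user base DN for a test account.
ldapsearch -H ldaps://ldaps.example.com:636 -x \
  -b "OU=Users,OU=domain,DC=io" "(uid=testuser)"
```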
If you are deploying to vSphere, click Next to go to Select the Base OS Image. If you are deploying to Amazon EC2 or Azure, click Next to go to Register with Tanzu Mission Control.
In the OS Image section, use the drop-down menu to select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.
The drop-down menu includes all of the image templates that are present in your vSphere instance that meet the criteria for use as Tanzu Kubernetes Grid base images. The image template must include the correct version of Kubernetes for this release of Tanzu Kubernetes Grid. If you have not already imported a suitable image template to vSphere, you can do so now without quitting the Tanzu Kubernetes Grid installer. After you import it, use the Refresh button to make it available in the drop-down menu.
This section applies to all infrastructure providers; however, the functionality described in this section is being rolled out in Tanzu Mission Control.
Note At time of publication, you can only register Tanzu Kubernetes Grid management clusters on certain infrastructure providers. For a list of currently supported providers, see Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in the Tanzu Mission Control documentation.
You can also register your Tanzu Kubernetes Grid management cluster with Tanzu Mission Control after you deploy the cluster. For more information, see Register Your Management Cluster with Tanzu Mission Control.
In the Registration URL field, copy and paste the registration URL you obtained from Tanzu Mission Control.
If the connection is successful, you can review the configuration YAML retrieved from the URL.
Click Next.
This section applies to all infrastructure providers.
In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program.
You can also opt in or out of the program after the deployment of the management cluster. For information about the CEIP, see Managing Participation in CEIP and https://www.vmware.com/solutions/trustvmware/ceip.html.
Click Review Configuration to see the details of the management cluster that you have configured.
The image below shows the configuration for a deployment to vSphere.
When you click Review Configuration, Tanzu Kubernetes Grid populates the cluster configuration file, which is located in the `~/.tanzu/tkg/clusterconfigs` subdirectory, with the settings that you specified in the interface. You can optionally copy the cluster configuration file to another bootstrap machine and deploy the management cluster from that machine, without completing the deployment here. For example, you might do this so that you can deploy the management cluster from a bootstrap machine that does not have a Web browser.
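For example, a sketch of that workflow; `UNIQUE-ID.yaml` is the generated filename described above, and the remote host is a placeholder. The `--file` option tells `tanzu management-cluster create` to deploy from a saved configuration file:

```sh
# Copy the generated configuration to the other bootstrap machine.
scp ~/.tanzu/tkg/clusterconfigs/UNIQUE-ID.yaml user@other-machine:~/

# On the other machine, deploy the management cluster from the saved file.
tanzu management-cluster create --file ~/UNIQUE-ID.yaml
```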
(Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.
Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.
(Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.
Click Deploy Management Cluster.
Deployment of the management cluster can take several minutes. The first run of `tanzu management-cluster create` takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so are faster. You can follow the progress of the deployment of the management cluster in the installer interface or in the terminal in which you ran `tanzu management-cluster create --ui`. If the machine on which you run `tanzu management-cluster create` shuts down or restarts before the local operations finish, the deployment will fail. If you inadvertently close the browser or browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
NOTE: The screen capture below shows the deployment status page in Tanzu Kubernetes Grid v1.3.1.
The installer saves the management cluster configuration to `~/.tanzu/tkg/clusterconfigs` with a generated filename of the form `UNIQUE-ID.yaml`. After the deployment has completed, you can rename the configuration file to something memorable, for example the name that you provided to the management cluster, and save it in a different location for future use.

For information about how to connect `kubectl` to the management cluster, see Examine the Management Cluster Deployment.