After you have deployed the management cluster to vSphere, Amazon EC2, or Azure, you can use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters. In Tanzu Kubernetes Grid, Tanzu Kubernetes clusters are the Kubernetes clusters in which your application workloads run.

About Tanzu Kubernetes Clusters

Tanzu Kubernetes Grid automatically deploys clusters to the platform on which you deployed the management cluster. Consequently, you cannot deploy clusters to Amazon EC2 or Azure from a management cluster that is running in vSphere, or the reverse. Tanzu Kubernetes Grid deploys clusters from whichever management cluster you have set as the context for the CLI by using the tkg set management-cluster command. For information about tkg set management-cluster, see Manage Your Management Clusters.

For information about how to upgrade existing clusters to a new version of Kubernetes, see Upgrade Tanzu Kubernetes Clusters.

Tanzu Kubernetes Clusters, kubectl, and kubeconfig

When you create a management cluster, the Tanzu Kubernetes Grid CLI and kubectl contexts are automatically set to that management cluster. However, Tanzu Kubernetes Grid does not automatically set the kubectl context to a Tanzu Kubernetes cluster when you create it. You must set the kubectl context to a Tanzu Kubernetes cluster by using the kubectl config use-context command.
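
For example, for a Tanzu Kubernetes cluster named my-cluster (a hypothetical name), you can retrieve its kubeconfig and then switch kubectl to its context:

tkg get credentials my-cluster
kubectl config use-context my-cluster-admin@my-cluster

The context name shown here follows the CLUSTER-NAME-admin@CLUSTER-NAME pattern used in the examples later in this topic. Run kubectl config get-contexts to list the exact context names that are available in your kubeconfig.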

By default, unless you specify the --kubeconfig option to save the kubeconfig for a cluster to a specific file, all Tanzu Kubernetes clusters that you deploy are added to a shared .kube/config file. If you delete the shared .kube/config file and you still have the .kube-tkg/config file for the management cluster, you can recover the .kube/config of the Tanzu Kubernetes clusters with the tkg get credentials <my-cluster> command.

By default, information about management clusters is stored in a separate .kube-tkg/config file.

Do not change context or edit the .kube-tkg/config or .kube/config files while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Cluster Plans

Tanzu Kubernetes Grid provides standard templates for clusters, known as plans. In this release, there are two plans for Tanzu Kubernetes clusters:

  • By default, the dev plan deploys a cluster with one control plane node and one worker node.
  • By default, the prod plan deploys a cluster with three control plane nodes and three worker nodes.

You can specify options to deploy clusters with different numbers of control plane and worker nodes. If you deploy a cluster with multiple control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane. You can also change the number of nodes in a cluster after deployment by running the tkg scale cluster command on the cluster. For more information, see Scale Tanzu Kubernetes Clusters.
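
For example, a minimal sketch of scaling a hypothetical cluster named my-cluster after deployment, assuming that tkg scale cluster accepts the same --controlplane-machine-count and --worker-machine-count options as tkg create cluster; see Scale Tanzu Kubernetes Clusters for the definitive options:

tkg scale cluster my-cluster --controlplane-machine-count 3 --worker-machine-count 5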

Tanzu Kubernetes Cluster Plans and Node Distribution across AZs (Amazon EC2)

When you create a prod Tanzu Kubernetes cluster from a prod management cluster that is running on Amazon EC2, Tanzu Kubernetes Grid evenly distributes its control plane and worker nodes across the three Availability Zones (AZs) that you specified in your management cluster configuration. This includes Tanzu Kubernetes clusters that are configured with any of the following:

  • The default number of control plane nodes
  • A --controlplane-machine-count setting that is greater than the default number of control plane nodes
  • The default number of worker nodes
  • A --worker-machine-count setting that is greater than the default number of worker nodes

For example, if you specify --worker-machine-count 5, Tanzu Kubernetes Grid deploys two worker nodes in the first AZ, two worker nodes in the second AZ, and one worker node in the third AZ. You can optionally customize this default AZ placement mechanism for worker nodes by following the instructions in Configure AZ Placement Settings for Worker Nodes (Amazon EC2) below. You cannot customize the default AZ placement mechanism for control plane nodes.

If you want to create a prod Tanzu Kubernetes cluster from a dev management cluster, you must define a set of additional variables in the .tkg/config.yaml file before running the tkg create cluster command. For instructions, see Deploy a Prod Cluster from a Dev Management Cluster (Amazon EC2) below. After you define the additional variables, Tanzu Kubernetes Grid uses the default AZ placement mechanism described above.

Tanzu Kubernetes Cluster Networking

When you use the Tanzu Kubernetes Grid CLI to deploy a Tanzu Kubernetes cluster, the Antrea container network interface (CNI) is automatically enabled in the cluster. Alternatively, you can enable the Calico CNI or your own CNI provider. For instructions, see Deploy a Cluster with a Non-Default CNI below.

Existing Tanzu Kubernetes clusters that you deployed with Tanzu Kubernetes Grid v1.1.x and then upgraded to v1.2 continue to use Calico as the CNI provider. You cannot change the CNI provider for these clusters.

Prerequisites for Cluster Deployment

Preview the YAML for a Tanzu Kubernetes Cluster

To see a preview of the YAML file that Tanzu Kubernetes Grid will create when it deploys a Tanzu Kubernetes cluster, you can run the tkg create cluster command with the --dry-run option. If you specify --dry-run, Tanzu Kubernetes Grid displays the full YAML file for the cluster, but does not create the cluster.

  • vSphere:

    tkg create cluster my-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address> --dry-run 
    
  • Amazon EC2 and Azure:

    tkg create cluster my-cluster --plan dev --dry-run
    

You can add --dry-run to any of the commands described in the following sections to see a preview of the YAML before you deploy the cluster. If you are satisfied with the displayed configuration, run the command again without the --dry-run option to create the cluster.

If you specify --dry-run, Tanzu Kubernetes Grid sends the YAML file for the cluster to stdout, so that you can save it for repeated future use.

  • vSphere:

    tkg create cluster my-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address> --dry-run > my-cluster-config.yaml
    
  • Amazon EC2 and Azure:

    tkg create cluster my-cluster --plan dev --dry-run > my-cluster-config.yaml
    

NOTE: Running tkg create cluster with the --dry-run option works in the same way as running the tkg config cluster command. You can save the output of --dry-run and use it to create clusters. In Tanzu Kubernetes Grid 1.1.2 and later, you can deploy clusters from the saved file by using the tkg create cluster command with the --manifest option. In earlier versions, you must use kubectl apply. For information about how to deploy clusters from saved YAML files, see Create Tanzu Kubernetes Cluster Configuration Files.
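
For example, a minimal sketch of deploying a cluster from a file saved from the --dry-run output above, in Tanzu Kubernetes Grid 1.1.2 or later; depending on your platform, other options described in this topic, such as --vsphere-controlplane-endpoint-ip, might still be required:

tkg create cluster my-cluster --manifest my-cluster-config.yaml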

Deploy Tanzu Kubernetes Clusters

To deploy Tanzu Kubernetes clusters, you run the tkg create cluster command, specifying different options to deploy the Tanzu Kubernetes clusters with different configurations. Certain elements of the configuration of a Tanzu Kubernetes cluster are inherited from the .tkg/config.yaml file of the management cluster. Other configuration is determined by the options that you specify when you run tkg create cluster. The following sections describe the different tkg create cluster options.

Deploy a Default Tanzu Kubernetes Cluster

To deploy a Tanzu Kubernetes cluster to Amazon EC2 or Azure with the minimum default configuration, you run tkg create cluster, specifying the cluster name and the --plan dev option.

tkg create cluster my-cluster --plan dev

If you are deploying clusters to vSphere, you must also specify the --vsphere-controlplane-endpoint-ip option. The --vsphere-controlplane-endpoint-ip option sets a static virtual IP address for API requests to the cluster. Make sure that this IP address is not in the DHCP range, but is in the same subnet as the DHCP range. For more information, see Load Balancers for vSphere.

tkg create cluster my-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address>

This command deploys a Tanzu Kubernetes cluster that runs the default version of Kubernetes for this Tanzu Kubernetes Grid release, which in Tanzu Kubernetes Grid 1.2.0 is v1.19.1.

The cluster consists of the following VMs or instances:

  • vSphere:
    • One control plane node, with a name similar to my-cluster-control-plane-nj4z6.
    • One worker node, with a name similar to my-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • One control plane node, with a name similar to my-cluster-control-plane-d78t5.
    • One EC2 bastion node, with the name my-cluster-bastion.
    • One worker node, with a name similar to my-cluster-md-0-2vsr4.
  • Azure:
    • One control plane node, with a name similar to my-dev-cluster-20200902052434-control-plane-4d4p4.
    • One worker node, with a name similar to my-dev-cluster-20200827115645-md-0-rjdbr.

Deploy a Cluster with Multiple Worker Nodes

By default, a Tanzu Kubernetes cluster has:

  • One control plane node and one worker node if you create a dev cluster
  • Three control plane nodes and three worker nodes if you create a prod cluster

To deploy clusters with multiple worker nodes, specify the --worker-machine-count option.

  • vSphere:

    tkg create cluster my-dev-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address> --worker-machine-count 3
    
  • Amazon EC2 and Azure:

    tkg create cluster my-dev-cluster --plan dev --worker-machine-count 3
    

    NOTE: On Azure, Tanzu Kubernetes Grid v1.2 does not support distributing worker nodes in a cluster across multiple AZs.

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

  • vSphere:
    • One control plane node, with a name similar to my-dev-cluster-control-plane-nj4z6.
    • Three worker nodes, with names similar to my-dev-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • One control plane node, with a name similar to my-dev-cluster-control-plane-d78t5.
    • One EC2 bastion node, with the name my-dev-cluster-bastion.
    • Three worker nodes, with names similar to my-dev-cluster-md-0-2vsr4.
  • Azure:
    • One control plane node, with a name similar to my-dev-cluster-20200902052434-control-plane-4d4p4.
    • Three worker nodes, with names similar to my-dev-cluster-20200827115645-md-0-rjdbr.

Configure AZ Placement Settings for Worker Nodes (Amazon EC2)

When creating a prod Tanzu Kubernetes cluster on Amazon EC2, you can optionally specify how many worker nodes the tkg create cluster command deploys in each of the three AZs you selected in the Tanzu Kubernetes Grid installer interface or configured in the .tkg/config.yaml file.

To do this:

  1. Set the following variables in the .tkg/config.yaml file, as shown in the example configuration after this procedure:

    • WORKER_MACHINE_COUNT_0: Sets the number of worker nodes in the first AZ, AWS_NODE_AZ.
    • WORKER_MACHINE_COUNT_1: Sets the number of worker nodes in the second AZ, AWS_NODE_AZ_1.
    • WORKER_MACHINE_COUNT_2: Sets the number of worker nodes in the third AZ, AWS_NODE_AZ_2.
  2. Create the cluster. For example:

    tkg create cluster my-prod-cluster --plan prod
    

Setting the WORKER_MACHINE_COUNT variable instead of the variables listed above is equivalent to specifying the --worker-machine-count flag. If you set this variable, Tanzu Kubernetes Grid distributes your worker nodes evenly across the AZs.
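
For reference, the following is a sketch of the relevant .tkg/config.yaml settings for a prod cluster that places two worker nodes in the first AZ, two in the second, and one in the third. The AZ names are hypothetical placeholders, and the AWS_NODE_AZ variables are typically already set from the management cluster deployment:

AWS_NODE_AZ: us-east-1a
AWS_NODE_AZ_1: us-east-1b
AWS_NODE_AZ_2: us-east-1c
WORKER_MACHINE_COUNT_0: 2
WORKER_MACHINE_COUNT_1: 2
WORKER_MACHINE_COUNT_2: 1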

Deploy a Cluster with a Highly Available Control Plane

If you specify --plan prod, Tanzu Kubernetes Grid deploys a cluster with three control plane nodes and automatically implements stacked control plane HA for the cluster.

  • vSphere:

    tkg create cluster my-prod-cluster --plan prod --vsphere-controlplane-endpoint-ip <ip_address>
    
  • Amazon EC2 and Azure:

    tkg create cluster my-prod-cluster --plan prod
    

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

  • vSphere
    • Three control plane nodes, with names similar to my-prod-cluster-control-plane-nj4z6.
    • Three worker nodes, with names similar to my-prod-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • Three control plane nodes, with names similar to my-prod-cluster-control-plane-d78t5.
    • One EC2 bastion node, with the name my-prod-cluster-bastion.
    • Three worker nodes, with names similar to my-prod-cluster-md-0-2vsr4.
  • Azure:
    • Three control plane nodes, with names similar to my-prod-cluster-20200902052434-control-plane-4d4p4.
    • Three worker nodes, with names similar to my-prod-cluster-20200827115645-md-0-rjdbr.

You can deploy a Tanzu Kubernetes cluster with more control plane nodes by specifying the --controlplane-machine-count option. The number of control plane nodes that you specify in --controlplane-machine-count must be an odd number. Specify the number of worker nodes in the --worker-machine-count option.

  • vSphere:

    tkg create cluster my_cluster --plan prod --vsphere-controlplane-endpoint-ip <ip_address> --controlplane-machine-count 5 --worker-machine-count 10
    
  • Amazon EC2 and Azure:

    tkg create cluster my_cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10
    

Deploy a Cluster with Nodes of Different Sizes

By default, Tanzu Kubernetes Grid creates the individual nodes of the Tanzu Kubernetes clusters according to the settings that you provided when you deployed the management cluster.

  • If you deployed the management cluster from the Tanzu Kubernetes Grid installer interface, control plane and worker nodes are created with the configuration that you set in the Management Cluster Settings > Instance Type drop-down menu.
  • If you deployed the management cluster from the Tanzu Kubernetes Grid CLI, nodes are created with the configuration that you set in the following options:
    • vSphere: VSPHERE_CONTROL_PLANE_*, VSPHERE_WORKER_*
    • Amazon EC2: CONTROL_PLANE_MACHINE_TYPE, NODE_MACHINE_TYPE

You can override these settings by using the tkg create cluster --size, --controlplane-size, and --worker-size options. By using these options, you can create Tanzu Kubernetes clusters whose nodes have different configurations from those of the management cluster nodes. You can also create clusters in which the control plane nodes and worker nodes have different sizes.

To create a Tanzu Kubernetes cluster in which all of the control plane and worker node VMs are the same size, specify the --size option.

The values that you set depend on whether you are deploying to vSphere, AWS, or Azure. For information about the configurations of the different sizes of node instances for Amazon EC2, see Amazon EC2 Instance Types. For information about node instances for Azure, see Sizes for virtual machines in Azure.

  • For vSphere, set extra-large, large, medium, or small. For example:

    tkg create cluster my_cluster --plan prod --vsphere-controlplane-endpoint-ip <ip_address> --controlplane-machine-count 5 --worker-machine-count 10 --size large
    
  • For Amazon EC2, set t3.large, t3.xlarge, and so on. For example:

    tkg create cluster my_cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --size m5.large
    
  • For Azure, set Standard_D2s_v3, or Standard_DC4s_v2, and so on. For example:

    tkg create cluster my_cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --size Standard_DC4s_v2
    

To create a Tanzu Kubernetes cluster in which the control plane and worker node VMs are different sizes, specify the --controlplane-size and --worker-size options. The values that you set depend on whether you are deploying to vSphere, Amazon EC2, or Azure.

  • For vSphere, set extra-large, large, medium, or small. For example:

    tkg create cluster my_cluster --plan prod --vsphere-controlplane-endpoint-ip <ip_address> --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size large --worker-size extra-large
    
  • For Amazon EC2, set t3.large, t3.xlarge, and so on. For example:

    tkg create cluster my_cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size m5.large --worker-size t3.xlarge
    
  • For Azure, set Standard_D2s_v3, or Standard_DC4s_v2, and so on. For example:

    tkg create cluster my_cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size Standard_D2s_v3 --worker-size Standard_DC4s_v2
    

You can combine these options with the --size option. For example, if you are deploying to vSphere and you specify --size large with --worker-size extra-large, the control plane nodes will be set to large and worker nodes will be set to extra-large.

Deploy a Cluster in a Specific Namespace

If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy Tanzu Kubernetes clusters to those namespaces by using the --namespace option. If you do not specify the --namespace option, Tanzu Kubernetes Grid places clusters in the default namespace. Any namespace that you identify in the --namespace option must exist in the management cluster before you run the command. For example, you might want to create different types of clusters in dedicated namespaces. For information about creating namespaces in the management cluster, see Create Namespaces in the Management Cluster.

  • vSphere:

    tkg create cluster my-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address> --namespace my_namespace
    
  • Amazon EC2 and Azure:

    tkg create cluster my-cluster --plan dev --namespace my_namespace
    

This command deploys a default Tanzu Kubernetes cluster and places it in the designated namespace. For example, if you specify --namespace production, Tanzu Kubernetes Grid creates the Tanzu Kubernetes cluster in an existing namespace named production.

NOTE: If you have created namespaces, you must provide a unique name for all Tanzu Kubernetes clusters across all namespaces. If you provide a cluster name that is in use in another namespace in the same instance, the deployment fails with an error.

Deploy a Cluster with a Non-Default Kubernetes Version

Each release of Tanzu Kubernetes Grid provides a default version of Kubernetes. The default version for Tanzu Kubernetes Grid 1.2.0 is Kubernetes v1.19.1.

As upstream Kubernetes releases patches or new versions, VMware makes these patches and versions available in Tanzu Kubernetes Grid patch and update releases. Each Tanzu Kubernetes Grid release supports a defined set of Kubernetes versions. Tanzu Kubernetes Grid 1.2.0 adds support for Kubernetes versions 1.19.1, 1.18.8, and 1.17.11.

To deploy clusters that run a version of Kubernetes other than the default, follow the steps below.

List Available Kubernetes Versions

To discover which versions of Kubernetes are made available by a management cluster, use tkg set management-cluster to set the context of tkg to the correct management cluster, then run the tkg get kubernetesversions command.
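
For example, to set the CLI context to a hypothetical management cluster named my-management-cluster before listing versions:

tkg set management-cluster my-management-cluster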

tkg get kubernetesversions

The output lists all of the versions of Kubernetes that you can use to deploy clusters. For example:

 VERSIONS
 v1.17.11+vmware.1
 v1.17.9+vmware.1
 v1.18.3+vmware.1
 v1.18.6+vmware.1
 v1.18.8+vmware.1
 v1.19.1+vmware.2

Publish the Kubernetes Version to your Infrastructure

On vSphere and Azure, you need to take an additional step before you can deploy clusters that run non-default versions of Kubernetes:

  • vSphere: Import the appropriate base OS OVA into vSphere and convert it to a VM template. For information about importing base OVA files into vSphere, see Import the Base Image Template into vSphere.

  • Azure: Run the Azure CLI command to accept the license for the base OS version. Once you have accepted a license, you can skip this step in the future:

    1. Convert the Kubernetes version as listed by tkg get kubernetesversions into its Azure image SKU as follows (see the shell sketch after this list):
      • Change leading v to k8s-.
      • Change . to dot in the version number.
      • Change trailing +vmware.* to -ubuntu-1804, to designate Ubuntu v18.04, the OS for all Tanzu Kubernetes Grid VMs on Azure.
      • Examples: k8s-1dot17dot11-ubuntu-1804, k8s-1dot18dot2-ubuntu-1804.
    2. Run the command az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan <azure-sku>. For example:
      az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot18dot2-ubuntu-1804
      
  • Amazon EC2: No action required. The Amazon Linux 2 Amazon Machine Images (AMIs) that include the supported Kubernetes versions are publicly available to all Amazon EC2 users, in all supported AWS regions. Tanzu Kubernetes Grid automatically uses the appropriate AMI for the Kubernetes version that you specify.
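
The version-to-SKU conversion in the Azure step above can be scripted. The following is a minimal shell sketch, using a hypothetical version string; it applies the same three substitutions, in an order that avoids touching the dot in the +vmware suffix. Verify the resulting SKU against the examples listed in that step before running the az command:

K8S_VERSION="v1.18.8+vmware.1"
# Replace the +vmware suffix with -ubuntu-1804, change the leading v to k8s-, then change dots to "dot"
AZURE_SKU=$(echo "$K8S_VERSION" | sed -e 's/+vmware.*$/-ubuntu-1804/' -e 's/^v/k8s-/' -e 's/\./dot/g')
echo "$AZURE_SKU"   # k8s-1dot18dot8-ubuntu-1804
az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan "$AZURE_SKU"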

Deploy the Kubernetes Cluster

To deploy a Tanzu Kubernetes cluster with a version of Kubernetes that is not the default for your Tanzu Kubernetes Grid release, specify the version in the --kubernetes-version option.

  • Deploy a Kubernetes v1.17.11 cluster to vSphere:

    tkg create cluster k8s-1-17-11-cluster --plan dev --vsphere-controlplane-endpoint-ip <ip_address> --kubernetes-version v1.17.11
    
  • Deploy a Kubernetes v1.18.8 cluster to Amazon EC2 or Azure:

    tkg create cluster k8s-1-18-8-cluster --plan dev --kubernetes-version v1.18.8
    

These commands deploy Tanzu Kubernetes clusters that run Kubernetes v1.17.11 and v1.18.8, respectively, even though this version of Tanzu Kubernetes Grid deploys clusters with Kubernetes v1.19.1 by default.

Deploy a Cluster with a Custom OVA Image (vSphere)

If you are using a single custom OVA image for each version of Kubernetes to deploy clusters on one operating system, then follow Deploy a Cluster with a Non-Default Kubernetes Version above. In that procedure, you import the OVA into vSphere and then specify it for tkg create cluster with the --kubernetes-version argument.

If you are using multiple custom OVA images for the same Kubernetes version, then the --kubernetes-version value is ambiguous. This happens when the OVAs for the same Kubernetes version:

  • Have different operating systems, for example created by make build-node-ova-vsphere-ubuntu-1804, make build-node-ova-vsphere-photon-3, and make build-node-ova-vsphere-rhel-7.
  • Have the same name but reside in different vCenter folders.

To resolve this ambiguity, set the environment variable VSPHERE_TEMPLATE to the desired OVA image before you run tkg cluster commands such as tkg create cluster.

If the OVA template image name is unique, set VSPHERE_TEMPLATE to just the image name.

If multiple images share the same name, set VSPHERE_TEMPLATE to the full inventory path of the image in vCenter. This path follows the form /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE, where:

  • MY-DC is the datacenter containing the OVA template image
  • MY-FOLDER-PATH is the path to the image from the datacenter, as shown in the vCenter VMs and Templates view
  • MY-IMAGE is the image name

For example:

export VSPHERE_TEMPLATE="/TKG_DC/vm/TKG_IMAGES/ubuntu-1804-kube-v1.18.8-vmware.1"
tkg create cluster my-cluster --plan dev --kubernetes-version v1.18.8

You can determine the image's full vCenter inventory path manually, or use the govc CLI:

  1. Install govc, for example by running brew install govc.
  2. Set environment variables for govc to access your vCenter:
    • export GOVC_USERNAME=VCENTER-USERNAME
    • export GOVC_PASSWORD=VCENTER-PASSWORD
    • export GOVC_URL=VCENTER-URL
    • export GOVC_INSECURE=1
  3. Run govc find / -type m and find the image name in the output, which lists objects by their complete inventory paths.

For more information about custom OVA images, see Build and Use Custom OVA Images on vSphere.

Deploy a Cluster that Shares a VPC and NAT Gateway(s) with the Management Cluster (Amazon EC2)

By default, Amazon EC2 imposes a limit of 5 NAT gateways per availability zone. For more information about this limit, see Resource Usage in Your Amazon Web Services Account. If you used the option to create a new VPC when you deployed the management cluster, by default, all Tanzu Kubernetes clusters that you deploy from this management cluster will also create a new VPC and one or three NAT gateways: one NAT gateway for development clusters and three NAT gateways, one in each of your availability zones, for production clusters. So as not to hit the limit of 5 NAT gateways per availability zone, you can modify the configuration with which you deploy Tanzu Kubernetes clusters so that they reuse the VPC and NAT gateway(s) that were created when the management cluster was deployed.

How you configure Tanzu Kubernetes clusters to share a VPC and NAT gateway(s) with their management cluster depends on how the management cluster was deployed. The procedure below assumes the following:

  • The management cluster was deployed with the option to create a new VPC, either by selecting the option in the UI or by specifying AWS_VPC_CIDR in the config.yaml file.
  • Ideally, the tkg init --config option was used to save the config.yaml for the management cluster to a location other than the default .tkg/config.yaml file, for example as cluster-config.yaml.

If you deployed the management cluster with the option to reuse an existing VPC, all Tanzu Kubernetes clusters that you deploy from it share that VPC and its NAT gateway(s), and no action is required.

If the management cluster created a new VPC, to deploy Tanzu Kubernetes clusters that reuse that VPC and its NAT gateway(s), you must modify the configuration file from which you deploy the Tanzu Kubernetes clusters:

  1. Open the cluster-config.yaml file for the management cluster in a text editor.
  2. Update the setting for AWS_VPC_ID with the ID of the VPC that was created when the management cluster was deployed.

    You can obtain this ID from your Amazon EC2 dashboard. Alternatively, you can obtain it by running tkg init --ui, selecting Deploy to AWS EC2 and consulting the value that is provided if you select Select an existing VPC in the VPC for AWS section of the installer interface. Cancel the deployment when you have copied the VPC ID.

  3. Update the settings for the AWS_PUBLIC_SUBNET_ID and AWS_PRIVATE_SUBNET_ID variables. If you are deploying a prod Tanzu Kubernetes cluster, update AWS_PUBLIC_SUBNET_ID, AWS_PUBLIC_SUBNET_ID_1, and AWS_PUBLIC_SUBNET_ID_2, as well as AWS_PRIVATE_SUBNET_ID, AWS_PRIVATE_SUBNET_ID_1, and AWS_PRIVATE_SUBNET_ID_2. A sketch of these settings appears after this procedure.

    You can obtain the network information from the VPC dashboard.

  4. Save the cluster-config.yaml file.
  5. Run the tkg create cluster command with the --config option, specifying the modified config.yaml file.

    tkg create cluster my-cluster --plan dev --config cluster-config.yaml
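
For reference, the following is a sketch of the settings in cluster-config.yaml that steps 2 and 3 describe, for a prod cluster; all IDs shown are hypothetical placeholders:

AWS_VPC_ID: vpc-0a1b2c3d4e5f67890
AWS_PUBLIC_SUBNET_ID: subnet-0aaa1111aaaa1111a
AWS_PUBLIC_SUBNET_ID_1: subnet-0bbb2222bbbb2222b
AWS_PUBLIC_SUBNET_ID_2: subnet-0ccc3333cccc3333c
AWS_PRIVATE_SUBNET_ID: subnet-0ddd4444dddd4444d
AWS_PRIVATE_SUBNET_ID_1: subnet-0eee5555eeee5555e
AWS_PRIVATE_SUBNET_ID_2: subnet-0fff6666ffff6666f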
    

Deploy a Cluster to an Existing VPC and Add Subnet Tags (Amazon EC2)

If both of the following are true, you must add the kubernetes.io/cluster/YOUR-CLUSTER-NAME=shared tag to the public subnet or subnets that you intend to use for your Tanzu Kubernetes cluster:

  • You want to deploy the cluster to an existing VPC that was not created by Tanzu Kubernetes Grid.
  • You want to create services of type LoadBalancer in the cluster.

Adding the kubernetes.io/cluster/YOUR-CLUSTER-NAME=shared tag to the public subnet or subnets enables you to create services of type LoadBalancer after you deploy the cluster. To add this tag and then deploy the cluster, follow the steps below:

  1. Gather the ID or IDs of the public subnet or subnets within your existing VPC that you want to use for the cluster. To deploy a prod Tanzu Kubernetes cluster, you must provide three subnets.

  2. Create the required tag by running the following command:

    aws ec2 create-tags --resources YOUR-PUBLIC-SUBNET-ID-OR-IDS --tags Key=kubernetes.io/cluster/YOUR-CLUSTER-NAME,Value=shared
    

    Where:

    • YOUR-PUBLIC-SUBNET-ID-OR-IDS is the ID or IDs of the public subnet or subnets that you gathered in the previous step.
    • YOUR-CLUSTER-NAME is the name of the Tanzu Kubernetes cluster that you want to create.

    For example:

    aws ec2 create-tags --resources subnet-00bd5d8c88a5305c6 subnet-0b93f0fdbae3436e8 subnet-06b29d20291797698 --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
    
  3. Create the cluster. For example:

    tkg create cluster my-cluster --plan prod
    

Deploy a Prod Cluster from a Dev Management Cluster (Amazon EC2)

When you create a prod Tanzu Kubernetes cluster from a dev management cluster that is running on Amazon EC2, you must define a set of additional variables in the .tkg/config.yaml file before running the tkg create cluster command. This enables Tanzu Kubernetes Grid to create the cluster and spread its control plane and worker nodes across AZs.

To create a prod workload cluster from a dev management cluster on Amazon EC2, perform the steps below:

  1. Set the following variables in the .tkg/config.yaml file:

    • AWS_NODE_AZ variables: AWS_NODE_AZ was set when you deployed your dev management cluster. For the prod workload cluster, add AWS_NODE_AZ_1 and AWS_NODE_AZ_2.
    • AWS_PUBLIC_NODE_CIDR (new VPC) or AWS_PUBLIC_SUBNET_ID (existing VPC) variables: AWS_PUBLIC_NODE_CIDR or AWS_PUBLIC_SUBNET_ID was set when you deployed your dev management cluster. For the prod workload cluster, add one of the following:
      • AWS_PUBLIC_NODE_CIDR_1 and AWS_PUBLIC_NODE_CIDR_2
      • AWS_PUBLIC_SUBNET_ID_1 and AWS_PUBLIC_SUBNET_ID_2
    • AWS_PRIVATE_NODE_CIDR (new VPC) or AWS_PRIVATE_SUBNET_ID (existing VPC) variables: AWS_PRIVATE_NODE_CIDR or AWS_PRIVATE_SUBNET_ID was set when you deployed your dev management cluster. For the prod workload cluster, add one of the following:
      • AWS_PRIVATE_NODE_CIDR_1 and AWS_PRIVATE_NODE_CIDR_2
      • AWS_PRIVATE_SUBNET_ID_1 and AWS_PRIVATE_SUBNET_ID_2

    If you do not define these variables, the cluster creation operation fails. A sketch of these settings appears after this procedure.

  2. (Optional) Customize the default AZ placement mechanism for the worker nodes that you intend to deploy by following the instructions in Configure AZ Placement Settings for Worker Nodes (Amazon EC2). By default, Tanzu Kubernetes Grid distributes prod worker nodes evenly across the AZs.

  3. Deploy the cluster by running the tkg create cluster command. For example:

    tkg create cluster my-cluster --plan prod
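
For reference, the following is a sketch of the additional settings that step 1 describes, for a cluster that uses a new VPC; the AZ names and CIDR ranges are hypothetical placeholders:

AWS_NODE_AZ_1: us-east-1b
AWS_NODE_AZ_2: us-east-1c
AWS_PUBLIC_NODE_CIDR_1: 10.0.2.0/24
AWS_PUBLIC_NODE_CIDR_2: 10.0.4.0/24
AWS_PRIVATE_NODE_CIDR_1: 10.0.3.0/24
AWS_PRIVATE_NODE_CIDR_2: 10.0.5.0/24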
    

Deploy a Cluster with Custom Node Subnets (Azure)

To specify custom subnets (IP ranges) for the nodes in a cluster, set the following variables before you create the cluster. You can define them as environment variables before you run tkg create cluster, or include them in a manifest that you pass in with the --manifest option.

To specify a custom subnet (IP range) for the control plane node in a cluster:

  • Subnet already defined in Azure: Set AZURE_CONTROL_PLANE_SUBNET_NAME to the subnet name.
  • Create a new subnet: Set AZURE_CONTROL_PLANE_SUBNET_NAME to a name for the new subnet, and optionally set AZURE_CONTROL_PLANE_SUBNET_CIDR to a CIDR range within the configured Azure VNET.
    • If you omit AZURE_CONTROL_PLANE_SUBNET_CIDR, a CIDR range is generated automatically.

To specify a custom subnet for the worker nodes in a cluster, set environment variables AZURE_NODE_SUBNET_NAME and AZURE_NODE_SUBNET_CIDR following the same rules as for the control plane node, above.
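
For example, the following is a minimal sketch that defines new control plane and worker node subnets as environment variables before creating the cluster; the subnet names and CIDR ranges are hypothetical and must fall within your configured Azure VNET:

export AZURE_CONTROL_PLANE_SUBNET_NAME=my-cp-subnet
export AZURE_CONTROL_PLANE_SUBNET_CIDR=10.0.1.0/24
export AZURE_NODE_SUBNET_NAME=my-node-subnet
export AZURE_NODE_SUBNET_CIDR=10.0.2.0/24
tkg create cluster my-cluster --plan dev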

Deploy a Cluster with a Non-Default CNI

As described in Tanzu Kubernetes Cluster Networking above, Antrea is the default CNI for Tanzu Kubernetes clusters. You can change the CNI for a Tanzu Kubernetes cluster by specifying the --cni flag in the tkg create cluster command. The flag supports the following options: antrea, calico, and none.

If you do not specify the --cni flag, Antrea is enabled by default.

Enable Calico

To enable Calico in a Tanzu Kubernetes cluster, run the following command:

tkg create cluster CLUSTER-NAME --cni calico

After the cluster creation process completes, you can examine the cluster as described in Obtain Tanzu Kubernetes Cluster Credentials and Examine the Deployed Cluster.

Enable a Custom CNI Provider

To enable a custom CNI provider in a Tanzu Kubernetes cluster, follow the steps below:

  1. Specify --cni none when you create the cluster. For example:

    tkg create cluster my-cluster --cni none
    

    The cluster creation process will not succeed until you apply a CNI to the cluster. When you specify --cni none, the cluster creation process detects that the CNI provider is not present and enters one of the following states:

    • If MachineHealthCheck is enabled, the tkg create cluster command attempts to create the cluster machine deployment VMs and fails if you do not apply your CNI provider before the cluster creation process times out.
    • If MachineHealthCheck is disabled, the tkg create cluster command attempts to apply ClusterResourceSet to the cluster and fails if you do not apply your CNI provider before the cluster creation process times out.

      You can monitor these states in the Cluster API logs on the management cluster. For instructions on how to access the Cluster API logs, see Monitor Workload Cluster Deployments in Cluster API Logs.

  2. When the logs of the tkg create cluster command print "Waiting for cluster nodes to be available...", which means the cluster has been initialized, apply your CNI provider to the cluster:

    1. Get the credentials of the cluster. For example:

      tkg get credentials my-cluster
      
    2. Set the context of kubectl to the cluster. For example:

      kubectl config use-context my-cluster-admin@my-cluster
      
    3. Apply the CNI provider to the cluster:

      kubectl apply -f PATH-TO-YOUR-CNI-CONFIGURATION/example.yaml
      
  3. Monitor the status of the cluster by using the tkg get clusters command. When the cluster creation completes, the cluster status changes from creating to running. For more information about how to examine your cluster, see Connect to and Examine Tanzu Kubernetes Clusters.

You can enable or disable MachineHealthCheck on individual clusters using the Tanzu Kubernetes Grid CLI. For more information, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.

Deploy an Authentication-Enabled Tanzu Kubernetes Cluster

The tkg create cluster command includes an option, --enable-cluster-options oidc, that allows you to deploy Tanzu Kubernetes clusters that implement authentication. If you enable authentication, only users with the correct permissions can access those clusters. To use the --enable-cluster-options oidc option, you must implement the Dex and Gangway extensions. For information about how to implement authentication with Dex and Gangway, see Implementing User Authentication with Dex and Gangway.
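
For example, assuming that the Dex and Gangway extensions are already implemented for your Tanzu Kubernetes Grid instance, you can deploy an authentication-enabled cluster as follows; on vSphere, also include the --vsphere-controlplane-endpoint-ip option, as in the earlier examples:

tkg create cluster my-cluster --plan dev --enable-cluster-options oidc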

What to Do Next: Configure DHCP Reservations (vSphere Only)

Each control plane and worker node that you deploy to vSphere requires a static IP address. To make the IP addresses that your DHCP server assigned to the cluster nodes static, you can configure a DHCP reservation for each control plane and worker node in the cluster. For instructions on how to configure DHCP reservations, see your DHCP server documentation.
