After you have deployed the management cluster to either vSphere or Amazon EC2, you can use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters. In Tanzu Kubernetes Grid, Tanzu Kubernetes clusters are the Kubernetes clusters in which your applications run.

To deploy Tanzu Kubernetes clusters, you run the tkg create cluster command, specifying options to deploy clusters with different configurations. Certain elements of a Tanzu Kubernetes cluster's configuration are inherited from the .tkg/config.yaml file of the management cluster.

Tanzu Kubernetes Grid automatically deploys clusters to the platform on which you deployed the management cluster. You cannot deploy clusters to Amazon EC2 from a management cluster that is running in vSphere, or the reverse. Tanzu Kubernetes Grid automatically deploys clusters from whichever management cluster you have set as the context for the CLI by using the tkg set management-cluster command. For information about tkg set management-cluster, see Connect tkg and kubectl to Management Clusters.
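
For example, to check which management cluster is the current context for the CLI and to switch to another one, you can run commands similar to the following sketch, in which my-aws-mgmt-cluster is a placeholder name:

tkg get management-cluster
tkg set management-cluster my-aws-mgmt-cluster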

To deploy clusters to a vSphere 7.0 instance on which the vSphere with Kubernetes feature is enabled, you must connect the Tanzu Kubernetes Grid CLI to the vSphere with Kubernetes Supervisor Cluster. For information about how to do this, see Connect the Tanzu Kubernetes Grid CLI to a vSphere 7.0 Supervisor Cluster.

When you deploy a Tanzu Kubernetes cluster, MachineHealthCheck is enabled by default. MachineHealthCheck provides node health monitoring.
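
To see the MachineHealthCheck objects that Tanzu Kubernetes Grid creates, you can query the management cluster. This is a sketch that assumes your kubectl context is set to the management cluster:

kubectl get machinehealthchecks --all-namespaces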

For information about how to upgrade existing clusters to a new version of Kubernetes, see Upgrade Tanzu Kubernetes Clusters.

IMPORTANT:

  • When you create a management cluster, the Tanzu Kubernetes Grid CLI and kubectl contexts are automatically set to that management cluster. However, Tanzu Kubernetes Grid does not automatically set the kubectl context to a Tanzu Kubernetes cluster when you create it. You must set the kubectl context to the new cluster manually, as shown in the example after this list.
  • By default, unless you specify the --kubeconfig option to save the kubeconfig for a cluster to a specific file, all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared .kube-tkg/config file. If you delete the shared .kube-tkg/config file, you lose access to all of the clusters whose contexts it contains.
  • Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.
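
For example, to retrieve the credentials for a new cluster and set the kubectl context to it, you can run commands similar to the following sketch. The my-cluster-admin@my-cluster context name follows the pattern that tkg get credentials reports when it completes:

tkg get credentials my-cluster
kubectl config use-context my-cluster-admin@my-cluster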

Prerequisites for Cluster Deployment

You have followed the procedures in Installing Tanzu Kubernetes Grid to deploy a management cluster to either vSphere or Amazon EC2, or you have a vSphere 7.0 instance on which a vSphere with Kubernetes Supervisor Cluster is running.

Tanzu Kubernetes Cluster Networking

When you use the Tanzu Kubernetes Grid CLI to deploy a Tanzu Kubernetes cluster, Calico networking is enabled in the cluster by default.
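
To confirm that Calico is running in a new cluster, you can check for the Calico pods in the kube-system namespace. This is a sketch that assumes your kubectl context is set to the new cluster and that the pods carry the standard k8s-app=calico-node label:

kubectl get pods -n kube-system -l k8s-app=calico-node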

If you deploy clusters to vSphere, you can assign static IP addresses to the cluster node VMs by converting the DHCP reservations for the HA proxy load balancer VM and the control plane VM or VMs into static reservations.

Tanzu Kubernetes Cluster Plans

Tanzu Kubernetes Grid provides standard templates for clusters, known as plans. In this release, there are two plans for Tanzu Kubernetes clusters:

  • By default, the dev plan deploys a cluster with one control plane node and one worker node.
  • By default, the prod plan deploys a cluster with three control plane nodes and one worker node.

You can specify options to deploy clusters with different numbers of control plane and worker nodes. If you deploy a cluster with multiple control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane. You can also change the number of nodes in a cluster after deployment by running the tkg scale cluster command on the cluster. For more information, see Scale Tanzu Kubernetes Clusters.
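
For example, to scale an existing cluster to five worker nodes after deployment, you can run a command similar to the following sketch. Add the --namespace option if the cluster does not run in the default namespace:

tkg scale cluster my-cluster --worker-machine-count 5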

Deploy a Default Tanzu Kubernetes Cluster

To deploy a Tanzu Kubernetes cluster with the minimum default configuration, you run tkg create cluster, specifying the cluster name and the --plan dev option.

tkg create cluster my-cluster --plan dev

This command deploys a Tanzu Kubernetes cluster that runs the default version of Kubernetes for this Tanzu Kubernetes Grid release, which in Tanzu Kubernetes Grid 1.1.3 is v1.18.6. The cluster consists of the following VMs or instances:

  • vSphere:
    • One control plane VM, with a name similar to my-cluster-control-plane-nj4z6.
    • One load balancer VM, with a name similar to my-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
    • One worker node, with a name similar to my-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • One control plane instance, with a name similar to my-cluster-control-plane-d78t5.
    • One EC2 bastion instance, with the name my-cluster-bastion.
    • One worker node, with a name similar to my-cluster-md-0-2vsr4.
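
After the deployment finishes, you can list the clusters that the management cluster manages and check their status. This is a sketch; for details about accessing the new cluster, see Connect tkg and kubectl to Management Clusters:

tkg get cluster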

Preview the YAML for a Tanzu Kubernetes Cluster

To see a preview of the YAML file that Tanzu Kubernetes Grid will create when it deploys a Tanzu Kubernetes cluster, you can run the tkg create cluster command with the --dry-run option. If you specify --dry-run, Tanzu Kubernetes Grid displays the full YAML file for the cluster, but does not create the cluster.

tkg create cluster my-cluster --plan dev --dry-run

You can add --dry-run to any of the commands described in the following sections, to see a preview of the YAML before you deploy the cluster. If you are satisfied with the displayed configuration file, run the command again without the --dry-run option to create the cluster.

If you specify --dry-run, Tanzu Kubernetes Grid sends the YAML file for the cluster to stdout, so that you can save it for repeated future use.

tkg create cluster my-cluster --plan dev --dry-run > my-cluster-config.yaml

NOTE: Running tkg create cluster with the --dry-run option works in the same way as running the tkg config cluster command. You can save the output of --dry-run and use it to create clusters. In Tanzu Kubernetes Grid 1.1.2 and later, you can deploy clusters from the saved file by using the tkg create cluster command with the --manifest option. In earlier versions, you must use kubectl apply. For information about how to deploy clusters from saved YAML files, see Create Tanzu Kubernetes Cluster Configuration Files.
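
For example, the saved-file workflow that this note describes looks similar to the following sketch in Tanzu Kubernetes Grid 1.1.2 and later:

tkg config cluster my-cluster --plan dev > my-cluster-config.yaml
tkg create cluster my-cluster --manifest my-cluster-config.yaml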

Deploy a Cluster with Multiple Worker Nodes

By default, a Tanzu Kubernetes cluster has one control plane node and one worker node. To deploy clusters with multiple worker nodes, specify the --worker-machine-count option.

tkg create cluster my-dev-cluster --plan dev --worker-machine-count 3

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

  • vSphere:
    • One control plane VM, with a name similar to my-dev-cluster-control-plane-nj4z6.
    • One load balancer VM, with a name similar to my-dev-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
    • Three worker nodes, with names similar to my-dev-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • One control plane instance, with a name similar to my-dev-cluster-control-plane-d78t5.
    • One EC2 bastion instance, with the name my-dev-cluster-bastion.
    • Three worker nodes, with names similar to my-dev-cluster-md-0-2vsr4.

Deploy a Cluster with a Highly Available Control Plane

If you specify --plan prod, Tanzu Kubernetes Grid deploys a cluster with three control plane nodes and automatically implements stacked control plane HA for the cluster.

tkg create cluster my-prod-cluster --plan prod

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

  • vSphere:
    • Three control plane VMs, with names similar to my-prod-cluster-control-plane-nj4z6.
    • One load balancer VM, with a name similar to my-prod-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
    • Three worker nodes, with names similar to my-prod-cluster-md-0-6ff9f5cffb-jhcrh.
  • Amazon EC2:
    • Three control plane instances, with names similar to my-prod-cluster-control-plane-d78t5.
    • One EC2 bastion instance, with the name my-prod-cluster-bastion.
    • Three worker nodes, with names similar to my-prod-cluster-md-0-2vsr4.

You can deploy a Tanzu Kubernetes cluster with more control plane nodes by specifying the --controlplane-machine-count option. The number of control plane nodes that you specify in --controlplane-machine-count must be an odd number, for example 3, 5, or 7.

tkg create cluster cluster_name --plan prod --controlplane-machine-count 5 --worker-machine-count 10

Deploy a Cluster with Nodes of Different Sizes

By default, Tanzu Kubernetes Grid creates the individual nodes of the Tanzu Kubernetes clusters according to the settings that you provided when you deployed the management cluster.

  • If you deployed the management cluster from the Tanzu Kubernetes Grid installer interface, control plane, worker, and vSphere load balancer nodes are created with the configuration that you set in the Management Cluster Settings > Instance Type drop-down menu.
  • If you deployed the management cluster from the Tanzu Kubernetes Grid CLI, nodes are created with the configuration that you set in the following options (the vSphere options are illustrated in the sketch after this list):
    • vSphere: VSPHERE_CONTROL_PLANE_*, VSPHERE_WORKER_*, VSPHERE_HA_PROXY_*
    • Amazon EC2: CONTROL_PLANE_MACHINE_TYPE, NODE_MACHINE_TYPE
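
As an illustration, the vSphere wildcard variables above correspond to per-node settings in the .tkg/config.yaml file. The following sketch shows example values only; consult the config.yaml that your release generates for the exact variable names:

VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_MEM_MIB: 16384
VSPHERE_WORKER_DISK_GIB: 80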

In Tanzu Kubernetes Grid 1.1.2 and later, you can override these settings by using the tkg create cluster --size, --controlplane-size, and --worker-size options. By using these options, you can create Tanzu Kubernetes clusters that have nodes with different configurations from the management cluster nodes. You can also create clusters in which the control plane nodes and worker nodes are different sizes.

To create a Tanzu Kubernetes cluster in which all of the control plane and worker node VMs are the same size, specify the --size option. The values that you set depend on whether you are deploying to vSphere or AWS.

  • vSphere: extra-large, large, medium, or small. For example:
    tkg create cluster cluster_name --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --size large
    If you are deploying the cluster to vSphere, this setting also applies to the load balancer node VMs.
  • Amazon EC2: i3.xlarge, r4.8xlarge, m5a.4xlarge, m5a.2xlarge, m5.xlarge, m5.large, t3.xlarge, t3.large, t3.medium, or t3.small. For example:
    tkg create cluster cluster_name --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --size m5.large

To create a Tanzu Kubernetes cluster in which the control plane and worker node VMs are different sizes, specify the --controlplane-size and --worker-size options. If you are deploying the cluster to vSphere, you also set the --haproxy-size option. The values that you set depend on whether you are deploying to vSphere or AWS.

  • vSphere: extra-large, large, medium, or small. For example:
    tkg create cluster cluster_name --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size large --worker-size extra-large --haproxy-size small
  • Amazon EC2: i3.xlarge, r4.8xlarge, m5a.4xlarge, m5a.2xlarge, m5.xlarge, m5.large, t3.xlarge, t3.large, t3.medium, or t3.small. For example:
    tkg create cluster cluster_name --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size m5.large --worker-size t3.xlarge

You can combine these options with the --size option. For example, if you are deploying to vSphere and you specify --size large with --worker-size extra-large, the control plane and HA proxy nodes will both be set to large and worker nodes will be set to extra-large.
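
For example, that combination corresponds to a command similar to the following sketch:

tkg create cluster my-cluster --plan prod --size large --worker-size extra-large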

Deploy a Cluster in a Specific Namespace

If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy Tanzu Kubernetes clusters to those namespaces by using the --namespace option. For example, you might want to create different types of clusters in dedicated namespaces. If you do not specify the --namespace option, Tanzu Kubernetes Grid places clusters in the default namespace. Any namespace that you identify in the --namespace option must exist in the management cluster before you run the command. For information about creating namespaces in the management cluster, see Create Namespaces in the Management Cluster.

tkg create cluster my-cluster --plan dev --namespace my-namespace

This command deploys a default Tanzu Kubernetes cluster and places it in the designated namespace. For example, if you specify --namespace production, Tanzu Kubernetes Grid creates the Tanzu Kubernetes cluster in an existing namespace named production.

NOTE: If you have created namespaces, you must provide a unique name for all Tanzu Kubernetes clusters across all namespaces. If you provide a cluster name that is in use in another namespace in the same instance, the deployment fails with an error.

Deploy a Cluster that Runs a Different Version of Kubernetes

Each release of Tanzu Kubernetes Grid provides a default version of Kubernetes. As upstream Kubernetes releases patches or new versions, VMware makes these patches and versions available in Tanzu Kubernetes Grid patch and update releases. Each Tanzu Kubernetes Grid release supports a defined set of Kubernetes versions. Tanzu Kubernetes Grid 1.1.3 adds support for Kubernetes versions 1.17.9 and 1.18.6. The default version for this release is 1.18.6. However, you can also deploy clusters that run a version of Kubernetes from a previous release of Tanzu Kubernetes Grid.

To deploy a Tanzu Kubernetes cluster with a version of Kubernetes that is not the default for your Tanzu Kubernetes Grid release, specify the version in the --kubernetes-version option.

NOTES:

  • If you are deploying to vSphere, before you can deploy clusters that use a non-default version of Kubernetes for your version of Tanzu Kubernetes Grid, you must import the appropriate base OS OVA into vSphere and convert it to a VM template. For information about importing base OVA files into vSphere, see Import the Base Image Template into vSphere.
  • If you are deploying to AWS, the Amazon Linux 2 Amazon Machine Images (AMI) that include the supported Kubernetes versions are publicly available to all Amazon EC2 users, in all supported AWS regions. Tanzu Kubernetes Grid automatically uses the appropriate AMI for the Kubernetes version that you specify.
  • You can only specify a version of Kubernetes that is provided with and supported by a given Tanzu Kubernetes Grid release.

In addition to the default Kubernetes version, 1.18.6, Tanzu Kubernetes Grid 1.1.3 supports Kubernetes v1.17.3, v1.17.6, v1.17.9, v1.18.2, and v1.18.3.

The following commands assume that you are using Tanzu Kubernetes Grid 1.1.3.

  • Deploy a Kubernetes v1.17.3 cluster:

    tkg create cluster k8s-1-17-3-cluster --plan dev --kubernetes-version v1.17.3
    
  • Deploy a Kubernetes v1.17.6 cluster:

    tkg create cluster k8s-1-17-6-cluster --plan dev --kubernetes-version v1.17.6
    
  • Deploy a Kubernetes v1.18.2 cluster:

    tkg create cluster k8s-1-18-2-cluster --plan dev --kubernetes-version v1.18.2
    

These commands deploy Tanzu Kubernetes clusters that run Kubernetes v1.17.3, v1.17.6, and v1.18.2, even though by default this version of Tanzu Kubernetes Grid deploys clusters with Kubernetes 1.18.6.

Deploy a Cluster that Shares a VPC and NAT Gateway with the Management Cluster

Amazon EC2 imposes a limit of five NAT gateways per availability zone. For more information about this limit, see Resource Usage in Your Amazon Web Services Account. If you used the option to create a new VPC when you deployed the management cluster, by default every Tanzu Kubernetes cluster that you deploy from this management cluster also creates a new VPC and NAT gateway. To avoid reaching the NAT gateway limit, you can modify the configuration with which you deploy Tanzu Kubernetes clusters so that they reuse the VPC and NAT gateway that were created when the management cluster was deployed.

The procedure to configure Tanzu Kubernetes clusters to share a VPC and NAT gateway with their management cluster assumes the following about how the management cluster was deployed:

  • It was deployed with the option to create a new VPC, either by selecting the option in the UI or by specifying AWS_VPC_CIDR in the config.yaml file.
  • Ideally, the tkg init --config option was used to save the config.yaml for the management cluster to a different location than the default .tkg/config.yaml file. For example, save it as cluster-config.yaml.

If you deployed the management cluster with the option to reuse an existing VPC, all Tanzu Kubernetes clusters share that VPC and its NAT gateway automatically, and no action is required.

If the management cluster created a new VPC, modify the configuration file from which you deploy Tanzu Kubernetes clusters as follows.

  1. Open the cluster-config.yaml file for the management cluster in a text editor.
  2. Update the setting for AWS_VPC_ID with the ID of the VPC that was created when the management cluster was deployed.

    You can obtain this ID from your Amazon EC2 dashboard. Alternatively, you can obtain it by running tkg init --ui, selecting Deploy to AWS EC2 and consulting the value that is provided if you select Select an existing VPC in the VPC for AWS section of the installer interface. Cancel the deployment when you have copied the VPC ID.

    (Screenshot: the Configure the connection to AWS section of the installer interface)

  3. Update the settings for AWS_PUBLIC_SUBNET_ID and AWS_PRIVATE_SUBNET_ID.

    You can obtain the network information from the VPC dashboard.

  4. Save the cluster-config.yaml file. (A sketch of the resulting entries appears after these steps.)
  5. Run the tkg create cluster command with the --config option, specifying the modified cluster-config.yaml file.

    tkg create cluster my-cluster --plan dev --config cluster-config.yaml
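
For reference, after these edits the relevant entries in cluster-config.yaml look similar to the following sketch. The IDs are placeholders; use the values from your own Amazon EC2 and VPC dashboards:

AWS_VPC_ID: vpc-0a1b2c3d4e5f67890
AWS_PUBLIC_SUBNET_ID: subnet-0aaa1111bbb22222c
AWS_PRIVATE_SUBNET_ID: subnet-0ddd3333eee44444f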