When you deploy Tanzu Kubernetes (workload) clusters to vSphere, you must specify options in the cluster configuration file to connect to vCenter Server and identify the vSphere resources that the cluster will use. You can also specify standard sizes for the control plane and worker node VMs, or configure the CPU, memory, and disk sizes for control plane and worker nodes explicitly. If you use custom image templates, you can identify which template to use to create node VMs.

For the basic process of deploying workload clusters, see Deploy a Workload Cluster: Basic Process.

For the full list of options that you must specify when deploying workload clusters to vSphere, see the Tanzu CLI Configuration File Variable Reference.

Tanzu Kubernetes Cluster Template

The template below includes all of the options that are relevant to deploying workload clusters on vSphere. You can copy this template and update it to deploy your own workload clusters.

Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.

With the exception of the options described in the sections below the template, you configure the vSphere-specific variables for workload clusters in the same way as for management clusters. For information about how to configure the variables, see Create a Management Cluster Configuration File and Management Cluster Configuration for vSphere.

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
IDENTITY_MANAGEMENT_TYPE: oidc

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:

# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096

# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096

# CONTROL_PLANE_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_NETWORK: VM Network
# VSPHERE_TEMPLATE:
VSPHERE_SSH_AUTHORIZED_KEY:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_SERVER:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
# VSPHERE_STORAGE_POLICY_ID:
VSPHERE_FOLDER:
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: false
VSPHERE_CONTROL_PLANE_ENDPOINT:

#! ---------------------------------------------------------------------
#! NSX-T specific configuration for enabling NSX-T routable pods
#! ---------------------------------------------------------------------

# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""

ENABLE_AUDIT_LOGGING: true

ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: false
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: false
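
For reference, a minimal sketch of how this template is typically used. The file names are illustrative; see Deploy a Workload Cluster: Basic Process for the full procedure.

    cp vsphere-workload-cluster-template.yaml my-vsphere-cluster-config.yaml
    # Fill in the mandatory (uncommented) options, such as the VSPHERE_* connection
    # settings, then create the cluster from the file.
    tanzu cluster create my-vsphere-cluster --file my-vsphere-cluster-config.yaml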

Deploy a Cluster with a Custom OVA Image

If you are using a single custom OVA image for each version of Kubernetes to deploy clusters on one operating system, follow Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions. In that procedure, you import the OVA into vSphere and then specify it for tanzu cluster create with the --tkr option.

If you are using multiple custom OVA images for the same Kubernetes version, then the --tkr value is ambiguous. This happens when the OVAs for the same Kubernetes version:

  • Have different operating systems, for example created by make build-node-ova-vsphere-ubuntu-1804, make build-node-ova-vsphere-photon-3, and make build-node-ova-vsphere-rhel-7.
  • Have the same name but reside in different vCenter folders.

To resolve this ambiguity, set the VSPHERE_TEMPLATE option to the desired OVA image before you run tanzu cluster create.

If the OVA template image name is unique, set VSPHERE_TEMPLATE to just the image name.

If multiple images share the same name, set VSPHERE_TEMPLATE to the full inventory path of the image in vCenter. This path follows the form /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE, where:

  • MY-DC is the datacenter that contains the OVA template image.
  • MY-FOLDER-PATH is the path to the image from the datacenter, as shown in the vCenter VMs and Templates view.
  • MY-IMAGE is the image name.

For example:

 VSPHERE_TEMPLATE: "/TKG_DC/vm/TKG_IMAGES/ubuntu-1804-kube-v1.18.8-vmware.1"

You can determine the image's full vCenter inventory path manually or use the govc CLI:

  1. Install govc. For installation instructions, see the govmomi repository on GitHub.
  2. Set environment variables for govc to access your vCenter:
    • export GOVC_USERNAME=VCENTER-USERNAME
    • export GOVC_PASSWORD=VCENTER-PASSWORD
    • export GOVC_URL=VCENTER-URL
    • export GOVC_INSECURE=1
  3. Run govc find / -type m and find the image name in the output, which lists objects by their complete inventory paths.
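
For example, with the GOVC_* environment variables above set, you can filter the output to locate a specific image. The path shown matches the VSPHERE_TEMPLATE example earlier and is illustrative only:

    govc find / -type m | grep ubuntu-1804-kube-v1.18.8
    # Example output:
    # /TKG_DC/vm/TKG_IMAGES/ubuntu-1804-kube-v1.18.8-vmware.1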

For more information about custom OVA images, see Build Machine Images.

Deploy a vSphere 7 Cluster

After you add a vSphere with Tanzu Supervisor Cluster to the Tanzu CLI, as described in Add a vSphere 7 Supervisor Cluster as a Management Cluster, you can deploy Tanzu Kubernetes clusters optimized for vSphere 7 by configuring cluster parameters and creating the cluster as described in the sections below.

Configure Cluster Parameters

Configure the Tanzu Kubernetes clusters that the tanzu CLI directs the Supervisor Cluster to create:

  1. Obtain information about the storage classes that are defined in the supervisor cluster.

    kubectl get storageclasses
    
  2. Set variables to define the storage classes, VM classes, service domain, namespace, and other required values with which to create your cluster. For information about all of the configuration parameters that you can set when deploying Tanzu Kubernetes clusters to vSphere with Tanzu, see Configuration Parameters for Provisioning Tanzu Kubernetes Clusters in the vSphere with Tanzu documentation.

    The following variables are required:

    • CONTROL_PLANE_STORAGE_CLASS: Default storage class for control plane nodes. Value: a storage class name returned by kubectl get storageclasses.
    • WORKER_STORAGE_CLASS: Default storage class for worker nodes. Value: a storage class name returned by kubectl get storageclasses.
    • DEFAULT_STORAGE_CLASS: Default storage class for the control plane or workers. Value: empty string "" for no default, or a storage class name from the CLI, as above.
    • STORAGE_CLASSES: Storage classes available for node customization. Value: empty string "" to let clusters use any storage classes in the namespace, or a comma-separated list string of values from the CLI, for example "SC-1,SC-2,SC-3".
    • CONTROL_PLANE_VM_CLASS: VM class for control plane nodes. Value: a standard VM class for vSphere with Tanzu, for example guaranteed-large. See Virtual Machine Class Types for Tanzu Kubernetes Clusters in the vSphere with Tanzu documentation.
    • WORKER_VM_CLASS: VM class for worker nodes. Value: a standard VM class for vSphere with Tanzu, for example guaranteed-large.
    • SERVICE_CIDR: The CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Change this value only if the recommended range is unavailable.
    • CLUSTER_CIDR: The CIDR range to use for pods. The recommended range is 100.96.0.0/11. Change this value only if the recommended range is unavailable.
    • SERVICE_DOMAIN: The service domain for the cluster, for example my.example.com, or cluster.local if there is no DNS. If you are going to assign FQDNs to the nodes, DNS lookup is required.
    • NAMESPACE: The namespace in which to deploy the cluster.
    • CLUSTER_PLAN: dev, prod, or a custom plan. See Tanzu CLI Configuration File Variable Reference for variables required for all Tanzu Kubernetes cluster configuration files.
    • INFRASTRUCTURE_PROVIDER: tkg-service-vsphere.

You can set the variables above by doing either of the following:

  • Include them in the cluster configuration file passed to the tanzu CLI --file option. For example:

    CONTROL_PLANE_VM_CLASS: guaranteed-large
    
  • From the command line, set them as local environment variables by running export (on Linux and macOS) or SET (on Windows). For example:

    export CONTROL_PLANE_VM_CLASS=guaranteed-large
    

    Note: If you want to configure unique proxy settings for a Tanzu Kubernetes cluster, you can set TKG_HTTP_PROXY, TKG_HTTPS_PROXY, and NO_PROXY as environment variables and then use the Tanzu CLI to create the cluster. These variables take precedence over your existing proxy configuration in vSphere with Tanzu.
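
For reference, a cluster configuration file for vSphere with Tanzu that sets the required variables above might look like the following. This is a minimal sketch: the storage class name (vwt-storage-policy) and the worker VM class (best-effort-large) are assumptions that you replace with values from your own Supervisor Cluster and from kubectl get storageclasses.

    INFRASTRUCTURE_PROVIDER: tkg-service-vsphere
    CLUSTER_PLAN: dev
    NAMESPACE: test-gc-e2e-demo-ns
    CONTROL_PLANE_STORAGE_CLASS: vwt-storage-policy
    WORKER_STORAGE_CLASS: vwt-storage-policy
    DEFAULT_STORAGE_CLASS: vwt-storage-policy
    STORAGE_CLASSES: ""
    CONTROL_PLANE_VM_CLASS: guaranteed-large
    WORKER_VM_CLASS: best-effort-large
    SERVICE_DOMAIN: cluster.local
    SERVICE_CIDR: 100.64.0.0/13
    CLUSTER_CIDR: 100.96.0.0/11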

Create the Cluster

Run tanzu cluster create to create the vSphere 7 cluster.

  1. Determine the versioned Tanzu Kubernetes release (TKr) for the cluster:

    1. Obtain the list of TKr that are available in the supervisor cluster.

      tanzu kubernetes-release get
      
    2. From the command output, record the desired value listed under NAME, for example v1.20.8---vmware.1-tkg.1.a87f261. The tkr NAME is the same as its VERSION but with + changed to ---.

  2. Determine the namespace for the cluster.

    1. Obtain the list of namespaces.

      kubectl get namespaces
      
    2. From the command output, record the namespace that includes the Supervisor cluster, for example test-gc-e2e-demo-ns.

  3. Decide on the cluster plan: dev, prod, or a custom plan.

  4. Run tanzu cluster create with the namespace and tkr NAME values above to create a Tanzu Kubernetes cluster:

    tanzu cluster create my-vsphere7-cluster --tkr=TKR-NAME
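
    For example, if the required variables are set in a configuration file passed with --file, a fuller invocation might look like the following. The file name is illustrative; the --tkr value is the example NAME recorded above:

      tanzu cluster create my-vsphere7-cluster --file my-vsphere7-cluster-config.yaml --tkr=v1.20.8---vmware.1-tkg.1.a87f261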
    
#! ---------------------------------------------------------------------
#! Settings for creating clusters on vSphere with Tanzu
#! ---------------------------------------------------------------------
#! Identifies the storage class to be used for storage of the disks that store the root file systems of the control plane nodes.
CONTROL_PLANE_STORAGE_CLASS:
#! Specifies the name of the VirtualMachineClass that describes the virtual
#! hardware settings to be used for each control plane node in the pool.
CONTROL_PLANE_VM_CLASS:
#! Specifies a named storage class to be annotated as the default in the
#! cluster. If you do not specify it, there is no default.
DEFAULT_STORAGE_CLASS:
#! Specifies the service domain for the cluster
SERVICE_DOMAIN:
#! Specifies named persistent volume (PV) storage classes for container
#! workloads. Storage classes associated with the Supervisor Namespace are
#! replicated in the cluster. In other words, each storage class listed must be
#! available on the Supervisor Namespace for this to be a valid value
STORAGE_CLASSES:
#! Identifies the storage class to be used for storage of the disks that store the root file systems of the worker nodes.
WORKER_STORAGE_CLASS:
#! Specifies the name of the VirtualMachineClass that describes the virtual
#! hardware settings to be used for each worker node in the pool.
WORKER_VM_CLASS:
NAMESPACE:

Deploy a Cluster with Region and Zone Tags for CSI

You can specify a region and zone for your workload cluster, to integrate it with region and zone tags configured for vSphere CSI (Container Storage Interface). For clusters that span multiple zones, this lets worker nodes find and use shared storage, even if they run in zones that have no storage pods, for example in a telecommunications Radio Access Network (RAN).

To deploy a workload cluster with region and zone tags that enable shared storage with vSphere CSI:

  1. Create tags on vCenter Server

    1. Create tag categories on vCenter Server following Create, Edit, or Delete a Tag Category. For example, k8s-region and k8s-zone.
    2. Follow Create a Tag to create tags within the region and zone categories in the datacenter, as shown in this table:

      Category     Tags
      k8s-zone     zone-a, zone-b, zone-c
      k8s-region   region-1

  2. Apply the corresponding tags to the clusters and to the datacenter, following Assign or Remove a Tag, as indicated in this table. For a scripted alternative that uses the govc CLI, see the sketch after these steps.

    vSphere Object   Tag
    datacenter       region-1
    cluster1         zone-a
    cluster2         zone-b
    cluster3         zone-c
  3. To enable custom regions and zones for a vSphere workload cluster's CSI driver, set the variables VSPHERE_REGION and VSPHERE_ZONE in the cluster configuration file to the tags above. For example:

    VSPHERE_REGION: region-1
    VSPHERE_ZONE: zone-a
    

    When the tanzu CLI creates a workload cluster with these variables set, it labels each cluster node with the topology keys failure-domain.beta.kubernetes.io/zone and failure-domain.beta.kubernetes.io/region.

  4. Run tanzu cluster create to create the workload cluster, as described in Deploy Tanzu Kubernetes Clusters.

  5. After you create the cluster, and with the kubectl context set to the cluster, you can check the region and zone labels by doing one of the following:

    • Run kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region and confirm that the output lists the cluster nodes.

    • Run kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}' and confirm that the region and zone are enabled on vsphere-csi.
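
As an alternative to tagging in the vSphere Client, you can create and attach the same tags with the govc CLI. The following is a minimal sketch that assumes govc is configured as described in Deploy a Cluster with a Custom OVA Image above; the datacenter and cluster inventory paths (/TKG_DC, /TKG_DC/host/cluster1) are assumptions to adapt to your environment:

    govc tags.category.create k8s-region
    govc tags.category.create k8s-zone
    govc tags.create -c k8s-region region-1
    govc tags.create -c k8s-zone zone-a
    # Repeat tags.create for zone-b and zone-c, then attach the region tag to the
    # datacenter and a zone tag to each cluster.
    govc tags.attach region-1 /TKG_DC
    govc tags.attach zone-a /TKG_DC/host/cluster1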

For more information on configuring vSphere CSI, see vSphere CSI Driver - Deployment with Topology.

Clusters on Different vSphere Accounts

Tanzu Kubernetes Grid can run workload clusters on multiple infrastructure provider accounts, for example to split cloud usage among different teams or apply different security profiles to production, staging, and development workloads.

To deploy workload clusters to an alternative vSphere account, different from the one used to deploy their management cluster, do the following:

  1. Set the context of kubectl to your management cluster:

    kubectl config use-context MY-MGMT-CLUSTER@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  2. Create a secret.yaml file with the following contents:

    apiVersion: v1
    kind: Secret
    metadata:
      name: SECRET-NAME
      namespace: CAPV-MANAGER-NAMESPACE
    stringData:
      username: VSPHERE-USERNAME
      password: VSPHERE-PASSWORD
    
    

    Where:

    • SECRET-NAME is a name that you give to the client secret.
    • CAPV-MANAGER-NAMESPACE is the namespace where the capv-manager pod is running. Default: capv-system.
    • VSPHERE-USERNAME and VSPHERE-PASSWORD are login credentials that enable access to the alternative vSphere account.
  3. Use the file to create the Secret object:

    kubectl apply -f secret.yaml
    
  4. Create an identity.yaml file with the following contents:

    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereClusterIdentity
    metadata:
      name: EXAMPLE-IDENTITY
    spec:
      secretName: SECRET-NAME
      allowedNamespaces:
        selector:
          matchLabels: {}
    

    Where:

    • EXAMPLE-IDENTITY is the name to use for the VSphereClusterIdentity object.
    • SECRET-NAME is the name you gave to the client secret, above.
  5. Use the file to create the VSphereClusterIdentity object:

    kubectl apply -f identity.yaml
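
    Optionally, confirm that both objects exist before you deploy clusters with the new identity. This check assumes the default capv-system namespace for the secret:

      kubectl get secret SECRET-NAME -n capv-system
      kubectl get vsphereclusteridentity EXAMPLE-IDENTITY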
    

The management cluster can now deploy workload clusters to the alternative account. To deploy a workload cluster to the account:

  1. Create a cluster manifest by running tanzu cluster create --dry-run as described in Create Tanzu Kubernetes Cluster Manifest Files.

  2. Edit the VSphereCluster definition in the manifest to set the spec.identityRef.name value for its VSphereClusterIdentity object to the EXAMPLE-IDENTITY you created above:

    ...
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    metadata:
      name: new-workload-cluster
    spec:
      identityRef:
        kind: VSphereClusterIdentity
        name: EXAMPLE-IDENTITY
    ...
    
  3. Run kubectl apply -f my-cluster-manifest.yaml to create the workload cluster.

After you create the workload cluster, log in to vSphere with the alternative account credentials, and you should see it running.
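
You can also verify the new cluster from the management cluster context. For example, with kubectl still pointed at the management cluster:

    tanzu cluster list
    kubectl get clusters -A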

For more information, see Identity Management in the Cluster API Provider vSphere repository.

Configure DHCP Reservations for the Control Plane Nodes

After you deploy a cluster to vSphere, each control plane node requires a static IP address. This applies to both management and workload clusters. These static IP addresses are required in addition to the static IP address that you assigned to Kube-Vip when you deployed the management cluster.

To make the IP addresses that your DHCP server assigned to the control plane nodes static, you can configure a DHCP reservation for each control plane node in the cluster. For instructions on how to configure DHCP reservations, see your DHCP server documentation.
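
If it helps to collect the addresses to reserve, you can list each node's current IP address with kubectl and look up the corresponding MAC address with govc. This is a sketch only; the VM name pattern and the ethernet-0 device label are assumptions that depend on your cluster name and VM hardware:

    kubectl get nodes -o wide
    # The INTERNAL-IP column shows the address currently leased to each node.
    govc find / -type m | grep my-cluster-control-plane
    govc device.info -vm my-cluster-control-plane-abcde ethernet-0
    # The MAC Address field identifies the NIC to reserve in your DHCP server.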

What to Do Next

Advanced options that are applicable to all infrastructure providers are described in the following topics:

After you have deployed your cluster, see Manage Clusters.
