Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy a management cluster, you must prepare your vSphere environment. You must make sure that vSphere meets the general requirements, and import the base OS templates from which Tanzu Kubernetes Grid creates cluster node VMs.

General Requirements

  • Perform the steps described in Install the Tanzu Kubernetes Grid CLI.
  • You have a vSphere 6.7u3 instance with an Enterprise Plus license.
  • If you have vSphere 7, see Management Clusters Unnecessary on vSphere 7 below.
  • Your vSphere instance has the following objects in place:
    • Either a standalone host or a vSphere cluster with at least two hosts
    • If you are deploying to a cluster, ideally vSphere DRS is enabled
    • Optionally, a resource pool in which to deploy the Tanzu Kubernetes Grid Instance
    • A VM folder in which to collect the Tanzu Kubernetes Grid VMs
    • A datastore with sufficient capacity for the control plane and worker node VM files
    • If you intend to deploy multiple Tanzu Kubernetes Grid instances to this vSphere instance, create a dedicated resource pool, VM folder, and network for each instance that you deploy.
  • You have a network with a DHCP server to which to connect the cluster node VMs that Tanzu Kubernetes Grid deploys. The node VMs must be able to connect to vSphere.

    NOTE: Each control plane and worker node that you deploy to vSphere requires a static IP address. This includes both management and Tanzu Kubernetes clusters. To make DHCP-assigned IP addresses static, you can configure a DHCP reservation for each node in your cluster. Configure the DHCP reservations after you deploy the cluster.

  • You have a set of available static virtual IP addresses for the clusters that you create. Make sure that these IP addresses are not in the DHCP range, but are in the same subnet as the DHCP range. For more information, see Load Balancers for vSphere.

  • Traffic to vCenter Server is allowed from the network on which clusters will run.
  • The Network Time Protocol (NTP) service is running on all hosts, and the hosts are running on UTC. To check the time settings on hosts, perform the following steps:
    • Use SSH to log in to the ESXi host.
    • Run the date command to check the time and confirm that the host is set to UTC.
    • If the time is incorrect, run esxcli system time set to correct it.
  • If your vSphere environment runs NSX-T Data Center, you can use the NSX-T Data Center interfaces when you deploy management clusters. Make sure that your NSX-T Data Center setup includes a segment on which DHCP is enabled. Make sure that NTP is configured on all ESXi hosts, on vCenter Server, and on the bootstrap machine.
  • You have a vSphere account that has at least the permissions described in Required Permissions for the vSphere Account.

Management Clusters Unnecessary on vSphere 7

On vSphere 7, enabling the vSphere with Tanzu feature provides a built-in Tanzu Kubernetes Grid Service. The Tanzu Kubernetes Grid Service includes a Supervisor Cluster that performs the same role as a management cluster deployed by Tanzu Kubernetes Grid. This means that you do not need to deploy a management cluster on vSphere 7, and the Tanzu Kubernetes Grid installer discourages you from doing so.

The Tanzu Kubernetes Grid CLI can connect to the Supervisor Cluster on vSphere 7 and to Tanzu Kubernetes Grid management clusters deployed to Azure, Amazon EC2, and vSphere 6.7u3, letting you deploy and manage Tanzu Kubernetes clusters across multiple infrastructures using a single tool. For more information, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Tanzu Supervisor Cluster.

If the vSphere with Tanzu feature is not enabled in vSphere 7, you can still deploy a Tanzu Kubernetes Grid management cluster, but it is not recommended. For information about the vSphere with Tanzu feature in vSphere 7, see vSphere with Tanzu Configuration and Management in the vSphere 7 documentation.

Load Balancers for vSphere

Each management cluster and Tanzu Kubernetes cluster that you deploy to vSphere requires one static virtual IP address for external requests to the cluster's API server. You must be able to assign this IP address, so it cannot be within your DHCP range, but it must be in the same subnet as the DHCP range.

The cluster control plane's Kube-vip pod uses this static virtual IP address to load-balance API requests across multiple nodes, and the API server certificate includes the address to enable secure TLS communication.
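
As a sanity check, you can verify this subnet constraint with Python's standard ipaddress module. The subnet, DHCP range, and candidate addresses below are illustrative values, not defaults:

```python
import ipaddress

def vip_is_valid(vip, subnet, dhcp_start, dhcp_end):
    """Check that a candidate virtual IP is inside the subnet
    but outside the DHCP server's allocation range."""
    vip = ipaddress.ip_address(vip)
    in_subnet = vip in ipaddress.ip_network(subnet)
    in_dhcp_range = (
        ipaddress.ip_address(dhcp_start) <= vip <= ipaddress.ip_address(dhcp_end)
    )
    return in_subnet and not in_dhcp_range

# Example: DHCP hands out .100-.200, so .50 is a usable VIP and .150 is not.
print(vip_is_valid("192.168.10.50", "192.168.10.0/24",
                   "192.168.10.100", "192.168.10.200"))   # True
print(vip_is_valid("192.168.10.150", "192.168.10.0/24",
                   "192.168.10.100", "192.168.10.200"))   # False
```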

NOTE: Kube-vip is an in-cluster load balancer that is used solely by the API server. It is not a general-purpose load balancer for vSphere, and Tanzu Kubernetes Grid does not currently provide an implementation of Service type LoadBalancer for vSphere. If you require Service load balancing, you can use a solution such as MetalLB.
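
If you do deploy MetalLB (installed separately), a minimal layer 2 configuration of the kind used by MetalLB 0.9-era releases is a ConfigMap in the metallb-system namespace. This is a sketch only; the address range below is an illustrative placeholder and must be chosen from addresses in your subnet that are outside your DHCP range:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Example pool; replace with free addresses outside your DHCP range
      - 192.168.10.210-192.168.10.220
```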

Required Permissions for the vSphere Account

The vCenter Single Sign On account that you provide to Tanzu Kubernetes Grid when you deploy a management cluster must have the correct permissions to perform the required operations in vSphere.

It is not recommended to provide a vSphere administrator account to Tanzu Kubernetes Grid, because this provides Tanzu Kubernetes Grid with far greater permissions than it needs. The best way to assign permissions to Tanzu Kubernetes Grid is to create a role and a user account, and then to grant that user account that role on vSphere objects.

NOTE: If you are deploying Tanzu Kubernetes clusters to vSphere 7 and vSphere with Tanzu is enabled, you must set the Global > Cloud Admin permission in addition to the permissions listed below. If you intend to use Velero to back up and restore management clusters, you must also set the permissions listed in Credentials and Privileges for VMDK Access in the Virtual Disk Development Kit Programming Guide.

  1. In the vSphere Client, go to Administration > Access Control > Roles, and create a new role, for example TKG, with the following permissions.


    vSphere Object: Required Permissions

    • Cns: Searchable
    • Datastore: Allocate space, Browse datastore, Low level file operations
    • Global (if using Velero for backup and restore): Disable methods, Enable methods, Licenses
    • Network: Assign network
    • Profile-driven storage: Profile-driven storage view
    • Resource: Assign virtual machine to resource pool
    • Sessions: Message, Validate session
    • Virtual machine:
      • Change Configuration > Add existing disk
      • Change Configuration > Add new disk
      • Change Configuration > Add or remove device
      • Change Configuration > Advanced configuration
      • Change Configuration > Change CPU count
      • Change Configuration > Change Memory
      • Change Configuration > Change Settings
      • Change Configuration > Configure Raw device
      • Change Configuration > Extend virtual disk
      • Change Configuration > Modify device settings
      • Change Configuration > Remove disk
      • Change Configuration > Toggle disk change tracking*
      • Edit Inventory > Create from existing
      • Edit Inventory > Remove
      • Interaction > Power On
      • Interaction > Power Off
      • Provisioning > Allow read-only disk access*
      • Provisioning > Allow virtual machine download*
      • Provisioning > Deploy template
      • Snapshot Management > Create snapshot*
      • Snapshot Management > Remove snapshot*
    • vApp: Import

    *Required to enable the Velero plugin, as described in Back Up and Restore Management Clusters. You can add these permissions when needed later.

  2. In Administration > Single Sign On > Users and Groups, create a new user account in the appropriate domain, for example tkg-user.

  3. In the Hosts and Clusters, VMs and Templates, Storage, and Networking views, right-click the objects that your Tanzu Kubernetes Grid deployment will use, select Add Permission, and assign the TKG role to the tkg-user account on each object.

    • Hosts and Clusters
      • The vCenter Server instance
      • The Datacenter and all of the Host and Cluster folders, from the Datacenter object down to the cluster that manages the Tanzu Kubernetes Grid deployment
      • Target hosts and clusters
      • Target resource pools, with propagate to children enabled
    • VMs and Templates
      • The deployed Tanzu Kubernetes Grid base image templates
      • Target VM and Template folders, with propagate to children enabled
    • Storage
      • Datastores and all storage folders, from the Datacenter object down to the datastores that will be used for Tanzu Kubernetes Grid deployments
    • Networking
      • Networks or distributed port groups to which clusters will be assigned
      • Distributed switches

Create an SSH Key Pair

In order for the Tanzu Kubernetes Grid CLI to connect to vSphere from the machine on which you run it, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. If you do not already have one on the machine on which you run the CLI, you can use a tool such as ssh-keygen to generate a key pair.

  1. On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command.

    ssh-keygen -t rsa -b 4096 -C "email@example.com"

  2. At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default.
  3. Enter and repeat a password for the key pair.
  4. Add the private key to the SSH agent running on your machine, and enter the password you created in the previous step.

    ssh-add ~/.ssh/id_rsa
    
  5. Open the file .ssh/id_rsa.pub in a text editor so that you can easily copy and paste it when you deploy a management cluster.

Import the Base OS Image Template into vSphere

Before you can deploy a management cluster or Tanzu Kubernetes clusters to vSphere, you must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere. After importing the OVA, you must convert the resulting VM into a VM template. The base OS image template includes the version of Kubernetes that Tanzu Kubernetes Grid uses to create clusters.

NOTE: Tanzu Kubernetes Grid 1.2.0 adds support for Kubernetes v1.19.1, v1.18.8, and v1.17.11. You can also use this version of Tanzu Kubernetes Grid to deploy clusters that run Kubernetes versions that were supported in previous releases of Tanzu Kubernetes Grid. If you want to deploy clusters with older versions of Kubernetes, either install or retain in your vSphere inventory the versions of the base OS image templates from the previous Tanzu Kubernetes Grid releases, alongside the new Kubernetes templates for this release. For information about the versions of Kubernetes that each Tanzu Kubernetes Grid release supports, see the release notes for that release.

  1. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.
  2. Download the Tanzu Kubernetes Grid OVAs for node VMs.

    • Kubernetes v1.19.1: Photon v3 Kubernetes v1.19.1 OVA
    • Kubernetes v1.18.8: Photon v3 Kubernetes v1.18.8 OVA
    • Kubernetes v1.17.11: Photon v3 Kubernetes v1.17.11 OVA

    If you want to use Tanzu Kubernetes Grid 1.2 to deploy clusters with older versions of Kubernetes as well as those added in this release, after downloading the OVAs above, select an earlier Tanzu Kubernetes Grid version in the downloads page, and download the corresponding Photon v3 Kubernetes version OVA files.

  3. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
  4. Select Local file, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
  5. Follow the installer prompts to deploy a VM from the OVA.

    • Accept or modify the appliance name
    • Select the destination datacenter or folder
    • Select the destination host, cluster, or resource pool
    • Accept the end user license agreements (EULA)
    • Select the disk format and destination datastore
    • Select the network for the VM to connect to

    NOTE: If you select thick provisioning as the disk format, when Tanzu Kubernetes Grid creates cluster node VMs from the template, the full size of each node's disk will be reserved. This can rapidly consume storage if you deploy many clusters or clusters with many nodes. However, if you select thin provisioning, as you deploy clusters this can give a false impression of the amount of storage that is available. If you select thin provisioning, there might be enough storage available at the time that you deploy clusters, but storage might run out as the clusters run and accumulate data.

  6. Click Finish to deploy the VM.
  7. When the OVA deployment finishes, right-click the VM and select Template > Convert to Template.

    NOTE: Do not power on the VM before you convert it to a template.

  8. In the VMs and Templates view, right-click the new template, select Add Permission, and assign the tkg-user to the template with the TKG role.

    For information about how to create the user and role for Tanzu Kubernetes Grid, see Required Permissions for the vSphere Account above.

Repeat the procedure for each of the Kubernetes versions for which you downloaded the OVA file.
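
The storage trade-off described in the thick-provisioning note above can be illustrated with a rough estimate. The cluster counts and disk size below are illustrative examples, not Tanzu Kubernetes Grid defaults:

```python
def thick_provisioned_gib(clusters, control_plane_nodes, worker_nodes, disk_gib):
    """Storage reserved up front when every node disk is thick provisioned."""
    nodes_per_cluster = control_plane_nodes + worker_nodes
    return clusters * nodes_per_cluster * disk_gib

# Example: 5 clusters, each with 3 control plane and 5 worker nodes, and a
# 40 GiB disk per node, reserve 5 * 8 * 40 = 1600 GiB immediately, even
# before any data is written. Thin provisioning defers this cost but can
# overcommit the datastore.
print(thick_provisioned_gib(5, 3, 5, 40))  # 1600
```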

What to Do Next

If you are using Tanzu Kubernetes Grid in an environment with an external internet connection, you are now ready to deploy management clusters to vSphere.
