This document outlines the steps for deploying Tanzu for Kubernetes Operations using vSphere with Tanzu in a vSphere environment backed by a Virtual Distributed Switch (VDS) and leveraging NSX Advanced Load Balancer (ALB) for L4/L7 load balancing and ingress.
The scope of the document is limited to providing deployment steps based on the reference design in VMware Tanzu for Kubernetes Operations using vSphere with Tanzu Reference Design. This document does not cover any deployment procedures for the underlying SDDC components.
Before deploying Tanzu Kubernetes operations using vSphere with Tanzu on vSphere networking, ensure that your environment is set up as described in the following:
Ensure that your environment has the following general requirements:
The following table provides example entries for the required port groups. Create network entries with the port group name, VLAN ID, and CIDRs that are specific to your environment.
Network Type | DHCP Service | Description & Recommendations |
---|---|---|
NSX ALB Management Network | Optional | NSX ALB controllers and SEs will be attached to this network. Use static IPs for the NSX ALB controllers. The Service Engine's management interface can obtain an IP address from DHCP. |
TKG Management Network | IP Pool/DHCP can be used. | Supervisor Cluster nodes will be attached to this network. When an IP Pool is used, ensure that the block has 5 consecutive free IPs. |
TKG Workload Network | IP Pool/DHCP can be used. | Control plane and worker nodes of TKG Workload Clusters will be attached to this network. |
TKG Cluster VIP/Data Network | No | Virtual services for Control plane HA of all TKG clusters (Supervisor and Workload). Reserve sufficient IPs depending on the number of TKG clusters planned to be deployed in the environment. NSX ALB handles IP address management on this network via IPAM. |
This document uses the following port groups, subnet CIDRs, and VLANs. Replace these with values that are specific to your environment.
Network Type | Port Group Name | VLAN | Gateway CIDR | DHCP Enabled | IP Pool for SE/VIP in NSX ALB |
---|---|---|---|---|---|
NSX ALB Management Network | NSX-ALB-Mgmt | 1680 | 172.16.80.1/27 | No | 172.16.80.11 - 172.16.80.30 |
TKG Management Network | TKG-Management | 1681 | 172.16.81.1/27 | Yes | No |
TKG Workload Network01 | TKG-Workload | 1682 | 172.16.82.1/24 | Yes | No |
TKG VIP Network | TKG-Cluster-VIP | 1683 | 172.16.83.1/24 | No | 172.16.83.101 - 172.16.83.250 |
After you have created the required networks, the network section in your vSphere environment must have the port groups as shown in the following screen capture:
Ensure that the firewall is set up as described in Firewall Recommendations.
Ensure that resource pools and folders are created in vCenter. The following table shows a sample entry for the resource pool and folder. Customize the resource pool and folder name for your environment.
Resource Type | Sample Resource Pool Name | Sample Folder Name |
---|---|---|
NSX ALB Components | NSX-ALB | NSX-ALB-VMS |
The following are the high-level steps for deploying Tanzu Kubernetes operations on vSphere networking backed by VDS:
NSX Advanced Load Balancer is an enterprise-grade integrated load balancer that provides L4-L7 load balancer support. We recommend deploying NSX Advanced Load Balancer for vSphere deployments without NSX-T, or when there are unique scaling requirements.
NSX Advanced Load Balancer is deployed in write access mode in the vSphere environment. This mode grants NSX Advanced Load Balancer Controllers full write access to vCenter, which allows them to automatically create, modify, and remove Service Engines and other resources as needed to adapt to changing traffic needs.
For a production-grade deployment, we recommend deploying three instances of the NSX Advanced Load Balancer Controller for high availability and resiliency.
The following table provides a sample IP address and FQDN set for the NSX Advanced Load Balancer controllers:
Controller Node | IP Address | FQDN |
---|---|---|
Node01 (Primary) | 172.16.80.11 | alb-ctlr01.your-domain |
Node02 (Secondary) | 172.16.80.12 | alb-ctlr02.your-domain |
Node03 (Secondary) | 172.16.80.13 | alb-ctlr03.your-domain |
Controller Cluster | 172.16.80.10 | alb.your-domain |
Do the following to deploy the NSX Advanced Load Balancer Controller node:
Follow the wizard to configure the following:
The following example shows the final configuration of the NSX Advanced Load Balancer Controller node.
For more information, see the product documentation Deploy the Controller.
After the Controller VM is deployed and powered-on, configure the Controller VM for your vSphere with Tanzu environment. The Controller requires several post-deployment configuration parameters.
In a browser, go to https://<controller node01-fqdn>/.
Configure an Administrator Account by setting up a password and optionally, an email address.
Configure System Settings by specifying the backup passphrase and DNS information.
(Optional) Configure Email/SMTP.
Configure Multi-Tenant settings as follows:
Click Save to exit the post-deployment configuration wizard. You are directed to a Dashboard view on the controller.
Navigate to Infrastructure > Clouds and edit Default-Cloud.
Select VMware vCenter/vSphere ESX as the infrastructure type and click Next.
Configure the Data Center settings.
Select the Default Network IP Address Management mode.
For Virtual Service Placement, select Prefer Static Routes vs Directly Connected Network.
Configure the Network settings as follows:
Verify that the health of Default-Cloud is green.
Configure Licensing.
Tanzu for Kubernetes Operations requires an NSX Advanced Load Balancer Enterprise license. To configure licensing, navigate to Administration > Settings > Licensing and apply the license key. If you have a license file instead of a license key, click the Upload from Computer link.
Configure NTP settings if you want to use an internal NTP server.
Navigate to Administration > Settings > DNS/NTP.
Click the pencil icon to edit the settings and specify the NTP server that you want to use.
Click Save to save the settings.
For additional product documentation, see the following:
In a production environment, we recommend that you deploy additional controller nodes and configure the controller cluster for high availability and disaster recovery.
To run a three-node controller cluster, deploy the first node, perform the initial configuration, and set the Cluster IP. After that, deploy and power on two more Controller VMs. However, do not run the initial configuration wizard or change the administrator password for these two additional Controller VMs. The configuration of the first Controller VM is assigned to the two new Controller VMs.
The first controller of the cluster receives the “Leader” role. The second and third controllers work as “Followers”.
To configure the Controller cluster,
Navigate to Administration > Controller.
Select Nodes and click Edit.
Specify a name for the controller cluster and set the Cluster IP.
This IP address should be from the NSX Advanced Load Balancer management network.
In Cluster Nodes, specify the IP addresses of the two additional controllers that you have deployed.
Leave the name and password fields empty.
Click Save.
The Controller cluster setup starts. The Controller nodes are rebooted in the process. It takes approximately 10-15 minutes for cluster formation to complete.
You are automatically logged out of the controller node that you are currently logged in to. Enter the cluster IP in a browser to see the cluster formation task details.
After the Controller cluster is deployed, use the Controller cluster IP for doing any additional configuration. Do not use the individual Controller node IP.
For additional product documentation, see Deploy a Controller Cluster.
The Controller must send a certificate to clients to establish secure communication. This certificate must have a Subject Alternative Name (SAN) that matches the NSX Advanced Load Balancer Controller cluster hostname or IP address.
The Controller has a default self-signed certificate. But this certificate does not have the correct SAN. You must replace it with a valid or self-signed certificate that has the correct SAN. You can create a self-signed certificate or upload a CA-signed certificate.
This document makes use of a self-signed certificate.
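If you prefer to generate the certificate outside the Controller (or have it signed by your own CA before uploading), a minimal openssl sketch such as the following produces a key and a self-signed certificate with the required SANs. The file names and the FQDN/IP values are the samples used in this document; replace them with your own, and note that the -addext flag requires OpenSSL 1.1.1 or later.
# Generate a private key and a self-signed certificate whose SANs
# cover the Controller cluster FQDN and cluster IP (sample values).
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout alb-cert.key -out alb-cert.crt \
  -subj "/CN=alb.your-domain" \
  -addext "subjectAltName=DNS:alb.your-domain,DNS:alb-ctlr01.your-domain,IP:172.16.80.10"
The resulting alb-cert.crt and alb-cert.key can then be uploaded in Templates > Security > SSL/TLS Certificates by choosing an import/upload type instead of Self Signed (the exact label can vary by NSX ALB version).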
To replace the default certificate,
Create a self-signed certificate.
Navigate to Templates > Security > SSL/TLS Certificates.
Click Create and select Controller Certificate.
The New Certificate (SSL/TLS) window appears.
Enter a name for the certificate.
To add a self-signed certificate, for Type select Self Signed.
Enter the following details:
Click Save.
Change the NSX Advanced Load Balancer portal certificate.
Navigate to Administration > Settings > Access Settings.
Click the pencil icon to edit the access settings.
Verify that Allow Basic Authentication is enabled.
From SSL/TLS Certificate, remove the existing default portal certificates.
From the drop-down list, select the newly created certificate.
Click Save.
For additional product documentation, see Assign a Certificate to the Controller.
You need the newly created certificate when you configure the Supervisor Cluster to enable Workload Management.
To export the certificate, navigate to the Templates > Security > SSL/TLS Certificate page and export the certificate by clicking on the export button.
In the Export Certificate page that appears, click Copy to clipboard next to the certificate. Do not copy the key. Save the copied certificate for later use when you enable workload management.
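Optionally, you can sanity-check the copied certificate before using it. Assuming you saved the copied text to a file named alb-cert.pem (the file name is only an example), a quick openssl check is:
# Confirm that the exported certificate carries the expected SAN entries.
openssl x509 -in alb-cert.pem -noout -text | grep -A1 "Subject Alternative Name"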
vSphere with Tanzu uses the Default Service Engine Group. Ensure that the HA mode for the Default-Group is set to N + M (buffer).
Optionally, you can reconfigure the Default-Group to define the placement and the number of Service Engine VMs.
This document uses the Default Service Engine Group as is.
For more information, see the product documentation Configure a Service Engine Group.
You can configure the virtual IP (VIP) range to use when a virtual service is placed on the specific VIP network. You can configure DHCP for the Service Engines.
Optionally, if DHCP is unavailable, you can configure a pool of IP addresses to assign to the Service Engine interface on that network.
This document uses an IP pool for the VIP network.
To configure the VIP network,
Click the edit icon to edit the network settings.
Click Add Subnet.
In IP Subnet, specify the VIP network subnet CIDR.
Click Add Static IP Address Pool to specify the IP address pool for the VIPs and Service Engine. The range must be a subset of the network CIDR configured in IP Subnet.
Click Save to close the VIP network configuration wizard.
For more information, see the product documentation Configure a Virtual IP Network.
A default gateway enables the Service Engine to route traffic to the pool servers on the Workload Network. You must configure the VIP Network gateway IP as the default gateway.
To configure the default gateway,
Navigate to Infrastructure > Routing > Static Route.
Click Create.
In Gateway Subnet, enter 0.0.0.0/0.
In Next Hop, enter the gateway IP address of the VIP network.
Click Save.
For additional product documentation, see Configure Default Gateway.
IPAM is required to allocate virtual IP addresses when virtual services get created. Configure IPAM for the NSX Advanced Load Balancer Controller and assign it to the Default-Cloud.
Click Create and select IPAM Profile from the drop-down menu.
Enter the following to configure the IPAM profile:
Deselect the Allocate IP in VRF option.
Click Add Usable Network.
Click Save.
Assign the IPAM profile to the Default-Cloud configuration.
Verify that the status of the Default-Cloud configuration is green.
For additional product documentation, see Configure IPAM.
As a vSphere administrator, you enable a vSphere cluster for Workload Management by creating a Supervisor Cluster. After you deploy the Supervisor Cluster, you can use the vSphere Client to manage and monitor the cluster.
Before deploying the Supervisor Cluster, ensure the following:
To deploy the Supervisor Cluster,
Log in to the vSphere client and navigate to Menu > Workload Management and click Get Started.
Select the vCenter Server and Network.
Select a cluster from the list of compatible clusters and click Next.
Select the Control Plane Storage Policy for the nodes from the drop-down menu and click Next.
On the Load Balancer screen, select Load Balancer Type as NSX Advanced Load Balancer and provide the following details:
Click Next.
On the Management Network screen, select the port group that you created on the distributed switch. If DHCP is enabled for the port group, set the Network Mode to DHCP.
Ensure that the DHCP server is configured to hand over DNS server address, DNS search domain, and NTP server address via DHCP.
Click Next.
On the Workload Network screen,
On the Tanzu Kubernetes Grid Service screen, select the subscribed content library that contains the Kubernetes images released by VMware.
On the Review and Confirm screen, select the size for the Kubernetes control plane VMs that are created on each host from the cluster. For production deployments, we recommend a large form factor.
Click Finish. This triggers the Supervisor Cluster deployment.
The Workload Management task takes approximately 30 minutes to complete. After the task completes, three Kubernetes control plane VMs are created on the hosts that are part of the vSphere cluster.
The Supervisor Cluster gets an IP address from the VIP network that you configured in the NSX Advanced Load Balancer. This IP address is also called the Control Plane HA IP address.
In the backend, three Supervisor Control Plane VMs are deployed in the vSphere namespace.
A Virtual Service is created in the NSX Advanced Load Balancer with three Supervisor Control Plane nodes that are deployed in the process.
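As a quick reachability check (one simple option, not the only one), you can confirm that the Control Plane HA IP address responds over HTTPS before moving on to the CLI tools described in the next section:
# Replace <control-plane-vip> with the Control Plane HA IP address.
# -k skips certificate verification; -I requests only the response headers.
curl -k -I https://<control-plane-vip>/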
For additional product documentation, see Enable Workload Management with vSphere Networking.
You can use Kubernetes CLI Tools for vSphere to view and control vSphere with Tanzu namespaces and clusters.
The Kubernetes CLI Tools download package includes two executables: the standard open-source kubectl and the vSphere Plugin for kubectl. The vSphere Plugin for kubectl extends the commands available to kubectl so that you can connect to the Supervisor Cluster and to Tanzu Kubernetes clusters using vCenter Single Sign-On credentials.
To download the Kubernetes CLI tools, connect to the URL https://<control-plane-vip>/.
For additional product documentation, see Download and Install the Kubernetes CLI Tools for vSphere.
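On a Linux bootstrap machine, the download and installation typically look similar to the following sketch. The plugin path shown is the commonly used location; verify it against the link presented at https://<control-plane-vip>/ for your release.
# Download the CLI tools bundle from the Supervisor Cluster VIP,
# extract it, and place the binaries on the PATH.
wget --no-check-certificate https://<control-plane-vip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
sudo install bin/kubectl bin/kubectl-vsphere /usr/local/bin/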
After installing the CLI tool of your choice, connect to the Supervisor Cluster by running the following command:
kubectl vsphere login [email protected] --server=<control-plane-vip> --insecure-skip-tls-verify
The command prompts for the vSphere administrator password.
After your connection to the Supervisor Cluster is established you can switch to the Supervisor context by running the command:
kubectl config use-context <supervisor-context-name>
Here, <supervisor-context-name> is the IP address of the control plane VIP.
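To confirm that the Supervisor context is active, you can run a couple of read-only checks, for example:
# List the Supervisor control plane nodes and the namespaces visible
# to your account in this context.
kubectl get nodes
kubectl get namespaces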
A vSphere Namespace is a tenancy boundary within vSphere with Tanzu that allows for sharing vSphere resources (compute, networking, storage) and enforcing resource limits on the underlying objects, such as Tanzu Kubernetes Clusters. It also allows you to attach policies and permissions.
Every workload cluster that you deploy runs in a Supervisor namespace.
To create a new Supervisor namespace,
Log in to the vSphere Client.
Navigate to Home > Workload Management > Namespaces.
Click Create Namespace.
Select the Cluster that is enabled for Workload Management.
Enter a name for the namespace and select the workload network for the namespace.
Note: The Name field accepts only lower case letters and hyphens.
Click Create.
The namespace is created on the Supervisor Cluster.
For additional product documentation, see Create and Configure a vSphere Namespace.
To access a namespace, you have to add permissions to the namespace. To configure permissions, click on the newly created namespace, navigate to the Summary tab, and click Add Permissions.
Choose the Identity source, search for the User/Group that will have access to the namespace, and define the Role for the selected User/Group.
Certain Kubernetes workloads require persistent storage to store data permanently. Storage policies that you assign to the namespace control how persistent volumes and Tanzu Kubernetes cluster nodes are placed within datastores in the vSphere storage environment.
To assign a storage policy to the namespace, on the Summary tab, click Add Storage.
From the list of storage policies, select the appropriate storage policy and click OK.
After the storage policy is assigned to a namespace, vSphere with Tanzu creates a matching Kubernetes storage class in the vSphere Namespace.
When initially created, the namespace has unlimited resources within the Supervisor Cluster. The vSphere administrator defines the limits for CPU, memory, storage, as well as the number of Kubernetes objects that can run within the namespace. These limits are configured for each vSphere Namespace.
To configure resource limitations for the namespace, on the Summary tab, click Edit Limits for Capacity and Usage.
When limits are configured on the namespace, a resource pool for the namespace is created in the vCenter Server. The storage limitation determines the overall amount of storage that is available to the namespace.
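The limits also surface as Kubernetes quota objects in the vSphere Namespace. Assuming limits have been configured, you can review them from the Supervisor context, for example:
# Inspect the quota objects that back the namespace limits.
kubectl get resourcequota -n <namespace-name>
kubectl describe resourcequota -n <namespace-name>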
The VM class is a VM specification that can be used to request a set of resources for a VM. The VM class defines parameters such as the number of virtual CPUs, memory capacity, and reservation settings.
vSphere with Tanzu includes several default VM classes and each class has two editions: guaranteed and best effort. A guaranteed edition fully reserves the resources that a VM specification requests. A best-effort class edition does not and allows resources to be overcommitted.
More than one VM Class can be associated with a namespace.
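The VM classes defined on the Supervisor Cluster, and the class bindings already present in a given namespace, can also be listed from the Supervisor context, for example:
# List all VM classes on the Supervisor Cluster, then the class
# bindings present in a specific vSphere Namespace.
kubectl get virtualmachineclasses
kubectl get virtualmachineclassbindings -n <namespace-name>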
To add a VM class to a namespace,
Click Add VM Class for VM Service.
From the list of the VM Classes, select the classes that you want to include in your namespace.
Click OK.
The namespace is fully configured now. You are ready to deploy your first Tanzu Kubernetes Cluster.
Tanzu Kubernetes Clusters are created by invoking the Tanzu Kubernetes Grid Service declarative API using kubectl and a cluster specification defined using YAML. After you provision a cluster, you operate it and deploy workloads to it using kubectl.
Before you construct a YAML file for Tanzu Kubernetes Cluster deployment, gather information such as virtual machine class bindings, storage class, and the available Tanzu Kubernetes release that can be used.
You can gather this information by running the following commands:
Connect to the Supervisor Cluster using vSphere Plugin for kubectl.
kubectl vsphere login --server=<Supervisor Cluster Control Plane VIP> --vsphere-username USERNAME
Switch context to the vSphere Namespace where you plan to provision the Tanzu Kubernetes cluster.
kubectl config get-contexts
kubectl config use-context <vSphere-Namespace>
Example: kubectl config use-context prod
List the available virtual machine class bindings.
kubectl get virtualmachineclassbindings
The output of the command lists all VM class bindings that are available in the vSphere Namespace where you are deploying the Tanzu Kubernetes Cluster.
List the available storage classes in the namespace.
kubectl get storageclass
The output of the command lists all storage classes that are available in the vSphere Namespace.
List the available Tanzu Kubernetes releases (TKRs).
kubectl get tanzukubernetesreleases
The command’s output lists the TKR versions that are available in the vSphere Namespace. You can only deploy Tanzu Kubernetes Cluster with TKR versions that have compatible=true.
Construct the YAML file for provisioning a Tanzu Kubernetes cluster.
Tanzu Kubernetes Clusters can be deployed using the Tanzu Kubernetes Grid Service API. There are two versions of the API that you can use: v1alpha1 and v1alpha2.
This documentation makes use of the v1alpha2 API to provision Tanzu Kubernetes Clusters.
The following example YAML is the minimal configuration required to provision a Tanzu Kubernetes cluster.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: prod-1
  namespace: prod
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: worker-pool01
      replicas: 3
      vmClass: best-effort-large
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
Customize the cluster as needed by referring to the full list of cluster configuration parameters.
To deploy the cluster, run the command:
kubectl apply -f <name>.yaml
Monitor the deployment of the cluster by using the following command:
kubectl get tanzukubernetesclusters
Sample result:
You can also review the status of the cluster from vSphere Client by clicking on the namespace where the cluster is deployed and navigating to Compute > VMware Resources > Tanzu Kubernetes Clusters.
The Virtual Machines tab displays the list of the control plane and worker nodes that are deployed during the cluster creation.
For control plane HA, a virtual service is created in NSX Advanced Load Balancer with the three control plane nodes that were deployed during cluster creation as its pool members.
For additional product documentation, see Configuring and Managing vSphere Namespaces.
By integrating the Supervisor Cluster and Tanzu Kubernetes Clusters with Tanzu Mission Control (TMC), you get a centralized administrative interface that enables you to manage your global portfolio of Kubernetes clusters.
Tanzu Mission Control is a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and clouds.
This section describes how to register the Supervisor Cluster with Tanzu Mission Control.
The terms Supervisor Cluster and management cluster are used interchangeably.
Before you register the Supervisor Cluster with Tanzu Mission Control, ensure you have the following:
Do the following to register the Supervisor Cluster with Tanzu Mission Control:
Log in to Tanzu Mission Control and navigate to Administration > Management clusters.
Click Register Management Cluster and select vSphere with Tanzu.
On the Register management cluster page, provide a name for the management cluster, and choose a cluster group.
Optionally, you can provide a description and labels for the management cluster.
If you are using a proxy to connect to the Internet, you can configure the proxy settings by toggling Set proxy for the management cluster to Yes.
On the Register page, Tanzu Mission Control generates a YAML file that defines how the management cluster connects to Tanzu Mission Control for registration. The credentials provided in the YAML expire after 48 hours.
Copy the URL provided on the Register page. This URL is needed to install the Tanzu Mission Control agent on your management cluster and complete the registration process.
When the Supervisor Cluster is registered with Tanzu Mission Control, the Tanzu Mission Control agent is installed in the svc-tmc-cXX namespace, which is included with the Supervisor Cluster by default. After installing the agent, you can use the Tanzu Mission Control web interface to provision and manage Tanzu Kubernetes clusters.
Connect to the management cluster and obtain the name of the namespace where you will install the Tanzu Mission Control agent.
kubectl vsphere login --server=<Supervisor Cluster Control Plane VIP> --vsphere-username USERNAME
kubectl get namespaces
Look for the namespace whose name starts with svc-tmc (for example, svc-tmc-cXX).
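For example, you can filter the output directly; the exact suffix varies per environment:
kubectl get namespaces | grep svc-tmc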
Prepare a YAML file with the following content to install the Tanzu Mission Control agent on the management cluster.
# vi tmc-registration.yaml
apiVersion: installers.tmc.cloud.vmware.com/v1alpha1
kind: AgentInstall
metadata:
  name: tmc-agent-installer-config
  namespace: <tmc namespace>
spec:
  operation: INSTALL
  registrationLink: <TMC-REGISTRATION-URL>
Install the Tanzu Mission Control agent using kubectl.
kubectl create -f tmc-registration.yaml
The Tanzu Mission Control cluster agent is installed on the Supervisor Cluster. The resulting output looks similar to the following:
agentinstall.installers.tmc.cloud.vmware.com/tmc-agent-installer-config created
Optionally, you can check the progress of the agent installation by running the following command:
kubectl describe agentinstall tmc-agent-installer-config -n <tmc namespace>
The installation is complete when the status: line at the bottom of the output changes from INSTALLATION_IN_PROGRESS to INSTALLED.
Return to the Tanzu Mission Control console and click Verify Connection.
Clicking on Verify Connection takes you to an overview page that displays the health of the cluster and its components.
For additional product documentation, see Integrate the Tanzu Kubernetes Grid Service on the Supervisor Cluster with Tanzu Mission Control.
If you have deployed a Tanzu Kubernetes Cluster manually, you can register it with Tanzu Mission Control for lifecycle management and policy enforcement.
You can view the workload clusters associated with a Supervisor Cluster under the Workload clusters tab on the overview page of the Supervisor Cluster.
Select the workload cluster that you want to integrate with Tanzu Mission Control and click Manage Cluster.
Select the Cluster group for the workload cluster and click Manage.
Verify that the workload cluster is in a ready state and showing as a managed cluster.
Tanzu Observability (TO) delivers full-stack observability across containerized cloud applications, Kubernetes health, and cloud infrastructure. The solution is consumed through a Software-as-a-Service (SaaS) subscription model, managed by VMware. This SaaS model allows the solution to scale to meet metrics requirements without the need for customers to maintain the solution itself.
Tanzu Observability by Wavefront significantly enhances observability for your workloads running in Tanzu Kubernetes Grid clusters.
Do the following to enable Tanzu Observability on the Tanzu Kubernetes Grid cluster:
Log in to the Tanzu Mission Control console and ensure that Tanzu Observability is enabled for your organization. If it is not enabled, enable it by navigating to Administration > Integrations.
Create a Service Account in Tanzu Observability (TO) to enable communication between Tanzu Observability and Tanzu Mission Control.
To deploy the Tanzu Observability collectors,
From the dropdown list, select Tanzu Observability and click Add.
Click Setup New Credentials.
Enter the Tanzu Observability URL and API token and click Confirm.
Tanzu Mission Control installs an extension on your cluster to collect data from the cluster and send it to your Wavefront account in one-minute intervals.
In about five minutes, the Tanzu Observability status on your cluster displays as OK.
Log in to the Tanzu Observability portal to view the metrics collection for the cluster.
For additional product documentation, see Enable Observability for Your Organization.
VMware Tanzu Service Mesh (TSM) is an enterprise-class service mesh solution that provides consistent control and security for microservices, end users, and data across all your clusters and clouds in the most demanding multi-cluster and multi-cloud environments.
Do the following to enable Tanzu Service Mesh and add Tanzu Kubernetes Grid clusters:
Log in to the Tanzu Mission Control console and verify that Tanzu Service Mesh is enabled for your organization. If it is not enabled, enable it by navigating to Administration > Integrations.
Navigate to the cluster you want to integrate with Tanzu Service Mesh.
On the cluster detail Overview page, click Add Integrations > Tanzu Service Mesh > Add.
Select Enable Tanzu Service Mesh on all namespaces.
If there are specific namespaces that you want to exclude, select the Exclude namespaces… option and choose the namespaces to add to the exclusion list.
Click Confirm.
Log in to Tanzu Service Mesh to check the status of the installation.
After the Tanzu Service Mesh extension is installed, you can access the Tanzu Service Mesh console through the Integrations tile on the Overview tab of the cluster in the Tanzu Mission Control console.
This section provides the steps for installing user-managed packages in a Tanzu Kubernetes (workload) cluster created by the Tanzu Kubernetes Grid Service.
Before you install user-managed packages on a workload cluster, ensure the following:
A bootstrap machine with the following installed:
The following steps describe the workflow for installing the user-managed packages:
Add the Supervisor Cluster as a management cluster to Tanzu CLI by running the following commands:
kubectl vsphere login --server=<Supervisor Cluster VIP> --vsphere-username USERNAME --insecure-skip-tls-verify
kubectl config use-context <Supervisor VIP>
tanzu login --kubeconfig ~/.kube/config --context <SUPERVISOR-VIP>
Prepare the workload cluster for installing the user-managed packages.
Change context to the workload cluster by running commands similar to the following:
kubectl vsphere login [email protected] --server=<Supervisor Cluster VIP> --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name=WORKLOAD-CLUSTER-NAME --tanzu-kubernetes-cluster-namespace WORKLOAD-CLUSTER-NAMESPACE
kubectl config use-context WORKLOAD-CLUSTER-NAME
Check if a default storage class is defined.
kubectl get storageclass
If no default storage class is listed, edit the Tanzu Kubernetes Cluster to specify one.
kubectl config use-context <Supervisor VIP>
kubectl edit tkc <workload-cluster>
Locate the lines that read topology: controlPlane and add the storage policy above them, as shown in the following sample screen capture.
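In text form, and assuming the v1alpha2 schema used earlier in this document, the edited specification looks similar to the following sketch; the storage class name is the sample value used earlier, so replace it with yours:
spec:
  # Add the settings block above the existing topology block to set a
  # default storage class for the cluster.
  settings:
    storage:
      defaultClass: vsan-default-storage-policy
  topology:
    controlPlane:
      # ...existing content unchanged...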
Create cluster role bindings for installing the user-managed packages.
By default, the newly created workload cluster does not have a cluster role binding that grants authenticated users access to install packages using the default PSP vmware-system-privileged.
Create a role binding deployment YAML as follows:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tkgs-rbac
roleRef:
  kind: ClusterRole
  name: psp:vmware-system-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
Apply the role binding.
kubectl apply -f rbac.yaml
You will see the following output, which indicates that the command executed successfully.
clusterrolebinding.rbac.authorization.k8s.io/tkgs-rbac created
The value tkgs-rbac is just a name; you can replace it with a name of your choice.
Install kapp-controller.
Create a file kapp-controller.yaml containing the Kapp Controller Manifest code.
Apply the kapp-controller.yaml file to the workload cluster.
kubectl apply -f kapp-controller.yaml
Verify that kapp-controller pods are created in the tkg-system namespace and are in a running state.
kubectl get pods -n tkg-system | grep kapp-controller
Add the standard packages repository to the Tanzu CLI.
Add the Tanzu package repository.
tanzu package repository add tkgs-repo --url projects.registry.vmware.com/tkg/packages/standard/repo:v1.4.0 -n tanzu-package-repo-global
Verify that the package repository has been added and that reconciliation is successful.
tanzu package repository list -A
If the repository is added successfully, the status reads Reconcile succeeded.
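As a quick follow-up check (assuming the Tanzu CLI package plugin is installed on the bootstrap machine), you can list the packages that the repository exposes:
tanzu package available list -A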