This topic describes how to use the VMware Tanzu Kubernetes Grid Integrated Edition Management Console (TKGI MC) Configuration Wizard to deploy TKGI on vSphere.
To deploy TKGI from a YAML, see Deploy Tanzu Kubernetes Grid Integrated Edition by Importing a YAML Configuration File.
To upgrade an existing TKGI MC installation, see Upgrade Tanzu Kubernetes Grid Integrated Edition Management Console.
To deploy TKGI by using the TKGI MC Configuration Wizard:
Ensure your environment satisfies the following:
To launch the Configuration Wizard:
To get help in the wizard at any time, click the ? icon at the top of the page and select Help, or click the More Info… links in each section to see help topics relevant to that section. Click the i icons for tips about how to fill in specific fields.
To connect to a vCenter Server:
Enter the IP address or FQDN for the vCenter Server instance on which to deploy Tanzu Kubernetes Grid Integrated Edition.
Select the data center in which to deploy Tanzu Kubernetes Grid Integrated Edition from the drop-down menu.
WARNING: Ideally, do not deploy TKGI from the management console to a data center that also includes TKGI instances that you deployed manually. If deploying management console and manual instances of TKGI to the same data center cannot be avoided, make sure that the TKGI instances that you deployed manually do not use the folder names BoshVMFolder: pks_vms, BoshTemplateFolder: pks_templates, or BoshDiskPath: pks_disk. If a manual installation uses these folder names, the VMs that they contain are deleted when you delete a TKGI instance from the management console.
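For reference, the reserved names map to these settings (shown here as name-value pairs; the exact layout in a manual installation's configuration may differ):

```
BoshVMFolder: pks_vms
BoshTemplateFolder: pks_templates
BoshDiskPath: pks_disk
```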
Provide connection information for the container networking interface to use with Tanzu Kubernetes Grid Integrated Edition. Tanzu Kubernetes Grid Integrated Edition Management Console provides three network configuration options for your Tanzu Kubernetes Grid Integrated Edition deployments: Automated NAT deployment, Bring your own topology, and Antrea CNI.
Each network configuration option has specific prerequisites:
Important: You cannot change the type of networking after you deploy Tanzu Kubernetes Grid Integrated Edition.
Provide information about a VMware NSX network that you have not already configured for use with Tanzu Kubernetes Grid Integrated Edition. You provide information about your VMware NSX setup, and Tanzu Kubernetes Grid Integrated Edition Management Console creates the necessary objects and configures them for you. Make sure that your VMware NSX setup satisfies the Prerequisites for an Automated NAT Deployment to VMware NSX before you begin.
To provide information about a VMware NSX network:
Select the NSX edge nodes to use, for example nsx-edge-1 and nsx-edge-2. The second edge node is optional for proof-of-concept deployments, but it is strongly recommended for production deployments. To use only one edge node, set Edge Node two to None.
Optionally activate Tier0 Active-Active Mode.
By default, the management console sets the high availability (HA) mode of the tier 0 router to active-standby. You can optionally activate active-active mode on the tier 0 router, so that all NAT configuration moves from the tier 0 to the tier 1 router.
Enter information about the network resources for the Tanzu Kubernetes Grid Integrated Edition deployment to use.
Optionally activate Manage certificates manually for NSX if NSX Manager uses a custom CA certificate.
Important: If VMware NSX uses custom certificates and you do not provide the CA certificate for NSX Manager, Tanzu Kubernetes Grid Integrated Edition Management Console automatically generates one and registers it with NSX Manager. This can cause other services that are integrated with NSX Manager not to function correctly. If you have manually deployed TKGI instances to the same data center as the one to which you are deploying this instance, you must select Manage certificates manually for NSX and enter the current NSX manager CA certificate.
Enter the contents of the CA certificate in the NSX Manager CA Cert text box:
-----BEGIN CERTIFICATE-----
nsx_manager_CA_certificate_contents
-----END CERTIFICATE-----
If you do not select Manage certificates manually for NSX, the management console generates a certificate for you.
For the next steps, see Configure Identity Management.
Provide information about a VMware NSX network that you have already fully configured for use with Tanzu Kubernetes Grid Integrated Edition. Make sure that your VMware NSX setup satisfies the Prerequisites for a Bring Your Own Topology Deployment to VMware NSX before you begin.
To provide information about a VMware NSX network:
Use the drop-down menus to select existing network resources for each of the following items.
Network for TKGI Management Plane: Select the name of an opaque network on an NSX Virtual Distributed Switch (N-VDS).
Important: Do not use the network on which you deployed the Tanzu Kubernetes Grid Integrated Edition Management Console VM as the network for the management plane. Using the same network for the management console VM and the management plane requires additional VMware NSX configuration and is not recommended.
Enter IP addresses for the following resources.
Optionally deactivate NAT Mode to implement a routable (No-NAT) topology.
Tanzu Kubernetes Grid Integrated Edition supports NAT topologies, No-NAT with logical switch (NSX) topologies, No-NAT with virtual switch (VSS/VDS) topologies, and multiple tier-0 routers for tenant isolation. For information about implementing a routable topology, see No-NAT Topology in NSX Deployment Topologies for Tanzu Kubernetes Grid Integrated Edition.
Optionally activate Manage certificates manually for NSX if NSX Manager uses a custom CA certificate.
Important: If VMware NSX uses custom certificates and you do not provide the CA certificate for NSX Manager, Tanzu Kubernetes Grid Integrated Edition Management Console automatically generates one and registers it with NSX Manager. This can cause other services that are integrated with NSX Manager not to function correctly. If you have manually deployed TKGI instances to the same data center as the one to which you are deploying this instance, you must select Manage certificates manually for NSX and enter the current NSX manager CA certificate.
Enter the contents of the CA certificate in the NSX Manager CA Cert text box:
-----BEGIN CERTIFICATE-----
nsx_manager_CA_certificate_contents
-----END CERTIFICATE-----
If you do not select Manage certificates manually for NSX, the management console generates a certificate for you.
For the next steps, see Configure Identity Management.
Provide networking information so that Tanzu Kubernetes Grid Integrated Edition Management Console can provision an Antrea network for you during deployment. Make sure that you have the information listed in Prerequisites for vSphere Without an NSX Network before you begin.
To provide networking information:
Configure the Deployment Network Resource options.
Configure the Service Network Resource options.
Configure the Kubernetes network options.
Tanzu Kubernetes Grid Integrated Edition Management Console provides three identity management options for your Tanzu Kubernetes Grid Integrated Edition deployments.
To configure identity management:
Configure the identity management option for your deployments:
You can manage users by using a local database that is created during Tanzu Kubernetes Grid Integrated Edition deployment. After deployment, you can add users and groups to the database and assign roles to them in the Identity Management view of the Tanzu Kubernetes Grid Integrated Edition Management Console.
In the TKGI API FQDN text box, enter an address for the TKGI API, for example api.tkgi.example.com. Note: The FQDN for the TKGI API cannot contain uppercase letters.
Provide information about an existing external Active Directory or LDAP server:
For the user search base, enter the location in the LDAP directory tree from which LDAP user searches begin. For example, a directory for the domain cloud.example.com might use ou=Users,dc=example,dc=com.
For the user search filter, enter a filter for LDAP user searches. For example, cn=Smith returns all objects with a common name equal to Smith. Use cn={0} to return all LDAP objects with the same common name as the user name.
For the group search base, enter the location in the LDAP directory tree from which LDAP group searches begin. For example, a directory for the domain cloud.example.com might use ou=Groups,dc=example,dc=com.
For the group search filter, enter a filter for LDAP group searches, for example member={0}.
For Group Max Search Depth, enter a value between 1 and 10. The default value is 1, which limits querying under the searchBase to one subtree. Values greater than 1 activate nested group searching. If the searchBase in your LDAP groups includes more than one subtree, for example ou=XX1,ou=XX2,ou=XX3,dc=yy1,dc=yy2,dc=yy3, increase the Group Max Search Depth value to support searching all subtrees in your groups. Note: Increasing the LDAP group search depth impacts performance.
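The {0} placeholder substitution described above can be illustrated with a quick local command. UAA performs the real substitution at login time; jsmith is a hypothetical user name used only for this sketch:

```shell
# Show the search filter that results when UAA substitutes the {0}
# placeholder with the name of the user who is logging in.
user="jsmith"        # hypothetical user name
filter="cn={0}"      # the user search filter from the wizard
printf '%s\n' "$filter" | sed "s/{0}/$user/"   # → cn=jsmith
```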
id_token
For the attribute fields, enter the attribute names in your directory that correspond to each user record, for example cn for the first name attribute, sn for the last name attribute, and mail for the email attribute.
In the TKGI API FQDN text box, enter an address for the TKGI API, for example api.tkgi.example.com. Note: The FQDN for the TKGI API cannot contain uppercase letters.
You can configure Tanzu Kubernetes Grid Integrated Edition so that Kubernetes authenticates users against a SAML identity provider. Before you configure a SAML identity provider, you must configure your identity provider to designate Tanzu Kubernetes Grid Integrated Edition as a service provider. For information about how to configure Okta and Azure Active Directory, see the following topics:
After you have configured your identity provider, enter information about the provider in Tanzu Kubernetes Grid Integrated Edition Management Console:
For Provider Name, enter a unique name you create for the Identity Provider.
This name can include only alphanumeric characters, +, _, and -. You must not change this name after deployment because all external users use it to link to the provider.
For Display Name, enter a display name for your provider.
The display name appears as a link on your login page.
Enter the metadata from your identity provider either as XML or as a URL.
For Name ID Format, select the name identifier format for your SAML identity provider.
This translates to username on TKGI. The default is Email Address.
For First Name Attribute and Last Name Attribute, enter the attribute names in your SAML database that correspond to the first and last names in each user record.
These fields are case sensitive.
For Email Attribute, enter the attribute name in your SAML assertion that corresponds to the email address in each user record, for example EmailID. This field is case sensitive.
For External Groups Attribute, enter the attribute name in your SAML database for your user groups.
This field is case sensitive. To map the groups from the SAML assertion to admin roles in Tanzu Kubernetes Grid Integrated Edition, see Grant Tanzu Kubernetes Grid Integrated Edition Access to an External LDAP Group.
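As an illustration, a SAML assertion from your identity provider might carry attributes like the following. The EmailID name matches the example above; the groups attribute name and tkgi-admins value are hypothetical, and your identity provider's actual attribute names may differ:

```
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="EmailID">
    <saml:AttributeValue>jsmith@example.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="groups">
    <saml:AttributeValue>tkgi-admins</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```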
By default, all SAML authentication requests from Tanzu Kubernetes Grid Integrated Edition are signed, but you can optionally deactivate Sign Authentication Requests.
If you deactivate this option, you must configure your identity provider to verify SAML authentication requests.
To validate the signature for the incoming SAML assertions, activate Required Signed Assertions.
If you activate this option, you must configure your Identity Provider to send signed SAML assertions.
For Signature Algorithm, choose an algorithm from the drop-down menu to use for signed requests and assertions. The default value is SHA256.
In the TKGI API FQDN text box, enter an address for the TKGI API Server VM, for example api.tkgi.example.com. Note: The FQDN for the TKGI API cannot contain uppercase letters.
However you manage identities, you can use OpenID Connect (OIDC) to instruct Kubernetes to verify end-user identities based on authentication performed by a User Account and Authentication (UAA) server. Using OIDC lets you set up an external IDP, such as Okta, to authenticate users who access Kubernetes clusters with kubectl. If you activate OIDC, administrators can grant namespace-level or cluster-wide access to Kubernetes end users. If you do not activate OIDC, you must use service accounts to authenticate kubectl users.
Note: You cannot activate OIDC if you intend to integrate Tanzu Kubernetes Grid Integrated Edition with VMware vRealize Operations Management Pack for Container Monitoring.
To configure UAA to verify and authenticate end-user identities:
(Optional) Select Configure created clusters to use UAA as the OIDC provider and provide the following information.
Enter the name of your groups claim, which corresponds to the --oidc-groups-claim flag on the kube-apiserver. This is used to set a user's group in the JSON Web Token (JWT) claim. The default value is roles.
Enter a prefix for your groups claim, which corresponds to the --oidc-groups-prefix flag. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a group name like oidc:developers.
Enter the name of your user name claim, which corresponds to the --oidc-username-claim flag. This is used to set a user's user name in the JWT claim. The default value is user_name. Depending on your provider, admins can enter claims besides user_name, such as email or name.
Enter a prefix for your user name claim, which corresponds to the --oidc-username-prefix flag. This prevents conflicts with existing names. For example, if you enter the prefix oidc:, UAA creates a user name like oidc:admin.
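Conceptually, the four settings above correspond to these kube-apiserver flags, shown here with the example values used above:

```
--oidc-groups-claim=roles
--oidc-groups-prefix=oidc:
--oidc-username-claim=user_name
--oidc-username-prefix=oidc:
```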
(Optional) Select Manage Certificates Manually for TKGI API to generate and upload your own certificates for the TKGI API Server.
If you do not select this option, the management console creates auto-generated, self-signed certificates.
Enter the contents of the certificate in the TKGI API Certificate text box:
-----BEGIN CERTIFICATE-----
tkgi_api_certificate_contents
-----END CERTIFICATE-----
Enter the contents of the certificate key in the Private Key PEM text box:
-----BEGIN RSA PRIVATE KEY-----
tkgi_api_private_key_contents
-----END RSA PRIVATE KEY-----
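As a sketch, a self-signed certificate and key in the formats above can be generated with openssl. The FQDN below is a placeholder; substitute your own TKGI API FQDN:

```shell
# Generate a private key and a self-signed certificate for the TKGI API.
openssl genrsa -out tkgi_api.key 2048
openssl req -new -x509 -key tkgi_api.key -sha256 -days 365 \
  -subj "/CN=api.tkgi.example.com" -out tkgi_api.crt
# Note: on OpenSSL 3.x, genrsa emits a PKCS#8 "BEGIN PRIVATE KEY" header;
# if the text box expects "BEGIN RSA PRIVATE KEY", re-encode the key with
# "openssl rsa" using its traditional output format.
# Inspect the certificate before pasting the PEM contents into the wizard:
openssl x509 -in tkgi_api.crt -noout -subject -enddate
```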
Availability zones specify the compute resources for Kubernetes cluster deployment. Availability zones are a BOSH construct that, in Tanzu Kubernetes Grid Integrated Edition deployments to vSphere, corresponds to vCenter Server clusters, host groups, and resource pools. Availability zones allow you to provide high availability and load balancing to applications. When you run more than one instance of an application, those instances are balanced across all of the availability zones that are assigned to the application. You must configure at least one availability zone. You can configure multiple additional availability zones.
Note: If you select a cluster as an availability zone, Tanzu Kubernetes Grid Integrated Edition Management Console sets the DRS VM-host affinity rule on that cluster to MUST. If you select a host group as an availability zone, Tanzu Kubernetes Grid Integrated Edition Management Console sets the DRS VM-host affinity rule on that group to SHOULD.
To configure availability zones:
Optionally select This is the management availability zone.
The management availability zone is the availability zone in which to deploy the TKGI Management Plane. The management plane consists of the TKGI API VM, Ops Manager, BOSH Director, and Harbor Registry. You can only designate one availability zone as the management zone. If you do not designate an availability zone as the management zone, Tanzu Kubernetes Grid Integrated Edition Management Console selects the first one.
Click Save Availability Zone.
Optionally click Add Availability Zone to add another zone.
You can only select resources that are not already included in another zone. You can create multiple availability zones.
Resource Settings allow you to configure the resources that are allocated to the VM on which the Tanzu Kubernetes Grid Integrated Edition API and other component services, such as UAA, run. Allocate resources according to the workloads that TKGI will run. You can also activate High Availability for the TKGI Database and deploy multiple instances of the TKGI API VM.
In Tanzu Kubernetes Grid Integrated Edition, the MySQL database runs on a VM separate from the Tanzu Kubernetes Grid Integrated Edition API and other components.
You must also designate the datastores to use for the different types of storage required by your Tanzu Kubernetes Grid Integrated Edition deployment.
You can use different datastores for the storage of permanent and ephemeral data. If you deactivate the permanent storage option, Tanzu Kubernetes Grid Integrated Edition uses the ephemeral storage for permanent data. For information about when it is appropriate to share the ephemeral, permanent, and persistent volume datastores or use separate ones, see PersistentVolume Storage Options on vSphere.
You can use VMware vSAN, Network File Share (NFS), or VMFS storage for ephemeral, permanent, and Kubernetes persistent storage. You can only select datastores with a capacity greater than 250GB.
To configure the resources available on the Tanzu Kubernetes Grid Integrated Edition API VM:
For TKGI Database Persistent Disk Size, select the size of the persistent disk for the Tanzu Kubernetes Grid Integrated Edition MySQL database VM.
Use the TKGI Database VM Type drop-down menu to select from different combinations of CPU, RAM, and storage for the Tanzu Kubernetes Grid Integrated Edition MySQL database VM.
For TKGI API Persistent Disk Size, select the size of the persistent disk for the Tanzu Kubernetes Grid Integrated Edition API VM.
Set the TKGI API Persistent Disk Size according to the number of pods that you expect the cluster workload to run continuously. It is recommended to allocate 10GB for every 500 pods. For example:
For up to 1,000 pods, allocate 20GB.
For up to 10,000 pods, allocate 200GB.
For up to 50,000 pods, allocate 1TB.
Use the TKGI API VM Type drop-down menu to select from different combinations of CPU, RAM, and storage for the Tanzu Kubernetes Grid Integrated Edition API VM.
Choose the configuration for the API VM depending on the expected CPU, memory, and storage consumption of the workloads that it will run. For example, some workloads might require a large compute capacity but relatively little storage, while others might require a large amount of storage and less compute capacity.
A plan is a cluster configuration template that defines the set of resources for Tanzu Kubernetes Grid Integrated Edition to use when deploying Kubernetes clusters. A plan allows you to configure the numbers of control plane and worker nodes, select between Linux and Windows OS for worker nodes, specify the configuration of the control plane and worker VMs, set disk sizes, select availability zones for control plane and node VMs, and configure advanced settings.
Tanzu Kubernetes Grid Integrated Edition Management Console provides preconfigured default plans for different sizes of Kubernetes clusters. You can change the default configurations, or you can activate the plans as they are. You must activate at least one plan configuration because when you use the TKGI CLI to create a Kubernetes cluster, you must specify the plan on which you are basing the Kubernetes cluster. If no plans are activated, you cannot create Kubernetes clusters.
Tanzu Kubernetes Grid Integrated Edition plans support privileged containers and three admission control plugins. For information about privileged containers and the supported admission plugins, see Privileged mode for pods in the Kubernetes documentation. For information about admission plugins, see Using Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters.
You can create a maximum of 10 Linux plans and a maximum of 3 Windows plans.
After you have deployed Tanzu Kubernetes Grid Integrated Edition, when you use the management console to create clusters, you can override some of the values that you define in plans by using Compute Profiles.
To configure a plan:
Select the plan to configure, for example one of the preconfigured small, medium, or large plans.
(Optional) Use the drop-down menus and buttons to change the default configurations of the preconfigured plans.
If you use Windows worker nodes, optionally activate the Enable HA Linux Workers option to deploy two Linux worker nodes per Windows cluster instead of one.
The Linux nodes provide cluster services to the Windows clusters.
Consider the following when configuring plans for Windows worker nodes:
If your infrastructure includes existing deployments of VMware Tanzu Mission Control, Wavefront by VMware, VMware vRealize Operations Management Pack for Container Monitoring, or VMware vRealize Log Insight, you can configure TKGI to connect to those services. You can also configure TKGI to forward logs to a Syslog server.
To configure TKGI integration with other products:
Tanzu Mission Control integration lets you monitor and manage Tanzu Kubernetes Grid Integrated Edition clusters from the Tanzu Mission Control console, making the Tanzu Mission Control console a single point of control for all Kubernetes clusters.
For more information about Tanzu Mission Control, see the VMware Tanzu Mission Control home page.
Enter the Tanzu Mission Control URL, without a trailing slash (/).
For Cluster Group, enter default or another value, depending on your role and access policy:
Org Member users in VMware cloud services have a service.admin role in Tanzu Mission Control. These users can create clusters only in the default cluster group, until an organization.admin user grants them the clustergroup.admin or clustergroup.edit role for other cluster groups.
Org Owner users have organization.admin permissions in Tanzu Mission Control. These users can create cluster groups and can grant clustergroup roles to service.admin users through the Tanzu Mission Control Access Policy view.
For Cluster Name Prefix, enter a name prefix for identifying the TKGI clusters in Tanzu Mission Control. This name prefix cannot contain uppercase letters. For more information, see Cluster Group Name Limitation for Tanzu Mission Control Integration in the Release Notes.
Note: Wavefront integration in TKGI has been deprecated.
By connecting your Tanzu Kubernetes Grid Integrated Edition deployment to an existing deployment of Wavefront by VMware, you can obtain detailed metrics about Kubernetes clusters and pods.
To configure Wavefront integration:
vRealize Operations Management Pack for Container Monitoring provides detailed monitoring of your Kubernetes clusters. You can connect your TKGI deployment to an existing instance of VMware vRealize Operations Management Pack for Container Monitoring.
To connect a TKGI deployment to VMware vRealize Operations Management Pack for Container Monitoring:
TKGI MC automatically creates a cAdvisor container in the TKGI deployment after TKGI integration with VMware vRealize Operations Management Pack for Container Monitoring has been activated.
You can configure your TKGI deployment so that an existing deployment of VMware vRealize Log Insight pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod stdout and stderr.
To connect a TKGI deployment to VMware vRealize Log Insight:
Optionally deactivate Disable SSL certificate validation.
Note: If you activate integration with vRealize Log Insight, Tanzu Kubernetes Grid Integrated Edition Management Console generates a unique vRealize Log Insight agent ID for the management console. You must provide this agent ID to vRealize Log Insight so that it can pull the appropriate logs from the management console VM. For information about how to obtain the agent ID, see Obtain the VMware vRealize Log Insight Agent ID for TKGI Management Console in Troubleshooting Tanzu Kubernetes Grid Integrated Edition Management Console.
You can configure your TKGI deployment so that it sends logs for BOSH-deployed VMs, Kubernetes clusters, and namespaces to an existing Syslog server.
To connect your TKGI deployment with an existing Syslog server:
Enter a permitted peer ID.
Harbor is an enterprise-class registry server that you can use to store and distribute container images. Harbor allows you to organize image repositories in projects, and to set up role-based access control to those projects to define which users can access which repositories. Harbor also provides rule-based replication of images between registries, optionally implements vulnerability scanning of stored images with Trivy, and provides detailed logging for project and user auditing.
Harbor uses Trivy to perform vulnerability and security scanning of images in the registry. You can set thresholds that prevent users from running images that exceed those vulnerability thresholds. Once an image is uploaded into the registry, Harbor uses Trivy to check the various layers of the image against known vulnerability databases and reports any issues found.
To deploy and configure Harbor registry:
In the Harbor FQDN text box, enter a name for the Harbor VM, for example harbor.tkgi.example.com.
This is the address at which you access the Harbor administration UI and registry service. Before you set the host name, check for potential host name conflicts between TKGI and Harbor:
If the host name might resolve to an unintended IP address, clear the DNS entry manually to avoid conflicts in subsequent use.
If the host name resolves to an IP address that you intentionally created beforehand, be aware that the IP address in the DNS entry might not be the same as the reachable IP address that TKGI Management Console uses, resulting in network issues. If you must use a pre-created DNS entry, after the TKGI deployment finishes, check the IP address that TKGI Management Console uses for Harbor and update the DNS entry accordingly.
Enter and confirm a password for the Harbor VM.
If your environment does not allow Harbor components to access the external network on which Tanzu Kubernetes Grid Integrated Edition Management Console is running, provide proxy addresses.
These proxies allow Trivy to obtain updates from its vulnerability database.
(Optional) Select Manage Certificates Manually for Harbor to use custom certificates with Harbor.
To use custom certificates with Harbor:
Paste the contents of the server certificate PEM file in the SSL Certificate PEM text box:
-----BEGIN CERTIFICATE-----
ssl_certificate_contents
-----END CERTIFICATE-----
Paste the contents of the certificate key in the SSL Key PEM text box:
-----BEGIN PRIVATE KEY-----
ssl_private_key_contents
-----END PRIVATE KEY-----
Paste the contents of the Certificate Authority (CA) file in the Certificate Authority text box:
-----BEGIN CERTIFICATE-----
CA_certificate_contents
-----END CERTIFICATE-----
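The three PEM values above can be produced with openssl. This is a minimal sketch: the CA name and Harbor FQDN are placeholders, and a production deployment would typically use certificates issued by your organization's CA:

```shell
# Create a throwaway CA, then a Harbor server certificate signed by it.
openssl genrsa -out harbor_ca.key 2048
openssl req -x509 -new -key harbor_ca.key -sha256 -days 365 \
  -subj "/CN=Example Harbor CA" -out harbor_ca.crt
openssl genrsa -out harbor.key 2048
openssl req -new -key harbor.key -subj "/CN=harbor.tkgi.example.com" \
  -out harbor.csr
openssl x509 -req -in harbor.csr -CA harbor_ca.crt -CAkey harbor_ca.key \
  -CAcreateserial -sha256 -days 365 -out harbor.crt
# Verify the chain before pasting the PEM contents into the three text boxes:
openssl verify -CAfile harbor_ca.crt harbor.crt
```

Here harbor.crt goes in SSL Certificate PEM, harbor.key in SSL Key PEM, and harbor_ca.crt in Certificate Authority.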
Apply the configuration to update the TKGI Management Console database with the revised Harbor certificates.
Note: If you use the TKGI Management Console and Harbor and rotate Harbor certificates within the Harbor tile, you must activate the Manage Certificates Manually For Harbor option and configure the new Harbor certificates.
Select the location in which to store image repositories.
Select the size of the disk for the Harbor VM from the Disk Size for Harbor-App drop-down menu.
(Optional) To send Harbor logs to vRealize Log Insight, activate the Enable vRealize Log Insight for Harbor toggle.
If you activate vRealize Log Insight, provide the address and port of your vRealize Log Insight service, and select either UDP or TCP for the transport protocol.
Click Next to complete the configuration wizard.
VMware’s Customer Experience Improvement Program (CEIP) provides VMware with information to improve the products and services, fix problems, and advise you on how best to deploy and use our products. As part of the CEIP program, VMware collects technical information about your organization’s use of Tanzu Kubernetes Grid Integrated Edition Management Console.
To configure VMware’s Customer Experience Improvement Program (CEIP), do the following:
Note: If you join the CEIP Program for Tanzu Kubernetes Grid Integrated Edition, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph on port 443.
Note: Even if you do not wish to participate in CEIP, Tanzu Kubernetes Grid Integrated Edition-provisioned clusters send usage data to the TKGI control plane. However, this data is not sent to VMware and remains on your Tanzu Kubernetes Grid Integrated Edition installation.
When all of the sections of the wizard are green, you can generate a YAML configuration file and deploy TKGI.
Note: If TKGI MC fails to deploy TKGI correctly, always use TKGI MC to cleanly remove the failed deployment. For more information see Delete Your Tanzu Kubernetes Grid Integrated Edition Deployment.
To deploy TKGI:
Click Generate Configuration to see the generated YAML file.
(Optional) Click Export YAML to save a copy of the YAML file for future use.
This is recommended. The manifest is exported as the file PksConfiguration.yaml.
(Optional) Specify an FQDN address for the Ops Manager VM by editing the YAML directly in the YAML editor.
WARNING: You cannot change the Ops Manager FQDN of Tanzu Kubernetes Grid Integrated Edition after it has been deployed.
To specify an FQDN address for the Ops Manager VM, locate the opsman_fqdn: entry in the YAML file and update it with the Ops Manager VM FQDN, for example opsman_fqdn: "myopsman.example.com".
.(Optional) To use a custom certificate for Ops Manager, edit the YAML directly in the YAML editor.
Generate a private key and root certificate for Ops Manager by using openssl. For example:
openssl genrsa -out opsman.key 2048
openssl req -key opsman.key -new -x509 -days 365 -sha256 -extensions v3_ca -out opsman_ca.crt -subj "/C=US/ST=CA/L=Palo Alto/O=Vmware/OU=Eng/CN=Sign By Vmware.Inc"
Locate and update the opsman_private_key:
entry in the YAML file.
opsman_private_key: -----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQ [...] lOiR19fPqc=
-----END RSA PRIVATE KEY-----
Locate and update the opsman_root_cert:
entry in the YAML file.
opsman_root_cert: -----BEGIN CERTIFICATE-----
MIIDtTCCAp2 [..] l2fUi31u2fq0=
-----END CERTIFICATE-----
(Optional) Edit the YAML directly in the YAML editor to specify additional reserved IP ranges in the deployment network or service network.
No VMs will be deployed in the reserved ranges that you specify. To specify additional reserved IP ranges, update the YAML as follows:
Locate the additional_dep_reserved_ip_range: and additional_svc_reserved_ip_range: entries in the YAML file, and update them to specify reserved IP ranges in the deployment and service networks:
additional_dep_reserved_ip_range: "172.16.100.2,172.16.100.3-172.16.100.10"
additional_svc_reserved_ip_range: ""
(Optional) Edit the YAML directly in the YAML editor to specify TKGI Operation Timeout. In large-scale NSX environments, increase the TKGI Operation Timeout to avoid timeouts during cluster deletion.
The TKGI Operation Timeout value is independently configurable on the TKGI tile and TKGI MC configuration YAML. If you use the TKGI MC, the TKGI MC configuration overrides the TKGI tile configuration.
The default TKGI Operation Timeout value is 120 seconds in both configuration settings.
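For illustration, the corresponding entry in the exported YAML might look like the following. The 300-second value is an example, expressed in milliseconds as the setting requires:

```
nsx_feign_client_read_timeout: 300000
```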
To specify the TKGI Operation Timeout:
Locate the nsx_feign_client_read_timeout entry in the YAML file, and replace its value with your optimal Operation Timeout setting, in milliseconds.
Click Apply Configuration and then Continue to deploy Tanzu Kubernetes Grid Integrated Edition.
When the deployment has completed successfully, click Continue to monitor and manage your deployment.
You can now access the Tanzu Kubernetes Grid Integrated Edition control plane and begin deploying Kubernetes clusters. For information about how to deploy clusters directly from the management console, see Create and Manage Clusters in the Management Console.
For information about how you can use Tanzu Kubernetes Grid Integrated Edition Management Console to monitor and manage your Tanzu Kubernetes Grid Integrated Edition deployment, see Monitor and Manage Tanzu Kubernetes Grid Integrated Edition in the Management Console.
Important: If you deployed Tanzu Kubernetes Grid Integrated Edition with plans that use Windows worker nodes, see Enable Plans with Windows Worker Nodes for information about how to install a Windows Server stemcell and other necessary configuration actions that you must perform. Plans that use Linux worker nodes are available immediately, but plans that use Windows worker nodes are ignored until you install the Windows Server stemcell.
If Tanzu Kubernetes Grid Integrated Edition fails to deploy, see Troubleshooting.