Deploy Management Clusters from a Configuration File

You can use the Tanzu CLI to deploy a management cluster to vSphere, Amazon Web Services (AWS), and Microsoft Azure with a configuration that you specify in a YAML configuration file.

Prerequisites

Before you can deploy a management cluster, you must make sure that your environment meets the requirements for the target platform.

General Prerequisites

  • Make sure that you have met all of the requirements and followed all of the procedures in Install the Tanzu CLI and Other Tools for Use with Standalone Management Clusters.
  • For production deployments, it is strongly recommended to enable identity management for your clusters. For information about the preparatory steps to perform before you deploy a management cluster, see Obtain Your Identity Provider Details in Configure Identity Management. For conceptual information about identity management and access control in Tanzu Kubernetes Grid, see About Identity and Access Management.
  • If you are deploying clusters in an internet-restricted environment to either vSphere or AWS, you must also perform the steps in Prepare an Internet-Restricted Environment. These steps include setting TKG_CUSTOM_IMAGE_REPOSITORY as an environment variable.
  • Important

    It is strongly recommended to use the Tanzu Kubernetes Grid installer interface rather than the CLI to deploy your first management cluster to a given target platform. When you deploy a management cluster by using the installer interface, it populates a cluster configuration file for the management cluster with the required parameters. You can use the created configuration file as a model for future deployments from the CLI to this target platform.

  • If you plan on registering the management cluster with Tanzu Mission Control, ensure that your workload clusters meet the requirements listed in Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control in the Tanzu Mission Control documentation.

Infrastructure Prerequisites

vSphere
Make sure that you have met all of the requirements listed in Prepare to Deploy Management Clusters to vSphere.
Important

On vSphere with Tanzu, you do not need to deploy a management cluster. See vSphere with Tanzu Supervisor is a Management Cluster.

AWS
Make sure that you have met all of the requirements listed in Prepare to Deploy Management Clusters to AWS.
  • For information about the configurations of the different sizes of node instances, for example, t3.large or t3.xlarge, see Amazon EC2 Instance Types.
  • For information about when to create a Virtual Private Cloud (VPC) and when to reuse an existing VPC, see Resource Usage in Your Amazon Web Services Account.
  • If this is the first time that you are deploying a management cluster to AWS, create a CloudFormation stack for Tanzu Kubernetes Grid in your AWS account by following the instructions in Create IAM Resources below.

Create IAM Resources

Before you deploy a management cluster to AWS for the first time, you must create a CloudFormation stack for Tanzu Kubernetes Grid, tkg-cloud-vmware-com, in your AWS account. This CloudFormation stack includes the identity and access management (IAM) resources that Tanzu Kubernetes Grid needs to create and run clusters on AWS. For more information, see Permissions Set by Tanzu Kubernetes Grid in Prepare to Deploy Management Clusters to AWS.

  1. If you have already created the CloudFormation stack for Tanzu Kubernetes Grid in your AWS account, skip the rest of this procedure.

  2. If you have not already created the CloudFormation stack for Tanzu Kubernetes Grid in your AWS account, ensure that AWS authentication variables are set either in the local environment or in your AWS default credential provider chain. For instructions, see Configure AWS Account Credentials and SSH Key.

    If you have configured AWS credentials in multiple places, the credential settings used to create the CloudFormation stack are applied in the following order of precedence:

    • Credentials set in the local environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN and AWS_REGION are applied first.
    • Credentials stored in a shared credentials file as part of the default credential provider chain are applied next. You can specify the location of the credentials file to use in the local environment variable AWS_SHARED_CREDENTIAL_FILE. If this environment variable is not defined, the default location of $HOME/.aws/credentials is used. If you use credential profiles, the command uses the profile name specified in the AWS_PROFILE local environment configuration variable. If you do not specify a value for this variable, the profile named default is used.

      For an example of how the default AWS credential provider chain is interpreted for Java apps, see Working with AWS Credentials in the AWS documentation.
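
    For example, a minimal sketch of the first option, setting static credentials as local environment variables on the bootstrap machine before you create the stack, might look like the following. The key, token, and region values are placeholders, not values from your account:

    export AWS_ACCESS_KEY_ID="PLACEHOLDER-ACCESS-KEY-ID"          # placeholder, not a real key
    export AWS_SECRET_ACCESS_KEY="PLACEHOLDER-SECRET-ACCESS-KEY"  # placeholder, not a real secret
    export AWS_SESSION_TOKEN="PLACEHOLDER-SESSION-TOKEN"          # only required for temporary credentials
    export AWS_REGION="us-west-2"                                 # the region in which to create the stack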

  3. Run the following command:

    tanzu mc permissions aws set
    

    For more information about this command, run tanzu mc permissions aws set --help.

Important

The tanzu mc permissions aws set command replaces the clusterawsadm command line utility that existed in Tanzu Kubernetes Grid v1.1.x and earlier. For existing management and workload clusters initially deployed with v1.1.x or earlier, continue to use the CloudFormation stack that was created by running the clusterawsadm alpha bootstrap create-stack command. For Tanzu Kubernetes Grid v1.2 and later clusters, use the tkg-cloud-vmware-com stack.

Azure
Make sure that you have met the requirements listed in Prepare to Deploy Management Clusters to Microsoft Azure.

For information about the configurations of the different sizes of node instances for Azure, for example, Standard_D2s_v3 or Standard_D4s_v3, see Sizes for virtual machines in Azure.


Create the Cluster Configuration File

Before creating a management cluster using the Tanzu CLI, you must define its configuration in a YAML configuration file that provides the base configuration for the cluster. When you deploy the management cluster from the CLI, you specify this file by using the --file option of the tanzu mc create command.

Running the tanzu config init command for the first time creates the ~/.config/tanzu/tkg subdirectory that contains the Tanzu Kubernetes Grid configuration files.
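
For example, if you have not run the Tanzu CLI before, you can initialize the configuration directory and confirm that it exists; the exact files present vary by CLI version:

tanzu config init
ls ~/.config/tanzu/tkg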

If you have previously deployed a management cluster by running tanzu mc create --ui, the ~/.config/tanzu/tkg/clusterconfigs directory contains management cluster configuration files with settings saved from each invocation of the installer interface. Depending on the infrastructure on which you deployed the management cluster, you can use these files as templates for cluster configuration files for new deployments to the same infrastructure. Alternatively, you can create management cluster configuration files from the templates that are provided in this documentation.

  • To use the configuration file from a previous deployment that you performed by using the installer interface, make a copy of the configuration file with a new name, open it in a text editor, and update the configuration. For information about how to update all of the settings, see the Configuration File Variable Reference.
  • To create a new configuration file, see Create a Management Cluster Configuration File below. This section provides configuration file templates for each target platform.

VMware recommends using a dedicated configuration file for each management cluster, with configuration settings specific to a single infrastructure.
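
For example, to base a new deployment on a configuration saved by a previous run of the installer interface, you might copy the installer-generated file to a dedicated, descriptively named file and edit the copy. The file names below are hypothetical; the installer generates a unique name for each saved configuration:

cd ~/.config/tanzu/tkg/clusterconfigs
cp ajd8iw2k7.yaml vsphere-mgmt-cluster-2.yaml   # source name is a hypothetical installer-generated file
vi vsphere-mgmt-cluster-2.yaml                  # update the settings for the new deployment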

Create a Management Cluster Configuration File

Create a standalone management cluster configuration file using the instructions and templates below.

Consult the Configuration File Variable Reference for details about each variable.

Important

- As described in Configuring the Management Cluster, environment variables override values from a cluster configuration file. To use all settings from a cluster configuration file, unset any conflicting environment variables before you deploy the management cluster from the CLI.
- Support for IPv6 addresses in Tanzu Kubernetes Grid is limited; see Deploy Clusters on IPv6 (vSphere Only). If you are not deploying to an IPv6-only networking environment, all IP address settings in your configuration files must be IPv4.
- Some parameters configure identical properties. For example, the SIZE property configures the same infrastructure settings as all of the control plane and worker node size and type properties for the different target platforms, but at a more general level. In such cases, avoid setting conflicting or redundant properties.

To create a configuration file for deploying a standalone management cluster:

  1. Copy and paste the contents of the template for your target platform into a text editor.

    Copy a template from one of the following locations:

  2. Save the file with a .yaml extension and an appropriate name, for example aws-mgmt-cluster-config.yaml. If you have already deployed a management cluster from the installer interface, you can save the file in the default location for cluster configurations, ~/.config/tanzu/tkg/clusterconfigs.

The subsequent sections describe how to update the settings that are common to all target platforms as well as the settings that are specific to each of vSphere, AWS, and Azure.

Configure Basic Management Cluster Creation Information

The basic management cluster creation settings define the infrastructure on which to deploy the management cluster and other basic settings. They are common to all target platforms.

  • For CLUSTER_PLAN, specify whether you want to deploy a development cluster, which provides a single control plane node, or a production cluster, which provides a highly available management cluster with three control plane nodes. Specify dev or prod.
  • For INFRASTRUCTURE_PROVIDER, specify aws, azure, or vsphere.

    INFRASTRUCTURE_PROVIDER: aws
    
    INFRASTRUCTURE_PROVIDER: azure
    
    INFRASTRUCTURE_PROVIDER: vsphere
    
  • Optionally deactivate participation in the VMware Customer Experience Improvement Program (CEIP) by setting ENABLE_CEIP_PARTICIPATION to false. For information about the CEIP, see Manage Participation in CEIP and https://www.vmware.com/solutions/trustvmware/ceip.html.

  • Optionally deactivate audit logging by setting ENABLE_AUDIT_LOGGING to false. For information about audit logging, see Audit Logging.
  • If the recommended CIDR ranges of 100.96.0.0/11 for pods and 100.64.0.0/13 for services are unavailable, update CLUSTER_CIDR for the cluster pod network and SERVICE_CIDR for the cluster service network.

For example:

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

CLUSTER_NAME: aws-mgmt-cluster
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: aws
ENABLE_CEIP_PARTICIPATION: true
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

Configure Identity Management

Set IDENTITY_MANAGEMENT_TYPE to ldap or oidc. Set to none or omit to deactivate identity management. It is strongly recommended to enable identity management for production deployments.

IDENTITY_MANAGEMENT_TYPE: oidc
IDENTITY_MANAGEMENT_TYPE: ldap

OIDC

To configure OIDC, update the variables below. For information about how to configure the variables, see Identity Providers - OIDC in the Configuration File Variable Reference.

For example:

OIDC_IDENTITY_PROVIDER_CLIENT_ID: 0oa2i[...]NKst4x7
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: 331!b70[...]60c_a10-72b4
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-[...].okta.com
OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email

LDAP

To configure LDAP, uncomment and update the LDAP_* variables with information about your LDAPS server. For information about how to configure the variables, see Identity Providers - LDAP in the Configuration File Variable Reference.

For example:

LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: dc=example,dc=com
LDAP_GROUP_SEARCH_FILTER: (objectClass=posixGroup)
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: memberUid
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: uid
LDAP_HOST: ldaps.example.com:636
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ou=people,dc=example,dc=com
LDAP_USER_SEARCH_FILTER: (objectClass=posixAccount)
LDAP_USER_SEARCH_NAME_ATTRIBUTE: uid
LDAP_USER_SEARCH_USERNAME: uid

Configure Proxies

To optionally send outgoing HTTP(S) traffic from the management cluster to a proxy, for example in an internet-restricted environment, uncomment and set the *_PROXY settings. The proxy settings are common to all target platforms. You can choose to use one proxy for HTTP requests and another proxy for HTTPS requests or to use the same proxy for both HTTP and HTTPS requests. You cannot change the proxy after you deploy the cluster.

Note

On vSphere, traffic from cluster VMs to vCenter cannot be proxied. In a proxied vSphere environment, you must either set VSPHERE_INSECURE to true or add the vCenter IP address or hostname to the TKG_NO_PROXY list.

  • TKG_HTTP_PROXY_ENABLED: Set this to true to configure a proxy.

  • TKG_PROXY_CA_CERT: Set this to the proxy server’s CA if its certificate is self-signed.

  • TKG_HTTP_PROXY: This is the URL of the proxy that handles HTTP requests. To set the URL, use the format below:

    PROTOCOL://USERNAME:PASSWORD@FQDN-OR-IP:PORT
    

    Where:

    • (Required) PROTOCOL: This must be http.
    • (Optional) USERNAME and PASSWORD: This is your HTTP proxy username and password. You must set USERNAME and PASSWORD if the proxy requires authentication.

    Note: When deploying management clusters with the CLI, the following non-alphanumeric characters cannot be used in passwords: # ` ^ | / \ ? % ^ { [ ] } " < >.

    • (Required) FQDN-OR-IP: This is the FQDN or IP address of your HTTP proxy.
    • (Required) PORT: This is the port number that your HTTP proxy uses.

    For example, http://user:password@myproxy.com:1234.

  • TKG_HTTPS_PROXY: This is the URL of the proxy that handles HTTPS requests. You can set TKG_HTTPS_PROXY to the same value as TKG_HTTP_PROXY or provide a different value. To set the value, use the URL format described for TKG_HTTP_PROXY above, where:

    • (Required) PROTOCOL: This must be http.
    • (Optional) USERNAME and PASSWORD: This is your HTTPS proxy username and password. You must set USERNAME and PASSWORD if the proxy requires authentication.

    Note: When deploying management clusters with the CLI, the following non-alphanumeric characters cannot be used in passwords: # ` ^ | / \ ? % ^ { [ ] } " < >.

    • (Required) FQDN-OR-IP: This is the FQDN or IP address of your HTTPS proxy.
    • (Required) PORT: This is the port number that your HTTPS proxy uses.

    For example, http://user:password@myproxy.com:1234.

  • TKG_NO_PROXY: This sets one or more comma-separated network CIDRs or hostnames that must bypass the HTTP(S) proxy, for example to enable the management cluster to communicate directly with infrastructure that runs on the same network, behind the same proxy. Do not use spaces in the comma-separated list setting. For example, noproxy.yourdomain.com,192.168.0.0/24.

    On vSphere, this list must include:

    • Your vCenter IP address or hostname.
    • The CIDR of VSPHERE_NETWORK, which includes the IP address of your control plane endpoint. If you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN, also add that FQDN to the TKG_NO_PROXY list.

    Internally, Tanzu Kubernetes Grid appends localhost, 127.0.0.1, the values of CLUSTER_CIDR and SERVICE_CIDR, .svc, and .svc.cluster.local to the value that you set in TKG_NO_PROXY. It also appends your AWS VPC CIDR and 169.254.0.0/16 for deployments to AWS and your Azure VNET CIDR, 169.254.0.0/16, and 168.63.129.16 for deployments to Azure. For vSphere, you must manually add the CIDR of VSPHERE_NETWORK, which includes the IP address of your control plane endpoint, to TKG_NO_PROXY. If you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN, add both the FQDN and VSPHERE_NETWORK to TKG_NO_PROXY.

    Important

    If the cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment, ensure that those endpoints are reachable by the proxies that you set above or add them to TKG_NO_PROXY. Depending on your environment configuration, this may include, but is not limited to:

    • Your OIDC or LDAP server
    • Harbor
    • VMware NSX
    • NSX Advanced Load Balancer
    • AWS VPC CIDRs that are external to the cluster

For example:

#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------

TKG_HTTP_PROXY_ENABLED: true
TKG_PROXY_CA_CERT: "LS0t[...]tLS0tLQ=="
TKG_HTTP_PROXY: "http://myproxy.com:1234"
TKG_HTTPS_PROXY: "http://myproxy.com:1234"
TKG_NO_PROXY: "noproxy.yourdomain.com,192.168.0.0/24"
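
On vSphere, where you must add your vCenter address and the VSPHERE_NETWORK CIDR to TKG_NO_PROXY yourself, the same block might look like the following sketch, assuming a hypothetical vCenter at vcenter.example.com and a VSPHERE_NETWORK CIDR of 10.10.0.0/16:

#! Proxy configuration for a hypothetical proxied vSphere environment
TKG_HTTP_PROXY_ENABLED: true
TKG_HTTP_PROXY: "http://myproxy.com:1234"
TKG_HTTPS_PROXY: "http://myproxy.com:1234"
TKG_NO_PROXY: "vcenter.example.com,10.10.0.0/16,noproxy.yourdomain.com,192.168.0.0/24"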

Configure Node Settings

By default, all cluster nodes run Ubuntu v20.04 on all target platforms. On vSphere, you can optionally deploy clusters that run Photon OS on their nodes. On AWS, nodes can optionally run Amazon Linux 2. For the architecture, the default and only current choice is amd64. For the OS and version settings, see Node Configuration in the Configuration File Variable Reference.

For example:

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

OS_NAME: "photon"
OS_VERSION: "3"
OS_ARCH: "amd64"

How you set node compute configuration and sizes depends on the target platform. For information, see Management Cluster Configuration for vSphere, Management Cluster Configuration for AWS, or Management Cluster Configuration for Microsoft Azure.

Configure Machine Health Checks

Optionally, update variables based on your deployment preferences and using the guidelines described in the Machine Health Checks section of Configuration File Variable Reference.

For example:

ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_MAX_UNHEALTHY_CONTROL_PLANE: 60%
MHC_MAX_UNHEALTHY_WORKER_NODE: 60%
MHC_UNKNOWN_STATUS_TIMEOUT: 10m
MHC_FALSE_STATUS_TIMEOUT: 20m

Configure a Private Image Registry

If you are deploying the management cluster in an Internet-restricted environment, uncomment and update the TKG_CUSTOM_IMAGE_REPOSITORY_* settings. These settings are common to all target platforms. You do not need to configure the private image registry settings if:

  • You are deploying the management cluster in an Internet-restricted environment and you have set the TKG_CUSTOM_IMAGE_REPOSITORY_* variables by running the tanzu config set command, as described in Prepare an Internet-Restricted Environment. Environment variables set by running tanzu config set override values from a cluster configuration file.
  • You are deploying the management cluster in an environment that has access to the external internet.

For example:

#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------

TKG_CUSTOM_IMAGE_REPOSITORY: "custom-image-repository.io/yourproject"
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0t[...]tLS0tLQ=="
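
If you instead set these values as environment variables, as described in Prepare an Internet-Restricted Environment, a sketch of the tanzu config set invocations might look like the following, reusing the same placeholder repository values as above:

tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE LS0t[...]tLS0tLQ==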

Configure Antrea CNI

By default, clusters that you deploy with the Tanzu CLI provide in-cluster container networking with the Antrea container network interface (CNI).

With ANTREA_* configuration variables, you can optionally deactivate Source Network Address Translation (SNAT) for pod traffic, implement hybrid, noEncap, or NetworkPolicyOnly traffic encapsulation modes, use proxies and network policies, and implement Traceflow.

Proxy Settings: Antrea proxy and related configurations determine which TKG components handle network traffic with service types ClusterIP, NodePort, and LoadBalancer originating from internal pods, internal nodes, and external clients:

  • ANTREA_PROXY=true and ANTREA_PROXY_ALL=false (default): AntreaProxy handles ClusterIP traffic from pods, and kube-proxy handles all service traffic from nodes and external traffic, which has service type NodePort.
  • ANTREA_PROXY=false: kube-proxy handles all service traffic from all sources; overrides settings for ANTREA_PROXY_ALL.
  • ANTREA_PROXY_ALL=true: AntreaProxy handles all service traffic from all nodes and pods.
    • If present, kube-proxy also redundantly handles all service traffic from nodes and hostNetwork pods, including Antrea components, typically before AntreaProxy does.
    • If kube-proxy has been removed, as described in Removing kube-proxy, AntreaProxy alone serves all traffic types from all sources.
      • This configuration is Experimental and unsupported.
      • Because AntreaProxy provides ClusterIP routing for traffic to the Kubernetes API server while also connecting to that server itself, it is safer to give AntreaProxy its own address for kube-apiserver.
        • You configure this as ANTREA_KUBE_APISERVER_OVERRIDE in the format CONTROL-PLANE-VIP:PORT, as shown in the sketch after this list. The address should be either maintained by kube-vip or a static IP for a control plane node.
        • If this value is wrong, Antrea crashes and container networking does not work.
  • LoadBalancer service:
    • To use Antrea as an in-cluster LoadBalancer solution, enable ANTREA_SERVICE_EXTERNALIP and define Antrea ExternalIPPool custom resources as described in Service of type LoadBalancer in the Antrea documentation.
    • kube-proxy cannot serve as a load balancer and needs a third-party load balancer solution for allocating and advertising LoadBalancer IP addresses.
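
As a sketch of the experimental proxy-all configuration described above, assuming a hypothetical control plane VIP of 10.10.10.100 maintained by kube-vip and the default API server port of 6443, the relevant variables might be set as follows:

ANTREA_PROXY: true
ANTREA_PROXY_ALL: true
ANTREA_KUBE_APISERVER_OVERRIDE: "10.10.10.100:6443"   # hypothetical CONTROL-PLANE-VIP:PORT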

For more information about Antrea, see the following resources:

To optionally configure these features on Antrea, uncomment and update the ANTREA_* variables. For example:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

ANTREA_NO_SNAT: true
ANTREA_NODEPORTLOCAL: true
ANTREA_NODEPORTLOCAL_ENABLED: true
ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
ANTREA_TRAFFIC_ENCAP_MODE: "encap"
ANTREA_PROXY: true
ANTREA_PROXY_ALL: false
ANTREA_PROXY_LOAD_BALANCER_IPS: false
ANTREA_PROXY_NODEPORT_ADDRS:
ANTREA_PROXY_SKIP_SERVICES: ""
ANTREA_POLICY: true
ANTREA_TRACEFLOW: true
ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
ANTREA_ENABLE_USAGE_REPORTING: false
ANTREA_EGRESS: true
ANTREA_EGRESS_EXCEPT_CIDRS: ""
ANTREA_FLOWEXPORTER: false
ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "5s"
ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
ANTREA_IPAM: false
ANTREA_KUBE_APISERVER_OVERRIDE: ""
ANTREA_MULTICAST: false
ANTREA_MULTICAST_INTERFACES: ""
ANTREA_NETWORKPOLICY_STATS: true
ANTREA_SERVICE_EXTERNALIP: true
ANTREA_TRAFFIC_ENCRYPTION_MODE: none
ANTREA_TRANSPORT_INTERFACE: ""
ANTREA_TRANSPORT_INTERFACE_CIDRS: ""

Configure IaaS-Specific Variables

Continue to update the configuration file settings for vSphere, AWS, or Azure. For the configuration file settings that are specific to each target platform, see the corresponding topic:

Run the tanzu mc create Command

After you have created or updated the cluster configuration file and downloaded the most recent BOM, you can deploy a management cluster by running the tanzu mc create --file CONFIG-FILE command, where CONFIG-FILE is the name of the configuration file. If your configuration file is the default ~/.config/tanzu/tkg/cluster-config.yaml, you can omit the --file option. To review the Kubernetes manifest that tanzu mc create will apply without making changes, you can optionally use the --dry-run flag to print the manifest. This invocation still runs the validation checks described below before generating the manifest.

Caution

The tanzu mc create command takes time to complete. While tanzu mc create is running, do not run additional invocations of tanzu mc create on the same bootstrap machine to deploy multiple management clusters, change context, or edit ~/.kube-tkg/config.

To deploy a management cluster, run the tanzu mc create command. For example:

tanzu mc create --file path/to/cluster-config-file.yaml
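
To preview the Kubernetes manifest without deploying, as described above, you can add the --dry-run flag and, optionally, redirect the output to a file; the file name here is hypothetical:

tanzu mc create --file path/to/cluster-config-file.yaml --dry-run > mc-manifest.yaml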

Validation Checks

When you run tanzu mc create, the command performs several validation checks before deploying the management cluster. The checks are different depending on the infrastructure to which you are deploying the management cluster.

vSphere
The command verifies that the target vSphere infrastructure meets the following requirements:
  • The vSphere credentials that you provided are valid.
  • Nodes meet the minimum size requirements.
  • The base image template exists in vSphere and is valid for the specified Kubernetes version.
  • The required resources, including the resource pool, datastores, and folder, exist in vSphere.
AWS
The command verifies that the target AWS infrastructure meets the following requirements:
  • The AWS credentials that you provided are valid.
  • The CloudFormation stack exists.
  • The node instance type is supported.
  • The region and availability zone (AZ) match.
Azure
The command verifies that the target Azure infrastructure meets the following requirements:
  • The Azure credentials that you provided are valid.
  • The public SSH key is encoded in base64 format.
  • The node instance type is supported.

If any of these conditions are not met, the tanzu mc create command fails.

Monitoring Progress

When you run tanzu mc create, you can follow the progress of the deployment of the management cluster in the terminal. The first run of tanzu mc create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so are faster.

If tanzu mc create fails before the management cluster deploys, you should clean up artifacts on your bootstrap machine before you re-run tanzu mc create. See the Troubleshooting Management Cluster Issues topic for details. If the machine on which you run tanzu mc create shuts down or restarts before the local operations finish, the deployment will fail.

If the deployment succeeds, you see a confirmation message in the terminal:

Management cluster created! You can now create your first workload cluster by running tanzu cluster create [name] -f [file]

What to Do Next

For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, how to create namespaces, and how to register the management cluster with Tanzu Mission Control, see Examine and Register a Newly-Deployed Standalone Management Cluster.
