This topic describes how to create and manage compute profiles using the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) Command Line Interface (TKGI CLI).
A compute profile enables cluster administrators, with `pks.clusters.admin` accounts, and cluster managers, with `pks.clusters.manage` accounts, to configure TKGI-provisioned Kubernetes clusters with custom settings.
TKGI supports creating and managing compute profiles for Linux- and Windows-based Kubernetes clusters on vSphere with NSX-T networking and for Linux-based Kubernetes clusters on vSphere without NSX-T networking.
For general information about compute profile usage, see About Compute Profiles below.
For information about creating compute profiles, see Create a Compute Profile below.
For information on how cluster managers use compute profiles, see Using Compute Profiles (vSphere).
TKGI-provisioned Kubernetes cluster administrators can use compute profiles to customize node CPU, memory, disk, and AZ settings.
Note: A compute profile overrides only those CPU, memory, disk, and AZ settings that you define in the profile. If you do not define a setting in the profile, its configuration is inherited from the plan.
After you create a compute profile, cluster managers, `pks.clusters.manage`, can apply it to one or more Kubernetes clusters.
For more information, see Compute Profiles vs. Plans below.
To create a compute profile in TKGI, a cluster administrator must:
Define a compute profile in a JSON configuration file. See Compute Profile Format and Compute Profile Parameters below.
Use the TKGI CLI to define the compute profile within TKGI. See The create-compute-profile Command below.
To create a compute profile, a cluster administrator first defines it as a JSON file. Every profile must include the `name`, `description`, and `parameters` properties. Then, depending on what compute resources you want to customize, define `azs`, `control_plane`, or `node_pools`. For example, you can define the following:
- `control_plane` and `node_pools`, with or without `azs`
- `control_plane` or `node_pools`, with or without `azs`
See the table below for examples of compute profiles.
Example | Description |
---|---|
Custom Nodes | Define custom compute resources for Kubernetes control plane and worker nodes. |
Worker Node Pools | Define multiple pools of worker nodes. |
Custom AZs (vSphere with NSX-T Only) | Define AZs for Kubernetes control plane nodes and worker node pools dynamically. |
For information about all of the parameters that you can specify in a compute profile, see Compute Profile Parameters below.
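As a sketch of the required structure, the following Python snippet assembles a minimal profile containing the required `name`, `description`, and `parameters` properties and writes it to a JSON file. The profile name and all values are illustrative assumptions, not recommendations:

```python
import json

# Minimal compute profile: name, description, and parameters are required.
# The name and sizing values below are hypothetical examples.
profile = {
    "name": "example-compute-profile",
    "description": "Example minimal profile",
    "parameters": {
        "cluster_customization": {
            "node_pools": [{
                "name": "pool-1",
                "cpu": 2,
                "memory_in_mb": 4096,
                "ephemeral_disk_in_mb": 16384,
                "instances": 3
            }]
        }
    }
}

# Write the profile so it can be passed to `tkgi create-compute-profile`.
with open("example-compute-profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```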
Cluster administrators can define custom compute resources for control plane nodes and/or worker nodes in a compute profile instead of updating plans. Then, cluster managers can apply the profile to an existing cluster to update the cluster with the new compute resources. As a result, the overall impact to the TKGI control plane and other clusters is smaller.
The example below defines compute resources for control plane nodes and one node pool for workers:
{
"name": "custom-nodes-compute-profile",
"description": "custom-nodes-compute-profile",
"parameters": {
"cluster_customization": {
"control_plane": {
"cpu": 2,
"memory_in_mb": 4096,
"ephemeral_disk_in_mb": 16384,
"persistent_disk_in_mb": 16384,
"instances": 3
},
"node_pools": [{
"cpu": 2,
"memory_in_mb": 4096,
"ephemeral_disk_in_mb": 16384,
"persistent_disk_in_mb": 16384,
"name": "tiny-1",
"instances": 5,
"max_worker_instances": 10
}]
}
}
}
Cluster administrators can define pools of worker nodes with different compute resources. Cluster managers then apply the compute profile to one or more clusters. This enables cluster managers to schedule workloads with different compute requirements on a single cluster.
Warning: If cluster update fails while applying a new compute profile with revised node pool names, do not reapply the previous compute profile: the cluster’s worker nodes will be deleted. If you encounter this scenario, fix the new compute profile configuration without modifying the node pool names, and retry your cluster update. For more information, see tkgi update-cluster compute profile failure in the VMware Tanzu Knowledge Base.
The example below defines compute resources for control plane nodes and two worker node pools. The `control_plane` block in this example is optional.
{
"name": "custom-node-pools-compute-profile",
"description": "custom-node-pools-compute-profile",
"parameters": {
"cluster_customization": {
"control_plane": {
"cpu": 2,
"memory_in_mb": 4096,
"ephemeral_disk_in_mb": 16384,
"instances": 3
},
"node_pools": [{
"cpu": 2,
"memory_in_mb": 4096,
"ephemeral_disk_in_mb": 16384,
"persistent_disk_in_mb": 16384,
"name": "tiny-1",
"instances": 5,
"node_labels": "k1=v1,k2=v2",
"node_taints": "k3=v3:PreferNoSchedule, k4=v4:PreferNoSchedule"
},
{
"cpu": 4,
"memory_in_mb": 4096,
"ephemeral_disk_in_mb": 32768,
"name": "medium-2",
"instances": 1,
"max_worker_instances": 5,
"node_labels": "k1=v3,k2=v4",
"node_taints": "k3=v3:NoSchedule, k4=v4:NoSchedule"
}
]
}
}
}
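The `node_taints` strings in the example above use kubectl's `key=value:effect` format. As an illustration of that format, the following hypothetical Python helper (not part of TKGI) parses such a string into structured taints:

```python
def parse_taints(taint_str):
    """Parse a comma-delimited key=value:effect taint string
    (the format used by the node_taints property) into dicts."""
    taints = []
    for item in taint_str.split(","):
        kv, effect = item.strip().split(":")
        key, value = kv.split("=")
        taints.append({"key": key, "value": value, "effect": effect})
    return taints

# Parse the taint string from the medium-2 node pool above.
print(parse_taints("k3=v3:NoSchedule, k4=v4:NoSchedule"))
```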
Cluster administrators can define AZs in a compute profile instead of adding new AZs in the BOSH Director tile. Cluster managers then use it to specify AZs for a cluster dynamically. As a result, you do not need to make AZ changes to each TKGI plan and the overall impact to the TKGI control plane is smaller.
Note: You cannot use compute profiles to change the AZs of any nodes in an existing cluster. TKGI does not support changing the AZs of existing control plane nodes, but you can change the AZs of worker nodes by modifying their cluster’s plan.
The example below defines three AZs, `cp-hg-az-1`, `cp-hg-az-2`, and `cp-hg-az-3`, in the `azs` block, which are then referenced in the `cluster_customization` block.
{
"name": "azs-custom-compute-profile",
"description": "Profile for customized AZs",
"parameters": {
"azs": [{
"name": "cp-hg-az-1",
"cpi": "ff8d93840299bd7474f5",
"cloud_properties": {
"datacenters": [{
"name": "vSAN_Datacenter",
"clusters": [{
"vSAN_Cluster": {
"host_group": {
"drs_rule": "MUST",
"name": "CP-HG-AZ-1"
}
}
}]
}]
}
},
{
"name": "cp-hg-az-2",
"cpi": "ff8d93840299bd7474f5",
"cloud_properties": {
"datacenters": [{
"name": "vSAN_Datacenter",
"clusters": [{
"vSAN_Cluster": {
"host_group": {
"drs_rule": "MUST",
"name": "CP-HG-AZ-2"
}
}
}]
}]
}
},
{
"name": "cp-hg-az-3",
"cpi": "ff8d93840299bd7474f5",
"cloud_properties": {
"datacenters": [{
"name": "vSAN_Datacenter",
"clusters": [{
"vSAN_Cluster": {
"host_group": {
"drs_rule": "MUST",
"name": "CP-HG-AZ-3"
}
}
}]
}]
}
}
],
"cluster_customization": {
"control_plane": {
"cpu": 4,
"memory_in_mb": 16384,
"ephemeral_disk_in_mb": 32768,
"az_names": ["cp-hg-az-1", "cp-hg-az-2", "cp-hg-az-3"],
"instances": 3
},
"node_pools": [{
"name": "x-large",
"cpu": 4,
"memory_in_mb": 8192,
"ephemeral_disk_in_mb": 32768,
"az_names": ["cp-hg-az-1", "cp-hg-az-2", "cp-hg-az-3"],
"instances": 3,
"max_worker_instances": 25,
"node_labels": "k1=v1,k2=v2",
"node_taints": "k3=v3:NoSchedule, k4=v4:NoSchedule"
}]
}
}
}
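Because the three AZ entries in the example above differ only in their names, a short script can generate the `azs` block instead of copying it by hand. This is only a convenience sketch; the datacenter, cluster, and host group names are taken from the example, and the `make_az` helper is hypothetical:

```python
import json

def make_az(index, cpi_id, datacenter="vSAN_Datacenter",
            cluster="vSAN_Cluster", drs_rule="MUST"):
    """Build one entry for the azs block. Host group names follow the
    CP-HG-AZ-<n> convention used in the example above."""
    return {
        "name": f"cp-hg-az-{index}",
        "cpi": cpi_id,
        "cloud_properties": {
            "datacenters": [{
                "name": datacenter,
                "clusters": [{
                    cluster: {
                        "host_group": {
                            "drs_rule": drs_rule,
                            "name": f"CP-HG-AZ-{index}",
                        }
                    }
                }]
            }]
        },
    }

# Generate the three AZ entries from the example.
azs = [make_az(i, "ff8d93840299bd7474f5") for i in (1, 2, 3)]
print(json.dumps([az["name"] for az in azs]))
```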
The compute profile JSON configuration file includes the following top-level properties:
Property | Type | Description |
---|---|---|
`name` | String | (Required) Name of the compute profile. You use this name when managing the compute profile or assigning the profile to a Kubernetes cluster through the TKGI CLI. |
`description` | String | (Required) Description of the compute profile. |
`parameters` | Object | (Required) Properties defining the main body of the compute profile, such as `azs` and `cluster_customization`. |
`azs` | Array | (Optional) Properties defining one or more AZs, including the `name`, `cpi`, and `cloud_properties` settings. See azs Block below. |
`cluster_customization` | Object | (Optional) Properties defining the `control_plane` and `node_pools` settings. See control_plane Block and node_pools Block below. |
`azs` Block (vSphere with NSX-T Only)
This optional block defines where Kubernetes control plane and worker nodes are created within your vSphere infrastructure. You can define one or more AZs.
If you define the `azs` block, do not specify the `persistent_disk_in_mb` property in `cluster_customization`. You can specify either `azs` or `persistent_disk_in_mb`, but not both.
Specify the properties below for each AZ. For more information about the `cloud_properties` schema, see AZs in the BOSH documentation.
Property | Type | Description |
---|---|---|
`name` | String | Name for the AZ where you want to deploy Kubernetes cluster VMs. For example, `cp-hg-az-1`. |
`cpi` | String | BOSH CPI ID of your TKGI deployment. For example, `abc012abc345abc567de`. For instructions on how to obtain the ID, see Retrieve the BOSH CPI ID. |
`cloud_properties` | Object | Properties defining vSphere datacenters for your Kubernetes cluster VMs. |
`datacenters` | Array | Array of data centers. Define only one data center. |
`name` | String | Name of your vSphere data center as it appears in Ops Manager and your cloud provider console. For example, `vSAN_Datacenter`. |
`clusters` | Array | Array of clusters. Define only one cluster. |
`CLUSTER-NAME` | String | Name of your vSphere compute cluster. For example, `vSAN_Cluster`. This section defines `host_group`. |
`host_group` | Object | Properties of the host group that you want to use for your Kubernetes cluster VMs. This includes `name` and `drs_rule`. |
`name` | String | Name of the host group in vSphere. |
`drs_rule` | String | Specify `MUST`. If you use vSAN Stretched Clusters, specify `SHOULD`. |
Use the following procedure to retrieve the BOSH CPI ID for your TKGI deployment.
Locate the credentials that were used to import the Ops Manager .ova or .ovf file into your virtualization system. You configured these credentials when you installed Ops Manager.
Note: If you lose your credentials, you must shut down the Ops Manager VM in the vSphere UI and reset the password. See vCenter Password Requirements and Lockout Behavior in the vSphere documentation for more information.
From a command line, run the following command to SSH into the Ops Manager VM:
ssh ubuntu@OPS-MANAGER-FQDN
Where `OPS-MANAGER-FQDN` is the fully qualified domain name (FQDN) of Ops Manager.
When prompted, enter the password that you configured during the .ova deployment into vCenter.
For example:
$ ssh [email protected]
Password: ***********
Run `bosh cpi-config` to locate the Cloud Provider Interface (CPI) name for your deployment.
For example:
$ bosh cpi-config
Using environment 'BOSH-DIRECTOR-IP' as client 'ops_manager'
cpis:
- migrated_from:
- name: ""
name: YOUR-CPI-NAME
For more information about running BOSH commands in your Tanzu Kubernetes Grid Integrated Edition deployment, see Using BOSH Diagnostic Commands in Tanzu Kubernetes Grid Integrated Edition.
`control_plane` Block
This optional block defines properties for Kubernetes control plane node instances.
When defining the `control_plane` block, specify all three of `cpu`, `memory_in_mb`, and `ephemeral_disk_in_mb`, or none of them.
Property | Type | Description |
---|---|---|
`cpu` | Integer | CPU count for control plane instances. |
`memory_in_mb` | Integer | RAM for control plane instances. |
`ephemeral_disk_in_mb` | Integer | Ephemeral disk for control plane instances. |
`persistent_disk_in_mb` | Integer | Persistent disk for control plane instances. Do not specify this parameter if you intend to define the `azs` block in the compute profile. |
`az_names` | Array | One or more AZs in which you want control plane instances to run. You defined these AZs in the `azs` block of the compute profile. |
`instances` | Integer | Number of control plane instances. Specify `1`, `3`, or `5`. |
Do not assign the `name` property to this block.
`node_pools` Block
This optional block defines properties for Kubernetes worker nodes. You can define one or more node pools.
When defining the `node_pools` block, specify all three of `cpu`, `memory_in_mb`, and `ephemeral_disk_in_mb`, or none of them.
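The sizing rule above, together with the rule that `azs` and `persistent_disk_in_mb` are mutually exclusive, can be checked before calling the CLI. The following helper is hypothetical and not part of TKGI; it is a minimal sketch of such a pre-flight check:

```python
def check_profile(profile):
    """Hypothetical pre-flight checks for a compute profile dict.
    Not part of the TKGI CLI; illustrates the rules described above."""
    params = profile.get("parameters", {})
    custom = params.get("cluster_customization", {})
    has_azs = "azs" in params
    sizing = {"cpu", "memory_in_mb", "ephemeral_disk_in_mb"}
    errors = []
    blocks = [custom.get("control_plane", {})] + custom.get("node_pools", [])
    for block in blocks:
        present = sizing & block.keys()
        # Rule: specify all three sizing properties, or none of them.
        if present and present != sizing:
            errors.append("specify cpu, memory_in_mb, and ephemeral_disk_in_mb together")
        # Rule: azs and persistent_disk_in_mb are mutually exclusive.
        if has_azs and "persistent_disk_in_mb" in block:
            errors.append("do not combine azs with persistent_disk_in_mb")
    return errors
```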
Note: You must always configure at least one node pool without a `node_taints` definition or with `PreferNoSchedule` taints. kube-scheduler uses this node pool to schedule core Pods such as the `coredns` Pod. For more information about taints, see Taints and Tolerations in the Kubernetes documentation.
Property | Type | Description |
---|---|---|
`name` | String | Name of the node pool. |
`cpu` | Integer | CPU count for worker node instances. |
`memory_in_mb` | Integer | RAM for worker node instances. |
`ephemeral_disk_in_mb` | Integer | Ephemeral disk for worker node instances. |
`persistent_disk_in_mb` | Integer | Persistent disk for worker node instances. Do not specify this parameter if you intend to define the `azs` block in the compute profile. |
`az_names` | Array | One or more AZs in which you want worker node instances to run. You defined these AZs in the `azs` block of the compute profile. |
`instances` | Integer | Number of worker node instances. |
`max_worker_instances` | Integer | Maximum number of worker node instances for the node pool. |
`node_labels` | String | One or more comma-delimited `key=value` labels that you want to apply to worker node instances. For information about kubectl label syntax, see label in the kubectl documentation. |
`node_taints` | String | One or more comma-delimited `key=value:effect` taints that you want to apply to worker node instances. For information about kubectl taint syntax, see taint in the kubectl documentation. |
Warning: If cluster update fails while applying a new compute profile with revised node pool names, do not reapply the previous compute profile: the cluster’s worker nodes will be deleted. If you encounter this scenario, fix the new compute profile configuration without modifying the node pool names, and retry your cluster update. For more information, see tkgi update-cluster compute profile failure in the VMware Tanzu Knowledge Base.
After a compute profile is defined in a JSON file as described in Compute Profile Format, a cluster administrator can create the compute profile by running the following TKGI CLI command:
tkgi create-compute-profile PATH-TO-YOUR-COMPUTE-PROFILE-CONFIGURATION
Where `PATH-TO-YOUR-COMPUTE-PROFILE-CONFIGURATION` is the path to the JSON file you created when defining the compute profile.
For example:
$ tkgi create-compute-profile dc-east-mixed.json
Compute profile dc-east-mixed successfully created
Only cluster administrators, `pks.clusters.admin`, can create compute profiles. If a cluster manager, `pks.clusters.manage`, or a read-only admin, `pks.clusters.admin-read-only`, attempts to create a compute profile, the following error occurs:
You do not have enough privileges to perform this action. Please contact the TKGI administrator.
After an administrator creates a compute profile, cluster managers can create clusters with it or assign it to existing clusters. For more information, see the Using Compute Profiles (vSphere) topic.
TKGI administrators can delete compute profiles. Administrators can also perform the same operations that cluster managers use to list compute profiles and manage how clusters use them.
Warning: These commands do not work for compute profiles created using the TKGI API in TKGI v1.8 or earlier.
To view details about a compute profile, run the following command:
tkgi compute-profile COMPUTE-PROFILE-NAME
Where `COMPUTE-PROFILE-NAME` is the name of the compute profile you want to view.
For example:
tkgi compute-profile test-compute-profile
Name: test-compute-profile
Description: test-compute-profile
Parameters:
Cluster Customization:
Control Plane:
Name:
Instances: 3
CPU: 2
Memory (Mb): 4096
Ephemeral Disk (Mb): 16384
Node Pool:
Name: tiny-1
Instances: 5
CPU: 2
Memory (Mb): 4096
Ephemeral Disk (Mb): 16384
Node Pool:
Name: medium-2
Instances: 1
CPU: 4
Memory (Mb): 4096
Ephemeral Disk (Mb): 32768
To delete a compute profile, run the following command:
tkgi delete-compute-profile COMPUTE-PROFILE-NAME
Where `COMPUTE-PROFILE-NAME` is the name of the compute profile you want to delete.
For example:
tkgi delete-compute-profile test-compute-profile-8
Are you sure you want to delete the compute profile test-compute-profile-8? (y/n): y
Deletion of test-compute-profile-8 completed
Limitations:
You cannot delete a compute profile that is in use by a cluster.
Only cluster administrators, `pks.clusters.admin`, can delete compute profiles. If a cluster manager, `pks.clusters.manage`, or a read-only admin, `pks.clusters.admin-read-only`, attempts to delete a compute profile, the following error occurs:
You do not have enough privileges to perform this action. Please contact the TKGI administrator.
The following sections link to operations that both TKGI administrators and cluster managers can perform on compute profiles, documented in the Using Compute Profiles (vSphere) topic.
As with plans defined in TKGI tile Plans panes, compute profiles let TKGI administrators define cluster resource choices for developers using Kubernetes.
Compute profiles offer more granular control over cluster topology and node sizing than plans do. For example, compute profiles can define heterogeneous clusters with different CPU, memory, ephemeral disk, or persistent disk settings for control plane nodes and worker nodes.
You can also apply a compute profile to specific clusters, overriding the default settings defined by their plan and possibly avoiding the need to create new plans.
You use the TKGI tile to manage plans and the TKGI CLI to manage compute profiles.