To use Tanzu Service Mesh, your environment must meet certain hardware, software, and networking requirements and use one of the supported platforms.
Workload Cluster Resource Requirements
To successfully onboard a cluster to Tanzu Service Mesh, you need sufficient resources to support the DaemonSets and the data plane components that Tanzu Service Mesh runs locally on the cluster. For a list of the components in the latest version of the data plane, see the Data Plane Component Resource Requirements table.
Requirement | Value
---|---
Nodes | At the time of onboarding, the Tanzu Service Mesh data plane requires at least 3 nodes, each with at least 4,000 milliCPU (4 CPUs) of allocatable CPU and 6 GB of allocatable memory.
DaemonSets | The DaemonSets instantiate a pod on every node in the cluster. To run the DaemonSets, Tanzu Service Mesh requires that every node in the cluster have at least 250 milliCPU and 650 MB of memory available.
Ephemeral storage | 24 GB for the whole cluster and additionally 1 GB for each node.
Pods | Tanzu Service Mesh requires a quota of at least 3 pods for each node on the cluster and additionally at least 30 pods for the whole cluster.
For example, suppose that you are onboarding a cluster with 10 worker nodes to Tanzu Service Mesh. 3 of the worker nodes must each have at least 4 CPUs and 6 GB of memory available to run the data plane components. Additionally, each of the worker nodes must have at least 250 milliCPU and 650 MB of memory available for the DaemonSets (see the resource requirements for DaemonSets in the Workload Cluster Resource Requirements table).
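As a sanity check, the node requirements above can be expressed as a short script. This is an illustrative sketch only: the cluster data below is hypothetical, and in a real environment the allocatable values would come from the cluster itself (for example, the status.allocatable field reported by kubectl get nodes).

```python
# Illustrative check of the onboarding requirements above.
# The node figures are hypothetical, not taken from a real cluster.

DATA_PLANE_NODES_NEEDED = 3   # nodes needed for the data plane components
DATA_PLANE_CPU_M = 4000       # allocatable milliCPU per such node (4 CPUs)
DATA_PLANE_MEM_MB = 6000      # allocatable memory per such node (6 GB)
DAEMONSET_CPU_M = 250         # per-node milliCPU for the DaemonSets
DAEMONSET_MEM_MB = 650        # per-node memory (MB) for the DaemonSets

def meets_onboarding_requirements(nodes):
    """nodes: list of dicts with allocatable 'cpu_m' and 'mem_mb'."""
    # Every node must leave room for the DaemonSet pod.
    if any(n["cpu_m"] < DAEMONSET_CPU_M or n["mem_mb"] < DAEMONSET_MEM_MB
           for n in nodes):
        return False
    # At least 3 nodes must also fit the data plane components.
    large = [n for n in nodes
             if n["cpu_m"] >= DATA_PLANE_CPU_M
             and n["mem_mb"] >= DATA_PLANE_MEM_MB]
    return len(large) >= DATA_PLANE_NODES_NEEDED

# Hypothetical 4-node cluster: three large nodes and one small node.
cluster = [{"cpu_m": 4000, "mem_mb": 8000}] * 3 + [{"cpu_m": 1000, "mem_mb": 2000}]
print(meets_onboarding_requirements(cluster))  # True
```

The check treats the two requirements independently; the real scheduler also accounts for pods and reservations already present on each node.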
Tanzu Service Mesh data plane version 4.0.0 or later reserves the specified resources for each of the following major data plane components.
Component | CPU | Memory | No. of Instances | Total CPU (CPU Reservation * No. of Instances) | Total Memory (Memory Reservation * No. of Instances)
---|---|---|---|---|---
istiod | 500m | 2 GiB | 2 | 1,000m | 4 GiB
telemetry | 1,100m | 1,128 MiB | 2 | 2,200m | 2,256 MiB
Ingress-gateway | 100m | 128 MiB | 2 | 200m | 256 MiB
Egress-gateway | 100m | 128 MiB | 2 | 200m | 256 MiB
Telegraf-istio | 100m | 500 MiB | 1 instance per node in the cluster. For example, if the cluster has 5 nodes, Telegraf-istio requires 5 instances. | CPU value * No. of Instances | Memory value * No. of Instances
This is not an exhaustive list of the Tanzu Service Mesh data plane components.
For the pods in the Data Plane Component Resource Requirements table to be successfully scheduled on a Kubernetes cluster, we recommend worker nodes with sufficiently large CPU and memory resources. See the resource requirements for nodes in the Workload Cluster Resource Requirements table. Clusters with worker nodes that do not meet these resource requirements might not have enough CPU or memory on individual nodes to schedule the istiod and telemetry pods due to existing pods and reservations.
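The totals in the table above can be reproduced with simple arithmetic. The sketch below sums the reservations of the listed components for a cluster of a given size; because the component list is not exhaustive, the result is a lower bound on the actual data plane footprint.

```python
# Data plane reservation totals from the table above
# (major components only; the actual data plane includes more).

FIXED = {                      # component: (cpu_m, mem_mib, instances)
    "istiod":          (500, 2048, 2),
    "telemetry":       (1100, 1128, 2),
    "ingress-gateway": (100, 128, 2),
    "egress-gateway":  (100, 128, 2),
}
TELEGRAF_PER_NODE = (100, 500)  # telegraf-istio: one instance per node

def data_plane_reservation(nodes):
    """Return (milliCPU, MiB) reserved by the listed components."""
    cpu = sum(c * n for c, _, n in FIXED.values())
    mem = sum(m * n for _, m, n in FIXED.values())
    cpu += TELEGRAF_PER_NODE[0] * nodes
    mem += TELEGRAF_PER_NODE[1] * nodes
    return cpu, mem

cpu_m, mem_mib = data_plane_reservation(5)
print(f"5-node cluster: {cpu_m}m CPU, {mem_mib} MiB memory")
```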
Supported Istio Version
Tanzu Service Mesh supports Istio version 1.21.4. For details of the changes contained in the release, see the Istio 1.21 release notes.
Supported Platforms
Your environment must use one of the following Kubernetes platforms with a supported version.
Platform | Platform Version | Kubernetes Version
---|---|---
VMware Tanzu® Kubernetes Grid™ Integrated Edition | 1.19.0 | 1.28
 | 1.18.0 | 1.27
 | 1.17.0 | 1.26
VMware Tanzu® Mission Control™ (provisioned clusters) | N/A | All versions running on clusters
VMware Tanzu® Mission Control™ (attached clusters) | N/A | Kubernetes versions supported by Tanzu Kubernetes Grid Integrated Edition, VMware Tanzu Kubernetes Grid Service, VMware Tanzu Kubernetes Grid, or Amazon EKS, depending on the platform of an attached cluster (see the corresponding rows in this table)
VMware Tanzu® Kubernetes Grid™ on vSphere 8 | 2.5.1 | 1.28, 1.27, 1.26
VMware vSphere® with VMware Tanzu® | VC8 | 1.30, 1.29, 1.28, 1.27
Amazon EKS | N/A | 1.30, 1.29, 1.28, 1.27, 1.26
Azure Kubernetes Service (AKS) | N/A | 1.30, 1.29, 1.28, 1.27
Anthos Google Kubernetes Engine (GKE) | N/A | 1.30, 1.29, 1.28, 1.27
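For scripting compatibility checks, the support matrix above can be expressed as data. The platform keys below are shorthand labels chosen for this illustration, not official product identifiers.

```python
# The support matrix above as a lookup table.
# Keys are informal shorthand for the platforms in the table.
SUPPORTED = {
    "TKGI 1.19.0": {"1.28"},
    "TKGI 1.18.0": {"1.27"},
    "TKGI 1.17.0": {"1.26"},
    "TKG on vSphere 8 (2.5.1)": {"1.28", "1.27", "1.26"},
    "vSphere with Tanzu (VC8)": {"1.30", "1.29", "1.28", "1.27"},
    "Amazon EKS": {"1.30", "1.29", "1.28", "1.27", "1.26"},
    "AKS": {"1.30", "1.29", "1.28", "1.27"},
    "Anthos GKE": {"1.30", "1.29", "1.28", "1.27"},
}

def is_supported(platform, k8s_version):
    return k8s_version in SUPPORTED.get(platform, set())

print(is_supported("Amazon EKS", "1.26"))  # True
print(is_supported("AKS", "1.26"))         # False
```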
Connectivity Requirements
Make sure that your environment meets the following requirements:

- For workloads on your clusters to communicate with each other, the clusters must have network connectivity to each other.
- Your clusters have a Container Network Interface (CNI) plug-in enabled.
- Your clusters have role-based access control (RBAC) enabled.
- The kubectl command-line tool is installed in your environment. For instructions on how to download, install, and set up kubectl, refer to the Kubernetes documentation.
- Chrome browser version 74.x or later is installed.
The Tanzu Service Mesh client software on your clusters must communicate with Tanzu Service Mesh Software as a Service (SaaS) and with Amazon Elastic Container Registry (ECR). Configure your firewall rules to allow your clusters to access the following domains on port 443:

- prod-1.nsxservicemesh.vmware.com or prod-2.nsxservicemesh.vmware.com
- *.prod-1-internal.nsxservicemesh.vmware.com or *.prod-2-internal.nsxservicemesh.vmware.com (for clusters onboarded before Tanzu Service Mesh version 1.10)
- prod-1-internal.nsxservicemesh.vmware.com or prod-2-internal.nsxservicemesh.vmware.com (for clusters onboarded to Tanzu Service Mesh version 1.10 or later)
- prod-ap-south-1-001.nsxservicemesh.vmware.com
- prod-ap-south-1-001-internal.nsxservicemesh.vmware.com
- public.ecr.aws/v6x6b8s5/
- tsm-release.s3.us-west-2.amazonaws.com (for clusters onboarded before Tanzu Service Mesh version 1.10)
- prod-us-west-2-starport-layer-bucket.s3.us-west-2.amazonaws.com
- storage.googleapis.com
Note: Depending on whether your organization uses the Tanzu Service Mesh prod-1 or prod-2 production environment, your clusters must have access to the corresponding prod-1 or prod-2 domains.
If you use a policy agent, such as Open Policy Agent (OPA), add the following to your trusted registries:

- public.ecr.aws/v6x6b8s5/
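A minimal reachability probe for these firewall requirements might look like the following sketch. It assumes the prod-1 environment and a cluster onboarded to version 1.10 or later, uses the registry's host name (public.ecr.aws) in place of the full repository path, and tests only a plain TCP connection on port 443; a proxy or deeper traffic filtering can still block connections that this check reports as open.

```python
# Probe TCP reachability of the required prod-1 domains on port 443.
# Assumption: prod-1 environment, onboarded with version 1.10 or later.
import socket

DOMAINS = [
    "prod-1.nsxservicemesh.vmware.com",
    "prod-1-internal.nsxservicemesh.vmware.com",
    "public.ecr.aws",  # host of the public.ecr.aws/v6x6b8s5/ registry path
    "prod-us-west-2-starport-layer-bucket.s3.us-west-2.amazonaws.com",
    "storage.googleapis.com",
]

def reachable(host, port=443, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, timeouts, refused connections
        return False

for host in DOMAINS:
    status = "ok" if reachable(host) else "BLOCKED"
    print(f"{host}: {status}")
```

Run the probe from the same network segment as the cluster nodes, since firewall rules can differ between your workstation and the cluster.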
An infrastructure load balancer (Kubernetes service type LoadBalancer) is deployed on your clusters. For example, you can use VMware NSX Advanced Load Balancer (Avi Networks) or MetalLB as a network load balancer.
Tanzu Service Mesh does not support the NodePort service type in Kubernetes.