This appendix aggregates all design decisions of the Developer Ready Infrastructure for VMware Cloud Foundation validated solution. You can use this list as a reference for the end state of the environment and to track your adherence to the design, along with any justification for deviations.

Deployment Specification Design

Table 1. Design Decisions on vSphere Storage Policy Based Management for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-CFG-001
Design Decision: Create a vSphere tag and tag category, and apply the vSphere tag to the vSAN datastore in the shared edge and workload vSphere cluster in the VI workload domain.
Design Justification: Supervisor Cluster activation requires the use of vSphere Storage Policy Based Management (SPBM). To assign the vSAN datastore to the Supervisor Cluster, you must create a vSphere tag and tag category so that you can define a tag-based SPBM rule.
Design Implication: You must perform this configuration manually or by using PowerCLI (see the sketch after this table).

DRI-TZU-CFG-002
Design Decision: Create a vSphere Storage Policy Based Management (SPBM) policy that specifies the vSphere tag you created for the Supervisor Cluster.
Design Justification: After you create the SPBM policy that references the vSphere tag, you can assign that policy during Supervisor Cluster activation.
Design Implication: You must perform this configuration manually or by using PowerCLI (see the sketch after this table).
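The following PowerCLI sketch shows one way to implement DRI-TZU-CFG-001 and DRI-TZU-CFG-002. The vCenter Server FQDN and the tag, category, datastore, and policy names are placeholders for illustration, not values mandated by this design.

  # Connect to the vCenter Server instance for the VI workload domain (placeholder FQDN).
  Connect-VIServer -Server "wld01-vc01.example.com"

  # DRI-TZU-CFG-001: Create a tag category and tag, and assign the tag to the vSAN datastore.
  $category = New-TagCategory -Name "vsphere-with-tanzu-category" -Cardinality Single -EntityType Datastore
  $tag = New-Tag -Name "vsphere-with-tanzu-tag" -Category $category
  $datastore = Get-Datastore -Name "wld01-cl01-vsan01"
  New-TagAssignment -Tag $tag -Entity $datastore

  # DRI-TZU-CFG-002: Create a tag-based SPBM storage policy that references the tag.
  $rule = New-SpbmRule -AnyOfTags $tag
  $ruleSet = New-SpbmRuleSet -AllOfRules $rule
  New-SpbmStoragePolicy -Name "vsphere-with-tanzu-storage-policy" -AnyOfRuleSets $ruleSet

You can then select this storage policy during Supervisor Cluster activation.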

Table 2. Design Decisions on the Supervisor Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-CFG-003
Design Decision: Activate vSphere with Tanzu on the shared edge and workload vSphere cluster in the VI workload domain.
Design Justification: The Supervisor Cluster is required to run Kubernetes workloads natively and to deploy Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid Service.
Design Implication: Ensure that the shared edge and workload vSphere cluster is sized to support the Supervisor Cluster control plane, any additional integrated management workloads, and any tenant workloads.

DRI-TZU-CFG-004
Design Decision: Deploy the Supervisor Cluster with small-size control plane nodes.
Design Justification: Deploying the control plane nodes as small-size appliances supports up to 2000 pods in the Supervisor Cluster. If your pod count for the Supervisor Cluster is higher than 2000, you must deploy larger control plane nodes that can handle that scale.
Design Implication: You must account for the size of the control plane nodes.

DRI-TZU-CFG-005
Design Decision: Use NSX-T Data Center as the provider of software-defined networking for the Supervisor Cluster.
Design Justification: You can deploy a Supervisor Cluster by using either NSX-T Data Center or the vSphere networking stack. VMware Cloud Foundation uses NSX-T Data Center for software-defined networking across the SDDC, and deviating from it for vSphere with Tanzu would increase the operational overhead.
Design Implication: None.

DRI-TZU-CFG-006
Design Decision: Deploy the NSX Edge cluster with large-size nodes.
Design Justification: Large is the smallest NSX Edge node size supported for activating a Supervisor Cluster.
Design Implication: You must account for the size of the NSX Edge nodes.

Table 3. Design Decisions on the vSphere Registry Service for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-CFG-007
Design Decision: Activate the Registry Service for the Supervisor Cluster.
Design Justification: Activating the Registry Service provides an integrated Harbor image registry for the Supervisor Cluster, which adds:
  • Federated authentication for end users
  • Native life cycle management of the image registry
  • Native health monitoring of the image registry
Design Implication: None.

Table 4. Design Decisions on the Tanzu Kubernetes Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-CFG-008
Design Decision: Deploy a Tanzu Kubernetes Cluster in the Supervisor Cluster.
Design Justification: A Tanzu Kubernetes Cluster is required for applications that require upstream Kubernetes compliance.
Design Implication: None.

DRI-TZU-CFG-009
Design Decision: Configure a subscribed content library for Tanzu Kubernetes Cluster use in the shared edge and workload vSphere cluster.
Design Justification: To deploy a Tanzu Kubernetes Cluster on a Supervisor Cluster, you must configure a content library in the shared edge and workload vSphere cluster to pull the required images from VMware.
Design Implication: You must configure the content library manually (see the sketch after this table).

DRI-TZU-CFG-010
Design Decision: Use Antrea as the container network interface (CNI) for your Tanzu Kubernetes Clusters.
Design Justification: Antrea is the default CNI for Tanzu Kubernetes Clusters.
Design Implication: New Tanzu Kubernetes Clusters are deployed with Antrea as the CNI unless you specify Calico (see the example manifest after Table 5).
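The subscribed content library in DRI-TZU-CFG-009 can also be created with PowerCLI, as in the following sketch. The library name and datastore are placeholders, and you should verify the subscription URL against the current VMware documentation for your release.

  # DRI-TZU-CFG-009: Create a subscribed content library for Tanzu Kubernetes releases.
  $datastore = Get-Datastore -Name "wld01-cl01-vsan01"
  New-ContentLibrary -Name "Kubernetes" -Datastore $datastore `
      -SubscriptionUrl "https://wp-content.vmware.com/v2/latest/lib.json" -AutomaticSync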

Table 5. Design Decisions on Sizing the Tanzu Kubernetes Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-CFG-011
Design Decision: Deploy the Tanzu Kubernetes Cluster with a minimum of three control plane nodes.
Design Justification: Three control plane nodes keep the Tanzu Kubernetes Cluster control plane healthy in the event of a single node failure.
Design Implication: None.

DRI-TZU-CFG-012
Design Decision: Deploy the Tanzu Kubernetes Cluster with a minimum of three worker nodes.
Design Justification: Three worker nodes provide a higher potential level of availability for workloads deployed to the Tanzu Kubernetes Cluster.
Design Implication: You must configure your tenant workloads to use the additional worker nodes effectively so that they provide application-level high availability.

DRI-TZU-CFG-013
Design Decision: Deploy the Tanzu Kubernetes Cluster with small-size nodes for both the control plane and the workers.
Design Justification: Deploying the Tanzu Kubernetes Cluster control plane and worker nodes as small-size appliances meets the scale requirements for most deployments. If your scale requirements are higher, you must deploy appliances of the appropriate size or split the workload across multiple Tanzu Kubernetes Clusters.
Design Implication: The size of the Tanzu Kubernetes Cluster nodes affects the scale of a given cluster. If you must add nodes to a cluster, consider using larger nodes instead.
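Taken together, the Tanzu Kubernetes Cluster decisions in Tables 4 and 5 map to a cluster specification similar to the following sketch. The cluster name, namespace, Kubernetes version, virtual machine class, and storage class are placeholders, and the API version shown may differ in your vSphere with Tanzu release; verify the values against your environment before applying the manifest with kubectl.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: tkc-01                # Placeholder cluster name (DRI-TZU-CFG-008)
    namespace: dev-namespace-01 # Placeholder vSphere Namespace
  spec:
    distribution:
      version: v1.20            # Placeholder Tanzu Kubernetes release from the subscribed content library
    topology:
      controlPlane:
        count: 3                          # Three control plane nodes (DRI-TZU-CFG-011)
        class: best-effort-small          # Small-size nodes (DRI-TZU-CFG-013); placeholder VM class
        storageClass: vsphere-with-tanzu-storage-policy   # Placeholder storage class
      workers:
        count: 3                          # Three worker nodes (DRI-TZU-CFG-012)
        class: best-effort-small
        storageClass: vsphere-with-tanzu-storage-policy
    settings:
      network:
        cni:
          name: antrea                    # Antrea as the CNI (DRI-TZU-CFG-010)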

Network Design

Table 6. Design Decisions on Networking for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-NET-001
Design Decision: Add a /28 overlay-backed NSX segment for use by the Supervisor Cluster control plane nodes.
Design Justification: Supports the Supervisor Cluster control plane nodes.
Design Implication: You must create the overlay-backed NSX segment.

DRI-TZU-NET-002
Design Decision: Use a dedicated /20 subnet for pod networking.
Design Justification: A single /20 subnet is sufficient to meet the design requirement of 2000 pods.
Design Implication: This is private IP space behind NAT, so you can reuse it across multiple Supervisor Clusters.

DRI-TZU-NET-003
Design Decision: Use a dedicated /22 subnet for services.
Design Justification: A single /22 subnet is sufficient to meet the design requirement of 2000 pods.
Design Implication: This is private IP space behind NAT, so you can reuse it across multiple Supervisor Clusters.

DRI-TZU-NET-004
Design Decision: Use a dedicated /24 or larger subnet on your corporate network for ingress endpoints.
Design Justification: A /24 subnet is sufficient to meet the design requirement of 2000 pods in most cases.
Design Implication: This subnet must be routable to the rest of the corporate network. A /24 subnet suffices for most use cases, but you should evaluate your ingress needs before deployment.

DRI-TZU-NET-005
Design Decision: Use a dedicated /24 or larger subnet on your corporate network for egress endpoints.
Design Justification: A /24 subnet is sufficient to meet the design requirement of 2000 pods in most cases.
Design Implication: This subnet must be routable to the rest of the corporate network. A /24 subnet suffices for most use cases, but you should evaluate your egress needs before deployment.
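As a quick illustration of the capacity behind these prefix lengths, the following PowerShell snippet computes the number of IPv4 addresses in each subnet size referenced in Table 6; it is illustrative only and not part of the design itself.

  # Illustrative only: total IPv4 addresses per prefix length referenced in Table 6.
  foreach ($prefix in 28, 24, 22, 20) {
      $addresses = [math]::Pow(2, 32 - $prefix)
      "/{0} provides {1} addresses" -f $prefix, $addresses
  }
  # Output: /28 = 16, /24 = 256, /22 = 1024, /20 = 4096 addresses.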

Life Cycle Management Design

Table 7. Design Decisions on Life Cycle Management for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-LCM-001
Design Decision: Perform life cycle management of the Supervisor Cluster by using the vSphere Client and its native workflows.
Design Justification: Life cycle management of a Supervisor Cluster is not integrated into SDDC Manager.
Design Implication: Deployment, patching, updates, and upgrades of a Supervisor Cluster and its components are performed without SDDC Manager automation.

DRI-TZU-LCM-002
Design Decision: Perform life cycle management of Tanzu Kubernetes clusters by using kubectl (see the sketch after this table).
Design Justification: Life cycle management of a Tanzu Kubernetes cluster is not integrated into SDDC Manager.
Design Implication: Deployment, patching, updates, and upgrades of a Tanzu Kubernetes cluster and its components are performed without SDDC Manager automation.
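For DRI-TZU-LCM-002, upgrades of a Tanzu Kubernetes cluster are driven through kubectl against the Supervisor Cluster, as in the following sketch. The cluster and namespace names are placeholders, and the exact procedure depends on your vSphere with Tanzu release.

  # List the Tanzu Kubernetes releases available from the subscribed content library.
  kubectl get tanzukubernetesreleases

  # Review the current specification of the cluster (placeholder names).
  kubectl get tanzukubernetescluster tkc-01 -n dev-namespace-01 -o yaml

  # Edit the cluster specification and update the distribution version to a newer release.
  kubectl edit tanzukubernetescluster tkc-01 -n dev-namespace-01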

Information Security and Access Design

Table 8. Design Decisions on Authentication and Access Control for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-SEC-001
Design Decision: Create a security group in Active Directory for DevOps administrators. Add users who need edit permissions within a namespace to the group, and grant the Can Edit permission on the namespace to that group. If you require different permissions per namespace, create additional groups.
Design Justification: Necessary for auditable role-based access control within the Supervisor Cluster and Tanzu Kubernetes clusters.
Design Implication: You must define and manage security groups, group membership, and security controls in Active Directory (see the sketch after this table).

DRI-TZU-SEC-002
Design Decision: Create a security group in Active Directory for DevOps administrators. Add users who need read-only permissions in a namespace to the group, and grant the Can View permission on the namespace to that group. If you require different permissions per namespace, create additional groups.
Design Justification: Necessary for auditable role-based access control within the Supervisor Cluster and Tanzu Kubernetes clusters.
Design Implication: You must define and manage security groups, group membership, and security controls in Active Directory.
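The Active Directory groups for DRI-TZU-SEC-001 and DRI-TZU-SEC-002 can be created with the ActiveDirectory PowerShell module, as in the following sketch. The group names, organizational unit, and user names are placeholders; granting the Can Edit and Can View permissions on the namespace to these groups is then performed in the vSphere Client.

  # Placeholder group for users who need edit permissions in a namespace (DRI-TZU-SEC-001).
  New-ADGroup -Name "gg-namespace-edit" -GroupScope Global -Path "OU=Security Groups,DC=example,DC=com"
  Add-ADGroupMember -Identity "gg-namespace-edit" -Members "devops-admin-01"

  # Placeholder group for users who need read-only permissions in a namespace (DRI-TZU-SEC-002).
  New-ADGroup -Name "gg-namespace-view" -GroupScope Global -Path "OU=Security Groups,DC=example,DC=com"
  Add-ADGroupMember -Identity "gg-namespace-view" -Members "devops-viewer-01"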

Table 9. Design Decisions on Certificate Management for Developer Ready Infrastructure for VMware Cloud Foundation

DRI-TZU-SEC-003
Design Decision: Replace the default self-signed certificate for the Supervisor Cluster management interface with a PEM-encoded, CA-signed certificate.
Design Justification: Ensures that communication between administrators and the Supervisor Cluster management interface is encrypted by using a trusted certificate.
Design Implication: Replacing and managing certificates adds operational overhead because it must be done outside of the SDDC Manager certificate automation.

DRI-TZU-SEC-004
Design Decision: Use a SHA-2 or higher algorithm when signing certificates.
Design Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Design Implication: Not all certificate authorities support SHA-2.
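Before installing a replacement certificate, you can confirm that it satisfies DRI-TZU-SEC-004 by inspecting its signature algorithm. The following PowerShell sketch assumes a certificate file at a placeholder path.

  # Load a PEM- or DER-encoded certificate file (placeholder path) and show its signature algorithm.
  $certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\supervisor.cer")
  $certificate.SignatureAlgorithm.FriendlyName   # For example, sha256RSA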