You can stretch a cluster in the management domain or VI workload domain using a JSON specification and the VMware Cloud Foundation API.

Prerequisites

  • Verify that vCenter Server is operational.
  • Verify that you have completed the Planning and Preparation Workbook with the management domain or VI workload domain deployment option included.
  • Verify that your environment meets the requirements listed in the Prerequisite Checklist sheet in the Planning and Preparation Workbook.
  • Create a network pool for availability zone 2.
  • Commission hosts for availability zone 2. See Commission Hosts.
  • Ensure that each availability zone contains an equal number of hosts, so that sufficient resources remain available if one availability zone fails completely.
  • Deploy and configure a vSAN witness host. See Deploy and Configure vSAN Witness Host.
  • If you are stretching a cluster in a VI workload domain, you must first stretch the default management vSphere cluster.
Note: You cannot stretch a cluster in the following cases:
  • The cluster uses static IP addresses for the NSX Host Overlay Network TEPs.
  • The cluster has a vSAN remote datastore mounted on it.
  • The cluster uses vSphere Lifecycle Manager images.
  • The cluster shares a vSAN Storage Policy with any other clusters.
  • The cluster is enabled for Workload Management (vSphere with Tanzu).

Procedure

  1. Create a JSON specification with the following content in a text editor.
    The following example is for an environment with a single vSphere Distributed Switch. If you have multiple vSphere Distributed Switches, see the VMware Cloud Foundation API Reference Guide for details about creating a JSON specification.
    Note: The ESXi hosts that you are adding to availability zone 2 must use the same vmnic to vSphere Distributed Switch mapping as the existing hosts in availability zone 1. For example: If the hosts in availability zone 1 map vmnic0 and vmnic1 to vSphere Distributed Switch 1 and vmnic2 and vmnic3 to vSphere Distributed Switch 2, then the hosts you are adding to availability zone 2 must map the same vmnics to the same vSphere Distributed Switches.
    {
     "clusterStretchSpec": {
      "hostSpecs": [{
       "id": "<ESXi host 1 ID>",
       "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
       "hostNetworkSpec": {
        "vmNics": [{
          "id": "vmnic0",
          "vdsName": "<vSphere Distributed Switch 1>"
         },
         {
          "id": "vmnic1",
          "vdsName": "<vSphere Distributed Switch 2>"
         }
        ]
       }
      }, {
       "id": "<ESXi host 2 ID>",
       "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
       "hostNetworkSpec": {
        "vmNics": [{
          "id": "vmnic0",
          "vdsName": "<vSphere Distributed Switch 1>"
         },
         {
          "id": "vmnic1",
          "vdsName": "<vSphere Distributed Switch 2>"
         }
        ]
       }
      }, {
       "id": "<ESXi host 3 ID>",
       "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
       "hostNetworkSpec": {
        "vmNics": [{
          "id": "vmnic0",
          "vdsName": "<vSphere Distributed Switch 1>"
         },
         {
          "id": "vmnic1",
          "vdsName": "<vSphere Distributed Switch 2>"
         }
        ]
       }
      }, {
       "id": "<ESXi host 4 ID>",
       "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
       "hostNetworkSpec": {
        "vmNics": [{
          "id": "vmnic0",
          "vdsName": "<vSphere Distributed Switch 1>"
         },
         {
          "id": "vmnic1",
          "vdsName": "<vSphere Distributed Switch 2>"
         }
        ]
       }
      }],
      "witnessSpec": {
       "vsanIp": "<IP address of vSAN witness host>",
       "fqdn": "<fqdn of vSAN witness host>",
       "vsanCidr": "<cidr of vSAN witness host>"
      },
      "witnessTrafficSharedWithVsanTraffic": false,
      "secondaryAzOverlayVlanId": <Availability Zone 2 Overlay VLAN ID>,
      "isEdgeClusterConfiguredForMultiAZ": true
     }
    }
    Note: The vsanCidr and vsanIp values are for the witness appliance on which vSAN is enabled.
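    Because the four hostSpecs entries differ only in their host IDs, you can also generate the specification with a short script. The following Python sketch builds the same payload; the helper name and all placeholder values (host IDs, switch names, witness addresses, VLAN ID) are illustrative assumptions, not values from your environment.

```python
import json

def build_stretch_spec(host_ids, license_key, vds_names,
                       witness_ip, witness_fqdn, witness_cidr,
                       overlay_vlan_id):
    """Build the clusterStretchSpec payload for the cluster stretch API.

    host_ids: IDs of the ESXi hosts to add to availability zone 2.
    vds_names: vSphere Distributed Switch name per vmnic, in order
    (vmnic0 -> vds_names[0], vmnic1 -> vds_names[1], ...).
    """
    host_specs = [
        {
            "id": host_id,
            "licenseKey": license_key,
            "hostNetworkSpec": {
                "vmNics": [
                    {"id": "vmnic%d" % i, "vdsName": vds}
                    for i, vds in enumerate(vds_names)
                ]
            },
        }
        for host_id in host_ids
    ]
    return {
        "clusterStretchSpec": {
            "hostSpecs": host_specs,
            "witnessSpec": {
                "vsanIp": witness_ip,
                "fqdn": witness_fqdn,
                "vsanCidr": witness_cidr,
            },
            "witnessTrafficSharedWithVsanTraffic": False,
            "secondaryAzOverlayVlanId": overlay_vlan_id,
            "isEdgeClusterConfiguredForMultiAZ": True,
        }
    }

# All argument values below are hypothetical placeholders.
spec = build_stretch_spec(
    host_ids=["host-id-1", "host-id-2", "host-id-3", "host-id-4"],
    license_key="XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
    vds_names=["vds01", "vds02"],
    witness_ip="172.17.11.211",
    witness_fqdn="vsan-witness.example.com",
    witness_cidr="172.17.11.0/24",
    overlay_vlan_id=1625,
)
print(json.dumps(spec, indent=1))
```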

  2. In the navigation pane, click Developer Center > API Explorer.
  3. Retrieve and replace the unique IDs for each ESXi host in the JSON specification.
    1. Expand the APIs for managing hosts section, and expand GET /v1/hosts.
    2. In the Status text box, enter UNASSIGNED_USEABLE and click Execute.
    3. In the Response section, click PageOfHost, copy the id element of each host, and replace the respective value in the JSON specification.

      ESXi Host      Value
      ESXi Host 1    ESXi host 1 ID
      ESXi Host 2    ESXi host 2 ID
      ESXi Host 3    ESXi host 3 ID
      ESXi Host 4    ESXi host 4 ID
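    The same host IDs can also be extracted from the GET /v1/hosts response programmatically. The sketch below assumes the PageOfHost body contains an elements array with an id field per host, matching the model that API Explorer displays; the sample values are hypothetical.

```python
# Hypothetical (truncated) PageOfHost response from
# GET /v1/hosts?status=UNASSIGNED_USEABLE. Real IDs are long UUIDs.
page_of_host = {
    "elements": [
        {"id": "host-id-1", "fqdn": "esx05.example.com"},
        {"id": "host-id-2", "fqdn": "esx06.example.com"},
        {"id": "host-id-3", "fqdn": "esx07.example.com"},
        {"id": "host-id-4", "fqdn": "esx08.example.com"},
    ]
}

# Collect the id element of each unassigned host for the JSON specification.
host_ids = [host["id"] for host in page_of_host["elements"]]
print(host_ids)
```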
  4. Replace the license key values in the JSON specification with valid license keys.
  5. Retrieve the unique ID for the management cluster.
    1. Expand the APIs for managing clusters section, and expand GET /v1/clusters.
    2. Click Execute.
    3. In the Response section, click PageOfCluster and copy the id element of the management cluster.
      You will need the cluster ID for subsequent steps.
  6. Validate the JSON specification file.
    1. Expand the APIs for managing clusters section and expand POST /v1/clusters/{id}/validations.
    2. In the Value text box, enter the unique ID of the management cluster that you retrieved in step 5.
    3. In the clusterUpdateSpec text box, type
      {
      "clusterUpdateSpec": 
      }
    4. Paste the JSON specification as the value of clusterUpdateSpec.
      For example:
      {
      "clusterUpdateSpec": {
       "clusterStretchSpec": {
        "hostSpecs": [{
         "id": "<ESXi host 1 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
          }
         }, {
         "id": "<ESXi host 2 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }, {
         "id": "<ESXi host 3 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }, {
         "id": "<ESXi host 4 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }],
        "witnessSpec": {
         "vsanIp": "<IP address of vSAN witness host>",
         "fqdn": "<fqdn of vSAN witness host>",
         "vsanCidr": "<cidr of vSAN witness host>"
        },
        "witnessTrafficSharedWithVsanTraffic": false,
        "secondaryAzOverlayVlanId": <Availability Zone 2 Overlay VLAN ID>,
        "isEdgeClusterConfiguredForMultiAZ": true
        }
       }
      }
    5. Click Execute.
    6. In the confirmation dialog box, click Continue.
    7. In the Response section, expand the result section and verify that the response is SUCCEEDED.
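    Note that the validation call wraps the specification from step 1 in a clusterUpdateSpec element, while the stretch call in the next step takes the specification as-is. A minimal Python sketch of the two request bodies (the empty hostSpecs placeholder stands in for the full specification):

```python
import json

# Minimal stand-in for the full specification created in step 1.
stretch_spec = {"clusterStretchSpec": {"hostSpecs": []}}

# Body for POST /v1/clusters/{id}/validations: the specification is
# wrapped in a clusterUpdateSpec element.
validation_body = {"clusterUpdateSpec": stretch_spec}

# Body for PATCH /v1/clusters/{id} (step 7): the specification is
# used as-is, without the wrapper.
patch_body = stretch_spec

print(json.dumps(validation_body))
```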
  7. Stretch the cluster with the JSON specification.
    1. Expand the APIs for managing clusters section and expand PATCH /v1/clusters/{id}.
    2. Paste the unique ID of the management cluster in the Value text box.
    3. In the clusterUpdateSpec text box, paste the JSON specification.
      For example:
      {
       "clusterStretchSpec": {
        "hostSpecs": [{
         "id": "<ESXi host 1 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }, {
         "id": "<ESXi host 2 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }, {
         "id": "<ESXi host 3 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }, {
         "id": "<ESXi host 4 ID>",
         "licenseKey": "XX0XX-XX0XX-XX0XX-XX0XX-XX0XX",
         "hostNetworkSpec": {
          "vmNics": [{
            "id": "vmnic0",
            "vdsName": "<vSphere Distributed Switch 1>"
           },
           {
            "id": "vmnic1",
            "vdsName": "<vSphere Distributed Switch 2>"
           }
          ]
         }
        }],
        "witnessSpec": {
         "vsanIp": "<IP address of vSAN witness host>",
         "fqdn": "<fqdn of vSAN witness host>",
         "vsanCidr": "<cidr of vSAN witness host>"
        },
        "witnessTrafficSharedWithVsanTraffic": false,
        "secondaryAzOverlayVlanId": <Availability Zone 2 Overlay VLAN ID>,
        "isEdgeClusterConfiguredForMultiAZ": true
       }
      }
    4. Click Execute.
    5. In the confirmation dialog box, click Continue.
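    If you script the stretch operation instead of using API Explorer, the same PATCH call can be issued with any HTTP client. The following Python sketch only builds the request object and does not send it; the SDDC Manager FQDN, cluster ID, and bearer-token authentication shown here are assumptions to adapt to your environment, and certificate handling is omitted.

```python
import json
import urllib.request

def make_stretch_request(sddc_manager_fqdn, cluster_id, access_token, stretch_spec):
    """Build (but do not send) the PATCH /v1/clusters/{id} request."""
    return urllib.request.Request(
        url="https://%s/v1/clusters/%s" % (sddc_manager_fqdn, cluster_id),
        data=json.dumps(stretch_spec).encode("utf-8"),
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            # Bearer-token authentication is an assumption; obtain a
            # token per your SDDC Manager version's documentation.
            "Authorization": "Bearer " + access_token,
        },
    )

# Hypothetical values for illustration only.
req = make_stretch_request(
    "sddc-manager.example.com",
    "<management cluster ID>",
    "<access token>",
    {"clusterStretchSpec": {"hostSpecs": []}},
)
print(req.get_method(), req.full_url)
```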

What to do next

Configure NSX for availability zone 2.