Specific vSAN configurations, such as a stretched cluster, require a witness host. Instead of using a dedicated physical ESXi host as a witness host, you can deploy the vSAN witness appliance. The appliance is a preconfigured virtual machine that runs ESXi and is distributed as an OVA file.

Unlike a general-purpose ESXi host, the witness appliance does not run virtual machines. Its only purpose is to serve as a vSAN witness, and it can contain only witness components.

The workflow to deploy and configure the vSAN witness appliance includes the sizing and datastore decisions below and the deployment steps listed later in this section.

When you deploy the vSAN witness appliance, you must configure the size of the witness supported by the vSAN stretched cluster. Choose one of the following options:
  • Tiny supports up to 750 components (10 VMs or fewer).
  • Medium supports up to 21,833 components (500 VMs). As a shared witness, the Medium witness appliance supports up to 21,000 components and up to 21 two-node vSAN clusters.
  • Large supports up to 45,000 components (more than 500 VMs). As a shared witness, the Large witness appliance supports up to 24,000 components and up to 24 two-node vSAN clusters.
  • Extra Large supports up to 64,000 components (more than 500 VMs). As a shared witness, the Extra Large witness appliance supports up to 64,000 components and up to 64 two-node vSAN clusters.
Note: These estimates are based on standard VM configurations. The number of components that make up a VM can vary, depending on the number of virtual disks, policy settings, snapshot requirements, and so on. For more information about witness appliance sizing for two-node vSAN clusters, refer to the vSAN 2 Node Guide.

You must also select a datastore for the vSAN witness appliance. The witness appliance must use a different datastore than the vSAN stretched cluster datastore.

  1. Download the appliance from the VMware website.
  2. Deploy the appliance to a vSAN host or cluster. For more information, see Deploying OVF Templates in the vSphere Virtual Machine Administration documentation.
  3. Configure the vSAN network on the witness appliance.
  4. Configure the management network on the witness appliance.
  5. Add the appliance to vCenter Server as a witness ESXi host. Make sure to configure the vSAN VMkernel interface on the host.
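
If ESXi Shell or SSH access is enabled on the witness host, you can verify or re-apply the vSAN tag on its VMkernel adapter from the command line. The following is a minimal sketch; it assumes that vmk1 is the vSAN-facing adapter, which is typical for the witness appliance's default configuration, so adjust the interface name to match your environment.

    # List VMkernel adapters currently tagged for vSAN or witness traffic
    esxcli vsan network list
    # Tag vmk1 (assumed vSAN-facing adapter) for vSAN data traffic, if it is not already tagged
    esxcli vsan network ip add -i vmk1 -T vsan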

Set Up the vSAN Network on the Witness Appliance

The vSAN witness appliance includes two preconfigured network adapters. You must change the configuration of the second adapter so that the appliance can connect to the vSAN network.

Procedure

  1. Navigate to the virtual appliance that contains the witness host.
  2. Right-click the appliance and select Edit Settings.
  3. On the Virtual Hardware tab, expand the second Network adapter.
  4. From the drop-down menu, select the vSAN port group and click OK.

Configure Management Network on the Witness Appliance

Configure the witness appliance so that it is reachable on the network.

By default, the appliance can automatically obtain networking parameters if your network includes a DHCP server. If not, you must configure appropriate settings.

Procedure

  1. Power on your witness appliance and open its console.
    Because your appliance is an ESXi host, you see the Direct Console User Interface (DCUI).
  2. Press F2 and navigate to the Network Adapters page.
  3. On the Network Adapters page, verify that at least one vmnic is selected for transport.
  4. Configure the IPv4 parameters for the management network.
    1. Navigate to the IPv4 Configuration section and change the default DHCP setting to static.
    2. Enter the following settings:
      • IP address
      • Subnet mask
      • Default gateway
  5. Configure DNS parameters.
    • Primary DNS server
    • Alternate DNS server
    • Hostname
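
If ESXi Shell or SSH access is enabled on the witness appliance, you can apply the same management network settings with esxcli instead of the DCUI. This is a minimal sketch; the interface name, addresses, and host name below are placeholders for illustration only.

    # Set a static IPv4 address on the management adapter (vmk0 assumed; values are placeholders)
    esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.0.2.10 -N 255.255.255.0
    # Add the default gateway
    esxcli network ip route ipv4 add -n default -g 192.0.2.1
    # Add a DNS server and set the host name
    esxcli network ip dns server add -s 192.0.2.53
    esxcli system hostname set --host=witness-01 --domain=example.com
    # Verify the resulting IPv4 configuration
    esxcli network ip interface ipv4 get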

Configure Network Interface for Witness Traffic

You can separate data traffic from witness traffic in two-node vSAN clusters and vSAN stretched clusters.

vSAN data traffic requires a low-latency, high-bandwidth link. Witness traffic can use a high-latency, low-bandwidth, routable link. To separate data traffic from witness traffic, you can configure a dedicated VMkernel network adapter for vSAN witness traffic.

You can add support for a direct network cross-connection to carry vSAN data traffic in a vSAN stretched cluster. You can configure a separate network connection for witness traffic. On each data host in the cluster, configure the management VMkernel network adapter to also carry witness traffic. Do not configure the witness traffic type on the witness host.

Note: Network Address Translation (NAT) is not supported between vSAN data hosts and the witness host.

Prerequisites

  • Verify that the connection from each data site to the witness host provides a minimum bandwidth of 2 Mbps for every 1,000 vSAN components (see the example after this list).
  • Verify the latency requirements:
    • Two-node vSAN clusters must have less than 500 ms RTT.
    • vSAN stretched clusters with less than 11 hosts per site must have less than 200 ms RTT.
    • vSAN stretched clusters with 11 or more hosts per site must have less than 100 ms RTT.
  • Verify that the vSAN data connection meets the following requirements:
    • For hosts directly connected in a two-node vSAN cluster, use a 10 Gbps direct connection between hosts. Hybrid clusters can also use a 1 Gbps crossover connection between hosts.
    • For hosts connected to a switched infrastructure, use a 10 Gbps shared connection (required for all-flash clusters), or a 1 Gbps dedicated connection.
  • Verify that data traffic and witness traffic use the same IP version.
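
For example, based on the 2 Mbps per 1,000 components guideline, a cluster whose objects consume approximately 15,000 components requires at least 30 Mbps of bandwidth between each data site and the witness host.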

Procedure

  1. Open an SSH connection to the ESXi host.
  2. Use the esxcli network ip interface list command to determine which VMkernel network adapter is used for management traffic.
    For example:
    esxcli network ip interface list
    vmk0
       Name: vmk0
       MAC Address: e4:11:5b:11:8c:16
       Enabled: true
       Portset: vSwitch0
       Portgroup: Management Network
       Netstack Instance: defaultTcpipStack
       VDS Name: N/A
       VDS UUID: N/A
       VDS Port: N/A
       VDS Connection: -1
       Opaque Network ID: N/A
       Opaque Network Type: N/A
       External ID: N/A
       MTU: 1500
       TSO MSS: 65535
       Port ID: 33554437
    
    vmk1
       Name: vmk1
       MAC Address: 00:50:56:6a:3a:74
       Enabled: true
       Portset: vSwitch1
       Portgroup: vsandata
       Netstack Instance: defaultTcpipStack
       VDS Name: N/A
       VDS UUID: N/A
       VDS Port: N/A
       VDS Connection: -1
       Opaque Network ID: N/A
       Opaque Network Type: N/A
       External ID: N/A
       MTU: 9000
       TSO MSS: 65535
       Port ID: 50331660
    
    Note: Multicast information is included for backward compatibility. vSAN 6.6 and later releases do not require multicast.
  3. Use the esxcli vsan network ip add command to configure the management VMkernel network adapter to support witness traffic.
    esxcli vsan network ip add -i vmkx -T witness 
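    For example, if the management traffic adapter identified in step 2 is vmk0:
    esxcli vsan network ip add -i vmk0 -T witness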
  4. Use the esxcli vsan network list command to verify the new network configuration.
    For example:
    esxcli vsan network list
    Interface
       VmkNic Name: vmk0
       IP Protocol: IP
       Interface UUID: 8cf3ec57-c9ea-148b-56e1-a0369f56dcc0
       Agent Group Multicast Address: 224.2.3.4
       Agent Group IPv6 Multicast Address: ff19::2:3:4
       Agent Group Multicast Port: 23451
       Master Group Multicast Address: 224.1.2.3
       Master Group IPv6 Multicast Address: ff19::1:2:3
       Master Group Multicast Port: 12345
       Host Unicast Channel Bound Port: 12321
       Multicast TTL: 5
       Traffic Type: witness
    
    Interface
       VmkNic Name: vmk1
       IP Protocol: IP
       Interface UUID: 6df3ec57-4fb6-5722-da3d-a0369f56dcc0
       Agent Group Multicast Address: 224.2.3.4
       Agent Group IPv6 Multicast Address: ff19::2:3:4
       Agent Group Multicast Port: 23451
       Master Group Multicast Address: 224.1.2.3
       Master Group IPv6 Multicast Address: ff19::1:2:3
       Master Group Multicast Port: 12345
       Host Unicast Channel Bound Port: 12321
       Multicast TTL: 5
       Traffic Type: vsan
    

Results

In the vSphere Client, the management VMkernel network interface is not selected for vSAN traffic. Do not re-enable the interface in the vSphere Client.