The VMkernel networking layer provides connectivity to hosts and handles the standard system traffic of vSphere vMotion, IP storage, Fault Tolerance, vSAN, and others. You can also create VMkernel adapters on the source and target vSphere Replication hosts to isolate the replication data traffic.

Multihoming in VMkernel networking is defined as having multiple VMkernel adapters in a single TCP/IP stack.
Note: A single vmnic used by multiple TCP/IP stacks does not qualify as multihoming.
Multihoming is supported if the vmknics are in different IP subnets. The administrator must ensure that the route entries reflect the intent. ESXi does not run any routing protocol and does not support or add dynamic routes. Any specific route entry must be added statically from vCenter Server or from the ESXi host. Configurations with more than one vmknic interface on the same IP subnet in a single network stack instance are not supported. Deploying such a configuration might lead to unexpected results such as connectivity issues, low throughput, and asymmetric routing. However, there are a few exceptions to this rule, for example, iSCSI, multi-NIC vMotion, and NSX VTEPs. For more information, see http://kb.vmware.com/kb/2010877.
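The following Python sketch, using the pyVmomi SDK, shows one way a static IPv4 route might be added programmatically for the default TCP/IP stack of a host. The vCenter and host names, credentials, networks, and gateway address are placeholders, and the data object property names (IpRouteEntry, IpRouteOp, IpRouteTableConfig) should be verified against the vSphere API reference for your release before use.

# Sketch: add a static IPv4 route to an ESXi host's default TCP/IP stack with pyVmomi.
# Host names, credentials, networks, and the gateway below are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)

route = vim.host.IpRouteEntry(network="192.168.50.0", prefixLength=24,
                              gateway="10.0.0.1")  # gateway must be reachable from a vmknic subnet
op = vim.host.IpRouteOp(changeOperation="add", route=route)
config = vim.host.IpRouteTableConfig(ipRoute=[op])

# HostNetworkSystem applies the change; the route becomes a static entry on the host.
host.configManager.networkSystem.UpdateIpRouteTableConfig(config)
Disconnect(si)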

TCP/IP Stacks at the VMkernel Level

Default TCP/IP stack
Provides networking support for the management traffic between vCenter Server and ESXi hosts, and for system traffic such as vMotion, IP storage, Fault Tolerance, and so on.
vMotion TCP/IP stack
Supports the traffic for live migration of virtual machines. Use the vMotion TCP/IP stack to provide better isolation for the vMotion traffic. After you create a VMkernel adapter on the vMotion TCP/IP stack, you can use only this stack for vMotion on this host. The VMkernel adapters on the default TCP/IP stack are disabled for the vMotion service. If a live migration uses the default TCP/IP stack while you configure VMkernel adapters with the vMotion TCP/IP stack, the migration completes successfully. However, the involved VMkernel adapters on the default TCP/IP stack are disabled for future vMotion sessions.
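As a minimal sketch, a VMkernel adapter can be placed on the vMotion TCP/IP stack through the pyVmomi SDK as shown below. The port group name, addresses, and connection details are placeholders; "vmotion" is the netstack key the vSphere API uses for this stack ("vSphereProvisioning" is the analogous key for the provisioning stack described next), but verify the key values against the API reference for your release.

# Sketch: create a VMkernel adapter on the vMotion TCP/IP stack with pyVmomi.
# The port group "vMotion-PG", the IP address, and the credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)

spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.70.11", subnetMask="255.255.255.0"),
    netStackInstanceKey="vmotion",   # place the adapter on the vMotion TCP/IP stack
    mtu=1500)

# AddVirtualNic returns the new device name, for example "vmk2".
vmk = host.configManager.networkSystem.AddVirtualNic(portgroup="vMotion-PG", nic=spec)
print("Created", vmk)
Disconnect(si)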
Provisioning TCP/IP stack
Supports the traffic for virtual machine cold migration, cloning, and snapshot migration. You can use the provisioning TCP/IP stack to handle Network File Copy (NFC) traffic during long-distance vMotion. NFC provides a file-specific FTP service for vSphere. ESXi uses NFC for copying and moving data between datastores. VMkernel adapters configured with the provisioning TCP/IP stack handle the traffic from cloning the virtual disks of the migrated virtual machines in long-distance vMotion. By using the provisioning TCP/IP stack, you can isolate the traffic from the cloning operations on a separate gateway. After you configure a VMkernel adapter with the provisioning TCP/IP stack, all adapters on the default TCP/IP stack are disabled for the Provisioning traffic.
Custom TCP/IP stacks
You can add custom TCP/IP stacks at the VMkernel level to handle networking traffic of custom applications.
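Custom TCP/IP stacks are created directly on the ESXi host, for example with the esxcli network ip netstack add command. The sketch below only lists the stack instances that a host reports through the vSphere API; the connection details are placeholders, and the netStackInstance property name should be confirmed in the API reference for your release.

# Sketch: list the TCP/IP stack instances present on an ESXi host with pyVmomi.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)

# Each entry is a HostNetStackInstance; custom stacks appear here once created on the host.
for stack in host.config.network.netStackInstance:
    print(stack.key, "-", stack.name)
Disconnect(si)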

Securing System Traffic

Take appropriate security measures to prevent unauthorized access to the management and system traffic in your vSphere environment. For example, isolate the vMotion traffic in a separate network that includes only the ESXi hosts that participate in the migration. Isolate the management traffic in a network that only network and security administrators can access. For more information, see vSphere Security and vSphere Installation and Setup.

System Traffic Types

Dedicate a separate VMkernel adapter for every traffic type. For distributed switches, dedicate a separate distributed port group for each VMkernel adapter.

Management traffic
Carries the configuration and management communication for ESXi hosts, vCenter Server, and host-to-host High Availability traffic. By default, when you install the ESXi software, a vSphere Standard switch is created on the host together with a VMkernel adapter for management traffic. To provide redundancy, you can connect two or more physical NICs to a VMkernel adapter for management traffic.
vMotion traffic
Accommodates vMotion. A VMkernel adapter for vMotion is required both on the source and the target hosts. Configure the VMkernel adapters for vMotion to handle only the vMotion traffic. For better performance, you can configure multi-NIC vMotion. To have multi-NIC vMotion, dedicate two or more port groups to the vMotion traffic, and associate a vMotion VMkernel adapter with each port group. Then connect one or more physical NICs to every port group. In this way, multiple physical NICs are used for vMotion, which results in greater bandwidth. See the sketch after the note below.
Note: vMotion network traffic is not encrypted. You should provision secure private networks for use by vMotion only.
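A minimal pyVmomi sketch of the multi-NIC vMotion pattern described above follows. The port group names, IP addresses, and connection details are placeholders, and each port group is assumed to already exist with its own active physical NIC in its teaming policy; verify the calls against the vSphere API reference for your release.

# Sketch: multi-NIC vMotion - one VMkernel adapter per port group, each enabled for vMotion.
# Port group names, addresses, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)
net_sys = host.configManager.networkSystem
vnic_mgr = host.configManager.virtualNicManager

for portgroup, address in (("vMotion-PG-1", "192.168.70.11"),
                           ("vMotion-PG-2", "192.168.70.12")):
    spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=address, subnetMask="255.255.255.0"))
    vmk = net_sys.AddVirtualNic(portgroup=portgroup, nic=spec)
    # Mark the new adapter for the vMotion service on the default TCP/IP stack.
    vnic_mgr.SelectVnicForNicType(nicType="vmotion", device=vmk)
Disconnect(si)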
Provisioning traffic
Handles the data that is transferred for virtual machine cold migration, cloning, and snapshot migration.
IP storage traffic and discovery
Handles the connection for storage types that use standard TCP/IP networks and depend on the VMkernel networking layer. Such storage types are software iSCSI, dependent hardware iSCSI, and NFS. If you have two or more physical NICs for iSCSI, you can configure iSCSI multipathing. ESXi hosts support NFS 3 and 4.1. To configure a software Fibre Channel over Ethernet (FCoE) adapter, you must have a dedicated VMkernel adapter. Software FCoE passes configuration information through the Data Center Bridging Exchange (DCBX) protocol by using the Cisco Discovery Protocol (CDP) VMkernel module.
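For iSCSI multipathing, VMkernel adapters are bound to the software iSCSI adapter (port binding). The following pyVmomi sketch assumes a software iSCSI adapter named vmhba64 and VMkernel adapters vmk1 and vmk2 that already satisfy the port-binding requirements; all names and connection details are placeholders, and the IscsiManager and storage system calls should be checked against the vSphere API reference for your release.

# Sketch: bind two VMkernel adapters to a software iSCSI adapter (iSCSI multipathing).
# "vmhba64", the vmk devices, and the credentials are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)

iscsi_mgr = host.configManager.iscsiManager
for vmk in ("vmk1", "vmk2"):
    iscsi_mgr.BindVnic("vmhba64", vmk)  # (iSCSI adapter name, vmknic device)

# Rescan the adapter so the new paths are discovered.
host.configManager.storageSystem.RescanHba("vmhba64")
Disconnect(si)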
Fault Tolerance traffic
Handles the data that the primary fault tolerant virtual machine sends to the secondary fault tolerant virtual machine over the VMkernel networking layer. A separate VMkernel adapter for Fault Tolerance logging is required on every host that is part of a vSphere HA cluster.
vSphere Replication traffic
Handles the outgoing replication data that the source ESXi host transfers to the vSphere Replication server. Dedicate a VMkernel adapter on the source site to isolate the outgoing replication traffic.
vSphere Replication NFC traffic
Handles the incoming replication data on the target replication site.
vSAN traffic
Every host that participates in a vSAN cluster must have a VMkernel adapter to handle the vSAN traffic.
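As a sketch, the vSAN service can be enabled on an existing VMkernel adapter through the pyVmomi SDK as shown below. The adapter name vmk3 and the connection details are placeholders; "vsan" is the service identifier used by the vSphere API, and other services such as faultToleranceLogging, vSphereReplication, and vSphereReplicationNFC are enabled the same way, but confirm the identifiers in the API reference for your release.

# Sketch: enable the vSAN service on an existing VMkernel adapter ("vmk3" is a placeholder).
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())  # lab only
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                      vmSearch=False)

# "vsan" is the nicType identifier for the vSAN service in the vSphere API.
host.configManager.virtualNicManager.SelectVnicForNicType(nicType="vsan", device="vmk3")
Disconnect(si)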
vSphere Backup NFC
VMkernel port setting for dedicated backup NFC traffic. NFC traffic goes through the VMkernel adapter when the vSphere Backup NFC service is enabled.
NVMe over TCP
VMkernel port setting for dedicated NVMe over TCP storage traffic. NVMe over TCP storage traffic goes through the VMkernel adapter when the NVMe over TCP adapter is enabled. For more information, see vSphere Storage Guide.
NVMe over RDMA
VMkernel port setting for dedicated NVMe over RDMA storage traffic. NVMe over RDMA storage traffic goes through the VMkernel adapter when the NVMe over RDMA adapter is enabled. For more information, see vSphere Storage Guide.