VMware Integrated OpenStack with Kubernetes supports VDS, NSX-V, and NSX-T backend networking.

Networking Support

Container network and load balancer support for Kubernetes Services depends on the backend networking.

Backend Networking   Container Network        Load Balancer
VDS                  Flannel                  Kubernetes Nginx Ingress Controller
NSX-V                Flannel                  NSX Edge
NSX-T                NSX Container Plugin     Kubernetes Nginx Ingress Controller

Where:

  • Flannel is a network fabric for containers and is the default for VDS and NSX-V networking.

  • NSX Container Plugin (NCP) is a software component that sits between the NSX Manager and the Kubernetes API server. It monitors changes to Kubernetes objects and creates networking constructs based on the changes reported by the Kubernetes API. NCP provides native networking support for containers, is optimized for NSX-T networking, and is the default for that backend.

  • The NSX Edge load balancer distributes network traffic across multiple servers to optimize resource use and provide redundancy. It is the default for NSX-V networking.
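The watch-and-react pattern that NCP follows can be sketched in a few lines of Python. This is an illustrative sketch only, not NCP's actual code: the event shapes mimic a Kubernetes watch stream, and the `NsxBackend` class is a hypothetical stand-in for the calls NCP would make against the NSX Manager API.

```python
# Illustrative sketch of NCP's watch-and-react pattern.
# NsxBackend and the construct names are hypothetical stand-ins,
# not the real NSX Manager API.

class NsxBackend:
    """Stand-in for an NSX Manager API client; records requested constructs."""

    def __init__(self):
        self.constructs = []

    def create_logical_switch(self, namespace):
        self.constructs.append(("logical-switch", namespace))

    def create_load_balancer_pool(self, service):
        self.constructs.append(("lb-pool", service))


def handle_event(event, backend):
    """Translate a Kubernetes watch event into NSX networking constructs.

    `event` mimics the {"type": ..., "object": ...} dictionaries yielded
    by a Kubernetes watch stream.
    """
    kind = event["object"]["kind"]
    name = event["object"]["metadata"]["name"]
    if event["type"] == "ADDED" and kind == "Namespace":
        # A new namespace gets its own logical switch.
        backend.create_logical_switch(name)
    elif event["type"] == "ADDED" and kind == "Service":
        # A new Service gets a load balancer pool.
        backend.create_load_balancer_pool(name)


backend = NsxBackend()
events = [
    {"type": "ADDED", "object": {"kind": "Namespace", "metadata": {"name": "demo"}}},
    {"type": "ADDED", "object": {"kind": "Service", "metadata": {"name": "web"}}},
]
for ev in events:
    handle_event(ev, backend)
print(backend.constructs)  # [('logical-switch', 'demo'), ('lb-pool', 'web')]
```

The point of the sketch is the control loop: NCP does not poll cluster state wholesale, it reacts to individual change events reported by the Kubernetes API and mirrors them as NSX constructs.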

VDS Backend

VDS or vSphere Distributed Switch supports virtual networking across multiple hosts in vSphere.

VDS backend networking

With the VDS backend, VMware Integrated OpenStack with Kubernetes deploys Kubernetes cluster nodes directly on the OpenStack provider network. The OpenStack cloud administrator must verify that the provider network is accessible from outside the vSphere environment. VDS networking does not include native load balancing functionality for the cluster nodes, so VMware Integrated OpenStack with Kubernetes deploys HAProxy nodes outside the Kubernetes cluster to provide load balancing.
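The role of those HAProxy nodes can be illustrated with a minimal round-robin scheduler. This is a sketch of the balancing idea only, with hypothetical node names; it is not VMware's HAProxy configuration.

```python
from itertools import cycle

# Minimal sketch of round-robin load balancing, the kind of distribution an
# HAProxy node performs in front of the Kubernetes cluster nodes.
# The node addresses are hypothetical.

class RoundRobinBalancer:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def pick(self):
        """Return the next backend node to receive a connection."""
        return next(self._nodes)


balancer = RoundRobinBalancer(["k8s-node-1", "k8s-node-2", "k8s-node-3"])
picks = [balancer.pick() for _ in range(4)]
print(picks)  # ['k8s-node-1', 'k8s-node-2', 'k8s-node-3', 'k8s-node-1']
```

Because the HAProxy nodes sit outside the Kubernetes cluster on the provider network, clients reach a stable front-end address while connections are spread across the cluster nodes behind it.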

NSX-V Backend

NSX-V is the VMware NSX network virtualization and security platform for vSphere.

Figure 1. NSX-V backend networking



With the NSX-V backend, VMware Integrated OpenStack with Kubernetes deploys multiple nodes within a single cluster behind the native NSX Edge load balancer. The NSX Edge load balancer manages up to 32 worker nodes. Every node in the Kubernetes cluster is attached to an internal network, and that internal network is attached to a router whose default gateway is set to the external management network.

NSX-T Backend

NSX-T or NSX Transformer supports networking on a variety of compute platforms including KVM.

Figure 2. NSX-T backend networking



With the NSX-T backend, VMware Integrated OpenStack with Kubernetes deploys worker nodes, each with three NICs.

  • One NIC connects to the NSX Container Plugin (NCP) Pod network, an internal network with no routing outside the vSphere environment. When the NSX Container Plugin is enabled, this NIC is dedicated to Pod traffic.

  • One NIC connects to the Kubernetes API network which is attached to a router with SNAT disabled. Special Pods such as KubeDNS can access the API server using this network.

  • One NIC connects to the internal management network. This NIC is accessible from outside the vSphere environment through a floating IP that is assigned by the external management network.

NSX-T networking does not include native load balancing functionality, so VMware Integrated OpenStack with Kubernetes creates two separate load balancer nodes that connect to the management and API networks.