Refer to this topic to enable the Antrea-NSX Adapter, which lets you integrate a TKG Service cluster that uses the Antrea CNI with NSX Manager for network management and monitoring.

Prerequisites for the Antrea-NSX Adapter

Your environment must meet the following prerequisites:
  • vSphere 8 U3 (8.0.3) or later
  • NSX 4.1 or later
  • Supervisor is enabled with NSX networking
  • TKG Service 3.0 or later
  • Tanzu Kubernetes release v1.28.x for vSphere 8.x or later
  • Install the NSX Management Proxy Service if there is isolation between the Management and Workload Networks, which is the typical Supervisor topology

Requirements for the Antrea-NSX Adapter

The Antrea-NSX Adapter lets you integrate an Antrea-based TKG Service cluster with NSX Manager. Once the adapter is configured, you can use NSX Manager to manage the networking behavior of the cluster. For more information on the capabilities of the Antrea-NSX integration, see Integration of Kubernetes Clusters with Antrea CNI in the NSX 4.1 Administration Guide.

You must enable the Antrea-NSX Adapter for each TKG Service cluster that you want to integrate with NSX. In other words, there is a 1-to-1 relationship between an adapter and a cluster. In addition, you can only use the Antrea-NSX Adapter with new cluster deployments: you must enable the adapter before you create the TKG Service cluster, and you must include the name of the to-be-created cluster in the adapter resource definition.
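
For example, assuming a hypothetical cluster that will be named workload-cluster-1 in a hypothetical vSphere Namespace named workload-ns, the two resources pair by name as shown in the following sketch. Only the name fields are shown here; the full AntreaConfig specification appears in the procedure below.

  # Adapter resource, created before the cluster (hypothetical names)
  kind: AntreaConfig
  metadata:
    name: workload-cluster-1-antrea-package   # cluster name + required -antrea-package suffix
    namespace: workload-ns
  ---
  # Cluster, created afterwards in the same vSphere Namespace
  kind: TanzuKubernetesCluster                # v1alpha3 API; a v1beta1 Cluster pairs by name in the same way
  metadata:
    name: workload-cluster-1                  # must exactly match the prefix above
    namespace: workload-ns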

If the NSX Tier-0 or Tier-1 gateways for TKG Service clusters are configured with an SNAT IP, all Antrea-NSX connections share a single source IP. NSX interprets this as many control plane connections from the same IP and drops the connections. If this is the case, you must manually change the NSX UA firewall rules. Refer to the following KB article for details: https://knowledge.broadcom.com/external/article?articleNumber=317179.

Enable the Antrea-NSX Adapter

Complete the following steps to enable the Antrea-NSX Adapter.

  1. Authenticate with Supervisor using kubectl.
    kubectl vsphere login --server=SUPERVISOR-CONTROL-PLANE-IP-ADDRESS-or-FQDN --vsphere-username USERNAME
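    For example, with hypothetical placeholder values for the Supervisor address and vCenter SSO user:
    kubectl vsphere login --server=192.0.2.10 --vsphere-username administrator@vsphere.local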
  2. Switch context to the target vSphere Namespace where you will provision the TKG Service cluster.
    kubectl config use-context vsphere-namespace
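    For example, if the vSphere Namespace is named tkgs-cluster-ns, as in the AntreaConfig example that follows:
    kubectl config use-context tkgs-cluster-ns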
  3. Create the AntreaConfig.yaml file that defines the AntreaConfig custom resource.
    #AntreaConfig.yaml
    apiVersion: cni.tanzu.vmware.com/v1alpha1
    kind: AntreaConfig
    metadata:
      name: tkgs-cluster-name-antrea-package #prefix required
      namespace: tkgs-cluster-ns
    spec:
      antreaNSX:
        enable: true #false by default
    Where:
    • Setting antreaNSX.enable to true enables the Antrea-NSX adapter. By default, this field is false.
    • The metadata.name value must be the exact name of the TKG Service cluster that you will create and that will use the adapter, followed by the mandatory -antrea-package suffix.
  4. Apply the AntreaConfig custom resource.
    kubectl apply -f AntreaConfig.yaml
  5. Provision the TKG Service cluster.

    The adapter can be used with either the v1alpha3 API or the v1beta1 API.

    See Workflow for Provisioning TKG Clusters Using Kubectl.
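
    For illustration, a minimal v1alpha3 manifest might look like the following sketch. The cluster name matches the prefix used in the AntreaConfig above; the VM class, storage class, and Tanzu Kubernetes release (TKR) name are hypothetical placeholders that you must replace with values valid in your environment. Antrea is the default CNI for TKG Service clusters, so no explicit CNI setting is required.

    #tkgs-cluster.yaml (sketch with placeholder values)
    apiVersion: run.tanzu.vmware.com/v1alpha3
    kind: TanzuKubernetesCluster
    metadata:
      name: tkgs-cluster-name            # must exactly match the AntreaConfig name prefix
      namespace: tkgs-cluster-ns
    spec:
      topology:
        controlPlane:
          replicas: 3
          vmClass: guaranteed-medium           # placeholder VM class
          storageClass: tkg-storage-policy     # placeholder storage class
          tkr:
            reference:
              name: v1.28.x-tkr-name           # placeholder; use a v1.28.x or later TKR
        nodePools:
        - name: worker-nodepool-1
          replicas: 3
          vmClass: guaranteed-medium
          storageClass: tkg-storage-policy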

  6. Verify that the adapter is functioning by logging in to NSX Manager.

    Refer to the NSX 4.1 documentation for further guidance.
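
    In addition, as a supplementary check from the Supervisor (using the resource names from the example above), you can confirm that the AntreaConfig resource exists and that antreaNSX.enable is set to true for the cluster:
    kubectl get antreaconfig tkgs-cluster-name-antrea-package -n tkgs-cluster-ns -o yaml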