
VMware NSX Container Plugin 3.2.1.9 | 15 JUL 2024 | Build 24072498

Check for additions and updates to these release notes.

What's New

This is an update release that resolves issues found in earlier releases. For other details about this release, see the previous 3.2.1.x release notes.

Deprecation Notice

The annotation ncp/whitelist-source-range will be deprecated in NCP 4.0. Starting with NCP 3.1.1, you can use the annotation ncp/allowed-source-range instead.
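
For example, the replacement annotation can be set on an Ingress as follows. This is a sketch only: the Ingress name, host, backend service, and CIDR are placeholders.

    # Placeholder Ingress showing the replacement annotation; all names and
    # the CIDR are illustrative only.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        ncp/allowed-source-range: "10.0.0.0/24"   # replaces ncp/whitelist-source-range
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-service
                    port:
                      number: 80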

The feature that allows access via NAT to Ingress controller pods using the ncp/ingress_controller annotation is deprecated and will be removed in 2023. The recommended way to expose Ingress controller pods is to use services of type LoadBalancer.
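
As a sketch of the recommended approach, a Service of type LoadBalancer in front of Ingress controller pods might look like the following. The Service name, selector labels, and ports are placeholders and must match your Ingress controller deployment.

    # Placeholder Service of type LoadBalancer; selector and ports are
    # illustrative and must match your Ingress controller deployment.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-controller-lb
    spec:
      type: LoadBalancer
      selector:
        app: ingress-controller
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443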

Compatibility Requirements

Product                                              Version
NCP/NSX-T Tile for Tanzu Application Service (TAS)   3.2.1
NSX-T                                                3.1.3, 3.2, 3.2.1 (see notes below)
vSphere                                              6.7, 7.0
Kubernetes                                           1.21, 1.22, 1.23
OpenShift Host VM OS                                 RHCOS 4.7, 4.8
Kubernetes Host VM OS                                Ubuntu 18.04, 20.04; CentOS 8.2; RHEL 8.4, 8.5 (see notes below)
Tanzu Application Service                            Ops Manager 2.10 + TAS 2.11; Ops Manager 2.10 + TAS 2.13

Notes:

The installation of the nsx-ovs kernel module on CentOS/RHEL requires a specific kernel version. The supported CentOS/RHEL kernel versions are 193, 305, and 348, regardless of the CentOS/RHEL version. Note that the default kernel version is 193 for RHEL 8.2, 305 for RHEL 8.4, and 348 for RHEL 8.5. If you are running a different kernel version, you can do one of the following:

  • Change your kernel version to a supported one. When changing the kernel version and then restarting the VM, make sure that the IP address and static routes are persisted on the uplink interface (specified by ovs_uplink_port) so that connectivity to the Kubernetes API server is not lost.

  • Skip the installation of the nsx-ovs kernel module by setting "use_nsx_ovs_kernel_module" to "False" under the "nsx_node_agent" section in the nsx-node-agent config map (see the example below).
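
The following is a minimal sketch of the second option. The config map name, namespace, and data key are assumptions and may differ in your deployment; only the "use_nsx_ovs_kernel_module" setting comes from the note above.

    # Sketch only: the config map name, namespace, and data key are assumptions.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nsx-node-agent-config   # assumed name
      namespace: nsx-system         # assumed namespace
    data:
      ncp.ini: |
        [nsx_node_agent]
        # Skip installation of the nsx-ovs kernel module
        use_nsx_ovs_kernel_module = False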

To run the nsx-ovs kernel module on RHEL/CentOS, you must disable the "UEFI secure boot" option under "Boot Options" in the VM's settings in vCenter Server.

Starting with NCP 3.1.2, the RHEL image will not be distributed. For all supported integrations, use the Red Hat Universal Base Image (UBI). For more information, see https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image.

Support for upgrading to this release:

  • All 3.1.x releases

  • All previous 3.2.x releases

Limitations

The "baseline policy" feature for NCP creates a dynamic group which selects all members in the cluster. NSX-T has a limit of 8,000 effective members of a dynamic group (for details, see Configuration Maximums). Therefore, this feature should not be enabled for clusters that are expected to grow beyond 8,000 pods. Exceeding this limit can cause delays in the creation of resources for the pods.

Resolved Issues

  • Issue 3376335: The privsep helper process is not killed when running the command "monit stop"

    Because the privsep helper process's parent PID is 1, it is not terminated by monit along with the main process of nsx-node-agent or nsx-kube-proxy. Sometimes the hyperbus channel remains established with the orphaned process until the new nsx-node-agent starts running.

    Workaround: If the nsx-node-agent job is still running, the stale process has no impact because the hyperbus channel can be established with the new running process. If the nsx-node-agent job is already stopped, kill the orphaned privsep helper process manually. Run the command "ps -ef | grep node_agent_pri | grep -v grep" to list all stale privsep-helper processes, then terminate them one by one with "kill -9 $pid".

  • Issue 3376407: Node security concern in Distributed Firewall rules for pod liveness/readiness probe

    NCP creates Distributed Firewall rules for pod liveness/readiness probes to allow traffic from the node to the pod. In manager API mode, the rule allows traffic from the node IP to any destination for both ingress and egress, and is applied to both pod and node logical ports. This is a security concern for the node because it allows all node egress traffic.

    Workaround: Override the pod liveness/readiness probe Distributed Firewall rule and add the node IP in the destination.
