VMware NSX Container Plugin 4.1.1.2 | 30 November 2023 | Build 22785222

Check for additions and updates to these release notes.

What's New

NSX Container Plugin 4.1.1.2 is an update release that resolves issues found in earlier releases. For other details about this release, see the NSX Container Plugin 4.1.1 Release Notes.

  • Support for creating new clusters in TAS in Manager mode.

Resolved Issues

  • Issue 3283005: New container does not run because gateway is missing, logical router port is not created for a new logical switch

    In Manager mode, TAS application instances will not start for a specific org (or TKGI pods are stuck in ContainerCreating for a specific namespace). One or more logical switches for the org (namespace) do not have an uplink to any tier-1 router. In addition, there may be several logical switches with the same name for a given org (namespace).

    Workaround: Use the NSX API to delete all logical switches for the org (namespace) that do not have a logical router port.
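
    A minimal sketch of this cleanup against the NSX Manager API (Python with the requests library). The manager address, credentials, and org name are placeholders, the display_name filter is an assumption about how the switches are named in your environment, and the endpoints should be verified against your NSX version. Review each switch before deleting it.

    import requests

    NSX_MGR = "https://nsx-mgr.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "password")              # placeholder credentials
    ORG_NAME = "my-org"                       # org (namespace) with stuck instances

    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab only; use a CA bundle in production

    # List all logical switches and keep the ones named after the org (namespace).
    switches = session.get(f"{NSX_MGR}/api/v1/logical-switches").json()["results"]
    org_switches = [ls for ls in switches if ORG_NAME in ls["display_name"]]

    for ls in org_switches:
        # A healthy switch has a logical router port (its tier-1 uplink); orphans do not.
        ports = session.get(
            f"{NSX_MGR}/api/v1/logical-router-ports",
            params={"logical_switch_id": ls["id"]},
        ).json()["results"]
        if not ports:
            print(f"Deleting orphan logical switch {ls['display_name']} ({ls['id']})")
            session.delete(
                f"{NSX_MGR}/api/v1/logical-switches/{ls['id']}",
                params={"detach": "true", "cascade": "true"},
            )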

  • Issue 3277207: Pod cannot be created on a Windows node when hyperbus channel is down

    nsx-node-agent cannot configure the network interface for a Windows Pod because it does not receive the configuration from hyperbus: the hyperbus channel is down and is not recovered automatically. The logs contain messages such as the following:

    2023-07-18T06:49:09.182Z 1caf097a-0bec-4f5d-a94f-27c7418db3aa NSX 2140 - [nsx@6876 comp="nsx-container-node" subcomp="nsx_node_agent"
    level="WARNING"] nsx_ujo.agent.hyperbus_service Failed to process cif config request due to error RPC Call Failed with CallStatus
    (code=COMMUNICATION_ERROR, message="CloseReason=NETWORK_ERROR").
    2023-07-18T06:49:11.065Z 1caf097a-0bec-4f5d-a94f-27c7418db3aa NSX 2140 - [nsx@6876 comp="nsx-container-node" subcomp="nsx_node_agent"
    level="WARNING"] nsx_ujo.agent.nsxrpc_client Hyperbus service exited, mark service as inactive.
    2023-07-18T06:49:15.527Z 1caf097a-0bec-4f5d-a94f-27c7418db3aa NSX 2140 - [nsx@6876 comp="nsx-container-node" subcomp="nsx_node_agent"
    level="INFO"] nsx_ujo.agent.nsxrpc_client Closing previous connection
    Then nsx-node-agent cannot configure the network for the Pod:
    2023-07-18T07:03:07.090Z 1caf097a-0bec-4f5d-a94f-27c7418db3aa NSX 2140 - [nsx@6876 comp="nsx-container-node" subcomp="nsx_node_agent"
    level="ERROR" errorCode="NCP01004"] nsx_ujo.agent.cni_watcher Unable to retrieve network info for container nsx.vmware-system-csi.vsphere-csinode-
    windows-gqgjh, network interface for it will not be configured

    Workaround: Enable the hyperbus health check on the node and restart nsx-node-agent. In /var/vcap/jobs/nsx-node-agent/config/ncp.ini, set the following:

    [nsx-node-agent]
    connect_retry_timeout = 30

  • Issue 3277899: SNAT rule is not deleted when logical switch is deleted

    In Manager mode, NCP deletes an empty logical switch when it has no logical port attached, but fails to delete the corresponding SNAT rule during this operation. This causes a problem if you have firewall rules in an external network that reference the SNAT IP, and it can also cause an error when migrating from Manager mode to Policy mode.

    Workaround: Delete the orphan SNAT rule manually using the NSX API.
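
    A minimal sketch of this cleanup via the NSX Manager API, assuming the orphan rule is an SNAT rule on the logical router used by NCP and that it still matches on the deleted switch's subnet as the source network. ROUTER_ID and ORPHAN_SUBNET are placeholders you must determine from your topology; verify the rule before deleting it.

    import requests

    NSX_MGR = "https://nsx-mgr.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "password")              # placeholder credentials
    ROUTER_ID = "<logical-router-id>"         # router that holds NCP's SNAT rules
    ORPHAN_SUBNET = "10.24.0.0/24"            # subnet of the deleted logical switch

    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab only

    rules = session.get(
        f"{NSX_MGR}/api/v1/logical-routers/{ROUTER_ID}/nat/rules"
    ).json()["results"]
    for rule in rules:
        # Assumption: the orphan SNAT rule still matches on the deleted subnet.
        if rule.get("action") == "SNAT" and rule.get("match_source_network") == ORPHAN_SUBNET:
            session.delete(
                f"{NSX_MGR}/api/v1/logical-routers/{ROUTER_ID}/nat/rules/{rule['id']}"
            )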

  • Issue 3281327: Orphan IP Pool is not cleaned up for TAS in Policy mode

    For TAS in Policy mode, NCP garbage-collects segments and IP pools that have no application attached. Sometimes NCP fails to delete orphan IP pools. This can exhaust an IP block because the orphan IP pools still hold subnets allocated from it.

    Workaround: Delete orphan IP pools and subnets manually using the NSX API.
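
    A minimal sketch of this cleanup against the NSX Policy API, assuming the orphan pool has already been identified (for example, a pool created by NCP that no segment references and that has no remaining allocations). POOL_ID and the manager details are placeholders; subnets are deleted first so the addresses are returned to the IP block before the pool itself is removed.

    import requests

    NSX_MGR = "https://nsx-mgr.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "password")              # placeholder credentials
    POOL_ID = "<orphan-ip-pool-id>"           # orphan pool identified beforehand

    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab only

    # Delete the pool's subnets first so the IP block addresses are released ...
    subnets = session.get(
        f"{NSX_MGR}/policy/api/v1/infra/ip-pools/{POOL_ID}/ip-subnets"
    ).json().get("results", [])
    for subnet in subnets:
        session.delete(
            f"{NSX_MGR}/policy/api/v1/infra/ip-pools/{POOL_ID}/ip-subnets/{subnet['id']}"
        )

    # ... then delete the now-empty pool itself.
    session.delete(f"{NSX_MGR}/policy/api/v1/infra/ip-pools/{POOL_ID}")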

  • Issue 3304321: NCP does not create firewall rules for TAS Network Policy if there is a duplicate NsGroup found for an Application

    In some cases, immediately after an NSX upgrade to version 3.2.x, changes to TAS network policies are not implemented in NSX. Network policy synchronization fails with a message such as the following:

    nsx_ujo.common.controller PolicyController worker 0 failed to sync <policy_id> due to
    multiple object exception: Multiple AppNsGroup objects were found for
    {'app_id': '<app_id>'}: ['<nsg_id> (app_id: <app_id>)', '<nsg_id> (app_id: <app_id>)']

    Workaround: Based on the error message, use the NSX API to delete the NsGroup that is not associated with any firewall rule.
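
    A minimal sketch of this step against the NSX Manager API, assuming the duplicate is a Manager-mode NSGroup whose ID you take from the error message after confirming it is not referenced by any firewall rule. The manager address and credentials are placeholders.

    import requests

    NSX_MGR = "https://nsx-mgr.example.com"      # placeholder NSX Manager address
    AUTH = ("admin", "password")                 # placeholder credentials
    ORPHAN_NSG_ID = "<nsg_id from the error>"    # the duplicate not used by any rule

    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab only

    # Inspect the group first to confirm it is the expected duplicate ...
    group = session.get(f"{NSX_MGR}/api/v1/ns-groups/{ORPHAN_NSG_ID}").json()
    print(group.get("display_name"), group.get("tags"))

    # ... then delete it. Without force=true, NSX should refuse to delete a
    # group that is still in use, which acts as an extra safety check.
    session.delete(f"{NSX_MGR}/api/v1/ns-groups/{ORPHAN_NSG_ID}")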
