VMware NSX Container Plugin 3.0.1   |   30 April, 2020   |   Build 16118386

Check regularly for additions and updates to this document.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Compatibility Requirements
  • Resolved Issues
  • Known Issues

What's New

 
NSX Container Plugin 3.0.1 has the following new features:
  • Support for IPAM and L3 connectivity, service cluster IP, and load balancing in IPv6 Kubernetes clusters
  • Support for multiple interfaces per pod (Multus-like functionality)
  • Support for OpenShift Container Platform (OCP) 4
  • Expose DFW rule logging options in NCP configmap
  • Expose tier-1 router settings in NCP configmap
  • Expose load balancer HTTP profile settings for header sizing and timeouts in NCP configmap
  • Add support for load balancer CRDs in Policy API
  • Support for X-Forwarded-Port
  • SSL-passthrough (SNI-based host switching) and re-encryption
  • Error reporting improvements for network policy controller
  • Support for importing Manager objects to Policy
  • Layer 3 multicast support for container workloads (single-tier shared tier-0 topology only)
  • Dedicated YAML file for Policy mode NCP deployments

Compatibility Requirements

  • NCP/NSX-T Tile for Tanzu Application Service (PCF): 3.0.1
  • NSX-T: 2.5.0, 2.5.1, 2.5.2, 2.5.2.2, 2.5.3, 3.0.0, 3.0.1
  • vSphere: 6.7, 7.0
  • Kubernetes: 1.17, 1.18
  • OpenShift 3: 3.11
  • OpenShift 4: RHCOS 4.3
  • Kubernetes Host VM OS: Ubuntu 16.04, Ubuntu 18.04, CentOS 7.7, RHEL 7.7, RHEL 7.8
    Note: For RHEL 7.8, nsx-ovs is not supported. It is only compatible with the upstream OVS.
  • OpenShift Host VM OS: RHEL 7.6, RHEL 7.7
  • OpenShift BMC (will be deprecated in a future release): RHEL 7.6, RHEL 7.7
  • Tanzu Application Service (Pivotal Cloud Foundry): Ops Manager 2.6 + PAS 2.6, Ops Manager 2.8 + PAS 2.8, Ops Manager 2.9 + PAS 2.9

Deprecation Notice: VMware intends to deprecate support for OpenShift Bare Metal in a future release.

Support for upgrading to this release:

  • NCP 3.0 and all NCP 2.5.x releases

 

Resolved Issues

  • Issue 2330811: When creating Kubernetes services of type LoadBalancer while NCP is down, the services might not get created when NCP is restarted

    When NSX-T resources are exhausted for Kubernetes services of type LoadBalancer, you can create new services after deleting some of the existing services. However, if you delete and create the services while NCP is down, NCP will fail to create the new services.

    Workaround: When NSX-T resources are exhausted for Kubernetes services of type LoadBalancer, do not perform both the delete and the create operations while NCP is down.

  • Issue 2408100: In a large Kubernetes cluster with multiple NCP instances in active-standby mode or liveness probe enabled, NCP frequently restarts

    In a large Kubernetes cluster (about 25,000 pods, 2,500 namespaces and 2,500 network policies), if multiple NCP instances are running in active-standby mode, or if liveness probe is enabled, NCP processes might be killed and restarted frequently due to "Acquiring lock conflicted" or liveness probe failure. 

    Workaround: Perform the following steps:

    1. Set the replica count of the NCP deployment to 1, or increase the configuration option ha.master_timeout in ncp.ini from the default value of 18 to 30 (see the sketch after this list).
    2. Increase the liveness probe arguments as follows:
        containers:
          - name: nsx-ncp
            livenessProbe:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - timeout 20 check_pod_liveness nsx-ncp
              initialDelaySeconds: 20
              timeoutSeconds: 20
              periodSeconds: 20
              failureThreshold: 5
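
    The following is a minimal sketch of step 1, assuming NCP runs in the nsx-system namespace with the nsx-ncp-config ConfigMap and nsx-ncp Deployment names from the standard deployment YAML, and that ha.master_timeout lives under the [ha] section of the embedded ncp.ini; adjust the names for your environment.

        # Raise the HA lock timeout in the NCP configuration.
        kubectl -n nsx-system edit configmap nsx-ncp-config
        # In the embedded ncp.ini, under the [ha] section, set:
        #   master_timeout = 30
        # Restart the NCP pods so that the new value takes effect.
        kubectl -n nsx-system rollout restart deployment nsx-ncp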
      
  • Issue 2517201: Unable to create a pod on an ESXi host

    After removing an ESXi host from a vSphere cluster and adding it back to the cluster, creating a pod on the host fails.

    Workaround: Restart NCP.

Known Issues

  • Issue 2131494: NGINX Kubernetes Ingress still works after changing the Ingress class from nginx to nsx

    When you create an NGINX Kubernetes Ingress, NGINX creates traffic forwarding rules. If you then change the Ingress class to any other value, NGINX does not delete the rules and continues to apply them, even if you delete the Kubernetes Ingress after changing the class. This is a limitation of NGINX.

    Workaround: To delete the rules created by NGINX, delete the Kubernetes Ingress while the class value is nginx. Then re-create the Kubernetes Ingress.
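
    For illustration, a sketch of this workaround with kubectl, assuming an Ingress named web-ingress defined in web-ingress.yaml (both placeholder names) and that the class is set through the kubernetes.io/ingress.class annotation:

        # Ensure the class is nginx so that NGINX cleans up its rules on deletion.
        kubectl annotate ingress web-ingress kubernetes.io/ingress.class=nginx --overwrite
        kubectl delete ingress web-ingress
        # Re-create the Ingress with the desired class (for example, nsx).
        kubectl apply -f web-ingress.yaml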

  • For a Kubernetes service of type ClusterIP, Client-IP based session affinity is not supported

    NCP does not support Client-IP based session affinity for a Kubernetes service of type ClusterIP.

    Workaround: None

  • For a Kubernetes service of type ClusterIP, the hairpin-mode flag is not supported

    NCP does not support the hairpin-mode flag for a Kubernetes service of type ClusterIP.

    Workaround: None

  • Issue 2192489: After disabling 'BOSH DNS server' in the PAS director config, the BOSH DNS server (169.254.0.2) still appears in the container's resolv.conf file

    In a PAS environment running PAS 2.2, after you disable 'BOSH DNS server' in the PAS director config, the BOSH DNS server (169.254.0.2) still appears in the container's resolv.conf file. This causes a ping command with a fully qualified domain name to take a long time. This issue does not exist with PAS 2.1.

    Workaround: None. This is a PAS issue.

  • Issue 2224218: After a service or app is deleted, it takes 2 minutes to release the SNAT IP back to the IP pool

    If you delete a service or app and recreate it within 2 minutes, it will get a new SNAT IP from the IP pool.

    Workaround: After deleting a service or app, wait 2 minutes before recreating it if you want to reuse the same IP.

  • Issue 2404302: If multiple load balancer application profiles for the same resource type (for example, HTTP) exist on NSX-T, NCP will choose any one of them to attach to the Virtual Servers.

    If multiple HTTP load balancer application profiles exist on NSX-T, NCP will choose any one of them with the appropriate x_forwarded_for configuration to attach to the HTTP and HTTPS Virtual Server. If multiple FastTCP and UDP application profiles exist on NSX-T, NCP will choose any one of them to attach to the TCP and UDP Virtual Servers, respectively. The load balancer application profiles might have been created by different applications with different settings. If NCP chooses to attach one of these load balancer application profiles to the NCP-created Virtual Servers, it might break the workflow of other applications.

    Workaround: None

  • Issue 2397621: OpenShift 3 installation fails

    OpenShift 3 installation expects a node's status to be Ready, which is only possible after the CNI plugin is installed. Because this release does not include a separate CNI plugin file, the OpenShift installation fails.

    Workaround: Create the /etc/cni/net.d directory on each node before starting the installation.
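
    For example, run the following on each node before starting the installation:

        # Create the CNI configuration directory that the installer expects to find.
        sudo mkdir -p /etc/cni/net.d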

  • Issue 2413383: OpenShift 3 upgrade fails because not all nodes are ready

    By default the NCP bootstrap pod is not scheduled on the master node. As a result, the master node status is always Not Ready.

    Workaround: Assign the "compute" role to the master node so that the nsx-ncp-bootstrap and nsx-node-agent DaemonSets can create pods on it. The node status changes to "Ready" once nsx-ncp-bootstrap installs NSX-CNI.
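
    As an illustration, one way to assign the compute role in OpenShift 3 is with a node label (the node name is a placeholder; verify that label-based role assignment matches your installation method):

        # Label the master node with the compute role so the DaemonSets can schedule pods on it.
        oc label node <master-node-name> node-role.kubernetes.io/compute=true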

  • Issue 2460219: HTTP redirect does not work without a default server pool

    If the HTTP virtual server is not bound to a server pool, HTTP redirect fails. This issue occurs in NSX-T 2.5.0 and earlier releases.

    Workaround: Create a default server pool or upgrade to NSX-T 2.5.1.

  • Issue 2518111: NCP fails to delete NSX-T resources that have been updated from NSX-T

    NCP creates NSX-T resources based on the configurations that you specify. If you make any updates to those NSX-T resources through NSX Manager or the NSX-T API, NCP might fail to delete those resources and re-create them when it is necessary to do so.

    Workaround: Do not update NSX-T resources created by NCP through NSX Manager or the NSX-T API.

  • Issue 2518312: NCP bootstrap container fails to install nsx-ovs kernel module on Ubuntu 18.04.4, kernel 4.15.0-88

    The NCP bootstrap container (nsx-ncp-bootstrap) fails to install nsx-ovs kernel module on Ubuntu 18.04.4, kernel 4.15.0-88.

    Workaround: Do not install nsx-ovs on this kernel. Instead, set use_nsx_ovs_kernel_module = False in nsx-node-agent-config and use the upstream OVS kernel module on the host (Ubuntu ships with one by default). If there is no OVS kernel module on the host, either install one manually and set use_nsx_ovs_kernel_module = False in nsx-node-agent-config, or downgrade the kernel to version 4.15.0-76 so that NSX-OVS can be installed.
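
    A minimal sketch of this workaround, assuming the nsx-node-agent-config ConfigMap lives in the nsx-system namespace and embeds ncp.ini with an [nsx_node_agent] section (names vary by deployment YAML):

        kubectl -n nsx-system edit configmap nsx-node-agent-config
        # In the embedded ncp.ini, under [nsx_node_agent], set:
        #   use_nsx_ovs_kernel_module = False
        # Restart the bootstrap and node agent pods so that the change takes effect.
        kubectl -n nsx-system rollout restart daemonset nsx-ncp-bootstrap
        kubectl -n nsx-system rollout restart daemonset nsx-node-agent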

  • Issue 2524778: NSX Manager shows NCP as down or unhealthy after the NCP master node is deleted

    After an NCP master node is deleted, for example after a successful switch-over to a backup node, the health status of NCP is still reported as down when it should be up.

    Workaround: Use the Manager API DELETE /api/v1/systemhealth/container-cluster/<cluster-id>/ncp/status to clear the stale status manually.
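
    For example, with curl (the NSX Manager address, credentials, and cluster ID are placeholders):

        curl -k -u 'admin:<password>' -X DELETE \
          "https://<nsx-manager>/api/v1/systemhealth/container-cluster/<cluster-id>/ncp/status"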

  • Issue 2548815: In an NCP cluster imported from Manager to Policy, NCP fails to delete an automatically scaled tier-1 router

    An automatically scaled tier-1 router cannot be deleted by NCP running in Policy mode after Manager to Policy import because it is still being referenced by its LocaleService.

    Workaround: Manually delete the tier-1 router using the NSX Manager UI.

  • Issue 2549433: OpenShift node using a single interface configured as the ovs_uplink_port loses name server information when DHCP lease expires

    An OpenShift node with a single interface, which is configured as the ovs_uplink_port in the nsx-node-agent config, loses name server information when the DHCP lease of the ovs_uplink_port expires.

    Workaround: Use a static IP address.

  • Issue 2416376: NCP fails to process a PAS ASG (App Security Group) that binds to more than 128 Spaces

    Because of a limit in NSX-T distributed firewall, NCP cannot process a PAS ASG that binds to more than 128 Spaces.

    Workaround: Create multiple ASGs and bind each of them to no more than 128 Spaces.
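
    For illustration, with the cf CLI v6 (the ASG name, rules file, and org and space names are placeholders):

        # Split the rules across several ASGs and bind each one to at most 128 spaces.
        cf create-security-group my-asg-1 asg-rules-1.json
        cf bind-security-group my-asg-1 my-org my-space-1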

  • Issue 2534726: If upgrading to NCP 3.0.1 via NSX-T Tile fails, using the BOSH command line to redo the upgrade causes performance problems

    When upgrading to NCP 3.0.1 via the NSX-T Tile on OpsMgr, the upgrade process marks the HA switching profiles in NSX Manager used by NCP as inactive. The switching profiles are deleted when NCP restarts. If the upgrade fails and you use a BOSH command such as "bosh deploy -d <deployment-id> -n <deployment>.yml" to redo the upgrade, the HA switching profiles are not deleted. NCP will still run, but performance will be degraded.

    Workaround: Always upgrade NCP via OpsMgr and not the BOSH command line.

  • Issue 2550625: After migrating a cluster from Manager to Policy, the IP addresses in a shared IP pool are not released

    After a cluster is migrated from Manager to Policy, deleting a namespace does not release the IP addresses that were allocated to that namespace.

    Workaround: None.

  • Issue 2537221: After upgrading NSX-T to 3.0, the networking status of container-related objects in the NSX Manager UI is shown as Unknown

    In NSX Manager UI, the tab Inventory > Containers shows container-related objects and their status. In a PKS environment, after upgrading NSX-T to 3.0, the networking status of the container-related objects is shown as Unknown. The issue is caused by the fact that PKS does not detect the version change of NSX-T. This issue does not occur if NCP is running as a pod and the liveness probe is active.

    Workaround: After the NSX-T upgrade, restart the NCP instances gradually (no more than 10 at the same time) so as not to overload NSX Manager.

  • Issue 2549765: Importing Manager objects to Policy fails if there is a NAT rule with multiple destination ports

    The Manager to Policy import process will fail if there is a NAT rule with multiple destination ports on the top tier router. One such scenario is when the Kubernetes parameter ingress_mode is 'nat' in NCP config and there exists a pod with the annotation 'ncp/ingress-controller' in Kubernetes.

    Workaround: While NCP is not running and before initiating the import, edit the NAT Rule and remove the '80' and '443' destination ports.

  • Issue 2552918: Rollback for Manager to Policy import is unsuccessful for distributed firewall which causes cluster rollback to fail

    On rare occasions, the Manager to Policy import process must perform a rollback, which is unsuccessful for distributed firewall sections and rules. This causes the cluster rollback to fail and leaves stale resources in NSX Manager.

    Workaround: Use the backup and restore feature to restore the NSX Manager to a healthy state.

  • Issue 2550474: In an OpenShift environment, changing an HTTPS route to an HTTP can cause the HTTP route to not work as expected

    If you edit an HTTPS route and delete the TLS-related data to convert it to an HTTP route, the HTTP route might not work as expected.

    Workaround: Delete the HTTPS route and create a new HTTP route.

  • Issue 2552573: In an OpenShift 4.3 environment, cluster installation might fail if DHCP is configured using Policy UI

    In an OpenShift 4.3 environment, cluster installation requires that a DHCP server is available to provide IP addresses and DNS information. If you use the DHCP server that is configured in NSX-T using the Policy UI, the cluster installation might fail.

    Workaround: Configure a DHCP server using the Manager UI, delete the cluster that failed to install and recreate the cluster.

  • Issue 2552564: In an OpenShift 4.3 environment, DNS forwarder might stop working if overlapping address found

    In an OpenShift 4.3 environment, cluster installation requires that a DNS server be configured. If you use NSX-T to configure a DNS forwarder and there is IP address overlap with the DNS service, the DNS forwarder will stop working and cluster installation will fail.

    Workaround: Configure an external DNS service, delete the cluster that failed to install and recreate the cluster.

  • Issue 2483242: IPv6 traffic from containers being blocked by NSX-T SpoofGuard

    The IPv6 link-local address is not automatically whitelisted when SpoofGuard is enabled.

    Workaround: Disable SpoofGuard by setting nsx_v3.enable_spoofguard = False in the NCP configuration.
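
    A sketch of this setting, assuming the nsx-ncp-config ConfigMap in the nsx-system namespace embeds ncp.ini with an [nsx_v3] section (adjust the names for your deployment):

        kubectl -n nsx-system edit configmap nsx-ncp-config
        # In the embedded ncp.ini, under [nsx_v3], set:
        #   enable_spoofguard = False
        # Restart NCP so that the change takes effect.
        kubectl -n nsx-system rollout restart deployment nsx-ncp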

  • Issue 2552609: Incorrect X-Forwarded-For (XFF) and X-Forwarded-Port data

    If you configure XFF with either INSERT or REPLACE for HTTPS Ingress rules (Kubernetes) or HTTPS routes (OpenShift), you might see incorrect X-Forwarded-For and X-Forwarded-Port values in XFF headers.

    Workaround: None.

  • Issue 2555336: Pod traffic not working due to duplicate logical ports created in Manager mode

    This issue is more likely to occur when there are many pods in several clusters. When you create a pod, traffic to the pod does not work. NSX-T shows multiple logical ports created for the same container. In the NCP log only the ID of one of the logical ports can be found. 

    Workaround: Delete the pod and recreate it. The stale ports on NSX-T will be removed when NCP restarts.

  • Issue 2554357: Load balancer auto scaling does not work for IPv6

    In an IPv6 environment, a Kubernetes service of type LoadBalancer will not be active when the existing load balancer scale is reached.

    Workaround: Set nsx_v3.lb_segment_subnet = FE80::/10 in /var/vcap/jobs/ncp/config/ncp.ini for PKS deployments and in nsx-ncp-configmap for others. Then restart NCP.
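
    As a sketch (the namespace and Deployment names below are assumptions based on the standard deployment YAML; PKS deployments edit the ini file on the NCP VM instead):

        # PKS: edit /var/vcap/jobs/ncp/config/ncp.ini on the NCP VM.
        # Other deployments: edit the embedded ncp.ini in nsx-ncp-configmap.
        # In either case, under [nsx_v3], set:
        #   lb_segment_subnet = FE80::/10
        # Then restart NCP, for example in a ConfigMap-based deployment:
        kubectl -n nsx-system rollout restart deployment nsx-ncp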

  • Issue 2593017: Stale logical router ports not deleted on NSX-T

    In a two-tier topology with Manager API, stale logical router ports are sometimes not deleted. When a large number of stale router ports are present, router behavior can be affected.

    Workaround: From NSX Manager, manually delete the stale router ports.
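
    If you prefer the Manager API over the UI, a hedged sketch (the NSX Manager address, credentials, and router and port IDs are placeholders; confirm a port is stale before deleting it):

        # List the logical router ports of the affected router, then delete the stale ones.
        curl -k -u 'admin:<password>' \
          "https://<nsx-manager>/api/v1/logical-router-ports?logical_router_id=<router-id>"
        curl -k -u 'admin:<password>' -X DELETE \
          "https://<nsx-manager>/api/v1/logical-router-ports/<port-id>"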

  • Issue 3033821: After manager-to-policy migration, distributed firewall rules not enforced correctly

    After a manager-to-policy migration, newly created network policy-related distributed firewall (DFW) rules will have higher priority than the migrated DFW rules.

    Workaround: Use the policy API to change the sequence of DFW rules as needed.
