VMware NSX Container Plugin 3.1.2 | 15 April, 2021 | Build 17855682
Check regularly for additions and updates to this document.
What's in the Release Notes
The release notes cover the following topics:
What's New
- For layer 7 load balancer persistence, support for specifying the cookie name
Deprecation Notice
The annotation "ncp/whitelist-source-range" will be deprecated in NCP 3.3. Starting with NCP 3.1.1, you can use the annotation "ncp/allowed-source-range" instead.
Compatibility Requirements
Product | Version |
---|---|
NCP/NSX-T Tile for Tanzu Application Service (TAS) | 3.1.2 |
NSX-T | 3.0.3, 3.1.0, 3.1.1, 3.1.2, 3.1.3 |
vSphere | 6.7, 7.0 |
Kubernetes | 1.18, 1.19, 1.20 |
OpenShift 3 | 3.11 (Note: OpenShift 3.x support will be deprecated in a future release.) |
OpenShift 4 | RHCOS 4.6, 4.7 |
Kubernetes Host VM OS | Ubuntu 18.04, Ubuntu 20.04; CentOS 7.8, 7.9, 8.3; RHEL 7.8, 7.9, 8.1, 8.3. See notes below. |
OpenShift 3 Host VM OS | RHEL 7.7, RHEL 7.8 (Note: RHEL support for vanilla Kubernetes will be deprecated in a future release.) |
Tanzu Application Service | Ops Manager 2.7 + TAS 2.7 (LTS); Ops Manager 2.9 + TAS 2.9; Ops Manager 2.10 + TAS 2.10; Ops Manager 2.10 + TAS 2.11 |
Tanzu Kubernetes Grid Integrated (TKGI) | 1.11 |
Notes:
The installation of the nsx-ovs kernel module on CentOS/RHEL requires a specific kernel version. The supported RHEL kernel versions are 1127 and 1160, regardless of the RHEL version. Note that the default kernel version is 1127 for RHEL 7.8 and 1160 for RHEL 7.9. If you are running a different kernel version, you can skip the installation of the nsx-ovs kernel module by setting "use_nsx_ovs_kernel_module" to "False" under the "nsx_node_agent" section in the nsx-node-agent config map.
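A minimal sketch of skipping the nsx-ovs kernel module, assuming the nsx-node-agent ConfigMap is named nsx-node-agent-config in the nsx-system namespace (names vary by deployment):

```
# Hedged sketch: disable installation of the nsx-ovs kernel module.
# ConfigMap name and namespace are assumptions; adjust to your deployment.
kubectl -n nsx-system edit configmap nsx-node-agent-config
# In the embedded ncp.ini data, set the following and save:
#
#   [nsx_node_agent]
#   use_nsx_ovs_kernel_module = False
#
# Then restart the nsx-ncp-bootstrap and nsx-node-agent pods so the change takes effect.
```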
Starting with NCP 3.1.2, the RHEL image will not be distributed anymore. For all supported integrations, use the Red Hat Universal Base Image (UBI). For more information, see https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image.
Support for upgrading to this release:
- All previous 3.1.x releases and all NCP 3.0.x releases
Resolved Issues
- Issue 2707883: nsx-ncp-operator does not create an NCP-related Kubernetes resource if the resource was deleted when nsx-ncp-operator was not running
For example, if you delete the nsx-node-agent or nsx-ncp-bootstrap DaemonSet while nsx-ncp-operator is not running, it is not recreated after nsx-ncp-operator starts running again.
Known Issues
- Issue 2131494: NGINX Kubernetes Ingress still works after changing the Ingress class from nginx to nsx
When you create an NGINX Kubernetes Ingress, NGINX creates traffic forwarding rules. If you change the Ingress class to any other value, NGINX does not delete the rules and continues to apply them, even if you delete the Kubernetes Ingress after changing the class. This is a limitation of NGINX.
Workaround: To delete the rules created by NGINX, delete the Kubernetes Ingress while the class value is nginx. Then re-create the Kubernetes Ingress.
- For a Kubernetes service of type ClusterIP, Client-IP based session affinity is not supported
NCP does not support Client-IP based session affinity for a Kubernetes service of type ClusterIP.
Workaround: None
- For a Kubernetes service of type ClusterIP, the hairpin-mode flag is not supported
NCP does not support the hairpin-mode flag for a Kubernetes service of type ClusterIP.
Workaround: None
- Issue 2192489: After disabling 'BOSH DNS server' in TAS director config, the BOSH DNS server (169.254.0.2) still appears in the container's resolv.conf file.
In a TAS environment running TAS 2.2, after you disable 'BOSH DNS server' in the TAS director config, the BOSH DNS server (169.254.0.2) still appears in the container's resolv.conf file. This causes a ping command with a fully qualified domain name to take a long time. This issue does not exist with TAS 2.1.
Workaround: None. This is a TAS issue.
- Issue 2224218: After a service or app is deleted, it takes 2 minutes to release the SNAT IP back to the IP pool
If you delete a service or app and recreate it within 2 minutes, it will get a new SNAT IP from the IP pool.
Workaround: After deleting a service or app, wait 2 minutes before recreating it if you want to reuse the same IP.
- Issue 2404302: If multiple load balancer application profiles for the same resource type (for example, HTTP) exist on NSX-T, NCP will choose any one of them to attach to the Virtual Servers.
If multiple HTTP load balancer application profiles exist on NSX-T, NCP will choose any one of them with the appropriate x_forwarded_for configuration to attach to the HTTP and HTTPS Virtual Server. If multiple FastTCP and UDP application profiles exist on NSX-T, NCP will choose any one of them to attach to the TCP and UDP Virtual Servers, respectively. The load balancer application profiles might have been created by different applications with different settings. If NCP chooses to attach one of these load balancer application profiles to the NCP-created Virtual Servers, it might break the workflow of other applications.
Workaround: None
- Issue 2397621: OpenShift 3 installation fails
OpenShift 3 installation expects a node's status to be Ready, which is possible only after the CNI plugin is installed. In this release there is no separate CNI plugin file, causing the OpenShift installation to fail.
Workaround: Create the /etc/cni/net.d directory on each node before starting the installation.
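For example, a minimal preparation step run on each node (the path is taken from the workaround above):

```
# Pre-create the CNI configuration directory so the installer sees the node as Ready.
mkdir -p /etc/cni/net.d
```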
- Issue 2413383: OpenShift 3 upgrade fails because not all nodes are ready
By default the NCP bootstrap pod is not scheduled on the master node. As a result, the master node status is always Not Ready.
Workaround: Assign the master node the "compute" role so that the nsx-ncp-bootstrap and nsx-node-agent DaemonSets can create pods on it. The node status changes to "Ready" once nsx-ncp-bootstrap installs the NSX CNI plugin.
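One possible way to apply the role, assuming the master node can simply be labeled (the node name is a placeholder; in OpenShift 3 the role can also be set through the installer inventory):

```
# Hedged example: give the master node the compute role so the DaemonSets schedule on it.
oc label node <master-node-name> node-role.kubernetes.io/compute=true
```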
- Issue 2451442: After repeatedly restarting NCP and recreating a namespace, NCP might fail to allocate IP addresses to Pods
If you repeatedly delete and recreate the same namespace while restarting NCP, NCP might fail to allocate IP addresses to Pods in that namespace.
Workaround: Delete all stale NSX resources (logical routers, logical switches, and logical ports) associated with the namespace, and then recreate them.
- Issue 2460219: HTTP redirect does not work without a default server pool
If the HTTP virtual server is not bound to a server pool, HTTP redirect fails. This issue occurs in NSX-T 2.5.0 and earlier releases.
Workaround: Create a default server pool or upgrade to NSX-T 2.5.1.
- Issue 2518111: NCP fails to delete NSX-T resources that have been updated from NSX-T
NCP creates NSX-T resources based on the configurations that you specify. If you make any updates to those NSX-T resources through NSX Manager or the NSX-T API, NCP might fail to delete those resources and re-create them when it is necessary to do so.
Workaround: Do not update NSX-T resources created by NCP through NSX Manager or the NSX-T API.
- Issue 2524778: NSX Manager shows NCP as down or unhealthy after the NCP master node is deleted
After an NCP master node is deleted, for example, after a successful switch-over to a backup node, the health status of NCP is still reported as Down when it should be Up.
Workaround: Use the Manager API DELETE /api/v1/systemhealth/container-cluster/<cluster-id>/ncp/status to clear the stale status manually.
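A minimal sketch of the API call, assuming basic authentication against NSX Manager (the manager address, credentials, and cluster ID are placeholders):

```
# Hedged example: clear the stale NCP health status via the Manager API.
curl -k -u 'admin:<password>' -X DELETE \
  "https://<nsx-manager>/api/v1/systemhealth/container-cluster/<cluster-id>/ncp/status"
```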
- Issue 2517201: Unable to create a pod on an ESXi host
After removing an ESXi host from a vSphere cluster and adding it back to the cluster, creating a pod on the host fails.
Workaround: Reboot NCP.
- Issue 2416376: NCP fails to process a TAS ASG (App Security Group) that binds to more than 128 Spaces
Because of a limit in NSX-T distributed firewall, NCP cannot process a TAS ASG that binds to more than 128 Spaces.
Workaround: Create multiple ASGs and bind each of them to no more than 128 Spaces.
- Issue 2534726: If upgrading to NCP 3.0.1 via NSX-T Tile fails, using the BOSH command line to redo the upgrade causes performance problems
When upgrading to NCP 3.0.1 via NSX-T Tile on OpsMgr, the upgrade process marks the HA switching profiles in NSX Manager used by NCP as inactive. The switching profiles are deleted when NCP restarts. If the upgrade fails and you use a BOSH command such as "bosh deploy -d <deployment-id> -n <deployment>.yml" to redo the upgrade, the HA switching profiles are not deleted. NCP will still run but performance will be degraded.
Workaround: Always upgrade NCP via OpsMgr and not the BOSH command line.
- Issue 2537221: After upgrading NSX-T to 3.0, the networking status of container-related objects in the NSX Manager UI is shown as Unknown
In NSX Manager UI, the tab Inventory > Containers shows container-related objects and their status. In a TKGI environment, after upgrading NSX-T to 3.0, the networking status of the container-related objects is shown as Unknown. The issue is caused by the fact that TKGI does not detect the version change of NSX-T. This issue does not occur if NCP is running as a pod and the liveness probe is active.
Workaround: After the NSX-T upgrade, restart the NCP instances gradually (no more than 10 at the same time) so as not to overload NSX Manager.
- Issue 2550474: In an OpenShift environment, changing an HTTPS route to an HTTP can cause the HTTP route to not work as expected
If you edit an HTTPS route and delete the TLS-related data to convert it to an HTTP route, the HTTP route might not work as expected.
Workaround: Delete the HTTPS route and create a new HTTP route.
- Issue 2552573: In an OpenShift 4.3 environment, cluster installation might fail if DHCP is configured using Policy UI
In an OpenShift 4.3 environment, cluster installation requires that a DHCP server is available to provide IP addresses and DNS information. If you use the DHCP server that is configured in NSX-T using the Policy UI, the cluster installation might fail.
Workaround: Configure a DHCP server using the Manager UI, delete the cluster that failed to install and recreate the cluster.
- Issue 2552564: In an OpenShift 4.3 environment, DNS forwarder might stop working if overlapping address found
In an OpenShift 4.3 environment, cluster installation requires that a DNS server be configured. If you use NSX-T to configure a DNS forwarder and there is IP address overlap with the DNS service, the DNS forwarder will stop working and cluster installation will fail.
Workaround: Configure an external DNS service, delete the cluster that failed to install and recreate the cluster.
- Issue 2483242: IPv6 traffic from containers being blocked by NSX-T SpoofGuard
The IPv6 link-local address is not automatically whitelisted when SpoofGuard is enabled.
Workaround: Disable SpoofGuard by setting nsx_v3.enable_spoofguard = False in the NCP configuration.
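A minimal sketch of the change, assuming NCP reads its ncp.ini from a ConfigMap named nsx-ncp-config in the nsx-system namespace (for TAS/TKGI, edit ncp.ini through the tile instead):

```
# Hedged sketch: disable SpoofGuard in the NCP configuration.
kubectl -n nsx-system edit configmap nsx-ncp-config
# In the embedded ncp.ini data, set:
#
#   [nsx_v3]
#   enable_spoofguard = False
#
# Restart the NCP pod(s) afterwards so the new setting is picked up.
```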
- Issue 2552609: Incorrect X-Forwarded-For (XFF) and X-Forwarded-Port data
If you configure XFF with either INSERT or REPLACE for HTTPS Ingress rules (Kubernetes) or HTTPS routes (OpenShift), you might see incorrect X-Forwarded-For and X-Forwarded-Port values in XFF headers.
Workaround: None.
- Issue 2555336: Pod traffic not working due to duplicate logical ports created in Manager mode
This issue is more likely to occur when there are many pods in several clusters. When you create a pod, traffic to the pod does not work. NSX-T shows multiple logical ports created for the same container. In the NCP log only the ID of one of the logical ports can be found.
Workaround: Delete the pod and recreate it. The stale ports on NSX-T will be removed when NCP restarts.
- Issue 2554357: Load balancer auto scaling does not work for IPv6
In an IPv6 environment, a Kubernetes service of type LoadBalancer will not be active when the existing load balancer scale is reached.
Workaround: Set nsx_v3.lb_segment_subnet = FE80::/10 in /var/vcap/jobs/ncp/config/ncp.ini for TKGI deployments and in nsx-ncp-configmap for others. Then restart NCP.
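A minimal sketch for a non-TKGI deployment, assuming the ConfigMap is named nsx-ncp-config in the nsx-system namespace and NCP runs as a Deployment named nsx-ncp (all three names are assumptions):

```
# Hedged sketch: set the IPv6 load balancer segment subnet, then restart NCP.
kubectl -n nsx-system edit configmap nsx-ncp-config
# In the embedded ncp.ini data, set:
#
#   [nsx_v3]
#   lb_segment_subnet = FE80::/10
#
kubectl -n nsx-system rollout restart deployment nsx-ncp   # deployment name is an assumption
```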
- Issue 2597423: When importing manager objects to policy, a rollback will cause the tags of some resources to be lost
When importing manager objects to policy, if a rollback is necessary, the tags of the following objects will not be restored:
- Spoofguard profiles (part of shared and cluster resources)
- BgpneighbourConfig (part of shared resources)
- BgpRoutingConfig (part of shared resources)
- StaticRoute BfdPeer (part of shared resources)
Workaround: For resources that are part of the shared resources, manually restore the tags. Use the backup and restore feature to restore resources that are part of cluster resources.
- Issue 2579968: When changes are made to Kubernetes services of type LoadBalancer at a high frequency, some virtual servers and server pools are not deleted as expected
When changes are made to Kubernetes services of type LoadBalancer at a high frequency, some virtual servers and server pools might remain in the NSX-T environment when they should be deleted.
Workaround: Restart NCP. Alternatively, manually remove stale virtual servers and their associated resources. A virtual server is stale if no Kubernetes service of type LoadBalancer has the virtual server's identifier in the external_id tag.
- Issue 2536383: After upgrading NSX-T to 3.0 or later, the NSX-T UI does not show NCP-related information correctly
After upgrading NSX-T to 3.0 or later, the Inventory > Containers tab in the NSX-T UI shows the networking status of container-related objects as Unknown. Also, NCP clusters do not appear in the System > Fabric > Nodes > NCP Clusters tab. This issue is typically seen in a TKGI environment.
Workaround: After the NSX-T upgrade, restart the NCP instances gradually (no more than 10 at the same time).
- Issue 2622099: Kubernetes service of type LoadBalancer initialization fails with error code NCP00113 and error message "The object was modified by somebody else"
In a single-tier deployment with policy API, if you use an existing tier-1 gateway as the top tier gateway and the pool allocation size of the gateway is ROUTING, a Kubernetes service of type LoadBalancer might fail to initialize with the error code NCP00113 and error message "The object was modified by somebody else. Please retry."
Workaround: When the problem appears, wait 5 minutes. Then restart NCP. The problem will be resolved.
- Issue 2633679: NCP operator does not support OpenShift nodes attached to a tier-1 segment created using API /policy/api/v1/infra/tier-1s/<tier1-id>/segments/<segment-id>
NCP operator does not support OpenShift nodes attached to a tier-1 segment created using API /policy/api/v1/infra/tier-1s/<tier1-id>/segments/<segment-id>.
Workaround: Use API /policy/api/v1/infra/segments/<segment-id> to create the segment.
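A minimal sketch of creating the segment under /infra/segments and attaching it to the tier-1 gateway via connectivity_path (the IDs, gateway subnet, and transport zone path are placeholders):

```
# Hedged example: create an infra segment connected to a tier-1 gateway.
curl -k -u 'admin:<password>' -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/segments/<segment-id>" \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "<segment-id>",
        "connectivity_path": "/infra/tier-1s/<tier1-id>",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-id>",
        "subnets": [ { "gateway_address": "10.10.10.1/24" } ]
      }'
```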
- NCP fails to start when "logging to file" is enabled during Kubernetes installation
This issue happens when uid:gid=1000:1000 on the container host does not have write permission on the log folder.
Workaround: Do one of the following (example commands follow this list):
- Change the mode of the log folder to 777 on the container hosts.
- Grant “rwx” permission of the log folder to uid:gid=1000:1000 on the container hosts.
- Disable the “logging to file” feature.
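Example commands for the first two options, assuming the log folder is /var/log/nsx-ujo (the path is an assumption; use the folder configured for NCP logging):

```
# Option 1 (hedged): open up the folder mode on the container host.
chmod 777 /var/log/nsx-ujo
# Option 2 (hedged): grant rwx to uid:gid=1000:1000 instead.
chown -R 1000:1000 /var/log/nsx-ujo
chmod -R u+rwx /var/log/nsx-ujo
```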
- Issue 2653214: Error while searching the segment port for a node after the node's IP address was changed
After changing a node's IP address, if you upgrade NCP or if the NCP operator pod is restarted, checking the NCP operator status with the command "oc describe co nsx-ncp" will show the error message "Error while searching segment port for node ..."
Workaround: None. Adding a static IP address on a node interface which also has DHCP configuration is not supported.
- Issue 2664457: While using DHCP in OpenShift, connectivity might be temporarily lost when nsx-node-agent starts or restarts
nsx-ovs creates and activates five temporary connection profiles to configure ovs_bridge, but their activation might keep failing temporarily in NetworkManager. As a result, the VM has no IP address (and therefore no connectivity) on ovs_uplink_port and/or ovs_bridge.
Workaround: Restart the VM or wait until all the profiles can be successfully activated by NetworkManager.
- Issue 2672677: In a highly stressed OpenShift 4 environment, a node can become unresponsive
In an OpenShift 4 environment with a high level of pod density per node and a high frequency of pods getting deleted and created, a RHCOS node might go into a "Not Ready" state. Pods running on the affected node, with the exception of daemonset members, will be evicted and recreated on other nodes in the environment.
Workaround: Reboot the impacted node.
- Issue 2706551: OpenShift's full-stack automated installation (known as IPI) fails as nodes become not ready during installation
The keepalived pod adds the Kubernetes VIP to the ovs_bridge on the master nodes before the Kubernetes API server starts to run on them. As a result, all the requests to the Kubernetes API server fail and the installation cannot complete.
Workaround: None
- Issue 2697547: HostPort not supported on RHEL/CentOS/RHCOS nodes
You can specify hostPorts on native Kubernetes and TKGI on Ubuntu nodes by setting 'enable_hostport_snat' to True in nsx-node-agent ConfigMap. However, on RHEL/CentOS/RHCOS nodes hostPort is not supported and the parameter 'enable_hostport_snat' is ignored.
Workaround: None
- Issue 2707174: A Pod that is deleted and recreated with the same namespace and name has no network connectivity
If a Pod is deleted and recreated with the same namespace and name while NCP is not running and the nsx-ncp-agents are running, the Pod might get a wrong network configuration and be unable to access the network.
Workaround: Delete the Pod and recreate it when NCP is running.
- Issue 2713782: NSX API calls fail with the error "SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC"
Occasionally, at NCP startup, NCP might restart or fail to initialize load balancer services due to the presence of a duplicated load balancing server or a tier-1 logical router for the load balancer. Also, while NCP is running, an NSX endpoint might be reported as DOWN for a brief period of time (less than 1 second). If the load balancer fails to initialize, the NCP log will have the message "Failed to initialize loadbalancer services."
This behavior will only occur when NCP is doing client-side load balancing across multiple NSX manager instances. It will not occur when a single API endpoint is configured in ncp.ini.
Workaround: Increase the value of the nsx_v3.conn_idle_timeout parameter. Note that this might result in a longer wait time for endpoints to be detected as being available after a temporary disconnection when using client-side load balancing.
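A minimal sketch of the change, assuming the NCP ConfigMap is named nsx-ncp-config in the nsx-system namespace and using an illustrative timeout value:

```
# Hedged sketch: raise the NSX connection idle timeout for client-side load balancing.
kubectl -n nsx-system edit configmap nsx-ncp-config
# In the embedded ncp.ini data, set (the value shown is illustrative):
#
#   [nsx_v3]
#   conn_idle_timeout = 60
#
# Restart the NCP pod(s) so the change takes effect.
```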
- Issue 2745904: The feature "Use IPSet for default running ASG" does not support removing or replacing an existing container IP block
If you enable "Use IPSet for default running ASG" on an NCP tile, NCP will create a dedicated NSGroup for all the container IP blocks configured by "IP Blocks of Container Networks" on the same NCP tile. This NSGroup will be used in the firewall rules created for global running ASGs to allow traffic for all the containers. If you later remove or replace an existing container IP block, it will be removed or replaced in the NSGroup. All the existing containers in the original IP block will no longer be associated with the global running ASGs. Their traffic might no longer work.
Workaround: Only append new IP blocks to "IP Blocks of Container Networks".
- Issue 2744480: Kubernetes service self-access not supported on KVM
If a Kubernetes pod tries to access itself via a Kubernetes service for which the pod is an endpoint, reply packets will be dropped on the KVM host.
Workaround: None
- Issue 2744361: Workload VM in OpenShift configured with a static IP address might lose connectivity when the nsx-node-agent pod is terminated
Occasionally, a workload VM in OpenShift configured with a static IP address loses connectivity when the nsx-node-agent pod is terminated.
Workaround: Reboot the VM.
- Issue 2746362: nsx-kube-proxy fails to receive Kubernetes service events from Kubernetes apiserver
Occasionally, in an OpenShift cluster, nsx-kube-proxy fails to receive any Kubernetes service events from the Kubernetes apiserver. The command "nsxcli -c get kube-proxy-watchers" reports 'Watcher thread status: Up', but 'Number of events processed' is 0, meaning that nsx-kube-proxy has not received any events from the apiserver.
Workaround: Restart the nsx-kube-proxy pod.
- Issue 2745907: "monit" commands return incorrect status information for nsx-node-agent
On a diego_cell VM, when monit restarts nsx-node-agent, if it takes more than 30 seconds for nsx-node-agent to fully start, monit will show the status of nsx-node-agent as "Execution failed" and will not update its status to "running" even when nsx-node-agent is fully functional later.
Workaround: None.
- Issue 2735244: nsx-node-agent and nsx-kube-proxy crash because of liveness probe failure
nsx-node-agent and nsx-kube-proxy use sudo to run some commands. If /etc/resolv.conf contains many DNS server and search domain entries, sudo can take a long time to resolve hostnames. This blocks nsx-node-agent and nsx-kube-proxy on the sudo command for a long time, and the liveness probe fails.
Workaround: Perform one of the following two actions (example commands follow this list):
- Add hostname entries to /etc/hosts. For example, if hostname is 'host1', add the entry '127.0.0.1 host1'.
- Set a larger value for the nsx-node-agent liveness probe timeout. Run the command 'kubectl edit ds nsx-node-agent -n nsx-system' to update the timeout value for both the nsx-node-agent and nsx-kube-proxy containers.
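Example commands for both options (the hostname and the timeout value are illustrative):

```
# Option 1 (hedged): make the node's hostname resolve locally so sudo does not query DNS.
echo '127.0.0.1 host1' >> /etc/hosts
# Option 2 (hedged): raise the liveness probe timeout for both containers in the DaemonSet.
kubectl -n nsx-system edit ds nsx-node-agent
# In the editor, increase livenessProbe.timeoutSeconds for the nsx-node-agent and
# nsx-kube-proxy containers, for example from 5 to 30 seconds.
```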
- Issue 2744557: Complex regular expression patterns containing both a capture group () and {0} not supported for Ingress path matching
For example, if the regular expression (regex) pattern is: /foo/bar/(abc){0,1}, it will not match /foo/bar/.
Workaround: Do not use capture group () and {0} when creating an Ingress regex rule. Use the regular pattern EQUALS to match /foo/bar/.
- Issue 2751080: After a KVM host upgrade, container hosts not able to run Kubernetes pods
After a KVM host upgrade, container hosts deployed on the upgraded host will not be able to run Kubernetes pods. The pods will remain in the ContainerCreating status. If the NCP operator is deployed, the node's status might become NotReady and the networkUnavailable node condition will be True. This issue is seen only with RHEL and not with Ubuntu.
Workaround: Restart nsx-opsagent on the KVM hypervisor.
- Issue 2736412: Parameter members_per_small_lbs is ignored if max_allowed_virtual_servers is set
If both max_allowed_virtual_servers and members_per_small_lbs are set, virtual servers may fail to attach to an available load balancer because only max_allowed_virtual_servers is taken into account.
Workaround: Relax the scale constraints instead of enabling auto scaling.
- Issue 2740552: When deleting a static pod using api-server, nsx-node-agent does not remove the pod's OVS bridge port, and the network of the static pod which is re-created automatically by Kubernetes is unavailable
Kubernetes does not allow removing a static pod through the api-server. Kubernetes creates a mirror pod for the static pod so that the static pod can be found through the api-server. When the pod is deleted through the api-server, only the mirror pod is deleted, and NCP receives and handles the delete request to remove all NSX resources allocated for the pod. However, the static pod still exists, and nsx-node-agent does not get a delete request from CNI to remove the static pod's OVS bridge port.
Workaround: Remove the static pod by deleting its manifest file instead of removing it through the api-server.
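A minimal sketch, assuming the default kubelet static pod directory (the path may differ in your deployment):

```
# Hedged example: delete the static pod's manifest so kubelet removes the pod cleanly.
rm /etc/kubernetes/manifests/<static-pod>.yaml
```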
- Issue 2795268: Connection between nsx-node-agent and hyperbus flips and Kubernetes pod is stuck at creating state
In a large-scale environment, nsx-node-agent might fail to connect to Kubernetes apiserver to get pod information. Because of the large amount of information being transferred, keepalive messages cannot be sent to hyperbus, and hyperbus will close the connection.
Workaround: Restart nsx-node-agent. Make sure Kubernetes apiserver is available and the certificate to connect to apiserver is correct.
- Issue 2795482: Running pod stuck in ContainerCreating state after node/hypervisor reboot or any other operation
If the wait_for_security_policy_sync flag is true, a pod that has been in the running state for more than one hour can go into the ContainerCreating state after a worker node hard reboot, a hypervisor reboot, or a similar event, and will remain in that state indefinitely.
Workaround: Delete and recreate the pod.
- Issue 2871314: After TKGI upgrade from 1.10.x to 1.11.x (prior to 1.11.6), the Ingress certificates for the NSX load balancer are deleted.
Starting with NCP 3.1.1, certificates are tracked with a revision number. This causes a problem when upgrading from TKGI 1.10.x to TKGI 1.11.x (prior to 1.11.6): the Ingress certificates for the NSX load balancer are deleted and not re-imported.
Workaround: Do one of the following:
- Restart NCP. Or,
- Delete the secret in the Kubernetes environment and recreate the same secret. Or,
- Upgrade to TKGI 1.11.6 or later.
- Issue 2871321: After TKGI upgrade from 1.10.x to 1.11.x (prior to 1.11.6), if the CRD LoadBalancer is using L7 cookie persistence, it will lose the IP Address.
This issue is caused by a new feature in NCP 3.1.1 that supports cookie name update in NSX load balancer.
Workaround: Do one of the following:
- Use source IP persistence instead of cookie persistence.
- Upgrade to TKGI 1.11.6 or later.
- Issue 3033821: After manager-to-policy migration, distributed firewall rules not enforced correctly
After a manager-to-policy migration, newly created network policy-related distributed firewall (DFW) rules will have higher priority than the migrated DFW rules.
Workaround: Use the policy API to change the sequence of DFW rules as needed.
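A minimal sketch of reordering a rule through the policy API by patching its sequence_number (the domain, policy, and rule IDs, and the value shown, are placeholders):

```
# Hedged example: move a migrated DFW rule by changing its sequence_number.
curl -k -u 'admin:<password>' -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/<policy-id>/rules/<rule-id>" \
  -H 'Content-Type: application/json' \
  -d '{ "sequence_number": 10 }'
```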