VMware NSX-T Data Center 2.5.3  |  11 Feb 2021  |  Build 17558879

Check regularly for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

Features, Functional Enhancements and Extensions

This release of NSX-T Data Center is a maintenance release and there are no major or minor features, functional enhancements or extensions.

Compatibility and System Requirements

For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

API and CLI Resources

See code.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

NSX Intelligence

All NSX Intelligence known and resolved issues, as well as detailed documentation to help you install, configure, update, use, and manage NSX Intelligence, are now available separately at NSX Intelligence Documentation.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

11 Feb 2021. First edition.
15 March 2021. Second edition. Added known issue 2730634.

Resolved Issues

  • Fixed Issue 2572052: Scheduled backups might not get generated.

    In some cases, scheduled backups are not generated.

  • Fixed Issues 2589694, 2682951: A few seconds of IPv6 traffic loss might be observed when VM failover takes place.

    When a VM failover takes place, a few seconds of IPv6 traffic loss might be observed. This happens when the IPv6 address of a workload VM is ported to another workload VM that is communicating with a different workload VM on a different L2 segment. The two isolated L2 segments are connected by the NSX Edge.
    The two workload VMs communicating also need to be in two different ESXi Transport Nodes for the problem to be seen.

  • Fixed Issue 2577028: Host Preparation might fail.

    Host preparation might fail due to a configuration hash mismatch that leads to a discovery loop.

  • Fixed Issue 2519300: NSX Manager upgrade fails with no clear errors.

    Upgrading NSX Manager might fail when the Upgrade Coordinator provides the message: "This page is only available on the NSX Manager where Upgrade Coordinator is running." The NSX Manager upgrade might also fail with no clear errors. 

  • Fixed Issue 2555333: "nsxuser" fails to get created during host prep. 

    During the host prep lifecycle (install/uninstall/upgrade) 'nsxuser' is created internally in ESXi hosts managed by vCenter Server for managing NSX VIBs. This user creation fails intermittently because of the ESXi password requirements.

  • Fixed Issue 2557166: Distributed Firewall rules using context-profiles (layer 7) are not working as expected when applied to Kubernetes pods.

    After configuring L7 rules on Kubernetes pods, traffic that is supposed to match L7 rules is hitting the default rule instead.

  • Fixed Issue 2486119: PNICs are migrated from NVDS back to VDS uplinks with mapping that is different from the original mapping in VDS.

    When a Transport Node is created with a Transport Node Profile that has PNIC install and uninstall mappings, PNICs are migrated from VDS to NVDS. Later when NSX-T Data Center is removed from the Transport Node, the PNICs are migrated back to the VDS, but the mapping of PNIC to uplink may be different from the original mapping in VDS.

  • Fixed Issue 2569691: Ping between external network and logical switch/segment does not work in specific cases.

    Consider the following configuration: 

    1) Create an uplink with x.x.x.x network.
    2) The default route creation for nexthop is:  x.x.x.y
    3) Now update the connected IP for uplink to:  x.x.x.y

    This is a misconfiguration and causes pings to fail from the external network to the logical switch or segment. 

  • Fixed Issue 2607651: NSX Manager does not reflect users from vIDM if the First Name attribute is missing.

    If a vIDM user is created in AD with no First Name, Last Name, or Email ID attribute, the user is not reflected in NSX Manager.

  • Fixed Issues 2586606, 2689250: Load balancer does not work when Source-IP persistence is configured on a large number of virtual servers.

    When Source-IP persistence is configured on a large number of virtual servers on a load balancer, it consumes a significant amount of memory and may lead to the NSX Edge running out of memory. The issue can recur with the addition of more virtual servers. See VMware knowledge base article 80450 for more details.

  • Fixed Issues 2621322, 2682959: HTTP health check does not work when the HTTP content spans multiple TCP segments.

    The load balancer cannot check the backend server status based on the HTTP content.

  • Fixed Issues 2491206, 2682761: Load balancer health check does not work for body content matching when the HTTP response uses chunked encoding.

    When the HTTP response from the backend server for the health check uses chunked transfer encoding, the pool member status cannot come up even though the backend server is available.

  • Fixed Issue 2683241: Resolved Alarm conditions are still shown in Alarms API.

    These false alarms are confusing to users until they figure out there are no issues.

  • Fixed Issue 2275388: Loopback interface/connected interface routes could get redistributed before filters are added to deny the routes.

    Unnecessary route updates could cause sub-optimal routing on traffic for a few seconds. 

  • Fixed Issues 2275708, 2682727: Unable to import a certificate with its private key when the private key has a passphrase.

    You are unable to import a new certificate with private key because of the following error: 

    "Invalid PEM data received for certificate. (Error code: 2002)"
  • Fixed Issue 2328126: Bare Metal issue: Linux OS bond interface when used in NSX uplink profile returns error.

    When you create a bond interface in the Linux OS and then use this interface in the NSX uplink profile, you see this error message: "Transport Node creation may fail." This issue occurs because VMware does not support Linux OS bonding. However, VMware does support Open vSwitch (OVS) bonding for Bare Metal Server Transport Nodes.

  • Fixed Issue 2390624: Anti-affinity rule prevents service VM from vMotion when host is in maintenance mode.

    If a service VM is deployed in a cluster with exactly two hosts, the HA pair with anti-affinity rule will prevent the VMs from vMotioning to the other host during any maintenance mode tasks. This may prevent the host from entering Maintenance Mode automatically.

  • Fixed Issue 2389993: Route map removed after redistribution rule is modified from the Advanced UI or API.

    If a route map is added to a redistribution rule using the Advanced UI/API, it is removed if you modify the same redistribution rule from the Simplified (Policy) UI/API.

  • Fixed Issue 2400379: Context Profile page shows unsupported APP_ID error message.

    The Context Profile page shows the following error message: "This context profile uses an unsupported APP_ID - [<APP_ID>]. Please delete this context profile manually after making sure it is not being used in any rule." This is caused by the post-upgrade presence of six deprecated APP_IDs (AD_BKUP, SKIP, AD_NSP, SAP, SUNRPC, SVN) that no longer work on the data path.

  • Fixed Issues 2448006, 2682748: Querying a Firewall Section with associated rule-mapping fails.

    Querying a Firewall Section with associated rule-mapping fails if you use the GetSectionWithRules API call. The UI is not impacted as it depends on GetSection and GetRules API calls.

  • Fixed Issue 2475963: NSX-T VIBs fail to install due to insufficient space.

    NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host, returning a BootBankInstaller.pyc: ERROR. Some ESXi images provided by third-party vendors may include VIBs which are not in use and can be relatively large in size. This can result in insufficient space in bootbank/alt-bootbank when installing/upgrading any VIBs.

  • Fixed Issues 2590444, 2682952: VM tags are deleted when ESXi hosts disconnect from vCenter Server for longer than 30 minutes.

    VM tags are deleted when ESXi hosts disconnect from vCenter Server for longer than 30 minutes, causing DFW rules based on VM tags to stop working.

  • Fixed Issue 2484006: Protected VMs lose network connectivity. 

    SRM Protected VMs in an NSX-T Data Center environment lose network connectivity despite being configured on a different logical network, when placeholder VMs in the secondary site are powered on. This issue occurs because the same VIF UUID is applied to both the protected and the placeholder VMs. 

  • Fixed Issue 2549175: Searching in policy fails with the message: "Unable to resolve with 'start search resync policy'".

    Searching in policy fails because search is out of sync with the DNS PTR records when NSX Manager nodes are provided with new IP addresses.

  • Fixed Issue 2658577: Unable to load the System Overview screen.

    The following error is displayed when you load the System Overview screen, preventing you from monitoring your environment:

    Failed to get "{{reportName}}" report - Failed to fetch System details. Please contact the administrator. []

  • Fixed Issue 2685267: When using a Windows SFTP server, the system might not transfer the backup file to the SFTP server. 

    The backup file is not transferred to the SFTP server.

  • Fixed Issue 2661955: NSX Manager runs out of disk space affecting services. 

    You might run out of disk space and see the following message in /var/log/proton/activity-stats.log on the affected NSX Manager node:

    Trying to start running task again.

  • Fixed Issue 2679368: In case of ENS, upgrade fails if the value for TeamPolicyUpDelay runtime option is high.

    You might see the following error and upgrade might fail if the TeamPolicyUpDelay value is set to a high number: 

    Failed to unload nsxt-vswitch module.

  • Fixed Issue 2605659, 2682956: Packets are sent to incorrect port by LB L7 virtual server.

    Packets are not forwarded to the pool members on correct port under the following conditions:

    • NSGroup for server pool is not statically configured.
    • Rule action is "select pool" in forwarding phase.
    • There is no default pool for virtual server.

    The matched packets after the first non-matched packet are forwarded to backend server on port 80.

  • Fixed Issue 2682957: NSX-T Manager does not reflect users from VIDM, if the First Name attribute is missing.

    If a VIDM user is created in AD without First Name, Last Name, or Email ID attributes, then the user is not reflected in NSX-T Manager.

  • Fixed Issue 2682965: A client outage is caused by blocked traffic from DHCP and PXE servers.

    Custom segment profiles applied to a segment created in the Simplified UI are reverted to default profiles if the admin state of the segment is disabled and enabled from the Advanced UI.

  • Fixed Issue 2682966: Not able to perform restore operation from NSX Manager UI.

    Backup files are not visible from the Restore tab due to a mismatch in the case used for FQDN in the backup file and the case used for restoring. For example, you use lowercase FQDN for backup but provide FQDN in uppercase to look for backup files to restore. 

  • Fixed Issue 2682970: NTLM does not work due to crashed NGINX process on NSX-T Edge.

    The NTLM context still holds a freed connection, which is then reused by the next request.

  • Fixed Issue 2682974: Traffic delay might be observed in an NSX-T Data Center environment configured for East-West Service Insertion.

    Edge VMs are not excluded from the Service Insertion Exclusion List. An Edge VM is a system VM and should not have an East-West Service Insertion IO chain filter attached to its network interfaces.

  • Fixed Issue 2682977: Missing Content-Security-Policy and HTTP Strict-Transport-Security headers in pre-authentication responses.

    Missing Content-Security-Policy (CSP) and HTTP Strict-Transport-Security (HSTS) headers until the user is authenticated results in Qualys scanners detecting this as a security issue.

  • Fixed Issue 2682983: Traffic going through the edge might be disrupted and BGP peering with that edge node might go down.

    When a load balancer virtual server is configured with a single port range such as 1000-2000, the datapath process might crash on the edge nodes where that load balancer is realized, resulting in traffic disruption.

  • Fixed Issue 2683237: Backups cannot be generated.

    Backup generation fails repeatedly. 

  • Fixed Issue 2683249: CBM not responding to "get cluster status" CLI command.

    Timeout occurs when 'get cluster status' call cannot be processed.

  • Fixed Issue 2683256: Restarting CBM when the Corfu cluster is broken causes CBM and the deactivate cluster operation to be unresponsive.

    The deactivate cluster operation does not work when CBM cannot initialize properly due to Corfu issues.

  • Fixed Issue 2685253: Load balancer nginx process core dumps when NTLM traffic is present.

    Load balancer experiences an nginx process core dump when NTLM traffic is present.

  • Fixed Issue 2685261: Taking a VM snapshot of the Unified Appliance causes cluster issues.

    Taking a VM snapshot of a Unified Appliance resulted in clustering instability that impacted overall functionality.

  • Fixed Issue 2682750: The host crashes if an AD group used in an IDFW rule has no members.

    If an AD group used in an IDFW rule has no members, the host crashes when the rule gets evaluated for traffic. 

  • Fixed Issue 2682755: Your intended physical NIC to uplink mapping is lost.

    Physical NICs are migrated from NVDS back to VDS uplinks with a mapping that is different from the original mapping in VDS. This causes you to lose your intended mappings.

  • Fixed Issue 2682768: You experience a memory outage when working with logical switches. 

    NSX Manager nodes crash and show out of memory when you are working with logical switches. 

  • Fixed Issue 2682774: Some NSX-T Data Center services might not function properly when NSX Manager is disconnected from NSX Intelligence.

    NSX-T Data Center services that rely on NSX Intelligence might be impacted. For example, you might have problems creating a new group.

  • Fixed Issue 2682777: Search operations in NSX Manager fail.

    Search in NSX Manager fails with the error "Unable to resolve with 'start search resync policy'" when IP addresses of NSX Manager nodes are refreshed.

  • Fixed Issue 2682780: The "nsxuser" account that is required during host prep when installing, uninstalling, and upgrading NSX-T Data Center with ESXi might not get created.

    During the host-prep lifecycle of installing, uninstalling, and upgrading NSX-T Data Center, the creation of the default 'nsxuser' user, which is created internally on vCenter Server-managed ESXi hosts, might fail.

  • Fixed Issue 2682782: Distributed Firewall rules are not enforced when applied to Kubernetes pods. 

    Distributed Firewall rules using context-profiles (layer 7) do not work as expected when applied to Kubernetes pods.

  • Fixed Issue 2682793: Automated backups stop working after some time. 

    Your scheduled recurring backups stop working after about a week, disrupting expected backup creation. 

  • Fixed Issue 2682794: Several NSX Edge NICs receive buffer overflow alarms. 

    You observe a high buffer overflow rate in some NSX Edge appliances. 

  • Fixed Issue 2682797: Host preparation might fail in some cases.

    NSX-T Data Center host preparation fails due to a configuration hash mismatch that leads to a discovery loop.

  • Fixed Issue 2682801: You might see NSX Controller nodes listed separately from NSX Manager nodes.

    NSX Controller nodes are reported by the CLI command "nsxcli -c get nodes" as separate nodes, which is unexpected and confusing.

  • Fixed Issue 2685284: The control plane loses connectivity with the host after circular certificate replacement. 

    The host loses connectivity with the control plane and requires a reboot. This condition occurs when you replace certificate-1 with certificate-2 and replace certificate-2 back with certificate-1. 

  • Fixed Issue 2685285: You experience traffic loss after an autonomous NSX Edge is restored to a new NSX Edge VM. 

    After the restore of the new autonomous NSX Edge VM, network communication does not work due to incorrect MAC address. 

  • Fixed Issue 2686618: NSX Manager upgrade indefinitely stuck at "in-progress" status. 

    NSX Manager upgrade fails by remaining at "in-progress" status indefinitely. 
     

  • Fixed Issue 2688014: Error "Logical router port configuration realization error" displayed even when there is no error. 

    The realization state for a successfully realized Global Tier-0 gateway on a Local Manager incorrectly shows the error "Logical router port configuration realization error" for Local Manager's edge Transport Nodes. The realization was actually successful so this is misleading.

  • Fixed Issue 2696694: Deployment of a host fails if the host has insufficient IP pool resources and is not using a data NIC.

    If you are deploying a host with insufficient IP pool resources and without using a data NIC, the deployment fails.

  • Fixed Issue 2696700: Service Deployment fails.

    Service Deployment fails if Service Segment and Transport Node profile don't have the same Transport Zone.
     

  • Fixed Issue 2696702: Runtime instances have an inconsistent service VM ID naming convention for host-based and cluster-based deployments.

    You might notice inconsistent service VM IDs for host-based deployments compared with cluster-based deployments, which might cause confusion.

  • Fixed Issue 2696703: DNS server lookup fails if you use multiple DNS servers.

    DNS server lookup with more than one DNS server is not supported.

  • Fixed Issue 2696711: You experience a break in the flow of service insertion traffic.

    The SPF port might be missing required properties, such as the VXLAN ID, which causes the flow of service insertion traffic to break.

  • Fixed Issue 2696908: You run out of free IP addresses in the IP Pool.

    Even after deleting the service deployment, IP addresses are not released, causing the IP pool to become exhausted.

  • Fixed Issue 2698076: NSX Edge stops working. 

    Your NSX-T Data Center deployment suffers an outage because the NSX Edge stops working. 

  • Fixed Issue 2701760: Service segments can't be removed until instance endpoints are deleted.

    Instance endpoints should be deleted when a service deployment is deleted; however, they are not deleted along with the service deployment. This prevents you from removing segments for the service deployment.

  • Fixed Issue 2707380: Your ESXi host might crash.

    When ESXi flow cache is enabled and traffic has multiple destinations, such as multicast traffic, your ESXi host can crash because of a rare race condition.

  • Fixed Issue 2682802: Some physical NICs remain down for bare metal NSX Edge after an MTU change.

    In a bare metal NSX Edge, some NICs stay down after an MTU change. This causes a reboot of the system. 

  • Fixed Issue 2683242: NSX syslog entries have multiple hostname formats.

    Syslog entries on a single NSX Manager node can appear with two different hostname formats, which may cause confusion when you are trying to filter logs by hostname.

  • Fixed Issue 2683253: Transport Node state information might be missing in the support bundle. 

    If you have a large number of Transport Nodes, you might not see any Transport Node state information in the support bundle. This is because the API (GET /api/v1/transport-nodes/state) that retrieves Transport Node states times out if the response takes longer than 60 seconds. 

  • Fixed Issue 2683902: Your recurring backups do not execute per the schedule you provide. 

    If you set up recurring backups for your deployment, the backup might be delayed by 24 hours from the day you provide in the schedule. 

  • Fixed Issue 2687985: In-place upgrade fails. 

    In-place upgrade fails and you must use vMotion to migrate the VMs to perform a Maintenance Mode upgrade.

  • Fixed Issue 2688012, 2689021: Support bundle collection using UI/API may fail with timeout error for large scale deployments. 

    Support bundle collection might take longer than the pre-defined 1-hour time limit in the API in large scale setups.

  • Fixed Issue 2688015: Tier-0 realized state shows "In progress" when the Tier-0-SR is deleted from an NSX Edge node. 

    If a Tier-0 SR is deleted from an NSX Edge node, the realized state of the Tier-0 might start showing as "In progress" on that NSX Edge node.
     

  • Fixed Issue 2688973: The file "appliance-info.xml" might incorrectly contain IP address as FQDN. 

    The <fqdn> tag in "/etc/vmware/nsx/appliance-info.xml" might contain an IP address, even when FQDN is not configured. 

  • Fixed Issue 2690458: You are not able to perform operations like adding or removing members in Exclude Lists. 

    Multiple instances of Exclude List entities are incorrectly created in NSX Manager and prevent adding and deletion operations. 

  • Fixed Issue 2597714: You cannot use policy API to change "AdminStatus" property of pool members when using a group. 

    The "AdminStatus" property of pool members in the pool member group setting cannot be set properly using policy API.

  • Fixed Issues 2702999, 2703062: NSX Manager service might stop inadvertently because of a NAT rule problem, resulting in an NSX Manager crash.

    NAT rules with CIDRs that have a small prefix (e.g., 10.0.0.0/8) stop the NSX Manager service if the service IP address configuration completely overlaps with the subnet of the uplink or CSP ports.

  • Fixed Issue 2704737: OVS included in NSX-T Data Center version 2.5.0 does not compile on recent Ubuntu kernels. 

    You are not able to install NSX-T Data Center on Ubuntu kernels versions 4.15.0-76-generic or greater. 

  • Fixed Issue 2705694: NSX Manager node might become inaccessible. 

    NSX Manager node might go down due to high memory consumption by NSX CLI. 

  • Fixed Issue 2706955: ESXi host might crash.

    If flow cache for multiple-destination replication is enabled, a rare race condition can trigger a crash on your ESXi host.

  • Fixed Issue 2682785: The Load Balancer "nginx" service crashes and the VIP stops responding.

    You might experience some failed transactions because of the failure of the "nginx" service, which causes the VIP to stop responding.

  • Fixed Issue 2687823: Restarting Opsagent causes errors.

    You find that the hyperbus status is wrong when Opsagent is upgraded or restarted, and the Transport Node is then removed from and added back to the Transport Zone.

  • Fixed Issue 2696433: The PAN watcher is unable to receive notifications.

    If you are using PAN and have a watcher that is not working correctly, you are not notified when the watcher is repaired and working correctly again.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 2320529: "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores.

    "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores even though the storage is accessible from all hosts in the cluster. This error state persists for up to thirty minutes.

    Workaround: Retry after thirty minutes. As an alternative, make the following API call to update the cache entry of the datastore:

    https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime

    where <nsx-manager> is the IP address of the NSX Manager where the service deployment API has failed, and <CC Ext ID> is the identifier in NSX of the cluster where the deployment is being attempted.
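
    For example, here is a minimal Python sketch of this call using the requests library, assuming the endpoint accepts a GET request with basic authentication using admin credentials; the manager address, password, and cluster external ID below are placeholders:

    import requests

    NSX_MANAGER = "nsx-manager.example.com"  # placeholder: NSX Manager where the deployment failed
    CC_EXT_ID = "domain-c8"                  # placeholder: cluster external ID in NSX

    url = (
        f"https://{NSX_MANAGER}/api/v1/fabric/compute-collections/{CC_EXT_ID}"
        "/storage-resources?uniform_cluster_access=true&source=realtime"
    )

    # Refresh the datastore cache entry. verify=False is used here only because the
    # manager often presents a self-signed certificate; pass a CA bundle instead if you have one.
    response = requests.get(url, auth=("admin", "<admin-password>"), verify=False)
    response.raise_for_status()
    print(response.json())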

  • Issue 2355113: Unable to install NSX Tools on RedHat and CentOS Workload VMs with accelerated networking enabled in Microsoft Azure.

    In Microsoft Azure when accelerated networking is enabled on RedHat (7.4 or later) or CentOS (7.4 or later) based OS and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: After booting up RedHat or CentOS based VM in Microsoft Azure, install the latest Linux Integration Services driver available at https://www.microsoft.com/en-us/download/details.aspx?id=55106 before installing NSX tools.

  • Issue 2370555: User can delete certain objects in the Advanced interface, but deletions are not reflected in the Simplified interface.

    When groups added as part of a Distributed Firewall exclude list are deleted in the Advanced interface Distributed Firewall Exclusion List settings, the deletion may not be reflected in the Simplified interface. 

    Workaround: Use the following procedure to resolve this issue:

    1. Add an object to an exclusion list in the Simplified interface.
    2. Verify that it appears displayed in the Distributed Firewall exclusion list in the Advanced interface.
    3. Delete the object from the Distributed Firewall exclusion list in the Advanced interface.
    4. Return to the Simplified interface, add a second object to the exclusion list, and apply it.
    5. Verify that the new object appears in the Advanced interface.

  • Issue 2607918: SRM works only if both protected and recovery VMs are connected to logical switches that are in the same Transport Zones.

    SRM works only if both protected and recovery VMs are connected to logical switches that are in the same Transport Zones.

    Workaround: None. 

  • Issue 2697567: If you have an L7 Load Balancer configured in transparent mode, some requests might fail.  

    You might see "502 Bad Gateway" when using L7 Load Balancer in transparent mode. 

    Workaround: Use SNAT mode instead of transparent mode in Load Balancer pool.

  • Issue 2730634: After a uniscale upgrade, the networking component page shows an "Index out of sync" error.

    After a uniscale upgrade, the networking component page shows an "Index out of sync" error.

    Workaround: Log in to NSX Manager with admin credentials and run the "start search resync policy" command. It will take a few minutes to load the networking components.

Installation Known Issues
  • Issue 2261818: Routes learned from an eBGP neighbor are advertised back to the same neighbor.

    Enabling BGP debug logs shows the packets being received back and dropped with an error message. The BGP process consumes additional CPU resources discarding the update messages sent to peers. If there are a large number of routes and peers, this can impact route convergence.

    Workaround: None.

Upgrade Known Issues
  • Issue 2441985: Host Live upgrade from NSX-T Data Center 2.5.0 to NSX-T Data Center 2.5.1 may fail in some cases.

    Host Live upgrade from NSX-T Data Center 2.5.0 to NSX-T Data Center 2.5.1 fails in some cases and you see the following error:
    Unexpected error while upgrading upgrade unit: Install of offline bundle failed on host 34206ca2-67e1-4ab0-99aa-488c3beac5cb with error : [LiveInstallationError] Error in running ['/etc/init.d/nsx-datapath', 'start', 'upgrade']: Return code: 1 Output: ioctl failed: No such file or directory start upgrade begin Exception: Traceback (most recent call last): File "/etc/init.d/nsx-datapath", line 1394, in CheckAllFiltersCleared() File "/etc/init.d/nsx-datapath", line 413, in CheckAllFiltersCleared if FilterIsCleared(): File "/etc/init.d/nsx-datapath", line 393, in FilterIsCleared output = os.popen(cmd).read() File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/os.py", line 1037, in popen File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 676, in __init__ File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 1228, in _execute_child OSError: [Errno 28] No space left on device It is not safe to continue. Please reboot the host immediately to discard the unfinished update. Please refer to the log file for more details..

    Workaround: See Knowledge Base article 76606 for details and workaround.
