VMware Integrated OpenStack 4.0 | 19 SEP 2017 | Build 6437860
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Integrated OpenStack
- Internationalization
- What's New
- VMware Integrated OpenStack with Kubernetes
- Compatibility
- Upgrading to VMware Integrated OpenStack 4.0
- Open Source Components for VMware Integrated OpenStack 4.0
- Known Issues
About VMware Integrated OpenStack
VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager vApp that runs directly in vCenter Server.
Internationalization
VMware Integrated OpenStack version 4.0 is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish. ASCII characters must be used for all input and naming conventions of OpenStack resources (such as project names, user names, image names, and so on) and for the underlying infrastructure components (such as ESXi hostnames, virtual switch port group names, data center names, datastore names, and so on).
What's New
This release is based on the latest OpenStack Ocata release and provides the following new features and enhancements:
- Support for the latest versions of VMware products. VMware Integrated OpenStack 4.0 supports and is fully compatible with VMware vSphere 6.5 Update 1, VMware NSX for vSphere 6.3.3, and VMware NSX-T 2.0.
- SR-IOV enhancements. The direct vnic type is now supported for creating ports that represent SR-IOV virtual functions. Neutron ports created with the direct vnic type can be used when launching OpenStack instances. This enhancement also adds the ability to boot instances with multiple vnic types, for example an instance with both direct and normal vnics (see the port-creation sketch after this list).
- Multi-VC support. You can add additional compute vCenter Server instances to your NSX-T VMware Integrated OpenStack deployment.
- vRealize Automation integration. You can manage your OpenStack deployment through the embedded VMware Integrated OpenStack tab in the vRealize Automation portal and design OpenStack XaaS blueprints.
- VMware Integrated OpenStack with Kubernetes. Starting with VMware Integrated OpenStack 4.0, a container orchestration platform built on Kubernetes is now included and fully supported.
- Console log. The nova console-log command is now available for all VMs deployed with VMware Integrated OpenStack 4.0.
- Secure multi-tenancy. VMware Integrated OpenStack 4.0 introduces a new Tenant Virtual Datacenter construct that provides secure multi-tenancy, a key differentiator among OpenStack distributions. This construct offers the capability to allocate CPU and memory resources per tenant, which provides resource guarantees and isolation in a multi-tenant environment. Administrators configure secure multi-tenancy through the newly introduced viocli commands and by creating tenant-specific flavors.
- VLAN transparency. The VMware NSX plug-in for vSphere Distributed Switch and NSX for vSphere now supports the VLAN transparency extension. This feature allows for creating neutron networks with the transparency flag enabled.
- Live Resize. You can now scale up VNF resources with no downtime for resize operations. Use the os_live_resize image metadata to allow live resize of disk, memory, and vCPU (see the image-property sketch after this list).
- New licensing model. VMware Integrated OpenStack 4.0 introduces the Data Center Edition and Carrier Edition license models that must be applied after installing the solution. For more information about obtaining a valid license, see https://www.vmware.com/products/openstack.html#pricing.
- Full NUMA support. VMware Integrated OpenStack 4.0 supports NUMA-aware placement on the underlying vSphere platform. This feature provides low latency and high throughput to Virtual Network Functions (VNFs) that run in Telco environments.
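The following is a minimal sketch of the direct vnic workflow using the standard OpenStack CLI; the network, port, image, and flavor names are hypothetical, and the exact flags available depend on your client version.
openstack port create --network sriov-net --vnic-type direct sriov-port
openstack server create --flavor m1.small --image ubuntu-16.04 --nic port-id=<sriov-port-id> --nic net-id=<tenant-net-id> mixed-vnic-vm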
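A hedged example of tagging an image for live resize; the property name comes from this release note, but the value format shown here is an assumption, so verify it against the VMware Integrated OpenStack documentation.
openstack image set --property os_live_resize=disk,memory,vcpu <image-name>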
VMware Integrated OpenStack with Kubernetes
A container orchestration platform built on Kubernetes is now included with VMware Integrated OpenStack, enabling the provisioning of full infrastructure stacks for application developers. The platform provides the following features:
- Support for the latest version of Kubernetes. VMware Integrated OpenStack 4.0 includes and fully supports Kubernetes version 1.7.
- Seamless deployment with VMware Integrated OpenStack. The container platform quickly installs on top of VMware Integrated OpenStack for an integrated experience.
- Easy to use operational UI. The management UI gives administrators and users quick access to Kubernetes management operations.
- Allows for shared or exclusive cluster types. Clusters can be deployed in exclusive mode where all authorized users can manage the namespace, or in a shared mode where multi-tenancy is strictly enforced.
- Enterprise-ready storage and networking. The integration with VMware Integrated OpenStack extends enterprise class storage and networking based on vSAN and NSX to Kubernetes.
- Out of the box LDAP/AD integration. Using Keystone, AD and LDAP integrations are supported out of the box, with full multi-tenancy.
Compatibility
The VMware Product Interoperability Matrix provides details about the compatibility of the current version of VMware Integrated OpenStack with VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install VMware Integrated OpenStack or other VMware products.
Upgrading to VMware Integrated OpenStack 4.0
You can upgrade directly to VMware Integrated OpenStack 4.0 from a VMware Integrated OpenStack 3.1 deployment.
You perform the upgrade procedure directly in the VMware Integrated OpenStack manager. The complete multi-step procedure is described in detail in the VMware Integrated OpenStack Administrator Guide.
Open Source Components for VMware Integrated OpenStack 4.0
The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack 4.0 are available on the Open Source tab of the product download page. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of VMware Integrated OpenStack.
Known Issues
The known issues are grouped as follows.
VMware Integrated OpenStack
- Provider network creation through horizon fails if no UUID of a transport zone is entered
When you create VLAN type networks, it is mandatory that you provide the UUID value for the transport zone in the provider_network text box in horizon. If no value is entered, network creation fails.
Workaround: Look up the UUID of the transport zone in your VMware NSX interface and enter that value in the provider_network text box.
- Metadata is not accessible for a DHCP disabled subnet on a distributed logical router
Instances on DHCP disabled subnets cannot access metadata through the router interface if a distributed logical router is used. This behavior is not observed for shared and exclusive routers. This might be expected behavior, because the same logical network, for example the metadata network, cannot be attached to multiple distributed logical routers.
Workaround: None.
- When you boot from a glance image created using the Ubuntu Xenial OVA, the OS fails to boot
The OS fails to boot with the following errors:
error: file `/boot/grub/i386-pc/efi_gop.mod' not found
error: file `/boot/grub/i386-pc/efi_uga.mod' not found
This is an issue with the Xenial cloud OVA that is tracked by a bug in the Ubuntu cloud-images project. For more information, see https://bugs.launchpad.net/cloud-images/+bug/1615875.
Workaround: Until the Ubuntu bug is resolved and a new OVA is published, use Xenial ISO images.
- LBaaS v2 fails when you add members to a pool that is created with the --loadbalancer option
OpenStack LBaaS v2 provides two options to configure a load balancer pool: --loadbalancer and --listener. At least one of the two options must be specified to create the pool.
If you create the pool with the --loadbalancer option, adding members fails and the load balancer goes to an ERROR state.
Workaround: Create the pool with the --listener option, as in the sketch below.
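A minimal sketch of the workaround with the neutron LBaaS v2 client; the listener and pool names are hypothetical.
neutron lbaas-pool-create --name web-pool --listener web-listener --lb-algorithm ROUND_ROBIN --protocol HTTP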
- Renamed OpenStack instances appear under the old name in vCenter Server
If you rename your OpenStack instance by using the nova rename command, the change appears only in the OpenStack database. Your vCenter Server instance shows the old name.
Workaround: None.
- Availability zone configuration might not be successfully applied
After you modify the configuration of an availability zone, the new configuration might not be applied until the backup edges are deleted and recreated.
For example, the following configuration in the nsx.ini file defines an availability zone that has backup edges:
zone3:resgroup-163:datastore-12:true:datastore-21
If you change the resource pool of that zone and restart Neutron, the backup edges are not updated. New routers or networks then use the out-of-date backup edges, which leads to an inconsistent availability zone configuration.
Workaround: Call the admin utilities after you change the configuration of an availability zone and before you start Neutron:
- Modify the availability zone configuration in the nsx.ini file.
- Delete all backup edges in succession.
nsxadmin -r backup-edges -o clean --property edge-id=edge-XX
- Restart Neutron.
- Verify the new configuration.
availability-zone-list
- Certificate is not in CA store error might appear when you deploy a new OpenStack instance
When you deploy a new VMware Integrated OpenStack instance with an IP address that was used by another instance that has been connected to vRealize Automation, you might get certificate errors:
Cannot execute the request: ; java.security.cert.CertificateException: Certificate is not in CA store.Certificate is not in CA store. (Workflow:Invoke a REST operation / REST call (item0)#35)
Workaround: Delete the certificate of the old VMware Integrated OpenStack instance and import the new one by running the respective workflows in vRealize Orchestrator.
- Log in to vRealize Orchestrator.
- Go to Library > Configuration > SSL Trust Manager.
- Run the workflow to delete the trusted certificates of the old VMware Integrated OpenStack instance.
- Run the workflow to import the certificate of the new instance from URL.
- Unable to modify syslog setting post deployment in VMware Integrated OpenStack Manager interface
After deploying VIO, you cannot modify the syslog server configuration using the setting in the VIO Manager interface (VMware Integrated OpenStack > Management Server > Edit Settings > vApp Options).
Workaround: Modify the configuration here: VMware Integrated OpenStack > OpenStack Cluster > Manage > Syslog Server.
- Dashboard might show a Volume as attached even if it failed to attach
This is a known OpenStack issue, first reported in the Icehouse release.
- Long image upload times cause NotAuthenticated failure
This is a known OpenStack issue (https://bugs.launchpad.net/glance/+bug/1371121), first reported in the Icehouse release.
- Special characters in datastore names not supported by Glance (Image Service)
If a datastore name has non-alphanumeric characters like colons, ampersands, or commas, the datastore cannot be added to the Glance service. Specifically, the following characters are not permitted in Glance datastore names because their use is reserved for other purposes and therefore can interfere with the configuration: : , / $ (colon, comma, forward slash, dollar).
Workaround: Do not use these symbols.
- If either controller VM reboots, high availability might be compromised
When a controller fails, the other controller continues to provide services. However, when the initial controller reboots, it might no longer provide services, and thus is not available if the other controller also fails.
Workaround: If a controller fails and HA is invoked, review your deployment to ensure that both controllers are providing services after the failed controller reboots. For more information about how to start and stop VMware Integrated OpenStack deployments, see VMware knowledge base article 2148892.
- Metadata service is not accessible on subnets created with the no-gateway option
Deployments using NSX 6.2.2 or earlier do not support no-gateway networks; Edges are used for edge-routed networks and DHCP is used for VDR networks. Deployments using NSX 6.2.3 or later do not support no-gateway or no-dhcp networks; DHCP is used for any DHCP network and Edges are used for non-DHCP networks. In 2.x, autoconfiguration is turned off for Edge VMs. When applicable, DHCP sets the gateway and metadata is served through this gateway Edge. As a result, when a subnet is created with the no-gateway option, there is no router Edge to capture the metadata traffic.
Workaround: For networks with the no-gateway option, configure a route for 169.254.169.254/32 to forward traffic to the DHCP Edge IP, for example as in the sketch below.
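A minimal sketch of the route configuration with the OpenStack client; the subnet name and DHCP Edge IP are hypothetical, and the legacy neutron client offers an equivalent host-routes option.
openstack subnet set --host-route destination=169.254.169.254/32,gateway=<dhcp-edge-ip> tenant-subnet-1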
- Problem uploading patch file in Firefox Browser
If you use Firefox to upload the patch file for VMware Integrated OpenStack, the upload fails if Firefox is using version 19 of the Adobe Flash plug-in.
Workaround: Obtain the patch by using the CLI. You can also work around this issue by using an alternative browser or by reverting the Flash plug-in in your Firefox browser to an earlier version (15, 16, 17, or 18).
- OpenStack management service does not automatically restart
Under certain conditions, the OpenStack management service does not automatically restart. For example, after a failover event, all OpenStack services successfully restart but the management service remains unreachable.
Workaround: Manually restart the VMware Integrated OpenStack vApp in the vSphere Web Client. Right-click the icon in the Inventory page and select Shut Down. After all the services shut down, power on the vApp. Check the OpenStack manager logs to confirm that the restart was successful.
NOTE: Restarting interrupts services.
- Network creation might fail when running Heat templates
Observed in VMware Integrated OpenStack deployments using NSX 6.2.2. When running multiple Heat templates, an iteration of a network creation sometimes fails at the backend.
Resolved in NSX 6.2.3 and greater.
- Recovery operation returns "Nodes already exist" error
Under certain conditions, running the viocli recovery - <DB name> command fails if the ansible operation is interrupted. As a result, the database nodes remain and cause the error.
Workaround: Manually remove the nodes and run the viocli recovery command again.
- LBaaS v2 migration: health monitors not associated with a pool do not migrate
In LBaaS v2, health monitors must be specified and attached to a pool. In LBaaS v1, health monitors can be created without a pool association and then associated with a pool in a separate procedure. As a result, when migrating to LBaaS v2, unassociated health monitors are excluded.
Workaround: Before migrating to LBaaS v2, associate all health monitors with a pool to ensure their successful migration (for example, as in the sketch below). The migration process is optional after installing or upgrading to VMware Integrated OpenStack 3.0. See the VMware Integrated OpenStack Administration Guide.
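A minimal sketch of the association step with the LBaaS v1 neutron client; the monitor and pool IDs are hypothetical.
neutron lb-healthmonitor-associate <health-monitor-id> <pool-id>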
- NSX LBaaS v2.0 tenant limitation
NSX for vSphere load balancers support only one tenant per subnet. Under normal operation, this is not an issue because tenants create their own load balancers. If a user attempts to create and attach a load balancer to a subnet that already belongs to another tenant, the load balancer is created in an ERROR state.
Workaround: Allow tenants to create their own load balancers. Do not create and attach a load balancer to an existing subnet.
- Heat stack deletion fails with "Failed to publish configuration on NSX Edge" error
Observed in deployments using NSX v6.2.2. Under stressful conditions, the Heat stack or OpenStack API might fail at the backend.
Workaround: Retry the failed operation.
- Images must be VMX version 10 or greater
This issue affects streamOptimized images and OVAs. For example, if an image is not VMX-10 or greater, it might import without difficulty but OpenStack instances created from the image will not function. This is typically experienced when OpenStack compute nodes are deployed on older ESXi versions, such as 5.5. You also cannot correct such an image by modifying the image metadata (vmware_hw_version) or flavor metadata (vmware:hw_version).
- Recovery after vSphere HA event shows synchronization and process start-up failures
If vSphere experiences an HA event, it can affect your VMware Integrated OpenStack deployment. After the recovery, in VMware Integrated OpenStack, run the viocli deployment status -v command. If the resulting report shows any synchronization or process start-up failures, use the workaround below.
Workaround: Use the viocli services stop command to stop all OpenStack services. Use the viocli services start command to restart all OpenStack services. After restarting, run the viocli deployment status -v command again. There should be no errors.
- OpenStack recovery sometimes fails when starting RabbitMQ
In rare occurrences, VMware Integrated OpenStack recovery fails at the point where RabbitMQ initiates.
Workaround: Repeat the recovery process. Recovery should succeed the second time.
- Heat stack deletion fails to delete associated Cinder volumes
Under heavy loads, Cinder volumes sometimes fail to be deleted after their Heat stacks are deleted, resulting in database deadlock warnings and slower Cinder performance.
Workaround: There is no workaround.
- SQL-configured users cannot be modified in dashboard
If your VMware Integrated OpenStack deployment is configured to use LDAP for user authentication, you cannot modify user definitions in the OpenStack dashboard (Horizon), even those that are sourced from a SQL database.
- OpenStack dashboard: router-size drop-down menu is missing
In the OpenStack dashboard (Horizon), you can specify the size when you create an exclusive router. However when you modify a router from shared to exclusive, the router-size drop-down menu does not appear, preventing you from specifying the router size.
Workaround: Restore the default value, then modify the type to exclusive again. The drop-down menu should appear.
- When you attach a firewall to a router without a gateway, the rules are not added to the NSX router
Firewall as a Service rules are added only to a router with a gateway because those rules have no effect on routers without a gateway, as there is no relevant traffic to protect.
Workaround: To add the rules, set a gateway on the router (see the sketch below).
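A minimal sketch of the workaround with the neutron client; the router and external network names are hypothetical.
neutron router-gateway-set tenant-router ext-net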
- For VMware NSX-T deployments, when you attach a firewall to a router without a gateway, the rules are added to the NSX router
Firewall as a Service rules are added to a router without a gateway, even though there is no relevant traffic to match those rules.
Workaround: To activate the rules, set a gateway on the router.
- For deployments with NSX for vSphere, updating the admin_state parameter has no impact
Updating the admin_state parameter to False for a Nova port does nothing, because the parameter is not supported with NSX for vSphere (the no-op operation is sketched below).
Workaround: There is no workaround.
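For reference, this is the kind of update that has no effect on NSX for vSphere deployments; the port ID is hypothetical, and the admin_state parameter corresponds to the --admin-state-up flag in the neutron client.
neutron port-update <port-id> --admin-state-up False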
- For VMware NSX for vSphere deployments, after you attach a gateway to an MDProxy router, the NSX Edge (MDProxy edge) vnic0 index changes from VM Network to a Gateway network portgroup
This configuration can interfere with fetching metadata from the metadata proxy server inside an OpenStack instance. An admin tenant can attach an MDProxy router to a gateway, which results in blocking metadata server access from the OpenStack instance.
Workaround: Do not attach a gateway to an MDProxy router.
- Upgrade to 4.0 process might fail with "ssh_exchange_identification: Connection closed by remote host" or "Can't connect to MySQL server" errors that appear in ansible.log
The upgrade process might fail due to glitches in the infrastructure, such as network latency or resource contention.
Workaround: Click Continue Upgrade in the vSphere Web Client to retry the upgrade.
- BGP tenant networks are lost on service GW edges
After the BGP peering between the BGP speaker and the service gateway is established, running the neutron bgp-speaker-network-remove command to disassociate the BGP speaker from the external/provider network may cause the tenant routes on the service gateways to be lost, even after the external/provider network is re-associated with the BGP speaker by using neutron bgp-speaker-network-add. The issue occurs because the ECMP flags are toggled and BGP is configured at the same time by this set of Neutron commands.
Workaround: The default value of 2 seconds for the ecmp_wait_time parameter might not be long enough and must be increased to 5 seconds. This change remediates the missing routes after performing the same set of operations, bgp-speaker-network-remove followed by bgp-speaker-network-add. Change the property value in the nsxv.ini file, as in the sketch below.
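A minimal sketch of the change; the section name and file location are assumptions, so confirm where ecmp_wait_time is defined in your deployment's nsxv.ini before editing.
[nsxv]
ecmp_wait_time = 5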
- Metadata Agent HTTP communication with Nova Server experiences security risk
The metadata agent that lives on the edge appliance serves as a reverse proxy and reaches out to an upstream Nova server to gather metadata information about the NSX environment on OpenStack. The nginx reverse proxy configuration also supports plaintext communication. The lack of TLS encryption exposes sensitive data to disclosure, and attackers on the network can also modify data from the site in transit.
Workaround: Use HTTPS with CA support instead of HTTP for secure communication between mdproxy and the Nova server. Perform the following steps.
- Enable Nova Metadata HTTPS support by adding the following parameters to nova.conf.
[DEFAULT]
enabled_ssl_apis = metadata
[wsgi]
ssl_cert_file = <nova metadata https server certificate file path>
ssl_key_file = <nova metadata https server private key file path>
- Log in to the NSX Manager, go to System > Trust > CERTIFICATES to import a CA certificate or chain of certificates as needed, and record the UUIDs of the certificates.
- Create an HTTPS mdproxy service using a REST call.
- Prepare a https_mdproxy.json file by using the following format:
{
"display_name" : "https_md_proxy",
"resource_type" : "MetadataProxy",
"metadata_server_url" : "https://10.117.5.179:8433",
"metadata_server_ca_ids": ["4574853e-e312-4b02-a1c1-002b570c2aa8","940251c9-3500-4bcd-9761-b49d0c2a95d1"],
"secret": "123",
"edge_cluster_id" : "00aaf00d-a8ba-42b6-a3bf-9914c8567401"
}
- Deploy one HTTPS mdproxy service with CA by calling the REST API and wait for the NSX Manager to return a 201 CREATED success status.
curl -i -k -u <nsx-mgr-admin:nsx-mgr-passwd> \
  -H "content-type: application/json" \
  -H "Accept: application/json" \
  -X POST https://<nsx-mgr-ip>/api/v1/md-proxies \
  -d "`cat ./https_mdproxy.json`"
Record the mdproxy service's UUID and use this mdproxy service as the VMware Integrated OpenStack metadata proxy service. The communication between mdproxy and the Nova server is now secured by HTTPS with certificate authentication.
- For NSX-T deployments, newly created tier-0 router does not connect to tier-1 router during router-gateway-set
If you have a Tier-0 router configured, when you create a new one, the UUID of the new router does not auto-populate in the nsxv3.ini file. New Tier-1 routers that you create do not connect to your new Tier-0 router. You must manually update the nsxv3.ini file and recreate your external network to fix the issue.
Workaround: Perform the following steps to resolve the issue.
- Get the UUID of your new Tier-0 router.
- Open the /etc/neutron/plugin/vmware/nsxv3.ini file and update the UUID for the Tier-0 router (see the sketch after these steps).
- Restart the Neutron server.
- Delete your existing external network and create a new one.
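A hedged sketch of the nsxv3.ini edit; the option name is an assumption based on the standard NSX-T Neutron plug-in configuration, so match it to the existing Tier-0 entry in your file rather than adding a new one.
[nsx_v3]
default_tier0_router = <new-tier0-router-uuid>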
- When you remove a gateway of a BGP enabled shared router, you might observe a brief network outage on other shared BGP enabled routers
In the shared router case, multiple routers can be hosted on the same edge. The gateway IP of one of those routers is used as the router id in the BGP configuration. When the gateway of that router is cleared, the plugin picks the gateway IP of some other BGP-enabled router as the router id and modifies that field. During the process, a disruption in the peering occurs, because the advertised routes for any other BGP routers hosted on that edge are lost. The routes recover after the peering is reestablished.
Workaround: There is no workaround. Use an exclusive router to avoid the issue.
- Tenants traffic might get blocked after enabling NSX policies in Neutron
After enabling security-group-policy in the Neutron plug-in, the order of the NSX firewall sections might be wrong. The correct order must be:
- NSX policies
- Tenants security groups
- Default sections
Workaround: Create the first NSX policy before configuring VMware Integrated OpenStack. If you have already made configurations, go to the NSX Firewall page in the vSphere Web Client and move the policy sections up.
- After cycling power on a VMware Integrated OpenStack with Kubernetes VM with an SDDC provider, OpenStack service containers stop working and do not restart automatically.
If a VMware Integrated OpenStack with Kubernetes VM has one SDDC provider deployed, and the VM is powered off and on, for example to address an issue with an ESXi host, the VM is migrated to another host. Subsequent operations on the provider, such as Kubernetes cluster creation and scale-out, will fail.
Workaround: To refresh the provider, perform the following steps:
- On the VMware Integrated OpenStack with Kubernetes VM, log in as root with the password set during OVA deployment:
vkube login --insecure
- Obtain the SDDC provider ID:
vkube provider list --insecure
- Use the provider ID to refresh the SDDC provider:
vkube provider refresh sddc <provider_id> --insecure
- After the admin password is changed, a user cannot list users and groups.
The cached admin password is out of sync with the new admin password.
Workaround: To update the cached admin password, obtain the update_admin_pwd script from VMware GSS. Then run update_admin_pwd and run vkube cluster update on each cluster.
- No vSphere clusters appear in the UI for an SDDC cloud provider
If you create an SDDC cloud provider in the UI and choose to upload a Root CA file, no clusters appear on the vSphere Clusters page. This is due to a bug in the certificate verification process. This problem does not occur if you secure your vCenter server with a certificate approved by a trusted certificate authority or if you choose to ignore the vCenter Server certificate validation.
Workaround: Perform one of the following tasks.
- Secure your vCenter server with a certificate approved by a trusted certificate authority then create the SDDC cloud provider.
- Ignore the vCenter Server certificate validation. This is not recommended for secure installations.
- Use the CLI to create an SDDC cloud provider and pass the Root CA file as a vcenter_certificate property in the CLI payload.
- After entering provider name, UI still prompts for provider name
When creating an SDDC or OpenStack Cloud Provider, the UI displays the error "Provider name is required." even if the provider name is specified in the Add a Provider wizard.
Workaround: This problem occurs the first time the UI is loaded with certain browser versions. To work around the problem, close the browser tab, then open a new tab and connect to the virtual appliance again.
- Error message "Only {0} files are supported" appears when uploading files in the UI
This error message appears in the UI when an incorrect file type is uploaded. This can occur in either of the following cases:
- When creating a cloud provider or cluster, you attempt to upload a payload file that was previously downloaded during the creation process.
- When creating an SDDC cloud provider or OpenStack provider, you attempt to upload a CA certificate file.
Workaround: Apply the workaround that pertains to your case:
- If attempting to upload a payload file, verify that the payload file has the .json extension and contains valid JSON content.
- If attempting to upload a CA certificate, verify that the certificate file has the .crt extension.
- SDDC Cloud Provider creation fails with "dpkg: unrecoverable fatal error, aborting:"
Creating an SDDC Cloud Provider fails with an error written to the logs of the column-api container on the Virtual Appliance. The message that appears is similar to the following.
docker logs column-api -f
TASK [bootstrap-os : Bootstrap | Install python 2.x and pip] *******************
172.18.0.2 - - [06/Sep/2017 05:47:32] "GET /runs/46a74449-7123-4574-90c2-3404dfac6641 HTTP/1.1" 200 -
fatal: [k8s-node-1-2393e79d-ec6a-4e63-8f63-c6308d72496e]: FAILED! => {"changed": true, "failed": true, "rc": 100, "stderr": "Shared connection to 192.168.0.3 closed.", "stdout": "........"dpkg: unrecoverable fatal error, aborting:", " files list file for package 'python-libxml2' is missing final newline", "E: Sub-process /usr/bin/dpkg returned an error code (2)"]}"
Workaround: The error is a result of file corruption during SDDC cloud provider creation. Delete the SDDC cloud provider and re-create it.