VMware Integrated OpenStack 4.1 | 18 JAN 2018 | Build 7538136
Updated on 13 NOV 2018
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Integrated OpenStack
- What's New
- Compatibility
- Upgrading to Version 4.1
- Internationalization
- Open Source Components for VMware Integrated OpenStack
- Resolved Issues
- Known Issues
About VMware Integrated OpenStack
VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager vApp that runs directly in vCenter Server.
What's New
This release is based on the latest OpenStack Ocata release and provides the following new features and enhancements:
VMware Integrated OpenStack
- Support for the latest versions of VMware products: VMware Integrated OpenStack 4.1 supports and is fully compatible with VMware vSphere 6.5 Update 1, VMware NSX for vSphere 6.3.5, VMware NSX-T 2.1, and vSAN 6.6.1.
- Support for the HTML5 vSphere Client: VMware Integrated OpenStack supports the HTML5 vSphere Client versions 6.5.0U1, 6.5.0U1b, 6.5.0U1c, and 6.5.0f.
- Multiple domain LDAP backend: Keystone can now be backed by multiple LDAP domains, enabling support of more complex structures and environments.
- Native NSX-T LBaaS: VMware Integrated OpenStack fully supports the new NSX-T native load balancer for both virtual machine- and container-based workloads.
- HAProxy limiting: HAProxy has been updated with additional configuration settings to prevent malicious and unintentional API overload.
- Tiny deployment mode: A new deployment option has been introduced to allow for deployment to a single server in an appliance model.
- Public management APIs: OpenStack Management Server APIs that can be used directly to automate deployment and lifecycle management of VMware Integrated OpenStack are now documented and available for general consumption.
VMware Integrated OpenStack with Kubernetes
A container orchestration platform built on Kubernetes is included with VMware Integrated OpenStack, enabling the provisioning of full infrastructure stacks for application developers. The platform provides the following new features:
- Support for the latest version of Kubernetes: VMware Integrated OpenStack 4.1 includes and fully supports Kubernetes version 1.8.1.
- Logging enhancements: Logs can now be forwarded to any syslog-compliant destination, enabling integration with existing log management tools.
- Additional components: The container platform has added support for Helm and Heapster to help manage and monitor deployed applications.
- Control plane backup and recovery: This release includes additional tools and best practices to assist with container platform control plane backups.
Compatibility
See the VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products, including vSphere components.
Upgrading to Version 4.1
Upgrading VMware Integrated OpenStack
You can upgrade directly to VMware Integrated OpenStack 4.1 from VMware Integrated OpenStack 3.1 or later.
- To upgrade from VMware Integrated OpenStack 3.1.x to VMware Integrated OpenStack 4.1, see Upgrade VMware Integrated OpenStack in the installation guide.
NOTE: If you have configured floating IP addresses on a router with source NAT disabled, enable source NAT or remove the floating IP addresses before upgrading to version 4.1. Floating IP addresses are no longer supported on routers with source NAT disabled.
- To upgrade from VMware Integrated OpenStack 4.0 to VMware Integrated OpenStack 4.1, see Patch VMware Integrated OpenStack in the installation guide.
If you are running VMware Integrated OpenStack 3.0 or an earlier version, first upgrade to version 3.1 and then upgrade to version 4.1.
Upgrading VMware Integrated OpenStack with Kubernetes
To upgrade from VMware Integrated OpenStack with Kubernetes 4.0 to VMware Integrated OpenStack with Kubernetes 4.1, see Upgrade VMware Integrated OpenStack with Kubernetes in the VMware Integrated OpenStack with Kubernetes Getting Started Guide.
Internationalization
VMware Integrated OpenStack 4.1 is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish.
The following items must contain only ASCII characters:
- Names of OpenStack resources (such as projects, users, and images)
- Names of infrastructure components (such as ESXi hosts, port groups, data centers, and datastores)
- LDAP and Active Directory attributes
VMware Integrated OpenStack with Kubernetes is available in English only.
Open Source Components for VMware Integrated OpenStack
The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack 4.1 are available on the Open Source tab of the product download page. You can also download the disclosure packages for the components of VMware Integrated OpenStack that are governed by the GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available.
Resolved Issues
The resolved issues are grouped as follows.
VMware Integrated OpenStack
- No vSphere clusters appear in the GUI for an SDDC cloud provider.
If you create an SDDC cloud provider in the GUI and upload a Root CA file, no clusters appear on the vSphere Clusters page. This problem does not occur if you secure your vCenter Server with a certificate approved by a trusted certificate authority or if you choose to ignore the vCenter Server certificate validation.
This issue has been resolved in this release.
- After the admin password is changed, users and groups cannot be listed.
The cached admin password is out of sync with the new admin password.
This issue has been resolved in this release.
- The error message "Only {0} files are supported" appears when uploading files in the GUI.
This error message appears when an incorrect file type is uploaded. This can occur in either of the following cases:
- When creating a cloud provider or cluster, you attempt to upload a payload file that was previously downloaded during the creation process.
- When creating an SDDC cloud provider or OpenStack provider, you attempt to upload a CA certificate file.
This issue has been resolved in this release.
- The GUI prompts you for a provider name even after you have entered one.
When creating an SDDC or OpenStack cloud provider, the GUI displays the error "Provider name is required." even if the provider name is specified in the Add a Provider wizard.
This issue has been resolved in this release.
Known Issues
The known issues are grouped as follows.
VMware Integrated OpenStack
- VMware Integrated OpenStack cannot connect to NSX-T after the NSX-T password is changed.
If you change the NSX-T password while the Neutron server is running, VMware Integrated OpenStack might fail to connect to NSX-T.
Workaround: Before changing the NSX-T password, log in to the active controller node and run the systemctl stop neutron-server command to stop the Neutron server service. The service will be restarted after you update the NSX-T password in VMware Integrated OpenStack.
- You cannot re-add a deleted compute node with tenant virtual data centers.
After you delete a compute node with a tenant virtual data center, attempting to re-add it fails with a "Failed to create resource provider" error in /var/log/nova/nova-compute.log.
Workaround: Perform the following steps to remove the Nova compute node from the database:
- Find the MOID of the deleted compute node.
- Log in to the active database node and open the nova_api database:
mysql
use nova_api
- In the resource_providers table, remove the resource_provider record with the MOID of the deleted compute node and remove all children of that record.
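As a sketch only, the cleanup described above could look like the following. The table and column names match the Ocata-era nova_api schema, but the provider id (42) and MOID are placeholders you must substitute with values from your own query results; back up the database before running any DELETE statement.

```shell
# Run on the active database node. Replace MOID with the managed object ID
# of the deleted compute node; 42 is a hypothetical provider id.
# 1. Locate the stale resource provider:
mysql nova_api -e "SELECT id, uuid, name FROM resource_providers WHERE name LIKE '%MOID%';"
# 2. Remove dependent records (children) that reference the provider, then the provider itself:
mysql nova_api -e "DELETE FROM inventories WHERE resource_provider_id = 42;"
mysql nova_api -e "DELETE FROM allocations WHERE resource_provider_id = 42;"
mysql nova_api -e "DELETE FROM resource_providers WHERE id = 42;"
```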
- An error occurs when you delete compute nodes out of order and then attempt to add a compute node.
If you do not delete compute nodes in descending order, adding a node later will generate an error.
Workaround: Delete nodes in order from largest to smallest node number. For example, with three compute nodes VIO-Compute-0, VIO-Compute-1, and VIO-Compute-2, you must delete VIO-Compute-2 first, then VIO-Compute-1, and finally VIO-Compute-0.
- The OpenStack GUI only exports the original value of the public virtual IP address.
If the public virtual IP address is changed and the VMware Integrated OpenStack or OpenStack configuration is exported and reloaded on setup, the exported configuration will contain the public virtual IP address of the original configuration, not the updated value.
Workaround: Update the public virtual IP address in the exported and saved configuration file before reloading the OpenStack configuration. Alternatively, update the public virtual IP address in the GUI when confirming the redeployment.
- The public load balancer IP address conflicts with the OpenStack API access network.
If configured outside of the GUI, the IP address of the public load balancer might overlap with the OpenStack API access network. When the configuration is exported and re-applied to the OpenStack or VMware Integrated OpenStack setup, the IP address overlap will not be allowed.
Workaround: When providing or configuring IP addresses, ensure that the public load balancer IP address does not overlap with the OpenStack API access network.
- Deploying VMware Integrated OpenStack using the HTML5 vSphere Client fails.
In the HTML5 vSphere Client, if you deploy VMware Integrated OpenStack using an old template without selecting a deployment type, an internal REST API error occurs in the last step of the deployment wizard.
Workaround: When using an old template, select the deployment type manually. Alternatively, you can use the Flex-based vSphere Web Client to deploy OpenStack.
- The VMware Integrated OpenStack vApp is not displayed in the HTML5 vSphere Client.
After you install VMware Integrated OpenStack, the HTML5 vSphere Client may fail to load the VMware Integrated OpenStack plugin.
Workaround: Log out of the vSphere Client and log in again. If the vApp is still not displayed, perform the following steps to restart the HTML5 vSphere Client:
- Log in to the Flex-based vSphere Web Client.
- Select Home > Administration.
- On the Navigator tree, select System Configuration and click Services.
- Select VMware vSphere Client.
- Click the Actions icon and select Restart.
- A load balancer goes into the ERROR state.
If a load balancer is created using a subnet that is not connected to a tier-1 network router, the load balancer cannot be successfully created and will enter the ERROR state.
Workaround: Attach a tier-1 network router to the subnet before creating a load balancer.
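As an illustration of the workaround above, assuming the Neutron CLI with placeholder IDs and a hypothetical load balancer name, the subnet can be attached to a tier-1 router before the load balancer is created:

```shell
# ROUTER_ID is the tier-1 router; SUBNET_ID is the subnet the load balancer will use.
neutron router-interface-add ROUTER_ID SUBNET_ID
# Then create the load balancer on that subnet:
neutron lbaas-loadbalancer-create --name my-lb SUBNET_ID
```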
- Instances fail to boot under heavy load.
If you deploy a VMware Integrated OpenStack instance while the system is under heavy load, Keystone may become inundated with API requests and fail to respond, returning the error "Service Unavailable".
Workaround: Deploy the instance when the system load is lighter.
- Certificate verification may fail on the OpenStack Management Server.
When you use the viocli command-line utility, the following error may occur:
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Workaround: On the OpenStack Management Server, disable verification of the vCenter Server certificate by running the following commands:
sudo su -
export VCENTER_INSECURE=True
- When you remove a gateway of a BGP-enabled shared router, a brief network outage may occur on other BGP-enabled shared routers.
In an environment with shared routers, multiple routers may be hosted on the same edge. If BGP is enabled, the gateway IP address of one of those routers is used as the router ID. When the gateway of a router is cleared, the plugin selects the gateway of another BGP-enabled router as the new router ID. This process causes a temporary disruption in peering because the advertised routes for the other BGP-enabled routers hosted on that edge are lost.
Workaround: Use an exclusive router.
- Service disruption occurs during refresh of Nova or Neutron services.
If VMware Integrated OpenStack detects an OpenStack setting that does not meet license requirements, it tries to correct the setting by restarting Nova or Neutron services.
Workaround: None. Assign your license before deploying OpenStack to ensure that OpenStack settings meet license requirements.
- For NSX-T deployments, a new tier-0 router does not connect to tier-1 routers during router-gateway-set.
If you create a tier-0 router when you already have one configured, the UUID of the new router is not automatically written to the nsxv3.ini file. Tier-1 routers that you create afterward do not connect to your new tier-0 router.
Workaround: Manually update the nsxv3.ini file and recreate your external network:
- Find the UUID of your new tier-0 router.
- Open the /etc/neutron/plugin/vmware/nsxv3.ini file and update the UUID for the new tier-0 router.
- Restart the Neutron server.
- Delete your external network and create a new one.
- Deleting a router interface times out.
When concurrent Heat stacks are deployed with shared NSX routers, router interface deletion can time out. The following might be displayed: neutron_client_socket_timeout, haproxy_neutron_client_timeout, or haproxy_neutron_server_timeout.
Workaround: Do not use shared routers in environments where network resources frequently change. If NAT/FIP is required, use an exclusive router. Otherwise, use a distributed router.
- For NSX-V deployments, after you attach a gateway to a metadata proxy router, the OpenStack deployment cannot access the metadata server.
If you attach a gateway to a metadata proxy router, the NSX Edge vnic0 index changes from VM Network to a gateway network port group. This may prevent the OpenStack deployment from accessing the metadata server.
Workaround: Do not attach a gateway to a metadata proxy router.
- For NSX-T deployments, if you attach a firewall to a router without a gateway, firewall rules are added to the NSX router.
Firewall as a Service rules are added to a router without a gateway, even though there is no relevant traffic to match those rules.
Workaround: To activate the rules, configure a gateway for the router.
- A Nova instance fails to boot with the error "no valid host found".
Under stress conditions, booting an instance using the tenant_vdc property may fail.
Workaround: Boot the instance when system load is lighter.
- BGP tenant networks are lost on service gateway edges.
After the BGP peering between a BGP speaker and the service gateway is established, running the neutron bgp-speaker-network-remove command to disassociate the BGP speaker from the external or provider network may cause the tenant routes on the service gateway to be lost. Restoring the external or provider network to the BGP speaker using neutron bgp-speaker-network-add will not recreate the routes.
Workaround: In the nsxv.ini file, change the value of ecmp_wait_time to 5 seconds.
- iBGP peering between the DLR tenant edge (PLR) and provider gateway edge fails to properly advertise the tenant network and breaks external communication.
When iBGP peering is used, advertised routes are installed on peers without modifying the next hop. As a result, the provider gateway edge installs routes between tenant networks with the next hop IP address in the transit network range instead of the tenant's PLR edge uplink. Since the gateway edge cannot resolve the route to the transit network, communication is interrupted.
Workaround: Use eBGP peering when working with distributed routers.
- For NSX-V deployments, the admin_state parameter has no effect.
Changing the admin_state parameter to False for a Nova port does not take effect. This parameter is not supported with NSX-V.
Workaround: None.
- The cloud services router contains the IP address but not the FQDN.
During VMware Integrated OpenStack deployment, the public hostname in the load balancer configuration was not specified or did not conform to requirements for public access. The public hostname is used for external access to the VMware Integrated OpenStack dashboard and APIs.
Workaround: To change or edit the public hostname after deployment, see KB 2147624.
- When you attach a firewall to a router without a gateway, firewall rules are not added to the NSX router.
Firewall as a Service rules are added only to a router when it has a gateway. These rules have no effect on routers without a gateway because there is no relevant traffic.
Workaround: Configure a gateway for the router before attaching a firewall.
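For example, using the Neutron CLI with placeholder IDs, a gateway can be set on the router before the firewall is attached:

```shell
# Give the router a gateway on the external network so that FWaaS rules take effect.
neutron router-gateway-set ROUTER_ID EXTERNAL_NETWORK_ID
```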
- Metadata agent HTTP communication with the Nova server poses a security risk.
The metadata agent on the edge appliance serves as a reverse proxy and communicates with an upstream Nova server to gather metadata information about the NSX environment on OpenStack. The nginx reverse proxy configuration also supports plaintext communication. The lack of TLS encryption exposes sensitive data to disclosure, and attackers on the network can also modify data from the site in transit.
Workaround: To ensure secure communication between the metadata proxy server and the Nova server, use HTTPS with CA support instead of HTTP.
- Enable Nova metadata HTTPS support by adding the following parameters to nova.conf:
[DEFAULT]
enabled_ssl_apis = metadata
[wsgi]
ssl_cert_file = nova-md-https-server-cert-file
ssl_key_file = nova-md-https-server-private-key-file
- On the NSX Manager, select System > Trust > CERTIFICATES and import a CA certificate or chain of certificates. Record the UUIDs of the imported certificates.
- Prepare the https_mdproxy.json file in the following format:
{
  "display_name": "https_md_proxy",
  "resource_type": "MetadataProxy",
  "metadata_server_url": "https://md-server-url",
  "metadata_server_ca_ids": ["ca-id"],
  "secret": "secret",
  "edge_cluster_id": "edge-cluster-id"
}
- Deploy the HTTPS metadata proxy server by using the REST API:
curl -i -k -u nsx-mgr-admin:nsx-mgr-passwd -H "content-type: application/json" -H "Accept: application/json" -X POST https://nsx-mgr-ip/api/v1/md-proxies -d "`cat ./https_mdproxy.json`"
- Configure VMware Integrated OpenStack with the UUID of the metadata proxy server created. Communication between the metadata proxy server and the Nova server is now secured by HTTPS with certificate authentication.
- Policy file customizations are not synchronized to the VMware Integrated OpenStack dashboard.
The GUI does not honor changes to the policy specified in the custom playbook.
Workaround: If you use the custom playbook to edit policy files, make the same changes in the VMware Integrated OpenStack dashboard policy files to ensure consistency.
- The availability zone configuration might not be successfully applied.
After you modify the configuration of an availability zone, the new configuration might not be applied until the backup edges are deleted and re-created.
Workaround: Delete all backup edges and restart Neutron:
- Delete all backup edges:
nsxadmin -r backup-edges -o clean --property edge-id=edge-node-id
- Restart Neutron.
- Renamed OpenStack instances appear under the original names in vCenter Server.
If you rename your OpenStack instance by using the nova rename command, changes appear only in the OpenStack database. Your vCenter Server instance continues to show the original name.
Workaround: None.
- Metadata is not accessible for a subnet without DHCP on a distributed logical router.
Instances on subnets without DHCP cannot access metadata through the interface of a distributed logical router. This behavior is not observed for shared and exclusive routers.
Workaround: None.
- The "Certificate is not in CA store" error might appear when you deploy an OpenStack instance.
When you deploy a new VMware Integrated OpenStack instance with an IP address that was previously used by another instance that was connected to vRealize Automation, the following certificate error may occur:
Cannot execute the request: ; java.security.cert.CertificateException: Certificate is not in CA store.Certificate is not in CA store. (Workflow:Invoke a REST operation / REST call (item0)#35)
Workaround: Delete the certificate of the old VMware Integrated OpenStack instance and import the new one in vRealize Orchestrator:
- Log in to vRealize Orchestrator.
- Select Library > Configuration > SSL Trust Manager.
- Run the workflow to delete the trusted certificates of the old VMware Integrated OpenStack instance.
- Run the workflow to import the certificate of the new instance from URL.
- Tenant traffic might be blocked after you enable NSX policies in Neutron.
After you enable security-group-policy in the Neutron plugin, the NSX firewall sections might be listed in the wrong order. The correct order is as follows:
- NSX policies
- Tenant security groups
- Default sections
Workaround: In the vSphere Web Client, open the NSX Firewall page and move the sections to the correct position. To prevent this issue from occurring, create the first NSX policy before configuring VMware Integrated OpenStack.
- The router size drop-down menu is not displayed on the VMware Integrated OpenStack dashboard.
When you create an exclusive router on the VMware Integrated OpenStack dashboard, you can specify its size. However, when you change a router from shared to exclusive, the router size drop-down menu does not appear, preventing you from specifying the router size.
Workaround: Restore the default value for the router and modify the type to exclusive again. The drop-down menu should appear.
- SQL-configured users cannot be modified on the VMware Integrated OpenStack dashboard.
If your VMware Integrated OpenStack deployment is configured to use LDAP for user authentication, you cannot modify any user definitions in the VMware Integrated OpenStack dashboard, even those that are sourced from a SQL database.
Workaround: None.
- Recovery after a vSphere HA event shows synchronization and process startup failures.
vSphere HA events can affect your VMware Integrated OpenStack deployment. After vSphere recovers, run the viocli deployment status command on the OpenStack Management Server. If the resulting report shows any synchronization or process startup failures, use the workaround below.
Workaround: Manually restart all OpenStack services by running the viocli services stop command and then the viocli services start command. After the OpenStack services have restarted, run the viocli deployment status command again and confirm that there are no errors.
- Images must be VMX version 10 or greater.
This issue affects stream-optimized images and OVAs. If the hardware version of an image is earlier than VMX 10, OpenStack instances created from the image will not function. This is typically experienced when OpenStack compute nodes are deployed on older ESXi versions, such as 5.5. You cannot correct such an image by modifying the image metadata (vmware_hw_version) or flavor metadata (vmware:hw_version).
Workaround: Use a newer image.
- OpenStack Management Server may not automatically restart.
Under certain conditions, the OpenStack Management Server does not automatically restart. For example, after a failover event, all OpenStack services successfully restart but the OpenStack Management Server remains unreachable.
Workaround: Manually restart the VMware Integrated OpenStack vApp in the vSphere Web Client. Right-click the icon on the Inventory page and select Shut Down. After all the services shut down, power on the vApp. Check the OpenStack Management Server logs to confirm that the restart was successful.
- Metadata service is not accessible on subnets created with the no-gateway option.
When a subnet is created with the no-gateway option, there is no router edge to capture the metadata traffic.
Workaround: For networks with the no-gateway option, configure a route for 169.254.169.254/32 to forward traffic to the DHCP edge IP address.
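Assuming the python-neutronclient CLI shipped with this release, with SUBNET_ID and DHCP_EDGE_IP as placeholders for your environment, the route above might be configured as a host route on the subnet:

```shell
# Adds a host route so that metadata traffic (169.254.169.254/32)
# is forwarded to the DHCP edge IP address.
neutron subnet-update SUBNET_ID \
  --host-routes type=dict list=true \
  destination=169.254.169.254/32,nexthop=DHCP_EDGE_IP
```

Instances pick up the host route through DHCP, so they must renew their lease before the route takes effect.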
- High availability may be compromised if a controller virtual machine reboots.
When a controller fails in a high availability setup, the second controller continues to provide services. However, when the initial controller reboots, it might not begin to provide services. The deployment would then be unable to switch back to the initial controller if the second controller failed.
Workaround: After a failed controller reboots in a high availability setup, review your deployment to ensure that both controllers are providing services. For more information about how to start and stop VMware Integrated OpenStack deployments, see KB 2148892.
- Special characters in datastore names are not supported by Glance.
If a datastore name includes certain non-alphanumeric characters, the datastore cannot be added to the Glance service. The following characters are reserved for other purposes and not permitted in Glance datastore names: colons (:), commas (,), slashes (/), and dollar signs ($).
Workaround: Do not use these symbols in datastore names.
- Long image upload times cause NotAuthenticated failure.
This is a known OpenStack issue first reported in the Icehouse release. See https://bugs.launchpad.net/glance/+bug/1371121.
- Volumes may be displayed as attached on the dashboard even if they failed to attach.
This is a known OpenStack issue first reported in the Icehouse release.
- Syslog settings cannot be modified after deployment through the VMware Integrated OpenStack vApp.
The syslog server configuration cannot be modified in VMware Integrated OpenStack > Management Server > Edit Settings > vApp Options after deployment.
Workaround: Modify the configuration in VMware Integrated OpenStack > OpenStack Cluster > Manage > Syslog Server.
- The Kubernetes API server cannot be accessed through the virtual IP address.
If you have deployed multiple Kubernetes clusters with an OpenStack provider in an NSX-T network environment, you may be unable to access the Kubernetes API server using the virtual IP address.
Workaround: Log in to the NSX-T backend server and update the load balancer virtual server with the floating IP address.
- For VDS deployments with an SDDC provider, clusters may appear as ACTIVE but have no external routing after recovery.
If the Nginx ingress controller pod is in the error state after recovery, no external routing can occur.
Workaround: Perform the following steps to clear the error state:
- Delete the default service account and the affected Nginx ingress controller pod.
kubectl delete serviceaccount default -n kube-system
kubectl delete pod nginx-ingress-controller-id -n kube-system
- On the VMware Integrated OpenStack with Kubernetes virtual machine, run the vkube cluster update command.
- Deleted clusters cannot be restored.
Once the Delete Cluster and Delete Provider commands have been run, the networks, routers, and load balancers that have been deleted cannot be recovered.
Workaround: None.
- After the guest operating system of the Kubernetes cluster node is restarted, the flannel pod does not start up correctly.
Restarting the guest operating system of the Kubernetes cluster node cleans up all IP table rules. As a result, the flannel pod does not start up correctly.
Workaround: Restart the Kubernetes network proxy. You can stop the kube-proxy process and hyperkube will start a new kube-proxy process automatically.
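A minimal sketch of that restart, assuming shell access to the affected cluster node (process names per this release's hyperkube-based deployment):

```shell
# Stop the running kube-proxy; hyperkube is expected to respawn it,
# which re-creates the cleaned-up IP table rules.
sudo pkill -f kube-proxy
# Confirm that a new kube-proxy process has come up:
pgrep -af kube-proxy
```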
- The "No policy assigned" error is displayed when cluster operations are performed.
A user that is a member of a group assigned to either an exclusive or shared cluster may see "No policy assigned" when performing operations on the cluster, such as running the kubectl utility. This occurs because the group information of the authenticated user is not stored correctly during the user session.
Workaround: Assign an individual user to the cluster instead of a group.
- SDDC cloud provider creation fails with "dpkg: unrecoverable fatal error, aborting:" message.
Creating an SDDC cloud provider fails, and the logs of the column-api container on the virtual appliance contain a message similar to the following:
docker logs column-api -f
TASK [bootstrap-os : Bootstrap | Install python 2.x and pip] *******************
172.18.0.2 - - [06/Sep/2017 05:47:32] "GET /runs/46a74449-7123-4574-90c2-3404dfac6641 HTTP/1.1" 200 -
fatal: [k8s-node-1-2393e79d-ec6a-4e63-8f63-c6308d72496e]: FAILED! => {"changed": true, "failed": true, "rc": 100, "stderr": "Shared connection to 192.168.0.3 closed.", "stdout": "........", "dpkg: unrecoverable fatal error, aborting:", " files list file for package 'python-libxml2' is missing final newline", "E: Sub-process /usr/bin/dpkg returned an error code (2)"]}
Workaround: Delete the SDDC cloud provider and re-create it.
- After cycling power on a VMware Integrated OpenStack with Kubernetes virtual machine with an SDDC provider, OpenStack service containers stop working and do not restart automatically.
If a VMware Integrated OpenStack with Kubernetes virtual machine with one SDDC provider is powered off and on, the virtual machine is migrated to another host. Subsequent operations on the provider, such as Kubernetes cluster creation and scale-out, will fail.
Workaround: To refresh the provider, perform the following steps:
- On the VMware Integrated OpenStack with Kubernetes virtual machine, log in as the root user:
vkube login --insecure
- Refresh the SDDC provider:
vkube provider refresh sddc provider-id --insecure
You can obtain the SDDC provider ID by running the vkube provider list --insecure command.