You can import virtual machines from vSphere into your VMware Integrated OpenStack deployment and manage them like OpenStack instances.

Imported virtual machines become OpenStack instances, but they differ from instances created directly in OpenStack in the following ways:

  • If a virtual machine has multiple disks, the disks are imported as Cinder volumes.
  • Existing networks are imported as provider networks of type portgroup with access restricted to the given tenant.
  • After a virtual machine with a specific network backing is imported, the same network cannot be imported to a different project.
  • Neutron subnets are automatically created with DHCP disabled.
  • Neutron ports are automatically created based on the IP and MAC address of the network interface card on the virtual machine.
Note: If the DHCP server does not assign the same IP address when a lease is renewed, the instance information in OpenStack shows an incorrect IP address. To avoid this problem, use static DHCP bindings on existing DHCP servers and do not run new OpenStack instances on imported networks.
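
If you want to confirm how an imported virtual machine was mapped, the standard OpenStack command-line client can show the resulting instance, volumes, subnet, and ports. The following commands are an illustrative sketch: import_service is the default project described later in this procedure, and the subnet and network names are placeholders.

  openstack server list --project import_service
  openstack volume list --project import_service
  openstack subnet show imported-subnet
  openstack port list --network imported-network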

You import VMs using the Data Center Command-Line Interface (DCLI) on the OpenStack Management Server.

Prerequisites

  • Deploy VMware Integrated OpenStack with NSX Data Center for vSphere or VDS networking. Importing virtual machines is not supported for NSX-T Data Center deployments.
  • Verify that the virtual machines that you want to import reside in the same vCenter Server instance as your VMware Integrated OpenStack deployment.

Procedure

  1. In vSphere, add the clusters containing the desired virtual machines as compute clusters in your VMware Integrated OpenStack deployment. For instructions, see Add Compute Clusters to Your Deployment.
  2. Log in to the OpenStack Management Server as viouser.
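    For example, assuming SSH access to the management server, where mgmt-server-ip is a placeholder for its address:
    ssh viouser@mgmt-server-ip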
  3. If you want to prevent imported virtual machines from being relocated or renamed, update your deployment configuration.
    1. If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
      sudo mkdir -p /opt/vmware/vio/custom
      sudo cp /var/lib/vio/ansible/custom/custom.yml.sample /opt/vmware/vio/custom/custom.yml
    2. Open the /opt/vmware/vio/custom/custom.yml file in a text editor.
    3. Uncomment the nova_import_vm_relocate parameter and set its value to false.
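      After this change, the relevant line in /opt/vmware/vio/custom/custom.yml looks similar to the following illustrative excerpt (standard YAML key-value syntax is assumed):
      # Illustrative excerpt: prevent imported virtual machines from being relocated or renamed
      nova_import_vm_relocate: false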
    4. Deploy the updated configuration.
      sudo viocli deployment configure

      Deploying the configuration briefly interrupts OpenStack services.

  4. Connect to the VMware Integrated OpenStack vAPI endpoint.
    dcli +server http://mgmt-server-ip:9449/api +i

    If you cannot connect to the server, see DCLI Cannot Connect to Server.

  5. Import unmanaged virtual machines into VMware Integrated OpenStack. Example invocations with placeholder values are shown after the option descriptions below.
    Note: When you execute a command, DCLI prompts you to enter the administrator credentials for your vCenter Server instance. You can save these credentials to avoid entering your username and password every time.
    • Run the following command to import all unmanaged virtual machines:
      com vmware vio vm unmanaged importall --cluster cluster-name [--tenant-mapping {FOLDER | RESOURCE_POOL} [--root-folder root-folder | --root-resource-pool root-resource-pool]]
      The command accepts the following options:
      • --cluster
        Enter the compute cluster that contains the virtual machines that you want to import.
      • --tenant-mapping {FOLDER | RESOURCE_POOL}
        Specify whether to map imported virtual machines to OpenStack projects based on their location in folders or resource pools.
        If you do not include this parameter, all imported virtual machines become instances in the import_service project by default.
      • --root-folder ROOT_FOLDER
        If you specified FOLDER for the --tenant-mapping parameter, you can provide the name of the root folder containing the virtual machines to be imported.
        All virtual machines in the specified folder or any of its subfolders are imported as instances into an OpenStack project with the same name as the folder in which they are located.
        Note: If you specify --tenant-mapping FOLDER but do not specify --root-folder, the name of the top-level folder in the cluster is used by default.
      • --root-resource-pool ROOT_RESOURCE_POOL
        If you specified RESOURCE_POOL for the --tenant-mapping parameter, you can provide the name of the root resource pool containing the virtual machines to be imported.
        All virtual machines in the specified resource pool or any of its child resource pools are imported as instances into an OpenStack project with the same name as the resource pool in which they are located.

    • Run the following command to import a specified virtual machine:
      com vmware vio vm unmanaged importvm --vm vm-id [--tenant project-name] [--nic-mac-address nic-mac --nic-ipv4-address nic-ip] [--root-disk root-disk-path] [--nics specifications]
      The command accepts the following options:
      • --vm
        Enter the identifier of the virtual machine that you want to import.
        You can view the ID values of all unmanaged virtual machines by running the com vmware vio vm unmanaged list command.
      • --tenant
        Specify the OpenStack project into which you want to import the virtual machine.
        If you do not include this parameter, the import_service project is used by default.
      • --nic-mac-address
        Enter the MAC address of the network interface card on the virtual machine.
        If you do not include this parameter, the import process attempts to discover the MAC and IP addresses automatically.
        Note: If you include this parameter, you must also include the --nic-ipv4-address parameter.
      • --nic-ipv4-address
        Enter the IP address and prefix for the network interface card on the virtual machine. Enter the value in CIDR notation (for example, 10.10.1.1/24).
        This parameter must be used together with the --nic-mac-address parameter.
      • --root-disk
        For a virtual machine with multiple disks, specify the datastore path of the root disk in the following format: --root-disk '[datastore1] foo/foo_1.vmdk'
      • --nics
        For a virtual machine with multiple NICs, specify the MAC and IP addresses of each NIC in JSON format. Use the following key-value pairs:
        • mac_address: MAC address of the NIC in standard format
        • ipv4_address: IPv4 address in CIDR notation
        For example:
        --nics '[{"mac_address": "00:50:56:9a:f5:7b", "ipv4_address": "10.10.1.1/24"}, {"mac_address": "00:50:56:9a:ee:be", "ipv4_address": "10.10.2.1/24"}]'
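
    The following example invocations are illustrative only. The cluster name, root folder, virtual machine identifier, project name, and NIC addresses are hypothetical placeholders; the command names and options are the ones described above.
      com vmware vio vm unmanaged list
      com vmware vio vm unmanaged importall --cluster compute-cluster-01 --tenant-mapping FOLDER --root-folder imported-vms
      com vmware vio vm unmanaged importvm --vm vm-42 --tenant finance --nic-mac-address 00:50:56:9a:f5:7b --nic-ipv4-address 10.10.1.5/24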