As part of What-If workload planning for traditional infrastructure, you use the Workload Planning: Traditional pane to fill in the details of your virtual machines. You select where to add or remove the workload, configure the workload yourself or use an existing VM as a template, and set a time frame. An advanced configuration option lets you define the configuration more precisely.
Where You Can Add or Remove VMs
On the What-If Analysis screen, click Add VMs or Remove VMs in the Workload Planning: Traditional pane. The following options apply when you add VMs to a workload.
Option | Description |
---|---|
Scenario Name | Name of your scenario. |
Location | Where do you want to add the workload? Select from the list of existing data centers. You can optionally select the exact cluster where you want the workload to reside. |
Application Profile/Configure | Allows you to configure the virtual compute resource, including vCPU, memory, and storage. |
Application Profile/Import from existing VM | Displays the Select VMs dialog box where you can select one or more existing VMs to use as templates for your workload. Once you have made your selections, you return to this screen to enter the quantity of each chosen VM that you want to incorporate as a template into your workload. |
Choose Your Workload | With the Configure radio button selected, you can size your workload by defining values for vCPU, memory, and disk space. These are your allocation values. |
Expected Utilization | Set the percentage of the total workload capacity that you expect to use on average. Click Advanced Configuration to set the expected utilization percentage for CPU, Memory, and Disk individually and to select thin or thick provisioning. vRealize Operations calculates demand based on these values. |
Annual Projected Growth | Set the percentage by which you expect your capacity to grow annually. Click Advanced Configuration to set the growth percentage for CPU, Memory, and Disk individually. For example, if the utilization is 100 at the start date and you set the annual growth to 10%, then at the end of the year the utilization grows to 110 (a worked sketch of this arithmetic follows the table). You can set Annual Projected Growth to 0% if no growth is expected. |
Number of VMs (optional)/Quantity | You can optionally select how many VMs to spread the workload across. |
Implementation date/End Date (optional) | Select the start and end dates for the workload from the pop-up calendars. The end date cannot be later than one year from the current date. |
Run Scenario | Click to run the scenario. The system calculates whether the workload fits into the location you selected. |
Save | Save the scenario. |
Cancel | Cancel the scenario. |
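The Expected Utilization and Annual Projected Growth settings determine how the scenario's demand is sized over time. The sketch below illustrates the arithmetic from the table above, assuming that allocation is scaled by expected utilization and that growth is applied linearly over the year; the function name and the linear-growth assumption are illustrative, not the product's documented model.

```python
def projected_demand(allocation, expected_utilization_pct, annual_growth_pct, months):
    """Estimate demand for one resource (for example, GB of memory).

    allocation               -- allocated capacity, for example 100 (GB)
    expected_utilization_pct -- Expected Utilization, for example 70 (%)
    annual_growth_pct        -- Annual Projected Growth, for example 10 (%)
    months                   -- months elapsed since the implementation date
    """
    # Demand at the start date is the allocation scaled by expected utilization.
    start_demand = allocation * expected_utilization_pct / 100.0
    # Assumption: growth is applied linearly, pro-rated over the elapsed months.
    growth_factor = 1.0 + (annual_growth_pct / 100.0) * (months / 12.0)
    return start_demand * growth_factor

# The example from the table: utilization of 100 at the start date with 10%
# annual growth reaches 110 at the end of the year.
print(round(projected_demand(100, 100, 10, 12), 2))  # 110.0
```

With Annual Projected Growth set to 0%, the demand stays flat for the whole scenario window.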
The following options apply when you remove VMs from a workload.

Option | Description |
---|---|
Scenario Name | Name of your scenario. |
Location | From where do you want to remove the workload? Select from the list of existing data centers. You can optionally choose the exact cluster from where you want to remove the workload. |
Application Profile/Configure | Allows you to configure the virtual compute resource, including vCPU, memory, and storage. After you have configured the scenario, enter the quantity of custom VMs that you want to remove. |
Application Profile/Import Existing VMs | Displays the Select VMs dialog box where you can choose one or more existing VMs. Once you have made your selections, you return to this screen to enter the quantity of each chosen VM you want to remove from your workload. Note: The recommended maximum for workload removal is 100 VMs. |
Application Profile/Custom: Choose your workload | With the Configure radio button selected, you can size your workload by defining values for vCPU, memory, and disk space. |
Implementation date/End Date (optional) | Select the start and end dates for the workload from the pop-up calendars. The end date cannot be later than one year from the current date. You can also leave the end date blank. |
Run Scenario | Click to run the scenario. The system calculates the impact of removing the workload on the cluster, in terms of time remaining and capacity remaining (a simplified illustration follows this table). |
Save | Save the scenario. |
Cancel | Cancel the scenario. |
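When a removal scenario runs, the result reports how much headroom the cluster would regain. The sketch below is a simplified, hypothetical illustration of that bookkeeping, assuming the removed VMs' demand is subtracted directly from the cluster's current demand; the function name, the numbers, and the simple subtraction model are assumptions, not the product's exact calculation.

```python
def capacity_after_removal(cluster_capacity, cluster_demand, removed_vm_demands):
    """Return (new demand, remaining headroom) after removing a set of VMs.

    cluster_capacity   -- total usable capacity of the cluster (e.g., GB of memory)
    cluster_demand     -- current demand on the cluster, in the same unit
    removed_vm_demands -- per-VM demand values that the scenario removes
    """
    # Assumption: the removed VMs' demand simply disappears from the cluster.
    new_demand = cluster_demand - sum(removed_vm_demands)
    remaining_headroom = cluster_capacity - new_demand
    return new_demand, remaining_headroom

# Hypothetical example: a cluster with 400 GB of usable memory and 320 GB of
# demand, with three VMs removed that each demand 16 GB.
demand, headroom = capacity_after_removal(400, 320, [16, 16, 16])
print(demand, headroom)  # 272 128
```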