This section outlines sizing options for the OpenStack Control Plane, including instance counts, controller sizes, and the resources that can be scaled horizontally.
The hardware that is required to run VMware Integrated OpenStack depends on the scale of your deployment and the size of the controller that you select.
The VIO Manager comes in a single flavor and requires 4 vCPU, 16 GB of memory, and two disks (40 GB and 30 GB).
Although VIO supports both HA and non-HA deployments, non-HA deployments must be limited to proof-of-concept (POC) use only. No further sizing details are provided. For more information, see the VMware Integrated OpenStack 7.2 documentation.
For the production HA deployment, VIO supports the following controller form-factors:
Flavor | Small | Medium | Large
---|---|---|---
CPU | 4 | 8 | 12
Memory (GB) | 16 | 32 | 32
Controller Disk (GB) | 25 | 50 | 75
With VIO, the sizing of your control plane VMs is not fixed. Based on the real-world conditions of your cloud, you can add controller VMs as a day-2 operation. The more tasks the control plane performs, the more resources it requires. We recommend that you use the Medium or Large size for production deployments.
After the deployment, monitor CPU and memory consumption on the controller VMs. If the controllers are consistently running high, scale them horizontally by adding controllers and restarting the pods that consume the most resources. A pod restart does not impact service availability because a redundant copy of each service is always available. The restart triggers the Kubernetes scheduler to reassign the pod to the newly added controller.
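The monitoring and rebalancing workflow above can be sketched with standard kubectl commands. This is an illustrative sequence, not a VIO-specific procedure: the namespace and pod names are hypothetical, and adding the controller itself is done through the VIO management interface, not kubectl.

```shell
# Check per-node CPU and memory utilization (requires metrics-server)
kubectl top nodes

# Find the pods consuming the most CPU in the control-plane namespace
# (namespace "openstack" and pod names below are hypothetical)
kubectl top pods -n openstack --sort-by=cpu

# After a new controller has been added, restart a busy pod so the
# scheduler can place it on the new node; the redundant replica keeps
# the service available while the pod restarts
kubectl delete pod nova-api-0 -n openstack
```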
VIO Sizing Recommendations
Design Recommendation | Design Justification | Design Implication
---|---|---
Use HA for production deployments. | Avoids a single point of failure. | None
Use the Medium or Large size for production deployments. | The CPU and memory that the control plane consumes depend on the number and type of API calls it handles. If you expect a high-churn environment with many API calls, consider moving up to a larger size. | None
Deploy the Large size if an optional service such as Ceilometer is used. | Even if the end user never makes a call to Aodh/Gnocchi/Panko, Ceilometer uses more RAM and CPU than other OpenStack services. | Increased CPU and memory.
Increase the number of Pod replicas if the API response is slow. | Additional replicas spread API requests across more Pods. | None
Add controller nodes if the existing controllers are consistently running high on CPU and memory. | The Kubernetes scheduler reassigns high-utilization Pods to the newly added controller to even out usage across nodes. | None
Use the Kubernetes node cordon to control Pod-to-controller assignment. | Cordoning marks a node as unschedulable, so the scheduler places no new Pods on it. | None
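The cordon recommendation above can be sketched as follows. The node and pod names are hypothetical; the commands are standard kubectl and assume a cluster where you hold scheduling permissions.

```shell
# Mark a controller unschedulable so the scheduler avoids it
# (node name "controller-02" is hypothetical)
kubectl cordon controller-02

# Restart a pod; the scheduler must now place it on an
# uncordoned controller (names below are hypothetical)
kubectl delete pod nova-api-0 -n openstack

# Re-enable scheduling on the node when rebalancing is done
kubectl uncordon controller-02
```

Note that cordoning only affects future scheduling decisions; Pods already running on the node stay there until they are restarted.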