There are two ways to change the memory and CPU resources of an NSX Manager node in a cluster.

Note that in normal operating conditions all three manager nodes must have the same CPU and memory resources. A mismatch of CPU or memory between NSX Managers in an NSX Manager cluster should only be done when transitioning from one size of NSX Manager to another size.

If you have configured resource allocation reservation for the NSX Manager VMs in vCenter Server, you might need to adjust the reservation. For more information, see the vSphere documentation.

Option 1 (resize a manager node with the same IP address) requires less effort. NSX requires that at least two managers remain available at all times. If you have a cluster VIP (virtual IP) configured, there will be a brief outage when the VIP switches to another node in the cluster. You can access the other two nodes directly during the outage if the VIP-assigned node is shut down for resizing. If you have deployed a load balancer for the manager nodes, health checks are triggered when a manager goes offline, and the load balancer should direct traffic to another node. Choose this option if you do not want to change the IP address of the manager nodes.
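If you are not sure whether a cluster VIP is configured, it can be read through the API before you pick an option. The endpoint below is the NSX cluster VIP API as I recall it, and the hostname is a placeholder; the helper is demonstrated on a sample response rather than a live call.

```shell
# Hedged sketch: check whether a cluster VIP is configured (placeholder host).
NSX_MGR="${NSX_MGR:-nsx-mgr-1.example.com}"

# Parse helper: pull ip_address out of the response JSON.
vip_of() {
  python3 -c 'import json,sys; print(json.load(sys.stdin).get("ip_address", "0.0.0.0"))'
}

# Live call (requires a reachable manager and credentials):
#   curl -ks -u admin "https://$NSX_MGR/api/v1/cluster/api-virtual-ip" | vip_of

# Demonstrate on a sample response; 0.0.0.0 means no VIP is set.
echo '{"ip_address": "10.10.0.50"}' | vip_of
```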

For option 2 (resize a manager node with a different IP address), you will need IP addresses for the three new managers. If you have cluster VIP configured, there will be a brief outage when the VIP switches to another node in the cluster. You can access the other two nodes directly during the outage in case the VIP-assigned node is deleted. If you have deployed a load balancer for the manager nodes, health checks will be triggered when a manager goes offline. The load balancer should direct traffic to another node. After all the steps are completed, you will need to reconfigure the load balancer (add the new managers and remove the old managers).

When you deploy a new manager node from the NSX Manager UI, if you get the error message "The repository IP address ... is not a part of the current management cluster. Please update the repository IP to the current node by running repository-ip CLI command. (Error code: 21029)," log in to the CLI of one of the existing nodes as admin and run the command set repository-ip. This will resolve the error.
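As a sketch, the fix above is a single CLI command on a surviving node. Running an NSX CLI command non-interactively over ssh is an assumption about your environment (you can equally type it at the node's console); the script below is a dry run that only prints the command.

```shell
# Dry-run sketch of the error-21029 fix (hypothetical node name).
run() { echo "+ $*"; }   # dry run: print instead of executing

# Log in to an existing node as admin and update the repository IP:
run ssh admin@nsx-mgr-1.example.com set repository-ip
```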

Prerequisites

  • Verify that the new size satisfies the system requirements for a manager node. For more information, see "NSX Manager VM and Host Transport Node System Requirements" in the NSX Installation Guide.
  • Familiarize yourself with how to run CLI commands. For more information, see the NSX Command-Line Interface Reference. Also familiarize yourself with how to change the memory and CPU resources of a VM. For more information, see the vSphere documentation.
  • Familiarize yourself with the requirements of an NSX Manager cluster. For more information, see "NSX Manager Cluster Requirements" in the NSX Installation Guide.
  • Familiarize yourself with how to deploy an NSX Manager into a cluster. For more information, see "Deploy NSX Manager Nodes to Form a Cluster from the UI" in the NSX Installation Guide.

Procedure

  • Option 1: Resize a manager node with the same IP address
    Option 1a: Change the CPU and/or memory of the existing manager nodes. You must make the change to one manager at a time so that two managers are available at all times.
    1. Log in to a manager's CLI as admin and run the shutdown command.
    2. From NSX Manager UI, verify that the state of the manager cluster is DEGRADED.
    3. From vSphere, change the memory and/or CPU resources of the manager VM that was shut down.
    4. From vSphere, power on the VM. From NSX Manager UI, wait for the state of the manager cluster to be STABLE.
    5. Repeat steps 1 to 4 for the other two manager VMs.
    6. If you have an NSX Manager cluster that is onboarded to the NSX+ Intelligence or NSX+ NDR service, activate the maintenance mode for the NSX+ Intelligence or NSX+ NDR agents. For details, see the maintenance mode steps.
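The Option 1a loop can be sketched as a dry run. The node names, target size, and the use of govc to script vSphere are assumptions, not part of this procedure; the script only prints the commands it would run.

```shell
# Dry-run sketch of Option 1a's rolling resize (hypothetical names/sizes).
NODES="nsx-mgr-1 nsx-mgr-2 nsx-mgr-3"
CPUS=12          # target vCPU count; verify against the NSX sizing table
MEM_MB=49152     # target memory in MB

run() { echo "+ $*"; }   # dry run: print instead of executing

for node in $NODES; do
  run ssh admin@"$node" shutdown                          # step 1
  run govc vm.change -vm "$node" -c "$CPUS" -m "$MEM_MB"  # step 3
  run govc vm.power -on "$node"                           # step 4
  # Before moving on, wait for the cluster to report STABLE again so
  # that two managers stay available at all times.
done
```

govc reads its vCenter connection details from GOVC_URL and related environment variables; step 2's DEGRADED check and the STABLE wait still happen in the NSX Manager UI or against the cluster status API.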
    Option 1b: Deploy new manager nodes.
    1. From NSX Manager UI, delete a manager node that was deployed from NSX Manager UI.
    2. From NSX Manager UI, deploy a new manager node with the new size into the cluster with an IP address that is the same as the one used by the manager node that was deleted in step 1.
    3. From NSX Manager UI, wait for the state of the manager cluster to be STABLE.
    4. Repeat steps 1 to 3 for the other manager node that was deployed from NSX Manager UI.
    5. For the manually-deployed manager node, log in to its CLI as admin and run the shutdown command.
    6. From another manager node, log in to its CLI as admin and run the get cluster config command to get the node ID of the manually-deployed manager node. Then run the detach node <node-id> command to detach the manually-deployed manager node from the cluster.
    7. From vSphere, delete the manually-deployed manager node VM.
    8. From NSX Manager UI, deploy a new manager node with the new size into the cluster with an IP address that is the same as the one used by the manually-deployed manager node.
    9. From NSX Manager UI, wait for the state of the manager cluster to be STABLE.
    10. If you have an NSX Manager cluster that is onboarded to the NSX+ Intelligence or NSX+ NDR service, activate the maintenance mode for the NSX+ Intelligence or NSX+ NDR agents. For details, see the maintenance mode steps.
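Several steps above wait for the manager cluster to return to STABLE. That wait can be scripted against the cluster status API; the endpoint and field names below are from the NSX REST API as I recall them, so verify them against your version's API reference. The helper is demonstrated on a sample response rather than a live call.

```shell
# Hedged sketch: read the overall manager cluster state from the API.
NSX_MGR="${NSX_MGR:-nsx-mgr-1.example.com}"   # placeholder hostname

# Parse helper: extract mgmt_cluster_status.status from the response JSON.
cluster_state() {
  python3 -c 'import json,sys; print(json.load(sys.stdin)["mgmt_cluster_status"]["status"])'
}

# Live call (requires a reachable manager and credentials):
#   curl -ks -u admin "https://$NSX_MGR/api/v1/cluster/status" | cluster_state

# Demonstrate on a sample response shaped like the API output:
echo '{"mgmt_cluster_status": {"status": "STABLE"}}' | cluster_state
```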
  • Option 2: Resize a manager node with a different IP address
    1. If you have VIP configured and if the new addresses and old addresses are in different subnets, from NSX Manager UI, remove the VIP.
      You must access NSX Manager using the manager's IP address and not the VIP address.
    2. From NSX Manager UI, deploy a new manager node with the new size into the cluster with an IP address that is different from the ones used by the current manager nodes.
    3. From NSX Manager UI, verify that the state of the manager cluster is STABLE.
    4. From NSX Manager UI, delete an old manager node that was deployed from NSX Manager UI.
    5. Repeat steps 2 to 4 for the other manager node that was deployed from NSX Manager UI.
    6. For the manually-deployed manager node, log in to its CLI as admin and run the shutdown command.
    7. From another manager node, log in to its CLI as admin and run the get cluster config command to get the node ID of the manually-deployed manager node. Then run the detach node <node-id> command to detach the manually-deployed manager node from the cluster.
    8. From vSphere, delete the manually-deployed manager node VM.
    9. From NSX Manager UI, deploy a new manager node with the new size into the cluster with an IP address that is different from the one used by the manually-deployed manager node.
    10. From NSX Manager UI, wait for the state of the manager cluster to be STABLE.
    11. If you have an NSX Manager cluster that is onboarded to the NSX+ Intelligence or NSX+ NDR service, activate the maintenance mode for the NSX+ Intelligence or NSX+ NDR agents. For details, see the maintenance mode steps.
    12. If you removed the old VIP in step 1, from NSX Manager UI, configure a new VIP. It must be in the same subnet that the new IP addresses are in.
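Step 12 can also be done through the API instead of the UI. The set_virtual_ip action shown below is from the NSX cluster VIP API as I recall it, and the hostname and address are placeholders; the sketch only prints the request it would send.

```shell
# Hedged sketch: reattach a cluster VIP in the new subnet (placeholders).
NSX_MGR="nsx-mgr-1.example.com"
NEW_VIP="10.20.0.50"

URL="https://$NSX_MGR/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=$NEW_VIP"
echo "POST $URL"
# curl -ks -u admin -X POST "$URL"   # uncomment to actually apply
```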

What to do next

Activate the maintenance mode for the NSX+ Intelligence and the NSX+ NDR agents after checking to see if the NSX Manager cluster is onboarded to either of these services.
  1. To check whether the NSX Manager cluster is onboarded to either the NSX+ Intelligence service or the NSX+ NDR service, send the following API request using the IP address of the resized NSX Manager.
    GET https://nsx-manager-ip-address/policy/api/v1/infra/sites/agents/intelligence/maintenance
    If the NSX Manager site is not onboarded, the API request returns the following message. In this case, no further action is required.
    {
    "enable": true,
    "agent_error_message": "Site is not onboarded with Saas. Invalid operation."
    }
    If the NSX Manager site is onboarded, the API request returns the following message. Continue with the next steps below.
    {
    "enable": false
    }
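The two responses above differ only in whether agent_error_message is present, which makes the check easy to script. The helper below is a sketch based solely on the response shapes shown.

```shell
# Classify the maintenance-endpoint response: onboarded or not.
onboarded() {
  python3 -c '
import json, sys
r = json.load(sys.stdin)
# Per the responses above: an agent_error_message means not onboarded.
print("not-onboarded" if "agent_error_message" in r else "onboarded")'
}

echo '{"enable": false}' | onboarded
echo '{"enable": true, "agent_error_message": "Site is not onboarded with Saas. Invalid operation."}' | onboarded
```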
  2. After you complete the resizing process for all three NSX Manager nodes in the NSX Manager cluster and have confirmed that the NSX Manager site is onboarded, activate the maintenance mode for the NSX+ Intelligence and the NSX+ NDR agents using the following API request.
    PUT https://nsx-manager-ip-address/policy/api/v1/infra/sites/agents/intelligence/maintenance
    {
    "enable": true
    }
  3. Wait for all the NSX Manager node objects to be in the REALIZED state. Use the following API call to check.
    GET https://nsx-manager-ip-address/policy/api/v1/infra/realized-state/realized-entities?intent_path=/infra/sites/agents/intelligence
    In the API call output, ensure that the output "state": "REALIZED" exists for all the objects in the list.
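The REALIZED check can be scripted as well. The results list is the standard NSX policy API list shape; that field name is an assumption to verify against your version. The helper is demonstrated on a sample response rather than a live call.

```shell
# Hedged helper: "yes" only if every returned entity is in state REALIZED.
all_realized() {
  python3 -c '
import json, sys
ents = json.load(sys.stdin).get("results", [])
print("yes" if ents and all(e.get("state") == "REALIZED" for e in ents) else "no")'
}

echo '{"results": [{"state": "REALIZED"}, {"state": "REALIZED"}]}' | all_realized
```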
  4. Log in to any of the NSX Manager nodes in the cluster and, from the command line, clear the NsxiAgentDockerConfig table using the following command.
    /opt/vmware/bin/corfu_tool_runner.py -n nsx -o clearTable -t NsxiAgentDockerConfig
  5. Deactivate the maintenance mode using the following API request.
    PUT https://nsx-manager-ip-address/policy/api/v1/infra/sites/agents/intelligence/maintenance
    {
    "enable": false
    }