Can a collector VM be clustered?

No. Clustering of collector VMs is not supported.

Does vRealize Network Insight need a load balancer like vRealize Log Insight?

No. vRealize Network Insight clustering is a scale-out solution, not an HA solution. If the primary platform VM (master node) fails, the whole service becomes unavailable.

What happens if the connection between a remote collector and the platform goes down?

If the connection between the platform and the collector VM goes down, the collector VM stores data locally (subject to the available disk space) and sends it to the platform when the connection is restored.

Is vRealize Log Insight integrated with vRealize Network Insight?

Yes. vRealize Log Insight is integrated with vRealize Network Insight 3.4. Alerts are sent to a syslog destination, which can be vRealize Log Insight.

What happens if a node reboots?

If a node reboots, it automatically rejoins the cluster and continues to be operational. If it is the primary node, there is a complete loss of service while it is down.

How do I change the IP of a platform node or a collector in a cluster?

In a cluster, you can change the IP address of any collector or platform node by using the CLI commands.
Note: The appliance reboots at the end of the process, so you must perform these steps on the VM console.
  • To change the collector IP, run the change-network-settings command.
  • To change the platform IP,
    1. Run the change-network-settings command to change the IP.
      Note: Ensure that the platform VM reboots successfully before you run the next command.
    2. For every platform whose IP is changed, run the update-IP-change command on all the other nodes.

      For example, for the IP change of platform1 from IP1 to IP2, run the update-IP-change command on platform2 and platform3 with IP1 and IP2 as the arguments. For the IP change of platform2, run the update-IP-change command on platform1 and platform3.

    3. Run the finalize-IP-change command on platform1 after step 1 and step 2 are completed.
    4. Run the show-connectivity-status command on each collector and check the Platform VM IP/URL value to identify whether the collector is associated with this platform.
    5. Run vrni-proxy set-platform --ip-or-fqdn platform-newIP on the associated collectors so that they point to the new platform IP.
      The following scenarios describe the procedure for common IP-change cases.

      Scenario: In a 3-node cluster, only the platform2 IP is changed, and no collector is associated with it.
      1. Run the change-network-settings command on platform2 to change the IP.
        Note: Ensure that the platform VM reboots successfully before you run the next command.
      2. Run the update-IP-change command on all the other nodes, except platform2.

        For example, for the IP change of platform2, run the update-IP-change command on platform1 and platform3.

      3. Run the finalize-IP-change command on platform1 after step 1 and step 2 are completed.
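
        A minimal console sketch of this scenario, assuming hypothetical addresses in which platform2 moves from 10.0.0.12 to 10.0.0.22:

          # On platform2 (VM console): change the IP; the appliance reboots at the end
          change-network-settings
          # On platform1 and platform3: register the old and new IP of platform2 (hypothetical addresses)
          update-IP-change 10.0.0.12 10.0.0.22
          # On platform1: finalize the change after the previous steps are completed
          finalize-IP-change
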
      Scenario: In a 3-node cluster, the platform1 and platform2 IPs are changed.
      1. Run the change-network-settings command on platform1.
      2. Run the change-network-settings command on platform2.
        Note: Ensure that each platform VM reboots successfully before you run the next command.
      3. Run update-IP-change platform1-oldIP platform1-newIP on platform2 and platform3.
      4. Run update-IP-change platform2-oldIP platform2-newIP on platform1 and platform3.
      5. Run the finalize-IP-change command on platform1.
      6. Run the show-connectivity-status command on all the collectors and search for Platform VM IP/URL to identify the associated platform nodes.
      7. Run vrni-proxy set-platform --ip-or-fqdn platform-newIP on the collectors that are associated with platform1 and platform2.

        For example, if CollectorA is associated with platform2 and the remaining collectors are associated with platform3, run vrni-proxy set-platform --ip-or-fqdn platform2-newIP on CollectorA only.

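        A minimal console sketch of steps 3 through 7 of this scenario, assuming hypothetical addresses in which platform1 moves from 10.0.0.11 to 10.0.0.21, platform2 moves from 10.0.0.12 to 10.0.0.22, and only CollectorA is associated with platform2:

          # On platform2 and platform3: register the platform1 change (hypothetical addresses)
          update-IP-change 10.0.0.11 10.0.0.21
          # On platform1 and platform3: register the platform2 change
          update-IP-change 10.0.0.12 10.0.0.22
          # On platform1: finalize both changes
          finalize-IP-change
          # On each collector: check the Platform VM IP/URL value to find the associated platform
          show-connectivity-status
          # On CollectorA only (associated with platform2): point it at the new platform2 IP
          vrni-proxy set-platform --ip-or-fqdn 10.0.0.22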

      Scenario: In a multi-node cluster setup, all the platform and collector IPs are changed.
      1. Run the change-network-settings command on all platforms to change the IPs.
        Note: Ensure that each platform VM reboots successfully before you run the next command.
      2. For every platform whose IP is changed, run the update-IP-change command on all the other nodes.

        For example, in a 3-node cluster, to update the IP of platform1 from IP1 to IP2, run the update-IP-change command on platform2 and platform3 with IP1 and IP2 as the arguments. To update the IP of platform2, run the update-IP-change command on platform1 and platform3.

      3. Run the finalize-IP-change command on platform1.
      4. Run the change-network-settings command on each collector to change the collector IP.
      5. Run the show-connectivity-status command on all the collectors and search for Platform VM IP/URL to identify the associated platform nodes.
      6. Run vrni-proxy set-platform --ip-or-fqdn platform-newIP on all collectors.

        For example, if CollectorA is associated with platform2, run vrni-proxy set-platform --ip-or-fqdn platform2-newIP on CollectorA.
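
        A minimal console sketch of the collector part of this scenario, assuming a hypothetical collector whose associated platform (platform2) now has the IP 10.0.0.22:

          # On the collector (VM console): change the collector IP; the appliance reboots at the end
          change-network-settings
          # On the collector: check the Platform VM IP/URL value to find the associated platform
          show-connectivity-status
          # Point the collector at the new IP of its associated platform (hypothetical address)
          vrni-proxy set-platform --ip-or-fqdn 10.0.0.22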

How much disk space is needed on platform1?

Platform1 requires more disk space than the other nodes in the cluster because some of the configuration data is stored only on platform1.

What happens if any of the nodes runs out of disk space?

The UI starts showing error messages when the disk space on a platform node reaches a certain threshold. Add more disk space to that platform node by logging in to vCenter.

How many times is data replicated in the cluster?

The data replication mechanism depends on the components present on the platform node.