When you perform a restoration to a different host, you must make configuration changes on the vRealize Log Insight cluster.

About this task

Making changes to the configuration files directly from the appliance console is not officially supported beginning with vRealize Log Insight 3.0. See Knowledge Base article 2123058 for instructions on making these changes by using the Web UI in vRealize Log Insight 3.0.

These configuration changes are specific to vRealize Log Insight 3.0 and 2.5 builds and can be used with any backup and recovery tool.

Recovering to a different host requires manual configuration changes on the vRealize Log Insight cluster. This procedure assumes that the restored vRealize Log Insight nodes have been assigned IP addresses and FQDNs that differ from those of the source nodes from which the backup was taken.

Prerequisites

Review important information about Planning and Preparation.

Procedure

  1. List all new IP addresses and FQDNs that were assigned to each vRealize Log Insight node.
  2. Make the following configuration changes on the master node.
    1. Power on the master node if it is not already on.
    2. Use SSH to connect as a root user to the master node.
    3. If the vRealize Log Insight service is running, stop the service first by running the service loginsight stop command.
    4. Run cd /storage/core/loginsight/config.
    5. Run cp loginsight-config.xml#<n> backup-loginsight-config.xml, where <n> represents the largest number that is automatically appended to loginsight-config.xml during configuration changes.
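      Locating the highest-numbered revision can be scripted. The following is a minimal sketch, assuming the loginsight-config.xml#<n> naming convention described above; it demonstrates the selection logic in a scratch directory rather than touching the appliance path:

```shell
# Create sample revisions in a scratch directory (stand-ins for the real
# files under /storage/core/loginsight/config).
dir=$(mktemp -d)
touch "$dir/loginsight-config.xml#9" "$dir/loginsight-config.xml#10" "$dir/loginsight-config.xml#11"

# A numeric sort on the suffix after '#' picks the latest revision;
# a plain lexical sort would wrongly rank #9 above #10 and #11.
latest=$(ls "$dir" | sort -t'#' -k2 -n | tail -n1)
cp "$dir/$latest" "$dir/backup-loginsight-config.xml"
echo "$latest"   # loginsight-config.xml#11
```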
    6. Open the copied version of the configuration file in an editor or in the vRealize Log Insight 3.0 Web UI and look for lines that resemble the following lines. This configuration change applies to both vRealize Log Insight 3.0 and 2.5.
      <distributed overwrite-children="true">
        <daemon host="prod-es-vrli1.domain.com" port="16520" token="c4c4c6a7-f85c-4f28-a48f-43aeea27cd0e">
          <service-group name="standalone" />
        </daemon>
        <daemon host="192.168.1.73" port="16520" token="a5c65b52-aff5-43ea-8a6d-38807ebc6167">
          <service-group name="workernode" />
        </daemon>
        <daemon host="192.168.1.74" port="16520" token="a2b57cb5-a6ac-48ee-8e10-17134e1e462e">
          <service-group name="workernode" />
        </daemon>
      </distributed>

      The code shows three nodes. The first node is the master node, which shows <service-group name="standalone" />, and the remaining two nodes are worker nodes, which show <service-group name="workernode" />.

    7. For the master node, in the newly recovered environment, verify that the DNS entry that was used in the prerecovery environment can be reused.
      • If the DNS entry can be reused, update only the DNS entry to point to the new IP address of the master node.

      • If the DNS entry cannot be reused, replace the master node entry with a new DNS name (pointing to the new IP address).

      • If the DNS name cannot be assigned, as a last option, update the configuration entry with the new IP address.
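      As an illustration of the last option, the host attribute of the master daemon entry can be edited in place with sed. The hostname and IP address below are placeholders, and the edit is shown against a scratch file rather than the live configuration:

```shell
# Scratch file standing in for backup-loginsight-config.xml.
cfg=$(mktemp)
echo '<daemon host="prod-es-vrli1.domain.com" port="16520">' > "$cfg"

# Replace the old master host with the new IP address (placeholder values).
sed -i 's/host="prod-es-vrli1\.domain\.com"/host="192.168.50.10"/' "$cfg"
grep -o 'host="[^"]*"' "$cfg"   # host="192.168.50.10"
```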

    8. Update the worker node entries to reflect their new IP addresses.
    9. In the same configuration file, verify that you have entries for the NTP, SMTP, database, and appenders sections.

      This information applies to vRealize Log Insight 3.0 and 2.5. The <logging><appenders>...</appenders></logging> section is applicable only to the vRealize Log Insight 2.5 build and is not available in vRealize Log Insight 3.0.

      <ntp>
        <ntp-servers value="ntp1.domain.com, ntp2.domain.com" />
      </ntp>
       
      <smtp>
        <server value="smtp.domain.com" />
        <default-sender value="source.domain.com@domain.com" />
      </smtp>
       
      <database>
        <password value="xserttt" />
        <host value="vrli-node1.domain.com" />
        <port value="12543" />
      </database>
       
      <logging>
        <appenders>
          <appender name="REMOTE" class="com.vmware.loginsight.commons.logging.ThriftSocketAppender">
            <param name="RemoteHost" value="vdli-node1.domain.com" />
          </appender>
        </appenders>
      </logging>
      • If the configured NTP server values are no longer valid in the new environment, update them in the <ntp>...</ntp> section.

      • If the configured SMTP server values are no longer valid in the new environment, update them in the <smtp>...</smtp> section.

      • Optionally, change the default-sender value in the SMTP section. The value can be any string, but as a good practice it should identify the source from which the email is sent.

      • In the <database>...</database> section, change the host value to point to the master node FQDN or IP address.

      • In the <logging><appenders>...</appenders></logging> section, change the RemoteHost parameter value to reflect the new master node FQDN or IP address.
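      After editing, the values can be spot-checked with an XPath query instead of re-reading the file by eye. This sketch assumes xmllint (from libxml2) is available and runs against a placeholder fragment of the configuration:

```shell
# Placeholder fragment standing in for the edited configuration file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<config>
  <database>
    <password value="xserttt" />
    <host value="vrli-node1.domain.com" />
    <port value="12543" />
  </database>
</config>
EOF

# Print the database host to confirm it points at the new master node.
xmllint --xpath 'string(//database/host/@value)' "$cfg"
```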

    10. In the same configuration file, update the vRealize Log Insight ILB configuration section.

      This example shows the code for a vRealize Log Insight 3.0 appliance.

      <load-balancer>
        <leadership-lease-renewal-secs value="5" />
        <high-availability-enabled value="true" />
        <high-availability-ip value="10.158.128.165" />
        <high-availability-fqdn value="LB-FQDN.eng.vmware.com" />
        <layer4-enabled value="true" />
        <ui-balancing-enabled value="true" />
      </load-balancer>

      This example shows the code for a vRealize Log Insight 2.5 appliance.

      <load-balancer>
        <leadership-lease-renewal-secs value="5" />
        <high-availability-enabled value="true" />
        <high-availability-ip value="192.168.1.75" />
        <layer4-enabled value="true" />
      </load-balancer>
    11. Under the <load-balancer>...</load-balancer> section, update the high-availability-ip value if it is different from the current setting.
    12. In vRealize Log Insight 3.0, ensure that you also update the FQDN of the load balancer.
    13. Rename the updated configuration file to finish the changes.

      This step applies to vRealize Log Insight 2.5 only. In vRealize Log Insight 3.0, the changes are made through the Web UI.

      Run mv backup-loginsight-config.xml loginsight-config.xml#<n+1>, where <n> represents the largest number currently appended to the loginsight-config.xml files.
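      Computing <n+1> can also be scripted. A minimal sketch of the renumbering logic, demonstrated in a scratch directory with a placeholder revision number:

```shell
# Scratch directory with a sample current revision and the edited copy.
dir=$(mktemp -d)
touch "$dir/loginsight-config.xml#15" "$dir/backup-loginsight-config.xml"

# Find the largest numeric suffix and rename the edited copy to <n+1>.
n=$(ls "$dir"/loginsight-config.xml#* | sed 's/.*#//' | sort -n | tail -n1)
mv "$dir/backup-loginsight-config.xml" "$dir/loginsight-config.xml#$((n + 1))"
ls "$dir"
```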

    14. For vRealize Log Insight 2.5, restart the vRealize Log Insight service by running service loginsight start.

      For vRealize Log Insight 3.0, you restart from the Web UI through the Cluster tab on the Administration page. For each node listed, select its host name or IP address to open the details panel and click Restart Log Insight. The configuration changes are automatically applied to all cluster nodes.

    15. Wait 2 minutes after the vRealize Log Insight service starts to allow enough time for the Cassandra service to start before bringing other worker nodes online.

    Skip step 3 through step 9 for vRealize Log Insight 3.0; these steps apply only to vRealize Log Insight 2.5. For vRealize Log Insight 3.0, apply the configuration changes to all worker nodes by restarting each node from the Web UI, as described in step 2.

  3. Use SSH to connect to the first worker node as a root user.
  4. To stop the vRealize Log Insight service, run service loginsight stop.
  5. Copy the latest loginsight-config.xml file from the master node to the worker node.
  6. On the worker node, run scp root@[master-node-ip]:/storage/core/loginsight/config/loginsight-config.xml#<n> /storage/core/loginsight/config/
  7. Run service loginsight start.
  8. Wait 2 minutes after the vRealize Log Insight service starts to allow enough time for the Cassandra service to start completely.
  9. Repeat step 3 through step 8 for each worker node.
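The worker-node steps above can be sketched as a loop. This is a dry run with placeholder addresses and a placeholder revision number; it only prints the commands it would run, and assumes passwordless root SSH between the nodes if executed for real (remove the echo prefixes to execute):

```shell
MASTER=192.168.50.10                 # placeholder master node address
CONF=/storage/core/loginsight/config
LATEST="loginsight-config.xml#16"    # placeholder latest revision on the master

for worker in 192.168.50.11 192.168.50.12; do   # placeholder worker addresses
  echo ssh root@"$worker" "service loginsight stop"
  echo ssh root@"$worker" "scp root@$MASTER:$CONF/$LATEST $CONF/"
  echo ssh root@"$worker" "service loginsight start"
  echo sleep 120   # allow Cassandra to start before the next node
done
```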

What to do next

Verify that the restored vRealize Log Insight nodes are reachable at their newly assigned IP addresses and FQDNs and that the vRealize Log Insight service is running on each node.