This chapter describes how to migrate an existing airgap server set up with Telco Cloud Automation 2.3.x or earlier releases to the Telco Cloud Automation 3.0.0 airgap server deployed through OVA.

Follow these steps to perform the migration:
  1. Take a snapshot of the existing older version airgap server.
  2. Back up the certificate, IP address, and FQDN of the existing airgap server.
  3. Deploy the TCA 3.0.0 airgap OVA with a separate FQDN, IP address, and certificate.
  4. Configure the newly deployed airgap server to synchronize data from the older version airgap server.
  5. Shut down the older version airgap server.
  6. Configure the newly deployed airgap server with the backed-up FQDN, IP address, and certificate.
    Note: Migrating the FQDN, IP address, and certificate from the older version airgap server to the new one (Step 5 and Step 6) may cause downtime of the airgap services. In that case, wait for the services to come back online.

Migration procedure

  1. Check the status of the existing older version airgap server and take a snapshot.
    1. Log in to the existing older version airgap server via SSH.
      ssh root@<old version airgap server ip>

      Alternatively, log in to the airgap server using the Remote Console from the vCenter UI.

    2. On the airgap server terminal, check the status of the nginx and harbor services to ensure that they are running.
      root@photon-machine [ ~ ]# systemctl status nginx 
      ● nginx.service - The NGINX HTTP and reverse proxy server 
         Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor 
      preset: enabled) 
         Active: active (running) since Wed 2023-12-20 06:23:43 UTC; 24min ago 
       Main PID: 1629 (nginx) 
          Tasks: 2 (limit: 4915) 
         Memory: 2.2M 
         CGroup: /system.slice/nginx.service 
                 ├─1629 nginx: master process /usr/sbin/nginx 
                 └─1630 nginx: worker process 
       
      Dec 20 06:23:43 tca-ag-tmp.example.com systemd[1]: Starting The NGINX HTTP 
      and reverse proxy server... 
      Dec 20 06:23:43 tca-ag-tmp.example.com nginx[1626]: nginx: the 
      configuration file /etc/nginx/nginx.conf syntax is ok 
      Dec 20 06:23:43 tca-ag-tmp.example.com nginx[1626]: nginx: configuration 
      file /etc/nginx/nginx.conf test is successful 
      Dec 20 06:23:43 tca-ag-tmp.example.com systemd[1]: Started The NGINX HTTP 
      and reverse proxy server. 
      root@photon-machine [ ~ ]# systemctl status harbor 
      ● harbor.service - Harbor 
         Loaded: loaded (/etc/systemd/system/harbor.service; enabled; vendor 
      preset: enabled) 
         Active: active (running) since Wed 2023-12-20 06:23:39 UTC; 24min ago 
           Docs: http://github.com/vmware/harbor 
       Main PID: 1334 (docker-compose) 
          Tasks: 9 (limit: 4915) 
         Memory: 22.4M 
         CGroup: /system.slice/harbor.service 
                 └─1334 /usr/local/bin/docker-compose -f /opt/harbor/docker-
      compose.yml up
    3. Access airgap server via web browser.

      The Harbor login page should appear when you access the airgap server via a web browser at https://<airgap server fqdn>. Folders of the photon-updates repository are expected to appear when you access https://<airgap server fqdn>/updates/photon-updates/.

      Fix unexpected errors in the web browser, if any, before moving to the next steps.
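
      If a web browser is not available on the client, a similar check can be run with curl. This is a minimal sketch, assuming curl is installed on the client and <airgap server fqdn> is replaced with the actual FQDN; -k skips certificate verification and can be dropped if the CA certificate is trusted on the client:
      # Both requests are expected to return an HTTP 200 response
      curl -k -I https://<airgap server fqdn>
      curl -k -I https://<airgap server fqdn>/updates/photon-updates/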

    4. Open ICMP on the airgap server.
      Before migration, enable ICMP responses on the airgap server so that liveness can be checked from remote hosts. Open ICMP in iptables:
      iptables -A INPUT -p icmp -j ACCEPT
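      To verify that ICMP is now allowed, ping the airgap server from a remote host, for example the jump host (a minimal sketch; replace the placeholder with the airgap server IP or FQDN):
      ping -c 3 <old version airgap server ip>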
    5. Take snapshot of existing older version airgap server via vCenter Web UI.

      Locate the airgap server virtual machine in the vCenter Web UI and take a snapshot.

  2. Back up the certificate, IP address, and FQDN of the existing older version airgap server.

    Log in to the existing older version airgap server and back up the output of the following commands if the previous airgap/scripts/vars/user-inputs.yml has been deleted. Alternatively, you can retrieve the information from user-inputs.yml. You can also copy the relevant files to a jump host, as shown in the sketch after the list.

    • Display the FQDN:
      hostname
    • Display the current static IP configuration:
      cat /etc/systemd/network/10-eth0-static.network
    • Display the certificate suite:
      cd /etc/docker/certs.d/<airgap server fqdn>:8043/ 
      ls 
      for line in `ls -1`; do echo $line && cat $line;done
      For example, copy the content of the following files:
      root@airgap [ /etc/docker/certs.d/airgap.ipv6.eng.vmware.com:8043 ]# ls 
      airgap.ipv6.eng.vmware.com.cert  airgap.ipv6.eng.vmware.com.key  ca.crt
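
    Instead of copying command output manually, you can also copy the relevant files to a jump host over scp. This is a minimal sketch, assuming SSH access from the jump host and a hypothetical backup directory ~/airgap-backup:
      # Run on the jump host; ~/airgap-backup is an example location
      mkdir -p ~/airgap-backup
      scp -r root@<old version airgap server ip>:/etc/docker/certs.d/ ~/airgap-backup/
      scp root@<old version airgap server ip>:/etc/systemd/network/10-eth0-static.network ~/airgap-backup/
      ssh root@<old version airgap server ip> hostname > ~/airgap-backup/fqdn.txt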
  3. Deploy the TCA 3.0.0 airgap server OVA with a separate FQDN, IP, and certificate.

    Download the airgap server OVA from the VMware website and transfer it to the target vSphere environment.

    To import the OVA via the vSphere Web UI, refer to the vAppliance properties list.

    Note: Use a different IP address, FQDN, and certificate, because these will be reconfigured after data synchronization. To simplify the operation, enable DHCP and certificate autogeneration while deploying the OVA. Retain the defaults except for the VM FQDN name: specify a domain name different from that of the existing older version airgap server. Select Certificate Type: Generate-New and set the passwords.
    Power on the VM after importing the OVA and check the progress of the new airgap server via the vCenter Web Console. The message "Airgap server deploy done!" indicates successful deployment of the new airgap server.
  4. Configure the newly deployed airgap server to synchronize data from the older version airgap server.
    1. Log in to the newly deployed airgap server via SSH.
      vmware@dualstack-jumphost:~$ ssh admin@172.16.69.115 
      The authenticity of host '172.16.69.115 (172.16.69.115)' can't be 
      established. 
      ECDSA key fingerprint is 
      SHA256:aVAbd278g0ulTFUT6fgmugAEWFGdfQBZU599L4esN70. 
      Are you sure you want to continue connecting (yes/no)? yes 
      Warning: Permanently added '172.16.69.115' (ECDSA) to the list of known 
      hosts. 
      Welcome to Photon 4.0 (\m) - Kernel \r (\l) 
      Password: 
      admin@test [ ~ ]$
    2. Copy the Root CA file of older version airgap server to the newly deployed airgap server.
      admin@test [ ~ ]$ mkdir -p /usr/local/airgap/certs 
      admin@test [ ~ ]$ vi /usr/local/airgap/certs/remote_registry_001_ca.crt

      Copy the content of the ca.crt file from the older version airgap server path /etc/docker/certs.d/<airgap server fqdn>:8043/ca.crt into this file, and save it with :wq.
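
      Alternatively, if SSH from the new airgap server to the older one is permitted, the file can be copied directly with scp instead of pasting it (a minimal sketch; replace the FQDN placeholders):
      scp root@<old version airgap server fqdn>:/etc/docker/certs.d/<old version airgap server fqdn>:8043/ca.crt /usr/local/airgap/certs/remote_registry_001_ca.crt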

    3. Edit user-inputs.yml.
      admin@test [ ~ ]$ cd /usr/local/airgap/scripts/vars/ 
      admin@test [ /usr/local/airgap/scripts/vars ]$ ls 
      deploy-user-inputs.yml  harbor-credential.yml  setup-user-inputs.yml  
      user-inputs.yml  user-inputs.yml-20231220110230.bak 
      admin@test [ /usr/local/airgap/scripts/vars ]$ vi user-inputs.yml
      Update Section 6 with the existing older version airgap server details, for example:
      # 6. Options for remote sync
       
       # Information about remote harbor registry 
      remote_server_fqdn: <old version airgap server> 
      endpoint_name: remote_registry_001 
      username: admin 
      secret: Harbor12345 
      remote_server_cert_file: 
      /usr/local/airgap/certs/remote_registry_001_ca.crt 
       # Description about remote registry 
      reg_des: remote harbor registry as source 
       # Description about replication policy 
      policy_des: new policy for replication 
       
       # Replication settings 
      policy_name: policy1 
       # By default sync on every 30 mins 
      cron: 0 */30 * * * *
      The following table explains the parameters above. All fields are mandatory when setting up the remote sync operation:
      Parameter                  Description
      remote_server_fqdn         FQDN used by the existing airgap server; it must be resolvable by the DNS server.
      endpoint_name              User-defined name for the remote registry.
      username                   Remote Harbor username; admin is generally used.
      secret                     Remote Harbor password, currently written in user-inputs.yml; later, this will require user input in each case.
      remote_server_cert_file    Content of ca.crt from the remote airgap server. Save the file locally and reference its local path in the config file.
      reg_des                    Description of the endpoint registry above.
      policy_des                 Description of the new replication policy; generally covers address, usage, and data source information.
      policy_name                User-defined name for the new replication policy.
      cron                       Cron schedule for the remote sync job. You can set a customized schedule; by default, the sync runs every 30 minutes.
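      The default value 0 */30 * * * * is a six-field cron expression; as the comment in the file indicates, it runs the sync every 30 minutes. As an illustration only, an hourly schedule could look like this:
       # Run remote sync once per hour 
      cron: 0 0 * * * *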
    4. Kick off remote sync on the new airgap appliance.
      Save the user-inputs.yml config file and exit the editor. Then kick off the remote sync.
      admin@test [ /usr/local/airgap/scripts/vars ]$ su 
      Password: 
      root@test [ /usr/local/airgap/scripts/vars ]#  agctl rsync 
      executing ansible-playbook 
      /usr/local/airgap/scripts/bin/../../scripts/playbooks/remote-sync.yml >> 
      /usr/local/airgap/logs/ansible_rsync_20231220114131.log 2>&1 & 
      launched playbook execution, run "tail -f 
      /usr/local/airgap/logs/ansible_rsync_20231220114131.log" for running 
      progress 
      root@test [ /usr/local/airgap/scripts/vars ]# tail -f 
      /usr/local/airgap/logs/ansible_rsync_20231220114131.log
      Monitor the log until the run completes with failed=0.
      root@test [ /usr/local/airgap/scripts ]# tail -f 
      /usr/local/airgap/logs/ansible_rsync_20231220114131.log 
              ] 
          } 
      } 
       
      TASK [Switch ownership of photon repo folder] 
      ********************************** 
      changed: [localhost] 
       
      PLAY RECAP 
      ********************************************************************* 
      localhost                  : ok=67   changed=30   unreachable=0    
      failed=0    skipped=4    rescued=0    ignored=0

      Fix errors, if any, and edit user-inputs.yml by specifying a new endpoint_name and policy_name. Then run agctl rsync again.
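
      To locate failed tasks quickly, you can search the log once the run finishes. This is a minimal sketch; the log file name will differ for each run:
      # No output means no fatal tasks and no non-zero failed counts
      grep -iE "fatal|failed=[1-9]" /usr/local/airgap/logs/ansible_rsync_20231220114131.log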

      Check the data volumes: the used size of the two file systems mounted on /photon-reps and /data should be the same as on the older version airgap server.
      root@test [ /usr/local/airgap/scripts ]# df -h 
      ... 
      /dev/mapper/VGOS-LV_OS                       688G   94G  560G  15% 
      /photon-reps  
      /dev/mapper/VGDATA-LV_DATA                   196G   30G  157G  16% /data 
      ...
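      Since the older airgap server is still running at this point, you can run the same check on it over SSH for a direct comparison (a minimal sketch):
      ssh root@<old version airgap server ip> df -h /photon-reps /data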
  5. Shut down the older version airgap server.

    In the vCenter Web UI, locate the virtual machine of the older version airgap server and shut it down.

  6. Configure the newly deployed airgap server with the backed-up FQDN, IP address, and certificate.
    1. Copy the older version airgap server certificate to the newly deployed airgap server.
      On the new airgap server, create the certificate files backed up from the older version airgap server.
      root@test [ /usr/local/airgap/certs ]# ls 
      remote_registry_001_ca.crt 
      root@test [ /usr/local/airgap/certs ]# vi server.crt
      Copy the FIRST certificate from the backed-up content of the .cert file into this file, and save it with :wq.
      root@test [ /usr/local/airgap/certs ]# vi server.key

      Copy the backed-up content of the .key file into this file, and save it with :wq.
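
      Before reusing the certificate, you can confirm that server.crt and server.key belong together by comparing their public key moduli with openssl. This is a minimal sketch, assuming an RSA key; the two digests must match:
      openssl x509 -noout -modulus -in /usr/local/airgap/certs/server.crt | openssl md5
      openssl rsa -noout -modulus -in /usr/local/airgap/certs/server.key | openssl md5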

    2. Prepare user-inputs.yml with the older version airgap server's FQDN, IP address, and certificate.
      On the new airgap server, edit user-inputs.yml.
      root@test [ /usr/local/airgap/scripts/vars ]# vi user-inputs.yml
      Modify the following parameters:
      server_fqdn: "old version airgap server's fqdn"
      dhcp: False
      static_ip: "old version airgap server's IP address"
      default_gw: "old version airgap server's default gateway"
      dns_servers: "old version airgap server's dns servers"
      auto_generate: False
      server_cert_path: /usr/local/airgap/certs/server.crt
      server_cert_key_path: /usr/local/airgap/certs/server.key
      ca_cert_path: /usr/local/airgap/certs/remote_registry_001_ca.crt
      Add a new setting.
      harbor_password: "the password of harbor server"

      Save it with :wq.

    3. Reconfigure the newly deployed airgap server.
      root@test [ /usr/local/airgap/scripts/vars ]# agctl deploy 
      executing ansible-playbook 
      /usr/local/airgap/scripts/bin/../../scripts/deploy.yml >> 
      /usr/local/airgap/logs/ansible_deploy_20231220145841.log 2>&1 & 
      launched playbook execution, run "tail -f 
      /usr/local/airgap/logs/ansible_deploy_20231220145841.log" for running 
      progress

      You may lose connection as the IP address has been changed by the command.

      Check the vCenter Web UI to ensure that the newly deployed VM is configured with the IP address of the older version airgap server.

    4. Log in to the airgap server again and verify.
      Log in via SSH from any client and verify the logs.
      vmware@dualstack-jumphost:~$ ssh admin@<airgap-server-fqdn> 
      admin@airgap [ ~ ]$ su 
      root@airgap [ /home/admin ]# tail 
      /usr/local/airgap/logs/ansible_deploy_20231220145841.log 
      PLAY RECAP 
      ********************************************************************* 
      localhost                  : ok=89   changed=46   unreachable=0    
      failed=0    skipped=84   rescued=0    ignored=2

      The command is expected to complete with failed=0. If errors are reported in the log, fix them and run agctl deploy again.

    5. Verify via web browser.

      The Harbor login page should appear when you access the airgap server via a web browser at https://<airgap server fqdn>. Folders of the photon-updates repository are expected to appear when you access https://<airgap server fqdn>/updates/photon-updates/.

      Fix unexpected errors in the web browser, if any, before moving to the next steps.

    6. Pull an image.
      Log in via SSH to any TCA CaaS cluster node configured with the airgap server, and test pulling an image with a command copied from the Harbor UI. For example:
      crictl pull <airgap-server-
      fqdn>:443/registry/packages/capabilities@sha256:4d60101bc30eb1a14169a5072a
      5a297b096254a105cdb6033c4df0c54f89b1fa
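      To confirm that the image is now present on the node, you can list the local images (a minimal sketch, using the example image name above):
      crictl images | grep capabilities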
  7. Import TCA 3.0.0 packages.

    In an internet-accessible environment, create the TCA 3.0.0 incremental tarballs as described in the export command guide. Use export to download data from remote registries to a local folder and package it into a tar bundle.

    To import the tarballs to the newly set up airgap server, refer to the import command guide. Use import to import data from the tar bundle into the airgap server.

    After successfully importing the TCA 3.0.0 packages and images, the newly deployed airgap server contains data from both TCA 2.3.x and 3.0.0. Proceed to upgrade the TCA CaaS clusters.