If you encounter problems when setting up or operating the airgap server, use the following troubleshooting topics to understand the cause and, where a workaround exists, resolve the issue.

1. Issue with /path/to/repo/.repodata/ already exists

This issue occurs while building metadata for the Photon repository after all packages are synced locally. The createrepo command checks for a temporary folder named .repodata under the repo folder. If a .repodata folder is found, createrepo assumes that another createrepo session is running and exits. If you encounter this error, remove the .repodata folder and all its contents, and retry the repo sync operation.

1. Check the log file. If this issue occurs during a repo upgrade, the error is generally reported in ansible.log or upgrade_repo.log.

2. Check the folder that reports the error. For example, if the error occurs in the /photon-reps/updates/photon-updates/.repodata/ folder, list the items in its parent folder:
ls -lta /photon-reps/updates/photon-updates/
3. Remove the existing .repodata folder and all its contents.
rm -rf /photon-reps/updates/photon-updates/.repodata/
4. Rerun the failed setup or upgrade process.
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh setup
Or
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh sync
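The cleanup in steps 2 through 4 can be sketched as a small helper. The repo path below is the example path from step 2; substitute the folder named in your error log.

```shell
# Remove a stale createrepo temp folder before retrying the sync.
cleanup_repodata() {
    repo_dir="$1"
    if [ -d "$repo_dir/.repodata" ]; then
        # A leftover .repodata means a previous createrepo run was interrupted.
        rm -rf "$repo_dir/.repodata"
        echo "removed stale $repo_dir/.repodata"
    else
        echo "no stale .repodata in $repo_dir"
    fi
}

# Example path from step 2; replace with the folder from your error log.
cleanup_repodata /photon-reps/updates/photon-updates
```

After the helper reports that the folder is gone, rerun the setup or sync command shown above.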

2. Repository name already exists when upgrading Helm charts.

This issue is fixed in the current release but may occur with older release scripts when you update Helm charts. To fix this issue, perform the following steps:
root@photon-machine [ ~/airgap ]# helm repo rm <repo-name>
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh sync
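If you are unsure whether the repository is already registered, you can check `helm repo list` first. The sketch below removes the entry only when it is present; the name photon-helm is a placeholder, so use the repo name from your error message.

```shell
# repo_registered parses `helm repo list` output (NAME and URL columns)
# and reports whether the given repo name is present.
repo_registered() {
    awk -v name="$1" 'NR > 1 && $1 == name { found = 1 } END { exit !found }'
}

REPO_NAME="photon-helm"   # placeholder; use the name from the error message
if helm repo list 2>/dev/null | repo_registered "$REPO_NAME"; then
    helm repo remove "$REPO_NAME"   # equivalent to: helm repo rm <repo-name>
fi
```

Then rerun scripts/bin/run.sh sync as shown above.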

3. Disk is out of space.

This solution applies only when multiple disks are assigned to the airgap VM. Check disk usage with the df command and identify the disk that is out of space.
root@photon-machine [ ~/airgap ]# df -h
  1. From the vCenter Server user interface, select the airgap VM, click Edit Settings, and resize the target disk. If the disk size field is dimmed, the VM likely has snapshots. You must power off the VM and delete all the snapshots before you can expand the disks.
  2. Power on the airgap VM if needed.
  3. Run the resize playbook and check the disk usage.
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh resize
root@photon-machine [ ~/airgap ]# df -h
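To spot the full partition quickly, you can filter the df output for filesystems above a usage threshold. This is a sketch; the 90% threshold is an arbitrary example.

```shell
# full_filesystems prints the mount points whose Use% meets the threshold.
full_filesystems() {
    # $1 = threshold percent, stdin = output of `df -P`
    awk -v limit="$1" 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 >= limit) print $6 }'
}

df -P | full_filesystems 90
```

Any mount point it prints is a candidate for resizing.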

4. Generate an airgap server tech support bundle.

If you require assistance from VMware engineering, generate a tech support bundle and attach it to the problem description.
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh techsupport
The generated support bundle tarball is located in the /tmp/support-bundle folder; copy it out from there. For example:
root@tca-ag-tmp [ /tmp/support-bundle ]# ls
airgap-support-bundle-20220701084921.tar.gz

5. Switch single-disk schema to multi-disk schema.

If you are using a single-disk schema on your airgap server, you can switch to a multi-disk schema without setting up the entire system again. Following are the steps for switching to the multi-disk schema:
  1. Add new disks to the VM: Add 3 new disks to the airgap server VM. Add the disks in ascending order of size, and do not use disks smaller than the recommended sizes: 50 GB, 100 GB, and 100 GB.
  2. Stop system services: Stop the nginx, harbor, and docker services, in that order.
    systemctl stop nginx
    systemctl stop harbor
    systemctl stop docker
    
  3. Unmount the data disk partitions: When you set up the airgap server with a single-disk schema, the default data disk is /dev/sdb, with 3 partitions allocated for docker, harbor, and the photon repo to store the data. Check /etc/fstab to confirm the mount points of the data disk partitions.
    root@tca-ag-tmp [ ~ ]# cat /etc/fstab
    #system	mnt-pt	type	options	dump	fsck
    PARTUUID=479cd5f8-6133-496c-80c5-27f4dc88d7bd	/	ext4	defaults,barrier,noatime,noacl,data=ordered	1	1
    PARTUUID=7c6c3d42-3fac-48b0-a6a6-c38bca957873	/boot/efi	vfat	defaults	1	2
    /dev/cdrom	/mnt/cdrom	iso9660	ro,noauto	0	0
    tmpfs	/tmp	tmpfs	rw,mode=1777,size=10g
    # BEGIN ANSIBLE MANAGED BLOCK
    /dev/sdb1   /data   ext4   defaults   0 0
    /dev/sdb2   /docker  ext4  defaults  0 0
    /dev/sdb3   /photon-reps ext4 defaults  0 0
    # END ANSIBLE MANAGED BLOCK
    

    Unmount old data disk partitions:

    umount /dev/sdb1
    umount /dev/sdb2
    umount /dev/sdb3
    
  4. Comment out the mount points of the data disk partitions in /etc/fstab: The airgap server setup script previously added these mount points to /etc/fstab. Comment them out as follows:
    #/dev/sdb1   /data   ext4   defaults   0 0
    #/dev/sdb2   /docker  ext4  defaults  0 0
    #/dev/sdb3   /photon-reps ext4 defaults  0 0
    
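Instead of editing the entries by hand, you can comment them out with sed. The sketch below demonstrates the pattern on a scratch copy; on the airgap server, point it at /etc/fstab (the -i.bak option keeps a backup, and the pattern assumes the entries begin with /dev/sdb).

```shell
scratch=$(mktemp)
printf '/dev/sdb1   /data   ext4   defaults   0 0\n' > "$scratch"

# Prefix every line that starts with /dev/sdb with a '#'.
sed -i.bak 's|^/dev/sdb|#/dev/sdb|' "$scratch"
cat "$scratch"
```

On the real file: sed -i.bak 's|^/dev/sdb|#/dev/sdb|' /etc/fstab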
  5. Initialize the 3 new disks: The newly added disks are raw disks, so you must initialize them and format them with the ext4 filesystem before mounting them into the system. Use fdisk to initialize the disks and mkfs.ext4 to format the disk partitions.

    Initialize disk:

    root@tca-ag-tmp [ ~ ]# fdisk /dev/sdc     <<<<<#Initialize /dev/sdc
    
    Welcome to fdisk (util-linux 2.32.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0x4b863998.
    
    Command (m for help): p     <<<<<#Print current disk partition table
    Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x4b863998
    
    Command (m for help): n     <<<<<#Input n to create new partition
    Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p): p     <<<<<#Input p to set partition as primary partition
    Partition number (1-4, default 1):      <<<<<#Hit return with default value
    First sector (2048-104857599, default 2048):      <<<<<#Hit return with default value
    Last sector, +sectors or +size{K,M,G,T,P} (2048-104857599, default 104857599):      <<<<<#Hit return with default value
    
    Created a new partition 1 of type 'Linux' and of size 50 GiB.
    
    Command (m for help): w     <<<<<#Write changes and exit
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.
    

    Format disk partition:

    mkfs.ext4 /dev/sdc1

    Repeat the initialization and formatting for the other two disks, /dev/sdd and /dev/sde.
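The interactive fdisk dialog above can be driven non-interactively for each disk. This sketch assumes the three new disks appear as /dev/sdc, /dev/sdd, and /dev/sde; verify with lsblk first, because partitioning and mkfs.ext4 destroy any existing data.

```shell
# fdisk_script emits the same keystrokes as the dialog above:
# n (new partition), p (primary), 1 (partition number),
# two returns to accept the default sectors, then w (write table).
fdisk_script() {
    printf 'n\np\n1\n\n\nw\n'
}

for disk in /dev/sdc /dev/sdd /dev/sde; do
    if [ -b "$disk" ]; then            # skip devices that are not present
        fdisk_script | fdisk "$disk"
        mkfs.ext4 "${disk}1"
    fi
done
```

The block-device guard keeps the loop from touching machines where the disks are named differently.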
  6. Mount newly created disk partitions into the system: After the new disks are formatted, mount them into system folders:
    mount /dev/sdc1 /data
    mount /dev/sdd1 /photon-reps
    mount /dev/sde1 /docker
    
  7. Create a directory to mount the old disk partitions.
    mkdir -p /mnt/{data,docker,photon-reps}
  8. Mount the previous data disk partitions into the newly created folders.

    Refer to /etc/fstab so that these mounts stay consistent with the previous mount point mappings. For example, if /dev/sdb1 was previously mounted at /data, mount /dev/sdb1 at /mnt/data here.

    mount /dev/sdb1 /mnt/data/
    mount /dev/sdb2 /mnt/docker/
    mount /dev/sdb3 /mnt/photon-reps/
    
  9. Migrate data from the old disk to the new disks.
    mv /mnt/docker/* /docker/
    mv /mnt/photon-reps/* /photon-reps/
    mv /mnt/data/harbor/ /data/
    mkdir -p /data/docker
    
  10. Add the new disk mount points to /etc/fstab. After the old /dev/sdb disk is detached, the remaining disks are typically re-enumerated as /dev/sdb, /dev/sdc, and /dev/sdd on reboot, so the entries must match the mounts made in step 6:
    /dev/sdb1   /data   ext4   defaults   0 0
    /dev/sdc1   /photon-reps ext4 defaults  0 0
    /dev/sdd1   /docker  ext4  defaults  0 0
    
  11. Detach the old data disk from the VM.
    1. On the VM, navigate to Edit Settings.
    2. Remove the old data disk. Do not select Delete files from datastore.
      Note: The preceding step detaches the disk file from the VM but does not delete the disk file. You can re-add the disk file from the datastore if there is a problem using the new disks.
    3. Click OK to save the settings.
    4. Reboot the VM.

    After the VM reboots, the new disk partitions should be mounted into the system, and the docker, harbor, and nginx services should be running. This confirms that the disk schema switch is complete. Once the new disk schema works as expected, you can wait a few days and then remove the old data disk file from the datastore.
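The post-reboot checks described above can be scripted. The check_mounts helper and the service loop below are a sketch:

```shell
# check_mounts prints any of the given paths that are not mount points.
check_mounts() {
    for mp in "$@"; do
        mountpoint -q "$mp" || echo "$mp"
    done
}

missing=$(check_mounts /data /docker /photon-reps)
if [ -n "$missing" ]; then
    echo "not mounted: $missing"
fi

# Verify the services came back up after the reboot.
for svc in docker harbor nginx; do
    systemctl is-active --quiet "$svc" || echo "$svc is not active"
done
```

If any mount point or service is reported, recheck the /etc/fstab entries from step 10 before removing the old disk file.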

6. Checklist for debugging sync operation failures.

Failures might occur during the sync operation, and the root cause is not always apparent from the log files. Use the following checklist when debugging sync operation failures.
  1. Check the disk space usage: Push failures in the logs often indicate that a disk is full. Check whether there is sufficient space on the disks.
    df -h

    Check whether space utilization is at 100% on the data, docker, and photon-reps partitions. If they are full, either release some space or resize the disks to extend the storage, and then resume the sync operation. The following are some methods to release and expand disk space:

    1. If the airgap server is set up with multi-disk schema, expand the disk size of the airgap server VM and run the following command:
      run.sh resize

      After the disk is resized, redo the sync operation.

    2. If the airgap server is set up with single-disk schema, switch to multi-disk schema and then redo the sync operation.
    3. If the docker folder is full, delete the image caches and redo the sync operation.
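For the docker folder, unused image caches can be reclaimed with docker's built-in prune commands. This is a sketch, guarded so it runs only where the docker CLI and daemon are reachable; note that prune -a removes every image not used by a container, so images will be re-pulled on the next sync.

```shell
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker system df            # show how much space is reclaimable
    docker image prune -a -f    # delete all images not used by any container
fi
```

After pruning, redo the sync operation.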
  2. Check the login to the airgap server's Harbor: A push failure may occur because the login to the airgap server's Harbor failed or has expired. Log in to the Harbor registry with your credentials.
    docker login <airgap server FQDN>:<harbor port>
  3. Check the Harbor service run status: The Harbor service may stop working when the disk is full or when the dependent docker service needs a restart. To check the Harbor services, do the following:
    1. Check the systemd service.
      systemctl status harbor
    2. If status is not active, restart the harbor service.
      systemctl restart harbor

      Alternatively, check the Harbor status by using docker-compose. Go to the /opt/harbor folder and run the following command:

      docker-compose ps

      If any of the Harbor containers are not running or not in Healthy status, restart the Harbor containers by using docker-compose.

      docker-compose down -v
      docker-compose up -d
      docker-compose ps
      

7. Redirect the output folder when techsupport bundle generation fails with insufficient space.

When the airgap server has been up and running for a long period, the harbor and nginx logs can grow and fill the /tmp folder. Due to insufficient space, techsupport bundle generation on the airgap server fails.

Workaround: Modify the Ansible playbook so that the techsupport bundle is generated on another disk or partition with sufficient space.

Do the following to generate the techsupport bundle into another disk:
  1. Edit the ansible playbook {root-dir}/airgap/scripts/playbooks/airgap-support.yml.
  2. Change the default folder location /tmp to a different folder.

    Modify the locations of the Create logs dir and Package support-bundle sections in the playbook.

    ......
      tasks:
      - name: Create logs dir
        file:
          path: '/photon-reps/airgap-support'
          state: directory
        register: support_dir
    ......
      - name: Package support-bundle
        shell: "{{ item }}"
        with_items:
          - mkdir -p /photon-reps/support-bundle/
          - tar cvfz /photon-reps/support-bundle/airgap-support-bundle-`date '+%Y%m%d%H%M%S'`.tar.gz {{ support_dir.path }}/*
          - rm -rf {{ support_dir.path }}/
    
    Note: The Create logs dir folder (where the temporary files are saved) and the Package support-bundle folder (where the support bundle is generated) can be two different folders.
  3. Run techsupport to collect the log bundle.
    run.sh techsupport
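Before picking the target folder in the playbook, you can confirm it has more free space than /tmp. The avail_kb helper below is a small sketch:

```shell
# avail_kb prints the available space, in KiB, on the filesystem holding a path.
avail_kb() {
    df -Pk "$1" 2>/dev/null | awk 'NR == 2 { print $4 }'
}

echo "/tmp: $(avail_kb /tmp) KiB free"
echo "/photon-reps: $(avail_kb /photon-reps) KiB free"
```

Pick a target partition whose available space comfortably exceeds the expected bundle size.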

8. Clear the Harbor credentials when the Setup, Sync, Deploy, and Import operations fail

The Harbor credentials are no longer stored in user-inputs.yml; instead, you are prompted for them when the Setup, Sync, Deploy, and Import operations begin.

The Harbor credentials are saved in the {root-dir}/airgap/scripts/vars/harbor_credentials.yml file temporarily. After the Setup, Sync, Deployment, and Import operations are completed, the Harbor credentials are removed.

However, if the credentials are not automatically removed from the temp file, you should manually remove them after the Setup, Sync, Deployment, and Import operations are completed. To remove the Harbor credentials, do the following:
  1. Edit the {root-dir}/airgap/scripts/vars/harbor_credentials.yml temp file and delete the harbor_password line.
  2. Save the file and restart the failed operations.
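Step 1 can also be done with sed. This is a sketch; ROOT_DIR is an assumption standing in for the {root-dir} placeholder used in this document.

```shell
ROOT_DIR="${ROOT_DIR:-$HOME}"   # assumption: substitute your actual {root-dir}
CRED_FILE="$ROOT_DIR/airgap/scripts/vars/harbor_credentials.yml"

if [ -f "$CRED_FILE" ]; then
    # Delete only the harbor_password line; -i.bak keeps a backup copy.
    sed -i.bak '/^harbor_password/d' "$CRED_FILE"
fi
```

Then restart the failed operation as described in step 2.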