If you encounter problems when setting up the airgap server, use these troubleshooting topics to understand the problem and apply a workaround where one exists.
1. Seeing issue with /path/to/repo/.repodata/ exists
This issue occurs while building metadata for the Photon repository after all packages are synced locally. The createrepo command checks for a temporary folder named .repodata under the repo folder. If a .repodata folder is found, createrepo assumes that another createrepo session is running and exits. If you encounter this error, remove the .repodata folder and all its contents, and retry the repo sync operation.
1. Check the log file. This issue is generally logged in ansible.log, or in upgrade_repo.log if it occurs during a repo upgrade.
2. Verify that the stale .repodata folder exists and remove it:
ls -lta /photon-reps/updates/photon-updates/
rm -rf /photon-reps/updates/photon-updates/.repodata/
3. Re-run the setup and sync operations:
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh setup
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh sync
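The recovery above can be sketched as a small guard script. This is a minimal sketch, not part of the product scripts: the repo path is created here with mktemp purely to make the example self-contained; on a real airgap server, point REPO_DIR at the actual Photon repo path, such as /photon-reps/updates/photon-updates.

```shell
# Sketch: clear a stale .repodata lock left behind by an interrupted
# createrepo run, then the sync can be retried.
# REPO_DIR is simulated with a temp dir for illustration only; substitute
# your actual Photon repo path on the airgap server.
REPO_DIR="$(mktemp -d)"
mkdir -p "$REPO_DIR/.repodata"     # pretend an interrupted sync left this behind

if [ -d "$REPO_DIR/.repodata" ]; then
  echo "Stale .repodata found; removing it before retrying the sync"
  rm -rf "$REPO_DIR/.repodata"
fi
```

After the folder is gone, re-run scripts/bin/run.sh sync as shown above.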
2. Repository name already exists when upgrading Helm charts.
Remove the existing Helm repository and re-run the sync operation:
root@photon-machine [ ~/airgap ]# helm repo rm <repo-name>
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh sync
3. Disk is out of space.
- Check the disk usage with the df command and find the disk that is out of space.
root@photon-machine [ ~/airgap ]# df -h
- From the vCenter Server user interface, select the airgap VM, click Edit Settings, and resize the target disk. If the disk size field is dimmed, the VM likely has snapshots. You must power off the VM and delete all the snapshots before you can expand the disks.
- Power on the airgap VM if needed.
- Run resize playbook and check the disk usage.
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh resize
root@photon-machine [ ~/airgap ]# df -h
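As a quick way to spot the partition that triggered the failure, the df check above can be scripted. This is a sketch, not part of the airgap scripts; the 90% threshold is an arbitrary example value.

```shell
# Sketch: list filesystems at or above a usage threshold, using POSIX df output.
# THRESHOLD is an example value; adjust to taste.
THRESHOLD=90
FULL_DISKS=$(df -P | awk -v t="$THRESHOLD" 'NR > 1 {
  use = $5; sub(/%/, "", use)                      # strip the % sign from Capacity
  if (use + 0 >= t) printf "%s %s%% %s\n", $1, use, $6
}')

if [ -n "$FULL_DISKS" ]; then
  echo "Partitions at or above ${THRESHOLD}% usage:"
  echo "$FULL_DISKS"
else
  echo "No partition is above ${THRESHOLD}% usage"
fi
```

Any partition listed here is a candidate for the resize or cleanup steps described in this topic.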
4. Generate Airgap Server tech support bundle.
root@photon-machine [ ~/airgap ]# scripts/bin/run.sh techsupport
root@photon-machine [ ~/airgap ]# df -h
root@tca-ag-tmp [ /tmp/support-bundle ]# ls
airgap-support-bundle-20220701084921.tar.gz
5. Switch single-disk schema to multi-disk schema.
- Add new disks to the VM: Add 3 new disks to the airgap server VM. Add the disks in increasing order of size, and do not make any disk smaller than its recommended size. The recommended disk sizes are 50 GB, 100 GB, and 100 GB.
- Stop system services: Stop the nginx, harbor, and docker services, in that order.
systemctl stop nginx
systemctl stop harbor
systemctl stop docker
- Unmount the data disk partitions: When you set up the airgap server with a single-disk schema, the default data disk is /dev/sdb, with 3 partitions allocated for docker, harbor, and the photon repo to store the data. Verify /etc/fstab to confirm the mount points of the data disk partitions.
root@tca-ag-tmp [ ~ ]# cat /etc/fstab
#system mnt-pt type options dump fsck
PARTUUID=479cd5f8-6133-496c-80c5-27f4dc88d7bd / ext4 defaults,barrier,noatime,noacl,data=ordered 1 1
PARTUUID=7c6c3d42-3fac-48b0-a6a6-c38bca957873 /boot/efi vfat defaults 1 2
/dev/cdrom /mnt/cdrom iso9660 ro,noauto 0 0
tmpfs /tmp tmpfs rw,mode=1777,size=10g
# BEGIN ANSIBLE MANAGED BLOCK
/dev/sdb1 /data ext4 defaults 0 0
/dev/sdb2 /docker ext4 defaults 0 0
/dev/sdb3 /photon-reps ext4 defaults 0 0
# END ANSIBLE MANAGED BLOCK
Unmount the old data disk partitions:
umount /dev/sdb1
umount /dev/sdb2
umount /dev/sdb3
- Comment out the mount points of the data disk partitions in /etc/fstab: The airgap server setup script previously added these mount points to /etc/fstab. Comment them out as follows:
#/dev/sdb1 /data ext4 defaults 0 0
#/dev/sdb2 /docker ext4 defaults 0 0
#/dev/sdb3 /photon-reps ext4 defaults 0 0
- Initialize the 3 new disks: The newly added disks are raw disks. You must initialize and format them with the ext4 filesystem before mounting them into the system. Use fdisk to initialize the disks and mkfs.ext4 to format the disk partitions.
Initialize a disk:
root@tca-ag-tmp [ ~ ]# fdisk /dev/sdc    <<<<<#Initialize /dev/sdc
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x4b863998.

Command (m for help): p    <<<<<#Print current disk partition table
Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4b863998

Command (m for help): n    <<<<<#Input n to create a new partition
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p    <<<<<#Input p to set the partition as primary
Partition number (1-4, default 1):    <<<<<#Hit return to accept the default
First sector (2048-104857599, default 2048):    <<<<<#Hit return to accept the default
Last sector, +sectors or +size{K,M,G,T,P} (2048-104857599, default 104857599):    <<<<<#Hit return to accept the default

Created a new partition 1 of type 'Linux' and of size 50 GiB.

Command (m for help): w    <<<<<#Write changes and exit
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Format disk partition:
mkfs.ext4 /dev/sdc1
- Mount the newly created disk partitions into the system: After the new disks are formatted, mount them into the system folders:
mount /dev/sdc1 /data
mount /dev/sdd1 /photon-reps
mount /dev/sde1 /docker
- Create directories to mount the old disk partitions.
mkdir -p /mnt/{data,docker,photon-reps}
- Mount the previous data disk partitions into the newly created folders.
Take the mount points from /etc/fstab so that they stay consistent with the previous mount point mappings. For example, if /dev/sdb1 was previously mounted at /data, mount /dev/sdb1 at /mnt/data now.
mount /dev/sdb1 /mnt/data/
mount /dev/sdb2 /mnt/docker/
mount /dev/sdb3 /mnt/photon-reps/
- Migrate data from the old disk to the new disks.
mv /mnt/docker/* /docker/
mv /mnt/photon-reps/* /photon-reps/
mv /mnt/data/harbor/ /data/
mkdir -p /data/docker
- Add the new disk mount points to /etc/fstab.
/dev/sdc1 /data ext4 defaults 0 0
/dev/sde1 /docker ext4 defaults 0 0
/dev/sdd1 /photon-reps ext4 defaults 0 0
- Detach the old data disk from the VM.
- On the VM, navigate to Edit Settings.
- Remove the old data disk. Do not select Delete files from datastore.
Note: The preceding step detaches the disk file from the VM but does not delete the disk file from the datastore. You can reattach the disk file if there is a problem using the new disks.
- Click OK to save the settings.
- Reboot the VM.
After the VM reboots, the new disk partitions should be mounted into the system, and the docker, harbor, and nginx services should be running. This confirms that the disk schema switch is complete. Once the new disk schema works as expected, you can wait a few days and then remove the old data disk file from the datastore.
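Before editing /etc/fstab by hand, it can help to generate the new entries and compare them against the file. The sketch below is not part of the airgap scripts; it uses the device-to-mount-point mapping from the mount step above (sdc1 to /data, sdd1 to /photon-reps, sde1 to /docker), so confirm your actual device names with lsblk first.

```shell
# Sketch: print fstab entries for the new data disks so they can be
# reviewed and appended to /etc/fstab. Device names are assumptions
# taken from the example mapping above; verify yours with lsblk.
gen_fstab_entry() {
  printf '%s %s ext4 defaults 0 0\n' "$1" "$2"
}

gen_fstab_entry /dev/sdc1 /data
gen_fstab_entry /dev/sde1 /docker
gen_fstab_entry /dev/sdd1 /photon-reps
```

A mismatch between these generated lines and what you typed into /etc/fstab is a common cause of mounts failing after the reboot.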
6. Checklist for debugging a sync operation failure.
- Check the disk space usage: Push failures in the logs can indicate that the disk space is full. Check whether there is sufficient space on the disks.
df -h
Check whether space utilization is at 100% on the data, docker, and photon-reps partitions. If they are full, either release some space or resize the disks to extend the storage space, and then resume the sync operation. The following are some methods to release and expand disk space:
- If the airgap server is set up with multi-disk schema, expand the disk size of the airgap server VM and run the following command:
run.sh resize
After the disk is resized, redo the sync operation.
- If the airgap server is set up with single-disk schema, switch to multi-disk schema and then redo the sync operation.
- If the docker folder is full, delete the image caches and redo the sync operation.
- Check the login to the airgap server's Harbor: A push failure may occur because the login to the Harbor of the airgap server failed or has expired. Log in to the airgap server's Harbor with your credentials.
docker login <airgap server FQDN>:<harbor port>
- Check the Harbor service run status: The Harbor service may not work when the disk is full or when a dependent docker service needs a restart. To check the Harbor services, do the following:
- Check the systemd service.
systemctl status harbor
- If status is not active, restart the harbor service.
systemctl restart harbor
Alternatively, check the Harbor status by using docker-compose. Go to the /opt/harbor folder and run the following command:
docker-compose ps
If any of the Harbor containers are not running or not in Healthy status, restart the Harbor containers by using docker-compose:
docker-compose down -v
docker-compose up -d
docker-compose ps
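As a quick sanity filter, the docker-compose ps output can be scanned for containers that are not Up. The sketch below runs the filter against a simulated two-container listing written to a temp file, so it can be shown without a running Harbor; the container names are illustrative, not the full Harbor set. On the airgap server, pipe the real docker-compose ps output through the same awk program instead.

```shell
# Sketch: flag docker-compose containers whose State column is not Up.
# SAMPLE simulates `docker-compose ps` output (two header lines, then
# one line per container); on a real system use:
#   docker-compose ps | awk 'NR > 2 && $0 !~ /Up/ { ... }'
SAMPLE="$(mktemp)"
cat > "$SAMPLE" <<'EOF'
      Name                     Command               State   Ports
----------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up
harbor-db           /docker-entrypoint.sh            Up
EOF

STATUS=$(awk 'NR > 2 && $0 !~ /Up/ { print $1 " is not Up"; bad = 1 }
              END { if (!bad) print "all harbor containers Up" }' "$SAMPLE")
echo "$STATUS"
```

Any container reported as not Up is a candidate for the docker-compose restart sequence shown above.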
7. Redirect the bundle output folder when techsupport bundle generation fails with insufficient space.
When the airgap server has been up and running for a long period, the log size of harbor and nginx can grow and the /tmp folder may become full. Due to insufficient space, generation of the techsupport bundle on the airgap server fails.
Workaround: Modify the ansible playbook so that the techsupport bundle is generated on another disk or partition with sufficient space.
- Edit the ansible playbook {root-dir}/airgap/scripts/playbooks/airgap-support.yml.
- Change the default folder location /tmp to a different folder. Modify the locations in the Create logs dir and Package support-bundle sections of the playbook:
......
tasks:
  - name: Create logs dir
    file:
      path: '/photon-reps/airgap-support'
      state: directory
    register: support_dir
......
  - name: Package support-bundle
    shell: "{{ item }}"
    with_items:
      - mkdir -p /photon-reps/support-bundle/
      - tar cvfz /photon-reps/support-bundle/airgap-support-bundle-`date '+%Y%m%d%H%M%S'`.tar.gz {{ support_dir.path }}/*
      - rm -rf {{ support_dir.path }}/
Note: The Create logs dir folder where the temporary files are saved and the Package support-bundle folder where the support bundle is generated can be two different folders.
- Run techsupport to collect the log bundle:
run.sh techsupport
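When choosing which partition to redirect the bundle to, a quick way to find the one with the most free space is to parse df output. This is an illustrative sketch, not part of the airgap scripts.

```shell
# Sketch: pick the mounted filesystem with the most available space as a
# candidate target folder for the support bundle, using POSIX df output
# ($4 = available blocks, $6 = mount point).
TARGET=$(df -P | awk 'NR > 1 { if ($4 + 0 > max) { max = $4; dir = $6 } }
                      END { print dir }')
echo "Most free space is on: $TARGET"
```

Use the reported mount point as the base for the Create logs dir and Package support-bundle paths in the playbook edit above.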
8. Clear the Harbor credentials when the Setup, Sync, Deploy, or Import operations fail.
The Harbor credentials are no longer stored in user-inputs.yml; instead, the user is prompted for them when the Setup, Sync, Deploy, and Import operations begin. The Harbor credentials are saved temporarily in the {root-dir}/airgap/scripts/vars/harbor_credentials.yml file and are removed after the Setup, Sync, Deploy, and Import operations complete. If an operation fails, clear the cached credentials as follows:
- Edit the {root-dir}/airgap/scripts/vars/harbor_credentials.yml temp file and delete the harbor_password row.
- Save the file and restart the failed operation.
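The edit above can also be done non-interactively. The sketch below demonstrates the deletion on a throwaway copy of the file with made-up placeholder values (admin/changeme); on the airgap server, point CRED_FILE at the {root-dir}/airgap/scripts/vars/harbor_credentials.yml temp file instead. It assumes GNU sed, as shipped on Photon OS.

```shell
# Sketch: remove the harbor_password row from the temporary credentials file.
# CRED_FILE is a throwaway copy with placeholder values for illustration;
# substitute the real harbor_credentials.yml path on the airgap server.
CRED_FILE="$(mktemp)"
printf 'harbor_username: admin\nharbor_password: changeme\n' > "$CRED_FILE"

sed -i '/^harbor_password:/d' "$CRED_FILE"   # delete only the password row
cat "$CRED_FILE"
```

After the row is removed, save the file and restart the failed operation as described above.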