After you've created and prepared your Salt infrastructure for the new RHEL 8/9 systems, you can perform these migration steps to complete your upgrade to RHEL 8/9.

Prepare and perform migration

  1. Stop the RaaS service on both the RHEL 7 and RHEL 8/9 systems.
  2. Copy the gz backup file from the old server to the new server. The gz file must be stored in the /var/lib/pgsql directory with ownership set to postgres:postgres.
  3. As the postgres user, run these commands to drop the existing database:
    su - postgres
    psql -U postgres
    drop database raas_43cab1f4de604ab185b51d883c5c5d09;
    
  4. Create an empty database and verify the users (see the restore note following this procedure):
    create database raas_43cab1f4de604ab185b51d883c5c5d09;
    \du
    The \du output should list the postgres and salteapi users.
  5. Copy the /etc/raas/pki/.raas.key and /etc/raas/pki/.instance_id files from the old RaaS server to the new RaaS Server.
  6. Run the upgrade commands for the new PostgreSQL database:
    su - raas
    raas -l debug upgrade
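
Note: The procedure above copies the gz backup (step 2) and creates an empty database (step 4), but the restore itself depends on how the backup was produced. A minimal restore sketch, assuming a pg_dump SQL dump compressed with gzip (the file name is a placeholder):
    su - postgres
    gunzip -c /var/lib/pgsql/raas_backup.sql.gz | psql raas_43cab1f4de604ab185b51d883c5c5d09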
    
You can now start the raas service on the new rhel9-raas server and access the Automation Config UI in your browser. Next, you must configure the Master Plugin on the new RHEL 8/9 Salt Master.

Configure the Master Plugin on the new Salt Master

Perform these steps on your new rhel9-master node.
  1. Log in to your Salt master and verify the /etc/salt/master.d directory exists, or create it.
  2. Generate the master configuration settings.
    Caution: If you want to preserve your settings when upgrading your installation, make a backup of your existing Master Plugin configuration file before running this step. Then copy relevant settings from your existing configuration to the newly generated file.
    sudo sseapi-config --all > /etc/salt/master.d/raas.conf
    Important: If you installed Salt using onedir, the path to this executable is /opt/saltstack/salt/extras-3.10/bin/sseapi-config.
  3. Edit the generated raas.conf file and update the values as follows. A consolidated example raas.conf appears at the end of this procedure.

    sseapi_ssl_validate_cert

    Validates the certificate that the API (RaaS) uses. The default is True.

    If you are using your own CA-issued certificates, set this value to True and configure the sseapi_ssl_ca, sseapi_ssl_cert, and sseapi_ssl_key settings.

    Otherwise, set this to False to not validate the certificate:

    sseapi_ssl_validate_cert: False

    sseapi_server

    The HTTP or HTTPS address of your RaaS node, for example, http://example.com, or https://example.com if SSL is enabled.

    sseapi_command_age_limit

    Sets the age (in seconds) after which old, potentially stale jobs are skipped. For example, to skip jobs older than a day, set it to:

    sseapi_command_age_limit: 86400

    Skipped jobs continue to exist in the database and display with a status of Completed in the Automation Config user interface.

    Some environments might need the Salt master to be offline for long periods of time and then need the Salt master to run any jobs that were queued when it comes back online. If this applies to your environment, set the age limit to 0.

    sseapi_windows_minion_deploy_delay

    Sets a delay to allow all requisite Windows services to become active. The default value is 180 seconds.

    sseapi_linux_minion_deploy_delay

    Sets a delay to allow all requisite Linux services to become active. The default value is 90 seconds.

    sseapi_local_cache

    Sets the length of time that certain data is cached locally on each Salt master. Values are in seconds. The values shown are the recommended values.

    sseapi_local_cache:
         load: 3600
         tgt: 86400
         pillar: 3600
         exprmatch: 86400
         tgtmatch: 86400

    • load: Salt save_load() payloads

    • tgt: SSE target groups

    • pillar: SSE pillar data (encrypted)

    • exprmatch: SSE target expression matching data

    • tgtmatch: SSE target group matching data

  4. OPTIONAL: This step is necessary for manual installations only. To verify the SSL connection before connecting the Master Plugin, edit the generated raas.conf file and update the following values. If you do not update these values, the Master Plugin uses the default generated certificate.

    sseapi_ssl_ca
      The path to a CA file.
    sseapi_ssl_cert
      The path to the certificate. The default value is /etc/pki/raas/certs/localhost.crt.
    sseapi_ssl_key
      The path to the certificate’s private key. The default value is /etc/pki/raas/certs/localhost.key.
    id
      Comment this line out by adding a # at the beginning. It is not required.
  5. OPTIONAL: Update performance-related settings. For large or busy environments, you can improve the performance of the communications between the Salt master and Automation Config by adjusting the following settings.
    • Configure the master plugin engines:

      The master plugin eventqueue and rpcqueue engines offload some communications with Automation Config from performance-critical code paths to dedicated processes. While the engines are waiting to communicate with Automation Config, payloads are stored in the Salt master’s local filesystem so the data can persist across restarts of the Salt master. The tgtmatch engine moves the calculation of minion target group matches from the RaaS server to the Salt masters.

      To enable the engines, ensure that the following settings are present in the Salt Master Plugin configuration file (raas.conf):

      engines: 
           - sseapi: {} 
           - eventqueue: {} 
           - rpcqueue: {} 
           - jobcompletion: {}      
           - tgtmatch: {}

      To configure the eventqueue engine, verify that the following settings are present:

      sseapi_event_queue: 
        name: sseapi-events 
        strategy: always 
        push_interval: 5 
        batch_limit: 2000 
        age_limit: 86400 
        size_limit: 35000000 
        vacuum_interval: 86400 
        vacuum_limit: 350000 

      The queue parameters can be adjusted with consideration to how they work together. For example, assuming an average of 400 events per second on the Salt event bus, the settings shown above allow for about 24 hours of queued event traffic to collect on the Salt master before the oldest events are discarded due to size or age limits.

      To configure the rpcqueue engine, verify the following settings in raas.conf:

      sseapi_rpc_queue: 
        name: sseapi-rpc 
        strategy: always 
        push_interval: 5 
        batch_limit: 500 
        age_limit: 3600 
        size_limit: 360000 
        vacuum_interval: 86400 
        vacuum_limit: 100000 
      To configure the tgtmatch engine, ensure that these settings are present in the Master Plugin configuration file (/etc/salt/master.d/raas.conf):
      engines: 
          - sseapi: {} 
          - eventqueue: {} 
          - rpcqueue: {} 
          - jobcompletion: {}    
          - tgtmatch: {} 
      
      sseapi_local_cache:     
          load: 3600 
          tgt: 86400 
          pillar: 3600 
          exprmatch: 86400 
          tgtmatch: 86400 
      
      sseapi_tgt_match: 
          poll_interval: 60     
          workers: 0 
          nice: 19
      Note: To make use of target matching on the Salt masters, the following setting must also be present in the RaaS configuration: target_groups_from_master_only: true.
    • Limit minion grains payload sizes:
      sseapi_max_minion_grains_payload: 2000
    • Enable skipping jobs that are older than a defined time (in seconds). For example, use 86400 to skip jobs older than a day. When set to 0, this feature is disabled:
      sseapi_command_age_limit: 0
      Note: During system upgrades, enabling this setting is useful to prevent old commands stored in the database from running unexpectedly.

    Together, event queuing in Salt, the queuing engines, Salt master target matching, the grains payload size limit, and the command age limit in the Salt Master Plugin increase the throughput and reduce the latency of communications between the Salt master and Automation Config in the most performance-sensitive code paths.

  6. Restart the master service.
    sudo systemctl restart salt-master
  7. OPTIONAL: Run a test job to verify that the Master Plugin now enables communication between the master and the RaaS node.
    salt -v '*' test.ping
The RHEL 8/9 Master now appears on the Master Keys page.
Caution: Do not accept the master key at this point.
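
For reference, a consolidated raas.conf combining the core settings from steps 3 and 5 might look like the following sketch (the queue engine settings from step 5 can be appended in the same way). The server address is a placeholder; the remaining values are the example or recommended values from this procedure, and your settings will differ:

    engines:
      - sseapi: {}
      - eventqueue: {}
      - rpcqueue: {}
      - jobcompletion: {}
      - tgtmatch: {}

    sseapi_server: https://rhel9-raas.example.com
    sseapi_ssl_validate_cert: False
    sseapi_command_age_limit: 86400
    sseapi_windows_minion_deploy_delay: 180
    sseapi_linux_minion_deploy_delay: 90
    sseapi_max_minion_grains_payload: 2000

    sseapi_local_cache:
      load: 3600
      tgt: 86400
      pillar: 3600
      exprmatch: 86400
      tgtmatch: 86400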

Configure the Minion Agent

Follow these steps to configure the minion agent on the rhel9-master node to point to itself. Steps 2 through 5 are sketched as a script after this list.
  1. SSH into the rhel9-master node and browse to the /etc/salt/minion.d directory.
  2. Edit the minion.conf file and change the master setting to master: localhost.
  3. Browse to the /etc/salt/pki/minion directory and delete the minion_master.pub file.
  4. Restart the salt-minion service:
    systemctl restart salt-minion
  5. View and accept the minion key on the rhel9-master by running:
    salt-key
    salt-key -A
  6. In Automation Config, navigate to Administration > Master Keys and accept the Master Key.

    The RHEL 8/9 Master should now appear on the Targets page.

  7. SSH into the RHEL 7 Master and delete the key for the rhel9-master minion.
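
Steps 2 through 5 can be scripted. A minimal sketch, assuming minion.conf defines the master on a single line:

    # On rhel9-master: point the local minion at itself
    sudo sed -i 's/^master:.*/master: localhost/' /etc/salt/minion.d/minion.conf
    sudo rm -f /etc/salt/pki/minion/minion_master.pub
    sudo systemctl restart salt-minion
    # List pending minion keys, then accept them
    sudo salt-key
    sudo salt-key -A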

Migrate Salt-Minion systems

There are many ways to migrate your managed systems. If you already have a process set up, follow that process. If not, use these instructions to migrate your Salt minions from an old Salt Master to a new Salt Master.
Note: These steps do not apply to multi-master systems.
  1. Create an orchestration file. For example,
    # Orchestration to move Minions from one master to another
    # file: /srv/salt/switch_masters/init.sls
    {% import_yaml 'switch_masters/map.yaml' as mm %}
    {% set minions = mm['minionids'] %}
    
    {% if minions %}
    {% for minion in minions %}
    move_minions_{{ minion }}:
      salt.state:
        - tgt: {{ minion }}
        - sls:
          - switch_masters.move_minions_map
    
    {% endfor %}    
    {% else %}
    no_minions:
      test.configurable_test_state:
        - name: No minions to move
        - result: True 
        - changes: False 
        - comment: No minions to move
    {% endif %}
    
    remove_minions:
      salt.runner:
        - name: manage.down
        - removekeys: True 
    
    # map file for moving minions
    # file: /srv/salt/switch_masters/map.yaml
    newsaltmaster: <new_ip_address>
    oldsaltmaster: <old_ip_address>
    minionids:
      - minion01
      - minion02
      - minion03
    # state to switch minions from one master to another
    # file: /srv/salt/switch_masters/move_minions_map.sls
    {% set minion_os = salt['grains.get']('os') %}
    # get the old and new master IP addresses from the map file
    {% import_yaml 'switch_masters/map.yaml' as mm %}
    {% set oldmaster = mm['oldsaltmaster'] %}
    {% set newmaster = mm['newsaltmaster'] %}
    
    # remove minion_master.pub key
    {% if minion_os == 'Windows' %}
    remove_master_key:
      file.absent:
        - name: c:\ProgramData\Salt Project\Salt\conf\pki\minion\minion_master.pub
    
    change_master_assignment:
      file.replace:
        - name: c:\ProgramData\Salt Project\Salt\conf\minion.d\minion.conf 
        - pattern: 'master: {{oldmaster}}'
        - repl: 'master: {{newmaster}}'
        - require:
          - remove_master_key
    {% else %}
    remove_master_key:
      file.absent:
        - name: /etc/salt/pki/minion/minion_master.pub
    
    # modify minion config file
    change_master_assignment:
      file.replace:
        - name: /etc/salt/minion.d/minion.conf 
        - pattern: 'master: {{oldmaster}}'
        - repl: 'master: {{newmaster}}'
        - require:
          - remove_master_key
    {% endif %}
    # restart salt-minion
    restart_salt_minion:
      service.running:
        - name: salt-minion 
        - require:
          - change_master_assignment
        - watch:
          - change_master_assignment
    
  2. Create a map.yaml file (see the code example above) that includes:
    1. The <old salt master> IP address/FQDN.
    2. The <new salt master> IP address/FQDN.
    3. The list of Salt minion IDs to be moved.
  3. Create a state file (see the above code example) to process the migration. For example, move_minions_map.sls.
  4. Add these files to a directory (for example, /srv/salt/switch_masters) on the RHEL 7 Salt Master.
  5. Run the orchestration file on the RHEL 7 Salt Master, as shown after this list. Expect some errors as the Salt minion service restarts and does not reconnect to the RHEL 7 Salt Master.
  6. Monitor the progress in Automation Config. Accept the migrated Salt minion keys as they populate in the UI.
  7. After all the systems have migrated, run a test.ping job against them to verify that everything is communicating properly.
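
For step 5, assuming the files live under /srv/salt/switch_masters as shown above, the orchestration is started from the RHEL 7 Salt Master with salt-run, and the verification in step 7 is an ordinary test.ping:

    salt-run state.orchestrate switch_masters
    # after the minions re-register with the new master and their keys are accepted:
    salt -v '*' test.ping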

Migrate Existing Files

This process depends entirely on how your organization creates, stores, and manages your state and configuration files. The most common use cases are outlined below for reference.

Use Case 1: Automation Config file server

In this use case, your Automation Config files are stored in the Postgres database and appear in the Automation Config UI.

During the Postgres database restore, these files are recovered and migrated. There are no additional steps you need to take to migrate these files to your RHEL 8/9 environment.

Use Case 2: GitHub/GitLab file server

In this use case, your Automation Config state and configuration files are stored in GitHub, GitLab, Bitbucket, or some other code version-control system.

Because these files are stored in a third-party tool, you need to configure your new RHEL 8/9 Master to connect to your repository system. This configuration should mirror your RHEL 7 repository configuration.
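
For example, if your files are in a Git repository, a minimal gitfs sketch for the new master might look like the following. The repository URL is a placeholder, and your existing RHEL 7 configuration is the authoritative reference:

    # /etc/salt/master.d/gitfs.conf
    fileserver_backend:
      - roots
      - gitfs
    gitfs_remotes:
      - https://github.com/example/salt-states.git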

Use Case 3: Local file roots of Salt Master

In this use case, your Automation Config files are stored in a local directory on the Salt Master.

To migrate these files to your RHEL 8/9 Master, copy the appropriate directories from your RHEL 7 Master to your RHEL 8/9 Master.
  1. Files are stored in /srv/salt and /srv/pillar for state files and pillar files, respectively.
  2. Perform a secure copy of these directories from your RHEL 7 Master to your RHEL 8/9 Master using a tool such as WinSCP or the scp command line, as sketched below.
  3. Refresh the pillar data by running salt '*' saltutil.refresh_pillar.
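
A minimal command-line sketch, run from the RHEL 8/9 Master; the rhel7-master hostname is a placeholder:

    # copy the state and pillar trees from the old master
    sudo scp -r root@rhel7-master:/srv/salt /srv/
    sudo scp -r root@rhel7-master:/srv/pillar /srv/
    # refresh pillar data on all minions
    salt '*' saltutil.refresh_pillar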