After you’ve created and prepared your Salt infrastructure for the new RHEL 8/9 systems, you can perform these migration steps to complete your upgrade to RHEL 8/9.
Verify that the /var/lib/pgsql directory exists with ownership postgres:postgres. From the postgres user, run these commands to drop the existing database:
su - postgres
psql -U postgres
drop database raas_43cab1f4de604ab185b51d883c5c5d09;
Create an empty database and verify user:
create database raas_43cab1f4de604ab185b51d883c5c5d09;
\du
The \du output should display users for postgres and salteapi.
Copy the /etc/raas/pki/.raas.key and /etc/raas/pki/.instance_id files from the old RaaS server to the new RaaS server.
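For example, the copy could be done over SSH from the new server. This is a sketch; the hostname rhel7-raas is a placeholder for your old RaaS server:

```shell
# Copy the RaaS key and instance ID from the old server (hypothetical hostname).
scp root@rhel7-raas:/etc/raas/pki/.raas.key /etc/raas/pki/.raas.key
scp root@rhel7-raas:/etc/raas/pki/.instance_id /etc/raas/pki/.instance_id
# Preserve raas ownership so the service can read the files.
chown raas:raas /etc/raas/pki/.raas.key /etc/raas/pki/.instance_id
```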
Run the upgrade commands for the new PostgreSQL database:
su - raas
raas -l debug upgrade
You can now start up the raas service on the new rhel9-raas server. You can also access the Tanzu Salt UI in your browser. Next, you must configure the master plugin on the new RHEL 8/9 Salt Master.
Perform these steps on your new rhel9-master node.
Verify that the /etc/salt/master.d directory exists, or create it. Then generate the master configuration settings.
Caution:
If you want to preserve your settings when upgrading your installation, make a backup of your existing Master Plugin configuration file before running this step. Then copy relevant settings from your existing configuration to the newly generated file.
sudo sseapi-config --all > /etc/salt/master.d/raas.conf
Important:
If you installed Salt using onedir, the path to this executable is /opt/saltstack/salt/extras-3.10/bin/sseapi-config.
Edit the generated raas.conf file and update the values as follows:
Value | Description |
---|---|
sseapi_ssl_validate_cert | Validates the certificate that the API (RaaS) uses. The default is True. If you are using your own CA-issued certificates, set this value to True. Otherwise, set it to False. |
sseapi_server | HTTP IP address of your RaaS node. For example, http://example.com, or https://example.com if SSL is enabled. |
sseapi_command_age_limit | Sets the age (in seconds) after which old, potentially stale jobs are skipped. For example, to skip jobs older than a day, set it to 86400. Skipped jobs continue to exist in the database and display with a status of Skipped. Some environments need the Salt master to be offline for long periods of time and require it to run any jobs that were queued after it comes back online. If this applies to your environment, set the age limit to 0. |
sseapi_windows_minion_deploy_delay | Sets a delay to allow all requisite Windows services to become active. The default value is 180 seconds. |
sseapi_linux_minion_deploy_delay | Sets a delay to allow all requisite Linux services to become active. The default value is 90 seconds. |
sseapi_local_cache | Sets the length of time that certain data is cached locally on each Salt master. Values are in seconds. |
OPTIONAL: This step is necessary for manual installations only. To verify that you can connect over SSL before connecting the Master Plugin, edit the generated raas.conf file to update the following values. If you do not update these values, the Master Plugin uses the default generated certificate.
Value | Description |
---|---|
sseapi_ssl_ca | The path to a CA file. |
sseapi_ssl_cert | The path to the certificate. The default value is /etc/pki/raas/certs/localhost.crt. |
sseapi_ssl_key | The path to the certificate's private key. The default value is /etc/pki/raas/certs/localhost.key. |
id | Comment this line out by adding a # at the beginning. It is not required. |
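If you only need a certificate pair to test the SSL connection, a self-signed one can be generated with openssl. This is a sketch that writes to /tmp; for a real installation, write to the default /etc/pki/raas/certs/ paths and set file ownership appropriately:

```shell
# Generate a throwaway self-signed certificate and key (testing only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /tmp/localhost.key \
  -out /tmp/localhost.crt
```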
OPTIONAL: Update performance-related settings. For large or busy environments, you can improve the performance of the communications between the Salt master and Tanzu Salt by adjusting the following settings.
Configure the master plugin engines:
The master plugin eventqueue and rpcqueue engines offload some communications with Tanzu Salt from performance-critical code paths to dedicated processes. While the engines are waiting to communicate with Tanzu Salt, payloads are stored in the Salt master's local filesystem so the data can persist across restarts of the Salt master. The tgtmatch engine moves the calculation of minion target group matches from the RaaS server to the Salt masters.
To enable the engines, ensure that the following settings are present in the Salt Master Plugin configuration file (raas.conf):
engines:
- sseapi: {}
- eventqueue: {}
- rpcqueue: {}
- jobcompletion: {}
- tgtmatch: {}
To configure the eventqueue engine, verify that the following settings are present:
sseapi_event_queue:
name: sseapi-events
strategy: always
push_interval: 5
batch_limit: 2000
age_limit: 86400
size_limit: 35000000
vacuum_interval: 86400
vacuum_limit: 350000
The queue parameters can be adjusted with consideration to how they work together. For example, assuming an average of 400 events per second on the Salt event bus, the settings shown above allow for about 24 hours of queued event traffic to collect on the Salt master before the oldest events are discarded due to size or age limits.
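As a sanity check on that claim (the 400 events per second rate is an assumption, not a measurement), the arithmetic can be verified in the shell:

```shell
# Assumed sustained event rate on the Salt event bus (events per second).
rate=400
# With push_interval=5 and batch_limit=2000, the engine can drain
# up to 2000/5 events per second, so it keeps pace with the bus:
echo $(( 2000 / 5 ))          # 400
# The size limit holds roughly a day of traffic at that rate:
echo $(( 35000000 / rate ))   # 87500 seconds, about 24.3 hours
# The age limit independently discards anything older than a day:
echo $(( 86400 / 3600 ))      # 24 hours
```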
To configure the rpcqueue engine, verify the following settings in raas.conf:
sseapi_rpc_queue:
name: sseapi-rpc
strategy: always
push_interval: 5
batch_limit: 500
age_limit: 3600
size_limit: 360000
vacuum_interval: 86400
vacuum_limit: 100000
To configure the tgtmatch engine, ensure that these settings are present in the Master Plugin configuration file (/etc/salt/master.d/raas.conf):
engines:
- sseapi: {}
- eventqueue: {}
- rpcqueue: {}
- jobcompletion: {}
- tgtmatch: {}
sseapi_local_cache:
load: 3600
tgt: 86400
pillar: 3600
exprmatch: 86400
tgtmatch: 86400
sseapi_tgt_match:
poll_interval: 60
workers: 0
nice: 19
Note:
To make use of target matching on the Salt masters, the following setting must also be present in the RaaS configuration: target_groups_from_master_only: true.
Limit minion grains payload sizes:
sseapi_max_minion_grains_payload: 2000
Enable skipping jobs that are older than a defined time (in seconds). For example, use 86400 to skip jobs older than a day. When set to 0, this feature is disabled:
sseapi_command_age_limit: 0
Note:
During system upgrades, enabling this setting is useful to prevent old commands stored in the database from running unexpectedly.
Together, event queuing in Salt and the queuing engines, salt-master target matching, grains payload size limit, and command age limit in the Salt Master Plugin increase the throughput and reduce the latency of communications between the Salt master and Tanzu Salt in the most performance-sensitive code paths.
Restart the master service.
sudo systemctl restart salt-master
OPTIONAL: Run a test job to verify that the Master Plugin now enables communication between the master and the RaaS node.
salt -v '*' test.ping
The RHEL 8/9 Master now appears on the Master Keys page.
Caution:
Do not accept the master key at this point.
Follow these steps to configure the minion agent on the rhel9-master node to point to itself:
1. In the /etc/salt/minion.d directory, set the minion configuration to master: localhost.
2. Navigate to the /etc/salt/pki/minion directory and delete the minion_master.pub file.
3. Restart the salt-minion service using:
systemctl restart salt-minion
View and accept the minion key on the rhel9-master by running:
salt-key
salt-key -A
In Tanzu Salt, navigate to Administration > Master Keys and accept the Master Key.
The RHEL8/9 Master should now appear on the Targets page.
SSH into the RHEL7 Master and delete the key for the rhel9-master minion.
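The deletion can be done with salt-key. A sketch:

```shell
# On the RHEL7 master: list all keys, then delete the stale rhel9-master entry.
salt-key -L
salt-key -d rhel9-master -y
```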
There are many ways to migrate your managed systems. If you already have a process set up, follow that process. If not, use these instructions to migrate your salt-minions from an old Salt Master to a new Salt Master.
Note:
These steps do not apply to multi-master systems.
Create an orchestration file. For example:
# Orchestration to move Minions from one master to another
# file: /srv/salt/switch_masters/init.sls
{% import_yaml 'switch_masters/map.yaml' as mm %}
{% set minions = mm['minionids'] %}
{% if minions %}
{% for minion in minions %}
move_minions_{{ minion }}:
salt.state:
- tgt: {{ minion }}
- sls:
- switch_masters.move_minions_map
{% endfor %}
{% else %}
no_minions:
test.configurable_test_state:
- name: No minions to move
- result: True
- changes: False
- comment: No minions to move
{% endif %}
remove_minions:
salt.runner:
- name: manage.down
- removekeys: True
# map file for moving minions
# file: /srv/salt/switch_masters/map.yaml
newsaltmaster: <new_ip_address>
oldsaltmaster: <old_ip_address>
minionids:
- minion01
- minion02
- minion03
# state to switch minions from one master to another
# file: /srv/salt/switch_masters/move_minions_map.sls
{% set minion = salt['grains.get']('os') %}
# name old master and set new master ip address
{% import_yaml 'switch_masters/map.yaml' as mm %}
{% set oldmaster = mm['oldsaltmaster'] %}
{% set newmaster = mm['newsaltmaster'] %}
# remove minion_master.pub key
{% if minion == 'Windows' %}
remove_master_key:
file.absent:
- name: c:\ProgramData\Salt Project\Salt\conf\pki\minion\minion_master.pub
change_master_assignment:
file.replace:
- name: c:\ProgramData\Salt Project\Salt\conf\minion.d\minion.conf
- pattern: 'master: {{oldmaster}}'
- repl: 'master: {{newmaster}}'
- require:
- remove_master_key
{% else %}
remove_master_key:
file.absent:
- name: /etc/salt/pki/minion/minion_master.pub
# modify minion config file
change_master_assignment:
file.replace:
- name: /etc/salt/minion.d/minion.conf
- pattern: 'master: {{oldmaster}}'
- repl: 'master: {{newmaster}}'
- require:
- remove_master_key
{% endif %}
# restart salt-minion
restart_salt_minion:
service.running:
- name: salt-minion
- require:
- change_master_assignment
- watch:
- change_master_assignment
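Once the files are in place under /srv/salt/switch_masters, the orchestration can be started from the old master. A sketch (the target name switch_masters resolves to its init.sls):

```shell
# Run the orchestration on the RHEL7 (old) Salt master.
salt-run state.orchestrate switch_masters
```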
Create a map.yaml file that includes (see the previous code example):
- the IP address of the old Salt Master (oldsaltmaster)
- the IP address of the new Salt Master (newsaltmaster)
- the list of minion IDs to move

Create the move_minions_map.sls state file, and place these files in the /srv/salt/switch_masters directory on the RHEL7 Salt Master. After the minions have migrated, run a test.ping job against them to verify that everything is communicating properly.

This process is completely dependent on how your organization creates, stores, and manages your state and configuration files. The most common use cases are outlined below as a reference.
In this use case, your Tanzu Salt files are stored in the Postgres database and appear in the Tanzu Salt UI.
During the Postgres database restore, these files are recovered and migrated. There are no additional steps you need to take to migrate these files to your RHEL8/9 environment.
In this use case, your Tanzu Salt state and configuration files are stored in Github/Gitlab/Bitbucket or some other code version-control system.
Since these files are stored in a third party tool, you need to configure your new RHEL8/9 Master to connect to your repository system. This configuration will mirror your RHEL7 repository configuration.
In this use case, your Tanzu Salt state and configuration files are stored in a local file server directory on the Salt Master.
To migrate these files to your RHEL8/9 Master, copy the appropriate directories from your RHEL7 Master to your RHEL8/9 Master.
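For example, with the default file roots, the copy could be done with rsync. This is a sketch; rhel7-master is a placeholder for your old master's hostname, and your file roots may differ:

```shell
# Copy the state and pillar trees from the old master (hypothetical hostname).
rsync -a root@rhel7-master:/srv/salt/ /srv/salt/
rsync -a root@rhel7-master:/srv/pillar/ /srv/pillar/
```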
Then refresh the pillar data on your minions:
salt '*' saltutil.refresh_pillar