- Ensure that the `salt-master`, `loadbalancer`, and host in question can be brought up with Vagrant locally, and that their health check for the relevant service is failing in `haproxy` after the host is fully up:

  ```
  vagrant up salt-master
  vagrant up loadbalancer
  vagrant up host
  ```

- To view `haproxy` status:
  - `vagrant up` the `salt-master`, `loadbalancer`, and host in question:

    ```
    vagrant up salt-master
    vagrant up loadbalancer
    ```

  - Prepare an SSH configuration file to access the hosts with native `ssh` commands:

    ```
    vagrant ssh-config salt-master loadbalancer >> vagrant-ssh
    ```

  - Open an SSH session with port forwarding to the `haproxy` status page:

    ```
    ssh -L 4646:127.0.0.1:4646 -F vagrant-ssh loadbalancer
    ```

  - View the `haproxy` status page in your browser: http://localhost:4646/haproxy?stats
- Edit pillar data in `roles.sls` to include both old and new hostnames (e.g. `hostname*`):
  ```diff
  diff --git a/pillar/prod/roles.sls b/pillar/prod/roles.sls
  index 68387c9..7a8ace1 100644
  --- a/pillar/prod/roles.sls
  +++ b/pillar/prod/roles.sls
  @@ -35,7 +35,7 @@ roles:
       purpose: "Builds and serves CPython's documentation"
       contact: "mdk"
     downloads:
  -    pattern: "downloads.nyc1.psf.io"
  +    pattern: "downloads*.nyc1.psf.io"
       purpose: "Serves python.org downloads"
       contact: "CPython Release Managers"
     hg:
  ```
- SSH into the salt-master server:

  ```
  ssh salt.nyc1.psf.io
  ```

- Navigate to `/srv/psf-salt`:

  ```
  cd /srv/psf-salt
  ```

- Pull the latest changes from the repository:

  ```
  sudo git pull
  ```

- Run `highstate` to update the role settings to reflect the new matching pattern, as well as additional changes to support migration:

  ```
  sudo salt-call state.highstate
  ```
- SSH into the old-host:

  ```
  ssh old-host
  ```

- Run `highstate`:

  ```
  sudo salt-call state.highstate
  ```
- Start a new droplet in DigitalOcean, first checking the resources being used on the old host to see whether we are over- or under-provisioned
- Create a new droplet with a new version of Ubuntu and appropriate resources, and name it after the hostname plus the current LTS version
  - See the current preferred version of Ubuntu in the Server Guide
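The droplet-creation step above can also be sketched with `doctl`, DigitalOcean's official CLI. The hostname, region, size, and image slug below are illustrative assumptions, not the values used in production; the command is printed rather than executed, since running it requires an API token:

```shell
# Sketch only: creating the replacement droplet with doctl.
# The name encodes hostname + Ubuntu LTS version, e.g. "downloads-2404".
# Region/size/image slugs here are illustrative assumptions.
NAME="downloads-2404"
REGION="nyc1"
SIZE="s-2vcpu-4gb"
IMAGE="ubuntu-24-04-x64"

# Print the command rather than running it, so no API token is needed here:
echo "doctl compute droplet create $NAME --region $REGION --size $SIZE --image $IMAGE --ssh-keys <key-id>"
```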
- SSH into new-host via the IP address provided by DigitalOcean:

  ```
  ssh root@NNN.NNN.NNN.NNN
  ```

- Add Salt repositories for our current target version (add the apt repo and install the `salt-minion` package). Note: ensure you are adding the correct key/repository for the version of Ubuntu you are using. See the Salt installation guide for more information.

  ```
  UBUNTU_VERSION=$(lsb_release -rs)
  ARCH=$(dpkg --print-architecture)
  CODENAME=$(grep VERSION_CODENAME /etc/os-release | cut -d '=' -f 2)
  echo "Adding the SaltStack repository key for $UBUNTU_VERSION $CODENAME ($ARCH)..."
  sudo curl -fsSL -o /etc/apt/keyrings/salt-archive-keyring.pgp https://packages.broadcom.com/artifactory/api/security/keypair/SaltProjectKey/public
  echo "Adding the SaltStack repository for $UBUNTU_VERSION $CODENAME ($ARCH)..."
  echo "deb [signed-by=/etc/apt/keyrings/salt-archive-keyring.pgp arch=$ARCH] https://packages.broadcom.com/artifactory/saltproject-deb/ stable main" | sudo tee /etc/apt/sources.list.d/salt.list
  echo "Pinning Salt to v3006.*"
  printf "Package: salt-*\nPin: version 3006.*\nPin-Priority: 1001\n" | sudo tee /etc/apt/preferences.d/salt-pin-1001
  ```
- Install and configure the salt-minion. On new-host, run:

  ```
  apt-get update -y && apt-get install -y --no-install-recommends salt-minion
  ```

- On the old-host, look through `/etc/salt/minion.d/*` to set up salt-minion configuration files to match on new-host:
  - Generate bash that will create these files:

    ```
    for file in /etc/salt/minion.d/*; do echo -e "cat > $file <<EOF"; sudo cat $file; echo "EOF"; done
    ```

  - Copy and paste the generated commands to create and populate the files on new-host
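The file-replication trick above can be rehearsed locally. This sketch uses temporary directories in place of `/etc/salt/minion.d` (so no `sudo` is needed) to show that the generated `cat > ... <<EOF` commands faithfully recreate each file:

```shell
# Demonstrate the heredoc-generation technique on scratch directories
# (stand-ins for /etc/salt/minion.d on old-host and new-host).
src=$(mktemp -d)
dst=$(mktemp -d)
printf "master: salt.nyc1.psf.io\n" > "$src/master.conf"

# Generate the "cat > file <<EOF ... EOF" commands, as in the step above:
script=$(for file in "$src"/*; do echo "cat > $dst/$(basename "$file") <<EOF"; cat "$file"; echo "EOF"; done)

# Pasting the generated commands on the destination (here: eval locally):
eval "$script"
diff "$src/master.conf" "$dst/master.conf" && echo "files match"
```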
- Restart the `salt-minion` service on the new host to pick up the configuration and register with the salt-master:

  ```
  sudo service salt-minion restart
  ```

- On the salt-master, accept the key for the new-host:

  ```
  sudo salt-key -a new-host
  ```

- On the new-host, run `highstate`:

  ```
  sudo salt-call state.highstate
  ```

- Log out of the root session. The first `highstate` run adds the users defined in `pillar/base/users.sls` so that you can log in as your own user.
- Ensure that the new host is not passing health checks in the load balancer:

  ```
  ssh -L 4646:127.0.0.1:4646 lb-a.nyc1.psf.io
  ```

  - Then view the `haproxy` status page in your browser: http://localhost:4646/haproxy?stats
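The health-check state can also be read from the command line: haproxy serves its stats as CSV when `;csv` is appended to the stats URL (i.e. `http://localhost:4646/haproxy?stats;csv` through the same port forward). This sketch parses a sample CSV line, with hypothetical backend/server names, to pull out the status column:

```shell
# In haproxy's stats CSV, field 1 is the proxy name, field 2 the server
# name, and field 18 the status (UP/DOWN/MAINT). The sample line below is
# hypothetical; in practice, pipe the output of `curl -s` into awk instead.
sample='downloads,new-host,0,0,0,1,,10,1234,5678,,0,,0,0,0,0,DOWN,...'
status=$(echo "$sample" | awk -F, '{print $1 "/" $2 " is " $18}')
echo "$status"
```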
- Run `highstate` on the salt-master to create a public DNS record for the new host:

  ```
  sudo salt-call state.highstate
  ```
- SSH into new-host, enabling forwarding of the `ssh-agent`:

  ```
  ssh -A new-host
  ```

- Stop cron jobs:

  ```
  sudo service cron stop
  ```

- Stop public-facing services, like `nginx`, or whichever service the health check is looking for. For example:

  ```
  sudo service nginx stop
  ```

  Don't forget to pause service checks for both the old and new hosts in things like Sentry monitors, Pingdom, etc.

- Ensure that any additional volumes are mounted and in the correct location:
  - Check which disks are currently mounted and where:

    ```
    df
    ```

  - Determine where any additional disks should be mounted, based on the salt configuration of services (for example, the `docs` and `downloads` roles need a big `/srv` for their data storage)
  - Mount any external disks in the right location using the `mount` command with appropriate arguments
  - Ensure that the volumes will be remounted on startup by configuring them in `/etc/fstab`
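For reference, an `/etc/fstab` entry for an attached DigitalOcean block-storage volume typically looks like the following; the volume name and mount point here are placeholders, so check the actual device under `/dev/disk/by-id/` before copying:

```
# /etc/fstab — example entry for an attached volume (placeholder names)
/dev/disk/by-id/scsi-0DO_Volume_example-volume /srv ext4 defaults,nofail,discard,noatime 0 2
```

The `nofail` option keeps the droplet bootable even if the volume is detached.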
- If the service has pillar data for backups (see `pillar/prod/backup/$service.sls`), run `rsync` once to move the bulk of the data, and again as necessary to watch for changes:

  ```
  sudo -E -s rsync -av --rsync-path="sudo rsync" username@hostname:/pathname/ /pathname/
  ```

  - The `/pathname/` can be determined by looking at the pillar data for backups in `pillar/prod/backup`, using the `source_directory` path for the given host (for example, the `downloads` host uses `/srv/`)
  - Don't forget to enable SSH agent forwarding so that the `rsync` command can use your local SSH key to connect to the old host.
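The `/pathname/` lookup described above can be scripted; this sketch reads the `source_directory` key out of a backup pillar file. The sample pillar content is hypothetical, but mirrors the key named above; on the salt-master you would read `pillar/prod/backup/$service.sls` instead:

```shell
# Extract source_directory from a backup pillar file.
# Sample content is hypothetical; the real file lives at
# pillar/prod/backup/$service.sls on the salt-master.
pillar=$(mktemp)
cat > "$pillar" <<'EOF'
backup:
  source_directory: /srv/
  target_host: backup-server
EOF

src_dir=$(awk '/source_directory:/ {print $2}' "$pillar")
echo "rsync source: $src_dir"
```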
- SSH into the old-host:

  ```
  ssh old-host
  ```

- Stop cron jobs:

  ```
  sudo service cron stop
  ```

- Stop public-facing services, like `nginx` (or whichever service the health check is looking for):

  ```
  sudo service nginx stop
  ```
- If the service has pillar data for backups (see `pillar/prod/backup/$service.sls`), run `rsync` once more to finalize the data migration:

  ```
  sudo -E -s rsync -av --rsync-path="sudo rsync" username@hostname:/pathname/ /pathname/
  ```

- Start cron jobs:

  ```
  sudo service cron start
  ```

- Start the public-facing services involved with the health check, like `nginx`:

  ```
  sudo service nginx start
  ```

- Ensure that the new-host is live and serving traffic by viewing the load balancer page:
  - View the `haproxy` status page in your browser: http://localhost:4646/haproxy?stats
- Check whether users have any files on the old-host and transfer them accordingly:

  ```
  for user in /home/psf-users/*; do sudo -E -s rsync --delete -av --progress --rsync-path="sudo rsync" user@old-host:$user/ $user/migrated-from-ubuntu-1804-lts-host/; done
  ```
- On the old-host, shut it down:

  ```
  sudo shutdown -h now
  ```

- Destroy the old-host in DigitalOcean
- Change the new-host's name in DigitalOcean by removing the suffix (or similar) that was used to differentiate it from the old host (e.g., `new-hostname-2404` -> `old-hostname`)
- List and delete the old host's keys:

  ```
  sudo salt-key -L
  sudo salt-key -d old-host
  ```
- On the new-host, rename the hostname:

  ```
  sudo hostname new-host
  ```

- Update the new-host's name in `/etc/hostname`, `/etc/salt/minion_id`, and `/etc/hosts`:

  ```
  sudo sed -i 's/old-host/new-host/g' /etc/hostname /etc/salt/minion_id /etc/hosts
  ```

- Restart the salt minion:

  ```
  sudo service salt-minion restart
  ```

- Restart the Datadog agent:

  ```
  sudo service datadog-agent restart
  ```

- Accept the new host key on the salt-master:

  ```
  sudo salt-key -a new-host
  ```

- Run `highstate` on the salt-master to update the domain name as well as the `known_hosts` file:

  ```
  sudo salt-call state.highstate
  ```
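The `sed` rename above can be rehearsed on scratch copies before touching the real files; this sketch applies the same substitution to temporary stand-ins for `/etc/hostname`, `/etc/salt/minion_id`, and `/etc/hosts` (hostnames are placeholders):

```shell
# Rehearse the hostname rename on scratch files first.
tmp=$(mktemp -d)
echo "old-host" > "$tmp/hostname"
echo "old-host" > "$tmp/minion_id"
printf "127.0.0.1 localhost\n127.0.1.1 old-host\n" > "$tmp/hosts"

# Same substitution as the real command, minus sudo:
sed -i 's/old-host/new-host/g' "$tmp/hostname" "$tmp/minion_id" "$tmp/hosts"
grep -h new-host "$tmp"/*
```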