Infinite Migrations

my life is only migrations
  following the herds,
  as the sun wheels across the sky
    we will never know settled peace,
      only dynamic peace in constant movement

At Autonomic we often find ourselves adopting our clients’ and peers’ infrastructure work. Along the way, we often end up moving things from however they were hosted previously (a straight VPS, Docker, Docker Swarm, or sometimes even Co-op Cloud), usually into our preferred way of doing things (Co-op Cloud on our own infrastructure). We’ve done an unusual number of large migrations lately, and wanted to share some things we found helpful, including some new discoveries we’ve made!

As part of our exciting adventures we have been helping these folks into a more sustainable and capable hosting situation:

  • Helping Geeks for Social Change, who previously hosted their Mastodon / Hometown instance on a VPS. We moved it over to a Co-op Cloud server, and added email and several other services.
  • On collective.tools, migrating away from corporate (and increasingly expensive) email hosting, and an ornate garden of Ansible-managed Nextcloud and Discourse instances, onto a single high-powered dedicated server, to lower costs, ease maintenance, and improve performance.
  • Moving services for Ruangrupa’s lumbung.space, and SMAT’s web crawlers, to dedicated servers in order to handle their workloads, which were growing larger and more complex in different ways.
  • Migrating Co-op Cloud’s infrastructure, including Gitea, Drone, and various custom apps, from a creaky old dedicated server onto a VPS, to improve performance and make it easier to give access to other members of the Co-op Cloud Federation.

Before a migration

One of the biggest issues we’ve run into with migrations is timing DNS changes, and a big part of that is minimizing caching issues. The fundamental lever for controlling these issues is the Time To Live (TTL) for the DNS records.

We learned that it’s a lot easier to have direct access to the DNS configuration for the services in question beforehand. In situations where it’s necessary to coordinate with the client to do the required DNS settings changes – which can be challenging but not impossible – checking what provider they’re using, and giving step-by-step instructions, can be a big help.

After DNS access is secured, it’s essential to radically lower the TTL on the DNS entries that will be used after the migration. Lowering them to 5 or 15 minutes is probably fine. Doing this well before the migration window is important, since resolvers may cache the previous records (including their TTL) for up to the previous TTL.

The TTL and Minimum Time fields on the SOA record control the TTL for unknown subdomains under the domain that the SOA covers.
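
For example, dig can show both the remaining TTL on a record and the SOA fields (the hostnames here are placeholders):

    # check the remaining TTL (second column) on a record that will change
    dig +noall +answer social.example.net A

    # check the SOA record; the final field is the minimum / negative-caching time
    dig +noall +answer example.net SOA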

Another useful thing, particularly when migrating to Co-op Cloud, is to have a splat (wildcard, *) record for assigning DNS to extra services that don’t necessarily need a shiny name, and for testing purposes. For example, even if we’re eventually making social.example.net, it is useful to have something like *.cloud.example.net point to the server we’re migrating to. This lets you deploy anything on that server as a subdomain of cloud.example.net, which can be very handy for pre-gaming, testing, and hosting services that don’t yet have a pretty name.
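
As an illustration (the names and IP address are placeholders), the record itself and a quick check that it resolves:

    # the wildcard record itself, in zone-file syntax (the IP is a placeholder):
    #   *.cloud.example.net. 300 IN A 203.0.113.10

    # once it's live, any name under cloud.example.net should resolve to the new server
    dig +short test.cloud.example.net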

Part of the before-migration work is determining the upgrade path and compatibility between the existing version and the version that will be deployed at the eventual destination. This gives an idea of what has to be done during the downtime window to bring the data, configuration, and everything else up to date with the new version.

This can be particularly relevant for Discourse – where the official distribution updates often and has a web self-updater, which means that many admins update quickly, while third-party Discourse distributions (like the one that the Discourse Co-op Cloud recipe is based on) often lag significantly behind in publishing updates. For Mastodon/Hometown there’s a somewhat similar situation: as of the time of writing, Hometown is a couple of minor versions (Y in X.Y.Z, but fairly “major” in terms of functionality and backwards compatibility) behind Mastodon, and switching between them could be tricky.

We found that, as part of the pre-work, it is essential to have good backups of all of the data we’ll be moving. This includes taking an image of the server being migrated from, if possible, but also setting aside tars of the volume(s), data files, configurations, and database dumps as needed. Working from copies of backups is often most convenient when doing the data-copying steps discussed below.
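
A rough sketch of the kind of pre-migration backups we mean, assuming root access on the old server; the paths, volume name, and database name are all placeholders:

    # tar up a docker volume and any bespoke configuration
    mkdir -p /root/backups
    tar -czf /root/backups/app_media.tar.gz -C /var/lib/docker/volumes app_media
    tar -czf /root/backups/etc-app.tar.gz /etc/app

    # dump databases as needed
    mysqldump --single-transaction appdb | gzip > /root/backups/appdb.sql.gz
    # or, for PostgreSQL:
    # sudo -u postgres pg_dump appdb | gzip > /root/backups/appdb.sql.gz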

Copying data

Copying a lot of small files sucks in any circumstance. Mastodon, for example, stores all of its media as separate files in a deeply nested hash-based directory tree, and it takes forever to copy.

Some apps – notably Mastodon / Hometown, Peertube, and custom web crawling scripts we’ve worked with – accumulate data very quickly, and copying this around can easily take hours, or even days.

The main ways we’ve found to mitigate this are:

  1. Copying server-to-server wherever possible. This usually gives a massive speed increase compared to copying to one’s workstation and then back up to the new server, in exchange for a little extra complexity in setting up SSH access between the two servers – which can be as simple as running ssh-keygen as the relevant user on one end, and putting the resulting public key into .ssh/authorized_keys on the other (there’s a sketch after this list).
  2. Avoiding unnecessary archiving / compression steps. ssh <host> 'tar -cf - path/to/files' | tar -xf - … is often a lot faster than separate tar/rsync/untar steps, and also makes the migration more “fire and forget”, rather than having to intervene at multiple places during the process. This technique hugely benefits situations where writing a temporary tar back to the origin disk would be slow (because it is a mechanical drive, for example) and where the data is already compressed. Likewise, depending on data size, drive speed, and bandwidth, streaming a database dump over SSH (e.g. ssh <host> mysqldump > dump.sql, or even ssh <host> mysqldump | mysql) can save a lot of time.
  3. For truly hairy transfers between VPSs from the same provider, a neat trick is to copy data onto an attached block storage volume, then detach it from the source server, attach it to the destination, and copy it using local filesystem tools – almost always at much faster speeds than are possible over a network connection.
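
To make points 1 and 2 concrete, here is a minimal sketch of setting up one-off SSH access from the old server to the new one, then streaming files and a database dump straight across (the hostnames, key name, paths, and database name are all placeholders):

    # on the old server: generate a key, then authorise it on the new server
    ssh-keygen -t ed25519 -f ~/.ssh/id_migration
    cat ~/.ssh/id_migration.pub   # paste into ~/.ssh/authorized_keys on newserver

    # stream files across without an intermediate archive
    tar -cf - -C /srv/app files | ssh -i ~/.ssh/id_migration newserver 'tar -xf - -C /srv/app'

    # stream a database dump straight into the new database
    mysqldump --single-transaction appdb | ssh -i ~/.ssh/id_migration newserver 'mysql appdb'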

Although rsync has good support for resuming transfers, it has still helped with efficiency and timeliness to have some protection against network drop-outs between our workers’ machines and the servers we’re migrating during large transfers.

At Autonomic, we’re used to running a lot of our command-line work through tmate (we hope to write more about our pair-programming-inspired approach to sysadmin soon!), which provides a little of this extra safety already.

The industrial-strength answer to this, which we’ve especially used when copying large amounts of data off machines with slower hard drives, is to run tmux or screen on the server, and run the copy from there. Then, even if someone drives a tractor through an internet cable, the transfer will continue in the background, and it’s possible to tmux attach / screen -r to check on progress.
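
For example (the session name, paths, and hostname are placeholders):

    # on the server, start a named session and run the copy inside it
    tmux new -s migration            # or: screen -S migration
    rsync -aH --info=progress2 /srv/app/ newserver:/srv/app/

    # after a drop-out, reattach to check on progress
    tmux attach -t migration         # or: screen -r migration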

Finally, although we haven’t tested its performance or reliability, wormhole (Magic Wormhole) or croc can be very handy for quick one-off transfers between servers, with the benefit of not needing to configure server-to-server SSH, and so not leaving open a potential (if small) security hole.

Aside: docker and storage

Docker, under many circumstances, makes migrating between servers very easy.

Often, stopping docker on both the source and destination servers, copying /var/lib/docker across to the destination, and then restarting docker is enough to get the swarm running on the new server. We completely deactivate docker with systemctl mask docker.service followed by systemctl stop docker.service on both ends, so that docker can’t be restarted by its socket being accessed. It is later re-enabled with systemctl unmask docker.service and then systemctl start docker.service (mostly only needed on the receiving end).
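
A sketch of the whole dance, assuming rsync and root SSH access between the two machines (the hostname is a placeholder):

    # on both servers: stop docker and stop it being woken up via its socket
    systemctl mask docker.service
    systemctl stop docker.service

    # on the old server: copy the whole of docker's state across
    rsync -aHAX --numeric-ids /var/lib/docker/ root@newserver:/var/lib/docker/

    # on the new server: let docker start again
    systemctl unmask docker.service
    systemctl start docker.service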

Copying a particular docker storage volume is as trivial as digging out its path under /var/lib/docker/volumes and copying it to the matching path under /var/lib/docker/volumes on the destination – the directories usually have helpful names.

We’ve had to move particularly large docker installations to disks separate from the one holding /, so we just shut down docker as above, copied everything in /var/lib/docker to the new big drive, and then mounted the new drive over /var/lib/docker. Under some circumstances we don’t have the ability to control where the new big drive is mounted, so we use Linux’s bind mounting to additionally mount it in the right place. For example, if the big volume is at /media/bigvol it can be mounted in the right place with mount -o bind /media/bigvol /var/lib/docker.
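
A sketch of that arrangement, assuming the big volume is already mounted at /media/bigvol; the commented fstab line makes the bind mount survive a reboot:

    # with docker stopped (as above), move the data onto the big volume
    rsync -aHAX --numeric-ids /var/lib/docker/ /media/bigvol/

    # mount the big volume over the path docker expects
    mount -o bind /media/bigvol /var/lib/docker

    # make the bind mount survive reboots by adding it to /etc/fstab:
    #   /media/bigvol  /var/lib/docker  none  bind  0  0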

Many stacks provide useful tools that allow you to pipe, for example, database dumps directly into the container; as an example, one could pipe a MySQL dump into a db container with something like zcat dbdump.sql.gz | docker exec … mysql … .
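
For instance (the container name, credentials, and database name here are placeholders, not anything from a particular recipe), remembering that docker exec needs -i so the pipe reaches the container's stdin:

    # find the database container (under swarm the name looks like app_db.1.<id>)
    docker ps --format '{{.Names}}'

    # stream the dump straight in
    zcat dbdump.sql.gz | docker exec -i app_db mysql -u root -p"$MYSQL_ROOT_PASSWORD" appdb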

Testing / monitoring

We’ve noticed things going more smoothly when we have a lot of insight from monitoring. We often set up monitoring, or at least a plan for monitoring, at the start of a migration process. This can be as simple as deploying Uptime Kuma to do ping tests, or something like Munin to gather statistics and produce alerts.
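
For example, Uptime Kuma’s upstream image can be run directly with Docker; the published port, volume name, and image tag below follow upstream’s documentation at the time of writing:

    docker run -d --restart=always \
      -p 3001:3001 \
      -v uptime-kuma:/app/data \
      --name uptime-kuma louislam/uptime-kuma:1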

Some steps we do before migrating:

  1. Make a list of all the key functionality in each app, having a think about special cases like uploading files, serving files that have been uploaded, and similar ‘out of band’ activities.
  2. Run through the test plan before we migrate. This helps us avoid trying to fix problems that already existed.

A quick, subjective list of common areas where we’ve seen things break after a migration:

  • Mismatched UNIX file permissions (resulting in missing images, non-working uploads, etc.) – particularly when migrating into or out of Docker, or between servers with different users defined. Make sure to include any functionality which might depend on file permissions in your test plan, and keep an eye on server / app logs for permissions errors. As filthy as it is, you can adjust the permissions from outside docker, directly in the volume directories (using numeric user and group IDs if necessary) – see the example after this list.
  • Outgoing email. This is often due to DNS settings, like missing or incorrect SPF, DKIM, or DMARC records, or a missing reverse DNS entry for the sending host. We use Mailu extensively for hosting email (see below), and it provides a cheat sheet of the proper DNS settings. We also test extensively for deliverability and DNS correctness using https://mail-tester.com. Pro-tip: for apps that don’t have a dedicated email testing feature, a mail-tester address can also receive most registration emails, share invitations, and other transactional emails.
  • Continuous deployment. If apps were being deployed automatically to the old server, remember to update the configurations and, if necessary, migrate the SSH accounts they were using for access or reconfigure CD to use new details.
  • Caching. Drupal and CiviCRM (and WordPress, if a caching plugin is in use) can wreak havoc with file caching, especially if permissions issues (see above…) interrupt part of the caching process, or create unclearable cache files. Familiarise yourself with each app’s cache management tools, and pay attention to any error messages when running them.
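
Expanding on the first bullet, a blunt but effective fix is to chown the volume contents to whatever numeric IDs the app expects inside its container; the container name, volume path, and IDs below are placeholders:

    # find out which numeric uid/gid the app runs as inside its container
    docker exec app_web id

    # fix up ownership on the host side of the volume to match
    chown -R 991:991 /var/lib/docker/volumes/app_media/_data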

We ran into a couple of specific wrinkles with Nextcloud, including that it stores apps’ data in a directory named after an instanceid which is stored in config/config.php – we found we either needed to rename this directory when generating a fresh instanceid, or otherwise copy the old instanceid across.
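
A quick sketch of checking the instanceid and renaming the appdata directory to match, assuming a standard Nextcloud layout (the paths and IDs are placeholders):

    # read the instanceid from the copied configuration
    grep instanceid /var/www/nextcloud/config/config.php

    # if a fresh instanceid was generated, rename the old appdata directory to match it
    mv /var/www/nextcloud/data/appdata_OLDID /var/www/nextcloud/data/appdata_NEWID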

Finally: email migrations

One huge, and slightly unexpected, reason for a lot of our recent migration work has been email: after years of including mailboxes with the cost of domain name registrations, the very widely-used Gandi.net decided to start charging extra for them, and we’ve also seen significant mail price increases from other corporate providers.

These changes came at a time when Autonomic had been test-driving running our own mailserver for a couple of years, including joining the development community of Mailu, an all-in-one mail stack – and created an interesting opportunity to get familiar with larger and larger email migrations.

Recently, we’ve moved over dozens of mailboxes for collective.tools to their own Mailu instance, and migrated 8 domains to our own mail.autonomic.zone instance.

Our tool of choice for IMAP migrations is imapsync; we’ve used both the web interface and the command-line tool to great effect, and it hasn’t let us down yet!
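
A typical invocation looks something like the following, where the hosts, user, and password files are placeholders; imapsync is designed to be re-run safely, since it only copies messages that are missing on the destination:

    imapsync \
      --host1 mail.oldprovider.example --user1 alice@example.net --passfile1 alice-old.txt \
      --host2 mail.autonomic.zone      --user2 alice@example.net --passfile2 alice-new.txt \
      --ssl1 --ssl2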

For the collective.tools migration, we also ended up doing some lightweight scripting of Mailu using its API, to generate users and set passwords from a data file and save us some clicking. We’ll be open-sourcing our scripts soon as part of a general write-up of that excellent inter-coöp partnership, but we’re happy to share them on request too if anyone needs them sooner: launch a mail at boop@autonomic.zone.