At the OER Foundation we host an array of Free and Open Source Software (FOSS) services. To keep things manageable, we have established a set of conventions to which we adhere, and I'm describing them here in case it's helpful to others who can either learn from, or improve on, our experience.
To begin with, we use Ubuntu Linux for all of our hosts. We deploy the current Long Term Support (LTS) version of Ubuntu (at the time of this writing, it's 20.04) on commodity Linux hosting services, like Digital Ocean and Hetzner.
We access all our hosts on the command line, via secure, end-to-end encrypted connections, using OpenSSH. We don't use a server management tool (neither an open source one, like ISPConfig or Webmin, nor proprietary ones like Plesk or cPanel). They would just slow us down. We prefer the efficiency and power afforded us by SSH. At any given time, I'm logged into half a dozen remote systems, issuing updates, deploying code, diagnosing issues, checking resource usage, shoring up disk space, or tweaking Docker configurations.
Software on the host
On the host, we run the following:
- We run etckeeper on each server to track changes in configuration. In general, we have these changes pushed to a central server to provide a reference in the event of a disaster (e.g. the server is compromised and needs to be 'nuked from orbit'...).
- The Nginx web server - it's our reverse proxy of choice. All of our Let's Encrypt certificates terminate at the host server, greatly simplifying the deployment of the individual services running on it.
- The Postfix SMTP server - it's our outgoing mailserver of choice, and we set it up as a smarthost, so that the server can send status and administrative messages to us via our MailCow email infrastructure using authenticating secure SMTP.
- The Docker containerisation framework using the Docker Compose system to coordinate collections of containers that work together to provide specific services. Docker allows us to run a bunch of effectively independent Linux systems, each with potentially different sets of complex software dependencies, efficiently and reliably on a single host.
- For each service, with an individual domain or subdomain name, we get a Let's Encrypt SSL certificate using the letsencrypt scripting toolchain. It is free of cost and easy to build into our workflow, so it's a 'no-brainer' as far as we're concerned, ensuring all of our sites protect the data of their users with fully encrypted links!
- For applications we deploy that depend on the MySQL database, we run a single instance of the fully MySQL-compatible MariaDB on our host, and the Docker containers connect to it via the internal network. We do it this way to ease the process of backing up the various MariaDB databases, which would be much more fiddly if we were running a bunch of individual MariaDB or MySQL instances in Docker containers. We use MariaDB because we prefer its development model, its design decisions, and the fact that it's not run by the Oracle Corporation, which owns the MySQL project.
- We use Restic to perform remote encrypted incremental file backups for the filesystems on each server. We send them to a development server we have which has oodles of disk space (BTRFS RAID1 if you're curious) for safekeeping.
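To make the reverse-proxy arrangement above more concrete, here is a minimal sketch of a host-level Nginx server block that terminates a Let's Encrypt certificate and passes requests through to a service's Docker container. The domain name, certificate paths, and upstream port here are hypothetical examples, not our actual configuration:

```nginx
# Hypothetical example: /etc/nginx/sites-available/service.example.org
server {
    listen 443 ssl;
    server_name service.example.org;

    # Let's Encrypt certificate, terminating at the host
    ssl_certificate     /etc/letsencrypt/live/service.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.example.org/privkey.pem;

    location / {
        # The service's own Nginx container, published on a local port
        # (8080 is an illustrative choice)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    # Redirect plain HTTP to HTTPS
    listen 80;
    server_name service.example.org;
    return 301 https://$host$request_uri;
}
```

Because encryption is handled once, here at the host, the containers behind the proxy only ever need to speak plain HTTP on the loopback interface.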
When each new Ubuntu LTS release comes out, we tend to create an install image for our commodity hosting provider (most recently, Digital Ocean), which has all these things pre-installed.
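The host-level stack described above can be sketched as a short provisioning script. The package names are the standard Ubuntu 20.04 ones; this is an illustrative outline of what goes into such an image, not our exact build script:

```shell
#!/bin/sh
# Sketch: provisioning a fresh Ubuntu 20.04 host with our standard stack.
set -e

apt-get update

# Configuration tracking, reverse proxy, outgoing mail, database, backups
apt-get install -y etckeeper nginx postfix mariadb-server restic

# Docker from the distribution repositories, plus Docker Compose
apt-get install -y docker.io docker-compose

# Certbot (the current name of the Let's Encrypt client) and its Nginx plugin
apt-get install -y certbot python3-certbot-nginx
```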
For the various services we deploy using Docker - more specifically, using the very handy Docker Compose scripting toolchain - we have a number of conventional practices.
- We put all our Docker Compose recipes in a common directory, /home/docker - each service goes in a directory named for the domain of that service. For example, this Tech blog, tech.oeru.org, is in the directory /home/docker/tech.oeru.org on our server about.oerfoundation.org. Each service directory has a docker-compose.yml file which defines the collection of Docker containers making up that service. It also specifies the local directories in which we want data to persist, even surviving the removal of the relevant Docker containers. In the case of this site, the containers include an Nginx webserver container, which passes requests delivered to it by our host's Nginx reverse proxy to a PHP scripting engine container which, in turn, consults the Drupal 8 source code making up the site, and the MariaDB instance on the host, which stores the data. There is also a PHP container which automatically runs the behind-the-scenes 'cron' tasks that every Drupal site requires.
- We store that per-service persistent data in a similarly named directory under /home/data, so this site's data is stored in /home/data/tech.oeru.org. That data includes, in the case of this site, an Nginx directory with a generic Drupal website configuration, and a directory containing all the Drupal 8 core source code along with its theme, module, and library dependencies. In a few cases, where we're using tools with specialised Docker deployment practices, like Mailcow, BigBlueButton, and Discourse, the persistent data for the site is stored under the /home/docker directory to avoid unnecessarily complicating our use of those tools. So our conventions are just that - conventions, not hard and fast rules, if there's a good reason to compromise them.
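A docker-compose.yml along these lines illustrates the convention - this is a simplified sketch, with hypothetical container images, ports, and paths standing in for our real values:

```yaml
# Hypothetical sketch of /home/docker/tech.oeru.org/docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx:stable
    ports:
      # published only on localhost - the host's Nginx reverse proxy sits in front
      - "127.0.0.1:8080:80"
    volumes:
      # persistent data lives under /home/data, matching the service directory name
      - /home/data/tech.oeru.org/nginx:/etc/nginx/conf.d
      - /home/data/tech.oeru.org/web:/var/www/html
    restart: unless-stopped
  php:
    image: php:7.4-fpm
    volumes:
      - /home/data/tech.oeru.org/web:/var/www/html
    # the host's single MariaDB instance is reachable via the Docker bridge
    # gateway (172.17.0.1 is Docker's default - an assumption in this sketch)
    extra_hosts:
      - "host.mariadb:172.17.0.1"
    restart: unless-stopped
```

Removing and recreating these containers leaves everything under /home/data/tech.oeru.org untouched, which is the whole point of the convention.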
For all of our service configurations, and, in particular our Docker deployments, we use Git to provide source code versioning and management. We also use it to deploy code that we have developed ourselves. Where possible, we use (and contribute back to!) upstream git repositories supplied by the communities surrounding many of the FOSS services we offer. We see that as doing our part to be good, contributing FOSS community members.
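In practice, updating a service deployed this way is a short, repeatable sequence. The path below follows the conventions above for this site; the commands are a sketch of the general workflow rather than a fixed script:

```shell
# Pull the latest recipe (and any code we deploy via git),
# then recreate only the containers that have changed.
cd /home/docker/tech.oeru.org
git pull
docker-compose pull   # fetch updated container images
docker-compose up -d  # recreate changed containers in the background
```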
All of our git repositories are held on our own, self-hosted Gitlab instance: https://git.oeru.org - anyone is welcome to peruse the repositories, and we invite anyone interested in contributing to request an account (give us an idea of what you're interested in doing and how you'd like to participate!).