Here at the OER Foundation, where we build and maintain the computer services used by the OERu, its learners, and its educators around the globe, we've come up with a tech process that seems to work very nicely, with minimal cost and maintenance overhead. It involves setting up hosting infrastructure to adhere to some conventions that have emerged over time (and are subject to change if we identify better ones).
This post gives you a step-by-step process to create a host that adheres to those conventions, making it a perfect jumping off point for hosting any of the Free and Open Source Software-based services we use.
Step one - a suitable host
The first step is to get yourself an entry-level virtual server or compute instance (also sometimes referred to as a "VPS" or Virtual Private Server, or even a "VM" a Virtual Machine).
I generally use DigitalOcean (I have no affiliation with the company), but there are many other commodity hosting services (check out Vultr or Linode, for example) around the world which offer comparably (or better) spec'd servers for USD5.00/month, or USD60.00/year. For that you get a Gigabyte (GB) of RAM, a processor, and 40GB of SSD (Solid State Drive = faster) storage.
A server (a "Droplet" in Digital Ocean parlance) with a GB of RAM and 20+ GB of disk space will be sufficient for most of our individual services. If you expect it to have heavy traffic, or you might want to add more services, you might want to invest in a higher-spec server up-front (because, among other things, it'll offer you more disk space). Most of our servers are USD40/month instances (USD480/year) which buys 8GB of RAM, 4 virtual processors, and 160GB of disk space.
If you're wanting to host something, like Mastodon or NextCloud, with substantial potential for chewing up disk space with user-uploaded content, you might want to start off with an additional storage option which is easier to expand if required, and has the added benefit of vastly reducing the hassles associated with recovering from a 'full disk' by moving that storage blowout away from your system disk.
You'll create an account for yourself on your chosen hosting provider (it's a good idea to use Two Factor Authentication, aka 2FA, on your hosting account so that no one can log in as you and, say, delete your server unexpectedly - you'll find instructions on how to set up 2FA on your hosting provider's site) and create an Ubuntu 20.04 server (or the most recent 'Long Term Support' (LTS) version - the next will be 22.04, due in April 2022) in the 'zone' nearest to you (or your primary audience, if that's different).
If you don't already have an SSH key on your computer, I encourage you to create one and specify the 'public key' of your SSH identity during the server creation process - that should allow you to log in without needing a password!
You'll need to note the server's IPv4 address (it'll be a series of 4 numbers, each 0-255, separated by full stops, e.g. 22.214.171.124), and you should also be aware that your server will have a newer IPv6 address, which will be a set of 8 four-hex-character values (each hex character can have one of 16 values: 0-9 and A-F) separated by colons, e.g. 2604:A880:0002:00D0:0000:0000:20DE:9001. With one or the other of those IPs, you should be able to log into your new server via SSH. If you're on a UNIX command line (e.g. a Linux or MacOS desktop), do this in a terminal (on Windows, I understand people use a tool called PuTTY for SSH, in which case follow the app's instructions):
ssh [your server IPv4 or IPv6]
followed by the ENTER key (that'll be true for any line of commands I provide).
In some cases, depending on your hosting provider, you'll have a password to enter, but if you've specified your pre-existing public SSH key, you shouldn't need to enter a password at all - you should simply be logged in. To check which user you are, you can type
whoami
If it returns
root, you're the root or super-admin of the server. If not, you're a normal user (some hosting providers have a convention of giving you a default user named "ubuntu" or perhaps "debian").
Now that you're logged in, it's worth doing an upgrade of your server's Ubuntu system! Do that as follows:
sudo apt-get update && sudo apt-get dist-upgrade
Usually the default user, even if it's not the root user, will have the ability to use the
sudo command modifier - it means "do this action as the root (aka the 'Super User', thus 'su' for short) user". You may be asked to enter your password as a security precaution the first time you run a command prefaced by
sudo. Enter it, and the command should run. The system shouldn't bother you for it again unless you leave your terminal unused for a while and come back to it.
Get your Domain lined up
You will want a domain to point at your server, so you don't have to remember the IP address. There are thousands of domain "registrars" in the world who'll help you do that... You just need to "register" a name, and you pay a yearly fee (usually between USD10-30 depending on the country and the "TLD", or Top Level Domain. There are national ones like .nz, .au, .uk, .tv, .sa, .za, etc., and international domains (mostly associated with the US) like .com, .org, .net, and a myriad of others. Countries decide how much their domains wholesale for, and registrars add a margin for the registration service).
Here in NZ, I use the services of Metaname (they're local to me in Christchurch, and I know them personally and trust their technical capabilities). If you're not sure who to use, ask your friends. Someone's bound to have recommendations (either positive or negative, in which case you'll know who to avoid).
Once you have selected and registered your domain, you can 'manage your Zone' (usually through a web interface provided by the registrar) to set up an A Record, which associates your website's name with the IPv4 address of your server. You should just need to enter your server's IPv4 address and the domain name (or sub-domain) you want to use for the web service you're setting up.
Nowadays, if your Domain Name host offers it (some don't, meaning you might be better off with a different one), it's also important to define an IPv6 record, which is called an AAAA Record... you put in your IPv6 address instead of your IPv4 one.
You might be asked to set a "Time-to-live" (which has to do with the length of time Domain Name Servers are asked to "cache" the association that the A Record specifies) in which case you can put in 3600 seconds or an hour depending on the time units your interface requests... but in most cases that'll be set to a default of an hour automatically.
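By way of illustration, here's roughly what those two records look like in standard zone-file notation (the domain and addresses below are documentation placeholders, not real values - most registrars will present this as a web form rather than a raw file):

```
example.org.    3600    IN    A       192.0.2.10
example.org.    3600    IN    AAAA    2001:db8::10
```

The 3600 is the Time-to-live in seconds discussed above.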
Log into your server
You should be able to test that your A and AAAA Records have been set correctly by logging into your server via SSH using your domain name rather than the IPv4 or IPv6 address you used previously. It should (after you accept the SSH warning noting that the server is being accessed under a new name) work the same way your original SSH login did. On Linux, you'd SSH via a terminal and enter
ssh root@[domain name]. You can do the same on MacOS and, on Windows, people typically use the PuTTY software mentioned above...
This will log you into your server as the 'root' user. It's not considered good practice to access your server as root (it's too easy to completely screw it up). Best practice is to create a separate 'non-root' user who has 'sudo' privileges and the ability to log in via SSH. If you are currently logged in as 'root', you can create a normal user for yourself via (replace [username] with your chosen username):
adduser [username]
adduser [username] ssh
adduser [username] sudo
You'll also want to set a password for user [username], if the adduser command didn't already prompt you for one:
sudo passwd [username]
then become that user temporarily (note, the root user can 'become' another user without needing to enter a password):
sudo su - [username]
and create an SSH key and, in the process, the
.ssh directory (directories starting with a '.' are normally 'hidden') containing the
.ssh/authorized_keys file into which to put your public SSH key:
ssh-keygen -t rsa -b 2048
nano ~/.ssh/authorized_keys
and in that file, copy and paste (without spaces on either end) your current computer's public SSH key (never publish your private key anywhere!), save and close the file.
From that point, you should be able to SSH to your server via
ssh [username]@[domain name] without needing to enter a password.
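One gotcha worth knowing about: sshd quietly ignores your authorized_keys file if its permissions are too loose. A minimal sketch of setting up the directory and file with the permissions sshd expects (the key string here is a placeholder, not a real public key):

```shell
# create the hidden .ssh directory and the authorized_keys file,
# with the permissions sshd requires (700 directory, 600 file)
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
# append your workstation's *public* key - placeholder shown here
echo "ssh-rsa AAAAB3...placeholder you@your-workstation" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```

If key-based login still fails, running ssh with the -v option usually shows why.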
These instructions use 'sudo' in front of commands because I assume you're using a non-root user. The instructions will still work fine even if you're logged in as 'root'.
Sort out your details
You'll need to replace the [bracketed] variables in the configuration files and command lines below with your own values. These are the values you need to find or create.
[domain name] - the fully qualified domain name or subdomain by which you want your WPMS (WordPress Multisite, or whichever service you're installing) to be accessed. You must have full domain management ability on this domain.
[port] - this is an unused port, i.e. not used by any other service, that will be used by the Nginx reverse proxy to talk to your Docker Nginx webserver container. A conventional option would be 8080... If that's already in use, try 8081, etc. You can check what ports are in use with the command
sudo netstat -punta
(the ports in use are listed after the ':' in each case)
- MariaDB/MySQL database details
- [database root password] - the administrative user (root) password for this server - use a random password - see below
- [your database password] - if you, optionally, want to set up an admin user for yourself on this server. Paired with [your username] on the server.
- Authenticating SMTP details - this is required so your web service can send emails to users - crucial things like email address validation and password recovery emails...
- [smtp host] - the domain name or IPv4 or IPv6 address of an SMTP server
- [smtp reply-to-email address] - a monitored email to which people can send email related to this WordPress site, e.g. webmaster@[domain name]
- [smtp user] - the username (often an email address) used to authenticate against your SMTP server, provided by your email provider.
- [smtp password] - the accompanying password, provided by your email provider.
- Docker.com login details (you'll need a username and password).
Note: not all values in all files surrounded by [square brackets] need to be replaced! If they're not included in the list above, leave them as you find them!
To generate decent random passwords, we encourage you to follow our how to create strong random passwords tutorial.
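If you just need something quick at the command line, one common approach (an illustration - the tutorial above covers better options) is to draw random characters from the kernel's entropy source:

```shell
# generate a 24-character alphanumeric password from /dev/urandom
PASSWORD=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "$PASSWORD"
```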
Step two - prepare the host
Preparing the host involves ensuring your firewall, UFW, is configured properly, installing the Nginx webserver to act as your host's reverse proxy, and installing MariaDB (MySQL compatible but better, for a variety of reasons including that it's not controlled by Oracle) and configuring it properly. Then we can create the MariaDB database specifically for the web service you're wanting to install. We also suggest you install Postfix so your server can send out email to you, and finally, we'll ensure that your server knows how to launch Docker containers and manage them with Docker Compose.
Before we do anything else, let's make sure your Ubuntu package repository is up-to-date.
sudo apt-get update
If you pause this build process for more than a few hours, it pays to run it again before you continue on.
Firewall with UFW
No computer system is ever fully secure - there're always exploits waiting to be found, so security is a process of maintaining vigilance. Part of that is reducing exposure - minimising your "attack surface". Use a firewall -
ufw is installed on Ubuntu by default and is easy to set up and maintain. Make sure you've got an exception for SSH before enabling the firewall (without it, you could lock yourself out of your machine! Doh!):
sudo ufw allow ssh
sudo ufw enable
Run the following commands to allow your Docker containers to talk to other services on your host.
sudo ufw allow in on docker0
sudo ufw allow from 10.0.0.0/8 to any
Specifically for Docker's benefit, you need to tweak the default forwarding rule (I normally use
vim as my editor, but an alternative, also installed by default on Ubuntu,
nano, is probably easier to use for simple edits like this, so here I'll use):
sudo nano /etc/default/ufw
and find the line
DEFAULT_FORWARD_POLICY="DROP"
and tweak it to look like this (commenting out the default, but leaving it there for future reference!):
#DEFAULT_FORWARD_POLICY="DROP"
DEFAULT_FORWARD_POLICY="ACCEPT"
and then save and exit the file (CTRL-X and then 'Y').
You also have to edit
/etc/ufw/sysctl.conf and remove the "#" at the start of the following lines, so they look like this:
sudo nano /etc/ufw/sysctl.conf
# Uncomment this to allow this host to route packets between interfaces
net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
and finally restart the network stack and ufw on your server
sudo systemctl restart systemd-networkd
sudo service ufw restart
Installing the Nginx webserver/reverse proxy
In the configuration I'm describing here, you'll need a webserver running on the server - it'll be acting as a reverse proxy for the Docker-based Nginx instance described below. I prefer the efficiency of Nginx and clarity of Nginx configurations over those of Apache and other open source web servers. Here's how you install it.
sudo apt-get install nginx-full
To allow Nginx to be visible via ports 80 and 443, run
sudo ufw allow "Nginx Full"
To check that all worked, you can put
http://[domain name] into your browser's address bar, and you should see a default "NGINX" page...
Note: make sure your hosting service is not blocking these ports at some outer layer (depending on who's providing that hosting service you may have to set up port forwarding).
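To give you a feel for what the reverse proxying looks like in practice, here's a rough sketch of the relevant part of an Nginx server block on the host, passing traffic through to a Docker container listening on [port] (this is illustrative only - a real deployment would also need an HTTPS configuration on port 443, and each service's own tutorial provides the full configuration):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name [domain name];

    location / {
        # hand requests to the Docker-based Nginx on the loopback interface
        proxy_pass http://127.0.0.1:[port];
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```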
Because a server like this one is set up to perform lots of rather complex jobs, it's vital that your server has the ability to send you emails to alert you of problems, like failed updates or backups. We encourage you to follow our instructions on how to configure your server to use the
Postfix SMTP server to send out email, using your Authenticating SMTP details.
If your web service requires MySQL or the drop-in replacement we prefer, MariaDB, you can follow our instructions for installing MariaDB on this host.
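Once MariaDB is installed, creating a database for a given web service generally boils down to a few SQL statements run as the database root user (e.g. via sudo mysql). The names below are placeholders - use whatever your chosen service's instructions specify:

```sql
-- create a dedicated database and user for the service
CREATE DATABASE servicedb CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'serviceuser'@'localhost' IDENTIFIED BY '[your database password]';
GRANT ALL PRIVILEGES ON servicedb.* TO 'serviceuser'@'localhost';
FLUSH PRIVILEGES;
```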
Regular automatic database backups
Finally, it's a good idea (but optional - if you're in a hurry, you can do this later) to make sure that your server is maintaining backups of your database - in this case, we'll use the
automysqlbackup script to automatically maintain a set of dated daily database backups. It's easy to install, and the database backups will be in
/var/lib/automysqlbackup in dated folders and files. If you haven't set up Postfix in the previous step, just beware you will be asked to set it up when installing automysqlbackup.
sudo apt-get install automysqlbackup
That's all there is to it. It should run automatically every night and store a set of historical SQL snapshots that may well save your bacon sometime down the track!
Set up Docker and Docker-Compose
First, you need to set up Docker support on your server - use the 'repository method' for Ubuntu 20.04 and choose the 'x86_64 / amd64' tab!
Also, if you're using a non-root user, follow the complete instructions including setting up Docker for your non-root user.
The way I implement this set of containers is to use Docker Compose, which depends on the Python script interpreter (version 3+). I suggest using the latest installation instructions provided by the Docker community. Of the options provided, I use the 'alternative instructions', employing the 'pip' approach. This is what I usually do (to summarise the pip instructions):
The first step is to install Ubuntu's Python 3 pip (which is a bit outdated)...
sudo apt install python3-pip
then use the Ubuntu-packaged instance, called pip3, to install the latest Python 3 pip:
sudo pip3 install -U pip
and (finally) install the docker-compose script:
sudo pip install -U docker-compose
Set up our conventional directories
To set up your server, I recommend setting up a place for your Docker containers as per our Docker-related conventions:
sudo mkdir -p /home/data/[domain name]
sudo mkdir -p /home/docker/[domain name]
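As a sketch of how these directories get used: a docker-compose.yml living in /home/docker/[domain name] typically references persistent data under /home/data/[domain name], and exposes its webserver only on the loopback port your host Nginx proxies to. The image and paths here are placeholders - each service's tutorial supplies the real file:

```yaml
version: "3"

services:
  webserver:
    image: nginx:stable            # placeholder - your service's image goes here
    restart: unless-stopped
    ports:
      - "127.0.0.1:[port]:80"      # reachable only by the host's reverse proxy
    volumes:
      - /home/data/[domain name]:/var/www/html
```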
And once you've done that, you're done preparing your server, and you can start the fun part - setting up the service(s) you're after, like a WordPress Multisite or a NextCloud + OnlyOffice instance or your own BitWarden or many others!