Creating your own OER Foundation-style Libre Self-hosting Infrastructure with Docker Compose and Ubuntu LTS
- Installing the Docker Engine, Docker Compose, and Let's Encrypt
- Setting up places for your Docker configurations and persistent Data
Tips for this tutorial
This tutorial is aimed at adventuresome would-be system administrators. I try not to assume any specialised knowledge on your part, and try to provide useful tips and explanations along the way to help you build a valid mental model of what you're doing. At the same time, this is not a trivial process. Luckily, if you try it out and decide not to follow through, so long as you delete your VPS, you should not be out-of-pocket by more than a few cents.
With this tutorial, I'm assuming you've got a computer with an Internet connection, that can run SSH (all modern systems should do that) and you can copy-and-paste stuff from this tutorial (in your browser) into either a terminal window (in which you're SSH'd into your VPS) or into a text editor. Note, if you find it difficult to paste into a terminal window, try using CTRL+SHIFT+V (CTRL+V is already used as a short-cut for something else in UNIX terminals since long before the Windows world started using CTRL+C and CTRL+V).
When I provide files you need to copy, look for the 'tokens' or placeholders with values you need to substitute (search-and-replace) in square brackets like this, [token name], with your own values. Any text editor worth using should let you do that!
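If you're comfortable in the terminal, the same search-and-replace can be done with sed. A minimal sketch, using a hypothetical file and example.org as the substituted value:

```shell
# Minimal sketch: replace the [domain name] placeholder in a (hypothetical)
# config file with a real value using sed.
printf 'server_name [domain name];\n' > /tmp/demo-config.txt
sed -i 's/\[domain name\]/example.org/g' /tmp/demo-config.txt
cat /tmp/demo-config.txt
# prints: server_name example.org;
```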
Create a Virtual Private Server
The first step is to create a place to host instances of your own LibreSoftware services.
You can use a local piece of computing hardware of sufficient capacity in your own home or organisation (e.g. a retired desktop should be sufficiently powerful), recognising that it needs to be reliable, because anyone using your solution will not be able to access it if it's not running. But doing that is no joke - making reliable systems is hard.
The more cost-effective self-hosting approach in our experience is to lease a low cost commodity Linux Virtual Private Server (aka a VPS) running Ubuntu Linux 22.04 (or the latest "Long Term Support" version - the next one, 24.04, will be released in April 2024). That's what we assume you're running for all our recent tutorials.
We have used quite a few commodity Linux VPS providers. Known good options include Digital Ocean, Linode, Vultr, Hetzner, and TurnkeyLinux. There are many (hundreds of) other credible options. We recommend you find one hosted in the network epicentre (which isn't necessarily the same as the 'geographic' epicentre) of your audience. In the past year, we shifted almost all of our hosting to Hetzner, as they've got the benefit of not being US-owned (they're German, and are therefore less likely to expose us to the egregiously over-reaching US Cloud and Patriot Acts). Shifting to Hetzner also reduced our infrastructure costs by about 50% compared to Digital Ocean. It also represents a 95% saving over Amazon AWS or Microsoft Azure hosting (we can't imagine how anyone could justify hosting with either of them).
If you have trouble getting a VPS, you might find this video I created for provisioning a VPS, using Digital Ocean as an example, useful. In my experience, the process for provisioning VPSs on other platforms is very similar. You'll find this process much easier than using either Microsoft Azure or Amazon AWS, which we strongly recommend against using.
VPS Properties:
We recommend that you provision a VPS with the following spec - these will suffice capacity-wise for all the tutorials we provide, although depending on your load, you might want to beef them up, which you can do as the need arises - you should be able to upgrade those specs in real time if required, except for your disk space. You can, however, provision a secondary storage space (you can start small and increase it as you need to). I will cover setting this up, as it'll make your life far easier in the medium-to-long term.
- 4-8 GB RAM - system volatile memory for running applications
- 2-4 Virtual CPUs - processing capacity
- 80-160 GB Disk space (NVMe storage is faster than SSD which is faster than spinning disk space) - long term storage for data
- running Ubuntu Linux 22.04 (the current Long Term Support version) - the operating system
- extra storage - 20-40GB extra space (can be expanded on fairly short notice) - a long term storage option separate from your operating system drive - this is very useful in the event that your app inadvertently fills your hard drive. You'll thank yourself for doing this (I can assure you from bitter experience). It's always a good idea to learn from others' past mistakes rather than repeat them!
You'll need to create an account for yourself on your chosen hosting provider (it's a good idea to use Two Factor Authentication, aka 2FA, on your hosting account so that no one can log in as you and, say, delete your server unexpectedly - you'll find instructions on how to set up 2FA on your hosting provider's site) and create an Ubuntu 22.04 (or the most recent 'Long Term Support' (LTS) version) server in the 'zone' nearest to you (or your primary audience, if that's different).
If you don't already have an SSH key on your computer, I encourage you to create one and specify its public key during the server creation process - that should allow you to log in without needing a password!
You'll need to note the server's IPv4 address (it'll be a series of 4 numbers, each 0-255, separated by full stops, e.g. 103.99.72.244), and you should also be aware that your server will have an IPv6 address, which will be a set of 8 four-hex-character values (each hex character can have one of 16 values: 0-9,A-F) separated by colons, e.g. 2604:A880:0002:00D0:0000:0000:20DE:9001. With one or the other of those IPs, you should be able to log into your new server via SSH. If you're on a UNIX command line (e.g. a Linux or MacOS desktop), do this in a terminal (on Windows, I understand people use a tool called Putty for SSH, in which case follow the app's instructions - or you'll find many tutorials on using it with a quick web search):
ssh [your server IPv4 or IPv6]
followed by the ENTER key (that'll be true for any line of commands I provide).
In some cases, depending on your hosting provider, you'll have a password to enter, but if you've specified your pre-existing public SSH key, you shouldn't need to enter a password at all - you should simply be logged in. To check which user you are, you can type
whoami
If it returns root
(there's also a convention of using a '#' as the command prompt), you're the root or super-admin of the server. If not, you're a normal user (some hosting providers have a convention of giving you a default user called "ubuntu" or perhaps "debian") with a prompt that is, by convention, a '$'.
Now that you're logged in, it's worth doing an upgrade of your server's Ubuntu system! Do that as follows (this works regardless of whether you're the root user or an unprivileged user with 'sudo' ability):
sudo apt update && sudo apt dist-upgrade
Usually the user, even if it's not the root user, will have the ability to use the sudo
command modifier - that means "do this action as the root (aka the 'Super User', thus 'su' in 'sudo' for short) user" - if you're a non-root user, you'll likely be asked to enter your password as a security precaution the first time you run a command prefaced by sudo
. Enter it, and it should run the command. Plus, the system shouldn't bother you for it again unless you leave your terminal unused for a while (usually 5 minutes) and come back to it.
At this point, I also like to install a cool software package called 'etckeeper' which records configuration changes on your VPS for future reference (it can be life-saving if trying to recover from an administrative mess-up!):
sudo apt install etckeeper git
which will also install some dependencies, including the very important (and relevant later on) 'git' version control system.
Key variables for your VPS
To set up your services, you'll need a few crucial bits of information related to your system's identity and external systems you'll need it to interact with. For example, as mentioned before, you'll need a domain name. For the rest of this tutorial, we'll use the convention of representing those variables as a name inside [], for example, the domain name you've picked, [domain name].
Here's a list of variables you'll need to know to complete the rest of this tutorial:
- [ipv4] and [ipv6] - your VPS' IPv4 and IPv6 addresses (the latter can be ignored if your cloud provider doesn't support IPv6 addresses) as described above.
- [domain name] - the fully qualified domain names or subdomains of a base [domain name] by which you want your services to be accessed. You must have full domain management ability on this domain. Example: nextcloud.oeru.org - that's the nextcloud subdomain of the oeru.org domain.
- Authenticating SMTP details - this is required so your services can send emails to users - crucial things like email address validation and password recovery emails...
- [smtp server] - the domain name or IPv4 or IPv6 address of an SMTP server
- [smtp port] - the port number on the server that is listening for your connection. By convention it's likely to be 465 or 587, or possibly 25.
- [smtp reply-to-email] - a monitored email address to which people can send email related to this service, e.g. notifications@[domain name]
- [smtp user] - the username (often an email address) used to authenticate against your SMTP server, provided by your email provider.
- [smtp password] - the accompanying password, provided by your email provider.
- [your email] - an email address to which system-related emails can be sent to you, perhaps something like webmaster@[domain name].
- [vps username] - the username you use on your server (by convention, these are one word, and all lower case).
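If you like working at the command line, you can also keep those values handy as shell variables while you follow along - a sketch with purely hypothetical placeholder values (substitute your own; none of this is required):

```shell
# Placeholder values - substitute your own. These are just shell variables
# for convenience while following the tutorial.
DOMAIN=example.org
SMTP_SERVER=smtp.example.org
SMTP_PORT=587
VPS_USER=youruser
echo "Setting up $DOMAIN (mail via $SMTP_SERVER:$SMTP_PORT) as $VPS_USER"
```

Remember that, like the EDIT variable below, these only last for your current terminal session.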
Get your Domain lined up
You will want to have a domain to point at your server, so you don't have to remember the IP address. There are thousands of domain "registrars" in the world who'll help you do that... You just need to "register" a name and pay a yearly fee (usually between USD10-30 depending on the country and the "TLD" (Top Level Domain). There are national ones like .nz, .au, .uk, .tv, .sa, .za, etc., and international domains (mostly associated with the US) like .com, .org, .net, and a myriad of others. Countries decide how much their domains wholesale for, and registrars add a margin for the registration service.
Here in NZ, I use the services of Metaname (they're local to me in Christchurch, and I know them personally and trust their technical capabilities). If you're not sure who to use, ask your friends. Someone's bound to have recommendations (either positive or negative, in which case you'll know who to avoid).
Once you have selected and registered your domain, you can 'manage your Zone' (usually through a web interface provided by the registrar) to set up an A Record, which associates your website's name with the IPv4 address of your server. So you should just be able to enter your server's IPv4 address against the domain name (or sub-domain) you want to use for the web service you're setting up.
Nowadays, if your Domain Name host offers it (some don't, meaning you might be better off with a different one), it's also important to define an IPv6 record, which is called an AAAA Record... you put in your IPv6 address instead of your IPv4 one.
You might be asked to set a "Time-to-live" (which has to do with the length of time Domain Name Servers are asked to "cache" the association that the A Record specifies) in which case you can put in 3600 seconds or an hour depending on the time units your registrar's interface requests... but in most cases that'll be set to a default of an hour automatically.
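For illustration, this is roughly what those records look like in zone-file form (hypothetical names, with the example addresses from earlier - most registrars' web forms hide this syntax from you):

```
; hypothetical zone entries - your registrar's interface may differ
nextcloud.example.org.  3600  IN  A     103.99.72.244
nextcloud.example.org.  3600  IN  AAAA  2604:a880:2:d0::20de:9001
```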
Editing files
In the rest of this tutorial, we're going to be editing quite a few files via the command line. If you're new to this, I recommend using the 'nano' text editor which is installed by default on Ubuntu Linux systems. It's fairly simple, and all of its options are visible in the text-based interface. I tend to use a far more powerful but far less beginner-friendly editor called 'vim'. There're other editors people might choose, too. To use your preferred editor for the rest of the tutorial, enter the following to set an environment variable EDIT, specifying your preferred editor, e.g.:
EDIT=$(which nano)
or, if you're like me
EDIT=$(which vim)
so that subsequent references to $EDIT will invoke your preferred editor. Note that $(which nano) is a 'command substitution': the shell runs the command inside the $() - here 'which nano', which finds the full path to the named command - and replaces the $() with its output, so it sets the value of EDIT to the path of the nano command in this case. On my current machine, the value it returns is /usr/bin/nano, which is pretty typical.
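You can see command substitution in action with any command - here using sh, which exists on every UNIX-like system (substitute nano or vim, as above):

```shell
# $(...) runs the command inside and substitutes its output.
# 'sh' is used here only because it's guaranteed to exist.
EDIT=$(which sh)
echo "EDIT is set to: $EDIT"
# the variable now holds a full path, so "$EDIT somefile" would run it
test -x "$EDIT" && echo "and it is executable"
```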
To test (at any time) whether your session still knows your $EDIT command, run
echo $EDIT
if it returns the path to your preferred editor, you're good to go. If not, just reassert (run again) the EDIT= line from above!
Note: if you log out and back in again, change users, or create a new terminal tab/session, you'll need to reassert the EDIT value.
Set up an unprivileged user for yourself
You should be able to test that your A and AAAA Records have been set correctly by logging into your server via SSH using your domain name rather than the IPv4 or IPv6 address you used previously. It should (after you accept the SSH warning about connecting to the server under its new name) work the same way your original SSH login did.
This will log you into your server as it did the first time, either as 'root' or the default unprivileged user. It's not considered good practice to access your server as root (it's too easy to completely screw it up by accident). It's a good idea to create your own separate 'non-root' user who has 'sudo' privileges and the ability to log in via SSH. If you are currently logged in as 'root', you can create a normal user for yourself via (replace [vps username] with your chosen username - in my case, I'd use U=dave
):
U=[vps username]
adduser $U
adduser $U ssh
adduser $U admin
adduser $U sudo
You'll also want to set a password for user [vps username] (we have a tutorial on creating good passwords):
passwd $U
then become that user temporarily (note, the root user can 'become' another user without needing to enter a password) and create an SSH key and, in the process, the .ssh
directory (directories starting with a '.' are normally 'hidden' - you can show them in a directory listing via ls -a
) for the file into which to put your public SSH key:
su $U
after which you need to re-run your EDIT command:
EDIT=$(which nano)
and then run
ssh-keygen -t rsa -b 2048
$EDIT ~/.ssh/authorized_keys
and in that file, copy and paste (without spaces on either end) your current computer's public ssh key (never publish your private key anywhere!), save and close the file.
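For reference, each entry in the authorized_keys file is a single long line: the key type, the key material, and an optional comment. A hypothetical (truncated) example:

```
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN...rest-of-key-material... dave@my-laptop
```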
and then leave the 'su' state, back to the superuser:
CTRL+D
or type exit
From that point, you should be able to SSH to your server via ssh [vps username]@[domain name]
without needing to enter a password.
These instructions use 'sudo' in front of commands because I assume you're using a non-root user. The instructions will still work fine even if you're logged in as 'root' (the 'sudo' will be ignored as it's unnecessary).
Configure the VPS
First things first. Let's make sure you've got the time zone set appropriately for your instance. It'll probably default to 'UTC' (Greenwich Mean Time). For our servers, I tend to pick 'Pacific/Auckland' which is our time zone. Run this
sudo dpkg-reconfigure tzdata
and pick the appropriate timezone. You can just leave it running UTC, but you might find it tricky down the track if, for example, you're looking at logs and having to constantly convert the times into your timezone.
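If you're unsure which timezone name to pick, you can preview one without reconfiguring anything by setting TZ for a single command:

```shell
# show the current time in UTC and in a candidate timezone
TZ=UTC date
TZ=Pacific/Auckland date
```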
Configuring your firewall
In the name of safety from the get-go, let's configure our firewall. We work on the basis of explicitly allowing in only what we want to let in (i.e. a 'default deny' policy).
First we'll enable the use of SSH through the firewall (not doing this could lock you out of your machine!):
sudo ufw allow ssh
while we're here, we'll also enable data transfer from the internal (to the VPS) Docker virtual network and the IP range it uses for Docker containers:
sudo ufw allow in on docker0
sudo ufw allow from 172.0.0.0/8 to any
Then we'll enable forwarding from internal network interfaces as required for Docker containers to be able to talk to the outside world:
sudo $EDIT /etc/default/ufw
and copy the line DEFAULT_FORWARD_POLICY="DROP"
tweak it to look like this (commenting out the default, but leaving it there for future reference!):
#DEFAULT_FORWARD_POLICY="DROP"
DEFAULT_FORWARD_POLICY="ACCEPT"
and then save and exit the file (CTRL-X and then 'Y' if your editor is nano).
You also have to edit /etc/ufw/sysctl.conf
sudo $EDIT /etc/ufw/sysctl.conf
and remove the "#" at the start of the following lines, so they look like this:
# Uncomment this to allow this host to route packets between interfaces
net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
Then we need to restart the network stack to apply that configuration change
sudo systemctl restart systemd-networkd
(on older Ubuntu systems this would have been done via sudo service networking restart
...)
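You can confirm the kernel has picked up the forwarding setting by reading it back from /proc (on your VPS it should show 1 once the above is applied; on any other machine it simply reflects that machine's current setting):

```shell
# 1 means packet forwarding is enabled, 0 means disabled
cat /proc/sys/net/ipv4/ip_forward
```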
Next we have to enable the (default on Ubuntu) UFW firewall to start at boot time to keep your machine relatively safe.
sudo $EDIT /etc/ufw/ufw.conf
And set the ENABLED variable near the top:
ENABLED=yes
Now you can formally start UFW:
sudo ufw enable
Install Nginx
Next we need to install the Nginx web server and reverse-proxy, as well as the Let's Encrypt SSL certificate generator, both of which are crucial for any secure web services you might want to host. Nginx is a more efficient and flexible alternative to the older Apache web server you might've seen elsewhere (Nginx recently surpassed Apache as the most widely used web server on the Internet).
sudo apt install nginx-full letsencrypt ssl-cert
You'll get a couple of pop-up windows in your terminal - just hit ENTER to accept the defaults. Having installed it, we need to create firewall rules to allow external services to see it:
sudo ufw allow 'Nginx Full'
You can check if the firewall rules you requested have been enabled:
sudo ufw status
Outgoing VPS Email (optional)
Although it's not absolutely necessary (you can do this section later if you're in a big hurry), it's very useful for your server to be able to send out emails, like status emails to administrators (perhaps you) about things requiring their attention, e.g. the status of backups, pending security updates, expiring SSL certificates, etc.
To do this, we'll set up the industrial strength Postfix SMTP server, which is pretty quick and easy. First we install Postfix and a command line mail client for testing purposes.
sudo apt install postfix bsd-mailx
During the install, you'll be asked to select a bunch of configuration parameters. Select the defaults except:
- Select "Internet Site with Smarthost",
- fill in the domain name for your server [domain name],
- the [smtp server] name and [smtp port] (in the form [smtp server]:[smtp port], e.g. smtp.oeru.org:587 ) of your "smarthost" who'll be doing the authenticating SMTP for you, and
- the email address to which you want to receive system-related messages, [your email].
After that's done, we set a default address to which the server's mail will be sent: the [your email] selected above. First
sudo $EDIT /etc/aliases
We need to make sure the "root" user points to a real email address. Add a line at the bottom which says (replacing [your email] with your email :) )
root: [your email]
After which you'll need to convert the aliases file into a form that postfix can process, simply by running this:
sudo newaliases
Then we have to define the authentication credentials required to convince your mail server that you're you!
sudo $EDIT /etc/postfix/relay_password
and enter a single line in this format:
[smtp server] [smtp user]:[smtp password]
as an example, this is more or less what I've got for my system. Note that the [smtp user] in my case is an email address (this is common with many SMTP systems - the user is the same as the email address):
smtp.oeru.org smtp-work@fossdle.org:YourObscurePassw0rd
then save the file and, like the aliases file, run the conversion process (which uses a slightly different mechanism):
sudo postmap /etc/postfix/relay_password
Finally, we'll edit the main configuration file for Postfix to tell it about all this stuff:
sudo $EDIT /etc/postfix/main.cf
If your SMTP server uses port 25 (the default for unencrypted SMTP) you don't have to change anything, although most people nowadays prefer to use StartTLS or otherwise encrypted transport to ensure that (at least) your SMTP authentication details are transferred encrypted. That means using port 587 or 465. If you're using either of those ports, find the "relayhost = [your server name]" line... and add your port number after a colon, like this
relayhost = [smtp server]:[smtp port]
or, for example:
relayhost = smtp.oerfoundation.org:465
Then we have to update the configuration for Postfix to ensure that it knows about the details we've just defined (this command will automatically back up the original default configuration so you can start from scratch with the template below):
sudo mv /etc/postfix/main.cf /etc/postfix/main.cf.orig && sudo $EDIT /etc/postfix/main.cf
You can just copy-and-paste the following into it, substituting your specific values for the [tokens]. Note: the IPv6 designations in the line mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
are not tokens - you can leave those unchanged.
# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
# fresh installs.
compatibility_level = 3.6

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
#smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = [domain name]
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname, localhost
relayhost = [smtp server]:[smtp port]
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

# added to configure accessing the relay host via authenticating SMTP
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_password
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
# if you're using Ubuntu prior to 20.04, uncomment (remove the #) the
# earlier line smtp_tls_security_level = may to save errors in 'postfix check'
# and comment this line (by adding a # at the start)
smtp_tls_wrappermode = yes
Once you've created that main.cf
file, you can double check that your config is valid:
sudo postfix check
and if it's all ok, you can get Postfix to re-read its configuration:
sudo postfix reload
You can then try sending an email so see if it works!
By default, a command line application called "mail" is installed as part of the bsd-mailx package we installed alongside Postfix. You can use it to send a test email from the command line on your host, to verify you've got things working correctly! The stuff in <> are the keys to hit at the end of each line...
mail you@email.domain<ENTER>
Subject: Testing from your.relay.server.domain<ENTER>
Testing postfix remote host<ENTER>
<CTRL-D>
Cc:<ENTER>
Typing <CTRL-D> (hold down the Control or Ctrl key on your keyboard and press the "d" key) will finish your message, showing you a "Cc:" field, in which you can type other email addresses if you want to test sending to multiple addresses. When you then hit <ENTER>, it will attempt to send this email. It might take a few minutes to work its way through to the receiving email system (having to run the gauntlet of spam and virus filters on the way).
You can also always check the postfix system logs to see what postfix thinks about it using the command:
sudo less +G /var/log/mail.log
if your system doesn't have a /var/log/mail.log
, never fear! Try this instead:
sudo less +G /var/log/syslog
In either case, hit SHIFT+F to have the log update in real time. Use CTRL+C to stop following, and 'q' to exit back to the command prompt.
Installing the Docker Engine, Docker Compose, and Let's Encrypt
First let's install the Docker Engine, which (these days) comes with Docker Compose and the Let's Encrypt scripts that let you procure no-cost Secure Sockets Layer certificates to secure access to your server. You can follow the official Docker Engine install instructions, but I've summarised them here (If the following doesn't work for you, go back to the official instructions, because something might've changed since I wrote this).
First we want to make sure no old Docker engines are installed on this server (this probably won't do anything, but no harm in running it):
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
Second, we want to set up Docker's 'APT' repository, so you can keep Docker up-to-date with their latest versions (usually more up-to-date than those shipped with Ubuntu):
First we add Docker's official GPG key (copy and paste all of this at your command line):
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Then we have to add the Docker repository to our system's Apt sources and install the Docker Engine:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
With this approach, any updates made by the Docker community will be installed as part of your regular server upgrades.
Having installed the most recent release of the Docker engine, you should find that you've now got Docker Compose built in. Test that by running
docker compose version
which should return something like
Docker Compose version v2.18.1
If that's the case, superb, everything's great.
Backwards compatibility for Docker Compose
Since version 2 of Docker Compose, the 'Compose' capability has been incorporated into the Docker command itself (as a plugin shipped with the Docker Engine). Prior to version 2, using Docker Compose required installing a separate app, and it was run by typing docker-compose
rather than docker compose
... so something I do now, to accommodate my muscle memory of typing docker-compose
is to create a tiny script that lets me keep using that command, but have it call, instead, the new docker compose
functionality. I do this via
sudo $EDIT /usr/local/bin/docker-compose
into which I put the following:
#!/bin/bash
D=$(which docker)
$D compose "$@"
After saving that, we have to make the script 'executable' via
sudo chmod a+x /usr/local/bin/docker-compose
So you should now be able to run
docker-compose version
and get the same result that you did above for docker compose version
...
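If you're curious how that wrapper behaves, here's a self-contained sketch you can run anywhere - it uses a stub docker script in a scratch directory (purely for demonstration; on your VPS, 'which docker' finds the real binary):

```shell
# Demonstration only: build a stub 'docker' and the wrapper in a scratch
# directory, then invoke the wrapper to show it delegates to 'docker compose'.
DEMO=/tmp/docker-compose-demo
mkdir -p "$DEMO"

# stub 'docker' that just reports how it was called
cat > "$DEMO/docker" <<'EOF'
#!/bin/bash
echo "docker called with: $@"
EOF
chmod +x "$DEMO/docker"

# the wrapper, equivalent to the one installed at /usr/local/bin/docker-compose
cat > "$DEMO/docker-compose" <<'EOF'
#!/bin/bash
D=$(which docker)
$D compose "$@"
EOF
chmod +x "$DEMO/docker-compose"

# with the stub first on the PATH, the wrapper forwards our arguments
PATH="$DEMO:$PATH" "$DEMO/docker-compose" version
# prints: docker called with: compose version
```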
Docker use by non-root user
If the above docker commands didn't work for you... and if you want to run Docker commands without being the root user or using sudo
, as we usually do, you need to do a few more steps...
- create a 'docker' group on your system (this might already exist, but doing this again won't hurt):
sudo groupadd docker
- add your user to it:
sudo usermod -aG docker $USER
- refresh your shell so that it recognises your user's membership in the docker group:
newgrp docker
You should now be able to run a test as a non-root user
docker run hello-world
Setting up places for your Docker configurations and persistent Data
The next step is to set up the file structure for holding your Docker configurations and the data your Docker containers will access. This is my convention, so you're welcome to do things differently, but this is a 'known good' approach.
Now we create the set of directories I use for holding Docker Compose configurations (/home/docker
) and the persistent data the Docker containers create (/home/data
)
D=[domain]
sudo mkdir -p /home/data/$D
sudo mkdir -p /home/docker/$D
It's helpful to make sure that your non-root user can also read and write files in these directories:
U=[vps username]
sudo chown -R $U /home/docker
sudo chown -R $U /home/data
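To see the shape of what those commands create, here's a sketch you can run without touching /home, using a throwaway root directory and example.org as a stand-in for your [domain]:

```shell
# Demonstration: mirror the /home/docker and /home/data layout under /tmp.
ROOT=/tmp/docker-dirs-demo
D=example.org
mkdir -p "$ROOT/home/data/$D" "$ROOT/home/docker/$D"
find "$ROOT/home" | sort
```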
Configuring Nginx reverse proxy for your service
Above, we installed Nginx as well as the Let's Encrypt scripts. Now we'll configure them as it's useful to have them working before you set up your services.
In order for you, outside of your server, to see your specific LibreSoftware services, you will need to set up a secure external 'reverse proxy' on your host VPS which will accept requests for each of those services from the Internet and pass those requests securely to the sets of Docker containers providing the services. These will answer to https://[domain name]
, noting that a given VPS can answer on behalf of more than one domain name or subdomain name, where each references a different instance of a service or altogether different services. For example, a given VPS could host multiple services, like, say, a password manager, a WordPress blog, a Mautic email automation system, an Authentik Single-Sign On service, and a Discourse forum, where each has a separate domain name (or sub domain name) and each has an Nginx reverse proxy configuration that directs requests to the appropriate domain to the appropriate set of Docker containers.
We use Let's Encrypt to provide the SSL certificates (each is a file containing a specially generated, very long string) which we use to limit access to our services to encrypted (secure) connections (protecting both our users and ourselves from external enemies) - usually one for each service domain name and corresponding Nginx configuration file. Some Nginx configurations will accept connections for multiple domain names, but consolidate them to the 'canonical' (official) domain name. For example, the OER Foundation's WordPress website will respond to any of the following:
- http://www.oerfoundation.org,
- http://oerfoundation.org,
- https://www.oerfoundation.org, and
- https://oerfoundation.org
but requests to any of those 4 options (note the difference between http:// and https://, where the 's' stands for 'Secure', i.e. encrypted) will all be redirected transparently by the Nginx configuration - which you'll see reflected in your browser's address bar - to the canonical web address (or URL), https://oerfoundation.org, which we've chosen because it's short and secure.
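As a sketch, that kind of redirect is typically done with a small extra Nginx server block along these lines (hypothetical, using example.org; a real configuration will also need matching HTTPS server blocks and certificate directives):

```
server {
    listen 80;
    listen [::]:80;
    server_name example.org www.example.org;
    # send every plain-HTTP request to the canonical HTTPS address
    return 301 https://example.org$request_uri;
}
```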
Nginx will not run unless the SSL certificates you reference in your configurations are valid. But we need a working Nginx in order to request those certificates, which puts us in an awkward position. We use a trick to get around it: we temporarily reference the default 'self-signed' SSL certificates (sometimes called 'Snakeoil certs', because that's the placeholder name they're given) that every new Linux system generates when it's installed. These are valid certificates (and thus acceptable to Nginx), but they won't work with our domains: they're generic and not 'signed' by an external party like Let's Encrypt, so your browser won't trust them. That's ok, as your browser will never need to see them, and Let's Encrypt's systems won't look at them either. We'll swap the Snakeoil certs out as soon as we've successfully created the Let's Encrypt ones, and your browser will be happy, and all will be well with the world.
This should happen automatically when you install the Let's Encrypt packages, but just to be sure, run this (it won't harm anything if you run it more than once):
sudo make-ssl-cert generate-default-snakeoil
which creates your default 'snakeoil' certificates. They are technically valid certificates, but they aren't signed by any recognised certificate authority, which is why browsers won't trust them.
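If you're curious, you can inspect the snakeoil certificate with openssl (these are the standard Debian/Ubuntu paths for the ssl-cert package). Because it's self-signed, the 'Subject' and 'Issuer' lines will be identical:

```shell
# Show who the certificate is for, who signed it, and when it expires.
# For a self-signed cert, subject and issuer are the same entity.
openssl x509 -in /etc/ssl/certs/ssl-cert-snakeoil.pem -noout -subject -issuer -dates
```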
Let's Encrypt setup
Step one of using Let's Encrypt is to make sure the Let's Encrypt scripts are installed (note: sometimes they're referred to as 'certbot' - it's the same code):
sudo apt install letsencrypt
Let's Encrypt and Nginx need to work together. Nginx stores all of its configuration in the directory /etc/nginx. The first thing we'll do is create a place for Let's Encrypt's Nginx-specific configuration details:
sudo mkdir /etc/nginx/includes
Then we create that configuration file itself:
sudo $EDIT /etc/nginx/includes/letsencrypt.conf
into which we copy-and-paste the following (no [tokens] to replace in this one!)
# Rule for legitimate ACME Challenge requests
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    # this can be any directory, but this name keeps it clear
    root /var/www/letsencrypt;
}

# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
    return 404;
}
As described in the file we've just created, Let's Encrypt will look for a secret code we create to verify that we own the domain we're requesting an SSL certificate for, so we have to make sure it exists:
sudo mkdir /var/www/letsencrypt
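Later, once you have a live Nginx configuration that includes the letsencrypt.conf snippet above, you can smoke-test the challenge path yourself before involving Let's Encrypt at all (the test.txt filename here is purely illustrative). Note that because the snippet sets root to /var/www/letsencrypt, the full request path is appended to it on disk:

```shell
# Create the subdirectory certbot would use, and drop a test file in it
sudo mkdir -p /var/www/letsencrypt/.well-known/acme-challenge
echo "ok" | sudo tee /var/www/letsencrypt/.well-known/acme-challenge/test.txt

# If Nginx is configured correctly, this should print: ok
curl http://[domain name]/.well-known/acme-challenge/test.txt

# Clean up afterwards
sudo rm /var/www/letsencrypt/.well-known/acme-challenge/test.txt
```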
You will need to install an Nginx reverse proxy configuration file for each web-based service you run on your VPS, but those are usually service-specific in their makeup, so I won't discuss them here.
But, for the record (to make your life a bit easier), here's what you'll do to acquire a Let's Encrypt SSL certificate once you've got a suitable Nginx reverse proxy configuration in place and ready to run.
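To give you a feel for the shape of such a file, here's a rough sketch of a minimal reverse proxy configuration - not a drop-in file: the proxied port (8080) is a placeholder for wherever your Docker service listens, and each service's own tutorial will supply the real details. Note how it includes the letsencrypt.conf snippet from above and starts life with the snakeoil certificates:

```
server {
    listen 80;
    server_name [domain name];

    # let Let's Encrypt's ACME challenge requests through over plain HTTP
    include /etc/nginx/includes/letsencrypt.conf;

    # send everything else to HTTPS
    location / {
        return 301 https://[domain name]$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name [domain name];

    # temporary snakeoil certs - swap for the Let's Encrypt ones once issued
    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    # ssl_certificate /etc/letsencrypt/live/[domain name]/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/[domain name]/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder port for your Docker service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```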
Requesting Let's Encrypt certificates
To request Let's Encrypt SSL certificates for your service, run the following, replacing the [domain name] reference with the address you've selected for your service. Note that 'certbot' is the script provided by the Let's Encrypt package - historically, it could also be called via 'letsencrypt', although apparently the latter is now deprecated:
sudo certbot --webroot -w /var/www/letsencrypt -d [domain name]
Note - if you want to address your instance from multiple domains, add one (or more) extra -d [another domain] options. Just make sure that
- all those domains already point to your VPS, and
- those domains are included in the Nginx proxy configuration above,
otherwise the Let's Encrypt certbot request will fail!
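For instance, a single request covering both a bare domain and its www alias (example.org here is purely illustrative - substitute your own domains) would look like:

```shell
# One certificate covering two names; both must resolve to this VPS and
# both must appear in the Nginx configuration's server_name directive.
sudo certbot --webroot -w /var/www/letsencrypt -d example.org -d www.example.org
```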
Here's what you're likely to see as output from the first run of the certbot script - note that it will ask you for an email address, so it can send you warnings if your certificate is going to expire, e.g. due to a problem with renewal (such as a configuration change that breaks the renewal process).
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): webmaster@fossdle.org

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the
web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
Account registered.
Requesting a certificate for [domain name]

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/[domain name]/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/[domain name]/privkey.pem
This certificate expires on (some future date).
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate
in the background.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Ideally, you'll see a message like the above. If not, the error messages certbot provides are usually very useful and accurate - fix the problem and try again. Note, your SSL certificate will have the name of your [domain name], even if it also provides support for a [second domain name] (or third, fourth, etc.).
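If you want to double-check what was issued, certbot can list the certificates it manages, and simulate a renewal without actually renewing anything (both are standard certbot subcommands):

```shell
# List managed certificates, with their domains and expiry dates
sudo certbot certificates

# Confirm automatic renewal will work, without changing anything
sudo certbot renew --dry-run
```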
Once you have a Let's Encrypt certificate, you can update your Nginx configuration:
sudo $EDIT /etc/nginx/sites-available/[domain name]
and swap all occurrences of
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
# ssl_certificate /etc/letsencrypt/live/[domain name]/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/[domain name]/privkey.pem;
to
# ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
# ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_certificate /etc/letsencrypt/live/[domain name]/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/[domain name]/privkey.pem;
which enables your new domain-specific SSL certificate. Check that Nginx is happy with your change:
sudo nginx -t
and if so,
sudo service nginx reload
Your domain should now be enabled for https:// access. Note that going to http://[domain name] should automatically redirect you to https://[domain name], because you care about your users' security!
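You can verify both behaviours from any machine with curl - the -I flag fetches only the response headers, so nothing is downloaded:

```shell
# Expect a 301 redirect with a "Location: https://[domain name]/..." header
curl -sI http://[domain name] | head -n 5

# Expect an "HTTP/2 200" (or similar) status line, served with your
# new Let's Encrypt certificate
curl -sI https://[domain name] | head -n 1
```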
And that's it - your server is now ready to receive specific configurations for a myriad of Libre web services! Have a look around this site and check out some of the other tutorials - for many of them, you'll now be able to skip to near the end, as you've done most of the groundwork already!