Protecting your users with Let's Encrypt SSL Certs


For any website that requires anyone (users, or even just a few admins) to transfer secrets to and from it, you want to ensure the data is encrypted in transit. Today various browsers (like Firefox) warn users when they're sending secret data (like passwords) "in the clear", i.e. unencrypted, and in early 2017 Google added further urgency to doing the right thing by your users.

In the past, getting an SSL certificate for your domain (earning the little "lock" icon in the browser address bar that indicates your communication with the site is encrypted) was a complicated, expensive proposition. It required a lot of annoying and time-consuming "identity verification" (sometimes via post, in the dark old days) and a fee of, in some cases, a couple of hundred dollars per year, paid to your "SSL Cert Provider" to cover those administrative services along with the software needed to generate your encryption keys and the certificate itself (the long string of characters you install on your server).

Thanks to the efforts of the Let's Encrypt community, the process is now both far, far easier and free of cost. There really isn't an excuse any more for not having an SSL certificate on your site.

Members of the Let's Encrypt community have provided a range of useful open source tools you can use to create and maintain certificates on your hosting infrastructure (e.g. the Virtual Machine (VM) on which you're installing the web services detailed in the howtos on this site!). In this case we're going to use "certbot", a tool provided by the good folks at the Electronic Frontier Foundation (EFF). For VMs running Ubuntu 14.04 or 16.04 (the Long Term Support (LTS) versions of the Ubuntu Linux platform), which are what we use, the install is easy - at your VM command line, run:

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot 

We run Nginx on our VMs, which makes one or more hosted services (normally running in Docker containers) available on the Internet. Strictly speaking, we don't use full "end-to-end" encryption: in our case, encryption terminates at the Nginx server on the host. We assume, perhaps cavalierly, that traffic between the host machine and a Docker container running on that host is implicitly secure - the only way it could be compromised is if the VM itself is compromised, in which case the Docker containers running on it could be, too. Skipping encryption between Nginx on the VM host and the various Docker containers also substantially simplifies setting up each application.

Nginx provides a feature called SNI (Server Name Indication) - Apache and a few other web servers also support it - which removes the historical limitation of one SSL certificate per IP address on a web server. The only downside of SNI is that some older browsers (and platforms) don't support it. Since those older technologies are rapidly dying out, and it's expensive and difficult to dedicate a separate IP address to each SSL-protected service on a server, we accept this compromise.
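To make that concrete, with SNI a single IP address and port can serve multiple certificates - Nginx picks the right server block based on the hostname the browser sends. Here's a sketch with two hypothetical domains (the certificate paths assume Let's Encrypt's usual layout):

```nginx
# Both server blocks share the same IP and port; SNI selects between
# them based on the hostname the client asks for.
server {
    listen 443 ssl;
    server_name example.org;
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
}

server {
    listen 443 ssl;
    server_name another-example.org;
    ssl_certificate     /etc/letsencrypt/live/another-example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/another-example.org/privkey.pem;
}
```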

The Let's Encrypt Cert Process

Here's (roughly) how the process works:

  1. Point your domain (via an A or CNAME record) at an external IP address on your VM.
  2. Set up a domain (or domains) for non-secure hosting (on port 80) via your Nginx instance.
  3. The domain's configuration must include a special directory reserved for Let's Encrypt verification content.
  4. You request that certbot (on the VM) acquire a certificate for that domain (or domains), at which point
    1. certbot writes a file with a hard-to-guess name to that special directory and asks the Let's Encrypt infrastructure to check the domain from outside,
    2. Let's Encrypt requests that file via the domain name and special directory; if the expected content appears there, the requesting party has demonstrated the ability to publish content at that hard-to-guess filename, and therefore has a legitimate claim to controlling the domain name and server,
    3. Let's Encrypt registers the certificate request in the name of the party running certbot (so it can, for example, email the administrator a warning when the certificate is nearing expiry - certificates are valid for 90 days), and
    4. Let's Encrypt sends verification back to your VM's certbot telling it to complete the certificate generation, which it then (digitally) signs in the name of the Let's Encrypt Certificate Authority (which, in turn, is recognised by almost all web browsers out-of-the-box - no mean feat, I can tell you).
  5. You get an alert telling you that you have created a valid SSL certificate.
  6. You alter your Nginx domain configuration to
    1. redirect connections to port 80 (unencrypted) to port 443 (encrypted), and
    2. set up the port 443 configuration, including your new certificate.
  7. You reload your Nginx configuration, and traffic to your site will now be encrypted.
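If you'd like to see the moving parts of step 4 without involving Let's Encrypt at all, here's a little sketch you can run anywhere - it uses a temporary directory to stand in for the webroot, and a made-up token name (certbot generates its own hard-to-guess names):

```shell
# Simulate certbot's webroot behaviour with a temporary directory
# standing in for /var/www/letsencrypt.
webroot=$(mktemp -d)
mkdir -p "$webroot/.well-known/acme-challenge"

# Step 4.1: certbot writes a challenge file into the special directory...
echo "challenge-response" > "$webroot/.well-known/acme-challenge/example-token"

# Step 4.2: ...and Let's Encrypt fetches it via your domain over HTTP.
# Locally we can at least confirm the file is where Nginx would serve it from:
cat "$webroot/.well-known/acme-challenge/example-token"
# prints "challenge-response"
```

In real life, the fetch in step 4.2 happens over the public Internet against your domain name, which is why the DNS record in step 1 has to be in place first.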

The Let's Encrypt Snippet

To make it easy to include the relevant directory info, I recommend that you create a new file in your Nginx configuration (substitute your preferred text editor for "vim" in the following - "nano" is a good choice if you haven't already got a preference):

sudo mkdir /etc/nginx/includes
sudo vim /etc/nginx/includes/letsencrypt.conf

and make sure it has the following content (note, I learned this thanks to someone else's howto on the global Internet knowledge commons :))

# Rule for legitimate ACME Challenge requests
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    # this can be any directory, but this name keeps it clear
    root /var/www/letsencrypt;
}

# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
    return 404;
}

Next, make sure the directory exists (note: you only need to do this once per VM). It shouldn't need any special permissions - it'll be written to by the "root" user, and needs to be readable by the Nginx user, usually "www-data" on a Debian or Ubuntu Linux instance.

sudo mkdir /var/www/letsencrypt

Example Nginx Domain Configuration - unencrypted

Here's an example of a pre-cert Nginx domain configuration for example.org and www.example.org (I usually name the configuration file after the main domain it concerns, so my file would be /etc/nginx/sites-available/example.org). This should also let you do an initial test of your app to make sure it works, before adding the additional complexity of SSL. (Replace example.org (and www.example.org) with your own domain!)

server {

    listen 80; # listen on port 80 on all interfaces (you can also specify one of your server's external IPs here)
    #listen   [::]:80 default ipv6only=on; ## listen for ipv6

    # this root directory isn't really relevant in a proxy situation

    # so I usually set it to the system default
    root /usr/share/nginx/www;
    index index.html index.htm;

    server_name example.org www.example.org;

    access_log /var/log/nginx/example.org_access.log;
    error_log /var/log/nginx/example.org_error.log;

    # this is where we include the snippet
    include /etc/nginx/includes/letsencrypt.conf;

    # this is just an example of a "proxy" configuration
    # for, say, a Docker-based service, exposed on the VM's
    # local port 8081
    location / {
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_redirect          off;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;
        proxy_pass      http://127.0.0.1:8081;
    }

}

You can make sure that the configuration is visible to Nginx by adding a symbolic link to it in the "sites-enabled" directory:

sudo ln -sf /etc/nginx/sites-available/example.org /etc/nginx/sites-enabled

Test the configuration to make sure it doesn't have typos or bad references:

sudo nginx -t

and if there are no errors, make it live:

sudo service nginx reload

You should now be able to go to http://example.org (or your domain), and you'll hopefully get your proxied application (if it's set up) or an Nginx error page (see your Nginx error log for more info!).

Now it's time to request the certificate!

Example Certbot invocation

Once certbot is installed, and a domain is configured, it's pretty straightforward to get a certificate.

On the first invocation of certbot, you might get a text-based interface that asks for your email address so that Let's Encrypt can register it for future notifications. They email if one of your certificates is on the verge of expiring without being renewed, or if there's been a change to Let's Encrypt policy or process. It's worth being on the list.

You can request your certificate with the following:

sudo certbot certonly --webroot -w /var/www/letsencrypt -d example.org -d www.example.org

If it works, it gratifyingly results in a message that starts with "Congratulations"!

Example Nginx Domain Configuration - encrypted

Once you've got your certificate, you can reference it in your configuration. We normally set up a redirect from the unencrypted version of the site to the encrypted one (except for the Let's Encrypt verification directory!):

server {

    listen 80; # listen on port 80 on all interfaces (you can also specify one of your server's external IPs here)
 
    root /var/www/html;
    index index.html index.htm;

    server_name example.org www.example.org;

    access_log /var/log/nginx/example.org_access.log;
    error_log /var/log/nginx/example.org_error.log;

    include /etc/nginx/includes/letsencrypt.conf;

    # a 302 is a "temporary" redirect; a 301 ("permanent") gets cached
    # aggressively by browsers, making it hard to undo later.
    location / {
        return 302 https://$host$request_uri;
    }
}

server {
    listen 443 ssl; # the "ssl" parameter here replaces the older "ssl on;" directive
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    # note: TLSv1 and TLSv1.1 are now deprecated, so you may prefer just TLSv1.2
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    keepalive_timeout 20s;

    access_log /var/log/nginx/example.org_access.log;
    error_log /var/log/nginx/example.org_error.log;

    root /var/www/html;
    index index.html index.htm;


    server_name example.org www.example.org;
   
    # this is just an example of a "proxy" configuration
    # for, say, a Docker-based service, exposed on the VM's
    # local port 8081
    location / {
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_redirect          off;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;
        proxy_pass      http://127.0.0.1:8081;
    }

}

Note - you also need to create the ssl_dhparam file for this configuration to work. You can do this after installing the OpenSSL tools:

sudo apt-get install openssl

by running (warning - this can take quite a long time - the system needs to generate sufficient entropy to achieve acceptable randomness):

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096

When you've set up the file, you can enable the configuration (if you haven't already):

sudo ln -sf /etc/nginx/sites-available/example.org /etc/nginx/sites-enabled

Test the file to ensure there aren't any syntax errors before reloading nginx:

sudo nginx -t

If this shows an error, you'll need to fix the file. If all's well, reload nginx to include the new configuration:

sudo service nginx reload

You should now be able to point your browser at your domain name, and it should automatically redirect you to https://example.org - and, based on the above configuration, https://www.example.org should work too. (You might want to redirect www.example.org to example.org, or vice versa.)
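If you do want to collapse the www variant onto the bare domain, one approach (a sketch - adjust names and the redirect status to taste) is a dedicated server block:

```nginx
# Redirect https://www.example.org/... to https://example.org/...
server {
    listen 443 ssl;
    server_name www.example.org;

    # the certificate covers both names, since we requested
    # -d example.org -d www.example.org when running certbot
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    return 302 https://example.org$request_uri;
}
```

You'd then remove www.example.org from the server_name of the main port 443 server block, so each name is handled by exactly one block.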

A word to the wise - if it doesn't work, check your firewall settings!


Ongoing Certificate Maintenance

One of the nice things about EFF's certbot is that when it's installed, it also installs a cron task (see /etc/cron.d/certbot) which regularly checks all certificates on the server and renews any nearing expiry. Assuming your domains have been configured in Nginx as described above, renewals should occur automatically - Let's Encrypt will only email you if a certificate somehow reaches the verge of expiry without being renewed.
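For reference, the cron entry looks roughly like this (a simplified sketch - the packaged file adds a random delay and defers to systemd timers where available, and details vary by certbot version):

```
# /etc/cron.d/certbot (simplified)
0 */12 * * * root certbot -q renew
```

If a service needs a nudge to pick up renewed certificates, certbot's --deploy-hook option can help, e.g. sudo certbot renew --deploy-hook "service nginx reload".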

If you want to test the renewal process without actually renewing anything, you can do a dry run:

sudo certbot renew --dry-run

Good on you for securing your users and your site!