Zero Downtime nginx Letsencrypt Certificate Renewals Without the nginx Plugin


Quick post on how to install Letsencrypt certificates into nginx without using the official plugin. There may be some cases where you don’t want to use the official plugin (which until recently was still marked as “experimental”). The concepts here could theoretically be applied to any webserver software.

Basics of an ACME Challenge

Letsencrypt is based on a technology called ACME, which stands for Automated Certificate Management Environment. It’s a way for a certificate issuer to verify your ownership of a domain and issue you a certificate without requiring any manual intervention. And while there are a number of ways for this to happen, by far the most common is via a webserver.

The ACME client places a file in the /.well-known/acme-challenge/ directory. The full path will usually look something like /.well-known/acme-challenge/LYORRg3BLMyxa8_WYUa27QHofvO2M2GfvoPkLV5H-7I. The certificate issuer then attempts to download this file. If the download succeeds, a new certificate is generated, downloaded by the client, and installed in the correct location.

Obviously, this is a high-level overview, but the important thing to take away is that this process is designed to be scripted. It is designed with zero interaction in mind.
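To make the exchange concrete, here is a toy sketch of the webroot flow. The token is the example from above, but the thumbprint suffix and the temp directory are made up for illustration; a real ACME client derives the file contents from your account key.

```shell
# Toy illustration of the http-01 webroot flow (no real CA involved).
WEBROOT=$(mktemp -d)    # stand-in for your server's webroot
TOKEN="LYORRg3BLMyxa8_WYUa27QHofvO2M2GfvoPkLV5H-7I"

# 1. The client writes the key authorization into the challenge path.
mkdir -p "$WEBROOT/.well-known/acme-challenge"
printf '%s.fake-account-thumbprint' "$TOKEN" \
  > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# 2. The issuer fetches http://yourdomain/.well-known/acme-challenge/$TOKEN
#    over plain HTTP and checks the body; here we just read it back from disk.
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```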

The Restart Problem

Until the nginx plugin was stabilized, the common way to do this was to stop nginx, spin up a standalone server for the ACME challenge, then restart nginx. Obviously, this is not desirable in a production environment. Even the brief, less-than-10-second outage it takes to do the ACME exchange is too long.

Fortunately, the letsencrypt client provides another option: --webroot.

Installing Certs without Restarts

Using a combination of some nginx config changes and the proper commands, we can do the challenge and then tell nginx to simply reload its configs, resulting in a zero-downtime cert install.

First, you may need to make an nginx change. This is necessary if, for example, you are running nginx as a reverse proxy server and don’t have easy access to the remote end. Or if you just want to serve the requests from another location.

  location ^~ /.well-known {
    allow all;
    root /var/www/well-known/;
  }
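For context, in a reverse-proxy setup the whole server block might look roughly like this. The server name, upstream address, and ports are assumptions for illustration, not from any particular setup:

```nginx
server {
  listen 80;
  server_name example.com;            # placeholder domain

  # Serve ACME challenges from a local directory...
  location ^~ /.well-known {
    allow all;
    root /var/www/well-known/;
  }

  # ...while everything else is proxied to the real backend.
  location / {
    proxy_pass http://127.0.0.1:8080; # placeholder upstream
  }
}
```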

Next, you just need to issue the right command to letsencrypt.

letsencrypt certonly --quiet -n --agree-tos --webroot -w /var/www/well-known/ --deploy-hook "systemctl reload nginx" -d

Let’s take apart what we did here.

  • certonly obtains a certificate, but does not install it. The name is a bit of a misnomer: it downloads the cert, but does not install it into your config. You will have to handle that yourself. It is also useful when you just need the certificate files.
  • -n Non interactive. We want to script this! :)
  • --quiet again, we want to script this in cron, so we don’t want it making noise. You should probably remove this while you are testing.
  • --agree-tos you agree to the terms of service.
  • --webroot this is the magic sauce. You’re telling letsencrypt that you want to use the webroot auth, placing files in a location on your server.
  • -w /var/www/well-known/ Tells webroot where to place the files. This is the same location as your config change above.
  • --deploy-hook "systemctl reload nginx" on a successful certificate download it will run this command. Note that it will only run this if a new certificate is downloaded, making this safe to run daily!
  • -d is the domain you want the certificate for.
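Putting the flags together with a concrete domain (example.com here is purely a placeholder; substitute your own), the full invocation might look like:

```shell
letsencrypt certonly --quiet -n --agree-tos \
  --webroot -w /var/www/well-known/ \
  --deploy-hook "systemctl reload nginx" \
  -d example.com
```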

The systemctl reload nginx part there is important. Not only does the command run only when we have new certs, we are also telling nginx to reload, not restart. The server will continue to reply to existing requests, but new requests will be served with the new config, making this a zero-downtime operation.

Finally, if you have not already done so, you will need to add the appropriate SSL configuration lines to your config:

  ssl_certificate     /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;

Congratulations, you now have certs! And you can shove that command into a daily cronjob to be sure that you never have to deal with renewing an SSL certificate again.
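A daily cron entry for this might look like the following. The 03:17 run time and example.com domain are placeholders; pick whatever suits your setup:

```
# /etc/crontab entry — runs the renewal check daily at 03:17
17 3 * * * root letsencrypt certonly --quiet -n --agree-tos --webroot -w /var/www/well-known/ --deploy-hook "systemctl reload nginx" -d example.com
```

Because the deploy hook only fires when a new certificate is actually downloaded, running this every day is harmless.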
