
Zero-downtime Clojure Deployment

Alexander Ivanov

23 Apr 2018 • 7 min read

  • Clojure

Clojure apps take a long time to build and start. If you stop your application server process and restart it, the app will be unavailable to users for about 10 to 20 seconds until the new server process starts. In this article, we describe a setup for Clojure web app deployment using git and Supervisor that has no downtime window and thus offers a better experience for the users.

Preparatory steps

This guide assumes that you have Clojure and Leiningen installed (if you don’t, you can use the installation instructions here) and that you have a domain name and a virtual private server on a cloud service like DigitalOcean or Hetzner.

On your virtual server, you will need nginx and Supervisor (both can be installed with apt install on Ubuntu), as well as Leiningen.
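On Ubuntu, for example (the supervisord daemon ships in the supervisor package):

remote-machine:~$ sudo apt install nginx supervisor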

For a demo, create a simple Clojure application on your development machine. A good way to do it is to use the Luminus framework:

$ lein new luminus myapp

where myapp will be the name of our test app. You can now try launching the app locally:

$ cd myapp
$ PORT=8000 lein run

This will install all the dependencies and launch the app on port 8000 (Luminus starts the web server on the port specified in the $PORT environment variable; the default is 3000). You can check whether it has started successfully by opening http://localhost:8000 in your browser.

Next, we will set up the deployment procedure.

Git deployment setup

If you are familiar with how Git deployment works, you can skip this section. If you encounter any problems, you can refer to DigitalOcean's guides for a more detailed explanation.

On your virtual server, create a bare repository which will serve as the target for your git push, and a directory where your app's code will be located. For this example, we create both the repo and the app directory in your account's home directory. If needed, you can also do this in /var/www or elsewhere on the server.

remote-machine:~$ mkdir -p repos/myapp.live.git
remote-machine:~$ mkdir -p apps/myapp/live
remote-machine:~$ cd repos/myapp.live.git
remote-machine:~/repos/myapp.live.git$ git init --bare

On your local machine, initialise a git repository and commit the files that Luminus has created for you:

$ git init
$ git add .
$ git commit -m "Initial commit"

Add the repository on the server as a remote (here named live) of your local repository:

$ git remote add live username@remote-machine:repos/myapp.live.git

We will provide the necessary git hooks later on.

App process management

Now you need to launch the application on your server.

For this example, we will use Supervisor as the process manager, but the same logic applies to any other process manager. The Supervisor daemon manages your applications as long-running tasks (Supervisor calls them 'programs'), starting and stopping them on request and restarting them on failure and after a server reboot.

The issue is that in a simple one-task configuration, every time you redeploy and restart the app, it is unavailable while it is starting - which may take around 10 seconds even for a basic Clojure app. To work around that, we create two tasks and rotate them on each deployment, so that the task running the old code is stopped only after we have verified that the task running the newly deployed code is ready to receive requests.

After you have installed Supervisor on your virtual server, create a configuration for our app by adding /etc/supervisor/conf.d/myapp.conf (file myapp-conf in the accompanying Gist).

We will run up to two instances of our app. Each time we deploy, we will build an uberjar, so we don't have to care about any Java / Clojure dependencies - it includes all the libraries that the specific app snapshot depends on, and can be launched with a single java command.

The two tasks will be named myapp-live-1 and myapp-live-2: the names will have the same prefix and the only difference between them will be the number of the instance.

We will run the app under the www-data user account and keep the logs in /var/log (Supervisor rotates the logs automatically), start both instances automatically at system startup, and restart each of them up to 3 times on failure. Each instance will have its own environment variables; at the very least, we give them different PORT and NREPL_PORT numbers so their web servers can run simultaneously. Other configuration variables can be added as necessary - they may be the same for the two instances (database credentials, etc.) or different.
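The Gist config is not reproduced here; a minimal sketch of what it can look like - with the home directory, log file names and NREPL ports as assumed example values - is:

[program:myapp-live-1]
; instance 1; instance 2 is identical except for the number and the ports
command=java -jar /home/username/apps/myapp/live/deployment-files/instance-1.jar
directory=/home/username/apps/myapp/live
user=www-data
autostart=true
autorestart=true
startretries=3
environment=PORT="3000",NREPL_PORT="7000"
stdout_logfile=/var/log/myapp-live-1.log
redirect_stderr=true

[program:myapp-live-2]
command=java -jar /home/username/apps/myapp/live/deployment-files/instance-2.jar
directory=/home/username/apps/myapp/live
user=www-data
autostart=true
autorestart=true
startretries=3
environment=PORT="3001",NREPL_PORT="7001"
stdout_logfile=/var/log/myapp-live-2.log
redirect_stderr=true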

To make Supervisor's control command, supervisorctl, accessible to your user without sudo (which lets the git hooks restart the tasks during deployment), edit /etc/supervisor/supervisord.conf, adjusting the [unix_http_server] section as follows:

[unix_http_server]
file=/var/run/supervisor.sock    ; (the path to the socket file)
chmod=0700                       ; socket file mode (default 0700)
chown=username:username          ; give your user access to the socket

Now, reload Supervisord with that config:

remote-machine:~$ supervisorctl reload

Note that the processes won't start because there are no JAR files yet. For that, we need to set up how we actually deploy the application.

Deployment procedure

First, we need to keep track of which instance of the app is running and which one should be launched next. One way to do this is a special file in the app's directory containing the number of the instance to launch on the next deployment. We will keep that file, alongside other deployment-related files, in a subdirectory named deployment-files. Below are the shell commands that read the file and put its contents into the $next variable; by default, we create the directory and assume that we are going to launch instance 1. There is no need to run these commands manually - we will put them in a script a bit later:

mkdir -p deployment-files
# read the number of the instance to launch; default to 1 on the first run
next=$(cat ./deployment-files/next 2>/dev/null)
next=${next:-1}

Now, we build the uberjar and put it into the deployment-files under the name that we specified in the Supervisor config:

# A standard function that reports an error and aborts the script
die() { echo "$@" 1>&2 ; exit 1; }

$HOME/bin/lein uberjar || die "Cannot deploy: build failed."
cp "./target/standalone.jar" "./deployment-files/instance-$next.jar"

After that, we restart the Supervisor program for the instance in question:

PROGRAM_PREFIX="myapp-live"
supervisorctl restart $PROGRAM_PREFIX-$next

Now the program has been launched, but it will only become available to users after around 10 seconds (or never, if the HTTP server within the app cannot start due to a bug or for some other reason). To make sure that it starts, we poll the server port at 0.1 s intervals until it starts responding. If the app doesn't respond within 30 seconds, we assume that it failed to start and abort the deployment:

PORT_1=3000
PORT_2=3001
TIMEOUT_INTERVAL=30
starttime=$(date +%s)
[[ $next = 1 ]] && port=$PORT_1 || port=$PORT_2
until curl -s localhost:$port > /dev/null; do
  sleep 0.1
  if [[ $(date +%s) -gt $((starttime + TIMEOUT_INTERVAL)) ]]; then
    supervisorctl stop $PROGRAM_PREFIX-$next
    die "Could not start instance $next: timeout."
  fi
done

Finally, when we have made sure that the server is working, we can stop the old instance and mark it as the one to be started on the next deployment:

[[ $next = 1 ]] && prev=2 || prev=1
echo "Stopping instance $prev."
supervisorctl stop $PROGRAM_PREFIX-$prev
echo $prev > ./deployment-files/next

echo "Done updating $PROGRAM_PREFIX. Instance $next listening on port $port."

These steps should be performed on the remote machine on each deployment, so we put them into a shell script. Since PROGRAM_PREFIX, PORT_1 and PORT_2 are application-dependent parameters, it makes sense to put them into a separate file, deploy-config, located in the app's directory on the remote machine:

# deploy-config
PROGRAM_PREFIX=myapp-live
PORT_1=3000
PORT_2=3001

In case we have different deployment targets on a single machine (for example, a live deployment and a beta-testing deployment), we can keep them in separate directories and have separate configs and separate Supervisor program groups for them.
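For instance, a hypothetical beta target could have its own directory with a deploy-config like this:

# deploy-config for the beta target (example values)
PROGRAM_PREFIX=myapp-beta
PORT_1=3002
PORT_2=3003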

Now, all the steps to perform the deployment will be stored in a file named deploy.sh - you can find the full text of the script in the accompanying Gist (file deploy-sh). It also has an additional parameter, NOBUILD - you can run the script with NOBUILD=1 if you just want to rotate the instances without rebuilding the app (for example, to roll back a deployment).
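For reference, here is a minimal sketch of such a script, assembled from the fragments above (the Gist version may differ in details):

#!/bin/bash
# deploy.sh - rotate app instances with zero downtime

die() { echo "$@" 1>&2 ; exit 1; }

# application-dependent parameters: PROGRAM_PREFIX, PORT_1, PORT_2
source ./deploy-config

TIMEOUT_INTERVAL=30

# figure out which instance to launch on this deployment
mkdir -p deployment-files
next=$(cat ./deployment-files/next 2>/dev/null)
next=${next:-1}

# build the uberjar unless NOBUILD=1 was given (e.g. for rollbacks)
if [[ -z "$NOBUILD" ]]; then
  $HOME/bin/lein uberjar || die "Cannot deploy: build failed."
  cp "./target/standalone.jar" "./deployment-files/instance-$next.jar"
fi

supervisorctl restart "$PROGRAM_PREFIX-$next"

# poll the new instance until it responds, or give up after the timeout
starttime=$(date +%s)
[[ $next = 1 ]] && port=$PORT_1 || port=$PORT_2
until curl -s localhost:$port > /dev/null; do
  sleep 0.1
  if [[ $(date +%s) -gt $((starttime + TIMEOUT_INTERVAL)) ]]; then
    supervisorctl stop "$PROGRAM_PREFIX-$next"
    die "Could not start instance $next: timeout."
  fi
done

# stop the old instance and mark it as the one to launch next time
[[ $next = 1 ]] && prev=2 || prev=1
echo "Stopping instance $prev."
supervisorctl stop "$PROGRAM_PREFIX-$prev"
echo $prev > ./deployment-files/next

echo "Done updating $PROGRAM_PREFIX. Instance $next listening on port $port."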

To avoid accidentally overwriting the deployment-files/ directory and the deploy-config file, it makes sense to add them to the .gitignore file.

Also, we use the name standalone.jar for the uberjar file (by default, the file name includes the app's name and version, which would be difficult to keep track of in the deployment script), so you will need to add :uberjar-name "standalone.jar" to your project.clj file.
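The relevant part of project.clj looks like this (a hypothetical excerpt; the rest of the Luminus-generated file stays as is):

(defproject myapp "0.1.0-SNAPSHOT"
  ;; ...the rest of the generated configuration...
  :uberjar-name "standalone.jar")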

Now, we need to create the post-receive git hook that will check out the tree into the application's directory on each push and then pass control to the deploy.sh script provided with the application. In a bare repository, hooks live in the hooks/ subdirectory:

remote-machine:~$ vim repos/myapp.live.git/hooks/post-receive
#!/bin/bash

git --work-tree="/home/alex/apps/myapp/live" \
    --git-dir="/home/alex/repos/myapp.live.git" \
    checkout -f && \
cd /home/alex/apps/myapp/live && bash deploy.sh
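Make the hook executable - otherwise git will silently ignore it:

remote-machine:~$ chmod +x repos/myapp.live.git/hooks/post-receive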

Now, commit the changes (the new deploy.sh file and the additional lines in project.clj and .gitignore) and push the app to the virtual server:

$ git add deploy.sh project.clj .gitignore
$ git commit -m "Add the deployment procedure"
$ git push live master

If everything compiles and launches successfully, you will see a message that instance 1 is listening on port 3000. As this is the first deployment, you will also see a message that instance 2 cannot be stopped because it is not running.

Configuring NGINX

Finally, to make the app available to the users, you need to set up the load balancer so that it passes incoming requests to one of the instances and tries the other one if the first does not respond.

In /etc/nginx/sites-available, create the file myapp-live (file myapp-live in the accompanying Gist), and make a symlink from /etc/nginx/sites-enabled to this file.
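The Gist config is not reproduced here; a minimal sketch of such a config - with myapp-domain.com as a placeholder domain and the ports from deploy-config - could be:

upstream myapp-live {
    server localhost:3000;
    server localhost:3001;
}

server {
    listen 80;
    server_name myapp-domain.com;

    location / {
        # if one instance is down, nginx retries the request on the other
        proxy_pass http://myapp-live;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The symlink can be created with: sudo ln -s /etc/nginx/sites-available/myapp-live /etc/nginx/sites-enabled/myapp-live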

That is the simplest possible config - you will have to expand it if you need HTTPS, caching, or serving static files from NGINX. Test that the config is valid and restart NGINX:

sudo nginx -t && sudo service nginx restart

Now your app should be up and running at http://myapp-domain.com.

Further deployments

Now, each time you commit changes to your code, you can push them to your virtual server and have them deployed without downtime.

If you need to roll back to the previous deployment, you can log in to your virtual server and run NOBUILD=1 bash deploy.sh in your app's directory.

You can also add other deployment targets (for example, beta) - for each one, you will need a separate app directory (with its own deploy-config file), a separate Supervisor program group, an NGINX config file, and a git repo on the remote machine (which you will add as a remote on the local machine) - refer to DigitalOcean's guide for details.
