Using docker-compose for multiple web servers on a Raspberry Pi

If you’re reading this, you’re probably visiting my Raspberry Pi, which hosts my WordPress blog, its corresponding database, Certbot and my portfolio – a static HTML page – in Docker containers.

The Motive

Recently I had an issue with my Pi and decided to unplug/replug it.

Little did I know, it corrupted my SD card. That made me very sad indeed.

As I thought to myself, “So what? It should be easy to set my web server up again”, I realised that this was not to be. I had to install several dependencies I never knew I needed, and it took several hours for my website to get back to its former glory.

Since then, I have become a big fan of easy deployments and minimal configuration, to the point where I need very few files to make a deployment with all of the right settings.

Why Docker?

There are a lot of benefits to using Docker containers to manage your services. The single biggest one for me, however, has been that the containers handle dependencies rather than the server.

This means that I can throw away my server, start up a different machine, and within a few minutes everything is back. Awesome sauce.

Here is a simple diagram of the desired structure:

[Docker architecture diagram: target web architecture using Docker containers on Raspbian Stretch]

The Let’s Encrypt container also has an Nginx server built in, which handles all incoming connections from the outside world. We also mount our static HTML project into this container so the Nginx server can serve it directly.

Docker-compose

This is what the docker-compose.yml looks like:


version: '3.3'

services:
  db:
    image: hypriot/rpi-mysql:latest
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE
      - MYSQL_USER
      - MYSQL_PASSWORD
  wordpress:
    image: wordpress:5-fpm
    depends_on:
      - db
    restart: always
    volumes:
      - ./php-uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - ./wordpress:/var/www/html
    environment:
      - WORDPRESS_DB_HOST
      - WORDPRESS_DB_USER
      - WORDPRESS_DB_PASSWORD
  letsencrypt:
    image: linuxserver/letsencrypt
    container_name: letsencrypt
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./letsencrypt/config:/config
      - ./name_of_portfolio_root:/var/www/html/name_of_portfolio_root # Change me
      - ./wordpress:/var/www/html/wordpress
    restart: always
    depends_on:
      - wordpress
    environment:
      - PUID
      - PGID
      - EMAIL
      - URL
      - SUBDOMAINS
      - TZ

This is where most of the magic happens: you simply plug in where your root folders live (i.e. your WordPress folder, which will probably be called wordpress if you use the official WP image) and set some environment variables inside your .env file.
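You may also have noticed that the compose file mounts a local php-uploads.ini into the WordPress container, which overrides PHP’s default upload limits. The exact values are up to you; a minimal sketch might look like this (the sizes below are just placeholders, not necessarily what I run):

upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
max_execution_time = 300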

Note: PUID and PGID are the user and group IDs you would like the container to run as. In this case, you could set them to the IDs of www-data, and then change the ownership of all of the dependent volumes to www-data as well to avoid any permissions issues.
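For example, on Raspbian you could look up the numeric IDs and adjust ownership of the web-facing volumes like this (folder names taken from the compose file above):

id www-data
# typically uid=33(www-data) gid=33(www-data) on Debian-based systems – use these for PUID/PGID
sudo chown -R www-data:www-data ./wordpress ./name_of_portfolio_root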

I’m assuming you’re wearing your big boy/girl pants, so I’ll let you figure out how to write a .env file by yourself :).
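If you want a head start, though, the variables it needs are simply the ones listed under environment in the compose file above. Something like this, where every value is a placeholder (and the PUID/PGID of 33 assume www-data on a Debian-based OS like Raspbian):

MYSQL_ROOT_PASSWORD=changeme
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=changeme
WORDPRESS_DB_HOST=db:3306
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=changeme
PUID=33
PGID=33
EMAIL=you@example.com
URL=name-of-domain.url
SUBDOMAINS=www,blog
TZ=Europe/London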

Notice that I am using hypriot/rpi-mysql for my SQL server. That’s because my Raspberry Pi uses an ARMv7 processor, and a Docker image needs to be compatible with your target machine’s architecture. So make sure to look on Docker Hub for a version that suits your needs.
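If you’re not sure what architecture your machine has, a quick check on Raspbian (or any other Linux) is:

uname -m    # e.g. armv7l on a Raspberry Pi 2/3 running a 32-bit OS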

You can configure the Nginx server to handle different hostnames. In my case, I have my personal website and this blog, so I have two .conf files in my letsencrypt/config/nginx directory – one for each server (apart from the default, which simply redirects HTTP to HTTPS).
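That default is essentially a plain HTTP server block that bounces everything to HTTPS. A minimal sketch (not the exact file the linuxserver image ships with) looks like this:

server {
	listen 80 default_server;
	listen [::]:80 default_server;
	server_name _;
	return 301 https://$host$request_uri;
}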

Nginx

Let’s take a look at our portfolio configuration file:

## Version 2018/12/05 - Changelog: https://github.com/linuxserver/docker-letsencrypt/commits/master/root/defaults/default

# Expires map, this is some optional cache-control stuff to speed up revisits to my site
map $sent_http_content_type $expires {
    default                    off;
    text/html                  epoch;
    text/css                   max;
    application/javascript     max;
    ~image/                    max;
}

# main server block
server {
	listen 443 ssl http2 default_server;
	listen [::]:443 ssl http2 default_server;

	root /var/www/html/name_of_portfolio_site; # Change me
	index index.html index.htm index.php;

	server_name name-of-domain.url; # Change me

	# set cache control expiry map, this actually uses the block above
	expires $expires;

	# enable subfolder method reverse proxy confs
	include /config/nginx/proxy-confs/*.subfolder.conf;

	# all ssl related config moved to ssl.conf
	include /config/nginx/ssl.conf;


	client_max_body_size 0;

	location / {
		try_files $uri $uri/ /index.html /index.php?$args =404;
	}

	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass 127.0.0.1:9000;
		fastcgi_index index.php;
		include /etc/nginx/fastcgi_params;
	}
}

You can see that most of this file is just the default that the Let’s Encrypt application generates. The only thing you need to define here is the domain name and the root of the HTML/PHP files. So in the case of a WordPress blog, you would point it at whichever directory your index.php file lives in.
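For reference, a WordPress version of the block above would mostly differ in the root, the server_name and the fastcgi_pass target, which should point at the wordpress service from the compose file. This is a rough sketch rather than my exact config, and the subdomain is just an example:

server {
	listen 443 ssl http2;
	listen [::]:443 ssl http2;

	root /var/www/html/wordpress;
	index index.php;

	server_name blog.name-of-domain.url; # Change me

	include /config/nginx/ssl.conf;
	client_max_body_size 0;

	location / {
		try_files $uri $uri/ /index.php?$args;
	}

	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass wordpress:9000; # the wordpress:5-fpm container listens on port 9000
		fastcgi_index index.php;
		include /etc/nginx/fastcgi_params;
	}
}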

If you have everything set up correctly, then you should be able to run:

docker-compose up -d

And all of your services should start (Let’s Encrypt might take a while to generate SSL certs and verify your domain). You should be able to visit your website(s) without needing to install anything other than Docker and docker-compose!
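If you want to watch the certificate generation happen (or debug it when it doesn’t), you can check on the containers like this:

docker-compose ps                     # confirm all three services are running
docker-compose logs -f letsencrypt    # follow the Let’s Encrypt/Nginx container’s output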

If you get stuck, need some help or wish to contribute to my project, you can always visit my GitHub Repository as a reference guide.
