Docker WordPress site with Nginx, MariaDB, PHP7


Imagine it’s the weekend and you have a few minutes to spare. You’ve had the nagging feeling you’d like to do something with WordPress that is a little more secure, and ready to scale, but just haven’t had the time. All the cool kids are talking about Docker and the little you’ve heard about containers seems like this might be something fun to play around with…

Having played around with WordPress, Rails, Django and Mambo/Joomla way back in my early days of CMS experimentation, it seems like I always make my way back to WordPress for its wide ecosystem of plugins. As we all know, the trade-off for all that quick and easy bolt-on functionality is a dependence on a PHP stack that is increasingly showing its age. As much as the familiar vanilla LAMP stack used to be a comforting alternative to Microsoft's IIS offerings, it's never too early to re-evaluate what we think we know and consider other ways to build a better mousetrap.

I'll skip the long commentary about LEMP vs. LAMP stacks here, though it's something I'd like to come back to and explore in detail, since I know the initial reaction might be: why can't we just do all of this on a virtual server with Apache?

For this WordPress project the stack we'll be using is:

  • Webserver: Nginx (pronounced "Engine-X")
  • Database: MariaDB (open-source MySQL fork)
  • Containerization: Docker
  • Hosting: AWS EC2
  • Other: PHP 7.1 and phpMyAdmin

I'm going to go ahead with the Docker install in this tutorial based on a vanilla Amazon Linux AMI t2.micro EC2 instance, but these install instructions should be similar for all flavors of Linux. Obviously CentOS-based distros will be using yum and Debian-based distros will be using apt-get. For macOS or Windows Docker installs, please refer to the official Docker documentation.

Once you have SSH’d into your EC2 instance, let's start by updating our installed software packages:

sudo yum update -y

Then go ahead and install Docker:

sudo yum install docker -y
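
On Amazon Linux the Docker daemon isn't started automatically after the install, and you'll likely want to add ec2-user to the docker group so you can run docker without sudo (log out and back in for the group change to take effect):

sudo service docker start
sudo usermod -a -G docker ec2-user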

Docker Ahoy!

What we are trying to achieve with Docker is a light-weight and somewhat isolated web app that will be both easy to maintain and able to scale with future demand. In a nutshell, Docker allows us to build images that will serve as the blueprints for the containers that will house the functionality for our website. We will build and launch these containers on our Docker host server using the Docker command-line interface (CLI). I'll step through the configuration/customization of our Docker images line by line and try to call out some additional customization options along the way.
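
If the image/container workflow is new to you, the basic loop looks like this (hypothetical names, purely to illustrate the commands we'll be using throughout):

docker build -t myimage ./myimage          # build an image from the Dockerfile in ./myimage
docker run -d --name mycontainer myimage   # launch a container from that image
docker ps                                  # list running containers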

Our Stack

TLGTF (Too Long Give Me The Files)

Github Repository: http://bit.ly/2jFJMN3

There is no secret sauce here as far as directory structure is concerned - feel free to arrange according to your own personal OCD organizational tendencies.  Just remember to adjust paths accordingly in the configs if you decide to roll your own.

Let's start by creating our top-level directory to house our projects. Then we'll add subdirectories for each of our images.

If you're not familiar with Docker's container/layer concepts, it would be worth a quick visit over to https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#container-and-layers where they describe the layer concept and what copy-on-write means. Several of these directories (mariadb, nginx/log and www) will also serve as external mount points for our containers' persistent storage:

WordPress App Directory Layout

Without further ado, we'll build out our directories:

mkdir docker docker/project
cd docker/project
mkdir mariadb nginx nginx/log php www

If you'd like to follow along with all of the git commands, I've optionally included those as we move along:

Git initialization, excluding the contents of the persistent directories:

git init
echo mariadb/* >> .gitignore
echo nginx/log/* >> .gitignore
echo www/* >> .gitignore
git add .gitignore

Build: PHP Container

We're going to use the PHP 7.1 image compiled with the FastCGI Process Manager (FPM) as our application backend. Nginx will handle all of the static requests and pass any PHP requests off to our PHP-FPM container over FastCGI. We could have our Nginx container load balance several PHP-FPM containers down the road as necessary, but there are several ways to skin that cat and for the purposes of this demo, we'll keep it simple.
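
Purely as a sketch (not part of this build): if we later ran two PHP-FPM services, say php1 and php2, the site.conf we'll write below could point fastcgi_pass at an upstream group instead of a single host:

# hypothetical upstream group of PHP-FPM containers
upstream php_backend {
    server php1:9000;
    server php2:9000;
}

# then, inside the PHP location block:
#     fastcgi_pass php_backend;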

We could create an external volume to persist the PHP configuration files (php.ini, etc.), but I prefer baking these settings into the image specifically for our WordPress app, so we'll add the additional PHP extensions and php.ini settings in our Dockerfile. If a php.ini setting needs to change, we can just edit the Dockerfile and rebuild.

Note: If you are going to be serving multiple virtual sites through the same PHP container, it might be better to externalize the php.ini file and/or the conf.d directory (under the php subdirectory) so you can vary .ini settings by virtual host.

Let's go ahead and open up our favorite text editor (I prefer vi, but won't judge too much if you nano...) and create a Dockerfile in the project/php directory. Here we customize our image by adding another layer on top of the official php base image from the Docker repository:

vi php/Dockerfile
FROM php:7.1-fpm

# install PHP extensions (gd, mysqli, opcache and zip)
# configure gd with png and jpg support
RUN set -ex; \
    \
    apt-get update; \
    apt-get install -y \
        libjpeg-dev \
        libpng-dev \
        zlib1g-dev \
    ; \
    rm -rf /var/lib/apt/lists/*; \
    \
    docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
    docker-php-ext-install gd mysqli opcache zip

# set recommended PHP.ini settings for opcache
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

# Set PHP.ini settings for script execution and uploads
RUN { \
        echo 'file_uploads = On'; \
        echo 'upload_max_filesize = 64M'; \
        echo 'post_max_size = 64M'; \
        echo 'memory_limit = 256M'; \
        echo 'max_execution_time = 600'; \
        echo 'max_input_time = 600'; \
    } > /usr/local/etc/php/php.ini

# Create /www directory and make it world read/write/executable
RUN mkdir /www
RUN chmod -R a+rwx /www

PHP Extensions

If there are additional PHP extensions that need to be added, make sure you add a line to install the appropriate development library with apt-get. You will then also need to make sure these are configured and installed with the Docker PHP helper scripts provided by the php:7.1-fpm image (see the sketch after this list):

  • docker-php-ext-configure
  • docker-php-ext-install
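
For example, here is a sketch of what adding the intl extension might look like (intl isn't needed for this build; libicu-dev is the development library it depends on):

RUN apt-get update; \
    apt-get install -y libicu-dev; \
    rm -rf /var/lib/apt/lists/*; \
    docker-php-ext-configure intl; \
    docker-php-ext-install intl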

PHP.ini Customizations

  • We've added the Opcache extension to boost PHP performance. This extension stores pre-compiled code in memory to minimize loading and parsing of scripts for each request. Find out more about it here: https://secure.php.net/manual/en/intro.opcache.php
  • Filesize Limits/Memory Limits/Execution Times:
    file_uploads: Enabled by default (boolean).
    upload_max_filesize: Maximum size of an uploaded file. 2MB is the default if no setting is specified.
    post_max_size: Maximum size of post data. This should be larger than upload_max_filesize, as an uploaded file would need to be encapsulated in the post data.
    memory_limit: Maximum amount of memory a script is able to use. Should be larger than post_max_size. Default is 128MB.
    max_execution_time: Maximum time in seconds a script is allowed to run. Default is 30 seconds.
    max_input_time: Maximum time in seconds a script can parse input data. Default is -1, which means it takes the same value as max_execution_time.

Finally, we've created the /www directory and added a statement at the end to make it readable, writable, and executable by all users.

So now we've defined our first image. Provided we are still in the ~/docker/project directory, we can build it and then run a one-liner to test our container, dumping the entire php.ini settings to stdout:
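
Note that the run command below resolves the name php to a local image tag if one exists, so we build and tag our custom image as php first; otherwise Docker would pull the official php image from Docker Hub instead of testing our customized build:

docker build -t php php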

docker run -it php /usr/local/bin/php -i

If everything looks good, let's go ahead and git commit.

git add php/Dockerfile
git commit -m "Initial baseline of php container"

Build: Nginx

Before we move on to composing the rest of our containers, we'll go ahead and set up the configuration of Nginx. We'll start by creating a simple Dockerfile in our nginx directory:

vi nginx/Dockerfile
FROM nginx:latest

# Remove default configuration files
RUN rm /etc/nginx/conf.d/default.conf; exit 0

# Copy our custom config files to the Nginx container
COPY nginx.conf /etc/nginx
COPY site.conf /etc/nginx/conf.d

In the pursuit of exposing less of the host directory structure, we are going to copy our customizations over at container build time. We first remove the default.conf (adding the exit 0 allows the build to ignore the "No such file or directory" error on later modifications we might make to the container) so our conf.d starts out empty.

There are several other ways to approach Nginx config files - we could also simply map both the configuration directory (/etc/nginx/conf.d) and the nginx.conf file to locations on our host system so we can modify on the fly.
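
For the curious, that alternative would look roughly like this in the docker-compose file we'll write later (a sketch only; we are not using these mounts in this build):

nginx:
  build: ./nginx/
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    - ./nginx/conf.d:/etc/nginx/conf.d:ro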

But as stated, we are looking to reduce exposure to the host file system; these files will not change often, and when they do we can simply use git to keep track of what we were trying to achieve and rebuild as needed.

Now let's finish by creating our Nginx config files and get this show on the road!

vi nginx/nginx.conf

nginx.conf:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    keepalive_timeout  65;

    client_max_body_size 0;

    include /etc/nginx/conf.d/*.conf;
}

vi nginx/site.conf

site.conf:

server {
    index index.php index.html;
    server_name "";
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    rewrite_log on;
    root /www;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.php
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_read_timeout 300;
    }
}

You'll note that the root for our site is located at /www in the container. Later on we will map this container directory to an external directory on the host to maintain persistence of our static files.

Assuming we are still in the ~/docker/project directory - let's build our nginx container to make sure everything works:

docker build -t nginx nginx

Finally, still from the ~/docker/project directory, we'll add and commit our changes with git:

git add nginx/

git commit -m "Initial baseline for nginx container"

Build: Nginx, MariaDB, phpMyAdmin, PHP-FPM

We can now put the application together by joining our containers as services within a docker-compose.yml file.
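
One note before we write it: installing Docker with yum does not install docker-compose; it ships as a separate tool. One common approach at the time of writing is to install it with pip (check the official Docker documentation for the currently recommended method):

sudo pip install docker-compose
docker-compose --version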

You'll need to have your text editor handy again. We'll create this file first, and then I'll explain what is happening under the hood.

vi docker-compose.yml

docker-compose.yml:

version: '2'
services:

  nginx:
    build: ./nginx/
    ports:
      - "80:80"
    volumes_from:
      - php
    volumes:
      - ./nginx/log:/var/log/nginx
    links:
      - php

  php:
    build: ./php/
    volumes:
      - ./www:/www
    links:
      - mariadb

  mariadb:
    image: mariadb
    restart: always
    volumes:
      - ./mariadb:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: PASSWORD
      MYSQL_USER: admin
      MYSQL_PASSWORD: PASSWORD
      MYSQL_DATABASE: projectdb
    expose:
      - "3306"

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    links:
      - mariadb
    ports:
      - 8183:80
    environment:
      MYSQL_USERNAME: admin
      MYSQL_ROOT_PASSWORD: PASSWORD
      PMA_HOST: 127.0.0.1
      PMA_ARBITRARY: 1

So we have a fair amount to cover here. Starting from the top:

service: nginx

  • build: directory path to where the Dockerfile is located
  • ports: maps a host port to a container port, i.e. "host_port:container_port"
  • volumes_from: inherits volumes from the specified containers; here we gain read access to the www volume mounted in the php container
  • volumes: mounts a host file/directory to a container file/directory, i.e. "host:container"; we are mounting an external directory for the site logs
  • links: links to the container in another service (php)

service: php

  • build: directory path to where the Dockerfile is located
  • volumes: mounts a host file/directory to a container file/directory, i.e. "host:container"
  • links: links to the container in another service; needs to be linked to mariadb so we can read/write to our WordPress database

service: mariadb

  • image: specifies the image to start the container from; if it doesn't exist locally, Compose will attempt to pull it from the Docker repository
  • restart: sets the container restart policy; we are setting it to always restart on container up/cycle
  • volumes: mounts a host file/directory to a container file/directory, i.e. "host:container"; we are mounting an external directory for the database files
  • environment: sets the specified environment variables; here we are setting up the MySQL users/passwords and database name
  • expose: port on the container to expose to linked services only

service: phpmyadmin

  • image: specifies the image to start the container from; if it doesn't exist locally, Compose will attempt to pull it from the Docker repository
  • restart: sets the container restart policy; we are setting it to always restart on container up/cycle
  • links: links to the container in another service; needs to be linked to mariadb so we can read/write to our database(s)
  • ports: maps a host port to a container port, i.e. "host_port:container_port"; here we map host port 8183 to container port 80. Make sure port 8183 is open on the external firewall/security group if you want to access phpMyAdmin remotely
  • environment: sets the specified environment variables; here we need to set PMA_HOST to localhost or, if using AWS, the internal IP address of the host
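
Because YAML is whitespace-sensitive, it's worth a quick sanity check before committing; docker-compose can validate the file and print the resolved configuration:

docker-compose config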

Git add and commit our docker-compose.yml:

git add docker-compose.yml
git commit -m "Initial docker-compose configuration"

Setup: WordPress

We're almost done here; now we just have to download the latest WordPress release (currently located at https://wordpress.org/latest.tar.gz).

Let's go ahead and download it now:

wget -O www/latest.tar.gz https://wordpress.org/latest.tar.gz

Next we'll untar/gunzip:

cd www
tar -xvzf latest.tar.gz

We could leave everything in the newly created wordpress directory or (as in this case) move it all to the www root and remove the wordpress directory. This really is just a choice of how you want the directory structure under www laid out:

mv wordpress/* ./
rmdir wordpress
rm latest.tar.gz

Finally, let's step back up to the project directory and fix the permissions on the host www directory:

cd ..
chmod -R a+rwx www

At this point we should be ready to move forward with setting up WordPress. Just remember your WordPress database name, DB user and DB password (from docker-compose.yml) so that you can move quickly through the setup process.

When ready, use the docker-compose command from the ~/docker/project directory to bring the application up in detached mode (so it keeps running after your shell session ends):

docker-compose up -d

To shut it down, from the ~/docker/project directory simply enter:

docker-compose down
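
If something doesn't come up as expected, a couple of commands I find handy for checking on the stack (run from the same directory):

docker-compose ps              # list the state of each service's container
docker-compose logs -f nginx   # tail the logs for a single service (swap in php, mariadb, etc.)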

Now we just have to enter in our credentials and we can blaze through the famous 5 minute WordPress setup. For database host, make sure you use the mariadb container name rather than localhost.
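
For reference, based on the environment variables we set in docker-compose.yml, the values the setup wizard writes into wp-config.php should end up looking roughly like this (a sketch with our example credentials; yours will differ):

define('DB_NAME', 'projectdb');
define('DB_USER', 'admin');
define('DB_PASSWORD', 'PASSWORD');
define('DB_HOST', 'mariadb');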

WordPress setup page 1
WordPress setup page 2

If you found this tutorial helpful or would like additional help, leave me a reply in comments below!

Jeff Jones
