This is a write-up of how I migrated my Nginx web server from running as a standard host service to running inside a Docker container. We'll also look at customizing logging and network options, including configuring Docker for IPv6.
I find it easier to keep a single flat nginx.conf file, appending vhosts generated from a template to that one file.
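For additional vhosts, something along these lines could work. This is just a sketch: the domain, webroot, and the server block contents are placeholders, not my actual config.

```shell
# Hypothetical template: append a generated vhost block to the flat nginx.conf.
# DOMAIN and WEBROOT are placeholders, substituted per vhost.
DOMAIN="example.com"
WEBROOT="/var/www/html/example.com"

cat >> nginx.conf <<EOF
server {
    listen 80;
    server_name ${DOMAIN} www.${DOMAIN};
    root ${WEBROOT};
    # No per-vhost error_log here: Docker captures nginx's
    # stdout/stderr via its logging driver instead.
}
EOF
```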
The main thing is that you'll need to comment out any error_log entries created on a per-vhost basis. For now, I'm just using Docker's built-in syslog functionality.
If you'd rather keep using the /var/log/nginx files, that should be possible using a volume mount.
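If you go that route, the extra mount would look something like this, added to the docker create invocation shown later (the paths are assumptions; adjust to your layout):

```shell
# Fragment for the docker create command (not standalone): bind-mount the
# host's nginx log directory into the container read-write, then keep the
# usual access_log/error_log directives pointing at /var/log/nginx.
  --volume /var/log/nginx:/var/log/nginx \
```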
Creating an additional docker bridge
docker network create --driver bridge \
  --subnet 10.128.64.0/24 \
  --opt com.docker.network.bridge.name=docker1 \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt "com.docker.network.bridge.host_binding_ipv4=$floatingip" \
  docker1
Let's break down the options.
com.docker.network.bridge.name is the device name of the bridge created on the host. If you don't set it manually, you'll get an autogenerated name like "br-78c40ed9122e".
enable_icc refers to "Inter-Container Communication". If you intend to use nginx as a reverse proxy for other containers, you'd want this set to true.
com.docker.network.bridge.host_binding_ipv4 is the default host address for published ports. For example, -p 80:80 by default publishes port 80 on every interface; with this option we bind to one particular IP address instead. Omit it if you don't need it.
Stop and disable the host nginx service
Ports 80 and 443 are currently occupied by the nginx service running on the host, which would cause the container to fail to start. Therefore, we need to stop and disable the host nginx service.
sudo systemctl stop nginx
sudo systemctl disable nginx
Creating the docker container
docker create \
  --network docker1 \
  --hostname nginx_prod \
  --ip 10.128.64.128 \
  --name nginx-production \
  --volume /var/www/html:/var/www/html:ro \
  --volume /etc/ssl:/etc/ssl:ro \
  --restart=on-failure \
  -p 80:80 -p 443:443 \
  nginx:latest
Copying over the nginx.conf file
sudo docker cp /etc/nginx/nginx.conf nginx-production:/etc/nginx
sudo docker restart nginx-production
Checking the container is operational
The simple way is to use docker ps, which lists all running containers.
Let's look at how we can query the status of the container in more depth via the docker inspect JSON interface.

docker inspect nginx-production | jq -r '.[0].State.Status'
running
Then try visiting your website to check nginx is actually working.
Query the system logs of the container
$ docker logs nginx-production
Where are these logs actually stored?
NGINX_LOG=$(docker inspect nginx-production | jq -r '.[0].LogPath')
This is handy in case you want to use a tool like goaccess to process your logs.
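One wrinkle: Docker's default json-file driver wraps each access-log line in a JSON envelope, so you may need to unwrap it before feeding the file to goaccess. A rough sketch (the log entry below is fabricated sample data; if jq is installed, jq -r '.log' is a cleaner way to do the same unwrapping):

```shell
# Sample of what a json-file log entry looks like (made-up data).
cat > container.log <<'EOF'
{"log":"203.0.113.7 - - [01/Jan/2024:00:00:00 +0000] \"GET /index.html HTTP/1.1\" 200 612\n","stream":"stdout","time":"2024-01-01T00:00:00.000000000Z"}
EOF

# Strip the JSON envelope, keeping only the raw nginx log line.
# Note: embedded quotes remain backslash-escaped; jq -r '.log' handles
# them properly if available.
sed -n 's/^{"log":"\(.*\)\\n",.*/\1/p' container.log > access.log
```

You could then run something like goaccess access.log --log-format=COMBINED over the result.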
Creating a systemd unit to auto-start the container at boot
Paste the following into /etc/systemd/system/nginx-docker.service:
[Unit]
Description=Nginx-Docker
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker start nginx-production

[Install]
WantedBy=multi-user.target
First, we stop the running container, to confirm that the systemd unit actually works at starting it.
sudo docker stop nginx-production
Now start and enable the systemd unit
sudo systemctl enable nginx-docker
sudo systemctl start nginx-docker
docker inspect nginx-production | jq -r '.[0].State.Status'
Restricting container access to the outside world
Restricting egress network access is a great way to improve the security of a web server: it makes an attacker's job significantly harder, as they can't download their tools or phone home to spawn a reverse shell.
We'll add iptables rules to the DOCKER-ISOLATION chain so that our nginx container is only allowed to contact the server needed for OCSP stapling (see my TLS tutorial if you want to know what that is).
# By default, drop all egress traffic from the nginx container
sudo iptables \
  --insert DOCKER-ISOLATION \
  --in-interface docker1 \
  --src 10.128.64.128 \
  --jump DROP

# Allow the container to contact the OCSP stapling server
sudo iptables \
  --insert DOCKER-ISOLATION \
  --in-interface docker1 \
  --src 10.128.64.128 \
  --dst ocsp.comodoca.com \
  --proto tcp --dport 80 \
  --jump ACCEPT
Setting it up with IPv6
Delete the container and network we created in the steps above (if they exist).
sudo docker rm --force nginx-production
sudo docker network rm docker1
Create a docker network, giving it the IPv6 subnet allotted by your cloud provider for additional addresses.
The main IPv6 address on the server is 2001:db8:420:d0::d08:a001/64. My provider lets me add additional addresses up to 2001:db8:420:d0::d08:a00f (10 additional addresses). In IPv6 CIDR, this range is a /124.
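As a sanity check on the subnet size: a /124 leaves 128 - 124 = 4 host bits, i.e. 16 addresses. A quick sketch enumerating the suffixes of the 2001:db8:420:d0::d08:a000/124 range used below:

```shell
# 4 host bits => 2^4 = 16 addresses, ...a000 through ...a00f.
for i in $(seq 0 15); do
  printf '2001:db8:420:d0::d08:%x\n' $((0xa000 + i))
done
```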
docker network create \
  --driver bridge \
  --subnet 10.128.64.0/24 \
  --ipv6 \
  --subnet 2001:db8:420:d0::d08:a000/124 \
  --opt com.docker.network.bridge.name=docker1 \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt "com.docker.network.bridge.host_binding_ipv4=$floatingip" \
  docker1
Enable proxy NDP
sudo sysctl net.ipv6.conf.eth0.proxy_ndp=1
sudo ip -6 neigh add proxy 2001:db8:420:d0::d08:a00a dev eth0
Creating the container
sudo docker run \
  --detach \
  --network docker1 \
  --ip 10.128.64.128 \
  --ip6 2001:db8:420:d0::d08:a00a \
  --name nginx-production \
  --volume /var/www/html:/var/www/html:ro \
  --volume /etc/ssl:/etc/ssl:ro \
  --restart=always \
  -p 80:80 -p 443:443 \
  nginx:latest
Now just copy over the new nginx.conf (with the IPv6 listen lines uncommented) and restart the container.
sudo docker cp /etc/nginx/nginx.conf nginx-production:/etc/nginx
sudo docker restart nginx-production
Testing it works
From an IPv6-capable client, check that you can access the web server.
$ curl --ipv6 https://etherarp.net/robots.txt
User-agent: *
Sitemap: http://etherarp.net/sitemap.xml
Disallow: /ghost/