

Migrating a Ghost Blog using Docker Volumes


Rohan Molloy

Today we will learn how Docker volumes work and use them to synchronize the contents of this website between the remote server hosting production and a local workstation (for modification).

In a series of posts, we will look at how I prepare to move this site from the legacy Ghost 0.11 platform to the latest Ghost 1.10. But before that, I want a local copy of the blog to tinker with.

While Ghost has a "backup/export" option, it doesn't back up everything; in particular, uploaded images are not included. In this tutorial, we will create a mirror on my home workstation by 'pulling' from the production server.

Docker makes it easy!


In Docker, persistent state data is generally not stored inside the containers themselves but rather inside volumes. For the pre-1.x versions of Ghost, you'd want /var/lib/ghost to be stored in a volume. This way, if you accidentally or intentionally delete your container, your blog isn't lost, and new containers can easily be pointed at it.
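As a concrete sketch of that pattern, here is what creating a named volume and mounting it at /var/lib/ghost looks like. The volume and container names are hypothetical, and the commands are echoed rather than executed, so the shape is visible without a Docker daemon:

```shell
# Dry-run sketch: a named volume mounted at the Ghost data path.
# 'ghost-data' and 'ghost-dev' are hypothetical names.
vol=ghost-data
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute
run docker volume create "$vol"
run docker run -d --name ghost-dev -v "$vol":/var/lib/ghost ghost:0.11
```

Deleting and recreating the ghost-dev container leaves the ghost-data volume (and the blog inside it) untouched.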

My blog lives on a remote server, but I want a local copy on my desktop to test the upgrade. No worries: all I want is the contents of the blog, not the entire filesystem of the container.

Currently this site runs off a Docker container for Ghost 0.11.11. This container is named etherarp-ghost-0.11.11-production. We declare this as a variable:

# Name of our container
conName=etherarp-ghost-0.11.11-production

Finding the remote volume

Your Ghost container automatically created a persistent volume for the internal location /var/lib/ghost, and this volume persists even after the container is deleted. However, there is a problem: unless you explicitly create, name, and mount the volume, it will not have a human-readable name; instead, it is identified by a long alphanumeric sequence.

So in order to find the location of this automatically created volume, we need to inspect the container's metadata.

From my home machine, I SSH into the server and use docker inspect etherarp-ghost-0.11.11-production to view the container metadata.

In the "Mounts": section, we should see something like this. "Source" is the true location on the host, while "Destination" is the mount point inside the container.

            "Type": "volume",
            "Name": "09e245a[...]",
            "Source": "/var/lib/docker/volumes/09e245a[..]/_data",
            "Destination": "/var/lib/ghost/",
            "Driver": "local",
            "Mode": "",
            "RW": true,
            "Propagation": ""


Let's retrieve the remote Ghost path by inspecting the remote container (we defined the name earlier). We store it as a variable on the local machine.

The first line SSHes into the remote machine, runs docker inspect, looks for /var/lib/ghost, and filters out the exact Source location. The second line removes the trailing comma: ${conVolRemotePath::-1} means all but the last character of the variable conVolRemotePath.

# Remote path of the container volume (on the remote machine)
conVolRemotePath=`ssh root@Website docker inspect $conName | jq '.[] | .Mounts' | grep -b1 -A0 /var/lib/ghost | grep Source | awk '{print $3}'`
conVolRemotePath=${conVolRemotePath::-1}
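To see what that trim does in isolation, here is a pure-shell sketch with a made-up sample value (no SSH or Docker involved). `${raw%,}` is the POSIX spelling of the same "drop the trailing comma" step as `${raw::-1}`:

```shell
# Hypothetical sample of what the jq/grep/awk pipeline emits:
raw='"/var/lib/docker/volumes/09e245a/_data",'
trimmed=${raw%,}    # strip the trailing comma (POSIX form of ${raw::-1} here)
echo "$trimmed"
```

Note the value still carries its JSON double quotes; they are harmless in the ssh/tar command later because the remote shell re-parses and strips them.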

Creating a named persistent volume on the local machine

Now I create a volume on my local machine and store its name and host path in variables.

By default, we use the remote container name as the name of our volume, but we could change it, perhaps like this: docker volume create --name $(sed s/production/development/ <<< "$conName")
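As a quick, Docker-free check of that sed substitution (conName is restated here so the snippet stands alone, and an echo pipe is used as the portable equivalent of the <<< here-string):

```shell
# Rewrite the production container name into a development volume name
conName=etherarp-ghost-0.11.11-production
devVolName=$(echo "$conName" | sed s/production/development/)
echo "$devVolName"
```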

# Create the volume and record its name and host path
conVolLocal=`docker volume create --name "$conName"`
conVolLocalPath=`docker volume inspect --format '{{ .Mountpoint }}' "$conVolLocal"`

Now, running docker volume ls, we should see our newly created volume. We should also be able to cd $conVolLocalPath.

Transferring the files from remote to local volumes

Now we transfer the files from the remote volume to the local volume

# Perform the migration
ssh root@Website tar -cpv -C $conVolRemotePath . | tar -xp -C $conVolLocalPath
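The same tar-to-tar pipe can be exercised locally with throwaway temp directories (the paths here are scratch dirs, not the real volume paths), which is a handy way to verify the flags before pointing it at production:

```shell
# Mimic the migration pipe: pack one tree, unpack it into another
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/images"
echo "hello" > "$src/images/logo.txt"
tar -cp -C "$src" . | tar -xp -C "$dst"
cat "$dst/images/logo.txt"
```

The -C flags make both tars operate relative to the given directories, so the archive contains only relative paths and unpacks cleanly anywhere.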

You could even run this as a cron job so that your local machine always has an up-to-date snapshot of the remote blog.
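A sketch of such a crontab entry, assuming a nightly run at 03:00. Note that cron will not expand the shell variables used above, so the resolved volume paths (placeholders here) have to be written out or wrapped in a small script:

```
# m h dom mon dow  command
0 3 * * * ssh root@Website tar -cp -C <remote-volume-path> . | tar -xp -C <local-volume-path>
```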

Creating the new container

Finally, we create a new container on the local machine and point it at the local volume. The -v option mounts the existing volume named $conVolLocal at the container path /var/lib/ghost.

# Create the container (the image tag is assumed to match the remote container's Ghost version)
docker run --detach --name "$conName" -v "$conVolLocal":/var/lib/ghost ghost:0.11.11

Connecting to the container's address, it appears to function identically to the copy on the server.

The local mirror of the site

