Unique Remote & Local Volume Paths on Docker Machine Containers

With a few lines of code you’ll be managing your files and directories across remote docker-machine hosts and your local machine in no time.

The Problem

Using docker-machine to manage my remote Docker deployments, I was frustrated to find out that volume mount paths in docker-compose.yml are mounted verbatim on the remote host. For example, with a docker-compose.yml such as this:

# docker-compose.yml
...
services:
  someservice:
    ...
    volumes:
      - ./local/dir:/path/in/container
...

The docker-compose command will convert the relative path ./local/dir to its absolute form on your local machine and mirror it verbatim on the remote server (although it doesn’t populate the files; more on that later). This leaves the server with a path that mirrors your local filesystem layout, which is not only unsightly, it also means that every developer who pushes new container changes will be writing to a different directory on the remote host. No bueno indeed!
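You can see the expansion for yourself with docker-compose config, which prints the fully resolved file. A hypothetical example (the checkout path is made up, and the exact output format varies between compose versions):

$ cd /home/alice/myproject && docker-compose config
...
    volumes:
    - /home/alice/myproject/local/dir:/path/in/container:rw
...

Check out the same project from /home/bob/myproject instead and the resolved host path changes with it, which is exactly the problem.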

The Solution

Since I was already using a Makefile to manage my project lifecycle, this one was pretty easy.

I) In the docker-compose.yml, change ./local/dir to ${BASE_PATH}/dir:

# docker-compose.yml
...
services:
  servicename:
    ...
    volumes:
      - ${BASE_PATH}/dir:/path/in/container
...

II) In the .env file, set the LOCAL_PATH and REMOTE_PATH variables, e.g.

# .env
LOCAL_PATH=./web
REMOTE_PATH=/home/$(SSH_USER)/web
  • You can set LOCAL_PATH and REMOTE_PATH to whatever works for you.
  • I already had the SSH_USER variable set for docker-machine purposes, but you could manually set the remote username if you prefer (the sketch below shows how the Makefile picks these values up).
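Since $(SSH_USER) in the .env above is Make syntax, the file is evidently pulled into the Makefile rather than only read by docker-compose. A minimal sketch of how that might look, assuming the .env is included directly (the bare export directive tells GNU Make to export everything):

# Makefile
include .env   # brings LOCAL_PATH, REMOTE_PATH, SSH_USER into make’s scope
export         # export all variables so child processes like docker-compose see them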

III) In the Makefile, set the path depending on whether DOCKER_HOST is set or not (it’s set when the machine has been created via docker-machine create and the local environment has been pointed at it via docker-machine env):

# Makefile
export BASE_PATH = $(if $(DOCKER_HOST),$(REMOTE_PATH),$(LOCAL_PATH))

… and there you have it! docker-compose commands will now use the correct directories!
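For reference, here’s how DOCKER_HOST typically gets toggled (the machine name myproject is hypothetical):

# Target the remote machine: sets DOCKER_HOST, so BASE_PATH becomes REMOTE_PATH
$ eval "$(docker-machine env myproject)"

# Target the local daemon again: unsets DOCKER_HOST, so BASE_PATH becomes LOCAL_PATH
$ eval "$(docker-machine env -u)"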

Syncing Files

So the paths are working but there aren’t any files, you say? docker-machine scp to the rescue!

I created this simple “sync” command in the Makefile:

# Makefile
...
sync:
	docker-machine scp -d -r $(LOCAL_PATH)/ $(SSH_USER)@$(PROJECT_NAME):$(REMOTE_PATH)
...
  • -d instructs docker-machine scp to transfer file “deltas” (changed files) instead of all files (it uses rsync under the hood)
  • -r syncs directories recursively

I also had the machine name, PROJECT_NAME, defined in my .env, but you could specify it manually, or maybe you’re set up some other way… whatever works!
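With hypothetical values of SSH_USER=myuser and PROJECT_NAME=myproject, make sync expands to something like:

$ make sync
docker-machine scp -d -r ./web/ myuser@myproject:/home/myuser/web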

Putting It Together

In the Makefile, my up command therefore looks like:

# Makefile
...
up:
	$(if $(DOCKER_HOST), make sync,)
	@echo "Starting up containers for $(PROJECT_NAME)..."
	docker-compose pull
	docker-compose up -d --build --remove-orphans
...

This copies the files to the server if we’re targeting a remote (again using DOCKER_HOST to see whether we’re remote or not) and then brings up our Docker containers, which now have bind mounts in a more logical place (the user’s home directory, in my case).
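End to end, a deploy then looks something like this (the machine name, driver, and IP are illustrative):

# One-time setup: provision the remote machine
$ docker-machine create --driver generic --generic-ip-address=203.0.113.10 myproject

# Every deploy afterwards: point the CLI at it and bring everything up
$ eval "$(docker-machine env myproject)"
$ make up   # DOCKER_HOST is set, so files are synced first, then containers start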

[Image: “Sending it up to the cloud”]

Thoughts

Give me a shout if you have any suggestions or need clarification on any of it.

Obviously this would all be a lot easier if Docker allowed mounting subdirectories of named volumes (a 4+ year-old feature request!) because then we could just do:

# docker-compose.yml
...
services:
  servicename:
    ...
    volumes:
      - volumename/dir:/path/in/container
...
volumes:
  volumename:
    driver_opts:
      type: none
      device: $PWD
      o: bind

Of course there’s always the “multiple compose files” way, where you create a docker-compose.yml, a docker-compose.prod.yml, and any number of other files, and load them during container initialization by passing the -f option with each filename, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up, in which volume paths can be overridden. But for this use case I thought it was pretty redundant to redeclare all the volumes; I prefer to have everything built from the variables in my .env file to keep it simple and easy to manage later on.
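For completeness, that alternative would be a sketch like this (the file name and paths are illustrative), layered on with the -f flags shown above:

# docker-compose.prod.yml (every bind-mount volume has to be redeclared here)
services:
  servicename:
    volumes:
      - /home/myuser/web/dir:/path/in/container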

I think this works pretty darn well. What do you think? Let me know in the comments if there’s an alternative way or room for improvement.

<3/>
