WARNING: highly technical post ahead!
A tale of two synced folder backend implementations
Here at DigiACTive, we’ve made much use of Vagrant to help us manage and deploy a consistent development environment across all our development machines. For the uninitiated, Vagrant essentially allows developers to create a standard configuration — operating system, software packages, configuration files, and so on — which can be automagically deployed (usually using a single command!) into a variety of environments, such as a VirtualBox virtual machine. Our development team uses both Linux and Mac OS X, and our software has a large number of dependencies, so having a standard development environment has proven very useful.
Our Vagrant configuration uses Vagrant’s synced folders to share the developer’s working copy of our Git repository with the test server. Depending on the provider onto which the Vagrant setup is deployed, there are a number of different backend implementations used for synced folders. By default, VirtualBox deployments will use VirtualBox shared folders — which, apart from being notoriously unreliable, have limited support for some important POSIX file system features, such as hard links.
To get around this, one can enable the alternative NFS backend, which doesn’t have these limitations — and can conveniently be enabled with a single line in the Vagrantfile. It all sounds great, right?
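For reference, switching to the NFS backend really is a one-line change to the synced folder declaration (paths here are illustrative; older Vagrant releases spell the option `nfs: true` rather than `type: "nfs"`, and NFS requires a private network to be configured):

```ruby
# Vagrantfile: share the project over NFS instead of VirtualBox
# shared folders. NFS synced folders need a host-only/private network.
Vagrant.configure("2") do |config|
  config.vm.network "private_network", ip: "192.168.50.4"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```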
Daniel fixed up the last details of our Vagrantfile, gave it a final test, and pushed it up. Ben pulled it down on to his MacBook, ran vagrant up, and after a couple of hours of watching our Chef scripts download many, many dependencies, it all worked! (Well, mostly. Anyway.)
Meanwhile, on my Debian box, I (Andrew) was getting very, very close to tearing my hair out. After fixing up a number of other problems to get Vagrant running properly, I tried to provision the VM — and every time I tried, it complained that the provisioning scripts didn’t have appropriate permissions when working with our shared folders.
NFS and permissions
After some investigation, it became apparent what the issue was.
Our Chef scripts were attempting to change the owner of the configuration files they were modifying to vagrant, the default user set up inside the VM. However, as the synced folders are mounted using NFS, changing the ownership of a file on the client (the VM) means changing the ownership on the server (the host system). On the host system, root squashing meant that the client didn't have the root privileges necessary to do that.
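With the default root_squash export option, root on the NFS client is mapped to an unprivileged user on the server, so the symptom from inside the VM looks roughly like this (the file path and output here are illustrative):

```shell
# Inside the guest, on the NFS-mounted synced folder:
sudo chown vagrant:vagrant /vagrant/config/settings.yml
# chown: changing ownership of '/vagrant/config/settings.yml':
#   Operation not permitted
```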
However, it soon became apparent that there was another issue here:
It turns out that NFS shares UIDs between the server and clients — which is fine if all systems use the same authentication backend. This isn’t the case in a Vagrant system, obviously. NFS does not provide a built-in way to map between different server/client users — I believe this can be accomplished with a proper directory service, but we weren’t going to set that up just for a simple development VM.
This also explained why Daniel and Ben had no problems provisioning on their Macs. On my Debian host system, standard UIDs start at 1000, while in the CentOS guest system, UIDs start at 501. Coincidentally, Mac OS X also starts UIDs at 501, so on their systems the files were already owned by a UID that matched the vagrant user inside the guest.
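The mismatch is easy to see by comparing numeric ownership on each side of the mount (all values below are illustrative):

```shell
# On the host: the working copy is owned by the host user's UID.
ls -ln ~/project/Vagrantfile
# -rw-r--r-- 1 1000 1000 1234 Jan  1 12:00 Vagrantfile

# Inside the guest: NFS passes the UID through unchanged,
# and UID 1000 doesn't correspond to the vagrant user there.
ls -ln /vagrant/Vagrantfile
# -rw-r--r-- 1 1000 1000 1234 Jan  1 12:00 Vagrantfile
id -u vagrant   # prints a different UID on the guest
```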
I searched around for a few too many hours trying to find a solution, playing around with random NFS mounting options in an effort to make it work…
bindfs and vagrant-bindfs
Bind mounts have existed for a while in various *nixes, allowing users to mount already-mounted filesystems to other locations.
bindfs takes this concept further, though. Using the wonderful powers of FUSE, bindfs allows you to virtually alter ownership and permission bits — which is exactly what I needed! As a FUSE filesystem, bindfs has some unfortunate performance issues, but hey, at least it’d work!
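Used by hand, the idea is to mount the NFS share somewhere out of the way, then bindfs it into place with ownership overridden as seen by processes in the guest (mount points here are illustrative):

```shell
# Present every file under /vagrant-nfs as owned by vagrant:vagrant,
# without touching the real ownership on the NFS server.
bindfs --force-user=vagrant --force-group=vagrant \
       /vagrant-nfs /vagrant
```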
Even better, there’s a Vagrant plugin, vagrant-bindfs, that allows easy configuration of bindfs mounts straight from the Vagrantfile!
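In the Vagrantfile this looks roughly like the following (paths are illustrative, and the option names for forcing ownership have varied across plugin versions, so check the README for your release):

```ruby
Vagrant.configure("2") do |config|
  # Mount the NFS share at a staging path, then let vagrant-bindfs
  # re-mount it at the real location with friendly ownership.
  config.vm.synced_folder ".", "/vagrant-nfs", type: "nfs"
  config.bindfs.bind_folder "/vagrant-nfs", "/vagrant"
end
```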
I thought I’d found my solution, and proceeded to try and grab the bindfs RPM for CentOS 6…
…which didn’t exist. It used to be built as part of the EPEL repository, but alas, no more!
bindfs, vagrant-bindfs, and RPMs
After much searching around for information about building RPMs, I ended up downloading the RPM spec file from the old EPEL package, updating it to bindfs 1.12.3, and compiling it myself. After learning more than I ever wanted to know about RPM building, I managed to compile a CentOS 6-compatible bindfs RPM.
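The rebuild itself boils down to the standard rpmbuild dance (package and file names below are illustrative):

```shell
# Install build tooling and bindfs's build dependencies.
yum install rpm-build fuse-devel

# Set up a build tree, drop in the updated spec and new tarball.
mkdir -p ~/rpmbuild/{SPECS,SOURCES}
cp bindfs.spec ~/rpmbuild/SPECS/
cp bindfs-1.12.3.tar.gz ~/rpmbuild/SOURCES/

# Build source and binary RPMs; results land under ~/rpmbuild/RPMS/.
rpmbuild -ba ~/rpmbuild/SPECS/bindfs.spec
```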
However, there was still the small issue of installing this RPM during the Vagrant configuration process. For vagrant-bindfs to work, bindfs needs to be installed in the boot-up configuration phase, not the provisioning phase, which meant that adding it to our Chef scripts wasn’t going to work.
Conveniently, vagrant-bindfs makes use of Vagrant’s guest capabilities framework to detect when the guest doesn’t have bindfs installed and trigger an installation capability to download the appropriate package. The installation capability needs to be implemented separately for each type of guest, and at present only Debian is supported.
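A guest capability for a Red Hat-flavoured guest takes roughly this shape; the module and class names below are hypothetical, not the plugin's actual code:

```ruby
# Sketch of a CentOS/RHEL install capability for vagrant-bindfs.
module VagrantPlugins
  module Bindfs
    module Cap
      module RedHat
        class BindfsInstall
          def self.bindfs_install(machine)
            # Run on the guest during boot-up, before folders are bound.
            machine.communicate.sudo("yum install -y bindfs")
          end
        end
      end
    end
  end
end
```

The plugin then registers this class against the `redhat` guest type so Vagrant picks it when the detected guest is CentOS.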
I implemented a quick-and-dirty hack to extend vagrant-bindfs's installation capability for CentOS. Because bindfs has a number of dependencies, we can't just use rpm -i to install it automatically, so I decided to create a YUM repository (which is actually really easy!), letting yum resolve the dependencies for us.
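For the curious, turning a directory of RPMs into a YUM repository really is just a couple of commands plus a .repo file on the client (names and URLs below are illustrative):

```shell
# On the server: generate repository metadata over the RPM directory.
yum install createrepo
createrepo /var/www/html/repo/

# On each client, drop a repo definition into /etc/yum.repos.d/,
# e.g. example.repo:
#   [example]
#   name=Example repository
#   baseurl=http://repo.example.com/repo/
#   gpgcheck=1
#   enabled=1
yum install bindfs
```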
So, after many, many hours of trying, I got bindfs working, and finally managed to provision my testing VM!
If you’re trying to get bindfs working with Vagrant and CentOS 6, and don’t want to go through the same pain I went through, I give you…
Downloads and Links!
DigiACTive provides these downloads as-is, and takes no responsibility for any issues! Use at your own risk!
- vagrant-bindfs-0.2.4.digiactive2.gem (install with vagrant plugin install)
- our version of vagrant-bindfs on GitHub
- DigiACTive YUM Repository Configuration File (put in
- DigiACTive PGP Signing Key