Thanks for the mention, Lorna. I should probably give some props to my colleague @mmoscosa, who helped out on the day.

> or you have new puppet config perhaps, you want to throw away your VM and start again

Worth pointing out here that the `vagrant provision` command will re-run your Puppet manifests (or Chef recipes) without the added wait of booting up a fresh VM (plus it won’t have to reinstall any packages that are already on the existing VM).
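For example, assuming a Vagrantfile with a Puppet (or Chef) provisioner already in place, the two workflows look something like this:

```sh
# Full rebuild: throw the VM away and boot a fresh one (slow)
vagrant destroy
vagrant up          # boots a new VM and runs the provisioner from scratch

# Quick iteration after tweaking your manifests/recipes: re-run
# provisioning against the VM that is already up (much faster)
vagrant provision
```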

Paul, I can see where you are coming from. Compared to setting up your typical LAMP server, Vagrant is not without its own set of potential issues. You have to add Vagrant, Ruby and VirtualBox to your stack (well, to each developer’s machine – ideally the same version of each), decide on an existing base box from somewhere or build your own (Google “veewee” for that), familiarise yourselves with whichever provisioning tools you are using (Puppet/Chef/shell scripts), and probably Google some obscure error messages generated by tools you don’t normally use.
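As a rough sketch of those moving parts (assuming the gem-based install of that era and the `lucid32` example box from the Vagrant docs):

```sh
# Install Vagrant itself (distributed as a Ruby gem at this point)
gem install vagrant

# Download an existing base box and generate a skeleton Vagrantfile
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box
vagrant init lucid32

# Boot the VM, running whatever provisioner the Vagrantfile declares
vagrant up
```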

However, I think the advantages (the time savings) lie on the infrastructure management side of things. To explain, let me sketch a few scenarios, if you don’t mind:

You set up a development box manually and lock it down enough that people don’t break things, or you create a VM instead (making snapshots possible). You run this VM centrally on bare metal (using a hypervisor), or developers run it locally in VirtualBox (using their mouse), or they run it in VirtualBox with Vagrant (using their command line). All of these options are viable, but hopefully each one saves you the hassle of having to set up the development environment from scratch at a later date.
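For the Vagrant variant, that day-to-day command-line loop amounts to:

```sh
vagrant up        # boot (and provision) the VM, headless by default
vagrant ssh       # get a shell inside it to do your work
vagrant halt      # shut it down at the end of the day
vagrant destroy   # throw it away entirely when the project is done
```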

Beyond that (and this blog post, and maybe even the point you were making), the rest of the power seems to lie in the provisioning scripts themselves, not the virtualisation. Instead of manually installing all the packages your development server needs (physical or virtual), you script the installation. These scripts act as documentation, can be added to version control, and can then be used to consistently provision not only your development server(s) but also any staging server(s) and production server(s) – again, physical or virtual.
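As a minimal sketch of what “scripting it” means (the manifest file name and package list here are just illustrative), a Puppet manifest replaces a pile of hand-typed `apt-get` commands with a declared end state:

```sh
# Instead of running this by hand on every box...
sudo apt-get install -y apache2 php5 php5-mysql

# ...you declare the end state once (hypothetical manifests/webserver.pp)
# and let Puppet converge every box – dev, staging, production – to it:
cat > manifests/webserver.pp <<'EOF'
package { ['apache2', 'php5', 'php5-mysql']:
  ensure => installed,
}

service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache2'],
}
EOF
```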

With more things hitting “the cloud” these days (back to virtual), this lets you scale applications horizontally by automating the infrastructure. Sure, you could just clone an existing VM (assuming virtual, still) since it already has the required packages, but herein lies another advantage – automating environment consistency:

Say you are using Puppet; you have taken some time to set up a server as a “Puppet master”; and now you want to move part of your application’s stack to a new version (say PHP 5.3 to 5.4, or simply a config value in php.ini). You change a line in your provisioning scripts and commit it to version control; within the next half-hour or so the various servers check in with the master (it is the agents that poll the master, every 30 minutes by default) and are told exactly how to carry out their automated upgrade. Developers pull the same changes from version control, and all servers (the real ones and the developers’ toy ones) end up on identical versions. (Thanks again for sharing the knowledge @JayTaph)
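A sketch of that workflow (the manifest file name and version values are hypothetical):

```sh
# One-line change in the manifest, e.g. switching the PHP package from
# ensure => '5.3.*' to ensure => '5.4.*' (or editing a php.ini template)
vim manifests/webserver.pp
git commit -am "Move the stack to PHP 5.4"
git push

# Each Puppet agent picks the change up on its next scheduled check-in,
# or you can trigger an immediate run on any given node:
puppet agent --test
```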

I guess the point here is that YMMV. The tools are new but they scale well: the bigger your deployment(s) and/or number of projects, the more you gain from learning them. And yes, the opposite can also be true, as you point out, especially in the early stages. The 80/20 split you mention – where developers spend their time playing with virtual machines and configuration management instead of, well, developing – can probably best be attributed to the “DevOps” term (which appeared on Wikipedia just two years ago).