When it comes to our development environment, we all have our own preferred tools. But at some point we might go looking for something new and try to keep up with the latest trends in the industry.
When you work on multiple projects, you need to be able to switch your environment and your stack quickly – even down to minor package versions, and even the architecture itself, if you want to simulate the production environment as closely as possible.
Until containerization kicked in (Docker, Rocket, etc.), Vagrant was pretty much the best-practice way to go: either have the entire stack on one machine – or, if you had independent components using different package versions, run a separate machine for each component. But this was a real hit on the performance of your development workstation. Add 2-3 Vagrant VMs to a powerful IDE, several browser tabs and other services – and it adds up quickly.
And this is why containers are so nice. Containers use far less RAM, CPU and disk space than Vagrant VMs (considering major distros).
VMs offer a higher level of isolation from the host system, but they introduce additional layers between the guest and the hardware, while containers actually share some OS libraries with the host – and even its kernel. This is why there are security concerns about running containers in production.
It is also true that containers are a bit more difficult to manage than Vagrant boxes. For example Docker, the most popular containerization solution, was until very recently not available natively for Windows, and on OS X it ran only inside a helper VM.
Another big plus is that you build your environment out of tiny blocks. You can have an image for a database container with MariaDB, another for a Redis container, one for PHP 7 + PHP-FPM and one for PHP 5.5 + Apache 2. And you can easily swap them around, reuse them and rebuild your architecture just by editing a YAML file.
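As a rough sketch, such a YAML file (in Docker Compose format) might look like the following – the service names, image tags and settings here are illustrative assumptions, not taken from a real project:

```yaml
# Illustrative docker-compose.yml: names, tags and credentials are
# placeholders, not recommendations for a real setup.
version: '2'

services:
  db:
    image: mariadb:10.1        # database container
    environment:
      MYSQL_ROOT_PASSWORD: secret
  cache:
    image: redis:3.2           # Redis container
  app:
    image: php:7.0-fpm         # swap for php:5.5-apache to change stacks
    volumes:
      - ./src:/var/www/html    # mount the application code
    depends_on:
      - db
      - cache
```

Switching from PHP 7 + PHP-FPM to PHP 5.5 + Apache is then just a matter of changing the `image:` line of the `app` service.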
The most popular containerization platform is clearly Docker. It is in very active development and is currently available for Linux, OS X and even Windows.
It is so stable and well developed that some companies even use it in production. With its built-in tools you can easily orchestrate and quickly scale a complex architecture: you can have, say, 40 machines running Docker and instantly spawn different containers on any of them from pre-defined images.
Before Docker provided its own implementation for managing complex architectures, Google developed Kubernetes, an open-source platform for automating container management.
Even though many are still afraid of using containers in production, they have already proven themselves at massive scale – Google Search runs on them.
Google also hosts containers on its App Engine and Compute Engine services, but in those cases they are isolated in their own KVM virtual machines for a clearer boundary between tenants. Here is a presentation about containers at Google by Joe Beda.
The next article in this series will be a tutorial on creating a multi-container LEMP environment.