A few weeks have passed since DockerCon EU 2015, and I want to share some ideas and thoughts I took home from there. The conference took place on November 16th and 17th this year in Barcelona and was attended by well over 1,000 participants from Europe and beyond. My estimate is based on the size of the large forum auditorium, which holds up to 3,140 people and was filled for the three plenary sessions.
First of all, some background, although Docker as a technology or hype or platform - however you conceive it - has in the meantime become well-known terminology, and a large number of articles have been published on it in the last two years. Docker was initially released in 2013, so it is still relatively new. My first experience with Docker was last year, in spring 2014, when I was asked to do a prototype implementation for the Haufe editorial system (internally known as HRS). Docker was new, and it was new to me, and I struggled with a lot of dos and don'ts when transforming an environment - even the small part chosen for the prototype - that had grown over years and is heavily data-centric.
So I was excited to visit DockerCon and see how Docker has continued to evolve into a very flexible, lightweight virtualization platform. The Docker universe has indeed made big strides under the hood, in the tooling around it, and through a growing number of third-party adopters improving many aspects of what Docker is and wants to be. Docker may and will revolutionize the way we build and deploy software in the future. And the future starts now, in the projects we drive forward.
Virtualization and Docker
The past waves of virtualization are now commodity: virtualization has reached IT operations and is no longer the domain of development, as it was years ago when we started with VMware for development and testing. It is the basis of today's deployments. Virtualization has many aspects and flavours, but one thing they have in common: building up a virtualization platform is rather heavyweight, and using it costs some performance compared to deploying software artifacts directly to physical machines - which was still done for exactly that reason, to get maximum throughput and optimal performance for the business. But with virtualization we gain flexibility: we can move a virtualized computing unit across the hardware below, especially from an older system to a newer one, without having to rebuild, repackage or redeploy anything. And there is already a big, well-known industry behind virtualization infrastructure and technology.
So what is new with Docker? First of all, Docker is very lightweight. It fits well into modern Unix environments, as it builds upon kernel features like cgroups and namespaces to separate the runtime environment of the application components from the base OS, drivers and hardware below. But Docker is not Linux-only: there is movement in the non-Linux part of our world as well, implementing Docker and Docker-related services. The important point is: Docker is not about VMs, it is about containers. Docker as a technology and platform promises to become a radical shift in perspective. But as I am no authority in this domain, I just refer to a recent article on why Docker is the biggest disruption in Linux virtualization.
There was one session that made a deep impression on me: "Cgroups, namespaces and beyond: what are containers made from" by Jérôme Petazzoni. Jérôme showed how Docker builds on and evolved from Linux features like cgroups, namespaces and LXC (Linux Containers). Whereas early Docker releases were based on LXC, Docker now uses its own abstraction of the underlying OS and Linux kernel called libcontainer. In an impressive demo he showed how containers can be built from out-of-the-box Linux features. The main message I took from this presentation: Docker introduces no overhead compared to direct deployment on a Linux system, because the mechanisms Docker uses are inherent to the system - they are in effect even when one uses Linux without Docker, as they sit there and are used everywhere. Docker really is lightweight, with nearly no runtime overhead, so it should be the natural way to deploy and run non-OS software components on Linux.
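You can see these building blocks on any Linux system without Docker installed: every process already runs inside a set of namespaces, and the kernel exposes them under /proc. A quick look:

```shell
# Namespaces are an out-of-the-box kernel feature: every process has them.
# List the namespaces the current shell lives in:
ls -l /proc/self/ns
# Each entry is a handle to a kernel namespace; a container is essentially
# a process group given fresh ones. Show the identity of our PID namespace:
readlink /proc/self/ns/pid
```

Two processes in the same container share these namespace identities; a container runtime simply creates new ones for the processes it starts.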
Docker and Security
When I started in 2014, one message from IT was: Docker is insecure and not ready for production use; we cannot support it. Indeed, there are a couple of security issues related to Docker, especially if the application to be deployed depends on NFS to share configuration data and to provide a central pool of storage accessed by a multi-node system (as HRS is, for reasons of scaling and load balancing). In a Docker container you are root, and this implies root access to underlying services such as NFS-mounted volumes. Unfortunately, this is still true today. You will find discussions of this in various groups on the internet, for example in "NFS Security in Docker" and many more.
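One common mitigation for the root-in-container problem is to drop privileges inside the image itself, so the containerized process never runs as root in the first place. A minimal sketch (base image, account and service names are made up for illustration):

```dockerfile
FROM debian:jessie
# Create an unprivileged account and switch to it before the service
# starts, so processes in the container do not act as root on
# NFS-mounted or other shared volumes.
RUN groupadd -r hrs && useradd -r -g hrs hrs
USER hrs
CMD ["/usr/bin/my-service"]
```

This helps for the application's own file access, but it is no substitute for real user namespace mapping, since anything that genuinely needs root in the container still maps to root on the host.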
But there are big advances coming with Docker that may make it into the next planned releases. One of them, which I yearn to have, is called user namespace mapping. It was announced at DockerCon in more than one presentation; I remember it from "Understanding Docker Security", presented by two members of the Docker team, Nathan McCauley and Diogo Monica. The reason it is not yet final is that it requires further improvements and testing, so it is currently only available in the experimental branch of Docker. The announcement can be read here: "User namespaces have arrived in Docker" (http://integratedcode.us/2015/10/13/user-namespaces-have-arrived-in-docker). The concept of user namespaces in Linux itself is described in the Linux man pages and is supported by recent Linux kernels. So it is something for the hopefully near future. See also the known restrictions section in the GitHub project 'Experimental: User namespace support'.
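In the experimental branch, enabling the remapping looks roughly like this - a sketch against an experimental daemon; the exact flag and defaults may still change before the feature goes final:

```shell
# /etc/subuid and /etc/subgid must grant the remap user a subordinate
# ID range on the host, e.g. a line like:
#   dockremap:100000:65536
# Start the experimental daemon with remapping enabled; root (UID 0)
# inside containers is then mapped to an unprivileged host UID from
# that range:
docker daemon --userns-remap=default
```

With that in place, a process that is root inside the container is just an ordinary unprivileged user from the host's point of view - exactly what the NFS scenario above is missing today.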
Another advance in container security is the Notary project and Docker Content Trust. It was briefly presented at DockerCon, and I would have to dive deeper into this topic to say more about it. Interesting news is also the support for hardware-based security. To promote it, every participant in one of the general sessions got a YubiKey 4 Nano device, and its use for two-factor authentication when signing code in a Docker repository was demonstrated in the session. The announcement can be found in "Yubico Launches YubiKey 4 and Touch-to-Sign Functionality at DockerCon Europe 2015". More technical information can be read in the blog article Docker Content Trust. See also the InnoQ article and the presentation from May 2015.
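Content trust is already usable today with a recent Docker client; it is switched on per environment. A minimal sketch (requires a running Docker 1.8+ daemon; the image name is just an example):

```shell
# With content trust enabled, the docker client will only pull, run or
# build from images whose tags carry valid signature data published via
# Notary; unsigned tags are rejected.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest   # rejected if the tag has no trust data
```

The push side works the same way: with the variable set, `docker push` signs the tag, which is where hardware keys like the YubiKey come into play.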
Stateless vs Persistency
One thing that struck me last year, when I worked on my Docker prototype implementation, was that Docker is perfect for stateless services. But trouble lies ahead: in real-world projects, many services tend to be stateful, with more or less heavy dependencies on configuration and data. This has to be handled with care when constructing Docker containers - and I indeed ran into problems with that in my experiments.
I had hoped to hear more on this topic, as I am probably not the only one who has run into issues while constructing Docker containers. Advances in Docker volumes were mentioned, indeed. Here I want to mention the session "Persistent, stateful services with docker clusters, namespaces and docker volume magic" by Michael Neale.
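Named volumes, which arrived with Docker 1.9 shortly before the conference, are one building block for such stateful services: the data gets a lifecycle of its own, independent of any single container. A sketch against a running daemon (volume, container and image names are invented):

```shell
# Create a named volume that exists independently of any container:
docker volume create --name hrs-data
# Mount it into a container; the data survives container removal and
# can later be attached to a replacement container:
docker run -d -v hrs-data:/var/lib/hrs --name hrs-db some-db-image
```

Volume drivers then extend this idea to storage backends beyond the local host, which is where the "volume magic" for clustered setups comes in.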
Use Cases and Messages
As a contrast to the large number of rather technology-focused sessions, there was the one held by Ian Miell - author of 'Docker in Practice' - on "Cultural Revolution - How to Manage the Change Docker Brings".
A use case presentation was "Continuous Integration with Jenkins, Docker and Compose", held by Sandro Cirulli, Platform Tech Lead at Oxford University Press (OUP). He presented the DevOps workflow used at OUP for building and deploying two websites providing resources for digitally under-represented languages. The infrastructure runs on Docker containers, with Jenkins used to rebuild the Docker images for the API, based on a Python Flask application, and Docker Compose to orchestrate the containers. The CI workflow and a demo of how continuous integration was achieved were given in the presentation. It is available on SlideShare, too.
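A setup like the one described can be sketched with a Compose file of roughly this shape - service names and paths are invented for illustration, and the actual OUP configuration will certainly differ:

```yaml
# docker-compose.yml - one service for the Flask-based API, one for the
# website consuming it; Jenkins rebuilds the images and brings the stack
# up with `docker-compose up -d` after each successful build.
api:
  build: ./api        # directory with the Dockerfile wrapping the Flask app
  ports:
    - "5000:5000"
web:
  build: ./web
  links:
    - api
```

The appeal of this workflow is that the same Compose file drives both the CI environment and the deployment target, so what Jenkins tests is what actually ships.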
One big message hovered over the whole conference: Docker is evolving ... as an open source project that rests not only on a core team but also heavily on many contributors who make it grow and become a success story. Worth mentioning here are the presentations "The Missing Piece: when Docker networking unleashing soft architecture 2.0" and "Intro to the Docker Project: Engine, Networking, Swarm, Distribution", the latter of which raised some expectations that, unfortunately, were not met by the speaker.
An overview of the sessions held at DockerCon 2015 in Barcelona can be found here, together with many links to the announcements made, slides for most sessions on SlideShare, and links to YouTube videos of the general sessions. Of these I recommend the day 2 closing general session, with a couple of demonstrations of what can be done with Docker. It is entertaining and amazing.