Painless Docker Basic Edition: A Practical Guide To Master Docker And Its Ecosystem Based On Real World Examples
Table of Contents
1. Introduction
2. Preface
3. Chapter I - Introduction To Docker & Containers
4. Chapter II - Installation & Configuration
5. Chapter III - Basic Concepts
6. Chapter IV - Advanced Concepts
7. Chapter V - Working With Docker Images
8. Chapter VI - Working With Docker Containers
9. Chapter VII - Working With Docker Machine
10. Chapter VIII - Docker Networking
11. Chapter IX - Composing Services Using Compose
12. Chapter X - Docker Logging
13. Chapter XI - Docker Debugging And Troubleshooting
14. Chapter XII - Orchestration - Docker Swarm
15. Chapter XIII - Orchestration - Kubernetes
16. Chapter XIV - Orchestration - Rancher/Cattle
17. Chapter XV - Docker API
18. Chapter XVI - Docker Security
19. Chapter XVII - Docker, Containerd & Standalone Runtimes Architecture
20. Final Words
Painless Docker
About The Author
Aymen is a Cloud & Software Architect, entrepreneur, author, CEO of Eralabs (a DevOps & Cloud consulting
company) and founder of the DevOpsLinks community.
He has been using Docker since the word Docker was just a buzz. He has worked on web development, system
engineering, infrastructure & architecture for companies and startups. He is interested in Docker, Cloud Computing,
the DevOps philosophy, lean programming and their tools and methodologies.
You can find Aymen on Twitter.
Don't forget to join the DevOpsLinks & Shipped newsletters and the community job board JobsForDevOps. You can
also follow the book's Twitter account for future updates.
Wishing you a pleasant reading.
Disclaimer
Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or
other countries.
Preface
Docker is an amazing tool. Maybe you have tried or tested it, or maybe you have already started using it on some or all of
your production servers, but managing and optimizing it can become complex very quickly if you don't understand some
basic and advanced concepts, which I try to explain in this book.
The fact that the container ecosystem is changing rapidly is also a constraint on stability and a source of
confusion for many operations engineers and developers.
Most of the examples found in blog posts and tutorials are, in many cases, promoting Docker or
showing tiny examples; managing and orchestrating Docker is more complicated, especially with high-availability
constraints.
This containerization technology has been changing the way system engineering, development and release management
work for years, so it deserves your full attention: it will be one of the pillars of future IT technologies,
if it is not already.
At Google, everything runs in a container. According to The Register, two billion containers are launched every
week. Google has been running containers for years, since before containerization technologies were
democratized, and this is one of the secrets behind the performance and operational smoothness of the Google search
engine and all of its other services.
Some years ago I had doubts about using Docker: I played with it on testing machines and later decided to
use it in production. I have never regretted that choice. Some months ago I created a self-service platform for the
developers in my startup: an internal, scalable PaaS. That was awesome! I gained more than 14x on some production metrics and
reached my goal of a service with an SLA and an Apdex score of 99%.
Apdex (Application Performance Index) is an open standard that defines a standardized method to report,
benchmark, and track application performance.
SLA (Service Level Agreement) is a contract between a service provider (either internal or external) and the
end user that defines the level of service expected from the service provider.
It was not just the use of Docker, that would be too easy; it was a whole list of things to do, like moving to micro-services
and service-oriented architectures, changing the application and infrastructure architecture, continuous
integration, etc. But Docker was one of the most important items on my checklist, because it smoothed the whole
stack's operations and transformation, helped with continuous integration and the automation of routine tasks,
and was a good platform on which to build our own internal PaaS.
Some years ago, computers had a central processing unit and a main memory hosted in a single machine; then came
mainframes, which were inspired by that technology. Just after that, IT had a newborn called the virtual
machine. The revolution was that a hypervisor allows a single physical machine to
act as if it were many machines. Virtual machines mostly ran on on-premise servers, but since the emergence
of cloud technologies, VMs have moved to the cloud, so instead of having to invest heavily in data centers and
physical servers, one can run the same virtual machine on a provider's infrastructure and benefit from the
'pay-as-you-go' cloud advantage.
Over the years, requirements change and new problems appear; that is why solutions also tend to change and new
technologies emerge.
Nowadays, with the fast democratization of software development and cloud infrastructures, new problems appear
and containers are being widely adopted because they offer suitable solutions.
A good example of these problems is keeping the software environment identical between development and
production. Weird things happen when your development and testing environments are not the
same, and the same goes for production. In this particular case, you should provide and distribute this
environment to your R&D and QA teams.
But running a Node.js application that has 1 MB of dependencies plus the 20 MB Node.js runtime in an Ubuntu 14.04
VM can take up to 1.75 GB. It is better to distribute a small container image than a gigabyte of unused libraries.
A container holds only the OS libraries and the Node.js dependencies, so rather than starting with everything
included, you start with the minimum and then add dependencies, so that the same Node.js application can be 22
times smaller! When using optimized containers, you can run more applications per host.
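As a hedged sketch of how such a small image could be built (the file names package.json and app.js are assumptions, not from the book), a slim official base image keeps the final size in the tens of megabytes:
cat > Dockerfile <<'EOF'
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY . .
CMD ["node", "app.js"]
EOF
docker build -t my-node-app .
docker images my-node-app   # compare this size with a full VM image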
Containers are a problem solver, and one of the most sophisticated and widely adopted container solutions is Docker.
To Whom Is This Book Addressed ?
To developers, system administrators, QA engineers, operations engineers, architects and anyone required to work in
one of these environments in collaboration with the others, or simply in an environment that requires knowledge of
development, integration and system administration.
The most common idea is that developers think they are here to serve the machines by writing code and
applications, while system administrators think that machines should work for them simply by keeping them happy
(maintenance, optimization, etc.).
Moreover, within the same company there is generally some tension between the two teams:
System administrators accuse developers of writing code that consumes memory, does not meet system
security standards or is not adapted to the available machine configurations.
Developers accuse system administrators of being lazy, lacking innovation and being seriously uncool!
No more mutual accusations: with the evolution of software development, infrastructure and Agile engineering,
the concept of DevOps was born.
DevOps is more a philosophy and a culture than a job (even if some of the positions I have held were called
"DevOps"). This philosophy seeks closer collaboration and a combination of the different roles involved in
software development, such as the developer, the person responsible for operations and the person responsible for quality assurance.
Software must be produced at a frenetic pace, while at the same time waterfall development seems to have
reached its limits.
If you are a fan of service-oriented architectures, automation and the collaboration culture;
if you are a system engineer, a release manager or an IT administrator working on DevOps, SysOps
or WebOps;
if you are a developer seeking to join the new movement;
then this book is addressed to you. Docker is one of the most used tools in DevOps environments.
And if you are new to the Docker ecosystem, no matter what your Docker level is, through this book you will first
learn the basics of Docker (installation, configuration, the Docker CLI, etc.) and then move easily to more complicated
things like using Docker in your development, testing and live environments.
You will also see how to write your own Docker API wrapper and then master the Docker ecosystem, from
orchestration and continuous integration to configuration management and much more.
I believe in learning led by practical, real-world examples, and you will be guided throughout this book by tested
examples.
How To Properly Enjoy This Book
This book contains technical explanations, showing in each case an example of a command or a configuration to
follow. The explanation gives you a general idea, and the code that follows helps you
practice what you are reading. Preferably, you should always look at both parts for maximum understanding.
Like any new tool or programming language you have learned, it is normal to run into difficulties and confusion at the
beginning, perhaps even later. If you are not used to learning new technologies, you may even have only a modest
understanding while being at an advanced stage of this book. Do not worry, everyone has been through this kind of
situation at least once.
At the beginning you could do a diagonal reading, focusing on the basic concepts, then try
the first practical manipulations on your server or laptop, and occasionally come back to this book for
further reading on a specific subject or concept.
This book is not an encyclopedia but sets out the most important parts needed to learn, and even to master, Docker and its
fast-growing ecosystem. If you find words or concepts that you are not comfortable with, just take your time
and do your own online research.
Learning is sequential, so understanding one topic may require understanding another one; do not lose patience. You
will go through chapters with good examples of explained, practical use cases.
Through the examples, try to apply your acquired understanding, and, no, it will not hurt to go back to previous
chapters if you are unsure or in doubt.
Finally, try to be pragmatic and keep an open mind when you encounter a problem. The resolution begins by asking the
right questions.
Conventions Used In This Book
Basically, this is a technical book where you will find commands (Docker commands) and code (YAML, Python,
etc.).
Commands and code are written in a different format.
Example :
docker run hello-world
This book uses italic font for technical words such as libraries, modules, languages names. The goal
is to get your attention when you are reading and help you identify them.
You will find two icons; I have tried to keep things simple and chose not to use too many
symbols, so you will only find:
To highlight useful and important information.
To highlight a warning or a cautionary advice.
Some container/service/network identifiers are too long to be well formatted, so you will find some ids in
a shortened format such as 4..d or e..3: instead of writing the whole string 4lrmlrazrlm4213lkrnalknra125009hla94l1419u14N14d, I only use
the first and the last character, which gives 4..d.
How To Contribute And Support This Book ?
This work will always be a work in progress, but that does not mean it is not a complete learning resource: writing
a perfect book is impossible, contrary to iterative and continuous improvement.
I am an adopter of the lean philosophy, so the book will be continuously improved based on many criteria, the
most important one being your feedback.
I imagine that some readers do not know how "lean publishing" works, so I'll explain briefly:
say the book is 25% complete; if you pay for it at this stage, you pay the price of the 25% but get all of the
updates until it reaches 100%.
Another point: lean publishing, for me, is not about money. I refused several interesting offers from known publishers
because I want to be free from restrictions, DRM, etc.
If you have any suggestions or if you encounter a problem, it is better to use a tracking system for issues
and recommendations about this book; I recommend using this GitHub repository.
You can find me on Twitter, or you can use my blog's contact page if you would like to get in touch.
This book is not perfect, so you may find typos, punctuation errors or missing words.
On the other hand, every line of code, configuration and command used here was tested beforehand.
If you enjoyed reading Painless Docker and would like to support it, your testimonials will be more than welcome;
send me an email. If you need a development/testing server to experiment with Docker, I recommend Digital
Ocean; you can also show your support by using this link to sign up.
If you want to join more than 1000 developers, SRE engineers, sysadmins and IT experts, you can subscribe to the
DevOpsLinks community and you will be invited to join our newsletter and our team chat.
Chapter I - Introduction To Docker & Containers
o ^__^
o (oo)\_______
(__)\ )\/\
||----w |
|| ||
What Are Containers
"Containers are the new virtualization." : This is what pushed me to adopt containers, I have been always curious
about the virtualization techniques and types and containers are - for me - a technology that brings together two of
my favorite fields: system design and software engineering.
Container like Docker is the technology that allows you to isolate, build, package, ship and run an application.
A container makes it easy to move an application between development, testing, staging and production
environments.
Containers exist since years now, it is not a new revolutionary technology: The real value it gives is not the
technology itself but getting people to agree on something. In the other hand, it is experiencing a rebirth with easy-
to-manage containerization tools like Docker.
Containers Types
The popularity of Docker made some people think that it is the only container technology but there are many others.
Let's enumerate most of them.
System administrators will be more knowledgeable about the following technologies, but this book is not just for
Linux specialists, operations engineers and system architects; it is also addressed to developers and software
architects.
The following list is ordered from the oldest to the most recent technology.
Chroot Jail
Historically, the first container was the chroot.
chroot is a system call on *nix operating systems that changes the root directory of the current running process and its children.
A process running in a chroot jail does not know about the real filesystem root directory.
A program run in such an environment cannot access files and commands outside that directory
tree. This modified environment is called a chroot jail.
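As a hedged sketch (the jail directory and the choice of bash are assumptions, not from the book), building a tiny chroot jail by hand looks roughly like this:
mkdir -p /tmp/jail/bin
cp /bin/bash /tmp/jail/bin/
# copy the shared libraries bash needs (ldd lists them)
ldd /bin/bash | grep -o '/[^ ]*' | xargs -I{} cp --parents {} /tmp/jail/
sudo chroot /tmp/jail /bin/bash   # this shell now sees /tmp/jail as /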
FreeBSD Jails
The FreeBSD jail mechanism is an implementation of OS-level virtualization.
A FreeBSD-based operating system can be partitioned into several independent jails. While a chroot jail restricts
processes to a particular filesystem view, a FreeBSD jail is OS-level virtualization: a jail restricts the activities of
a process with respect to the rest of the system. Jailed processes are "sandboxed".
Linux-VServer
Linux-VServer is a virtual private server implementation using OS-level virtualization capabilities that were added to the Linux
kernel.
Linux-VServer technology has many advantages but its networking is based on isolation, not virtualization which
prevents each virtual server from creating its own internal routing policy.
Solaris Containers
Solaris Containers are an OS-level virtualization technology for x86 and SPARC systems. A Solaris Container is a
combination of system resource controls and the boundary separation provided by zones.
Zones are the equivalent of completely isolated virtual servers within a single OS instance. System administrators
place multiple sets of application services onto one system and place each into an isolated Solaris Container.
OpenVZ
Open Virtuozzo or OpenVZ is also an OS-level virtualization technology for Linux. OpenVZ allows system
administrators to run multiple isolated OS instances (containers), virtual private servers or virtual environments.
Process Containers
Engineers at Google (primarily Paul Menage and Rohit Seth) started the work on this feature in 2006 under the
name "Process Containers". It was then called cgroups (Control Groups). We will see more details about cgroups
later in this book.
LXC
Linux Containers or LXC is an OS-level virtualization technology that allows running multiple isolated Linux
systems (containers) on a control host using a single Linux kernel. LXC provides a virtual environment that has its
own process and network space. It relies on cgroups (Process Containers).
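For comparison with the Docker commands used throughout this book, here is a hedged sketch of the LXC userspace tools (the container name and release are assumptions, and the exact flags vary between LXC versions):
sudo lxc-create -t download -n demo -- -d ubuntu -r xenial -a amd64
sudo lxc-start -n demo
sudo lxc-attach -n demo    # get a shell inside the container
sudo lxc-stop -n demo && sudo lxc-destroy -n demo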
The difference between Docker and LXC is detailed later in this book.
Warden
Warden used LXC at its initial stage and was later replaced by a CloudFoundry implementation. It can provide
isolation on systems other than Linux, as long as they support isolation.
LMCTFY
Let Me Contain That For You or LMCTFY is the Open Source version of Google’s container stack, which provides
Linux application containers.
Google engineers have been collaborating with Docker over libcontainer and porting the core lmctfy concepts and
abstractions to libcontainer.
The project is no longer actively developed; in the future the core of lmctfy will be replaced by libcontainer.
Docker
This is what we are going to discover in this book.
RKT
CoreOS started building a container runtime called rkt (pronounced "rocket").
CoreOS is designing rkt following the original premise of containers that Docker introduced, but with more focus on:
Composable ecosystem
Security
A different image distribution philosophy
Openness
rkt is Pod-native which means that its basic unit of execution is a pod, linking together resources and user
applications in a self-contained environment.
Introduction To Docker
Docker is a containerization tool with a rich ecosystem, conceived to help you develop, deploy and run any
application, anywhere.
Unlike a traditional virtual machine, a Docker container shares the resources of the host machine without needing an
intermediary (a hypervisor), so you don't need to install an operating system. A container holds the application
and its dependencies, but works in an isolated and autonomous way.
In other words, instead of a hypervisor with a guest operating system on top, Docker uses its engine with containers
on top.
Most of us are used to virtual machines, so why are containers and Docker taking such an important place in today's
infrastructures?
This table briefly explains the differences and the advantages of using Docker over a VM:
                  VM                       Docker
Size              Small CoreOS = 1.2 GB    A Busybox container = 2.5 MB
Startup Time      Measured in minutes      An optimized Docker container will run in less than a second
Integration       Difficult                More open to be integrated with other tools
Dependency Hell   Frustration              Docker fixes this
Versioning        No                       Yes
Docker is a process isolation tool that used LXC (an operating-system-level virtualization method for running
multiple isolated Linux systems (containers) on a control host using a single Linux kernel) until version 0.9.
The basic difference between LXC and VMs is that with LXC there is only one instance of the Linux kernel running.
For curious readers, LXC was replaced by Docker's own libcontainer library, written in the Go programming
language.
So a Docker container isolates your application running on a host OS, and the latter can run many other containers. Using
Docker and its ecosystem, you can easily manage a cluster of containers; stop, start and pause multiple applications;
scale them; take snapshots of running containers; link multiple services running Docker; manage containers and
clusters using APIs on top of them; automate tasks; create application watchdogs; and many other things that are
complicated without containers.
Using this book, you will learn how to use all of these features and more.
What Is The Relation Between The Host OS And Docker
In a simple phrase: the host OS and the container share the same kernel.
If you are running Ubuntu as a host, the container is going to use the same kernel as the Ubuntu system, but you
can use CentOS or any other OS image inside your container. That is why the main difference between a virtual
machine and a Docker container is the absence of an intermediary layer between the kernel and the guest: Docker
works directly with your host's kernel.
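You can verify this yourself once Docker is installed; whatever image the container runs, it reports the host's kernel:
uname -r                         # kernel version seen on the host
docker run --rm centos uname -r  # the same version, reported from inside a CentOS image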
You are probably saying "If Docker's using the host Kernel, why should I install an OS within my container ?".
You are right; in some cases you can use Docker's scratch image, which is an explicitly empty image, especially useful for
building images from scratch. This is useful for containers that contain only a single binary and whatever it requires,
such as the "hello-world" container that we are going to use in the next section.
So Docker is a process isolation environment and not an OS isolation environment (like virtual machines); you can,
as said, use a container without an OS. But imagine you want to run an Nginx or an Apache container: you can run
the server's binary, but you will need to access the file system in order to configure nginx.conf, apache.conf,
httpd.conf, etc., or the available/enabled site configurations.
In this case, if you run a container without an OS, you will need to map folders from the container to the host, like
the /etc directory (since configuration files are under /etc).
You can actually do it, but you will lose the change-management feature that Docker containers offer: every
change within the container file system will also be mapped to the host file system, and even if you map them to
different folders, things can become complex with advanced production/development scenarios and environments.
Therefore, amongst other reasons, Docker containers running an OS are used for portability and change
management.
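To make the folder-mapping idea concrete, here is a hedged sketch (the local nginx.conf path is an assumption): the configuration stays on the host and is bind-mounted read-only into the official Nginx image.
docker run -d -p 80:80 \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx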
In the examples explained in this book, we often rely on official images that can be found on the official Docker Hub;
we will also create some custom images.
What Does Docker Add To LXC Tools ?
LXC owes its origin to the development of cgroups and namespaces in the Linux kernel. One of the most asked
questions on the net about Docker is the difference between Docker and VMs, but also the difference between
Docker and LXC.
This question was asked on Stack Overflow and I am sharing the response of Solomon Hykes (the creator of Docker),
published under the CC BY-SA 3.0 license:
Docker is not a replacement for LXC. LXC refers to capabilities of the Linux Kernel (specifically namespaces and
control groups) which allow sandboxing processes from one another, and controlling their resource allocations.
On top of this low-level foundation of Kernel features, Docker offers a high-level tool with several powerful
functionalities:
Portable deployment across machines
Docker defines a format for bundling an application and all its dependencies into a single object which can be
transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment
exposed to the application will be the same. LXC implements process sandboxing, which is an important pre-
requisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of
your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way
it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc.
Docker defines an abstraction for these machine-specific settings, so that the exact same Docker container can run -
unchanged - on many different machines, with many different configurations.
Application-centric
Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user
interface, design philosophy and documentation. By contrast, the LXC helper scripts focus on containers as
lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than
just that.
Automatic build
Docker includes a tool for developers to automatically assemble a container from their source code, with full control
over application dependencies, build tools, packaging etc. They are free to use make, Maven, Chef, Puppet,
SaltStack, Debian packages, RPMS, source tarballs, or any combination of the above, regardless of the
configuration of the machines.
Versioning
Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between
versions, committing new versions, rolling back etc. The history also includes how a container was assembled and
by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker
also implements incremental uploads and downloads, similar to git pull , so new versions of a container can be
transferred by only sending diffs.
Component re-use
Any container can be used as an "base image" to create more specialized components. This can be done manually or
as part of an automated build. For example you can prepare the ideal Python environment, and use it as a base for 10
different applications. Your ideal PostgreSQL setup can be re-used for all your future projects. And so on.
Sharing
Docker has access to a public registry (https://registry.hub.docker.com/) where thousands of people have uploaded
useful containers: anything from Redis, Couchdb, PostgreSQL to IRC bouncers to Rails app servers to Hadoop to
base images for various distros. The registry also includes an official "standard library" of useful containers
maintained by the Docker team. The registry itself is open-source, so anyone can deploy their own registry to store
and transfer private containers, for internal server deployments for example.
Tool ecosystem
Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge
number of tools integrating with Docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn),
multi-node orchestration (Maestro, Salt, Mesos, OpenStack Nova), management dashboards (Docker-UI, OpenStack
horizon, Shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc.
Docker is rapidly establishing itself as the standard for container-based tooling.
Docker Use Cases
Docker has many use cases and advantages:
Versioning & Fast Deployment
Docker registry (or Docker Hub) could be considered as a version control system for a given application. Rollbacks
and updates are easier this way.
Just like GitHub, Bitbucket or any other Git system, you can use tags to tag your image versions. If you
tag a container image for each application release, it will be easier to deploy and roll back to the n-1 release.
As you may already know, Git-like systems give you commit identifiers like 2.1-3-xxxxxx; these are not tags. You
can also use your Git system to tag your code, but for deployment you will need to download these tags or their
artifacts.
If your fellow developers are working on an application with thousands of package dependencies, like JavaScript
apps (using package.json), you may be facing a deployment where there are only very small files to download
or update, plus probably some new configurations. A single Docker image with your new code, already built and
tested, and your configurations will be easier and faster to ship and deploy.
Tagging is done with the docker tag command, and these tags are the basis for commits; Docker's versioning and tagging
system works this way too.
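For example, a release and a rollback with tags could look like this sketch (the repository name yourname/my-app and the version numbers are placeholders, not from the book):
docker tag my-app:latest yourname/my-app:2.1   # tag the tested image with the new release number
docker push yourname/my-app:2.1
# rolling back is just pulling and running the n-1 tag
docker pull yourname/my-app:2.0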
Distribution & Collaboration
If you would like to share images and containers, Docker offers this social feature so that anyone can contribute to a
public (or private) image.
Individuals and communities can collaborate and share images. Users can also vote for images. In Docker Hub, you
can find trusted (official) and community images.
Some images have a continuous build and security scan feature to keep them up-to-date.
Multi Tenancy & High Availability
Using the right tools from the ecosystem, it is easier to run many instances of the same application on the same
server with Docker than in the "mainstream" way.
Using a proxy, service discovery and a scheduling tool, you can start a second server (or more) and load-balance
your traffic between the cluster nodes where your containers are "living".
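A minimal sketch of several instances of the same image on one host (the published ports are arbitrary); a proxy or load balancer would then spread traffic across them:
docker run -d --name vote1 -p 8081:80 instavote/vote
docker run -d --name vote2 -p 8082:80 instavote/vote
docker run -d --name vote3 -p 8083:80 instavote/vote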
CI/CD
Docker is used in production systems, but it is also a tool to run the same application on a developer's
laptop/server. A Docker image may move from development to QA to production without being changed. If you would like
to be as close as possible to production, then Docker is a good solution.
Since it solves the "it works on my machine" problem, it is important to highlight this use case. Most problems in
software development and operations are due to differences between development and production environments.
If your R&D team uses the same image that the QA team tests against, and the same environment is pushed to
live servers, a great part of the (dev vs ops) problems will surely disappear.
There are many DevOps topologies in the software industry now, and the "container-centric" (or "container-based")
topology is one of them.
This topology makes Ops and Dev teams share more responsibilities, which is a DevOps approach
to blurring the boundaries between teams and encouraging co-creation.
Isolation & The Dependency Hell
Dockerizing an application is also isolating it into a separate environment.
Imagine you have to run two APIs written in two different languages, or in the same language but with
different versions.
You may need two incompatible versions of the same language, each API running on one of them, for example
Python 2 and Python 3.
If the two apps are dockerized, you don't need to install anything on your host machine except Docker; every version
will run in an isolated environment.
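For example, both Python versions can coexist on the same host without touching its package manager (the inline scripts are just placeholders):
docker run --rm python:2.7 python -c "import sys; print(sys.version)"
docker run --rm python:3 python -c "import sys; print(sys.version)"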
Since I started running Docker in production, most of my apps have been dockerized. I stopped using the host system
package manager at that time: every new application or middleware was installed inside a container.
Docker simplifies system package management and eliminates the "dependency hell" through its isolation feature.
Using The Ecosystem
You can use Docker with multiple external tools like configuration management tools, orchestration tools, file
storage technologies, filesystem types, logging software, monitoring tools, self-healing tools, etc.
On the other hand, even with all the benefits of Docker, it is not always the best solution to use, there are always
exceptions.
Chapter II - Installation & Configuration
o ^__^
o (oo)\_______
(__)\ )\/\
||----w |
|| ||
In the Painless Docker book, we are going to use a Docker version greater than or equal to 1.12.
I used to use previous stable versions like 1.11, but an important new feature, the Swarm Mode, was introduced
in version 1.12: the Swarm orchestration technology is directly integrated into Docker, whereas before it was an add-on.
I am a GNU/Linux user, but for Windows and Mac users, Docker unveiled with the same version the first full
desktop editions of the software for development on Mac and Windows machines.
There are many other interesting features, enhancements and simplifications in version 1.12 of Docker; you can
find the whole list in the Docker GitHub repository.
If you are completely new to Docker, you will not grasp all of the following new features yet, but you will be able to
understand them as you go along with this book.
The most important new features in Docker 1.12 concern the Swarm Mode (a quick command-line taste follows the list below):
Built-in Virtual-IP based internal and ingress load-balancing using IPVS
Routing Mesh using ingress overlay network
New swarm command to manage swarms with these subcommands:
init ,
join ,
join-token ,
leave ,
update
New service command to manage swarm-wide services with create , inspect , update , rm , ps
subcommands
New node command to manage nodes with accept , promote , demote , inspect , update , ps , ls and rm
subcommands
New stack and deploy commands to manage and deploy multi-service applications
Add support for local and global volume scopes (analogous to network scopes)
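As a quick taste of these new commands, here is a minimal sketch of the Swarm Mode workflow; the service name and image are assumptions, not from the book:
docker swarm init                                    # turn this engine into a swarm manager
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls                                    # list swarm-wide services
docker node ls                                       # list the nodes of the swarm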
Other important features were introduced in subsequent Docker versions, like the multi-stage build support in version
v17.05.0-ce.
When writing this book, I used the Ubuntu 16.04 server edition with a 64-bit architecture as my main operating system,
but you will also see how to install Docker on other OSs like Windows and macOS.
For users of other Linux distributions, things are not really different, except for the package manager (apt/aptitude),
which you should replace with your own; we are going to explain the installation for other distributions like CentOS.
Requirements & Compatibility
Docker itself does not need many resources, so a little RAM is enough to install and run the Docker Engine. But
running containers depends on what exactly you are running: if you are running MySQL or a busy
MongoDB inside a container, you will need more memory than for a small Node.js or Python application.
Docker requires a 64-bit kernel.
For developers using Windows or Mac, you have the choice between Docker Toolbox and native Docker. Native Docker
is certainly faster, but you still have the choice.
If you will use Docker Toolbox:
Mac users: Your Mac must be running OS X 10.8 "Mountain Lion" or newer to run Docker.
Windows users: Your machine must have a 64-bit operating system running Windows 7 or higher, and
virtualization must be enabled.
If you prefer Docker for Mac, as mentioned on the official Docker website:
Your Mac must be a 2010 or newer model, with Intel's hardware support for memory management unit (MMU)
virtualization, i.e. Extended Page Tables (EPT)
You must be running OS X 10.10.3 Yosemite or newer
You must have at least 4 GB of RAM
VirtualBox prior to version 4.3.30 must NOT be installed (it is incompatible with Docker for
Mac: uninstall the older version of VirtualBox and retry the install if you missed this).
And if you prefer Docker for Windows:
Your machine should have 64-bit Windows 10 Pro, Enterprise or Education (1511 November update, Build
10586 or later).
The Hyper-V package must be enabled; the Docker for Windows installer will enable it for you if needed.
Installing Docker On Linux
Docker is supported by all Linux distributions satisfying the requirements, but not in all of their versions; this is
due to the compatibility of Docker with older kernel versions.
Kernels older than 3.10 do not support Docker and can cause data loss or other bugs.
Check your Kernel by typing:
uname -r
Docker recommends performing an upgrade and a dist-upgrade and having the latest kernel version on your servers before
using it in production.
Ubuntu
For Ubuntu, only those versions are supported to run and manage containers:
Ubuntu Xenial 16.04 (LTS)
Ubuntu Wily 15.10
Ubuntu Trusty 14.04 (LTS)
Ubuntu Precise 12.04 (LTS)
Update your package manager, add the apt key & Docker list then type the update command.
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main"|tee -a /etc/apt/sources.list.d/docker.list
sudo apt-get update
Purge the old lxc-docker if you were using it before and install the new Docker Engine:
sudo apt-get purge lxc-docker
sudo apt-get install docker-engine
If you need to run Docker without root rights (with your actual user), run the following commands:
sudo groupadd docker
sudo usermod -aG docker $USER
If everything is OK, running this command will create a container that prints a Hello World message and then
exits without errors:
docker run hello-world
There is a good explanation of how Docker works in the output; if you have not noticed it, here it is:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
CentOS
Docker runs only on CentOS 7.x. The same installation may apply to other EL7 distributions (but they are not supported
by Docker).
Add the yum repo.
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Install Docker:
sudo yum install docker-engine
Start its service:
sudo service docker start
Set the daemon to run at system boot:
sudo chkconfig docker on
Test the Hello World image:
docker run hello-world
If you see an output similar to the following one, then your installation is fine:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free *Docker Hub* account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Now, if you would like to create a Docker group and add your current user to it in order to avoid running commands
with sudo privileges:
sudo groupadd docker
sudo usermod -aG docker $USER
Verify your work by running the hello-world container without sudo.
Debian
Only:
Debian testing stretch (64-bit)
Debian 8.0 Jessie (64-bit)
Debian 7.7 Wheezy (64-bit) (backports required)
are supported.
We are going to use the installation for Wheezy. In order to install Docker on Jessie (8.0), change the backports
entry and the source.list entry to Jessie.
First of all, enable backports:
sudo su
echo "deb http://http.debian.net/debian wheezy-backports main"|tee -a /etc/apt/sources.list.d/backports.list
apt-get update
Purge other Docker versions if you have already used them:
apt-get purge "lxc-docker*"
apt-get purge "docker.io*"
and update your package manager:
apt-get update
Install apt-transport-https and ca-certificates
apt-get install apt-transport-https ca-certificates
Add the GPG key.
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
Add the repository:
echo "deb https://apt.dockerproject.org/repo debian-wheezy main"|tee -a /etc/apt/sources.list.d/docker.list
apt-get update
And install Docker:
apt-get install docker-engine
Start the service
service docker start
Run the Hello World container in order to check if everything is good:
sudo docker run hello-world
You will see an output similar to the following if Docker was installed without problems:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free *Docker Hub* account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Now, in order to use your current user (not the root user) to manage and run Docker, add the docker group if it does not
already exist.
exit # Exit from sudo user
sudo groupadd docker
Add your preferred user to this group:
sudo gpasswd -a ${USER} docker
Restart the Docker daemon.
sudo service docker restart
Test the Hello World container to check that your current user has the rights to execute Docker commands.
Docker Toolbox
A few months ago, installing Docker for my developers using macOS and Windows was a pain. Now the new Docker
Toolbox has made things easier. Docker Toolbox is a quick and easy installer that will set up a full Docker
environment. The installation includes Docker, Machine, Compose, Kitematic and VirtualBox.
Docker Toolbox can be downloaded from Docker's website.
Using this tool you will be able to work with:
docker-machine commands
docker commands
docker-compose commands
The Docker GUI (Kitematic)
a shell preconfigured for a Docker command-line environment
and Oracle VirtualBox
The installation is quite easy:
If you would like a default installation, press "Next" to accept everything and then click on Install. If you are running
Windows, make sure you allow the installer to make the necessary changes.
Once the installation is finished, in the application folder, click on "Docker Quickstart Terminal".
Mac users, type the command below; it will:
Create a machine dev
Create a VirtualBox VM
Create an SSH key
Start the VirtualBox VM
Start the machine dev
Set the environment variables for the machine dev
Windows users can also follow these instructions, since many commands are common between the two
OSs.
bash '/Applications/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'
Running the following command will show you how to connect Docker to this machine:
docker-machine env dev
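The command above only prints the variables; to make the docker CLI actually talk to the dev machine, load them into your current shell:
eval "$(docker-machine env dev)"
docker ps    # now runs against the dev machine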
Now for testing, use the Hello World container:
docker run hello-world
You will probably see this message:
Unable to find image 'hello-world:latest' locally
This is not an error; Docker is saying that the Hello World image is not on your local disk and will
be pulled from Docker Hub.
latest: Pulling from library/hello-world
535020c3e8ad: Pull complete
af340544ed62: Pull complete
Digest: sha256:a68868bfe696c00866942e8f5ca39e3e31b79c1e50feaee4ce5e28df2f051d5c
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
If you are using Windows, it is actually not very different.
Click on the "Docker Quickstart Terminal" icon, if your operating system displays a prompt to allow VirtualBox,
choose yes and a terminal will show on your screen.
To test if Docker is working, type:
docker run hello-world
You will see the following message:
Hello from Docker.
You may also notice the explanation of how Docker is working on your local machine.
To generate this message ("Hello World" message), Docker took the following steps:
- The *Docker Engine CLI* client contacted the *Docker Engine daemon*.
- The *Docker Engine daemon* pulled the *hello-
world* image from the *Docker Hub*. (Assuming it was not already locally available.)
- The *Docker Engine daemon* created a new container from that image which runs the executable that produces the output you are currently reading.
- The *Docker Engine daemon* streamed that output to the *Docker Engine CLI* client, which sent it to your terminal.
After the installation, you can also start using the GUI or the command line; click on the Create button to create a
Hello World container, just to make sure everything is OK.
Docker Toolbox is a very good tool for every developer, but you may need more performance for larger projects in
your local development. Docker for Mac and Docker for Windows are native to each OS.
Docker For Mac
Use the following link to download the .dmg file and install the native Docker.
https://download.docker.com/mac/stable/Docker.dmg
To use native Docker, go back to the requirements section and make sure of your system configuration.
After the installation, drag and drop Docker.app into your Applications folder and start Docker from your applications
list.
You will see a whale icon in your status bar; when you click on it, you can see a list of choices, and you can also
click on About Docker to verify that you are running the right version.
If you prefer using the CLI, open your terminal and type:
docker --version
or
docker -v
If you installed Docker 1.12, you will see:
Docker version 1.12.0, build 8eab29e
If you go to the Docker.app preferences, you can find some configuration options; one of the most important is
drive sharing.
In many cases, containers running on your local machine use a file system mounted onto a folder of your
host. We do not need this for the moment, but remember later in this book that if you mount a container
path onto a local folder, you should come back to this step and share the concerned files, directories, users or volumes of your
local system with your containers.
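As a hedged illustration of the kind of command that requires this sharing setting (the local folder is an assumption), here is a container serving files from a host directory:
docker run -d -p 8080:80 \
  -v ~/projects/site:/usr/share/nginx/html:ro \
  nginx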
Docker For Windows
Use the following link to download the .msi file and install the native Docker.
https://download.docker.com/win/stable/InstallDocker.msi
The same applies to Windows: to use native Docker, go back to the requirements section and make sure of your
system configuration.
Double-click InstallDocker.msi and run the installer
Follow the installation wizard
Authorize Docker if you were asked for that by your system
Click Finish to start Docker
If everything was OK, you will get a popup with a success message.
Now open cmd.exe (or PowerShell) and type
docker --version
or
docker version
Containers running in your local development environment may need, in many cases (which we will see in this
book), to access your file system, folders, files or drives. This is the case when you mount a folder of the
Docker container onto your host file system. We will see many examples of this type, so remember to come
back here and make the right configuration if mounting a directory or a file is needed later in this book.
Docker Experimental Features
Even if Docker has a stable version that can be safely used in production environments, many features are still in
development, and you may need to plan for your future projects using Docker; in this case, you will need to test
some of these features.
I have been testing the Docker Swarm Mode since it was experimental, and I needed to evaluate this feature in order to
prepare the adequate architecture and servers, and adapt the development and integration workflows to the coming changes.
You may find some instability and bugs using the experimental installation packages, which is normal.
Docker Experimental Features For Mac And Windows
For native Docker running on both systems, to evaluate the experimental features, you need to download the beta
channel installation packages.
For Mac:
https://download.docker.com/mac/beta/Docker.dmg
For Windows
https://download.docker.com/win/beta/InstallDocker.msi
Docker Experimental Features For Linux
Running the following command will install the experimental version of Docker; you should have curl installed:
curl -sSL https://experimental.docker.com/ | sh
Generally, curl | sh is not a good security practice, even if the transport is over HTTPS: the content can be
modified on the server.
You can download the script, read it and execute it:
wget https://experimental.docker.com/
Or you can get one of the following binaries, depending on your system architecture:
https://experimental.docker.com/builds/Linux/i386/docker-latest.tgz
https://experimental.docker.com/builds/Linux/x86_64/docker-latest.tgz
For the remainder of the installation :
tar -xvzf docker-latest.tgz
mv docker/* /usr/bin/
sudo dockerd &
Removing Docker
Let's take Ubuntu as an example.
Purge the Docker Engine:
sudo apt-get purge docker-engine
sudo apt-get autoremove --purge docker-engine
sudo apt-get autoclean
This is enough in most cases, but to remove all of Docker's files, follow the next steps.
If you wish to remove all the images, containers and volumes:
sudo rm -rf /var/lib/docker
Then remove docker from apparmor.d:
sudo rm /etc/apparmor.d/docker
Then remove docker group:
sudo groupdel docker
You have now completely removed Docker.
Docker Hub
Docker Hub is a cloud registry service for Docker.
Docker allows you to package artifacts/code and configurations into a single image. These images can be reused by
you, your colleagues or even your customers. If you would like to share your code, you will generally use a Git
repository like GitHub or Bitbucket.
You can also run your own GitLab, which allows you to have your own private on-premise Git repositories.
Things are very similar with Docker: you can use a cloud-based solution to share your images, like Docker Hub, or
use your own hub (a private Docker Registry).
Docker Hub is a public Docker repository, but if you want to use a cloud-based solution while keeping your
images private, the paid version of Docker Hub allows you to have private repositories.
Docker Hub allows you to:
Access community, official and private image libraries
Have public or paid private image repositories to which you can push your images and from which you can
pull them to your servers
Create and build new images, with different tags, when the source code inside your container changes
Create and configure webhooks and trigger actions after a successful push to a repository
Create workgroups and manage access to your private images
Integrate with GitHub and Bitbucket
Basically Docker Hub could be a component of your dev-test pipeline automation.
In order to use Docker Hub, go to the following link and create an account:
https://hub.docker.com/
If you would like to test if your account is enabled, type
docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head
over to https://hub.docker.com to create one.
Now, go to the Docker Hub website and create a public repository. We will see how to send a running container as an
image to Docker Hub; for this we are going to use a sample app generally used by Docker for demos,
called vote (you can also find it in Docker's official GitHub repository).
The vote application is a Python webapp that lets you vote between two options; it uses a Redis queue to collect new
votes, a .NET worker which consumes votes and stores them in a Postgres database backed by a Docker volume, and a
Node.js webapp which shows the results of the voting in real time.
I assume that you have created a working account on Docker Hub, typed the login command and entered the right
password.
If you are just starting with Docker, you may not understand all of the next commands, but the goal of this section is
just to demonstrate how a Docker Registry works (in this case, the used Docker Registry is a cloud-based one built
by Docker and, as said, it is called Docker Hub).
When you type the following command, Docker will check whether it has the image locally; otherwise it will look for it
on Docker Hub:
docker run -d -it -p 80:80 instavote/vote
You can find the image here:
https://hub.docker.com/r/instavote/vote/
Now type this command to show the running containers. For Docker, this is the equivalent of the ps command on Linux
systems:
docker ps
You can see here that the nauseous_albattani container (a name given automatically by Docker) is running the vote
application pulled from the instavote/vote repository.
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS                NAMES
136422f45b02   instavote/vote   "gunicorn app:app -b "   8 minutes ago   Up 8 minutes   0.0.0.0:80->80/tcp   nauseous_albattani
The container id is : 136422f45b02 and the application is reachable via http://0.0.0.0:80
Just like with Git, we are going to commit and push the image to our Docker Hub repository. There is no need to create a
new repository first; the commit/push can be used in a lazy mode and will create it for you.
Commit:
docker commit -m "Painless Docker first commit" -a "Aymen El Amri" 136422f45b02 eon01/painlessdocker.com_voteapp:v1
sha256:bf2a7905742d85cca806eefa8618a6f09a00c3802b6f918cb965b22a94e7578a
And push:
docker push eon01/painlessdocker.com_voteapp:v1
The push refers to a repository [docker.io/eon01/painlessdocker.com_voteapp]
1f31ef805ed1: Mounted from eon01/painless_docker_vote_app
3c58cbbfa0a8: Mounted from eon01/painless_docker_vote_app
02e23fb0be8d: Mounted from eon01/painless_docker_vote_app
f485a8fdd8bd: Mounted from eon01/painless_docker_vote_app
1f1dc3de0e7d: Mounted from eon01/painless_docker_vote_app
797c28e44049: Mounted from eon01/painless_docker_vote_app
77f08abee8bf: Mounted from eon01/painless_docker_vote_app
v1: digest: sha256:658750e57d51df53b24bf0f5a7bc6d52e3b03ce710a312362b99b530442a089f size: 1781
Replace eon01 with your username.
Notice that a new repository is automatically added to my Docker Hub dashboard.
Now you can pull the same image with the latest tag:
docker pull eon01/painlessdocker.com_voteapp
Or with a specific tag:
docker pull eon01/painlessdocker.com_voteapp:v1
In our case, v1 is the latest version, so the two commands above will pull the same image to your local machine.
Docker Registry
Docker Registry is a scalable server-side application conceived to be an on-premise Docker Hub. Just like Docker
Hub, it helps you push, pull and distribute your images.
The software powering Docker Registry is open source under the Apache license. A Docker registry can also be run as a
hosted solution; Docker's commercial offering is called Docker Trusted Registry.
Docker Registry can itself be run using Docker. A Docker image for the Docker Registry is available here:
https://hub.docker.com/_/registry/
It is easy to create a registry: just pull and run the image like this:
docker run -d -p 5000:5000 --name registry registry:2.5.0
Let's test it: we will pull an image from Docker Hub, then tag and push it to our own registry.
docker pull ubuntu
docker tag ubuntu localhost:5000/myfirstimage
docker push localhost:5000/myfirstimage
docker pull localhost:5000/myfirstimage
Deploying Docker Registry On Amazon Web Services
You need to have:
An Amazon Web Services account
The required IAM privileges
Your AWS CLI configured
Type:
aws configure
Type your credentials, choose your region and your preferred output format:
AWS Access Key ID [None]: ******************
AWS Secret Access Key [None]: ***********************
Default region name [None]: eu-west-1
Default output format [None]: json
Create an EBS volume (Elastic Block Store), specifying the region and the availability zone you are using:
aws ec2 create-volume --size 80 --region eu-west-1 --availability-zone eu-west-1a --volume-type standard
You should get an output similar to the following:
{
"AvailabilityZone": "eu-west-1a",
"Encrypted": false,
"VolumeType": "standard",
"VolumeId": "vol-xxxxxx",
"State": "creating",
"SnapshotId": "",
"CreateTime": "2016-10-14T15:29:35.400Z",
"Size": 80
}
Keep the output, because we are going to use the volume id later.
We chose the 'standard' volume type; pick the type that suits your workload, which is mainly a question of IOPS.
The following table could help:
Type       IOPS                       Use Case
Magnetic   Up to 100 IOPS/volume      Little access
GP         Up to 3000 IOPS/volume     Larger access needs, suitable for the majority of classic cases
PIOPS      Up to 4000 IOPS/volume     High speed access
Start an EC2 instance:
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.medium \
  --key-name MyKeyPair --security-group-ids sg-xxxxxxxx --subnet-id subnet-xxxxxxxx
Replace the image id, instance type, key name, security group id and subnet id with your own values.
In the output, look for the instance id; we are going to use it next.
{
"OwnerId": "xxxxxxxx",
"ReservationId": "r-xxxxxxx",
"Groups": [
{
[..]
}
],
"Instances": [
{
"InstanceId": "i-5203422c",
[..]
}
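As a side note, the AWS CLI can extract the instance id for you through its --query option (a JMESPath expression); a minimal sketch reusing the same placeholder values as above:
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.medium \
  --key-name MyKeyPair --security-group-ids sg-xxxxxxxx --subnet-id subnet-xxxxxxxx \
  --query 'Instances[0].InstanceId' --output text
# Prints only the instance id, e.g. i-xxxxxxxx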
Attach the volume to the instance:
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
Now that the volume is attached, list the block devices on the EC2 instance (for example with lsblk) and you will see
the newly attached volume; note that df -kh will only show it once it is formatted and mounted.
In this example, let's say the attached EBS volume has the device name /dev/xvdf . Create a file system on the volume
and a mount point for it:
sudo mkfs -t ext4 /dev/xvdf
mkdir /data
Make sure you get the right device name for the new attached volume.
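To double-check which device you are working with, you can inspect the block devices; a quick sanity check (the device name /dev/xvdf is the one assumed above):
lsblk                    # lists block devices; the new volume should appear (e.g. xvdf)
sudo file -s /dev/xvdf   # after mkfs this should report an ext4 filesystem; plain "data" means the device is still unformatted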
Now open the fstab configuration file:
/etc/fstab
and add:
/dev/xvdf /data ext4 defaults 1 1
Now mount the volume by typing :
mount -a
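You can then verify that the volume is mounted where the registry will store its data:
df -h /data    # should show /dev/xvdf mounted on /data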
You should have Docker installed in order to run a private Docker registry.
The next step is running the registry:
docker run -d -p 80:5000 --restart=always -v /data:/var/lib/registry registry:2
If you type docker ps , you should see the registry running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
bb6201f63cc5 registry:2 "/entrypoint.sh /etc/" 21 hours Up About 1h 0.0.0.0:80->5000/tcp
Now create an ELB, but first create its Security Group (expose port 443).
Create the ELB using the AWS CLI or the AWS Console and forward its traffic to port 80 on the EC2 instance. Note the
ELB DNS name, since we are going to use it to push and pull images.
Port 443 is needed because Docker clients expect to reach a registry over HTTPS; that's why we used an ELB, which
provides integrated certificate management and SSL termination.
It also helps in building a highly available system.
Now let's test it by pushing an image:
docker pull hello-world
docker tag hello-world load_balancer_dns/hello-world:1
docker push load_balancer_dns/hello-world:1
docker pull load_balancer_dns/hello-world:1
If you don't want to use ELB, you should bring your own certificates and run:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-v /data:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
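If you don't have certificates yet, you could generate a self-signed pair for testing; a minimal sketch using openssl, where the domain registry.example.com is an assumption and clients will need to trust this certificate (or add the registry to their insecure-registries list):
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key -x509 -days 365 \
  -out certs/domain.crt \
  -subj "/CN=registry.example.com"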
Another option is to use AWS S3 as the storage backend:
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=my_bucket \
-e STORAGE_PATH=/data \
-e AWS_REGION="eu-west-1" \
-e AWS_KEY=*********** \
-e AWS_SECRET=*********** \
-e SEARCH_BACKEND=sqlalchemy \
-p 80:5000 \
registry
In this case, do not forget to attach an IAM policy that allows the Docker Registry to read and write your images in
the S3 bucket.
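As an illustration, such a policy could look like the following sketch; the bucket name my_bucket matches the example above, the user name registry-user is an assumption, and the exact set of actions you need may differ:
cat > registry-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my_bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my_bucket/*"
    }
  ]
}
EOF
# Attach the policy to the IAM user (or role) whose keys the registry uses:
aws iam put-user-policy --user-name registry-user --policy-name registry-s3 --policy-document file://registry-s3-policy.json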
Deploying Docker Registry On Azure
Using Azure, we are going to deploy the same Docker Registry using Azure Storage service.
You will also need a configured Azure CLI.
We need to create a storage account using the Azure CLI:
azure storage account create -l "North Europe" <storage_account_name>
Replace <storage_account_name> with your own value.
Now we need to list the storage account keys to use one of them later:
azure storage account keys list <storage_account_name>
Run:
docker run -d -p 80:5000 \
-e REGISTRY_STORAGE=azure \
-e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="<storage_account_name>" \
-e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="<storage_key>" \
-e REGISTRY_STORAGE_AZURE_CONTAINER="registry" \
--name=registry \
registry:2
If port 80 is closed on your Azure virtual machine, open it:
azure vm endpoint create <machine-name> 80 80
Configuring security for the Docker Registry is not covered in this part.
Docker Store
Docker Store is a Docker Inc. product designed to provide a scalable self-service system for ISVs to publish
and distribute trusted, enterprise-ready content.
It provides a publishing process that includes:
security scanning
component inventory
open-source license usage
image construction guidelines
In other words, it is an official marketplace, with workflows to create and distribute content, where you can find free
and commercial images.
Chapter III - Basic Concepts
Hello World, Hello Docker
Through the following chapters, you will learn how to manipulate Docker containers (create, run, delete, update,
scale ..etc).
But to ensure a better understanding of some concepts, we need to learn at least how to run a Hello World container.
We have already seen this, but as a reminder, and to make sure that your installation works, we will run the Hello
World container again:
docker run --rm -it hello-world
You can see a brief explanation of how Docker has created the container on the command output:
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
This gives a good explanation of how Docker created a container and it will be a waste of energy to re-explain this
in my own words, albeit Docker uses more energy to do the same task :-)
General Information About Docker
Docker is a very active project and its code changes frequently. To understand many concepts about this
technology, I have been following the project on GitHub and browsing its issues when I had problems, especially
when I found bugs.
Therefore, it is important to know the version you are using in your production servers. docker -v will give you the
version and the build number you are using.
Example:
Docker version 1.12.2, build bb80604
You can get other general information about the server/client versions, the architecture, the Go version, etc. with the
docker version command. Example:
Client:
Version: 1.12.2
API version: 1.24
Go version: go1.6.3
Git commit: bb80604
Built: Tue Oct 11 18:19:35 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.2
API version: 1.24
Go version: go1.6.3
Git commit: bb80604
Built: Tue Oct 11 18:19:35 2016
OS/Arch: linux/amd64
Docker Help
You can view the Docker help using the command line:
docker --help
You will get a list of options like:
--config=~/.docker Location of client config files
-D, --debug Enable debug mode
-H, --host=[] Daemon socket(s) to connect to
-h, --help Print usage
-l, --log-level=info Set the logging level
--tls Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
commands like:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes on a container's filesystem
If you need more help about a specific command like cp or rmi, you need to type:
docker cp --help
docker rmi --help
In some cases, you can get a third level of help for a subcommand, like:
docker swarm init --help
Docker Events
To start this section, let's run a MariaDB container and list the Docker events. For this manipulation, you can use
terminator to split your screen in two and watch the events output while typing the following
command:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /data/db:/var/lib/mysql -d mariadb
Unable to find image 'mariadb:latest' locally
latest: Pulling from library/mariadb
386a066cd84a: Already exists
827c8d62b332: Pull complete
de135f87677c: Pull complete
05822f26ca6e: Pull complete
ad65f56a251e: Pull complete
d71752ae05f3: Pull complete
87cb39e409d0: Pull complete
8e300615ba09: Pull complete
411bb8b40c58: Pull complete
f38e00663fa6: Pull complete
fb7471e9a58d: Pull complete
2d1b7d9d1b69: Pull complete
Digest: sha256:6..c
Status: Downloaded newer image for mariadb:latest
0..7
The events launched by the last command are:
docker events
2016-12-09T00:54:51.827303500+01:00 image pull mariadb:latest (name=mariadb)
2016-12-09T00:54:52.000498892+01:00 container create 0..7 (image=mariadb, name=mariadb)
2016-12-09T00:54:52.137792817+01:00 network connect 8..d (container=0..7, name=bridge, type=bridge)
2016-12-09T00:54:52.550648364+01:00 container start 0..7 (image=mariadb, name=mariadb)
Explicitly:
image pull : the image is pulled from the public repository by its identifier mariadb.
container create : the container is created from the pulled image and given the Docker identifier
0..7 . At this stage the container is not running yet.
network connect : the container is attached to a network called bridge with the identifier 8..d .
At this stage the container is still not running.
container start : the container is now running on your host system with the same identifier
0..7
This is the full list of Docker events (44) that can be reported by containers, images, plugins, volumes, networks and
the Docker daemon:
Container events:
- attach
- commit
- copy
- create
- destroy
- detach
- die
- exec_create
- exec_detach
- exec_start
- export
- health_status
- kill
- oom
- pause
- rename
- resize
- restart
- start
- stop
- top
- unpause
- update
Image events:
- delete
- import
- load
- pull
- push
- save
- tag
- untag
Plugin events:
- install
- enable
- disable
- remove
Volume events:
- create
- mount
- unmount
- destroy
Network events:
- create
- connect
- disconnect
- destroy
Daemon events:
- reload
Using Docker API To List Events
There is a chapter about the Docker API that will help you discover it in more detail, but there is no harm in starting
to use it now.
You can, in fact, use it to see Docker events while manipulating containers, images, networks, etc.
Install curl and type curl --unix-socket /var/run/docker.sock http:/events , then open another terminal window (or a new
tab) and type docker pull mariadb , then docker rmi -f mariadb to remove the pulled image, then docker pull mariadb to pull it
again.
These are the 3 events reported by the 3 commands typed above:
Action = Pulling an image that already exists on the host:
{
"status":"pull",
"id":"mariadb:latest",
"Type":"image",
"Action":"pull",
"Actor": {
"ID":"mariadb:latest",
"Attributes": {
"name":"mariadb"
}
},
"time":1481381043,
"timeNano":1481381043157632493
}
Action = Removing it:
{
"status":"untag",
"id":"sha256:0..c",
"Type":"image",
"Action":"untag",
"Actor" {
"ID":"sha256:0..c",
"Attributes" {
"name":"sha256:0..c"
}
},
"time":1481381060,
"timeNano":1481381060026443422
}
Action = Pulling the same image again from the remote repository:
{
"status":"pull",
"id":"mariadb:latest",
"Type":"image",
"Action":"pull",
"Actor":{
"ID":"mariadb:latest",
"Attributes":{
"name":"mariadb"
}
},
"time":1481381069,
"timeNano":1481381069629194420
}
Each event has a status (pull, untag for removing the image, etc.), a resource identifier (id), a Type (image,
container, network, ...) and other information like:
Actor,
time
and timeNano
You can use DoMonit (a Docker API wrapper that I created for this book) to discover the Docker API.
The Docker API is detailed in a separate chapter.
To test DoMonit, you can create a folder and install a Python virtual environment:
virtualenv DoMonit/
New python executable in /home/eon01/DoMonit/bin/python
Installing setuptools, pip, wheel...done.
cd DoMonit
Clone the repository:
git clone https://github.com/eon01/DoMonit.git
Cloning into 'DoMonit'...
remote: Counting objects: 237, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 237 (delta 9), reused 0 (delta 0), pack-reused 218
Receiving objects: 100% (237/237), 106.14 KiB | 113.00 KiB/s, done.
Resolving deltas: 100% (128/128), done.
Checking connectivity... done.
List the files inside the created folder:
ls -l
total 24
drwxr-xr-x 2 eon01 sudo 4096 Dec 11 14:46 bin
drwxr-xr-x 6 eon01 sudo 4096 Dec 11 14:46 DoMonit
drwxr-xr-x 2 eon01 sudo 4096 Dec 11 14:45 include
drwxr-xr-x 3 eon01 sudo 4096 Dec 11 14:45 lib
drwxr-xr-x 2 eon01 sudo 4096 Dec 11 14:45 local
-rw-r--r-- 1 eon01 sudo 60 Dec 11 14:46 pip-selfcheck.json
Activate the execution environment:
. bin/activate
Install the requirements:
pip install -r DoMonit/requirements.txt
Collecting pathlib==1.0.1 (from -r DoMonit/requirements.txt (line 1))
/home/eon01/DoMonit/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318:
SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform.
This may cause the server to present an incorrect TLS certificate, which can cause validation failures.
You can upgrade to a newer version of Python to solve this.
For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/home/eon01/DoMonit/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122:
InsecurePlatformWarning:
A true SSLContext object is not available.
This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail.
You can upgrade to a newer version of Python to solve this.
For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Collecting PyYAML==3.11 (from -r DoMonit/requirements.txt (line 2))
Collecting requests==2.10.0 (from -r DoMonit/requirements.txt (line 3))
Using cached requests-2.10.0-py2.py3-none-any.whl
Collecting requests-unixsocket==0.1.5 (from -r DoMonit/requirements.txt (line 4))
Collecting simplejson==3.8.2 (from -r DoMonit/requirements.txt (line 5))
Collecting urllib3==1.16 (from -r DoMonit/requirements.txt (line 6))
Using cached urllib3-1.16-py2.py3-none-any.whl
Installing collected packages: pathlib, PyYAML, requests, urllib3, requests-unixsocket, simplejson
Successfully installed PyYAML-3.11 pathlib-1.0.1 requests-2.10.0 requests-unixsocket-0.1.5 simplejson-3.8.2 urllib3-1.16
and start streaming the events using:
python DoMonit/events_test.py
On another terminal, create a network and notice the output of the executed program:
docker network create test1
6d517f8251736446874e14bafeb08c76cdc6d8219537b1ed1af64984bb3590b2
The output should be something like:
{
"Type": "network",
"Action": "create",
"Actor": {
"ID": "6d517f8251736446874e14bafeb08c76cdc6d8219537b1ed1af64984bb3590b2",
"Attributes": {
"name": "test1",
"type": "bridge"
}
},
"time": 1481464270,
"timeNano": 1481464270716590446
}
This program called the Docker Events API:
curl --unix-socket /var/run/docker.sock http:/events
Let's get back to the docker events command, since we haven't yet seen the options used to filter the stream of events.
docker events supports the following filters (in most cases using the name or the id of the resource):
container=<name/id>
event=<action>
image=<tag/id>
plugin=<name/id>
label=<key> / label=<key>=<value>
type=<container/image/volume/network/daemon>
volume=<name/id>
network=<name/id>
daemon=<name/id>
Examples:
docker events --filter 'event=start'
docker events --filter 'image=mariadb'
docker events --filter 'container=781567cd25632'
docker events --filter 'container=781567cd25632' --filter 'event=stop'
docker events --filter 'type=volume'
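For reference, the same filters can be passed to the Events API as a URL-encoded JSON filters parameter; a minimal sketch using curl against the local Docker socket (the filter values are just examples):
curl -G --unix-socket /var/run/docker.sock http:/events \
  --data-urlencode 'filters={"event":["stop"],"container":["mariadb"]}'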
In the next sections we are going to see what each Docker component (volumes, containers ..etc) reports as an event.
Monitoring A Container Using Docker Events
Note that we are still using the MariaDB container, which can be created like this:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /data/db:/var/lib/mysql -d mariadb
If you haven't created the container yet, do it now.
Based on the examples of Docker events given above, we are going to create a small monitoring script that sends us an
email when the MariaDB container reports a stop event.
We are going to watch a file in /tmp and send an email as soon as a new line is read:
tail -f /tmp/events | while read line; do echo "$line" | mail -s subject "amri.aymen@gmail.com"; done
Yes, this is my real email address if you would like to stay in touch !
Using the docker events command, we can redirect all matching events to a file (in this case /tmp/events ):
docker events --filter 'event=stop' --filter 'container=mariadb' > /tmp/events
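If you prefer, the two steps can be combined into a single pipeline without the intermediate file; a minimal sketch, where the subject and email address are placeholders to replace with your own:
docker events --filter 'event=stop' --filter 'container=mariadb' \
  | while read line; do echo "$line" | mail -s "mariadb stopped" "you@example.com"; done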
If you have already created the container, you can test this simple monitoring system using :
docker stop mariadb
docker start mariadb
This is what you should get by email:
{
"Type":"network",
"Action":"connect",
"Actor":{
"ID":"7b64910a4b7b6b2b90245db1c7a1d8330fb1d0db9237da8c64378c0ad7745e9e",
"Attributes":{
"container":"9..d",
"name":"bridge",
"type":"bridge"
}
},
"time":1481466943,
"timeNano":1481466943153467120
}{
"Type":"network",
"Action":"connect",
"Actor":{
"ID":"7b64910a4b7b6b2b90245db1c7a1d8330fb1d0db9237da8c64378c0ad7745e9e",
"Attributes":{
"container":"9..d",
"name":"bridge",
"type":"bridge"
}
},
"time":1481466943,
"timeNano":1481466943153467120
}{
"status":"start",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"start",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466943,
"timeNano":1481466943266967807
}{
"s TAG IMAGE ID CREATED SIZE
hello-world tatus":"start",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"start",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466943,
"timeNano":1481466943266967807
}{
"status":"kill",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"kill",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb",
"signal":"15"
}
},
"time":1481466945,
"timeNano":1481466945523301735
}{
"status":"kill",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"kill",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb",
"signal":"15"
}
},
"time":1481466945,
"timeNano":1481466945523301735
}{
"status":"die",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"die",
"Actor":{
"ID":"9..d",
"Attributes":{
"exitCode":"0",
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466948,
"timeNano":1481466948241703985
}{
"status":"die",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"die",
"Actor":{
"ID":"9..d",
"Attributes":{
"exitCode":"0",
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466948,
"timeNano":1481466948241703985
}{
"Type":"network",
"Action":"disconnect",
"Actor":{
"ID":"7b64910a4b7b6b2b90245db1c7a1d8330fb1d0db9237da8c64378c0ad7745e9e",
"Attributes":{
"container":"9..d",
"name":"bridge",
"type":"bridge"
}
},
"time":1481466948,
"timeNano":1481466948357530397
}{
"Type":"network",
"Action":"disconnect",
"Actor":{
"ID":"7b64910a4b7b6b2b90245db1c7a1d8330fb1d0db9237da8c64378c0ad7745e9e",
"Attributes":{
"container":"9..d",
"name":"bridge",
"type":"bridge"
}
},
"time":1481466948,
"timeNano":1481466948357530397
}{
"status":"stop",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"stop",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466948,
"timeNano":1481466948398153788
}{
"status":"stop",
"id":"9..d",
"from":"mariadb",
"Type":"container",
"Action":"stop",
"Actor":{
"ID":"9..d",
"Attributes":{
"image":"mariadb",
"name":"mariadb"
}
},
"time":1481466948,
"timeNano":1481466948398153788
}
This is a simple example, for sure! But it could give you ideas for building a home-made event
alerting/monitoring system using simple Bash commands.
Docker Images
If you type docker images , you will see that you have the hello-world image, even if its container is not running:
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest c54a2cc56cbb 6 months ago 1.848 kB
An image has a tag (latest), an id (c54a2cc56cbb), a creation date (6 months ago) and a size (1.848 kB).
Images are generally stored in a registry, unless it is an image built from scratch (for example using the tar method)
that you didn't push to any public or private registry.
Registries are to Docker images what git is to code.
Docker images report the following Docker events:
delete,
import,
load,
pull,
push,
save,
tag,
untag
Docker Containers
Docker is a containerization software that solves many problems in the modern IT like it is said in the introduction
of this book.
A container is basically an image that is running in your host OS. The important thing to remember is that there is a
big difference between containers and VMs and that's why Docker is so darn popular:
Containers use shared operating systems while VMs emulate in a different way the hardware.
The command to create a container is docker create , followed by options, the image name, the command to execute when
the container starts and its arguments (see the sketch just below).
Other commands are used to start, pause, stop, run and delete containers; we are going to see these commands
and more in another chapter.
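A minimal sketch of this syntax (the nginx image, the container name and the port mapping are arbitrary examples):
docker create --name web -p 8080:80 nginx   # creates the container without starting it
docker start web                            # starts the previously created container
docker ps                                   # the container now appears as running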
Keep in mind that a container has a life cycle with different phases that correspond to the following events:
attach
commit
copy
create
destroy
detach
die
exec_create
exec_detach
exec_start
export
health_status
kill
oom
pause
rename
resize
restart
start
stop
top
unpause
update
Docker Volumes
We are still using the MariaDB example:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /data/db:/var/lib/mysql -d mariadb
In the docker run command we use to create a dockerized database, the -v flag mounts a volume. Inside the container,
the mounted volume points to /var/lib/mysql , which is the MySQL data directory where databases and other MySQL data
are stored.
Without logging into the container, you can see the data directory mapped on the host under /data/db :
ls -l /data/db/
total 110632
-rw-rw---- 1 999 999 16384 Dec 11 21:27 aria_log.00000001
-rw-rw---- 1 999 999 52 Dec 11 21:27 aria_log_control
-rw-rw---- 1 999 999 12582912 Dec 11 21:27 ibdata1
-rw-rw---- 1 999 999 50331648 Dec 11 21:27 ib_logfile0
-rw-rw---- 1 999 999 50331648 Dec 11 21:27 ib_logfile1
drwx------ 2 999 999 4096 Dec 11 21:27 mysql
drwx------ 2 999 999 4096 Dec 11 21:27 performance_schema
You can create the same volume without mapping it to the host filesystem:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /var/lib/mysql -d mariadb
In this way, you might think you can't access the files of your MariaDB databases from the host machine.
Nope! In reality, you can. Docker volumes can be found in:
/var/lib/docker/volumes/
You can see a list of volumes identifiers:
ls -lrth /var/lib/docker/volumes/
3..e
3..7
5..0
b..2
b..7
metadata.db
One of the above identifiers is the volume that was created using the command:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /var/lib/mysql -d mariadb
Docker has a helpful command to inspect a container, called docker inspect .
We are going to see it in detail later, but we will use it now to identify the Docker volume attached to our
running container.
docker inspect -f '{{ (index .Mounts 0).Source }}' mariadb
It should output one of the ids above:
/var/lib/docker/volumes/b..2/_data
The docker inspect command gives you a lot of information about a container (or an image). It generally returns
a JSON document containing all of that information, but using the -f or --format flag you can extract a single attribute.
docker inspect -f '{{ (index .Mounts 0).Source }}' mariadb shows the Source field of the first element of the JSON list
named Mounts.
This is the part of the json output containing the information about the mounts:
"Mounts": [
{
"Name": "b..2",
"Source": "/var/lib/docker/volumes/b..2/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
Now you can understand why the format '{{ (index .Mounts 0).Source }}' shows
/var/lib/docker/volumes/b..2/_data .
Keep in mind that the host directory /var/lib/docker/volumes/b..2/_data depends on the host; that's why declaring
a volume in a Dockerfile does not let you choose where it is mounted on the host filesystem.
Data Volumes
Data volumes are different from the volumes previously created to host the MariaDB databases. They are separate
containers dedicated to storing data; they are persistent, they can be shared between containers, and they can be used as
a backup, a restore point or for data migration.
For the same MariaDB setup, we are going to create a volume in a separate container:
docker run --name mariadb_storage -v /var/lib/mysql -it -d busybox
Now let's run the MariaDB container without creating a new volume, using the previously created data
container instead:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password --volumes-from mariadb_storage -d mariadb
Notice the usage of --volumes-from mariadb_storage that will attach the data container as a volume to the MariaDB
container.
In the same way, we can inspect the data container:
docker inspect -f '{{ (index .Mounts 0).Source }}' mariadb_storage
Say we have this as an output:
/var/lib/docker/volumes/f..9/_data
This means that the latter directory contains all of our database files.
You can check it by typing:
ls -l /var/lib/docker/volumes/f..9/_data
total 110656
-rw-rw---- 1 999 999 16384 Dec 11 22:23 aria_log.00000001
-rw-rw---- 1 999 999 52 Dec 11 22:23 aria_log_control
-rw-rw---- 1 999 999 12582912 Dec 11 22:23 ibdata1
-rw-rw---- 1 999 999 50331648 Dec 11 22:23 ib_logfile0
-rw-rw---- 1 999 999 50331648 Dec 11 22:23 ib_logfile1
-rw-rw---- 1 999 999 0 Dec 11 22:23 multi-master.info
drwx------ 2 999 999 4096 Dec 11 22:23 mysql
drwx------ 2 999 999 4096 Dec 11 22:23 performance_schema
-rw-rw---- 1 999 999 24576 Dec 11 22:23 tc.log
Now that we have a separate volume container, we can create 3 or more MariaDB containers using the same
volume:
docker run \
--name mariadb1 \
-e MYSQL_ROOT_PASSWORD=password \
--volumes-from mariadb_storage \
-d mariadb \
&& \
docker run \
--name mariadb2 \
-e MYSQL_ROOT_PASSWORD=password \
--volumes-from mariadb_storage \
-d mariadb \
&& \
docker run \
--name mariadb3 \
-e MYSQL_ROOT_PASSWORD=password \
--volumes-from mariadb_storage \
-d mariadb
f0ff0622ea755c266e069a42a2a627380042c8f1a3464521dd0fb16ef3257534
0c10ea5ec23a3057de30dc4f97e485349ac7b0a335a98e1c3deece56f3594964
128e66ffa823e3156fb28b48f45e3234d235949508780f1c1cef8c38ed5bfb0b
Now, you can see all of our 4 containers:
docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
049824509c18 mariadb "docker-entrypoint.sh" 3306/tcp mariadb1
128e66ffa823 mariadb "docker-entrypoint.sh" 3306/tcp mariadb3
0c10ea5ec23a mariadb "docker-entrypoint.sh" 3306/tcp mariadb2
03808f86ba9b busybox "sh" mariadb_storage
Sharing a data volume between three MariaDB instances in this example is just for testing and
learning purposes; it is not production-ready.
We can indeed have multiple containers sharing the same volumes. However, the fact that it is possible
does not mean that it is safe. Multiple containers writing to a single shared volume may cause data
corruption or higher-level application problems like consumer/producer issues.
Now type docker ps again, and I am pretty sure that only one MariaDB container will be running; the others will have stopped.
If you want more explanation, look at the MariaDB logs of a stopped container: you will notice
that mysqld could not get an exclusive lock because the file is already in use by another process, which is exactly
what is happening.
[ERROR] mysqld: Got error 'Could not get an exclusive lock; file is probably in use by another process' when trying to use aria control file '/var/lib/mysql/aria_log_control'
Other logs are showing the same problem:
2016-12-11 22:02:25 140419388524480 [ERROR] InnoDB: Can't open './ibdata1'
2016-12-11 22:02:25 140419388524480 [ERROR] InnoDB: Could not open or create
the system tablespace. If you tried to add new data files to the system tablespace,
and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was,
and remove the new ibdata files InnoDB created in this failed attempt.
InnoDB only wrote those files full of zeros, but did not yet use them in any way.
But be careful: do not remove old data files which contain your precious data!
2016-12-11 22:02:25 140419388524480 [ERROR] Plugin 'InnoDB' init function returned error.
2016-12-11 22:02:25 140419388524480 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-12-11 22:02:25 140419388524480 [ERROR] mysqld: Can't lock aria control file
'/var/lib/mysql/aria_log_control' for exclusive use, error: 11. Will retry for 30 seconds
2016-12-11 22:02:56 140419388524480 [ERROR] mysqld: Got error 'Could not get an exclusive lock;
file is probably in use by another process' when trying to use aria control file '/var/lib/mysql/aria_log_control'
2016-12-11 22:02:56 140419388524480 [ERROR] Plugin 'Aria' init function returned error.
2016-12-11 22:02:56 140419388524480 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
2016-12-11 22:02:56 140419388524480 [Note] Plugin 'FEEDBACK' is disabled.
2016-12-11 22:02:56 140419388524480 [ERROR] Unknown/unsupported storage engine: InnoDB
2016-12-11 22:02:56 140419388524480 [ERROR] Aborting
Make sure your applications are designed to safely write to shared data containers.
Executing docker rm -f mariadb will remove the container from the host, but if you type ls -l
/var/lib/docker/volumes/f..9/_data you will still find your data on the host machine, and you can check that the Docker
volume container keeps running independently of the removed container.
Cleaning Docker Dangling Volumes
You can find yourself in a situation where some volumes are not used (or referenced) by any container; these are
called "dangling" or "orphaned" volumes. To see your orphaned volumes, type:
docker volume ls -qf dangling=true
3..e
3..7
5..0
b..2
b..7
The fastest way to clean up these volumes is to run:
docker volume ls -qf dangling=true | xargs -r docker volume rm
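As a side note, more recent Docker versions (1.13 and later) ship a dedicated command for the same cleanup; if your version supports it, you can simply run:
docker volume prune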
Docker Volumes Events
Docker volumes report the following events: create, mount, unmount, destroy.
Docker Networks
Like a VM, a container can be part of one or many networks.
You may have noticed - just after the installation of Docker - that you have a new network interface on your host
machine:
ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:ef:e0:98:84
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:356 (356.0 B) TX bytes:0 (0.0 B)
And if you type docker network ls , you can see the list of the 3 default networks:
NETWORK ID NAME DRIVER SCOPE
2aaea08714ba bridge bridge local
608a539ce1e3 host host local
ede46dbb22d7 none null local
Docker Networks Types
When you typed docker network ls you had 3 default networks implemented in Docker:
NETWORK ID NAME DRIVER SCOPE
2aaea08714ba bridge bridge local
608a539ce1e3 host host local
ede46dbb22d7 none null local
bridge (driver: bridge)
none (driver: null)
host (driver: host)
Bridge Networks
The bridge network corresponds to the docker0 interface you saw on the host when typing ifconfig .
All Docker containers are connected to this network unless you pass the --network flag when creating the container.
Let's verify this by creating a Docker container and inspecting the default network (bridge):
docker run --name mariadb_storage -v /var/lib/mysql -it -d busybox
Let's inspect the bridge network:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "2aaea08714ba2e3927334e8d0044afeb85f68ec0a0624ae7ab1f81d976b49292",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {
"3c9afd0c72eb4738750ad1b83d38587a1c47636429e5f67c5deec5ec5dfa3210": {
"Name": "mariadb_storage",
"EndpointID": "ed47ef978ed78c5677c81a11d439d0af19a54f0620387794db1061807e39c2b7",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
We can see that the mariadb_storage data container is living inside the bridge network.
Now, if you create a new network db_network:
docker network create db_network
Then create a Docker container attached to the same network (db_network):
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password --network db_network -d mariadb
If you inspect the default bridge network, you will notice that there are no containers attached to it, or at least that the
newly created container is not attached to it:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "2aaea08714ba2e3927334e8d0044afeb85f68ec0a0624ae7ab1f81d976b49292",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
And if you inspect the new network, you can see that the mariadb container is running in this network (db_network):
docker network inspect db_network
[
{
"Name": "db_network",
"Id": "ec809d635d4ad22f852a2419032812cb7c9361e8bb0111815dc79a34fee30668",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
}
]
},
"Internal": false,
"Containers": {
"3ddfbc0f03f4a574480d842330add4169f834cfcde96e52956ee34164629dd52": {
"Name": "mariadb",
"EndpointID": "0efca766ec11f7442399a4c81b6e79db825eeb81736db1760820fe61f24b9805",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
This network has 172.18.0.0/16 as its subnet and the container address is 172.18.0.2. In all cases, if you create a
network without specifying its driver, it will use bridge as the default.
Overlay Networks
Generally speaking, an overlay network is a network built on top of another network; VoIP, VPNs and
CDNs are examples of overlay networks.
Docker also uses overlay networks, notably in Swarm mode, where you create this type of network on a
manager node. While managed containers can use them (in Swarm mode or with an external key-value store),
unmanaged containers cannot be part of an overlay network.
Docker overlay networks allow creating multi-host networks.
To create a network called database_network:
docker network create --driver overlay --subnet 10.0.9.0/32 database_network
This shows an error message: the daemon responds that it failed to allocate a gateway.
Since overlay networks only work in Swarm mode (or with an external key-value store), you should run your Docker
engine in Swarm mode before creating the network:
docker swarm init --advertise-addr 127.0.0.1
Swarm mode is probably the easiest way to create and manage overlay networks, but you can also use external
components like Consul, Etcd or ZooKeeper, because outside of Swarm mode overlay networks require a valid key-value store service.
Using Swarm Mode
Docker Swarm mode uses the built-in container orchestrator implemented in Docker Engine
since version 1.12.
We are going to see Docker Swarm mode and Docker orchestration in detail later, but for the networking part it is
important to briefly define orchestration.
Container orchestration allows users to coordinate containers running in a multi-container environment.
Say you are deploying a web application: with an orchestrator you can define a service for your web app, deploy its
container and scale it to 10 or 20 instances, create a network and attach your different containers to it, and treat all
of the web application containers as a single entity from a deployment, availability, scaling and networking point of
view.
To start working with Swarm mode, you need at least version 1.12. Initialize the Swarm cluster and don't forget
to use your own machine's IP instead of 192.168.0.47 :
docker swarm init --advertise-addr 192.168.0.47
You will notice an information message similar to the following one:
Swarm initialized: current node (0kgtmbz5flwv4te3cxewdi1u5) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-1re2072eii1walbueysk8yk14cqnmgltnf4vyuhmjli2od2quk-8qc8iewqzmlvp1igp63xyftua \
192.168.0.47:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
In this part of the book, since we are focusing on networking rather than on Swarm mode itself, we are going to use a
single node.
Now that Swarm mode is activated, we can create a new network that we call webapp_network:
docker network create --driver overlay --subnet 10.0.9.0/16 --opt encrypted webapp_network
Let's check our network using docker network ls :
NETWORK ID NAME DRIVER SCOPE
1wdbz5gldym1 webapp_network overlay swarm
If you inspect the network by typing docker network inspect webapp_network , you will see different information about
webapp_network, like its name, its id, its IPv6 status, the containers using it, and some configuration like the subnet, the
gateway and the scope, which is swarm.
[
{
"Name": "webapp_network",
"Id": "1wdbz5gldym121beq4ftdttgg",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.9.0/16",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257",
"encrypted": ""
},
"Labels": null
}
]
After learning how to use and manage Docker Swarm mode, you will be able to create services and containers and
attach them to a network, but for the moment let's just keep briefly exploring the networking part.
In our example, we consider that a webapp service was created and that we are running 3 instances
of its container; a sketch of how such a service could be created follows.
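This is only a sketch, assuming Swarm mode is active and reusing the instavote/vote image from earlier in the chapter; the service name webapp matches the container names shown in the inspect output below:
docker service create \
  --name webapp \
  --replicas 3 \
  --network webapp_network \
  instavote/vote
docker service ps webapp    # lists the 3 running tasks of the service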
If we inspect the same network again, we will notice that the 3 container instances are now included:
[
{
"Name": "webapp_network",
"Id": "2tb6djzmq4x4u5ey3h2e74y9e",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.9.0/16",
"Gateway": "10.0.0.2"
}
]
},
"Internal": false,
"Containers": {
"ab364a18eb272533e6bda88a0c705004df4b7974e64ad63c36fd3061a716f613": {
"Name": "webapp.3.3fmrpa3cd7y6b4ol33x7m413v",
"EndpointID": "e6980c952db5aa7b6995bc853d5ab82668a4692af98540feef1aa58a4e8fd282",
"MacAddress": "02:42:0a:00:00:06",
"IPv4Address": "10.0.0.6/16",
"IPv6Address": ""
},
"c5628fcfcf6a5f0e5a49ddfe20b141591c95f01c45a37262ee93e8ad3e3cff7e": {
"Name": "webapp.2.dckgxiffiay2kk7t2n149rpnj",
"EndpointID": "ce05566b0110f448018ccdaed4aa21b402ae659efa2adff3102492ee6c04fce8",
"MacAddress": "02:42:0a:00:00:05",
"IPv4Address": "10.0.0.5/16",
"IPv6Address": ""
},
"f0a7b5201e84c010616e6033fee7d245af7063922c61941cfc01f9edd0be7226": {
"Name": "webapp.1.73m1914qlcj8agxf5lkhzb7se",
"EndpointID": "1dce8dd253b818f8b0ba129a59328fed427da91d86126a1267ac7855666ec434",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "258",
"encrypted": ""
},
"Labels": {}
}
]
Let's make some experiments to better understand networking in Swarm mode.
If we execute the Linux command route from inside one of the containers, we can see its default gateway. And if we
execute traceroute google.com from inside one of the containers, this is a sample output:
traceroute to google.com (216.58.209.238), 30 hops max, 46 byte packets
1 172.19.0.1 (172.19.0.1) 0.003 ms 0.012 ms 0.004 ms
2 192.168.3.1 (192.168.3.1) 2.535 ms 0.481 ms 0.609 ms
3 192.168.1.1 (192.168.1.1) 2.228 ms 1.442 ms 1.826 ms
4 80.10.234.169 (80.10.234.169) 10.377 ms 5.951 ms 17.026 ms
[...]
11 par10s29-in-f14.1e100.net (216.58.209.238) 4.723 ms 3.428 ms 4.191 ms
To go outside and reach the Internet, the first hop was 172.19.0.1 . If you type ifconfig on your host machine, you
will find that this address is allocated to the docker_gwbridge interface, which we are going to see later in this section.
docker_gwbridge Link encap:Ethernet HWaddr 02:42:d2:fd:fd:dc
inet addr:172.19.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7334 errors:0 dropped:0 overruns:0 frame:0
TX packets:3843 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:595520 (595.5 KB) TX bytes:358648 (358.6 KB)
Containers provisioned in Docker Swarm mode can be reached through service discovery in two ways (a sketch follows this list):
Via a Virtual IP (VIP), routed through the Docker Swarm ingress overlay network.
Or via DNS round robin (DNSRR).
Because the VIP is not on the ingress overlay network, you need to create a user-defined overlay network in order to use
VIP or DNSRR.
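As an illustration of the DNSRR option, a service can be created with its endpoint mode set to dnsrr; this is only a sketch, and the service name db, the mariadb image and the webapp_network network are assumptions reused from the previous examples:
docker service create \
  --name db \
  --replicas 3 \
  --endpoint-mode dnsrr \
  --network webapp_network \
  -e MYSQL_ROOT_PASSWORD=password \
  mariadb
# From another container attached to webapp_network, resolving the name "db"
# should return the individual task IPs instead of a single virtual IP.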
Natively, Docker Engine in Swarm mode supports overlay networks, so you don't need an external key-value store,
and container-to-container networking is enabled.
Inside a container, the eth0 interface is connected to the overlay network, so if you create an
overlay network, the VIPs will be associated with it. The eth1 interface is connected to the docker_gwbridge network,
which provides external connectivity outside of the container cluster.
Using External Key/Value Store
Our choice for this part is Consul, a service discovery and configuration tool for distributed systems
that implements a key/value store.
To use Consul 0.7.1, download it:
cd /tmp
wget https://releases.hashicorp.com/consul/0.7.1/consul_0.7.1_linux_amd64.zip
unzip consul_0.7.1_linux_amd64.zip
chmod +x consul
mv consul /usr/bin/consul
Create the configuration directory for Consul:
mkdir -p /etc/consul.d/
If you want to install the web UI, download it and unzip it:
cd /tmp
wget https://releases.hashicorp.com/consul/0.7.1/consul_0.7.1_web_ui.zip
mkdir -p /opt/consul-ui
cd /opt/consul-ui
unzip /tmp/consul_0.7.1_web_ui.zip
Now you can run a Consul agent:
consul agent \
-server \
-bootstrap-expect 1 \
-data-dir /tmp/consul \
-node=agent-one \
-bind=192.168.0.47 \
-client=0.0.0.0 \
-config-dir /etc/consul.d \
-ui-dir /opt/consul-ui
Replace the option values with your own, where:
-bootstrap-expect=0 Sets server to expect bootstrap mode.
-bind=0.0.0.0 Sets the bind address for cluster communication
-client=127.0.0.1 Sets the address to bind for client access.
This includes RPC, DNS, HTTP and HTTPS (if configured)
-node=hostname Name of this node. Must be unique in the cluster
-config-dir=foo Path to a directory to read configuration files
from. This will read every file ending in ".json"
as configuration in this directory in alphabetical
order. This can be specified multiple times.
-ui-dir=path Path to directory containing the Web UI resources
You can use other options like:
-bootstrap Sets server to bootstrap mode
-advertise=addr Sets the advertise address to use
-atlas=org/name Sets the Atlas infrastructure name, enables SCADA.
-atlas-join Enables auto-joining the Atlas cluster
-atlas-token=token Provides the Atlas API token
-atlas-endpoint=1.2.3.4 The address of the endpoint for Atlas integration.
-http-port=8500 Sets the HTTP API port to listen on
-config-file=foo Path to a JSON file to read configuration from.
This can be specified multiple times.
-data-dir=path Path to a data directory to store agent state
-recursor=1.2.3.4 Address of an upstream DNS server.
Can be specified multiple times.
-dc=east-aws Datacenter of the agent
-encrypt=key Provides the gossip encryption key
-join=1.2.3.4 Address of an agent to join at start time.
Can be specified multiple times.
-join-wan=1.2.3.4 Address of an agent to join -wan at start time.
Can be specified multiple times.
-retry-join=1.2.3.4 Address of an agent to join at start time with
retries enabled. Can be specified multiple times.
-retry-interval=30s Time to wait between join attempts.
-retry-max=0 Maximum number of join attempts. Defaults to 0, which
will retry indefinitely.
-retry-join-wan=1.2.3.4 Address of an agent to join -wan at start time with
retries enabled. Can be specified multiple times.
-retry-interval-wan=30s Time to wait between join -wan attempts.
-retry-max-wan=0 Maximum number of join -wan attempts. Defaults to 0, which
will retry indefinitely.
-log-level=info Log level of the agent.
-protocol=N Sets the protocol version. Defaults to latest.
-rejoin Ignores a previous leave and attempts to rejoin the cluster.
-server Switches agent to server mode.
-syslog Enables logging to syslog
-pid-file=path Path to file to store agent PID
You can see the execution logs on your terminal:
==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
Node name: 'agent-one'
Datacenter: 'dc1'
Server: true (bootstrap: true)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 192.168.0.47 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
==> Log data will now stream in as it occurs:
2016/12/17 18:50:55 [INFO] raft: Node at 192.168.0.47:8300 [Follower] entering Follower state
2016/12/17 18:50:55 [INFO] serf: EventMemberJoin: agent-one 192.168.0.47
2016/12/17 18:50:55 [INFO] consul: adding LAN server agent-one (Addr: 192.168.0.47:8300) (DC: dc1)
2016/12/17 18:50:55 [INFO] serf: EventMemberJoin: agent-one.dc1 192.168.0.47
2016/12/17 18:50:55 [ERR] agent: failed to sync remote state: No cluster leader
2016/12/17 18:50:55 [INFO] consul: adding WAN server agent-one.dc1 (Addr: 192.168.0.47:8300) (DC: dc1)
2016/12/17 18:50:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/12/17 18:50:56 [INFO] raft: Node at 192.168.0.47:8300 [Candidate] entering Candidate state
2016/12/17 18:50:56 [INFO] raft: Election won. Tally: 1
2016/12/17 18:50:56 [INFO] raft: Node at 192.168.0.47:8300 [Leader] entering Leader state
2016/12/17 18:50:56 [INFO] consul: cluster leadership acquired
2016/12/17 18:50:56 [INFO] consul: New leader elected: agent-one
2016/12/17 18:50:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/12/17 18:50:56 [INFO] consul: member 'agent-one' joined, marking health alive
2016/12/17 18:50:56 [INFO] agent: Synced service 'consul'
You can check Consul nodes by typing consul members :
Node Address Status Type Build Protocol DC
agent-one 192.168.0.47:8301 alive server 0.6.0 2 dc1
Now go to your Docker configuration file /etc/default/docker and add the following line with the right
configuration:
DOCKER_OPTS="--cluster-store=consul://192.168.0.47:8500 --cluster-advertise=192.168.0.47:4000"
Restart Docker:
service docker restart
And create an overlay network:
docker network create --driver overlay --subnet=10.0.1.0/24 databases_network
Check if the network was created:
docker network ls
NETWORK ID NAME DRIVER SCOPE
6d6484687002 databases_network overlay global
If you inspect the recently created network using docker network inspect databases_network , you will have something
similar to the following output:
[
{
"Name": "databases_network",
"Id": "6d64846870029d114bb8638f492af7fc9b5c87c2facb67890b0f40187b73ac87",
"Scope": "global",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.1.0/24"
}
]
},
"Internal": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Notice that the network Scope is now global :
"Scope": "global",
If you compare it to the network created earlier using Swarm mode, you will notice that the scope is global instead of
swarm.
Docker Networks Events
Docker networks system reports the following events: create, connect, disconnect, destroy.
Docker Daemon & Architecture
Docker Daemon
Like the init daemon, the cron daemon crond or the DHCP daemon dhcpd, Docker has its own daemon: dockerd.
To find the Docker daemon, list all Linux daemons:
ps -U0 -o 'tty,pid,comm' | grep ^?
And grep for dockerd in the output:
ps -U0 -o 'tty,pid,comm' | grep ^?|grep -i dockerd
? 2779 dockerd
Note that you may also see docker-containe or another truncated form of docker-containerd-shim .
If Docker is already running, typing dockerd will give you an error message similar to this:
FATA[0000] Error starting daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid
Now let's stop Docker ( service docker stop ) and run its daemon directly using the dockerd command. Running the
daemon in the foreground with dockerd is a good debugging tool: as you can see below, the running traces appear right on
your terminal screen:
INFO[0000] libcontainerd: new containerd process, pid: 19717
WARN[0000] containerd: low RLIMIT_NOFILE changing to max current=1024 max=4096
INFO[0001] [graphdriver] using prior storage driver "aufs"
INFO[0003] Graph migration to content-addressability took 0.63 seconds
WARN[0003] Your kernel does not support swap memory limit.
WARN[0003] mountpoint for pids not found
INFO[0003] Loading containers: start.
INFO[0003] Firewalld running: false
INFO[0004] Removing stale sandbox ingress_sbox (ingress-sbox)
INFO[0004] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. \
Daemon option --bip can be used to set a preferred IP address
INFO[0004] Loading containers: done.
INFO[0004] Listening for local connections addr=/var/lib/docker/swarm/control.sock proto=unix
INFO[0004] Listening for connections addr=[::]:2377 proto=tcp
INFO[0004] 61c88d41fce85c57 became follower at term 12
INFO[0004] newRaft 61c88d41fce85c57 [peers: [], term: 12, commit: 290, applied: 0, lastindex: 290, lastterm: 12]
INFO[0004] 61c88d41fce85c57 is starting a new election at term 12
INFO[0004] 61c88d41fce85c57 became candidate at term 13
INFO[0004] 61c88d41fce85c57 received vote from 61c88d41fce85c57 at term 13
INFO[0004] 61c88d41fce85c57 became leader at term 13
INFO[0004] raft.node: 61c88d41fce85c57 elected leader 61c88d41fce85c57 at term 13
INFO[0004] Initializing Libnetwork Agent Listen-Addr=0.0.0.0 Local-addr=192.168.0.47 Adv-addr=192.168.0.47 Remote-addr =
INFO[0004] Daemon has completed initialization
INFO[0004] Initializing Libnetwork Agent Listen-Addr=0.0.0.0 Local-addr=192.168.0.47 Adv-addr=192.168.0.47 Remote-addr =
INFO[0004] Docker daemon commit=7392c3b graphdriver=aufs version=1.12.5
INFO[0004] Gossip cluster hostname eonSpider-3e64aecb2dd5
INFO[0004] API listen on /var/run/docker.sock
INFO[0004] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers :
[nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0004] IPv6 enabled; Adding default IPv6 external servers :
[nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
INFO[0000] Firewalld running: false
Now, if you create or remove containers for example, you will see in these traces how the Docker daemon connects you
(the Docker client) to your Docker containers.
This is how the Docker daemon communicates with the rest of the Docker modules.
Containerd
Containerd is one of the more recent projects in the Docker ecosystem; its purpose is to bring more modularity to the
Docker architecture and more neutrality vis-à-vis the other industry actors (cloud providers and other orchestration
services).
According to Solomon Hykes, containerd has been deployed on millions of machines since April 2016, when it was
included in Docker 1.11. The announced roadmap to extend containerd gets its input from cloud providers and
actors like Alibaba Cloud, AWS, Google, IBM, Microsoft, and other active members of the container ecosystem.
More Docker engine functionality will be added to containerd so that containerd 1.0 will provide all the core
primitives you need to manage containers with parity on Linux and Windows hosts:
Container execution and supervision
Image distribution
Network Interfaces Management
Local storage
Native plumbing level API
Full OCI support, including the extended OCI image specification
To build, ship and run containerized applications, you may continue to use Docker, but if you are looking for
specialized components you could consider containerd.
Docker Engine 1.11 was the first release built on runC (a runtime based on Open Container Initiative technology)
and containerd.
Formed in June 2015, the Open Container Initiative (OCI) aims to establish common standards for software
containers in order to avoid a potential fragmentation and divisions inside the container ecosystem.
It contains two specifications:
runtime-spec: The runtime specification
image-spec: The image specification
The runtime specification outlines how to run a filesystem bundle that is unpacked on disk:
A standardized container bundle should contain the needed information and configurations to load and run a
container in a config.json file residing in the root of the bundle directory.
A standardized container bundle should contain a directory representing the root filesystem of the container.
Generally this directory has a conventional name like rootfs.
You can see this JSON file if you export and extract an image. In the following example, we are going to use the busybox
image.
mkdir my_container
cd my_container
mkdir rootfs
docker export $(docker create busybox) | tar -C rootfs -xvf -
Now we have the extracted busybox image inside the rootfs directory.
tree -d my_container/
my_container/
└── rootfs
├── bin
├── dev
│ ├── pts
│ └── shm
├── etc
├── home
├── proc
├── root
├── sys
├── tmp
├── usr
│ └── sbin
└── var
├── spool
│ └── mail
└── www
We can generate the config.json file:
docker-runc spec
Alternatively, you can use runC directly to generate the JSON file:
runc spec
This is the generated configuration file (config.json):
{
"ociVersion": "1.0.0-rc2-dev",
"platform": {
"os": "linux",
"arch": "amd64"
},
"process": {
"terminal": true,
"consoleSize": {
"height": 0,
"width": 0
},
"user": {
"uid": 0,
"gid": 0
},
"args": [
"sh"
],
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm"
],
"cwd": "/",
"capabilities": [
"CAP_AUDIT_WRITE",
"CAP_KILL",
"CAP_NET_BIND_SERVICE"
],
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 1024,
"soft": 1024
}
],
"noNewPrivileges": true
},
"root": {
"path": "rootfs",
"readonly": true
},
"hostname": "runc",
"mounts": [
{
"destination": "/proc",
"type": "proc",
"source": "proc"
},
{
"destination": "/dev",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"strictatime",
"mode=755",
"size=65536k"
]
},
{
"destination": "/dev/pts",
"type": "devpts",
"source": "devpts",
"options": [
"nosuid",
"noexec",
"newinstance",
"ptmxmode=0666",
"mode=0620",
"gid=5"
]
},
{
"destination": "/dev/shm",
"type": "tmpfs",
"source": "shm",
"options": [
"nosuid",
"noexec",
"nodev",
"mode=1777",
"size=65536k"
]
},
{
"destination": "/dev/mqueue",
"type": "mqueue",
"source": "mqueue",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/sys",
"type": "sysfs",
"source": "sysfs",
"options": [
"nosuid",
"noexec",
"nodev",
"ro"
]
},
{
"destination": "/sys/fs/cgroup",
"type": "cgroup",
"source": "cgroup",
"options": [
"nosuid",
"noexec",
"nodev",
"relatime",
"ro"
]
}
],
"hooks": {},
"linux": {
"resources": {
"devices": [
{
"allow": false,
"access": "rwm"
}
]
},
"namespaces": [
{
"type": "pid"
},
{
"type": "network"
},
{
"type": "ipc"
},
{
"type": "uts"
},
{
"type": "mount"
}
],
"maskedPaths": [
"/proc/kcore",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/sys/firmware"
],
"readonlyPaths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
}
}
Now you can edit any of the configuration values listed above and run a container again without even using Docker, just
runC:
runc run container-name
Note that you should install runC first in order to use it ( sudo apt install runc on Ubuntu 16.04). You could
also install it from source:
mkdir -p ~/golang/src/github.com/opencontainers/
cd ~/golang/src/github.com/opencontainers/
git clone https://github.com/opencontainers/runc
cd ./runc
make
sudo make install
runC is a standalone container runtime that implements the full OCI runtime specification: it allows you to spin up
containers, interact with them, and manage their lifecycle. This is why containers built with one engine (like Docker)
can run on another engine. Containers are started as child processes of runC and can be embedded into various other
systems without having to run a daemon (like the Docker daemon).
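To tie this together with the busybox bundle we extracted earlier, here is a minimal sketch of running that bundle
directly with runC; the container name my_busybox is arbitrary and runC usually requires root privileges:
cd my_container            # the bundle directory we created earlier (adjust the path if needed)
runc spec                  # generates config.json next to the rootfs/ directory (skip if it already exists)
sudo runc run my_busybox   # starts a container from the bundle, no Docker daemon involved
sudo runc list             # from another terminal: list the containers known to runC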
runC is built on libcontainer, the same container library powering a Docker Engine installation. Prior to
version 1.11, the Docker Engine alone was used to manage volumes, networks, containers, images etc. Now, the Docker
architecture is broken into four components: Docker Engine, containerd, containerd-shim and runC. The binaries are
respectively called docker, docker-containerd, docker-containerd-shim, and docker-runc.
To run a container, Docker Engine creates the image and passes it to containerd. containerd calls containerd-shim,
which uses runC to run the container. containerd-shim then allows the runtime (runC in this case) to exit after it
starts the container: this way we can run daemonless containers, because we do not have to keep long-running runtime
processes around for containers.
Currently, the creation of a container is handled by runC (via containerd), but it is possible to use another
binary (instead of runC) that exposes the same command-line interface as runC and accepts an OCI bundle.
You can see the different runtimes that you have on your host by typing:
docker info|grep -i runtime
Since I am using the default runtime, this is what I should get as an output:
Runtimes: runc
Default Runtime: runc
To add another runtime, you should follow this command:
docker daemon --add-runtime "<runtime-name>=<runtime-path>"
Example:
docker daemon --add-runtime "oci=/usr/local/sbin/runc"
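Depending on your Docker version, you may prefer declaring additional runtimes in the daemon configuration file
instead of passing a flag. This is only a sketch; the runtime name oci and the runC path are arbitrary examples:
{
  "runtimes": {
    "oci": {
      "path": "/usr/local/sbin/runc"
    }
  }
}
Put this in /etc/docker/daemon.json, restart the daemon, and select the runtime per container:
docker run --rm --runtime=oci hello-world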
There is only one containerd-shim per container; it manages the STDIO FIFOs and keeps them open for the container in
case containerd or Docker dies.
It is also in charge of reporting the container's exit status to a higher level like Docker.
Container runtime, lifecycle support and execution (create, start, stop, pause, resume, exec, signal & delete) are
some of the features implemented in containerd. Others are managed by other components of Docker (volumes,
logging ..etc). Here is a table from the containerd GitHub repository that lists the different features and tells
whether they are in or out of scope.
| Name | Description | In/Out | Reason |
|------|-------------|--------|--------|
| execution | Provide an extensible execution layer for executing a container | in | Create, start, stop, pause, resume, exec, signal, delete |
| cow filesystem | Built-in functionality for overlay, aufs, and other copy-on-write filesystems for containers | in | |
| distribution | Having the ability to push and pull images as well as operations on images as a first class API object | in | containerd will fully support the management and retrieval of images |
| low-level networking drivers | Providing network functionality to containers along with configuring their network namespaces | in | Network support will be added via interface and network namespace operations, not service discovery and service abstractions. |
| build | Building images as a first class API | out | Build is a higher level tooling feature and can be implemented in many different ways on top of containerd |
| volumes | Volume management for external data | out | The API supports mounts, binds, etc. where all volume type systems can be built on top of. |
| logging | Persisting container logs | out | Logging can be built on top of containerd because the container's STDIO will be provided to the clients and they can persist it any way they see fit. There is no IO copying of container STDIO in containerd. |
If we run a container:
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=password -v /data/lists:/var/lib/mysql -d mariadb
Unable to find image 'mariadb:latest' locally
latest: Pulling from library/mariadb
75a822cd7888: Pull complete
b8d5846e536a: Pull complete
b75e9152a170: Pull complete
832e6b030496: Pull complete
034e06b5514d: Pull complete
374292b6cca5: Pull complete
d2a2cf5c3400: Pull complete
f75e0958527b: Pull complete
1826247c7258: Pull complete
68b5724d9fdd: Pull complete
d56c5e7c652e: Pull complete
b5d709749ac4: Pull complete
Digest: sha256:0ce9f13b5c5d235397252570acd0286a0a03472a22b7f0384fce09e65c680d13
Status: Downloaded newer image for mariadb:latest
db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843
If you type ps aux, you will notice the docker-containerd-shim process for this container, running with
db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843 ,
/var/run/docker/libcontainerd/db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843
and the runC binary ( docker-runc ) as parameters:
docker-containerd-shim \
db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843 \
/var/run/docker/libcontainerd/db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843 \
docker-runc
db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843 is the id of the container that you can see at the
end of the container creation.
ls -l /var/run/docker/libcontainerd/db5218c494190c11a2fcc9627ea1371935d7021e86b5f652221bdac1cf182843
total 4
-rw-r--r-- 1 root root 3653 Dec 27 22:21 config.json
prwx------ 1 root root 0 Dec 27 22:21 init-stderr
prwx------ 1 root root 0 Dec 27 22:21 init-stdin
prwx------ 1 root root 0 Dec 27 22:21 init-stdout
Docker Daemon Events
Docker daemon reports the following event: reload.
Docker Plugins
Even if they are not widely used, Docker plugins are an interesting feature because they extend Docker with third-
party plugins like network or storage plugins. Docker plugins run outside of the Docker process and expose a webhook-
like functionality: the Docker daemon sends them HTTP POST requests so that the plugins can act on Docker events.
Let's take Flocker as an example.
Flocker is an open-source container data volume orchestrator for dockerized applications. Flocker gives Ops teams
the tools they need to run containerized stateful services like databases in production, since it provides a tool that
migrates data along with containers as they change hosts.
The standard way to run a volume container is :
docker run --name my_volume_container_1 -v /data -it -d busybox
But if you want to use Flocker plugin, you should run it like this:
docker run --name my_volume_container_2 -v /data -it --volume-driver=flocker -d busybox
The Docker community was one of the most active communities during 2016, and the value of plugins is to allow Docker
developers to contribute to a growing ecosystem.
Another known plugin is developed by Weave: it integrates with Weave Scope so you can see how containers are
connected, and using Weave Flux you can automate request routing in order to turn containers into microservices.
We have seen how to use a storage plugin; in general it looks like this:
docker run -v <volume_name>:<mountpoint> --volume-driver=<plugin_name> <..> <image>
Network plugins have a different usage: you create the network first:
docker network create -d <plugin_name> <network_name>
Then you use it:
docker run --net=<networkname> <..> <image>
You can use some commands to manage Docker plugins, but they are all experimental and may change before they become
generally available.
To use the following commands, you should install the experimental version of Docker.
docker plugin ls
docker plugin enable
docker plugin disable
docker plugin inspect
docker plugin install
docker plugin rm
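As an illustration, and assuming a Docker release where these plugin commands are available, installing and using a
volume plugin from Docker Hub could look like the following sketch ( vieux/sshfs is a commonly cited example plugin
and the sshcmd value is a placeholder to adapt):
docker plugin install vieux/sshfs                                   # download and enable the plugin
docker plugin ls                                                    # verify that it is enabled
docker volume create -d vieux/sshfs -o sshcmd=user@host:/remote sshvolume
docker run --rm -v sshvolume:/data busybox ls /data                 # use the volume from a container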
Overview Of Available Plugins
This is a non-exhaustive overview of available plugins, which you can also find in the official documentation:
Contiv Networking An open source network plugin to provide infrastructure and security policies for
a multi-tenant micro services deployment, while providing an integration to physical network for
non-container workload. Contiv Networking implements the remote driver and IPAM APIs available
in Docker 1.9 onwards.
Kuryr Network Plugin A network plugin is developed as part of the OpenStack Kuryr project and
implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the
OpenStack networking service. It includes an IPAM driver as well.
Weave Network Plugin A network plugin that creates a virtual network that connects your Docker
containers - across multiple hosts or clouds and enables automatic discovery of applications. Weave
networks are resilient, partition tolerant, secure and work in partially connected networks, and other
adverse environments - all configured with delightful simplicity.
Azure File Storage plugin Lets you mount Microsoft Azure File Storage shares to Docker containers
as volumes using the SMB 3.0 protocol.
Blockbridge plugin A volume plugin that provides access to an extensible set of container-based
persistent storage options. It supports single and multi-host Docker environments with features that
include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS.
Contiv Volume Plugin An open source volume plugin that provides multi-tenant, persistent,
distributed storage with intent based consumption. It has support for Ceph and NFS.
Convoy plugin A volume plugin for a variety of storage back-ends including device mapper and
NFS. It’s a simple standalone executable written in Go and provides the framework to support
vendor-specific extensions such as snapshots, backups and restore.
DRBD plugin A volume plugin that provides highly available storage replicated by
DRBD. Data written to the docker volume is replicated in a cluster of DRBD nodes.
Flocker plugin A volume plugin that provides multi-host portable volumes for Docker, enabling you
to run databases and other stateful containers and move them around across a cluster of machines.
gce-docker plugin A volume plugin able to attach, format and mount Google Compute persistent-
disks.
GlusterFS plugin A volume plugin that provides multi-host volumes management for Docker using
GlusterFS.
Horcrux Volume Plugin A volume plugin that allows on-demand, version controlled access to your
data. Horcrux is an open-source plugin, written in Go, and supports SCP, Minio and Amazon S3.
HPE 3Par Volume Plugin A volume plugin that supports HPE 3Par and StoreVirtual iSCSI storage
arrays.
IPFS Volume Plugin An open source volume plugin that allows using an ipfs filesystem as a volume.
Keywhiz plugin A plugin that provides credentials and secret management using Keywhiz as a central
repository.
Local Persist Plugin A volume plugin that extends the default local driver’s functionality by
allowing you specify a mountpoint anywhere on the host, which enables the files to always persist,
even if the volume is removed via docker volume rm .
NetApp Plugin (nDVP) A volume plugin that provides direct integration with the Docker ecosystem
for the NetApp storage portfolio. The nDVP package supports the provisioning and management of
storage resources from the storage platform to Docker hosts, with a robust framework for adding
additional platforms in the future.
Netshare plugin A volume plugin that provides volume management for NFS 3/4, AWS EFS and
CIFS file systems.
OpenStorage Plugin A cluster-aware volume plugin that provides volume management for file and
block storage solutions. It implements a vendor neutral specification for implementing extensions
such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS
to name a few.
Portworx Volume Plugin A volume plugin that turns any server into a scale-out converged
compute/storage node, providing container granular storage and highly available volumes across any
node, using a shared-nothing storage backend that works with any docker scheduler.
Quobyte Volume Plugin A volume plugin that connects Docker to Quobyte’s data center file system,
a general-purpose scalable and fault-tolerant storage platform.
REX-Ray plugin A volume plugin which is written in Go and provides advanced storage
functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack,
and EMC.
Virtuozzo Storage and Ploop plugin A volume plugin with support for Virtuozzo Storage distributed
cloud file system as well as ploop devices.
VMware vSphere Storage Plugin Docker Volume Driver for vSphere enables customers to address
persistent storage requirements for Docker containers in vSphere environments.
Twistlock AuthZ Broker A basic extendable authorization plugin that runs directly on the host or
inside a container. This plugin allows you to define user policies that it evaluates during
authorization. Basic authorization is provided if the Docker daemon is started with the --tlsverify flag
(username is extracted from the certificate common name).
Docker Plugins Events
Docker plugins report the following events: install, enable, disable, remove.
Docker Philosophy
Docker is a new way to build, ship and run applications; it is different from but easier than the "traditional" ways.
This isn't just related to the technology but also to the philosophy.
Build Ship & Run
Virtualization is a useful technology to run different OSs on a single bare-metal server, but containers go further:
they decouple applications and software from the underlying OS and can be used to create a self-service for an
application or a PaaS. Docker is development-oriented and user friendly; it can be used in DevOps pipelines in order
to develop, test and run applications.
In the same way, Docker allows DevOps teams to easily have a local development environment similar to production:
just build and ship.
After finishing their tests, developers can publish the container using Docker Machine.
In the developer-driven era, where cloud infrastructures, IoT, mobile devices and other technologies are innovating
each day, it is really important from a DevOps point of view to reduce the complexity of software production and to
improve quality, stability and scalability by integrating containers as wrappers that standardize runtime systems.
Docker Is Single Process
A container can be single- or multi-process. While other containerization technologies (like LXC) allow containers to
run multiple processes, Docker restricts containers to a single main process, and if you would like to run n processes
for your application, you should run n Docker containers.
In reality, you can run multiple processes: since Docker has an instruction called ENTRYPOINT that starts a
program, you can set the ENTRYPOINT to execute a script that runs as many programs as you want, but it is not
completely aligned with the Docker philosophy.
Building an application based on single-process containers is efficient for creating microservices architectures,
developing PaaS (platform as a service), creating FaaS (function as a service) platforms, serverless computing or
event-based programming.
We are going to learn about building microservices applications using Docker later in this book. Briefly, it is an
approach to implement distributed architectures where services are independent processes that communicate with
each other over a network.
Docker Is Stateless
A Docker container consists of a series of layers created previously in an image; once the image becomes a container,
these layers become read-only layers.
Once a modification is made in the container by a process, a diff is produced, and Docker records the difference
between the container's layer and the matching image's layer, but only when you type
docker commit <container id>
In this case a new image will be created with the last modifications, but if you delete the container, the diff will
disappear.
A container's writable layer is not persistent storage: if you want to keep your data, you should use Docker
volumes and mount your files from the host filesystem, and in this case your files will be part of a volume, not of
the container.
Docker is stateless: if any change happens in a running container, a new and different image can be created, and
this is one of its strengths because you can do a fast and easy rollback anytime.
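To see this diff mechanism in action, here is a small sketch; the container and image names are arbitrary:
docker run -d --name stateless_demo busybox sh -c 'echo hello > /tmp/hello && sleep 1000'
docker diff stateless_demo                  # shows the added file: A /tmp/hello
docker commit stateless_demo my_snapshot    # creates a new image containing the change
docker rm -f stateless_demo                 # removing the container discards its writable layer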
Docker Is Portable
I created some public images that I pushed to my public Docker Hub, like eon01/vote and eon01/nginx-static; these
images were pulled more than 5k times. I created both of them using my laptop, and all of the people who pulled
them used them as containers in different environments: bare-metal servers, desktops, laptops, virtual machines ..etc.
All of these containers were built on one machine and ran in the same way on thousands of other machines.
Chapter IV - Advanced Concepts
Namespaces
According to the Linux man pages, a namespace wraps a global system resource in an abstraction that makes it appear to
the processes within the namespace that they have their own isolated instance of the global resource. Changes to the
global resource are visible to other processes that are members of the namespace, but are invisible to other
processes. One use of namespaces is to implement containers.
| Namespace | Constant | Isolates |
|------------|------------------|-------------------------------------|
| Cgroup | CLONE_NEWCGROUP | Cgroup root directory |
| IPC | CLONE_NEWIPC | System V IPC, POSIX message queues |
| Network | CLONE_NEWNET | Network devices, stacks, ports, etc.|
| Mount | CLONE_NEWNS | Mount points |
| PID | CLONE_NEWPID | Process IDs |
| User | CLONE_NEWUSER | User and group IDs |
| UTS | CLONE_NEWUTS | Hostname and NIS domain name |
Namespaces are the first form of isolation: a process running in container A cannot see or affect another process
running in container B, even if both processes are running on the same host machine.
Kernel namespaces allow a single host to have multiple:
Network devices
IP addresses
Routing rules
Netfilter rules
Timewait buckets
Bind buckets
Routing cache
etc ..
Every Docker container will have its own IP address and will see other docker containers as hosts connected to the
same "switch".
Control Groups (cgroups)
According to Wikipedia, cgroups or control groups is a Linux kernel feature that limits, accounts for, and isolates the
resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.
This feature was introduced by engineers at Google in 2006 under the name "process containers". In late 2007, the
nomenclature changed to "control groups" to avoid confusion caused by multiple meanings of the term "container"
in the Linux kernel context, and the control groups functionality was merged into the Linux kernel mainline in kernel
version 2.6.24, which was released in January 2008.
Control Groups are also used by Docker, not only to implement resource accounting and isolate resource usage, but
also because they provide many useful metrics that help ensure that each container gets enough memory, CPU, disk I/O,
memory cache, CPU cache ..etc.
Control Groups also provide a level of security, because using this feature ensures that a single container will not
bring down the host OS by exhausting its resources.
Public and private PaaS also make use of cgroups in order to guarantee a stable service with no downtime when an
application exhausts CPU, memory or any other critical resource. Other software and container technologies use
cgroups, like LXC, libvirt, systemd, Open Grid Scheduler and lmctfy.
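Docker exposes cgroup limits through docker run flags; here is a minimal sketch with arbitrary values:
docker run -d --name limited --memory=256m --cpu-shares=512 nginx
docker inspect --format '{{.HostConfig.Memory}}' limited    # prints 268435456 (bytes)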
Linux Capabilities
Unix was designed to run two categories of processes:
privileged processes, with the user id 0, which is the root user.
unprivileged processes, where the user id is different from zero.
If you are familiar with Linux, you can list all of the user IDs by typing:
ps -eo uid
Every process in a Unix system has an owner (designated by its UID) and a group owner (identified by a GID). When Unix
starts a process, its GID is set to the GID of its parent process. The main purpose of this scheme is performing
permission checks.
While privileged processes bypass all kernel permission checks, all other processes are subject to a permission
check (the Unix-like system kernel makes the permission check based on the UID and the GID of the non-root process).
Starting with kernel 2.2, Linux divides privileges into distinct units that we call capabilities. Capabilities can be
independently enabled and disabled, which is a great security and permission management feature. A process can
have the CAP_NET_RAW capability while being denied the CAP_NET_ADMIN capability. This is just an example
explaining how capabilities work. You can find the complete list of the other capabilities in the Linux manpage:
man capabilities
By default, Docker starts containers with a limited list of capabilities so that containers will not use real root
privileges and the root user within the container will not have all of the privileges of a real root user.
This is the list of the capabilities used by a default Docker installation :
s.Process.Capabilities = []string{
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE",
}
You can find the same list in the Docker source code on GitHub.
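You can adjust this list per container with the --cap-drop and --cap-add flags; a small sketch:
docker run --rm --cap-drop=CHOWN busybox chown nobody /tmp          # fails with "Operation not permitted"
docker run --rm --cap-add=NET_ADMIN busybox ip link set lo down     # needs a capability outside the default list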
Secure Computing Mode (seccomp)
Since kernel version 2.6.12, Linux allows a process to make a one-way transition into a "secure" state in which it can
no longer make arbitrary system calls.
In this strict mode, only the exit(), sigreturn(), read() and write() system calls (the latter two on already-open
file descriptors) are allowed; any other call made by a process in the "secure" state will result in its termination:
the kernel sends it a SIGKILL. This is the sandboxing mechanism in the Linux kernel based on the seccomp kernel
feature.
Docker allows using seccomp to restrict the actions available within a container and restrict your application's
access.
To use the seccomp feature, the kernel should be configured with CONFIG_SECCOMP enabled and Docker should be
built with seccomp support.
If you would like to check whether your kernel has seccomp activated, type:
cat /boot/config-`uname -r` | grep CONFIG_SECCOMP=
If it is activated, you should see this:
CONFIG_SECCOMP=y
Docker uses seccomp profiles, which are configurations that disable around 44 system calls out of 300+. You can
find the default profile in the official repository.
You can choose a specific seccomp profile when running a container by adding the following options to the Docker
run command:
--security-opt seccomp=/path/to/seccomp/profile.json
The following table, which can be found in the official Docker repository, lists some of the blocked syscalls.
Syscall Description
acct Accounting syscall which could let containers disable their own resource limits or process
accounting. Also gated by CAP_SYS_PACCT .
add_key Prevent containers from using the kernel keyring, which is not namespaced.
adjtimex Similar to clock_settime and settimeofday , time/date is not namespaced.
bpf Deny loading potentially persistent bpf programs into kernel, already gated by
CAP_SYS_ADMIN .
clock_adjtime Time/date is not namespaced.
clock_settime Time/date is not namespaced.
clone Deny cloning new namespaces. Also gated by CAP_SYS_ADMIN for CLONE_* flags, except
CLONE_USERNS .
create_module Deny manipulation and functions on kernel modules.
delete_module Deny manipulation and functions on kernel modules. Also gated by CAP_SYS_MODULE .
finit_module Deny manipulation and functions on kernel modules. Also gated by CAP_SYS_MODULE .
get_kernel_syms Deny retrieval of exported kernel and module symbols.
get_mempolicy Syscall that modifies kernel memory and NUMA settings. Already gated by CAP_SYS_NICE .
init_module Deny manipulation and functions on kernel modules. Also gated by CAP_SYS_MODULE .
ioperm Prevent containers from modifying kernel I/O privilege levels. Already gated by
CAP_SYS_RAWIO .
iopl Prevent containers from modifying kernel I/O privilege levels. Already gated by
CAP_SYS_RAWIO .
kcmp Restrict process inspection capabilities, already blocked by dropping CAP_PTRACE .
kexec_file_load Sister syscall of kexec_load that does the same thing, slightly different arguments.
kexec_load Deny loading a new kernel for later execution.
keyctl Prevent containers from using the kernel keyring, which is not namespaced.
lookup_dcookie Tracing/profiling syscall, which could leak a lot of information on the host.
mbind Syscall that modifies kernel memory and NUMA settings. Already gated by CAP_SYS_NICE .
mount Deny mounting, already gated by CAP_SYS_ADMIN .
move_pages Syscall that modifies kernel memory and NUMA settings.
name_to_handle_at Sister syscall to open_by_handle_at . Already gated by CAP_SYS_NICE .
nfsservctl Deny interaction with the kernel nfs daemon.
open_by_handle_at Cause of an old container breakout. Also gated by CAP_DAC_READ_SEARCH .
perf_event_open Tracing/profiling syscall, which could leak a lot of information on the host.
personality Prevent container from enabling BSD emulation. Not inherently dangerous, but poorly tested,
potential for a lot of kernel vulns.
pivot_root Deny pivot_root , should be privileged operation.
process_vm_readv Restrict process inspection capabilities, already blocked by dropping CAP_PTRACE .
process_vm_writev Restrict process inspection capabilities, already blocked by dropping CAP_PTRACE .
ptrace Tracing/profiling syscall, which could leak a lot of information on the host. Already blocked
by dropping CAP_PTRACE .
query_module Deny manipulation and functions on kernel modules.
quotactl Quota syscall which could let containers disable their own resource limits or process
accounting. Also gated by CAP_SYS_ADMIN .
reboot Don't let containers reboot the host. Also gated by CAP_SYS_BOOT .
request_key Prevent containers from using the kernel keyring, which is not namespaced.
set_mempolicy Syscall that modifies kernel memory and NUMA settings. Already gated by CAP_SYS_NICE .
setns Deny associating a thread with a namespace. Also gated by CAP_SYS_ADMIN .
settimeofday Time/date is not namespaced. Also gated by CAP_SYS_TIME .
stime Time/date is not namespaced. Also gated by CAP_SYS_TIME .
swapon Deny start/stop swapping to file/device. Also gated by CAP_SYS_ADMIN .
swapoff Deny start/stop swapping to file/device. Also gated by CAP_SYS_ADMIN .
sysfs Obsolete syscall.
_sysctl Obsolete, replaced by /proc/sys.
umount Should be a privileged operation. Also gated by CAP_SYS_ADMIN .
umount2 Should be a privileged operation.
unshare Deny cloning new namespaces for processes. Also gated by CAP_SYS_ADMIN , with the exception
of unshare --user .
uselib Older syscall related to shared libraries, unused for a long time.
userfaultfd Userspace page fault handling, largely needed for process migration.
ustat Obsolete syscall.
vm86 In kernel x86 real mode virtual machine. Also gated by CAP_SYS_ADMIN .
vm86old In kernel x86 real mode virtual machine. Also gated by CAP_SYS_ADMIN .
If you would like to run a container without the default seccomp profile, you can run:
docker run --rm -it --security-opt seccomp=unconfined hello-world
If you are using a distribution that does not support seccomp profiles, like Ubuntu 14.04, you will have the
following error : docker: Error response from daemon: Linux seccomp: seccomp profiles are not supported on this daemon, you
cannot specify a custom seccomp profile.
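As a sketch of a custom profile, the following blocks the chmod family of syscalls while allowing everything else.
Note that the exact JSON schema has changed between Docker releases (older profiles use a name field per syscall,
newer ones a names array), so adapt it to your version:
cat > deny-chmod.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "name": "chmod",    "action": "SCMP_ACT_ERRNO", "args": [] },
    { "name": "fchmod",   "action": "SCMP_ACT_ERRNO", "args": [] },
    { "name": "fchmodat", "action": "SCMP_ACT_ERRNO", "args": [] }
  ]
}
EOF
# chmod inside the container should now fail with "Operation not permitted"
docker run --rm --security-opt seccomp=deny-chmod.json busybox chmod 400 /etc/hostname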
Application Armor (Apparmor)
Application Armor is a Linux kernel security module. It allows the Linux administrator to restrict programs'
capabilities with per-program profiles.
In other words, this is a tool to lock down applications by limiting their access to only the resources they are
supposed to use, without disturbing their execution or their performance.
Profiles can allow capabilities like raw sockets, network access, or reading, writing and executing files. This
functionality was added to Linux to complement the Discretionary Access Control (DAC) model with Mandatory Access
Control (MAC).
You can check your Apparmor status by typing:
sudo apparmor_status
You will see a similar output to the following one, if it is activated:
apparmor module is loaded.
19 profiles are loaded.
18 profiles are in enforce mode.
/sbin/dhclient
/usr/bin/evince
/usr/bin/evince-previewer
/usr/bin/evince-previewer//sanitized_helper
/usr/bin/evince-thumbnailer
/usr/bin/evince-thumbnailer//sanitized_helper
/usr/bin/evince//sanitized_helper
/usr/bin/lxc-start
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/cups-browsed
/usr/sbin/mysqld
/usr/sbin/ntpd
/usr/sbin/tcpdump
docker-default
lxc-container-default
lxc-container-default-with-mounting
lxc-container-default-with-nesting
1 profiles are in complain mode.
/usr/sbin/sssd
8 processes have profiles defined.
8 processes are in enforce mode.
/sbin/dhclient (1815)
/usr/sbin/cups-browsed (1697)
/usr/sbin/ntpd (3363)
docker-default (3214)
docker-default (3380)
docker-default (3381)
docker-default (3382)
docker-default (3390)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
You can see in the list above that the Docker profile is active and it is called docker-default . It is the default
profile, so when you run a container, the following command is implicitly executed:
docker run --rm -it --security-opt apparmor=docker-default hello-world
The default profile is the following:
#include <tunables/global>
profile docker-default flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
network,
capability,
file,
umount,
deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,
deny @{PROC}/sysrq-trigger rwklx,
deny @{PROC}/mem rwklx,
deny @{PROC}/kmem rwklx,
deny @{PROC}/kcore rwklx,
deny mount,
deny /sys/[^f]*/** wklx,
deny /sys/f[^s]*/** wklx,
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/efi/efivars/** rwklx,
deny /sys/kernel/security/** rwklx,
}
You can confine containers using AppArmor, but not the programs running inside a container. If you are
running Nginx under a container profile to protect the host, you will not be able to confine Nginx with a different
profile to protect the container.
AppArmor has been part of the mainline Linux kernel since version 2.6.36.
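Running a container under a custom profile follows the same pattern. This is only a sketch, assuming you have written
a profile named my-docker-profile under /etc/apparmor.d/ :
sudo apparmor_parser -r -W /etc/apparmor.d/my-docker-profile       # load (or reload) the profile into the kernel
docker run --rm -it --security-opt apparmor=my-docker-profile busybox sh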
Docker Union Filesystem
A Docker image is built of layers; UnionFS is the technology that allows that.
Union Filesystem, or UnionFS, is a filesystem service used in Linux, FreeBSD and NetBSD. It allows a set of files
and directories to form a single filesystem by grouping them in a single branch. Any Docker image is in reality a set
of layered filesystems superposed one over the other.
UnionFS layers are transparently overlaid and form a coherent file system; these layers are called branches. They may
be either read-only or read-write file systems.
Docker uses UnionFS in order to avoid duplicating a branch. The boot filesystem (bootfs), for example, is one of the
branches that are used in many containers and it resembles the typical Unix boot filesystem. It is used in Ubuntu,
Debian, CentOS and many other containers.
When a container is started, Docker mounts a filesystem on top of any layers below and makes it writable, so any
change is applied only to this layer. If you want to modify a file, it will be copied from the read-only layer below
into the read-write layer at the top.
Let's start a container and see its filesystem layers.
docker create -it redis
You can notice that Docker is downloading different layers in the output:
latest: Pulling from library/redis
6a5a5368e0c2: Pull complete
2f1103ce5ca9: Pull complete
086a40c85e01: Pull complete
9a5e9d112ec4: Pull complete
dadc4b601bb4: Pull complete
b8066982e7a1: Pull complete
2bcdfa1b63bf: Pull complete
Digest: sha256:38e873a0db859d0aa8ab6bae7bcb03c1bb65d2ad120346a09613084b49185912
Status: Downloaded newer image for redis:latest
6ab94a06bc0263110b973174d65cbc6ebd6d9fc637526b2c9dd3eac3c3bcf032
docker create creates a writable container layer over the specified image ( redis in this case) and
prepares it for running a command. It is similar to docker run -d except the container will not start running.
When running the last command, you will get an id printed on your terminal. It is the id of the prepared
container.
6ab94a06bc0263110b973174d65cbc6ebd6d9fc637526b2c9dd3eac3c3bcf032
The container is ready, let's start it:
docker start 6ab94a06bc0263110b973174d65cbc6ebd6d9fc637526b2c9dd3eac3c3bcf032
The command docker start creates a process space around the UnionFS block.
We can't have more than one process space per container.
A docker ps will show that the container is running:
CONTAINER ID IMAGE COMMAND PORTS NAMES
6ab94a06bc02 redis "docker-entrypoint.sh" 6379/tcp trusting_carson
If you would like to see the layers (here with the aufs storage driver), type:
ls -l /var/lib/docker/aufs/layers
The layers belonging to this container will have names that start with the container id 6ab94a06bc02 .
We haven't yet seen the concept of a Dockerfile, so don't worry if this is the first time you see it. For the moment,
let's just limit our understanding to the explanation given in the official documentation:
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text
document that contains all the commands a user could call on the command line to assemble an image. Using
Docker build users can create an automated build that executes several command-line instructions in
succession.
In general, when writing a Dockerfile, every line in this file will create an additional layer.
FROM busybox # This is a layer
MAINTAINER Aymen El Amri # This is a layer
RUN ls # This is a layer
Let's build this image:
docker build .
The id of the built image, shown in the output for this example, was
7f49abaf7a69
In other words, a layer is a change in an image.
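You can list the layers that make up the image we just built with docker history:
docker history 7f49abaf7a69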
So the second advantage of using UnionFS and the layered filesystem is the isolation of modifications: any change
that happens in the running container's writable layer is discarded once the container is removed, while the
underlying image layers stay untouched. The bootfs layer is one of the layers that users will not interact with.
To sum up, when Docker mounts the rootfs, it starts as a read-only system, as in a traditional Linux boot, but then,
instead of changing the file system to the read-write mode, Docker takes advantage of a union mount to add a read-
write filesystem over the first read-only filesystem. There may be multiple read-only file systems layered on top of
each other. These file systems are called layers.
Docker supports a number of different union file systems. The next chapter about images will reinforce your
understanding about this.
Storage Drivers
OverlayFS, AUFS, VFS and Device Mapper are some storage technologies that are compatible with Docker. They are
easily "pluggable" into Docker and we call them storage drivers. Every technology has its own specificities, and
choosing one depends on your technical requirements and criteria (like stability, maintenance, backup, speed,
hardware ..etc).
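You can check which storage driver your daemon is currently using with:
docker info | grep -i "storage driver"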
When we start running an image, Docker uses a mechanism called copy-on-write (CoW).
CoW is a standard Unix pattern that provides a single shared copy of some data until it is modified. In general, CoW
is an implicit sharing of resources used to implement a copy operation on modifiable resources.
// define x
std::string x("Hello");
// x and y use the same buffer
std::string y = x;
// now y uses a different buffer while x still uses the same old buffer
y += ", World!";
In storage, CoW is used as an underlying mechanism to create snapshots, as is the case for logical volume
management and filesystems like ZFS and AUFS.
In a Redirect-on-Write (RoW) algorithm, the original storage is never modified: when a write operation is requested,
it is automatically redirected to a new storage location, leaving the original untouched.
In a Copy-on-Write (CoW) algorithm, when a write request is made, the data is first copied into a new storage
resource and then modified.
You have probably seen blog posts claiming that one storage driver is better than the others, or some post-mortems.
Keep in mind that no technology is absolutely better than another: every technology has its pros and cons.
OverlayFS
OverlayFS is a filesystem service for Linux that implements a union mount for other file systems. OverlayFS was added
to Docker in commit 453552c8384929d8ae04dcf1c6954435c0111da0 by @alexlarsson from Red Hat. In the commit message, it
was described as:
Each container/image can have a "root" subdirectory which is a plain filesystem hierarchy, or they can use
overlayfs.
If they use overlayfs there is a "upper" directory and a "lower-id" file, as well as "merged" and "work"
directories. The "upper" directory has the upper layer of the overlay, and "lower-id" contains the id of the
parent whose "root" directory shall be used as the lower layer in the overlay. The overlay itself is mounted in
the "merged" directory, and the "work" dir is needed for overlayfs to work.
When a overlay layer is created there are two cases, either the parent has a "root" dir, then we start out with a
empty "upper" directory overlaid on the parents root. This is typically the case with the init layer of a container
which is based on an image. If there is no "root" in the parent, we inherit the lower-id from the parent and start
by making a copy if the parents "upper" dir. This is typically the case for a container layer which copies its
parent -init upper layer.
Additionally we also have a custom implementation of ApplyLayer which makes a recursive copy of the parent
"root" layer using hardlinks to share file data, and then applies the layer on top of that. This means all chile
images share file (but not directory) data with the parent.
Overlay2 was added to Docker in the pull #22126
Pro
OverlayFS is similar to aufs but, unlike aufs, it has been merged into the mainline Linux kernel since version 3.18.
Like aufs, it enables shared memory pages between containers using the same shared libraries (from the same layer on
disk). The advantage of using OverlayFS is that it is under continued development. It is simpler than aufs and in most
cases faster.
Since version 1.12, Docker provides the overlay2 storage driver, which is more efficient than overlay. The version 2
of the OverlayFS driver requires kernel 4.0 or later.
Many users had issues with OverlayFS because they ran out of inodes; this problem was solved by overlay2. Testing
shows a significant reduction in the number of inodes used, especially for images having multiple layers.
Cons
Mainly, the inode exhaustion problems. It is a serious problem that was fixed by overlay2.
There are some other issues, but the majority of these problems were fixed in version 2. If you are planning to use
OverlayFS, use overlay2 instead.
On the other hand, overlay2 is a young codebase and Docker 1.12 is the first release offering it, so logically some
other bugs will be discovered in the future and it is a driver to use with vigilance.
AUFS
aufs (short for advanced multi-layered unification filesystem) is a union filesystem that has existed in Docker since
the beginning. It was developed to improve reliability and performance and introduced some new concepts, like writable
branch balancing. In its manpage, aufs is described as a stackable unification filesystem such as Unionfs, which
unifies several directories and provides a merged single directory.
In the early days, aufs was an entire re-design and re-implementation of the Unionfs 1.x series. After many original
ideas, approaches and improvements, it became totally different from Unionfs while keeping the basic features.
Pro
aufs is one of the most popular drivers for Docker. It is stable and used by many distributions like Knoppix, Ubuntu
10.04, Gentoo Linux 12.0 and Puppy Linux live CDs. aufs helps different containers share memory pages when they load
the same shared libraries from the same layer.
It is the longest existing and possibly the most tested graphdriver backend for Docker. It is reasonably performant
and stable for a wide range of use cases. Even though it is only available on Ubuntu and Debian kernels (as noted
below), there has been significant use of these two distributions with Docker, allowing for lots of airtime for the
aufs driver to be tested in a broad set of environments. It enables shared memory pages for different containers
loading the same shared libraries from the same layer (because they are the same inode on disk).
Cons
aufs was rejected for merging into mainline Linux. Its code was criticized for being "dense, unreadable, and
uncommented".
Aufs consists of about 20,000 lines of dense, unreadable, uncommented code, as opposed to around 10,000 for
Unionfs and 3,000 for union mounts and 60,000 for all of the VFS. The aufs code is generally something that
one does not want to look at. source
Instead, OverlayFS was merged in the Linux kernel.
Btrfs
Btrfs is a modern CoW filesystem for Linux.
The philosophy behind Btrfs is to implement advanced features while focusing on fault tolerance and easy
administration. Btrfs is developed by multiple companies and licensed under the GPL. The name stands for "B-tree file
system", so it is easier and probably acceptable to pronounce it "B-tree FS" instead of b-t-r-fs; it is also commonly
pronounced "Butter FS".
Btrfs has native features named subvolumes and snapshots, which together provide CoW-like capabilities.
It actually consists of three types of on-disk structures:
block headers,
keys,
and items
currently defined as follows:
struct btrfs_header {
u8 csum[32];
u8 fsid[16];
__le64 blocknr;
__le64 flags;
u8 chunk_tree_uid[16];
__le64 generation;
__le64 owner;
__le32 nritems;
u8 level;
}
struct btrfs_disk_key {
__le64 objectid;
u8 type;
__le64 offset;
}
struct btrfs_item {
struct btrfs_disk_key key;
__le32 offset;
__le32 size;
}
Btrfs was added to Docker in this commit by Alex Larsson from Red Hat.
Pro
Btrfs development started in 2007 and it was later merged into the mainline Linux kernel; it has been in use for
several years, and Linus Torvalds used it as the root file system on one of his laptops. When Btrfs was released, it
was optimized compared to old-school filesystems. According to the official wiki, this filesystem has the following
features:
Extent based file storage
2^64 byte == 16 EiB maximum file size (practical limit is 8 EiB due to Linux VFS)
Space-efficient packing of small files
Space-efficient indexed directories
Dynamic inode allocation
Writable snapshots, read-only snapshots
Subvolumes (separate internal filesystem roots)
Checksums on data and metadata (crc32c)
Compression (zlib and LZO)
Integrated multiple device support
File Striping
File Mirroring
File Striping+Mirroring
Single and Dual Parity implementations (experimental, not production-ready)
SSD (flash storage) awareness (TRIM/Discard for reporting free blocks for reuse) and optimizations (e.g.
avoiding unnecessary seek optimizations, sending writes in clusters, even if they are from unrelated files. This
results in larger write operations and faster write throughput)
Efficient incremental backup
Background scrub process for finding and repairing errors of files with redundant copies
Online filesystem defragmentation
Offline filesystem check
In-place conversion of existing ext3/4 file systems
Seed devices. Create a (readonly) filesystem that acts as a template to seed other Btrfs filesystems. The original
filesystem and devices are included as a readonly starting point for the new filesystem. Using copy on write,
all modifications are stored on different devices; the original is unchanged.
Subvolume-aware quota support
Send/receive of subvolume changes
Efficient incremental filesystem mirroring
Batch, or out-of-band deduplication (happens after writes, not during)
Cons
Btrfs hasn't been the default choice for many Linux distributions, and this fact makes Btrfs a technology with less
widespread testing and bug hunting than other filesystems.
Device Mapper
The device mapper is a framework provided by the Linux kernel to map physical block devices onto virtual block
devices, which is a higher-level abstraction. LVM2, software RAID and dm-crypt disk encryption are targets of this
technology. It also offers other features like file system snapshots. Device mapper works at the block level and not
at the file level.
The devicemapper storage driver stores every image and container on a separate virtual device; these are provisioned
as copy-on-write snapshot devices. We call this thin provisioning, or "thinp".
Pro
It has been tested and used by many communities, unlike some other filesystems.
Many projects and Linux features are built on top of the device mapper:
LVM2 – logical volume manager for the Linux kernel
dm-crypt – mapping target that provides volume encryption
dm-cache – mapping target that allows creation of hybrid volumes
dm-log-writes – mapping target that uses two devices, passing through the first device and logging the write
operations performed to it on the second device
dm-verity – validates the data blocks contained in a file system against a list of cryptographic hash values,
developed as part of the Chromium OS project
dmraid – provides access to "fake" RAID configurations via the device mapper
DM Multipath – provides I/O failover and load-balancing of block devices within the Linux kernel
Linux version of TrueCrypt
DRBD (Distributed Replicated Block Device)
kpartx – utility called from hotplug upon device maps creation and deletion
EVMS (deprecated)
cryptsetup – utility used to conveniently setup disk encryption based on dm-crypt
And of course Docker that uses device mapper to create copy-on-write storage for software containers.
Cons
If you would like to get good performance out of the devicemapper storage driver, you should do some configuration,
and you should not run it in "loopback" mode in production.
You should follow the 15 steps explained in the official Docker documentation in order to use the devicemapper driver
in direct-lvm mode.
I have never used it myself, but I have seen feedback from users having problems with it.
ZFS
This filesystem was first merged into the Docker engine in May 2015 and has been available since the Docker 1.7
release. The ZFS driver and the Docker go-zfs wrapper require the installation of zfs-utils or zfs (e.g. on Ubuntu
16.04).
ZFS, or the Zettabyte File System, is a combined file system and logical volume manager created at Sun Microsystems.
It has been used by OpenSolaris since 2005.
Pro
ZFS is a native filesystem on Solaris, OpenSolaris, OpenIndiana, illumos, Joyent SmartOS, OmniOS, FreeBSD,
Debian GNU/kFreeBSD, NetBSD and OSv.
ZFS is a killer app for Solaris, as it allows straightforward administration of disks and pools of disks while
providing performance and integrity. It protects data against "silent errors" of the disk (caused by firmware bugs or
even hardware malfunctions like bad cables ..etc).
According to Sun Microsystems official website, ZFS meets the needs of a file system for everything from desktops
to data centers and offers:
Simple administration : ZFS automates and consolidates complicated storage administration concepts, reducing
administrative overhead by 80 percent.
Provable data integrity : ZFS protects all data with 64-bit checksums that detect and correct silent data
corruption.
Unlimited scalability : As the world's first 128-bit file system, ZFS offers 16 billion billion times the capacity of
32- or 64-bit systems.
Blazing performance : ZFS is based on a transactional object model that removes most of the traditional
constraints on the order of issuing I/Os, which results in huge performance gains.
It achieves its performance through a number of techniques:
Dynamic striping across all devices to maximize throughput
Copy-on-write design makes most disk writes sequential
Multiple block sizes, automatically chosen to match workload
Explicit I/O priority with deadline scheduling
Globally optimal I/O sorting and aggregation
Multiple independent prefetch streams with automatic length and stride detection
Unlimited, instantaneous read/write snapshots
Parallel, constant-time directory operations
Creating a new zpool is needed to use zfs :
sudo zpool create -f zpool-docker /dev/xvdb
Cons
The ZFS license (CDDL) is incompatible with the Linux kernel license (GPL), so ZFS is not part of the mainline Linux
kernel and has to be installed as a separate module, and not every OS can use ZFS (e.g. Windows).
It takes some learning to use, so if you are using it as your main filesystem you will probably need some knowledge
about it. zfs lacks inode sharing for shared libraries between containers, but in reality it is not the only driver
that does not implement this.
VFS
vfs simply stands for Virtual File System, the abstraction layer on top of a concrete physical filesystem. The vfs
storage driver does not use a union filesystem or CoW, and that is why it is mostly used by developers for debugging.
You can find Docker volumes using vfs in /var/lib/docker/vfs/dir .
Pro
To test the Docker engine, VFS is very useful since it is simpler to validate tests using this simple filesystem. This
filesystem is also helpful to run Docker in Docker (dind): the official dind Docker image uses vfs as its default
storage driver.
--storage-driver=vfs
vfs is the only driver that is guaranteed to work regardless of the underlying filesystem in the case of Docker in
Docker, but it can be very slow and inefficient. Running Docker in Docker will be detailed later in this book.
Cons
It is not recommended to run VFS in production since it is not intended to be used with Docker production clusters.
What Storage Driver To Choose
Your choice of storage driver cannot be based only on the pros and cons of each filesystem; there are other criteria.
For example, overlay and overlay2 cannot be used on top of btrfs.
In general, some of these storage drivers can operate on top of different backing filesystems (the host filesystem),
but not all. This table shows the common usage of each storage driver:
| Storage Driver | Commonly Used On Top Of |
|----------------|-------------------------|
| overlay        | xfs & ext4              |
| overlay2       | xfs & ext4              |
| aufs           | xfs & ext4              |
| btrfs          | btrfs                   |
| devicemapper   | direct-lvm              |
| zfs            | zfs                     |
| vfs            | debugging               |
The Docker community created a practical diagram that shows simply the strengths and weaknesses of some storage
drivers and their best use cases.
Finally, there is no storage driver adapted to all use cases and no "ultimate choice" to make; it depends on your use
case. If you don't know what to choose exactly, go for aufs or overlay2.
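As a sketch, and assuming Docker 1.12 or later on a kernel that supports OverlayFS, you can select the driver
explicitly when starting the daemon; note that switching drivers hides the images and containers created with the
previous driver:
dockerd --storage-driver=overlay2
# or persistently, in /etc/docker/daemon.json (restart the daemon afterwards):
# { "storage-driver": "overlay2" }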
Chapter V - Working With Docker Images
Managing Docker Images
Images, Intermediate Images & Dangling Images
If you are running Docker on your laptop or your server, you can see a list of the images that Docker used or uses by
typing:
docker images
You will have a similar list:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> none 57ce8f25dd63 2 days ago 229.7 MB
scratch latest f02aa3980a99 2 days ago 0 B
xenial latest 7a409243b212 2 days ago 229.7 MB
<none> none 149b13361203 2 days ago 12.96 MB
Images with <none> are untagged images.
You can print a custom output where you choose to view the ID and the size of the image:
docker images --format "{{.ID}}: {{.Size}}"
Or to view the repository:
docker images --format "{{.ID}}: {{.Repository}}"
In many cases, we just need to get IDs:
docker images -q
If you want to list all of them, then type:
docker images -a
or
docker images --all
In this list you will see all of the images even the intermediate ones.
REPOSITORY TAG IMAGE ID SIZE
my_app latest 7f49abaf7a69 1.093 MB
<none> <none> afe4509e17bc 225.6 MB
All of the <none>:<none> are intermediate images.
<none>:<none> images will grow quickly with the number of images you download.
As you know, each Docker image is composed of layers with a parent-child hierarchical relationship.
These intermediate layers are a result of caching build operations, which decreases disk usage and speeds up builds.
Every build step is cached, and that's why you may experience some disk space problems after using Docker for a
while.
All Docker layers are stored in /var/lib/docker/graph , called the graph database.
Some of the intermediary images are not tagged; they are called dangling images (we will see below how to remove
them). You can list them with:
docker images --filter "dangling=true"
Other filters may be used:
- label=<key> or label=<key>=<value>
- before=(<image-name>[:tag]|<image-id>|<image@digest>)
- since=(<image-name>[:tag]|<image-id>|<image@digest>)
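Dangling images can be removed to reclaim disk space; one common way is to combine the filter with docker rmi :
docker rmi $(docker images -q --filter "dangling=true")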
Finding Images
If you type :
docker search ubuntu
you will get a list of images called Ubuntu that people shared publicly in the Docker Hub (hub.docker.com).
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating s... 4958 [OK]
ubuntu-upstart Upstart is an event-based replacement for ... 67 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of of... 47 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components... 28 [OK]
torusware/speedus-ubuntuAlways updated official Ubuntu docker imag... 27 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC sessi... 26 [OK]
ioft/armhf-ubuntu ABR] Ubuntu Docker images for the ARMv7(a... 19 [OK]
nickistre/ubuntu-lamp LAMP server on Ubuntu 10 [OK]
nuagebec/ubuntu Simple always updated Ubuntu docker images... 9 [OK]
nimmis/ubuntu This is a docker images different LTS vers... 5 [OK]
maxexcloo/ubuntu Base image built on Ubuntu with init, Supe... 2 [OK]
jordi/ubuntu Ubuntu Base Image 1 [OK]
admiringworm/ubuntu Base ubuntu images based on the official u... 1 [OK]
darksheer/ubuntu Base Ubuntu Image -- Updated hourly 1 [OK]
lynxtp/ubuntu https://github.com/lynxtp/docker-ubuntu 0 [OK]
datenbetrieb/ubuntu custom flavor of the official ubuntu base ... 0 [OK]
teamrock/ubuntu TeamRock's Ubuntu image configured with AW... 0 [OK]
labengine/ubuntu Images base ubuntu 0 [OK]
esycat/ubuntu Ubuntu LTS 0 [OK]
ustclug/ubuntu ubuntu image for docker with USTC mirror 0 [OK]
widerplan/ubuntu Our basic Ubuntu images. 0 [OK]
konstruktoid/ubuntu Ubuntu base image 0 [OK]
vcatechnology/ubuntu A Ubuntu image that is updated daily 0 [OK]
webhippie/ubuntu Docker images for ubuntu 0 [OK]
As you can notice, some images have automated builds while others don't have this feature activated. Automated builds
allow your image to stay up to date with changes in your private or public git (GitHub or Bitbucket) repositories.
Notice that if you type the search command you will only get 25 images; if you want more, you can use the --limit
option:
docker search --limit 100 mongodb
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mongo MongoDB document databases provide high av... 2616 [OK]
tutum/mongodb MongoDB Docker image – listens in port 2... 166 [OK]
frodenas/mongodb A Docker Image for MongoDB 12 [OK]
agaveapi/mongodb-sync Docker image that regularly backs up and/o... 7 [OK]
sameersbn/mongodb 6 [OK]
bitnami/mongodb Bitnami MongoDB Docker Image 5 [OK]
waitingkuo/mongodb MongoDB 2.4.9 4 [OK]
tobilg/mongodb-marathon A Docker image to start a dynamic MongoDB ... 4 [OK]
appelgriebsch/mongodb Configurable MongoDB container based on Al... 3 [OK]
azukiapp/mongodb Docker image to run MongoDB by Azuki - htt... 3 [OK]
tianon/mongodb-mms https://mms.mongodb.com/ 2 [OK]
cpuguy83/mongodb 2 [OK]
zokeber/mongodb MongoDB Dockerfile in CentOS 7 2 [OK]
triply/mongodb Extension of official mongodb image that a... 1 [OK]
hairmare/mongodb MongoDB on Gentoo 1 [OK]
networld/mongodb Networld PaaS MongoDB image in default ins... 1 [OK]
jetlabs/mongodb Build MongoDB 3 image 1 [OK]
mattselph/ubuntu-mongodb Ubuntu 14.04 LTS with Mongodb 2.6.4 1 [OK]
oliverwehn/mongodb Out-of-the-box app-ready MongoDB server wi... 1 [OK]
gorniv/mongodb MongoDB Docker image 1 [OK]
vaibhavtodi/mongodb A MongoDB Docker image on Ubuntu 14.04.3. ... 1 [OK]
tozd/meteor-mongodb MongoDB server image for Meteor applications. 1 [OK]
ncarlier/mongodb MongoDB Docker image based on debian. 1 [OK]
pulp/mongodb 1 [OK]
ulboralabs/alpine-mongodb Docker Alpine Mongodb 1 [OK]
mminke/mongodb Mongo db image which downloads the databas... 1 [OK]
kardasz/mongodb MongoDB 0 [OK]
tcaxias/mongodb Percona's MongoDB on Debian. Storage Engin... 0 [OK]
whatwedo/mongodb 0 [OK]
guttertec/mongodb MongoDB is a free and open-source cross-pl... 0 [OK]
peerlibrary/mongodb 0 [OK]
jecklgamis/mongodb mongodb 0 [OK]
unzeroun/mongodb Mongodb image 0 [OK]
falinux/mongodb mongodb docker image. 0 [OK]
hysoftware/mongodb Docker mongodb image for hysoftware.net 0 [OK]
birdhouse/mongodb Docker image for MongoDB used in Birdhouse. 0 [OK]
airdock/mongodb 0 [OK]
bitergia/mongodb MongoDB Docker image (deprecated) 0 [OK]
tianon/mongodb-server 0 [OK]
andreynpetrov/mongodb mongodb 0 [OK]
lukaszm/mongodb MongoDB 0 [OK]
radiantwf/mongodb MongoDB Enterprise Docker image 0 [OK]
luca3m/mongodb-example Sample mongodb app 0 [OK]
derdiedasjojo/mongodb mongodb cluster prepared 0 [OK]
faboulaye/mongodb Mongodb container 0 [OK]
tcloud/mongodb mongodb 0 [OK]
omallo/mongodb MongoDB image build 0 [OK]
romeoz/docker-mongodb MongoDB container image which can be linke... 0 [OK]
denmojo/mongodb A simple mongodb container that can be lin... 0 [OK]
pl31/debian-mongodb mongodb from debian packages 0 [OK]
kievechua/mongodb Based on Tutum's Mongodb with official image 0 [OK]
apiaryio/base-dev-mongodb WARNING: to be replaced by apiaryio/mongodb 0 [OK]
babim/mongodb docker-mongodb 0 [OK]
blkpark/mongodb mongodb 0 [OK]
partlab/ubuntu-mongodb Docker image to run an out of the box Mong... 0 [OK]
jbanetwork/mongodb mongodb 0 [OK]
baselibrary/mongodb ThoughtWorks Docker Image: mongodb 0 [OK]
glnds/mongodb CentOS 7 / MongoDB 3 0 [OK]
hpess/mongodb 0 [OK]
hope/mongodb MongoDB image 0 [OK]
recteurlp/mongodb Fedora DockerFile for MongoDB 0 [OK]
docker search --limit 100 mongodb|wc -l
101
To refine your search, you can filter it using the --filter option.
Let's search for the best Mongodb images according to the community (images with at least 5 stars):
docker search --filter=stars=5 mongo
Tutum, Frodenas and Bitnami have the most popular Mongodb images:
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
tutum/mongodb MongoDB.. 166 [OK]
frodenas/mongodb A Docker.. 12 [OK]
bitnami/mongodb Bitnami.. 5 [OK]
Images can be official or not. Just like any Open Source project, public Docker images are made by anyone who has access to Docker Hub, so consider double-checking images before using them on your production servers.
Official images could be filtered in this way:
docker search --filter=is-official=true mongo
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mongo MongoDB document databases provide high av... 2616 [OK]
mongo-express Web-based MongoDB admin interface, written... 89 [OK]
Filter output based on these conditions:
stars=<number>
is-automated=(true|false)
is-official=(true|false)
The images you can find are either public images or your own private images stored on Docker Hub. In the next section we will use a private registry.
Finding Private Images
If you have a private registry, you can use the Docker API:
curl -X GET http://localhost:5000/v1/search?q=ubuntu
Or use Docker client:
docker search localhost:5000/ubuntu
In the last example, I am using a local private registry without any SSL encryption; replace localhost with your secure remote server's domain or IP address.
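If your private registry speaks the newer v2 API, the v1 search endpoint shown above may not be available; here is a minimal sketch using the v2 catalog endpoints instead (assuming the registry listens on localhost:5000 without TLS):
# List all repositories stored in the registry (Registry HTTP API v2)
curl -X GET http://localhost:5000/v2/_catalog
# List the tags of a given repository, for example ubuntu
curl -X GET http://localhost:5000/v2/ubuntu/tags/list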
Pulling Images
If you want to pull the latest tag of an image, say Ubuntu, you just need to type:
docker pull ubuntu
But the Ubuntu image has many tags like devel, 16.10, 14.04, trusty ..etc
You can find all of the tags here: https://hub.docker.com/r/library/ubuntu/tags/
To pull the development image, type:
docker pull ubuntu:devel
You will see the tagged image in your download:
devel: Pulling from library/ubuntu
8e21f82d32cf: Pulling fs layer
54d6ba364cfb: Pulling fs layer
451f851c9d9f: Pulling fs layer
55e763f0d444: Waiting
b112a925308f: Waiting
Removing Images
To remove all of the images stored on your host, simply type:
docker rmi $(docker images -q)
We can also safely remove only dangling images by typing:
docker rmi $(docker images -f "dangling=true" -q)
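To remove a single image rather than everything, pass its name (or id) and tag to docker rmi; a quick sketch:
# Remove only the devel tag of the ubuntu image
docker rmi ubuntu:devel
# Force removal even if a stopped container still references the image
docker rmi -f ubuntu:devel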
Creating New Images Using Dockerfile
The Dockerfile is the file that Docker reads in order to build images. It is a simple text file with a specific
instructional language to assemble the different layers of an image.
You can find below a list of the different instructions that could be used to create an image and then we will see how
to build the image using Dockerfile.
FROM
In the Dockerfile, the first line should start with this instruction.
This is not widely used, but when we want to build multiple images, we can have multiple FROM instructions in the same Dockerfile.
FROM <image>:<tag>
Example:
FROM ubuntu:14.04
If the tag is not specified then Docker will download the latest tagged image.
MAINTAINER
The maintainer is not really an instruction but it indicates the name, email or website of the image maintainer. It is the equivalent of the author field in code documentation.
MAINTAINER <name>
Example:
MAINTAINER Aymen EL Amri - @eon01
RUN
You can run commands (like Linux CLI commands) or executables.
RUN <command>
or
RUN ["<executable>", "<param>", "<param1>", ... ,"<paramN>"]
The first form runs a command through the /bin/sh -c shell, just like any other Linux command. On Windows, commands are executed using the cmd /S /C shell.
Examples:
RUN ls -l
During the build process, you will see the output of the command:
Step 4 : RUN ls -l
---> Running in b3e87d26c09a
total 64
drwxr-xr-x 2 root root 4096 Oct 6 07:47 bin
drwxr-xr-x 2 root root 4096 Apr 10 2014 boot
drwxr-xr-x 5 root root 360 Nov 3 21:55 dev
drwxr-xr-x 64 root root 4096 Nov 3 21:55 etc
drwxr-xr-x 2 root root 4096 Apr 10 2014 home
drwxr-xr-x 12 root root 4096 Oct 6 07:47 lib
drwxr-xr-x 2 root root 4096 Oct 6 07:47 lib64
drwxr-xr-x 2 root root 4096 Oct 6 07:46 media
drwxr-xr-x 2 root root 4096 Apr 10 2014 mnt
drwxr-xr-x 2 root root 4096 Oct 6 07:46 opt
dr-xr-xr-x 267 root root 0 Nov 3 21:55 proc
drwx------ 2 root root 4096 Oct 6 07:47 root
drwxr-xr-x 8 root root 4096 Oct 13 21:13 run
drwxr-xr-x 2 root root 4096 Oct 13 21:13 sbin
drwxr-xr-x 2 root root 4096 Oct 6 07:46 srv
dr-xr-xr-x 13 root root 0 Nov 3 21:54 sys
drwxrwxrwt 2 root root 4096 Oct 6 07:47 tmp
drwxr-xr-x 11 root root 4096 Oct 13 21:13 usr
drwxr-xr-x 13 root root 4096 Oct 13 21:13 var
The same command could be called like this:
RUN ["/bin/sh", "-c", "ls -l"]
The latter is called the exec form.
CMD
CMD command helps you to identify which executable should be run when a container is started from your image.
Like the RUN instruction, you can use the shell form:
CMD <command> <param1> <param2> .. <paramN>
The exec form:
CMD ["<executable>", "<param>", "<param1>", ... ,"<paramN>"]
or as a default parameter to ENTRYPOINT instruction (explained later):
CMD [<"param1">,<"param2"> .. <"paramN">]
Using CMD, the same instruction (that we used in RUN) will not be run during the build; it will be executed when the container starts.
Example:
CMD ["ls", "-l"]
LABEL
The LABEL instruction is useful in case you want to add metadata to a given image.
LABEL <key1>=<value1> <key2>=<value2> .. <keyN>=<valueN>
Labels are key-value pairs.
Not only Docker images can have labels, but also:
Docker containers
Docker daemons
Docker volumes
Docker networks (and Swarm networks)
Docker Swarm nodes
Docker Swarm services
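Back to image labels, here is a quick, hypothetical sketch (the keys and values are arbitrary examples):
FROM ubuntu:16.04
LABEL maintainer="you@example.com" version="1.0" description="Painless Docker label example"
After building this image, something like docker inspect --format '{{ .Config.Labels }}' <image id> should print the labels attached to it.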
EXPOSE
When running an application or a service inside a Docker container, it obviously needs to listen for and send its data to the outside. Imagine we have a PHP/MySQL web application on a host. We created two containers, a MySQL container and a web server container, say Nginx.
At this stage, the DB server cannot communicate with the web server, the web server cannot query the database, and neither server is reachable from outside the host. The EXPOSE instruction declares the ports on which a container listens:
EXPOSE 3306
EXPOSE 80 443
Now, if we would like to open ports 80 and 443 so that Nginx can be reached from outside the host, we combine the exposed ports with the -p or -P flag at run time.
-p publishes a port (or a range of ports) to the host and -P publishes all of the exposed ports. You can expose one port number and publish it externally under another number.
We are going to see this later in this book, but keep in mind that exposing ports in the Dockerfile is not mapping ports to the host's network interfaces.
To expose a list of ports, say 3000, 3001, 3002, .., 3999, 4000, you can use this:
EXPOSE 3000-4000
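To make the connection with the run-time flags mentioned above, here is a quick sketch (nginx is used purely as an example image):
# Publish container port 80 on host port 8080 (expose one number, publish it under another)
docker run -d -p 8080:80 --name web nginx
# Publish all ports EXPOSEd by the image on random host ports
docker run -d -P nginx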
ENV
The ENV is an instruction that sets environment variables. It is the equivalent of Linux:
export variable=value
ENV works with <key>/<value> pair. You can use ENV instruction in two manners:
ENV variable1 This is value1
ENV variable2 This is value2
or like this:
ENV variable1="this is value1" variable2="this is value2"
If you are used to Dockerfiles and building your own images, you may have seen this:
ENV DEBIAN_FRONTEND noninteractive
This is discouraged because the environment variable persists after the build.
However, you can set it via ARG ( ARG instruction is explained later in this section).
ARG DEBIAN_FRONTEND=noninteractive
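A small sketch of ENV in practice; the variable names APP_ENV and APP_DIR are arbitrary examples, not from the book:
FROM ubuntu:16.04
ENV APP_ENV="production" APP_DIR="/var/www"
CMD ["env"]
Running the resulting image prints both variables; running it with docker run -e APP_ENV=staging <image id> overrides the value set by ENV at run time.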
ADD
As its name may indicate, the ADD instruction adds files from the host (the build context) into the image.
ADD has been part of Docker from the beginning and supports a few additional tricks (compared to COPY) beyond simply copying files.
ADD has two forms:
ADD <src>... <dest>
And if your path contains whitespaces, you may use this form:
ADD ["<src>",... "<dest>"]
You can use some tricks like the * operator:
ADD /var/www/* /var/www/
Or the ? operator to replace a character. If we want to copy all of the following files:
-rwxrwxrwx 1 eon01 sudo 11474 Nov 3 00:50 chapter1
-rwxrwxrwx 1 eon01 sudo 35163 Nov 3 00:50 chapter2
-rwxrwxrwx 1 root root 5233 Nov 3 00:50 chapter3
-rwxrwxrwx 1 eon01 sudo 22411 Nov 3 00:50 chapter4
-rwxr-xr-x 1 eon01 sudo 13550 Nov 6 02:26 chapter5
-rwxrwxrwx 1 eon01 sudo 3235 Nov 6 01:15 chapter6
-rwxrwxrwx 1 eon01 sudo 395 Nov 3 00:51 chapter7
-rwxrwxrwx 1 eon01 sudo 466 Nov 3 00:51 chapter8
-rwxrwxrwx 1 eon01 sudo 272 Nov 3 00:51 chapter9
We can use this:
ADD /home/eon01/painlessdocker/chapter? /var/www
Using ADD, you can also download files from URLs:
ADD https://github.com/eon01/PainlessDocker/blob/master/README.md /var/www/index.html
This instruction will download the file called README.md and copy it to index.html under /var/www.
Docker's ADD cannot discover a filename from a URL, so you should not use a bare URL like https://github.com
The ADD instruction recognizes formats like gzip, bzip2 or xz: if the source is a local tar archive, it is directly unpacked as a directory ( tar -x ).
We haven't seen WORKDIR yet but keep in mind the following point. If we would like to copy index.html to
/var/www/painlessdocker/ we should do this:
ADD index.html /var/www/painlessdocker/
But, if our WORKDIR instruction referenced /var/www/ as our working directory, we can use the following instruction:
ADD index.html painlessdocker/
This form, which references a path outside the build context, will not work:
ADD ../index.html /var/www/
COPY
Like the ADD instruction, the COPY instruction has two forms:
COPY <src>... <dest>
and
COPY ["<src>",... "<dest>"]
The second form is used for file paths containing spaces.
COPY ["/home/eon01/Painless Docker.html", "/var/www/index.html"]
You can use some tricks like the * operator:
COPY /var/www/* /var/www/
Or the ? operator to replace a character. If we want to add all of the following files:
-rwxrwxrwx 1 eon01 sudo 11474 Nov 3 00:50 chapter1
-rwxrwxrwx 1 eon01 sudo 35163 Nov 3 00:50 chapter2
-rwxrwxrwx 1 root root 5233 Nov 3 00:50 chapter3
-rwxrwxrwx 1 eon01 sudo 22411 Nov 3 00:50 chapter4
-rwxr-xr-x 1 eon01 sudo 13550 Nov 6 02:26 chapter5
-rwxrwxrwx 1 eon01 sudo 3235 Nov 6 01:15 chapter6
-rwxrwxrwx 1 eon01 sudo 395 Nov 3 00:51 chapter7
-rwxrwxrwx 1 eon01 sudo 466 Nov 3 00:51 chapter8
-rwxrwxrwx 1 eon01 sudo 272 Nov 3 00:51 chapter9
We can use this:
COPY /home/eon01/painlessdocker/chapter? /var/www
If we would like to copy index.html to /var/www/painlessdocker/ we should do this:
COPY index.html /var/www/painlessdocker/
But, if our WORKDIR instruction referenced /var/www/ as our working directory, we can use the following instruction:
COPY index.html painlessdocker/
This form, which references a path outside the build context, will not work:
COPY ../index.html /var/www/
Unlike the ADD instruction, the COPY instruction will not unpack archive files and does not work with URLs.
ENTRYPOINT
A question that you may ask is what happens when a container starts ?
Imagine we have a tiny Python server that the container should start, the ENTRYPOINT should be :
python -m SimpleHTTPServer
Or a Node.js application to run it inside the container:
node app.js
The ENTRYPOINT is what helps us to start a server in this case or to execute a command in the general case.
That's why we need the ENTRYPOINT instruction.
Whenever we start a Docker container, declaring what command should be executed is important; otherwise the container will shut down immediately.
In the general case, the ENTRYPOINT is declared in the Dockerfile.
This instruction has two forms.
The exec form is the preferred one:
ENTRYPOINT [<"executable">, <"param1">, <"param2">.... <"paramN">]
The second one is the shell form:
ENTRYPOINT <command> <param1> <param2> .. <paramN>
Example:
ENTRYPOINT ["node", "app.js"]
Let's take the example of a simple Python application:
print("Hello World")
We want the container to run the Python script as an ENTRYPOINT.
Now that we know some instructions like FROM, COPY and ENTRYPOINT, we can create a Dockerfile just using
those instructions.
echo "print('Hello World')" > app.py
touch Dockerfile
This is the content of the Dockerfile:
FROM python:2.7
COPY app.py .
ENTRYPOINT python app.py
Reading this Dockerfile, we understand that:
The python:2.7 image will be downloaded from Docker Hub
The file app.py will be copied inside the container
The command python app.py will be executed when the container starts.
We haven't seen this yet in Painless Docker, but this is the process of building the image and running the container (the build and run commands will be detailed later in this book).
The build:
docker build .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM python:2.7
---> d0614bfb3c4e
Step 2 : COPY app.py .
---> Using cache
---> 6659a70e1775
Step 3 : ENTRYPOINT python app.py
---> Using cache
---> 236e1648a508
Successfully built 236e1648a508
The run:
docker run 236e1648a508
Hello World
Notice that the Python script was executed just after running the container.
VOLUME
The VOLUME instruction creates a mount point from an internal directory. Any volume mounted using this instruction could be used by another Docker container.
We can use the JSON array form:
VOLUME ["<Directory>"]
or the plain form:
VOLUME <Directory1> <Directory2> .. <DirectoryN>
But why?
Docker containers are ephemeral, so volumes keep the data persistent even when the container is restarted, stopped or removed.
By default, a Docker container does not share its data with the host; volumes allow the host to access the container's data.
By default, two containers cannot share data even when they run on the same host; exposing a directory as a Docker volume allows other containers to access its data.
Setting up the permissions or the ownership of a volume should be done before the VOLUME instruction in the Dockerfile.
For example, this way of setting up the owner is wrong:
VOLUME /app
ADD app.py /app/
RUN chown -R foo:foo /app
However, the following Dockerfile has the correct order:
RUN mkdir /app
ADD app.py /app/
RUN chown -R foo:foo /app
VOLUME /app
A good scenario for Docker volumes is database containers, where data is mounted outside the container. This makes it easy to back up the database.
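To illustrate the database scenario, a hedged sketch (the host path /srv/mongo is an arbitrary example; the official mongo image declares VOLUME /data/db):
# Let Docker manage the anonymous volume declared by the image
docker run -d --name db1 mongo
# Or mount a host directory over the same path to control where the data lives
docker run -d --name db2 -v /srv/mongo:/data/db mongo
# List the volumes known to the Docker daemon
docker volume ls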
USER
When running a command, you may need to run it as another user (not the default user, which is root). This is when the USER instruction should be used.
USER is used like this:
USER <user>
So, root is the default user, and a process running as root inside a container has more privileges than it usually needs. You may consider USER as a security option.
Normally, an image should not run as root but as another user that you choose using the USER instruction.
Example:
USER my_user
Sometimes, you need to run a command using Root, you can simply switch between different users:
USER root
RUN <a command that should be run under Root>
USER my_user
The USER instruction applies to RUN, CMD and ENTRYPOINT instructions so that any command run by one of
these instructions is attributed to the chosen user.
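A minimal sketch, assuming a Debian/Ubuntu base image where useradd is available (on other distributions the command differs):
FROM ubuntu:16.04
# Create the unprivileged user first, then switch to it
RUN useradd --create-home --shell /bin/bash my_user
USER my_user
WORKDIR /home/my_user
CMD ["bash"]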
WORKDIR
WORKDIR is used this way:
WORKDIR <Directory>
This instruction sets the working directory so that any of the following instructions: RUN, CMD, ENTRYPOINT, COPY & ADD will be executed in this directory.
Example:
To copy index.html to /var/www , you may write this:
WORKDIR /var/www
ADD index.html .
Note that if the WORKDIR directory does not exist, it will be created.
ARG
If you want to assign a value to a variable but just during the build, you can use the ARG instruction.
The syntax is :
ARG <argument name>[=<its default value>]
or
ARG <argument name>
You can declare variables and set their values at build time with the docker build command, which we are going to see later.
Example:
ARG time
You can also set the value of the arguments (variables) in the Dockerfile.
ARG time=3s
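The value can then be supplied (or overridden) at build time with --build-arg; a quick sketch:
FROM ubuntu:16.04
ARG time=3s
RUN echo "configured delay is ${time}"
Building with docker build --build-arg time=10s . prints "configured delay is 10s" during the build, while a plain docker build . falls back to the default 3s.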
ONBUILD
Software should be built automatically; that's why Docker has the ONBUILD instruction. It is a trigger instruction executed when the image is used as the base image for another build.
ONBUILD <Docker Instruction>
The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the
FROM instruction in the downstream Dockerfile.
The ONBUILD instruction is available in Docker since its version 0.8.
Let's dive into the details. First of all, let's define what a child image is:
If imageA contains an ONBUILD instruction and imageB uses imageA as its base image in order to add other instructions (and layers) on top of it, then imageB is a child image of imageA.
This is the imageA Dockerfile:
FROM ubuntu:16.04
ONBUILD RUN echo "I will be automatically executed only in the child image"
This is the imageB Dockerfile:
FROM imageA
If you build the imageA based on Ubuntu 16.04, the ONBUILD instruction will not run the RUN echo "<..>" . But
when you build the imageB, the same instruction will be run just after the execution of the FROM instruction.
This is an example of the imageB build output where we see the message printed automatically:
Uploading context 4.51 kB
Uploading context
Step 0 : FROM imageA
# Executing 1 build triggers
Step onbuild-0 : RUN echo "I will be automatically executed only in the child image"
---> Running in acefe7b39c5
I will be automatically executed only in the child image
STOPSIGNAL
The STOPSIGNAL instruction lets you set the system call signal that will be sent to the container to make it exit.
STOPSIGNAL <signal>
This signal can be a valid unsigned number like 9, or a signal name in the SIGNAME format like SIGKILL, SIGINT ..etc
SIGTERM is the default in Docker and it is the equivalent of running kill <pid> .
In the following example, let's change it to SIGINT (the same signal sent when pressing ctrl-C):
FROM ubuntu:16.04
STOPSIGNAL SIGINT
HEALTHCHECK
The HEALTHCHECK instruction is one of the useful instructions I have been using since version 1.12.
It has two forms. Either to check a container health by running a command inside the container:
HEALTHCHECK [OPTIONS] CMD <command>
or to disable any healthcheck ( all healthchecks inherited from the base image will be disabled ).
HEALTHCHECK NONE
There are 3 options that we can use before the CMD command:
--interval=<interval duration>
The healthcheck will be executed every <interval duration>. The default interval is 30s.
--timeout=<timeout duration>
A single check that takes longer than <timeout duration> is considered failed. The default timeout is 30s.
--retries=N
The container is reported as unhealthy only after N consecutive failed checks. The default number of retries is 3.
This is an example of a healthcheck that runs every minute; each check may take at most 3 seconds, and after 3 consecutive failures the container is considered unhealthy:
HEALTHCHECK --interval=1m --timeout=3s CMD curl -f http://localhost/ || exit 1
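Once a container built from such an image is running, its health can be checked like this (a quick sketch; the container name is an example):
# The STATUS column shows (healthy) or (unhealthy) next to the uptime
docker ps
# Or query the health state directly
docker inspect --format '{{ .State.Health.Status }}' my_container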
SHELL
When using Docker, the default shell that executes all of the commands is "/bin/sh -c". This means that CMD ls -l
will be in reality run inside the container like this:
/bin/sh -c ls -l
The default shell for Windows is:
cmd /S /C
SHELL instruction must be written in JSON form in the Dockerfile.
SHELL [<"executable">, <"parameters">]
This instruction could be interesting for Windows users to choose between cmd and powershell.
Example:
SHELL ["powershell", "-command"]
SHELL ["cmd", "/S"", "/C"]
In nix you can of course work with bash, zsh, csh, tcsh* ..etc for example:
SHELL ["/bin/bash", "-c"]
ENTRYPOINT VS CMD
Both the CMD and ENTRYPOINT instructions allow us to define a command that will be executed once a container starts.
Back to the CMD instruction: we have seen that CMD can provide default parameters to the ENTRYPOINT instruction when we use this form:
CMD [<"param1">,<"param2"> .. <"paramN">]
This is a simple Dockerfile:
FROM python:2.7
COPY app.py .
ENTRYPOINT python app.py
You may say that the Dockerfile could be written like this:
FROM python:2.7
COPY app.py .
ENTRYPOINT ["python"]
CMD ["app.py"]
Good guess, but when you run the container it will show you an error.
The CMD instruction in this form should be used like this:
FROM python:2.7
COPY app.py .
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
As a conclusion,
ENTRYPOINT python app.py
could be called like this
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
CMD and ENTRYPOINT can be used alone or together, but in all cases you should use at least one of them. Using neither CMD nor ENTRYPOINT will make the execution of the container fail.
You can find almost the same table in the official Docker documentation, but this is the best way to understand all of
the possibilities:
                                 | No ENTRYPOINT                   | ENTRYPOINT entrypoint_exec entrypoint_param1                                | ENTRYPOINT ["entrypoint_exec", "entrypoint_param1"]
No CMD                           | Will generate an error          | /bin/sh -c entrypoint_exec entrypoint_param1                                | entrypoint_exec entrypoint_param1
CMD ["cmd_exec", "cmd_param1"]   | cmd_exec cmd_param1             | /bin/sh -c entrypoint_exec entrypoint_param1 cmd_exec cmd_param1            | entrypoint_exec entrypoint_param1 cmd_exec cmd_param1
CMD ["cmd_param1", "cmd_param2"] | cmd_param1 cmd_param2           | /bin/sh -c entrypoint_exec entrypoint_param1 cmd_param1 cmd_param2          | entrypoint_exec entrypoint_param1 cmd_param1 cmd_param2
CMD cmd_exec cmd_param1          | /bin/sh -c cmd_exec cmd_param1  | /bin/sh -c entrypoint_exec entrypoint_param1 /bin/sh -c cmd_exec cmd_param1 | entrypoint_exec entrypoint_param1 /bin/sh -c cmd_exec cmd_param1
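Keep in mind that both parts can also be overridden at run time; a quick sketch:
# Arguments placed after the image name replace the CMD
docker run ubuntu:16.04 ls -l /tmp
# --entrypoint replaces the ENTRYPOINT
docker run --entrypoint /bin/echo ubuntu:16.04 "entrypoint replaced"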
Building Images
The Base Image
Probably the smallest Dockerfile (not the smallest image) is the following one:
FROM <image>
Docker needs an image to run: no image, no container.
The base image is an image on top of which you add layers to create another image containing your application. As seen in the last chapter, an image is a set of layers and FROM <image> provides the necessary first layer to create the image.
Using www.imagelayers.io we can visualize an image online. Let's take the example of tutum/hello-world that you can find on the Docker Hub website just by concatenating:
https://hub.docker.com/r/
and
tutum/hello-world
which gives:
https://hub.docker.com/r/tutum/hello-world/
The Dockerfile of this image is the following:
FROM alpine
MAINTAINER support@tutum.co
RUN apk --update add nginx php-fpm && \
mkdir -p /var/log/nginx && \
touch /var/log/nginx/access.log && \
mkdir -p /tmp/nginx && \
echo "clear_env = no" >> /etc/php/php-fpm.conf
ADD www /www
ADD nginx.conf /etc/nginx/
EXPOSE 80
CMD php-fpm -d variables_order="EGPCS" && (tail -F /var/log/nginx/access.log &) && exec nginx -g "daemon off;"
The base image of this simple application is Alpine.
This image of 18 MiB has:
7 unique layers
an average layer of 3 MiB
its largest layer is 13 MB
Let's take another image and use the docker history command to see its layers:
docker pull nginx
This will pull the latest nginx image from the Docker Hub.
docker history nginx will show the different layers of nginx :
The docker history command shows you the history of an image and its different layers. You can see more information, with human-readable output, about an image by using:
docker history --no-trunc -H nginx
with:
-H, --human Print sizes and dates in human readable format (default true)
--no-trunc Don't truncate output
The output lists, for each layer, the instruction that created it and its size.
Dockerfile
The Dockerfile is a kind of script file with different instructions and arguments that describe how your image will look at the end of the build. It is possible to run an image directly without building it (like a public or a private image from Docker Hub or a private repository), but if you want to have your own specific images and organize your deployments while distributing the same image to developers and QA teams, creating a Dockerfile for your application is a good starting point.
The first rule: a Dockerfile should start with the FROM instruction, since nothing can be done without a base image. The syntax of a Dockerfile is quite simple and explicit, and the instructions should be followed to the letter. If you need to execute more things than the Docker instructions permit, you can just RUN *nix commands or use shell scripts with CMD and ENTRYPOINT.
We went through the different instructions and the differences between them; you should be able to create a Dockerfile just using this, but we are going to see more examples later, like creating micro images for Python and Node.js, or the Mongodb example.
Creating An Image Build Using Dockerfile
So we have seen the different instructions that can help us create a Dockerfile. Now in order to have a complete
image, we should build it.
The command to build a Docker image is:
docker build .
The '.' indicates that the Dockerfile is in the same directory where you are running the build command, which is the context.
The context is simply your local files:
ls -l .
Dockerfile
app/
scripts/
The context is the directory and all the subdirectories where you execute the docker build command.
If you are executing the build from a different directory you can use -f:
docker build -f /path/to/the/Dockerfile/TheDockerfile /path/to/the/context/
Example:
If your Dockerfile is under /tmp and your files are in the /app directory, the command should be:
docker build -f /tmp/Dockerfile /app
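In practice you will almost always name the image at build time with -t; a quick sketch (myapp:1.0 is an arbitrary name):
# Build the image from the current directory and tag it in one step
docker build -t myapp:1.0 .
# The Dockerfile can still live elsewhere
docker build -t myapp:1.0 -f /tmp/Dockerfile /app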
Optimizing Docker Images
You can find many images of the same application on the Internet, but not all of them are optimized: they can be really big, they may take time to build or to send over the network, and of course your deployment time could increase.
Layers are actually what decides the size of a Docker image, and optimizing your Docker clusters starts with optimizing the layers of your images.
This image has three layers:
FROM ubuntu
RUN apt-get update -y
RUN apt-get install python -y
While this one has only two layers:
FROM ubuntu
RUN apt-get update -y && apt-get install python -y
Both images install Python in an Ubuntu Docker image.
Docker images can get really big; many are over 1 GB in size. How do they get so big? Do they really need to be this big? Can we make them smaller without sacrificing functionality?
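One common trick, sketched below, is to chain installation and cleanup in a single RUN so that removed files never end up baked into an intermediate layer (the apt list cleanup is the usual example):
FROM ubuntu
RUN apt-get update -y && \
    apt-get install -y python && \
    rm -rf /var/lib/apt/lists/*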
Tagging Images
Like git, Docker has the ability to tag specific points in history as being important.
Before using a private Docker registry or Docker Hub, you should first log in with the docker login command.
Docker Hub:
docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (eon01):
Password:
Login Succeeded
Private registry:
docker login https://localhost:5000
Username: admin
Password:
Login Succeeded
Typically Docker developers use the tagging functionality to mark a release or a version. In this section, you'll learn how to list image tags and how to create new tags.
When you type docker images you can get a list of the images you have on your laptop/server.
REPOSITORY TAG IMAGE ID CREATED SIZE
mongo latest c5185a594064 5 days ago 342.7 MB
alpine latest baa5d63471ea 5 weeks ago 4.803 MB
docker/whalesay latest fb434121fc77 4 hours ago 247 MB
hello-world latest 91c95931e552 5 weeks ago 910 B
You can notice in the output of the latter command that every image has a unique id but also a tag.
docker images|awk {'print $3'}|tail -n +2
c5185a594064
baa5d63471ea
fb434121fc77
91c95931e552
If you would like to download the same images, you should pull them with the right tags:
docker pull mongo:latest
docker pull alpine:latest
docker pull docker/whalesay:latest
docker pull hello-world:latest
As you can see in the docker images output, the id of the Alpine image is baa5d63471ea . We are going to use this to
give the image a new tag using the following syntax:
docker tag <image id> <docker username>/<image name>:<tag>
You can also use
docker tag <image>:<tag> <docker username>/<image name>:<tag>
Example:
docker tag baa5d63471ea alpine:me
Now when you list your images, you will notice that both the new and the original tags are listed:
alpine latest baa5d63471ea 5 weeks ago 4.803 MB
alpine me baa5d63471ea 5 weeks ago 4.803 MB
You can try also:
docker tag mongo:latest mongo:0.1
Then list your images and you will find the new mongo tag:
REPOSITORY TAG IMAGE ID
mongo 0.1 c5185a594064
mongo latest c5185a594064
alpine latest baa5d63471ea
alpine me baa5d63471ea
Like in a simple git flow you can pull, update, commit and push your changes to a remote repository. The commit
operation can happen on a running container, that's why we are going to run the mongo container. In this section, we
haven't seen yet how to run containers in production environments but we are going to use the run command now
and see all of its details later in another chapter.
docker run -it -d -p 27017:27017 --name mongo mongo
Verify Mongodb container is running:
docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
d5faf7fd8e4d mongo "/entrypoint.sh mongo" (1) mongo
(1): 27017/tcp, 0.0.0.0:27017->27017/tcp
We will use the container id d5faf7fd8e4d with the commit command. Sure, we haven't made any changes to the running container yet, but this is just a test to show you the basic command usage. Note that we can change or keep the original tag when committing.
In the following example, the Docker image that the container d5faf7fd8e4d is running will be tagged with a new tag
mongo:0.2 :
docker commit d5faf7fd8e4d mongo:0.2
Type docker images for your verifications and notice the new tag mongo:0.2 :
REPOSITORY TAG IMAGE ID
mongo 0.2 d022237fd80d
localhost:5000/user/mongo 0.1 c5185a594064
mongo 0.1 c5185a594064
mongo latest c5185a594064
localhost:5000/mongo 0.1 c5185a594064
registry latest c9bd19d022f6
alpine latest baa5d63471ea
alpine me baa5d63471ea
A running container can also be changed while it is running. Let's take the example of the Mongodb container: we will add an administrative account to the running Mongodb instance. Log into the container:
docker exec -it mongo bash
Inside your running container, connect to your running database using mongo command:
root@d5faf7fd8e4d:/# mongo
MongoDB shell version: 3.2.11
connecting to: test
Server has startup warnings:
I CONTROL [initandlisten]
I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
I CONTROL [initandlisten] ** We suggest setting it to 'never'
I CONTROL [initandlisten]
I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
I CONTROL [initandlisten] ** We suggest setting it to 'never'
I CONTROL [initandlisten]
>
Now add a new administrator identified by login root and password 958d12fc49437db0c7baac22541f9b93 with the
administrative role root:
> use admin
switched to db admin
> db.createUser(
... {
... user: "root",
... pwd: "958d12fc49437db0c7baac22541f9b93",
... roles:["root"]
... })
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
> exit
bye
Exit your container:
root@d5faf7fd8e4d:/# exit
exit
Now that we have made important changes to our running container, let's commit those changes. You can use a private registry or Docker Hub, but I am going to use my public Docker Hub account in this example:
docker commit d5faf7fd8e4d eon01/mongodb
sha256:269685eeaecdea12ddd453cf98685cad1e6d3c76cccd6eebb05d3646fe496688
After the commit operation, we are sure that our changes are saved, we can push the image:
docker push eon01/mongodb
The push refers to a repository [docker.io/eon01/mongodb]
c01c6c921c0b: Layer already exists
80c558316eec: Layer already exists
031fad254fc0: Layer already exists
ddc5125adfe9: Layer already exists
31b3084f360d: Layer already exists
77e69eeb4171: Layer already exists
718248b95529: Layer already exists
8ba476dc30da: Layer already exists
07c6326a8206: Layer already exists
fe4c16cbf7a4: Layer already exists
latest: digest: sha256:ee50ec95fd490d60796d7782a9348ef824d84110beea4f86ced1ed15a1c8976c size: 2406
Notice the line The push refers to a repository [docker.io/eon01/mongodb] . This tells us that you can find this public image at: https://hub.docker.com/r/eon01/mongodb/
Using Docker Hub we can add a description, a README file, change the image to a private one (paid feature) ..etc
If you execute a pull command on the same remote image, you can notice that nothing will be downloaded because
you already have all of the image layers locally:
docker pull eon01/mongodb
Using default tag: latest
latest: Pulling from eon01/mongodb
Digest: sha256:ee50ec95fd490d60796d7782a9348ef824d84110beea4f86ced1ed15a1c8976c
Status: Image is up to date for eon01/mongodb:latest
We can run it like this:
docker run -it -d -p 27018:27017 --name mongo_container eon01/mongodb
Notice that we used the port mapping 27018:27017 because the first Mongodb container is mapped to the host port
27017 and it is impossible to map two containers to the same local port - same thing for the container name.
We have not seen this detail yet but we will go through all of these details in the next chapter.
docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
ff4eccdd6bee eon01/mongodb "/entrypoint.sh mongo" (1) zen_dubinsky
d5faf7fd8e4d mongo "/entrypoint.sh mongo" (2) mongo
3118896db039 registry "/entrypoint.sh /etc/" (3) my_registry
(1): 27017/tcp, 0.0.0.0:27018->27017/tcp (2): 27017/tcp, 0.0.0.0:27017->27017/tcp (3): 0.0.0.0:5000->5000/tcp
If you log into your container:
docker exec -it mongo_container bash
Start Mongodb:
root@db926ae25f1e:/# mongo
MongoDB shell version: 3.2.11
connecting to: test
Server has startup warnings:
I CONTROL [initandlisten]
I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
I CONTROL [initandlisten] ** We suggest setting it to 'never'
I CONTROL [initandlisten]
I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
I CONTROL [initandlisten] ** We suggest setting it to 'never'
I CONTROL [initandlisten]
List users:
> use admin
switched to db admin
> db.getUsers()
[
{
"_id" : "admin.root",
"user" : "root",
"db" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
]
>
You can see that the changes you made are already stored in the committed image.
In the same way, you can add other changes, commit, tag (if you need it) and push to your Docker Hub or private
registry.
This is what we have done until now: we had an image mongo that we created from a build (we are going to see the Dockerfile later in this chapter), ran the container, made some changes, committed them and pushed the result to a repository.
Your Private Registry
If you are using a private Docker repository, just add your host domain/IP like this:
docker tag <image>:<tag> <registry host>/<image name>:<tag>
Example:
docker tag mongo:latest localhost:5000/mongo:0.1
Let's run a simple local private registry to test this:
docker run -d -p 5000:5000 --name my_registry registry
Unable to find image 'registry:latest' locally
latest: Pulling from library/registry
3690ec4760f9: Already exists
930045f1e8fb: Pull complete
feeaa90cbdbc: Pull complete
61f85310d350: Pull complete
b6082c239858: Pull complete
Digest: sha256:1152291c7f93a4ea2ddc95e46d142c31e743b6dd70e194af9e6ebe530f782c17
Status: Downloaded newer image for registry:latest
3118896db039c26a74127031eefd42264e310d7cdc435e126fa8630bf8ee8c60
Verify that you are really running this with docker ps :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3118896db039 registry "/entrypoint.sh /etc/" 2 minutes ago Up 2 minutes 0.0.0.0:5000-
>5000/tcp my_registry
Tag your image with your private URL:
docker tag mongo:latest localhost:5000/mongo:0.1
Check your new tag with the docker images command:
REPOSITORY TAG IMAGE ID
localhost:5000/mongo 0.1 c5185a594064
mongo 0.1 c5185a594064
mongo latest c5185a594064
registry latest c9bd19d022f6
alpine latest baa5d63471ea
alpine me baa5d63471ea
Note that if you are testing this private Docker registry, your default username/password are admin/admin.
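Once the image is tagged with the registry host, pushing and pulling works exactly as with Docker Hub; a quick sketch using the tag created above:
# Push the tagged image to the local registry
docker push localhost:5000/mongo:0.1
# Later, pull it back from the same registry
docker pull localhost:5000/mongo:0.1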
Optimizing Images
You are probably used to Ubuntu (or any other major distribution), so you may well use Ubuntu in your Docker container. Ostensibly, your image is fine: it is using a stable distribution and you are just running Ubuntu inside your container.
FROM ubuntu
ADD app /var/www/
CMD ["start.sh"]
The problem is that you don't really need a complete OS: you installed all of the Ubuntu files but you are not going to use them; your container does not need all of them.
When you build an image, generally you will configure CMD or ENTRYPOINT (or both of them) with something that will be executed at container startup. So the only processes that will be running inside the container are the ENTRYPOINT command and the processes it spawns; all of the other OS processes will not run, and you probably won't need them.
This is a part of the output of ps aux of my current Ubuntu system:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 34172 3980 ? Ss 00:36 0:02 /sbin/init
root 2 0.0 0.0 0 0 ? S 00:36 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 00:36 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S 00:36 0:55 [rcu_sched]
root 8 0.0 0.0 0 0 ? S 00:36 0:17 [rcuos/0]
root 9 0.0 0.0 0 0 ? S 00:36 0:15 [rcuos/1]
root 10 0.0 0.0 0 0 ? S 00:36 0:17 [rcuos/2]
root 11 0.0 0.0 0 0 ? S 00:36 0:12 [rcuos/3]
root 12 0.0 0.0 0 0 ? S 00:36 0:00 [rcuos/4]
root 13 0.0 0.0 0 0 ? S 00:36 0:00 [rcuos/5]
root 14 0.0 0.0 0 0 ? S 00:36 0:00 [rcuos/6]
root 15 0.0 0.0 0 0 ? S 00:36 0:00 [rcuos/7]
root 16 0.0 0.0 0 0 ? S 00:36 0:00 [rcu_bh]
root 17 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/0]
root 18 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/1]
root 19 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/2]
root 20 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/3]
root 21 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/4]
root 22 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/5]
root 23 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/6]
root 24 0.0 0.0 0 0 ? S 00:36 0:00 [rcuob/7]
root 25 0.0 0.0 0 0 ? S 00:36 0:00 [migration/0]
root 26 0.0 0.0 0 0 ? S 00:36 0:00 [watchdog/0]
root 27 0.0 0.0 0 0 ? S 00:36 0:00 [watchdog/1]
root 28 0.0 0.0 0 0 ? S 00:36 0:00 [migration/1]
root 29 0.0 0.0 0 0 ? S 00:36 0:00 [ksoftirqd/1]
root 31 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/1:0H]
root 32 0.0 0.0 0 0 ? S 00:36 0:00 [watchdog/2]
root 33 0.0 0.0 0 0 ? S 00:36 0:00 [migration/2]
root 34 0.0 0.0 0 0 ? S 00:36 0:00 [ksoftirqd/2]
root 36 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/2:0H]
root 37 0.0 0.0 0 0 ? S 00:36 0:00 [watchdog/3]
root 38 0.0 0.0 0 0 ? S 00:36 0:00 [migration/3]
root 39 0.0 0.0 0 0 ? S 00:36 0:00 [ksoftirqd/3]
root 41 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/3:0H]
root 42 0.0 0.0 0 0 ? S< 00:36 0:00 [khelper]
root 43 0.0 0.0 0 0 ? S 00:36 0:00 [kdevtmpfs]
root 44 0.0 0.0 0 0 ? S< 00:36 0:00 [netns]
root 45 0.0 0.0 0 0 ? S 00:36 0:00 [khungtaskd]
root 46 0.0 0.0 0 0 ? S< 00:36 0:00 [writeback]
root 47 0.0 0.0 0 0 ? SN 00:36 0:00 [ksmd]
root 48 0.0 0.0 0 0 ? SN 00:36 0:20 [khugepaged]
root 49 0.0 0.0 0 0 ? S< 00:36 0:00 [crypto]
root 50 0.0 0.0 0 0 ? S< 00:36 0:00 [kintegrityd]
root 51 0.0 0.0 0 0 ? S< 00:36 0:00 [bioset]
root 52 0.0 0.0 0 0 ? S< 00:36 0:00 [kblockd]
root 53 0.0 0.0 0 0 ? S< 00:36 0:00 [ata_sff]
root 54 0.0 0.0 0 0 ? S 00:36 0:00 [khubd]
root 55 0.0 0.0 0 0 ? S< 00:36 0:00 [md]
root 56 0.0 0.0 0 0 ? S< 00:36 0:00 [devfreq_wq]
root 60 0.0 0.0 0 0 ? S 00:36 0:10 [kswapd0]
root 61 0.0 0.0 0 0 ? S 00:36 0:00 [fsnotify_mark]
root 62 0.0 0.0 0 0 ? S 00:36 0:00 [ecryptfs-kthrea]
root 74 0.0 0.0 0 0 ? S< 00:36 0:00 [kthrotld]
root 75 0.0 0.0 0 0 ? S< 00:36 0:00 [acpi_thermal_pm]
root 77 0.0 0.0 0 0 ? S< 00:36 0:00 [ipv6_addrconf]
root 99 0.0 0.0 0 0 ? S< 00:36 0:00 [deferwq]
root 100 0.0 0.0 0 0 ? S< 00:36 0:00 [charger_manager]
root 108 0.0 0.0 0 0 ? S 00:36 0:03 [kworker/3:1]
root 149 0.0 0.0 0 0 ? S< 00:36 0:00 [kpsmoused]
root 166 0.0 0.0 0 0 ? S 00:36 0:00 [scsi_eh_0]
root 168 0.0 0.0 0 0 ? S< 00:36 0:00 [scsi_tmf_0]
root 169 0.0 0.0 0 0 ? S 00:36 0:00 [scsi_eh_1]
root 170 0.0 0.0 0 0 ? S< 00:36 0:00 [scsi_tmf_1]
root 171 0.0 0.0 0 0 ? S 00:36 0:00 [scsi_eh_2]
root 172 0.0 0.0 0 0 ? S< 00:36 0:00 [scsi_tmf_2]
root 173 0.0 0.0 0 0 ? S 00:36 0:00 [scsi_eh_3]
root 174 0.0 0.0 0 0 ? S< 00:36 0:00 [scsi_tmf_3]
root 252 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/1:1H]
root 253 0.0 0.0 0 0 ? S 00:36 0:01 [jbd2/sda1-8]
root 254 0.0 0.0 0 0 ? S< 00:36 0:00 [ext4-rsv-conver]
root 256 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/0:1H]
root 293 0.0 0.0 28948 2080 ? S 00:36 0:00 mountall --daemon
root 471 0.0 0.0 0 0 ? S< 00:36 0:00 [kworker/2:1H]
root 509 0.0 0.0 0 0 ? S 00:36 0:06 [jbd2/sda3-8]
root 510 0.0 0.0 0 0 ? S< 00:36 0:00 [ext4-rsv-conver]
root 540 0.0 0.0 19480 1448 ? S 00:36 0:00 upstart-udev-bridge --daemon
root 547 0.0 0.0 52116 2244 ? Ss 00:36 0:00 /lib/systemd/systemd-udevd --daemon
root 595 0.0 0.0 0 0 ? S< 00:36 0:00 [ktpacpid]
root 622 0.0 0.0 0 0 ? S< 00:36 0:00 [kmemstick]
root 623 0.0 0.0 0 0 ? S< 00:36 0:00 [hd-audio1]
Look at all of these system processes: why should they be running inside a container? This is the output of ps aux inside a container running the PHP FPM image:
USER PID %CPU %MEM VSZ RSS TTY STAT COMMAND
root 1 0.0 0.0 113464 1084 ? Ss php-fpm: master process (/usr/local/etc/php-fpm.conf)
www-data 6 0.0 0.0 113464 1044 ? S php-fpm: pool www
www-data 7 0.0 0.0 113464 1064 ? S php-fpm: pool www
root 8 0.0 0.0 20228 3188 ? Ss bash
root 14 0.0 0.0 17500 2076 ? R ps aux
You can see that the only running processes are the php-fpm processes and the other processes spawned by php.
If we use the docker history command to see the CMD instruction, we can notice that the only process run is php-fpm:
docker history --human --no-trunc php:7-fpm|grep -i cmd
sha256:1..3 5 weeks ago /bin/sh -c #(nop) CMD ["php-fpm"]
Another drawback of using Ubuntu in this case is the size of the image, which will:
increase your build time
increase your deployment time
increase the development time
Using a minimal image will reduce all of this.
You also don't need the init system of a full operating system, since inside Docker you don't have access to all of the kernel resources. If you would like to use a "full" operating system, you are adding problems to your problem list. This is also the case for many other OSs used inside Docker, like CentOS or Debian, unless they are optimized to run Docker and follow its philosophy.
From Scratch
Actually, the smallest image that we can find in the official Docker Hub is the scratch image.
Even if you can find this image on Docker Hub, you can't:
pull it,
run it
tag any image with the same name ("scratch")
But you can still use it as a reference image in the FROM instruction, as shown in the sketch below.
The scratch image is small, fast, secure and bugless.
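A minimal sketch of how scratch is typically used, assuming hello is a statically linked binary you have already built (scratch contains no libc, so dynamically linked programs will not run):
FROM scratch
COPY hello /
CMD ["/hello"]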
Busybox
According to Wikipedia:
It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD, although many of the tools
it provides are designed to work with interfaces provided by the Linux kernel. It was specifically created for
embedded operating systems with very limited resources. The authors dubbed it "The Swiss Army knife of
Embedded Linux", as the single executable replaces basic functions of more than 300 common commands. It is
released as free software under the terms of the GNU General Public License v2.
BusyBox is software that provides several stripped-down Unix tools in a single executable file so that the ls
command (as an example) could be run this way:
/bin/busybox ls
Busybox is the winner of the smallest-image contest (about 2.5 MB) among the images we can use with Docker. Executing docker pull busybox takes barely a second.
With the advantage of the very minimal size come some cons: Busybox has neither a package manager nor a gcc compiler.
You can run the busybox executable from Docker to see the list of the binaries it includes:
docker run busybox busybox
BusyBox v1.25.1 (2016-10-07 18:17:00 UTC) multi-call binary.
BusyBox is copyrighted by many authors between 1998-2015.
Licensed under GPLv2. See source distribution for detailed
copyright notices.
Usage: busybox [function [arguments]...]
or: busybox --list[-full]
or: busybox --install [-s] [DIR]
or: function [arguments]...
BusyBox is a multi-call binary that combines many common Unix
utilities into a single executable. Most people will create a
link to busybox for each function they wish to use and BusyBox
will act like whatever it was invoked as.
Currently defined functions:
[, [[, acpid, add-shell, addgroup, adduser, adjtimex, ar, arp, arping,
ash, awk, base64, basename, beep, blkdiscard, blkid, blockdev,
bootchartd, brctl, bunzip2, bzcat, bzip2, cal, cat, catv, chat, chattr,
chgrp, chmod, chown, chpasswd, chpst, chroot, chrt, chvt, cksum, clear,
cmp, comm, conspy, cp, cpio, crond, crontab, cryptpw, cttyhack, cut,
date, dc, dd, deallocvt, delgroup, deluser, depmod, devmem, df,
dhcprelay, diff, dirname, dmesg, dnsd, dnsdomainname, dos2unix, du,
dumpkmap, dumpleases, echo, ed, egrep, eject, env, envdir, envuidgid,
ether-wake, expand, expr, fakeidentd, false, fatattr, fbset, fbsplash,
fdflush, fdformat, fdisk, fgconsole, fgrep, find, findfs, flock, fold,
free, freeramdisk, fsck, fsck.minix, fstrim, fsync, ftpd, ftpget,
ftpput, fuser, getopt, getty, grep, groups, gunzip, gzip, halt, hd,
hdparm, head, hexdump, hostid, hostname, httpd, hush, hwclock,
i2cdetect, i2cdump, i2cget, i2cset, id, ifconfig, ifdown, ifenslave,
ifplugd, ifup, inetd, init, insmod, install, ionice, iostat, ip,
ipaddr, ipcalc, ipcrm, ipcs, iplink, iproute, iprule, iptunnel,
kbd_mode, kill, killall, killall5, klogd, last, less, linux32, linux64,
linuxrc, ln, loadfont, loadkmap, logger, login, logname, logread,
losetup, lpd, lpq, lpr, ls, lsattr, lsmod, lsof, lspci, lsusb, lzcat,
lzma, lzop, lzopcat, makedevs, makemime, man, md5sum, mdev, mesg,
microcom, mkdir, mkdosfs, mke2fs, mkfifo, mkfs.ext2, mkfs.minix,
mkfs.vfat, mknod, mkpasswd, mkswap, mktemp, modinfo, modprobe, more,
mount, mountpoint, mpstat, mt, mv, nameif, nanddump, nandwrite,
nbd-client, nc, netstat, nice, nmeter, nohup, nsenter, nslookup, ntpd,
od, openvt, passwd, patch, pgrep, pidof, ping, ping6, pipe_progress,
pivot_root, pkill, pmap, popmaildir, poweroff, powertop, printenv,
printf, ps, pscan, pstree, pwd, pwdx, raidautorun, rdate, rdev,
readahead, readlink, readprofile, realpath, reboot, reformime,
remove-shell, renice, reset, resize, rev, rm, rmdir, rmmod, route, rpm,
rpm2cpio, rtcwake, run-parts, runlevel, runsv, runsvdir, rx, script,
scriptreplay, sed, sendmail, seq, setarch, setconsole, setfont,
setkeycodes, setlogcons, setserial, setsid, setuidgid, sh, sha1sum,
sha256sum, sha3sum, sha512sum, showkey, shuf, slattach, sleep, smemcap,
softlimit, sort, split, start-stop-daemon, stat, strings, stty, su,
sulogin, sum, sv, svlogd, swapoff, swapon, switch_root, sync, sysctl,
syslogd, tac, tail, tar, tcpsvd, tee, telnet, telnetd, test, tftp,
tftpd, time, timeout, top, touch, tr, traceroute, traceroute6, true,
truncate, tty, ttysize, tunctl, ubiattach, ubidetach, ubimkvol,
ubirename, ubirmvol, ubirsvol, ubiupdatevol, udhcpc, udhcpd, udpsvd,
uevent, umount, uname, unexpand, uniq, unix2dos, unlink, unlzma,
unlzop, unshare, unxz, unzip, uptime, users, usleep, uudecode,
uuencode, vconfig, vi, vlock, volname, wall, watch, watchdog, wc, wget,
which, who, whoami, whois, xargs, xz, xzcat, yes, zcat, zcip
So there is no real package manager for this tiny distribution, which is a pain. Alpine is a very good alternative if you need a package manager.
Alpine Linux
FROM alpine
Let's build it and see if it works with just a one-line Dockerfile.
docker build .
It is a working image, you can see the output:
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM alpine
latest: Pulling from library/alpine
3690ec4760f9: Pull complete
Digest: sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c
Status: Downloaded newer image for alpine:latest
---> baa5d63471ea
Successfully built baa5d63471ea
Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox.
Alpine Linux is very popular with Docker images since it is only 5.5MB and includes a package manager with many
packages.
Alpine Linux is suitable for running micro containers. The idea behind containers is also to allow easy distribution of packages; using images like Alpine is helpful in this case.
Phusion Baseimage
Phusion Baseimage (or Baseimage-docker) consumes only 6 MB of RAM. It is a special Docker image that is configured for correct use within Docker containers, and it is based on Ubuntu.
According to its authors, it has the following features:
Modifications for Docker-friendliness.
Administration tools that are especially useful in the context of Docker.
Mechanisms for easily running multiple processes, without violating the Docker philosophy.
You can use it as a base for your own Docker images.
Baseimage-docker is composed of:
Ubuntu 16.04 LTS: The base system.
A correct init process: Main article: Docker and the PID 1 zombie reaping problem. According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them (https://en.wikipedia.org/wiki/Wait_(system_call)). Most Docker containers do not have an init process that does this correctly, and as a result their containers become filled with zombie processes over time. Furthermore, docker stop sends SIGTERM to the init process, which is then supposed to stop all services. Unfortunately most init systems don't do this correctly within Docker since they're built for hardware shutdowns instead. This causes processes to be hard killed with SIGKILL, which doesn't give them a chance to correctly deinitialize things. This can cause file corruption. Baseimage-docker comes with an init process /sbin/my_init that performs both of these tasks correctly.
Fixes APT incompatibilities with Docker: See https://github.com/dotcloud/docker/issues/1024 .
syslog-ng: A syslog daemon is necessary so that many services - including the kernel itself - can correctly log to /var/log/syslog. If no syslog daemon is running, a lot of important messages are silently swallowed. Only listens locally. All syslog messages are forwarded to "docker logs".
logrotate: Rotates and compresses logs on a regular basis.
SSH server: Allows you to easily login to your container to inspect or administer things. SSH is disabled by default and is only one of the methods provided by baseimage-docker for this purpose. The other method is through docker exec. SSH is also provided as an alternative because docker exec comes with several caveats. Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
cron: The cron daemon must be running for cron jobs to work.
runit: Replaces Ubuntu's Upstart. Used for service supervision and management. Much easier to use than SysV init and supports restarting daemons when they crash. Much easier to use and more lightweight than Upstart.
setuser: A tool for running a command as another user. Easier to use than su , has a smaller attack vector than sudo , and unlike chpst this tool sets $HOME correctly. Available as /sbin/setuser .
source: https://github.com/phusion/baseimage-docker/blob/master/README.md
We are going to see how to use this image, but you can find more detailed information in the official github
repository.
The same team created a base image for running Ruby, Python, Node.js and Meteor web apps called passenger-
docker.
Running The Init System
In order to use this special init system, use the CMD instruction:
FROM phusion/baseimage:<VERSION>
CMD ["/sbin/my_init"]
Adding Additional Daemons
To add additional daemons, write a shell script which runs it and add it to the following directory:
/etc/service/
Let's take the same example used in the official documentation: memcached. First, create a memcached.sh run script locally:
echo "#!/bin/sh" > memcached.sh
echo "exec /sbin/setuser memcache /usr/bin/memcached >>/var/log/memcached.log 2>&1" >> memcached.sh
chmod +x memcached.sh
And in your Dockerfile:
FROM phusion/baseimage:<VERSION>
CMD ["/sbin/my_init"]
RUN mkdir /etc/service/memcached
ADD memcached.sh /etc/service/memcached/run
Running Scripts At A Container Startup
You should also create a small script and add it to /etc/my_init.d/ :
echo "#!/bin/sh" > script.sh
echo "date" >> script.sh
chmod +x script.sh
In the Dockerfile add the script under the /etc/my_init.d directory:
RUN mkdir -p /etc/my_init.d
ADD script.sh /etc/my_init.d/script.sh
Creating Environment Variables
While you can use the default Docker instructions to work with these variables, you can also use the
/etc/container_environment
folder to store variables that will be available inside your container. The file name is the variable name and the file content is its value:
RUN echo my_login > /etc/container_environment/LOGIN
If you log into your container, you can verify that the LOGIN variable has been set as an environment variable:
echo $LOGIN
my_login
Building A MongoDB Image Using An Optimized Base Image
In this example we are going to use the well-known phusion/baseimage, which is based on Ubuntu. As said, the authors of this image describe it as "A minimal Ubuntu base image modified for Docker-friendliness". This image only consumes 6 MB of RAM and is much more powerful than Busybox or Alpine.
A Docker image should start by the FROM instruction:
FROM phusion/baseimage
You can then add your name and/or email:
MAINTAINER Aymen El Amri - eon01.com - <amri.aymen@gmail.com>
Update the system
RUN apt-get update
Add the installation prerequisites:
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
Then install Mongodb and don't forget to remove the unused files:
RUN apt-get update
RUN apt-get install -y mongodb-10gen && rm -rf /var/lib/apt/lists/*
Create the Mongodb data directory and tell Docker that the port 27017 could be used by another container:
RUN mkdir -p /data/db
EXPOSE 27017
To start Mongodb, we are going to use the command /usr/bin/mongod --port 27017 , that's why the Dockerfile will contain the following two lines:
ENTRYPOINT ["/usr/bin/mongod"]
CMD ["--port", "27017"]
The final Dockerfile is:
FROM phusion/baseimage
MAINTAINER Aymen El Amri - eon01.com - <amri.aymen@gmail.com>
RUN apt-get update
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
RUN apt-get update
RUN apt-get install -y mongodb-10gen && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /data/db
EXPOSE 27017
CMD ["--port 27017"]
ENTRYPOINT usr/bin/mongod
Creating A Python Application Micro Image
As said, Busybox combines tiny versions of many common UNIX utilities into a single small executable. To run a Python application we need Python already installed, but Busybox has no package manager, so we are going to use Static-Python: a fork of cpython that supports building a static interpreter and true standalone executables.
We are going to download the executable:
wget https://github.com/pts/staticpython/raw/master/release/python3.2-static
The executable is only 5,7M:
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.16.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5930789 (5,7M) [application/octet-stream]
Saving to: ‘python3.2-static’
As a Python application, we are going to run this small script:
cat app.py
print("This is Python !")
So we have 3 files:
ls -l code/chapter5_staticpython/example1
total 5800
-rw-r--r-- 1 eon01 sudo 26 Nov 12 18:05 app.py
-rw-r--r-- 1 eon01 sudo 159 Nov 12 18:28 Dockerfile
-rw-r--r-- 1 eon01 sudo 5930789 Nov 12 18:05 python3.2-static
We should add the Python executable and the app.py file into the container, then execute the script:
FROM busybox
ADD app.py python3.2-static /
RUN chmod +x /python3.2-static
CMD ["/python3.2-static", "/app.py"]
A different approach to create an image is to run wget inside the Dockerfile: the Python application could also be downloaded (or pulled from a git repository) directly from inside the image.
FROM busybox
RUN busybox wget <url of the executable> && busybox wget <url of the python file>
RUN chmod +x /python3.2-static
ENTRYPOINT ["/python3.2-static", "/app.py"]
Creating A Node.js Application Micro Image
Let's create a simple Node.js application using one of the smallest possible images in order to run a really micro
container.
We have seen that Alpine is a good OS for a base image. Note that apk is the Alpine package manager.
This is the Dockerfile:
FROM alpine
RUN apk update && apk upgrade
RUN apk add nodejs
WORKDIR /usr/src/app
ADD app.js .
CMD [ "node", "app.js" ]
The Node.js script that we are going to run is in app.js:
console.log("********** Hello World **********");
Put the js file in the same directory as the Dockerfile and build the image using docker build .
The base image (with the upgrades) weighs only about 5 MiB, as you can see in the build output:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM alpine
---> baa5d63471ea
Step 2 : RUN apk update && apk upgrade
---> Running in 0a8bb4d3be05
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
v3.4.5-31-g7d74397 [http://dl-cdn.alpinelinux.org/alpine/v3.4/main]
v3.4.4-21-g75fc217 [http://dl-cdn.alpinelinux.org/alpine/v3.4/community]
OK: 5975 distinct packages available
(1/3) Upgrading musl (1.1.14-r12 -> 1.1.14-r14)
(2/3) Upgrading busybox (1.24.2-r11 -> 1.24.2-r12)
Executing busybox-1.24.2-r12.post-upgrade
(3/3) Upgrading musl-utils (1.1.14-r12 -> 1.1.14-r14)
Executing busybox-1.24.2-r12.trigger
OK: 5 MiB in 11 packages
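If you prefer to keep a reference to the built image, you can build it with a tag and run it directly; node_micro is a hypothetical tag used only for illustration:
docker build -t node_micro .
docker run --rm node_micro
The container prints ********** Hello World ********** and exits.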
Creating Your Own Docker Base Image
Building your own Docker base image is also possible.
As said, a Docker image is made of read-only layers that never change; to make a container writable, the Union File System adds a read-write layer on top of the read-only layers.
The base image we are going to build is the type of image that has no parent. There are two ways to create a base image.
Using Tar
A base image is a working Linux OS like Ubuntu, Debian, etc. Creating a base image using tar may differ from one distribution to another. Let's see how to create a Debian-based distribution.
We can use debootstrap, which is used to fetch the required Debian packages to build the base system.
First of all, if you haven't installed it yet, use your package manager to download the package:
apt-get install debootstrap
This is the man description of this package:
DESCRIPTION
debootstrap bootstraps a basic Debian system of SUITE into TARGET from MIRROR by running SCRIPT. MIRROR can be an http:// or https:// URL, a file:/// URL, or an ssh:/// URL.
The SUITE may be a release code name (eg, sid, jessie, wheezy) or a symbolic name (eg, unstable, testing, stable, oldstable)
Notice that file:/ URLs are translated to file:/// (correct scheme as described in RFC1738 for local filenames), and file:// will not work.
ssh://USER@HOST/PATH URLs are retrieved using scp; use of ssh-agent or similar is strongly recommended.
Debootstrap can be used to install Debian in a system without using an installation disk but can also be used to run a different Debian flavor in a chroot environment. This way you can create a full (minimal) Debian installation which can be used for testing purposes (see the EXAMPLES section). If you are looking for a chroot system to build packages please take a look at pbuilder.
At this step, we will create an image based on Ubuntu 16.04 Xenial.
sudo debootstrap xenial xenial > /dev/null
Once it is finished, you can check your Ubuntu image files:
ls xenial/
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Let's create an archive and import it using the Docker import command:
sudo tar -C xenial/ -c . | sudo docker import - xenial
The import command creates a new filesystem image from the contents of a tarball. It is used this way:
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
The possible options are -c and -m:
-c or --change value To apply Dockerfile instructions to the created image
-m or --message string To set a commit message for the imported image
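For example, the same tarball could be imported while applying a default CMD and a commit message. This is only a sketch, reusing the xenial/ directory created above and an arbitrary xenial:base tag:
sudo tar -C xenial/ -c . | sudo docker import --change 'CMD ["/bin/bash"]' --message "Xenial base built with debootstrap" - xenial:base
With a default CMD set this way, docker run -it xenial:base drops you into a shell instead of failing with the No command specified error shown below.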
You can run a container with the base image you have just created:
docker run xenial
But since the container has no command to run at its startup, you will have a similar error to this one:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
You can try, for example, running a command like date (just for testing purposes):
docker run xenial date
Or you can verify the distribution of the created base image's OS:
docker run xenial cat /etc/lsb-release
If you are not familiar yet with the run command, we are going to see a chapter about running containers.
This is just a quick verification of the image's integrity.
You may use the local xenial base image that you have just created in a Dockerfile:
FROM xenial
RUN ls
Sending build context to Docker daemon 252.9 MB
Step 1 : FROM xenial
---> 7a409243b212
Step 2 : RUN ls
---> Running in 5f544041222a
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
---> 57ce8f25dd63
Removing intermediate container 5f544041222a
Successfully built 57ce8f25dd63
Using Scratch
In reality, you don't need to create the scratch image because it already exists, but let's see how to create one.
Actually, a scratch image is an image that was created using an empty tar archive.
tar cv --files-from /dev/null | docker import - scratch
You can see an example in the Docker library Github repository:
FROM scratch
COPY hello /
CMD ["/hello"]
This image uses an executable called hello that you can download here:
wget https://github.com/docker-library/hello-world/raw/master/hello-world/hello
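Putting the pieces together, a quick local test could look like the following; my_hello is just an example tag, and the hello binary must be executable before it is copied into the image:
wget https://github.com/docker-library/hello-world/raw/master/hello-world/hello
chmod +x hello
docker build -t my_hello .
docker run --rm my_hello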
Chapter VI - Working With Docker Containers
Creating A Container
To create a container, simply run:
docker create <options> <image> <command> <args>
A simple example to start using this command is running:
docker create hello-world
Verify that the container was created by typing:
docker ps -a
or
docker ps --all
Docker will pick a random name for your container if you did not explicitly specify a name.
CONTAINER ID IMAGE COMMAND CREATED NAMES
ff330cd5505c hello-world "/hello" 2 minutes ago elegant_cori
You can set a name:
docker create --name hello-world-container hello-world
e..8
Verify the creation of the container:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
e490edeef083 hello-world "/hello" 24 seconds ago Created hello-world-container
ff330cd5505c hello-world "/hello" 3 minutes ago Created elegant_cori
The docker create command takes the specified image, adds a new writable layer on top of it to create the container, and prepares it for running the specified command, without starting it. You may notice that docker create is generally used with the -it options, where:
-i or --interactive To keep STDIN open even if not attached
-t or --tty To allocate a pseudo-TTY
Other options may be used like:
-p or --publish value To publish a container's port(s) to the host
-v or --volume value To bind mount a volume.
--dns value To set custom DNS servers (default [])
-e or --env value To set environment variables (default [])
-l or --label value To set meta data on a container (default [])
-m or --memory string To set a memory limit
--no-healthcheck To disable any container-specified HEALTHCHECK
--volume-driver string To set the volume driver for the container
--volumes-from value To mount volumes from the specified container(s) (default [])
-w or --workdir string To set the working directory inside the container
Example:
docker create --name web_server -it -v /etc/nginx/sites-enabled:/etc/nginx/sites-enabled -p 8080:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
75a822cd7888: Pull complete
0aefb9dc4a57: Pull complete
046e44ee6057: Pull complete
Digest: sha256:fab482910aae9630c93bd24fc6fcecb9f9f792c24a8974f5e46d8ad625ac2357
Status: Downloaded newer image for nginx:latest
e..4
You can see the other available options in the official Docker documentation, but the most commonly used ones are listed above.
docker create is similar to docker run -d, but the container is never started until you start it explicitly using docker start <container_id>, which is e..4 in the last example.
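The following sketch illustrates the difference; the container names are arbitrary:
# Created but not running until an explicit start
docker create --name created_nginx nginx
docker start created_nginx
# Equivalent result in one step: create and start in the background
docker run -d --name run_nginx nginx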
Starting And Restarting A Container
We still use the nginx container example that we created using docker create --name web_server -it -v /etc/nginx/sites-enabled:/etc/nginx/sites-enabled -p 8080:80 nginx, which has e..4 as an id.
We can use the id with the start command to start the created container:
docker start e..4
You can check if the container is running using docker ps:
CONTAINER ID IMAGE COMMAND CREATED PORTS NAMES
e70fbc6e867a nginx "nginx -g 'daemon off" 8 minutes ago 443/tcp, 0.0.0.0:8080->80/tcp web_server
The start command is used like this:
docker start <options> <container_1> .. <container_n>
Where options could be:
-a or --attach To attach STDOUT/STDERR and forward signals
--detach-keys string To override the key sequence for detaching a container
--help To print usage
-i or --interactive To attach container's STDIN
You can start multiple containers using one command:
docker start e..4 e..8
Similarly to the start command, you can restart a container using the restart command:
Nginx container:
docker restart e..4
e..4
hello-world container:
docker restart e..8
e..8
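The restart command also accepts a -t (or --time) option, the number of seconds to wait for the container to stop before killing it. For example, to give the nginx container 5 seconds to shut down:
docker restart -t 5 e..4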
If you want to see the different configurations of a stopped or a running container, say the nginx container that has
the id e..4, you can go to /var/lib/docker/containers/e..4 where you will find these files:
/var/lib/docker/containers/e..4/
├── config.v2.json
├── e..4-json.log
├── hostconfig.json
├── hostname
├── hosts
├── resolv.conf
├── resolv.conf.hash
└── shm
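Instead of reading these files directly, most of this information can also be queried with docker inspect, which prints the container's configuration as JSON:
docker inspect e..4
# Extract a single field using a Go template
docker inspect --format '{{.State.Running}}' e..4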
This is the config.v2.json configuration file (the original is not pretty-printed; it is shown here formatted for readability):
{
"StreamConfig":{
},
"State":{
"Running":true,
"Paused":false,
"Restarting":false,
"OOMKilled":false,
"RemovalInProgress":false,
"Dead":false,
"Pid":18722,
"StartedAt":"2017-01-02T23:37:03.770516537Z",
"FinishedAt":"2017-01-02T23:37:02.408427125Z",
"Health":null
},
"ID":"e..4",
"Created":"2017-01-02T23:00:06.494810718Z",
"Managed":false,
"Path":"nginx",
"Args":[
"-g",
"daemon off;"
],
"Config":{
"Hostname":"e70fbc6e867a",
"Domainname":"",
"User":"",
"AttachStdin":true,
"AttachStdout":true,
"AttachStderr":true,
"ExposedPorts":{
"443/tcp":{
},
"80/tcp":{
}
},
"Tty":true,
"OpenStdin":true,
"StdinOnce":true,
"Env":[
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.11.8-1~jessie"
],