Rewrite of the Introduction documentation

1. Re-aligns the introduction with the new product positioning.
2. Cleanup of some issues with language and formatting.
3. Makes the introduction leaner and meaner.
4. Responds to feedback from product.

Docker-DCO-1.1-Signed-off-by: James Turnbull <james@lovedthanlost.net> (github: jamtur01)
This commit is contained in:
James Turnbull 2014-05-18 18:52:41 +02:00
parent 9405a58e99
commit 0056884090
6 changed files with 479 additions and 865 deletions

View File

@ -28,9 +28,7 @@ pages:
- ['index.md', 'About', 'Docker']
- ['introduction/index.md', '**HIDDEN**']
- ['introduction/understanding-docker.md', 'About', 'Understanding Docker']
- ['introduction/technology.md', 'About', 'The Technology']
- ['introduction/working-with-docker.md', 'About', 'Working with Docker']
- ['introduction/get-docker.md', 'About', 'Get Docker']
# Installation:
- ['installation/index.md', '**HIDDEN**']

View File

@ -1,82 +1,99 @@
page_title: About Docker
page_description: Docker introduction home page
page_description: Introduction to Docker.
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
# About Docker
*Secure And Portable Containers Made Easy*
**Develop, Ship and Run Any Application, Anywhere**
## Introduction
[**Docker**](https://www.docker.io) is a container based virtualization
framework. Unlike traditional virtualization Docker is fast, lightweight
and easy to use. Docker allows you to create containers holding
all the dependencies for an application. Each container is kept isolated
from any other, and nothing gets shared.
[**Docker**](https://www.docker.io) is a platform for developers and
sysadmins to develop, ship, and run applications. Docker consists of:
## Docker highlights
* The Docker Engine - our lightweight and powerful open source container
virtualization technology combined with a work flow to help you build
and containerize your applications.
* [Docker.io](https://index.docker.io) - our SaaS service that helps you
share and manage your application stacks.
- **Containers provide sand-boxing:**
Applications run securely without outside access.
- **Docker allows simple portability:**
Containers are directories, they can be zipped and transported.
- **It all works fast:**
Starting a container is a very fast single process.
- **Docker is easy on the system resources (unlike VMs):**
No more than what each application needs.
- **Agnostic in its _essence_:**
Free of framework, language or platform dependencies.
Docker enables applications to be quickly assembled from components and
eliminates the friction when shipping code. We want to help you get code
from your desktop, tested and deployed into production as fast as
possible.
And most importantly:
## Why Docker?
- **Docker reduces complexity:**
Docker accepts commands *in plain English*, e.g. `docker run [..]`.
- **Faster delivery of your applications**
* We want to help your environment work better. Docker containers,
and the workflow that comes with them, help your developers,
sysadmins, QA folks, and release engineers work together to get code
into production and make it do something useful. We've created a standard
container format that allows developers to care about their applications
inside containers and sysadmins and operators to care about running the
container. This creates a separation of duties that makes managing and
deploying code much easier and much more streamlined.
* We make it easy to build new containers and expose how each
container is built. This helps everyone in your organization understand
how an application works and how it is built, and enables rapid
iterations and updates as well as visibility of changes.
* Docker containers are lightweight and fast! Containers have
sub-second launch times! With containers you can reduce the cycle
time in development, testing and deployment.
- **Deploy and scale more easily**
* Docker containers run (almost!) everywhere. You can deploy your
containers on desktops, physical servers, virtual machines, into
data centers and to public and private clouds.
* As Docker runs on so many platforms it makes it easy to move your
applications around. You can easily move an application from a
testing environment into the cloud and back whenever you need.
* The lightweight containers Docker creates also make scaling up and
down really fast and easy. If you need more containers you can
quickly launch them and then shut them down when you don't need them
anymore.
- **Get higher density and run more workloads**
* Docker containers don't need a hypervisor so you can pack more of
them onto your hosts. This means you get more value out of every
server and can potentially reduce the money you spend on equipment and
licenses!
- **Faster deployment makes for easier management**
* As Docker speeds up your workflow it makes it easier to make lots
of small changes instead of huge, big bang updates. Smaller
changes mean smaller risks and more uptime!
## About this guide
In this introduction we will take you on a tour and show you what
makes Docker tick.
First we'll show you [what makes Docker tick in our Understanding Docker
section](introduction/understanding-docker.md):
On the [**first page**](introduction/understanding-docker.md), which is
**_informative_**:
- You will find information on Docker;
- And discover Docker's features.
- We will also compare Docker to virtual machines;
- You will see how Docker works at a high level;
- The architecture of Docker;
- Discover Docker's features;
- See how Docker compares to virtual machines;
- And see some common use cases.
> [Click here to go to Understanding Docker](introduction/understanding-docker.md).
> [Click here to go to the Understanding
> Docker section](introduction/understanding-docker.md).
The [**second page**](introduction/technology.md) has **_technical_** information on:
Next we get [**practical** with the Working with Docker
section](introduction/working-with-docker.md) and you can learn about:
- The architecture of Docker;
- The underlying technology, and;
- *How* Docker works.
- Docker on the command line;
- Your first Docker commands;
- The basics of Docker operation.
> [Click here to go to Understanding the Technology](introduction/technology.md).
> [Click here to go to the Working with
> Docker section](introduction/working-with-docker.md).
On the [**third page**](introduction/working-with-docker.md) we get **_practical_**.
There you can:
- Learn about Docker's components (i.e. Containers, Images and the
Dockerfile);
- And get started working with them straight away.
> [Click here to go to Working with Docker](introduction/working-with-docker.md).
Finally, on the [**fourth**](introduction/get-docker.md) page, we go **_hands on_**
and see:
- The installation instructions, and;
- How Docker makes some hard problems much, much easier.
> [Click here to go to Get Docker](introduction/get-docker.md).
If you want to see how to install Docker you can jump to the
[installation](/installation/#installation) section.
> **Note**:
> We know how valuable your time is. Therefore, the documentation is prepared
> in a way that allows anyone to start from any section they need. Although we strongly
> recommend that you visit [Understanding Docker](
> introduction/understanding-docker.md) to see how Docker is different, if you
> already have some knowledge and want to quickly get started with Docker,
> don't hesitate to jump to [Working with Docker](
> introduction/working-with-docker.md).
> We know how valuable your time is so if you want to get started
> with Docker straight away don't hesitate to jump to [Working with
> Docker](introduction/working-with-docker.md). For a fuller
> understanding of Docker though we do recommend you read [Understanding
> Docker](introduction/understanding-docker.md).

View File

@ -1,77 +0,0 @@
page_title: Getting Docker
page_description: Getting Docker and installation tutorials
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
# Getting Docker
*How to install Docker?*
## Introduction
Once you are comfortable with your level of knowledge of Docker, and
feel like actually trying the product, you can download and start using
it by following the links listed below. There, you will find
installation instructions specifically tailored for your platform of choice.
## Installation Instructions
### Linux (Native)
- **Arch Linux:**
[Installation on Arch Linux](../installation/archlinux.md)
- **Fedora:**
[Installation on Fedora](../installation/fedora.md)
- **FrugalWare:**
[Installation on FrugalWare](../installation/frugalware.md)
- **Gentoo:**
[Installation on Gentoo](../installation/gentoolinux.md)
- **Red Hat Enterprise Linux:**
[Installation on Red Hat Enterprise Linux](../installation/rhel.md)
- **Ubuntu:**
[Installation on Ubuntu](../installation/ubuntulinux.md)
- **openSUSE:**
[Installation on openSUSE](../installation/openSUSE.md)
### Mac OS X (Using Boot2Docker)
In order to work, Docker makes use of some Linux kernel features that
are not supported by Mac OS X. To run Docker on OS X we install a
lightweight virtual machine and run Docker inside it.
- **Mac OS X :**
[Installation on Mac OS X](../installation/mac.md)
### Windows (Using Boot2Docker)
Docker can also run on Windows using a virtual machine. You then run
Linux and Docker inside that virtual machine.
- **Windows:**
[Installation on Windows](../installation/windows.md)
### Infrastructure-as-a-Service
- **Amazon EC2:**
[Installation on Amazon EC2](../installation/amazon.md)
- **Google Cloud Platform:**
[Installation on Google Cloud Platform](../installation/google.md)
- **Rackspace Cloud:**
[Installation on Rackspace Cloud](../installation/rackspace.md)
## Where to go from here
### Understanding Docker
Visit [Understanding Docker](understanding-docker.md) in our Getting Started manual.
### Learn about parts of Docker and the underlying technology
Visit [Understanding the Technology](technology.md) in our Getting Started manual.
### Get practical and learn how to use Docker straight away
Visit [Working with Docker](working-with-docker.md) in our Getting Started manual.
### Get the whole story
[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)

View File

@ -1,268 +0,0 @@
page_title: Understanding the Technology
page_description: Technology of Docker explained in depth
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
# Understanding the Technology
*What is the architecture of Docker? What is its underlying technology?*
## Introduction
When it comes to understanding Docker and its underlying technology
there is no *magic* involved. Everything is based on tried and tested
features of the *Linux kernel*. Docker either makes use of those
features directly or builds upon them to provide new functionality.
Aside from the technology, one of the major factors that make Docker
great is the way it is built. The project's core is very lightweight and
as much of Docker as possible is designed to be pluggable. Docker is
also built with integration in mind and has a fully featured API that
allows you to access all of the power of Docker from inside your own
applications.
## The Architecture of Docker
Docker is designed for developers and sysadmins. It's built to help you
build applications and services and then deploy them quickly and
efficiently: from development to production.
Let's take a look.
- Docker is a client-server application.
- Both the Docker client and the daemon *can* run on the same system, or;
- You can connect a Docker client with a remote Docker daemon.
- They communicate via sockets or through a RESTful API.
- Users interact with the client to command the daemon, e.g. to create, run, and stop containers.
- The daemon, receiving those commands, does the job, e.g. run a container, stop a container.
![Docker Architecture Diagram](/article-img/architecture.svg)
## The components of Docker
Docker's main components are:
- Docker *daemon*;
- Docker *client*, and;
- [Docker.io](https://index.docker.io) registry.
### The Docker daemon
As shown on the diagram above, the Docker daemon runs on a host machine.
The user does not directly interact with the daemon, but instead through
an intermediary: the Docker client.
### Docker client
The Docker client is the primary user interface to Docker. It is tasked
with accepting commands from the user and communicating back and forth
with a Docker daemon to manage the container lifecycle on any host.
### Docker.io registry
[Docker.io](https://index.docker.io) is the global archive (and
directory) of user supplied Docker container images. It currently hosts
a large (and rapidly growing) number of projects where you
can find almost any popular application or deployment stack readily
available to download and run with a single command.
As a social community project, Docker tries to provide all necessary
tools for everyone to grow with other *Dockers*. By issuing a single
command through the Docker client you can start sharing your own
creations with the rest of the world.
However, knowing that not everything can be shared, [Docker.io](
https://index.docker.io) also offers private repositories. To see
the available plans, you can click [here](https://index.docker.io/plans).
Using [*docker-registry*](https://github.com/dotcloud/docker-registry), it is
also possible to run your own private Docker image registry service on your own
servers.
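As a rough sketch of what running your own registry can look like (the `registry` image name here is an assumption; see the docker-registry project itself for authoritative instructions):

# Run a private registry inside a container, publishing it on port 5000.
$ docker run -d -p 5000:5000 registry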
> **Note:** To learn more about the [*Docker.io*](http://index.docker.io)
> registry (for public *and* private repositories), check out the [Registry &
> Index Spec](http://docs.docker.io/api/registry_index_spec/).
### Summary
- **When you install Docker, you get all the components:**
The daemon, the client and access to the [Docker.io](http://index.docker.io) registry.
- **You can run these components together or distributed:**
Servers with the Docker daemon running, controlled by the Docker client.
- **You can benefit from the public registry:**
Download and build upon images created by the community.
- **You can start a private repository for proprietary use.**
Sign up for a [plan](https://index.docker.io/plans) or host your own [docker-registry](
https://github.com/dotcloud/docker-registry).
## Elements of Docker
The basic elements of Docker are:
- **Containers:**
The run portion of Docker. Your applications run inside containers.
- **Images:**
The build portion of Docker. Your containers are built from images.
- **The Dockerfile:**
A file that contains simple instructions that build Docker images.
To get practical and learn what they are, and **_how to work_** with
them, continue to [Working with Docker](working-with-docker.md). If you would like to
understand **_how they work_**, stay here and continue reading.
## The underlying technology
The power of Docker comes from the underlying technology it is built
from. A series of operating system features are carefully glued together
to provide Docker's features and an easy-to-use interface to
those features. In this section, we will look at the main operating system
features that Docker uses to make containerization easy.
### Namespaces
Docker takes advantage of a technology called `namespaces` to provide
an isolated workspace we call a *container*. When you run a container,
Docker creates a set of *namespaces* for that container.
This provides a layer of isolation: each process runs in its own
namespace and does not have access outside it.
Some of the namespaces Docker uses are:
- **The `pid` namespace:**
Used for process numbering (PID: Process ID)
- **The `net` namespace:**
Used for managing network interfaces (NET: Networking)
- **The `ipc` namespace:**
Used for managing access to IPC resources (IPC: InterProcess Communication)
- **The `mnt` namespace:**
Used for managing mount-points (MNT: Mount)
- **The `uts` namespace:**
Used for isolating kernel / version identifiers. (UTS: Unix Timesharing System)
### Control groups
Docker also makes use of another technology called `cgroups` or control
groups. A key need to run applications in isolation is to have them
contained, not just in terms of related filesystem and/or dependencies,
but also resources. Control groups allow Docker to fairly share the
available hardware resources between containers and, if asked, to set up
limits and constraints, for example limiting the memory of a container
to a maximum of 128 MB.
### UnionFS
UnionFS or union filesystems are filesystems that operate by creating
layers, making them very lightweight and fast. Docker uses union
filesystems to provide the building blocks for containers. We'll see
more about this below.
### Containers
Docker combines these components to build a container format we call
`libcontainer`. Docker also supports traditional Linux containers like
[LXC](https://linuxcontainers.org/) which also make use of these
components.
## How does everything work
A lot happens when Docker creates a container.
Let's see how it works!
### How does a container work?
A container consists of an operating system, user added files and
meta-data. Each container is built from an image. That image tells
Docker what the container holds, what process to run when the container
is launched and a variety of other configuration data. The Docker image
is read-only. When Docker runs a container from an image it adds a
read-write layer on top of the image (using the UnionFS technology we
saw earlier) in which your application then runs.
### What happens when you run a container?
The Docker client (or the API!) tells the Docker daemon to run a
container. Let's take a look at a simple `Hello world` example.
$ docker run -i -t ubuntu /bin/bash
Let's break down this command. The Docker client is launched using the
`docker` binary. The bare minimum the Docker client needs to tell the
Docker daemon is:
* What Docker image to build the container from;
* The command you want to run inside the container when it is launched.
So what happens under the covers when we run this command?
Docker begins with:
- **Pulling the `ubuntu` image:**
Docker checks for the presence of the `ubuntu` image and if it doesn't
exist locally on the host, then Docker downloads it from [Docker.io](https://index.docker.io).
- **Creates a new container:**
Once Docker has the image it creates a container from it.
- **Allocates a filesystem and mounts a read-write _layer_:**
The container is created in the filesystem and a read-write layer is added to the image.
- **Allocates a network / bridge interface:**
Creates a network interface that allows the Docker container to talk to the local host.
- **Sets up an IP address:**
Intelligently finds and attaches an available IP address from a pool.
- **Executes _a_ process that you specify:**
Runs your application, and;
- **Captures and provides application output:**
Connects and logs standard input, outputs and errors for you to see how your application is running.
### How does a Docker Image work?
We've already seen that Docker images are read-only templates that
Docker containers are launched from. When you launch that container it
creates a read-write layer on top of that image that your application is
run in.
Docker images are built using a simple descriptive set of steps we
call *instructions*. Instructions are stored in a file called a
`Dockerfile`. Each instruction writes a new layer to an image using the
UnionFS technology we saw earlier.
Every image starts from a base image, for example `ubuntu`, a base Ubuntu
image, or `fedora`, a base Fedora image. Docker builds and provides these
base images via [Docker.io](http://index.docker.io).
### How does a Docker registry work?
The Docker registry is a store for your Docker images. Once you build a
Docker image you can *push* it to a public or private repository on [Docker.io](
http://index.docker.io) or to your own registry running behind your firewall.
Using the Docker client, you can search for already published images and
then pull them down to your Docker host to build containers from them
(or even build on these images).
[Docker.io](http://index.docker.io) provides both public and
private storage for images. Public storage is searchable and can be
downloaded by anyone. Private repositories are excluded from search
results and only you and your users can pull them down and use them to
build containers. You can [sign up for a plan here](https://index.docker.io/plans).
To learn more, check out the [Working with Repositories](
http://docs.docker.io/use/workingwithrepository) section from the
[Docker documentation](http://docs.docker.io).
## Where to go from here
### Understanding Docker
Visit [Understanding Docker](understanding-docker.md) in our Getting Started manual.
### Get practical and learn how to use Docker straight away
Visit [Working with Docker](working-with-docker.md) in our Getting Started manual.
### Get the product and go hands-on
Visit [Get Docker](get-docker.md) in our Getting Started manual.
### Get the whole story
[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)

View File

@ -1,38 +1,129 @@
page_title: Understanding Docker
page_description: Docker explained in depth
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
page_keywords: docker, introduction, documentation, about, technology, understanding
# Understanding Docker
*What is Docker? What makes it great?*
**What is Docker?**
Building development lifecycles, pipelines and deployment tooling is
hard. It's not easy to create portable applications and services.
There's often high friction getting code from your development
environment to production. It's also hard to ensure those applications
and services are consistent, up-to-date and managed.
Docker is a platform for developing, shipping, and running applications.
Docker is designed to deliver your applications faster. With Docker you
can separate your applications from your infrastructure AND treat your
infrastructure like a managed application. We want to help you ship code
faster, test faster, deploy faster and shorten the cycle between writing
code and running code.
Docker is designed to solve these problems for both developers and
sysadmins. It is a lightweight framework (with a powerful API) that
provides a lifecycle for building and deploying applications into
containers.
Docker does this by combining a lightweight container virtualization
platform with workflow and tooling that helps you manage and deploy your
applications.
Docker provides a way to run almost any application securely isolated
into a container. The isolation and security allows you to run many
containers simultaneously on your host. The lightweight nature of
At its core Docker provides a way to run almost any application securely
isolated into a container. The isolation and security allows you to run
many containers simultaneously on your host. The lightweight nature of
containers, which run without the extra overhead of a hypervisor, means
you can get more out of your hardware.
**Note:** Docker itself is *shipped* with the Apache 2.0 license and it
is completely open-source — *the pun? very much intended*.
Surrounding the container virtualization, we provide tooling and a
platform to help you get your applications (and their supporting
components) into Docker containers, to distribute and ship those
containers to your teams to develop and test on them and then to deploy
those applications to your production environment whether it be in a
local data center or the Cloud.
### What are the Docker basics I need to know?
## What can I use Docker for?
Docker has three major components:
* Faster delivery of your applications
Docker is perfect for helping you with the development lifecycle. Docker
can allow your developers to develop on local containers that contain
your applications and services. It can integrate into a continuous
integration and deployment workflow.
Your developers write code locally and share their development stack via
Docker with their colleagues. When they are ready they can push their
code and the stack they are developing on to a test environment and
execute any required tests. From the testing environment you can then
push your Docker images into production and deploy your code.
* Deploy and scale more easily
Docker's container platform allows you to have highly portable
workloads. Docker containers can run on a developer's local host, on
physical or virtual machines in a data center or in the Cloud.
Docker's portability and lightweight nature also makes managing
workloads dynamically easy. You can use Docker to build and scale out
applications and services. Docker's speed means that scaling can be near
real time.
* Get higher density and run more workloads
Docker is lightweight and fast. It provides a viable (and
cost-effective!) alternative to hypervisor-based virtual machines. This
is especially useful in high density environments, for example building
your own Cloud or Platform-as-a-Service. But it is also useful
for small and medium deployments where you want to get more out of the
resources you have.
## What are the major Docker components?
Docker has two major components:
* Docker: the open source container virtualization platform.
* [Docker.io](https://index.docker.io): our Software-as-a-Service
platform for sharing and managing Docker containers.
**Note:** Docker is licensed with the open source Apache 2.0 license.
## What is the architecture of Docker?
Docker has a client-server architecture. The Docker *client* talks to
the Docker *daemon* which does the heavy lifting of building, running
and distributing your Docker containers. Both the Docker client and the
daemon *can* run on the same system, or you can connect a Docker client
with a remote Docker daemon. The Docker client and daemon can
communicate via sockets or through a RESTful API.
![Docker Architecture Diagram](/article-img/architecture.svg)
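Because the client and daemon are separate, the client does not have to run on the same host as the daemon. As a hedged sketch of what the remote case might look like (the host name and port below are placeholders, and the daemon must have been started listening on a TCP socket):

# Ask a remote Docker daemon for information about itself.
# `remote-host` and the port are hypothetical examples.
$ docker -H tcp://remote-host:4243 info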
### The Docker daemon
As shown on the diagram above, the Docker daemon runs on a host machine.
The user does not directly interact with the daemon, but instead through
the Docker client.
### The Docker client
The Docker client, in the form of the `docker` binary, is the primary user
interface to Docker. It is tasked with accepting commands from the user
and communicating back and forth with a Docker daemon.
### Inside Docker
Inside Docker there are three concepts we'll need to understand:
* Docker images.
* Docker registries.
* Docker containers.
#### Docker images
The Docker image is a read-only template, for example an Ubuntu operating system
with Apache and your web application installed. Docker containers are
created from images. You can download Docker images that other people
have created, or use the simple tools Docker provides to build new images
or update existing ones. You can consider Docker images to be the **build**
portion of Docker.
#### Docker Registries
Docker registries hold images. These are public (or private!) stores
that you can upload or download images to and from. The public Docker
registry is called [Docker.io](http://index.docker.io). It provides a
huge collection of existing images that you can use. These images can be
images you create yourself or you can make use of images that others
have previously created. You can consider Docker registries the
**distribution** portion of Docker.
#### Docker containers
@ -40,233 +131,201 @@ Docker containers are like a directory. A Docker container holds
everything that is needed for an application to run. Each container is
created from a Docker image. Docker containers can be run, started,
stopped, moved and deleted. Each container is an isolated and secure
application platform. You can consider Docker containers the *run*
portion of the Docker framework.
application platform. You can consider Docker containers the **run**
portion of Docker.
#### Docker images
## So how does Docker work?
The Docker image is a template, for example an Ubuntu
operating system with Apache and your web application installed. Docker
containers are launched from images. Docker provides a simple way to
build new images or update existing images. You can consider Docker
images to be the *build* portion of the Docker framework.
We've learned so far that:
#### Docker Registries
Docker registries hold images. These are public (or private!) stores
that you can upload or download images to and from. These images can be
images you create yourself or you can make use of images that others
have previously created. Docker registries allow you to build simple and
powerful development and deployment work flows. You can consider Docker
registries the *share* portion of the Docker framework.
### How does Docker work?
Docker is a client-server framework. The Docker *client* commands the Docker
*daemon*, which in turn creates, builds and manages containers.
The Docker daemon takes advantage of some neat Linux kernel and
operating system features, like `namespaces` and `cgroups`, to build
isolated containers. Docker provides a simple abstraction layer to these
technologies.
> **Note:** If you would like to learn more about the underlying technology,
> why not jump to [Understanding the Technology](technology.md) where we talk about them? You can
> always come back here to continue learning about features of Docker and what
> makes it different.
## Features of Docker
In order to get a good grasp of the capabilities of Docker you should
read the [User's Manual](http://docs.docker.io). Let's look at a summary
of Docker's features to give you an idea of how Docker might be useful
to you.
### User centric and simple to use
*Docker is made for humans.*
It's easy to get started and easy to build and deploy applications with
Docker: or as we say "*dockerize*" them! As much of Docker as possible
uses plain English for commands and tries to be as lightweight and
transparent as possible. We want to get out of the way so you can build
and deploy your applications.
### Docker is Portable
*Dockerize And Go!*
Docker containers are highly portable. Docker provides a standard
container format to hold your applications:
* You take care of your applications inside the container, and;
* Docker takes care of managing the container.
Any machine, be it bare-metal or virtualized, can run any Docker
container. The sole requirement is to have Docker installed.
**This translates to:**
- Reliability;
- Freeing your applications from dependency hell;
- A natural guarantee that things will work, anywhere.
### Lightweight
*No more wasted resources.*
Containers are lightweight, in fact, they are extremely lightweight.
Unlike traditional virtual machines, which have the overhead of a
hypervisor, Docker relies on operating system level features to provide
isolation and security. A Docker container does not need anything more
than what your application needs to run.
This translates to:
- Ability to deploy a large number of applications on a single system;
- Lightning fast start up times and reduced overhead.
### Docker can run anything
*An amazing host! (again, pun intended.)*
Docker isn't prescriptive about what applications or services you can run
inside containers. We provide use cases and examples for running web
services, databases, applications - just about anything you can imagine
can run in a Docker container.
**This translates to:**
- Ability to run a wide range of applications;
- Ability to deploy reliably without repeating yourself.
### Plays well with others
*A wonderful guest.*
Today, it is possible to install and use Docker almost anywhere, even on
non-Linux systems such as Windows or Mac OS X, thanks to a project called
[Boot2Docker](http://boot2docker.io).
**This translates to running Docker (and Docker containers!) _anywhere_:**
- **Linux:**
Ubuntu, CentOS / RHEL, Fedora, Gentoo, openSUSE and more.
- **Infrastructure-as-a-Service:**
Amazon AWS, Google GCE, Rackspace Cloud and probably, your favorite IaaS.
- **Microsoft Windows**
- **OS X**
### Docker is Responsible
*A tool that you can trust.*
Docker does not just bring you a set of tools to isolate and run
applications. It also allows you to specify constraints and controls on
those resources.
**This translates to:**
- Fine tuning the available resources for each application;
- Allocating memory or CPU intelligently to make the most of your environment;
All without dealing with complicated commands or third party applications.
### Docker is Social
*Docker knows that No One Is an Island.*
Docker allows you to share the images you've built with the world. And
lots of people have already shared their own images.
To facilitate this sharing Docker comes with a public registry called
[Docker.io](http://index.docker.io). If you don't want your images to be
public you can also use private images on [Docker.io](https://index.docker.io)
or even run your own registry behind your firewall.
**This translates to:**
- No more wasting time building everything from scratch;
- Easily and quickly save your application stack;
- Share and benefit from the depth of the Docker community.
## Docker versus Virtual Machines
> I suppose it is tempting, if the *only* tool you have is a hammer, to
> treat *everything* as if it were a nail.
> — **_Abraham Maslow_**
**Docker containers are:**
- Easy on the resources;
- Extremely light to deal with;
- Free of substantial overhead;
- Very easy to work with;
- Agnostic;
- Able to run *on* virtual machines;
- Secure and isolated;
- *Artful*, *social*, *fun*, and;
- Powerful sand-boxes.
**Docker containers are not:**
- Hardware or OS emulators;
- Resource heavy;
- Platform, software or language dependent.
## Docker Use Cases
Docker is a framework. As a result it's flexible and powerful enough to
be used in a lot of different use cases.
### For developers
- **Developed with developers in mind:**
Build, test and ship applications with nothing but Docker and lean
containers.
- **Re-usable building blocks to create more:**
Docker images are easily updated building blocks.
- **Automatically build-able:**
It has never been this easy to build - *anything*.
- **Easy to integrate:**
A powerful, fully featured API allows you to integrate Docker into your tooling.
### For sysadmins
- **Efficient (and DevOps friendly!) lifecycle:**
Operations and development are consistent, repeatable and reliable.
- **Balanced environments:**
Processes between development, testing and production are leveled.
- **Improvements on speed and integration:**
Containers are almost nothing more than isolated, secure processes.
- **Lowered costs of infrastructure:**
Containers are lightweight and far less heavy on resources than virtual machines.
- **Portable configurations:**
Issues and overheads with dealing with configurations and systems are eliminated.
### For everyone
- **Increased security without performance loss:**
Replacing VMs with containers provides security without additional
hardware (or software).
- **Portable:**
You can easily move applications and workloads across different operating
systems and platforms.
## Where to go from here
### Learn about Parts of Docker and the underlying technology
Visit [Understanding the Technology](technology.md) in our Getting Started manual.
### Get practical and learn how to use Docker straight away
Visit [Working with Docker](working-with-docker.md) in our Getting Started manual.
### Get the product and go hands-on
Visit [Get Docker](get-docker.md) in our Getting Started manual.
1. You can build Docker images that hold your applications.
2. You can create Docker containers from those Docker images to run your
applications.
3. You can share those Docker images via
[Docker.io](https://index.docker.io) or your own registry.
Let's look at how these elements combine together to make Docker work.
### How does a Docker Image work?
We've already seen that Docker images are read-only templates that
Docker containers are launched from. Each image consists of a series of
layers. Docker makes use of [union file
systems](http://en.wikipedia.org/wiki/UnionFS) to combine these layers
into a single image. Union file systems allow files and directories of
separate file systems, known as branches, to be transparently overlaid,
forming a single coherent file system.
One of the reasons Docker is so lightweight is because of these layers.
When you change a Docker image, for example update an application to a
new version, this builds a new layer. Hence, rather than replacing the whole
image or entirely rebuilding, as you may do with a virtual machine, only
that layer is added or updated. Now you don't need to distribute a whole new image,
just the update, making distributing Docker images fast and simple.
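If you'd like to see these layers for an image you already have locally, the `docker history` command will list them (this example assumes you have pulled the `ubuntu` image):

# Usage: [sudo] docker history [image name]
# Example:
$ docker history ubuntu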
Every image starts from a base image, for example `ubuntu`, a base Ubuntu
image, or `fedora`, a base Fedora image. You can also use images of your
own as the basis for a new image, for example if you have a base Apache
image you could use this as the base of all your web application images.
> **Note:**
> Docker usually gets these base images from [Docker.io](https://index.docker.io).
Docker images are then built from these base images using a simple
descriptive set of steps we call *instructions*. Each instruction
creates a new layer in our image. Instructions include steps like:
* Run a command.
* Add a file or directory.
* Create an environment variable.
* Specify what process to run when launching a container from this image.
These instructions are stored in a file called a `Dockerfile`. Docker
reads this `Dockerfile` when you request an image be built, executes the
instructions and returns a final image.
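To make that concrete, here is a minimal, hypothetical `Dockerfile` sketch using one instruction of each kind listed above (the package, file and maintainer details are placeholders only):

# A hypothetical Dockerfile: each instruction below creates a new layer.
FROM ubuntu
MAINTAINER Jane Doe <jane@example.com>
# Run a command inside the image:
RUN apt-get update && apt-get install -y nginx
# Add a file (assumes index.html sits next to the Dockerfile):
ADD index.html /var/www/index.html
# Create an environment variable:
ENV APP_ENV production
# The process to run when a container is launched from this image:
CMD /usr/sbin/nginx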
### How does a Docker registry work?
The Docker registry is the store for your Docker images. Once you build
a Docker image you can *push* it to the public registry, [Docker.io](
https://index.docker.io) or to your own registry running behind your
firewall.
Using the Docker client, you can search for already published images and
then pull them down to your Docker host to build containers from them.
[Docker.io](https://index.docker.io) provides both public and
private storage for images. Public storage is searchable and can be
downloaded by anyone. Private storage is excluded from search
results and only you and your users can pull them down and use them to
build containers. You can [sign up for a plan
here](https://index.docker.io/plans).
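As a hedged example, pushing an image you have built to a repository under your own user name might look like this (the user and image names are placeholders):

# Push a locally built image to your repository on Docker.io.
$ docker push myUserName/nginx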
### How does a container work?
A container consists of an operating system, user added files and
meta-data. As we've discovered each container is built from an image. That image tells
Docker what the container holds, what process to run when the container
is launched and a variety of other configuration data. The Docker image
is read-only. When Docker runs a container from an image it adds a
read-write layer on top of the image (using a union file system as we
saw earlier) in which your application is then run.
### What happens when you run a container?
The Docker client, using the `docker` binary or via the API, tells the
Docker daemon to run a container. Let's take a look at what happens
next.
$ docker run -i -t ubuntu /bin/bash
Let's break down this command. The Docker client is launched using the
`docker` binary with the `run` option telling it to launch a new
container. The bare minimum the Docker client needs to tell the
Docker daemon to run the container is:
* What Docker image to build the container from, here `ubuntu`, a base
Ubuntu image;
* The command you want to run inside the container when it is launched,
here `/bin/bash` to launch a Bash shell inside the new container.
So what happens under the covers when we run this command?
Docker begins with:
- **Pulling the `ubuntu` image:**
Docker checks for the presence of the `ubuntu` image and if it doesn't
exist locally on the host, then Docker downloads it from
[Docker.io](https://index.docker.io). If the image already exists then
Docker uses it for the new container.
- **Creates a new container:**
Once Docker has the image it creates a container from it:
* **Allocates a filesystem and mounts a read-write _layer_:**
The container is created in the file system and a read-write layer is
added to the image.
* **Allocates a network / bridge interface:**
Creates a network interface that allows the Docker container to talk to
the local host.
* **Sets up an IP address:**
Finds and attaches an available IP address from a pool.
- **Executes a process that you specify:**
Runs your application, and;
- **Captures and provides application output:**
Connects and logs standard input, outputs and errors for you to see how
your application is running.
Now you have a running container! From here you can manage your running
container, interact with your application and then when finished stop
and remove your container.
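For example, if we had given our container the (hypothetical) name `my_container`, the tail end of that lifecycle might look like this:

# Gracefully stop the container, then remove it from the host.
$ docker stop my_container
$ docker rm my_container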
## The underlying technology
Docker is written in Go and makes use of several Linux kernel features to
deliver the features we've seen.
### Namespaces
Docker takes advantage of a technology called `namespaces` to provide an
isolated workspace we call a *container*. When you run a container,
Docker creates a set of *namespaces* for that container.
This provides a layer of isolation: each aspect of a container runs in
its own namespace and does not have access outside it.
Some of the namespaces that Docker uses are:
- **The `pid` namespace:**
Used for process isolation (PID: Process ID).
- **The `net` namespace:**
Used for managing network interfaces (NET: Networking).
- **The `ipc` namespace:**
Used for managing access to IPC resources (IPC: InterProcess
Communication).
- **The `mnt` namespace:**
Used for managing mount-points (MNT: Mount).
- **The `uts` namespace:**
Used for isolating kernel and version identifiers. (UTS: Unix Timesharing
System).
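As a quick illustration of the `pid` namespace listed above, the process you launch inside a container sees itself as PID 1 (this sketch assumes the `ubuntu` image is available):

$ docker run ubuntu /bin/bash -c 'echo $$'
1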
### Control groups
Docker also makes use of another technology called `cgroups` or control
groups. A key need to run applications in isolation is to have them only
use the resources you want. This ensures containers are good
multi-tenant citizens on a host. Control groups allow Docker to share
the available hardware resources between containers and, if required,
to set up limits and constraints, for example limiting the memory
available to a specific container.
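As a hedged sketch of such a constraint (the flag value is just an example and the `ubuntu` image is assumed):

# Cap this container's memory at 128 MB using a cgroup-backed limit.
$ docker run -m 128m -i -t ubuntu /bin/bash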
### Union file systems
Union file systems or UnionFS are file systems that operate by creating
layers, making them very lightweight and fast. Docker uses union file
systems to provide the building blocks for containers. We learned about
union file systems earlier in this document. Docker can make use of
several union file system variants including: AUFS, btrfs, vfs, and
DeviceMapper.
### Container format
Docker combines these components into a wrapper we call a container
format. The default container format is called `libcontainer`. Docker
also supports traditional Linux containers using
[LXC](https://linuxcontainers.org/). In the future Docker may support other
container formats, for example integration with BSD Jails or Solaris
Zones.
## Next steps
### Learning how to use Docker
Visit [Working with Docker](working-with-docker.md).
### Installing Docker
Visit the [installation](/installation/#installation) section.
### Get the whole story
[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)

View File

@ -1,80 +1,63 @@
page_title: Working with Docker and the Dockerfile
page_description: Working with Docker and The Dockerfile explained in depth
page_title: Introduction to working with Docker
page_description: Introduction to working with Docker and Docker commands.
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
# Working with Docker and the Dockerfile
# An Introduction to working with Docker
*How to use and work with Docker?*
**Getting started with Docker**
> **Warning! Don't let this long page bore you.**
> If you prefer a summary and would like to see how a specific command
> **Note:**
> If you would like to see how a specific command
> works, check out the glossary of all available client
> commands on our [User's Manual: Commands Reference](
> http://docs.docker.io/reference/commandline/cli).
> commands on our [Commands Reference](/reference/commandline/cli).
## Introduction
On the last page, [Understanding the Technology](technology.md), we covered the
components that make up Docker and learnt about the
underlying technology and *how* everything works.
In the [Understanding Docker](understanding-docker.md) section we
covered the components that make up Docker, learned about the underlying
technology and saw *how* everything works.
Now, it is time to get practical and see *how to work with* the Docker client,
Docker containers and images and the `Dockerfile`.
Now, let's get an introduction to the basics of interacting with Docker.
> **Note:** You are encouraged to take a good look at the container,
> image and `Dockerfile` explanations here to have a better understanding
> of what exactly they are and to get an overall idea of how to work with
> them. On the next page (i.e., [Get Docker](get-docker.md)), you will be
> able to find links for platform-centric installation instructions.
> **Note:**
> This page assumes you have a host with a running Docker
> daemon and access to a Docker client. To see how to install Docker on
> a variety of platforms see the [installation
> section](/installation/#installation).
## Elements of Docker
As we mentioned on the, [Understanding the Technology](technology.md) page, the main
elements of Docker are:
- Containers;
- Images, and;
- The `Dockerfile`.
> **Note:** This page is more *practical* than *technical*. If you are
> interested in understanding how these tools work behind the scenes
> and do their job, you can always read more on
> [Understanding the Technology](technology.md).
## Working with the Docker client
In order to work with the Docker client, you need to have a host with
the Docker daemon installed and running.
### How to use the client
## How to use the client
The client provides you a command-line interface to Docker. It is
accessed by running the `docker` binary.
> **Tip:** The below instructions can be considered a summary of our
> *interactive tutorial*. If you prefer a more hands-on approach without
> installing anything, why not give that a shot and check out the
> [Docker Interactive Tutorial](https://www.docker.io/gettingstarted).
> **Tip:**
> The below instructions can be considered a summary of our
> [interactive tutorial](https://www.docker.io/gettingstarted). If you
> prefer a more hands-on approach without installing anything, why not
> give that a shot and check out the
> [tutorial](https://www.docker.io/gettingstarted).
The `docker` client usage consists of passing a chain of arguments:
The `docker` client usage is pretty simple. Each action you can take
with Docker is a command and each command can take a series of
flags and arguments.
# Usage: [sudo] docker [option] [command] [arguments] ..
# Usage: [sudo] docker [flags] [command] [arguments] ..
# Example:
$ docker run -i -t ubuntu /bin/bash
### Our first Docker command
## Using the Docker client
Let's get started with our first Docker command by checking the
version of the currently installed Docker client using the `docker
version` command.
Let's get started with the Docker client by running our first Docker
command. We're going to use the `docker version` command to return
version information on the currently installed Docker client and daemon.
# Usage: [sudo] docker version
# Example:
$ docker version
This command will not only provide you the version of Docker client you
are using, but also the version of Go (the programming language powering
Docker).
This command will not only provide you the version of Docker client and
daemon you are using, but also the version of Go (the programming
language powering Docker).
Client version: 0.8.0
Go version (client): go1.2
@ -87,19 +70,16 @@ Docker).
Last stable version: 0.8.0
### Finding out all available commands
### Seeing what the Docker client can do
The user-centric nature of Docker means providing you a constant stream
of helpful instructions. This begins with the client itself.
In order to get a full list of available commands run the `docker`
binary:
We can see all of the commands available to us with the Docker client by
running the `docker` binary without any options.
# Usage: [sudo] docker
# Example:
$ docker
You will get an output with all currently available commands.
You will see a list of all currently available commands.
Commands:
attach Attach to a running container
@ -107,23 +87,23 @@ You will get an output with all currently available commands.
commit Create a new image from a container's changes
. . .
### Command usage instructions
### Seeing Docker command usage
The same approach used to list all available commands can be repeated to find
the usage instructions for a specific command.
You can also zoom in and review the usage for specific Docker commands.
Try typing `docker` followed by a `[command]` to see the instructions:
Try typing `docker` followed by a `[command]` to see the usage for that
command:
# Usage: [sudo] docker [command] [--help]
# Example:
$ docker attach
Help outputs . . .
Help output . . .
Or you can pass the `--help` flag to the `docker` binary.
Or you can also pass the `--help` flag to the `docker` binary.
$ docker attach --help
You will get an output with all available options:
This will display the help text and all available flags:
Usage: docker attach [OPTIONS] CONTAINER
@ -134,6 +114,9 @@ You will get an output with all available options:
## Working with images
Let's get started with using Docker by working with Docker images, the
building blocks of Docker containers.
### Docker Images
As we've discovered a Docker image is a read-only template that we build
@ -146,30 +129,32 @@ runs Apache and our own web application as a starting point to launch containers
To search for Docker image we use the `docker search` command. The
`docker search` command returns a list of all images that match your
search criteria together with additional, useful information about that
image. This includes information such as social metrics like how many
other people like the image - we call these "likes" *stars*. We also
tell you if an image is *trusted*. A *trusted* image is built from a
known source and allows you to introspect in greater detail how the
image is constructed.
search criteria, together with some useful information about that image.
This information includes social metrics like how many other people like
the image: we call these "likes" *stars*. We also tell you if an image
is *trusted*. A *trusted* image is built from a known source and allows
you to introspect in greater detail how the image is constructed.
# Usage: [sudo] docker search [image name]
# Example:
$ docker search nginx
NAME DESCRIPTION STARS OFFICIAL TRUSTED
$ dockerfile/nginx Trusted Nginx (http://nginx.org/) Build 6 [OK]
paintedfox/nginx-php5 A docker image for running Nginx with PHP5. 3 [OK]
$ dockerfiles/django-uwsgi-nginx dockerfile and configuration files to buil... 2 [OK]
NAME DESCRIPTION STARS OFFICIAL TRUSTED
dockerfile/nginx Trusted Nginx (http://nginx.org/) Build 6 [OK]
paintedfox/nginx-php5 A docker image for running Nginx with PHP5. 3 [OK]
dockerfiles/django-uwsgi-nginx Dockerfile and configuration files to buil... 2 [OK]
. . .
> **Note:** To learn more about trusted builds, check out [this](
http://blog.docker.io/2013/11/introducing-trusted-builds) blog post.
> **Note:**
> To learn more about trusted builds, check out
> [this](http://blog.docker.io/2013/11/introducing-trusted-builds) blog
> post.
### Downloading an image
Downloading a Docker image is called *pulling*. To do this we use the
`docker pull` command.
Once we find an image we'd like to download we can pull it down from
[Docker.io](https://index.docker.io) using the `docker pull` command.
# Usage: [sudo] docker pull [image name]
# Example:
@ -182,13 +167,13 @@ Downloading a Docker image is called *pulling*. To do this we hence use the
. . .
As you can see, Docker will download, one by one, all the layers forming
the final image. This demonstrates the *building block* philosophy of
Docker.
the image.
### Listing available images
In order to get a full list of available images, you can use the
`docker images` command.
You may already have some images you've pulled down or built yourself
and you can use the `docker images` command to see the images
available to you locally.
# Usage: [sudo] docker images
# Example:
@ -197,28 +182,41 @@ In order to get a full list of available images, you can use the
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
myUserName/nginx latest a0d6c70867d2 41 seconds ago 578.8 MB
nginx latest 173c2dd28ab2 3 minutes ago 578.8 MB
$ dockerfile/nginx latest 0ade68db1d05 3 weeks ago 578.8 MB
dockerfile/nginx latest 0ade68db1d05 3 weeks ago 578.8 MB
### Building our own images
You can build your own images using a `Dockerfile` and the `docker
build` command. The `Dockerfile` is very flexible and provides a
powerful set of instructions for building applications into Docker
images. To learn more about the `Dockerfile` see the [`Dockerfile`
Reference](/reference/builder/) and [tutorial](https://www.docker.io/learn/dockerfile/).
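For example, building from a `Dockerfile` in the current directory might look like this (the image name is only a placeholder):

# Usage: [sudo] docker build [OPTIONS] PATH
# Example:
$ docker build -t my_web_image .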
## Working with containers
### Docker Containers
Docker containers are directories on your Docker host that are built
from Docker images. In order to create or start a container, you need an
image. This could be the base `ubuntu` image or an image built and
shared with you or an image you've built yourself.
Docker containers run your applications and are built from Docker
images. In order to create or start a container, you need an image. This
could be the base `ubuntu` image or an image built and shared with you
or an image you've built yourself.
### Running a new container from an image
The easiest way to create a new container is to *run* one from an image.
The easiest way to create a new container is to *run* one from an image
using the `docker run` command.
# Usage: [sudo] docker run [arguments] ..
# Example:
$ docker run -d --name nginx_web nginx /usr/sbin/nginx
25137497b2749e226dd08f84a17e4b2be114ddf4ada04125f130ebfe0f1a03d3
This will create a new container from an image called `nginx` which will
launch the command `/usr/sbin/nginx` when the container is run. We've
also given our container a name, `nginx_web`.
also given our container a name, `nginx_web`. When the container is run
Docker will return a container ID, a long string that uniquely
identifies our container. We can use the container's name or its ID
to work with it.
Containers can be run in two modes:
@ -226,7 +224,8 @@ Containers can be run in two modes:
* Daemonized;
An interactive container runs in the foreground and you can connect to
it and interact with it. A daemonized container runs in the background.
it and interact with it, for example sign into a shell on that
container. A daemonized container runs in the background.
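For example (the second container name below is a placeholder):

# Interactive: your terminal is attached to a shell inside the container.
$ docker run -i -t ubuntu /bin/bash

# Daemonized: the container runs in the background until its process exits.
$ docker run -d --name nginx_daemon nginx /usr/sbin/nginx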
A container will run as long as the process you have launched inside it
is running, for example if the `/usr/sbin/nginx` process stops running
@ -236,7 +235,7 @@ the container will also stop.
We can see a list of all the containers on our host using the `docker
ps` command. By default the `docker ps` command only shows running
containers. But we can also add the `-a` flag to show *all* containers -
containers. But we can also add the `-a` flag to show *all* containers:
both running and stopped.
# Usage: [sudo] docker ps [-a]
@ -248,8 +247,8 @@ both running and stopped.
### Stopping a container
You can use the `docker stop` command to stop an active container. This will gracefully
end the active process.
You can use the `docker stop` command to stop an active container. This
will gracefully end the active process.
# Usage: [sudo] docker stop [container ID]
# Example:
@ -259,6 +258,10 @@ end the active process.
If the `docker stop` command succeeds it will return the name of
the container it has stopped.
> **Note:**
> If you want to stop a container more aggressively you can use the
> `docker kill` command.
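For example, to forcibly stop the container we started earlier:

# Usage: [sudo] docker kill [container ID]
# Example:
$ docker kill nginx_web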
### Starting a Container
Stopped containers can be started again.
@ -271,136 +274,18 @@ Stopped containers can be started again.
If the `docker start` command succeeds it will return the name of the
freshly started container.
## Working with the Dockerfile
## Next steps
The `Dockerfile` holds the set of instructions Docker uses to build a Docker image.
> **Tip:** Below is a short summary of our full Dockerfile tutorial. In
> order to get a better-grasp of how to work with these automation
> scripts, check out the [Dockerfile step-by-step
> tutorial](https://www.docker.io/learn/dockerfile).
A `Dockerfile` contains instructions written in the following format:
# Usage: Instruction [arguments / command] ..
# Example:
FROM ubuntu
A `#` sign is used to provide a comment:
# Comments ..
> **Tip:** The `Dockerfile` is very flexible and provides a powerful set
> of instructions for building applications. To learn more about the
> `Dockerfile` and its instructions see the [Dockerfile
> Reference](http://docs.docker.io/reference/builder/).
### First steps with the Dockerfile
It's a good idea to add some comments to the start of your `Dockerfile`
to provide explanation and exposition to any future consumers, for
example:
#
# Dockerfile to install Nginx
# VERSION 2 - EDITION 1
The first instruction in any `Dockerfile` must be the `FROM` instruction. The `FROM` instruction specifies the image name that this new image is built from, it is often a base image like `ubuntu`.
# Base image used is Ubuntu:
FROM ubuntu
Next, we recommend you use the `MAINTAINER` instruction to tell people who manages this image.
# Maintainer: O.S. Tezer <ostezer at gmail com> (@ostezer)
MAINTAINER O.S. Tezer, ostezer@gmail.com
After this we can add additional instructions that represent the steps
to build our actual image.
### Our Dockerfile so far
So far our `Dockerfile` looks like this:
# Dockerfile to install Nginx
# VERSION 2 - EDITION 1
FROM ubuntu
MAINTAINER O.S. Tezer, ostezer@gmail.com
Let's install a package and configure an application inside our image. To do this we use a new
instruction: `RUN`. The `RUN` instruction executes commands inside our
image; it is just like running a command on the command line inside a
container.
RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
We can see here that we've *run* four instructions. Each time we run an
instruction a new layer is added to our image. Here we've added an
Ubuntu package repository, updated the packages, installed the `nginx`
package and then echoed some configuration to the default
`/etc/nginx/nginx.conf` configuration file.
Let's specify another instruction, `CMD`, that tells Docker what command
to run when a container is created from this image.
CMD /usr/sbin/nginx
We can now save this file and use it to build an image.
### Using a Dockerfile
Docker uses the `Dockerfile` to build images. The build process is initiated by the `docker build` command.
# Use the Dockerfile at the current location
# Usage: [sudo] docker build .
# Example:
$ docker build -t="my_nginx_image" .
Uploading context 25.09 kB
Uploading context
Step 0 : FROM ubuntu
---> 9cd978db300e
Step 1 : MAINTAINER O.S. Tezer, ostezer@gmail.com
---> Using cache
---> 467542d0cdd3
Step 2 : RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
---> Using cache
---> 0a688bd2a48c
Step 3 : RUN apt-get update
---> Running in de2937e8915a
. . .
Step 10 : CMD /usr/sbin/nginx
---> Running in b4908b9b9868
---> 626e92c5fab1
Successfully built 626e92c5fab1
Here we can see that Docker has executed each instruction in turn and
each instruction has created a new layer, with each layer identified
by a new ID. The `-t` flag allows us to specify a name for our new
image, here `my_nginx_image`.
We can see our new image using the `docker images` command.
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
my_nginx_image latest 626e92c5fab1 57 seconds ago 337.6 MB
## Where to go from here
Here we've learned the basics of how to interact with Docker images and
how to run and work with our first container.
### Understanding Docker
Visit [Understanding Docker](understanding-docker.md) in our Getting Started manual.
Visit [Understanding Docker](understanding-docker.md).
### Learn about parts of Docker and the underlying technology
### Installing Docker
Visit [Understanding the Technology](technology.md) in our Getting Started manual.
### Get the product and go hands-on
Visit [Get Docker](get-docker.md) in our Getting Started manual.
Visit the [installation](/installation/#installation) section.
### Get the whole story