mirror of https://github.com/moby/moby.git synced 2022-11-09 12:21:53 -05:00

Merge pull request #6154 from SvenDowideit/pr_out_adding_user_guide

Adding User Guide
This commit is contained in:
Sven Dowideit 2014-06-02 10:39:15 -07:00
commit 2404ce3d26
73 changed files with 1958 additions and 1707 deletions

View file

@@ -28,7 +28,6 @@ pages:
- ['index.md', 'About', 'Docker']
- ['introduction/index.md', '**HIDDEN**']
- ['introduction/understanding-docker.md', 'About', 'Understanding Docker']
- ['introduction/working-with-docker.md', 'About', 'Working with Docker']
# Installation:
- ['installation/index.md', '**HIDDEN**']
@@ -50,44 +49,47 @@ pages:
- ['installation/windows.md', 'Installation', 'Microsoft Windows']
- ['installation/binaries.md', 'Installation', 'Binaries']
# Examples:
- ['use/index.md', '**HIDDEN**']
- ['use/basics.md', 'Examples', 'First steps with Docker']
- ['examples/index.md', '**HIDDEN**']
- ['examples/hello_world.md', 'Examples', 'Hello World']
- ['examples/nodejs_web_app.md', 'Examples', 'Node.js web application']
- ['examples/python_web_app.md', 'Examples', 'Python web application']
- ['examples/mongodb.md', 'Examples', 'Dockerizing MongoDB']
- ['examples/running_redis_service.md', 'Examples', 'Redis service']
- ['examples/postgresql_service.md', 'Examples', 'PostgreSQL service']
- ['examples/running_riak_service.md', 'Examples', 'Running a Riak service']
- ['examples/running_ssh_service.md', 'Examples', 'Running an SSH service']
- ['examples/couchdb_data_volumes.md', 'Examples', 'CouchDB service']
- ['examples/apt-cacher-ng.md', 'Examples', 'Apt-Cacher-ng service']
- ['examples/https.md', 'Examples', 'Running Docker with HTTPS']
- ['examples/using_supervisord.md', 'Examples', 'Using Supervisor']
- ['examples/cfengine_process_management.md', 'Examples', 'Process management with CFEngine']
- ['use/working_with_links_names.md', 'Examples', 'Linking containers together']
- ['use/working_with_volumes.md', 'Examples', 'Sharing Directories using volumes']
- ['use/puppet.md', 'Examples', 'Using Puppet']
- ['use/chef.md', 'Examples', 'Using Chef']
- ['use/workingwithrepository.md', 'Examples', 'Working with a Docker Repository']
- ['use/port_redirection.md', 'Examples', 'Redirect ports']
- ['use/ambassador_pattern_linking.md', 'Examples', 'Cross-Host linking using Ambassador Containers']
- ['use/host_integration.md', 'Examples', 'Automatically starting Containers']
#- ['user-guide/index.md', '**HIDDEN**']
# - ['user-guide/writing-your-docs.md', 'User Guide', 'Writing your docs']
# - ['user-guide/styling-your-docs.md', 'User Guide', 'Styling your docs']
# - ['user-guide/configuration.md', 'User Guide', 'Configuration']
# ./faq.md
# User Guide:
- ['userguide/index.md', 'User Guide', 'The Docker User Guide' ]
- ['userguide/dockerio.md', 'User Guide', 'Getting Started with Docker.io' ]
- ['userguide/dockerizing.md', 'User Guide', 'Dockerizing Applications' ]
- ['userguide/usingdocker.md', 'User Guide', 'Working with Containers' ]
- ['userguide/dockerimages.md', 'User Guide', 'Working with Docker Images' ]
- ['userguide/dockerlinks.md', 'User Guide', 'Linking containers together' ]
- ['userguide/dockervolumes.md', 'User Guide', 'Managing data in containers' ]
- ['userguide/dockerrepos.md', 'User Guide', 'Working with Docker.io' ]
# Docker.io docs:
- ['docker-io/index.md', '**HIDDEN**']
# - ['index/home.md', 'Docker Index', 'Help']
- ['docker-io/index.md', 'Docker.io', 'Docker.io' ]
- ['docker-io/accounts.md', 'Docker.io', 'Accounts']
- ['docker-io/repos.md', 'Docker.io', 'Repositories']
- ['docker-io/builds.md', 'Docker.io', 'Trusted Builds']
- ['docker-io/builds.md', 'Docker.io', 'Automated Builds']
# Examples:
- ['examples/index.md', '**HIDDEN**']
- ['examples/nodejs_web_app.md', 'Examples', 'Dockerizing a Node.js web application']
- ['examples/mongodb.md', 'Examples', 'Dockerizing MongoDB']
- ['examples/running_redis_service.md', 'Examples', 'Dockerizing a Redis service']
- ['examples/postgresql_service.md', 'Examples', 'Dockerizing a PostgreSQL service']
- ['examples/running_riak_service.md', 'Examples', 'Dockerizing a Riak service']
- ['examples/running_ssh_service.md', 'Examples', 'Dockerizing an SSH service']
- ['examples/couchdb_data_volumes.md', 'Examples', 'Dockerizing a CouchDB service']
- ['examples/apt-cacher-ng.md', 'Examples', 'Dockerizing an Apt-Cacher-ng service']
# Articles
- ['articles/index.md', '**HIDDEN**']
- ['articles/basics.md', 'Articles', 'Docker basics']
- ['articles/networking.md', 'Articles', 'Advanced networking']
- ['articles/security.md', 'Articles', 'Security']
- ['articles/https.md', 'Articles', 'Running Docker with HTTPS']
- ['articles/host_integration.md', 'Articles', 'Automatically starting Containers']
- ['articles/using_supervisord.md', 'Articles', 'Using Supervisor']
- ['articles/cfengine_process_management.md', 'Articles', 'Process management with CFEngine']
- ['articles/puppet.md', 'Articles', 'Using Puppet']
- ['articles/chef.md', 'Articles', 'Using Chef']
- ['articles/ambassador_pattern_linking.md', 'Articles', 'Cross-Host linking using Ambassador Containers']
- ['articles/runmetrics.md', 'Articles', 'Runtime metrics']
- ['articles/baseimages.md', 'Articles', 'Creating a Base Image']
# Reference
- ['reference/index.md', '**HIDDEN**']
@@ -96,11 +98,6 @@ pages:
- ['reference/builder.md', 'Reference', 'Dockerfile']
- ['faq.md', 'Reference', 'FAQ']
- ['reference/run.md', 'Reference', 'Run Reference']
- ['articles/index.md', '**HIDDEN**']
- ['articles/runmetrics.md', 'Reference', 'Runtime metrics']
- ['articles/security.md', 'Reference', 'Security']
- ['articles/baseimages.md', 'Reference', 'Creating a Base Image']
- ['use/networking.md', 'Reference', 'Advanced networking']
- ['reference/api/index.md', '**HIDDEN**']
- ['reference/api/docker-io_api.md', 'Reference', 'Docker.io API']
- ['reference/api/registry_api.md', 'Reference', 'Docker Registry API']
@@ -134,9 +131,6 @@ pages:
- ['terms/filesystem.md', '**HIDDEN**']
- ['terms/image.md', '**HIDDEN**']
# TODO: our theme adds a dropdown even for sections that have no subsections.
#- ['faq.md', 'FAQ']
# Contribute:
- ['contributing/index.md', '**HIDDEN**']
- ['contributing/contributing.md', 'Contribute', 'Contributing']

View file

@@ -11,7 +11,14 @@
{ "Condition": { "KeyPrefixEquals": "en/v0.6.3/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "" } },
{ "Condition": { "KeyPrefixEquals": "jsearch/index.html" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "jsearch/" } },
{ "Condition": { "KeyPrefixEquals": "index/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "docker-io/" } },
{ "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } }
{ "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } },
{ "Condition": { "KeyPrefixEquals": "examples/hello_world/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerizing/" } },
{ "Condition": { "KeyPrefixEquals": "examples/python_web_app/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerizing/" } },
{ "Condition": { "KeyPrefixEquals": "use/working_with_volumes/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockervolumes/" } },
{ "Condition": { "KeyPrefixEquals": "use/working_with_links_names/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerlinks/" } },
{ "Condition": { "KeyPrefixEquals": "use/workingwithrepository/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerrepos/" } },
{ "Condition": { "KeyPrefixEquals": "use/port_redirection" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "userguide/dockerlinks/" } },
{ "Condition": { "KeyPrefixEquals": "use/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "examples/" } }
]
}
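These `KeyPrefixEquals`/`ReplaceKeyPrefixWith` rules can be sanity-checked offline. Below is a minimal Python sketch of prefix redirection, assuming S3's first-match-wins routing-rule semantics; the rule list copies only a few entries from the config above:

```python
# Sketch of S3 website routing-rule behaviour: a request key matching
# KeyPrefixEquals has that prefix swapped for ReplaceKeyPrefixWith.
# Rules are assumed to apply first-match-wins, so the specific use/*
# rules must precede the use/ catch-all.
RULES = [
    ("index/", "docker-io/"),
    ("use/working_with_volumes/", "userguide/dockervolumes/"),
    ("use/port_redirection", "userguide/dockerlinks/"),
    ("use/", "examples/"),  # catch-all: remaining use/ pages move to examples/
]

def redirect(key):
    """Return the rewritten key, or None when no rule matches."""
    for prefix, replacement in RULES:
        if key.startswith(prefix):
            return replacement + key[len(prefix):]
    return None

print(redirect("use/working_with_volumes/"))  # userguide/dockervolumes/
print(redirect("use/basics/"))                # examples/basics/
```

Ordering matters here: moving the `use/` catch-all first would shadow every more specific `use/*` rule.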

View file

@@ -1,8 +1,13 @@
# Articles
## Contents:
- [Docker Basics](basics/)
- [Docker Security](security/)
- [Running the Docker daemon with HTTPS](https/)
- [Configure Networking](networking/)
- [Using Supervisor with Docker](using_supervisord/)
- [Process Management with CFEngine](cfengine_process_management/)
- [Using Puppet](puppet/)
- [Create a Base Image](baseimages/)
- [Runtime Metrics](runmetrics/)
- [Automatically Start Containers](host_integration/)
- [Link via an Ambassador Container](ambassador_pattern_linking/)

View file

@@ -26,7 +26,7 @@ for installation instructions.
$ sudo docker pull ubuntu
This will find the `ubuntu` image by name on
[*Docker.io*](../workingwithrepository/#find-public-images-on-dockerio)
[*Docker.io*](/userguide/dockerrepos/#find-public-images-on-dockerio)
and download it from [Docker.io](https://index.docker.io) to a local
image cache.
@@ -173,6 +173,7 @@ will be stored (as a diff). See which images you already have using the
You now have an image state from which you can create new instances.
Read more about [*Share Images via Repositories*](
../workingwithrepository/#working-with-the-repository) or
continue to the complete [*Command Line*](/reference/commandline/cli/#cli)
Read more about [*Share Images via
Repositories*](/userguide/dockerrepos/#working-with-the-repository) or
continue to the complete [*Command
Line*](/reference/commandline/cli/#cli)

View file

@@ -14,6 +14,13 @@ Docker made the choice `172.17.42.1/16` when I started it a few minutes
ago, for example — a 16-bit netmask providing 65,534 addresses for the
host machine and its containers.
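The 65,534 figure follows directly from the 16-bit netmask, and can be verified with Python's standard `ipaddress` module (a quick checking sketch, using the example address above):

```python
import ipaddress

# Docker's example choice of 172.17.42.1/16 for the docker0 bridge.
net = ipaddress.ip_network("172.17.42.1/16", strict=False)

# 16 host bits give 2**16 = 65,536 addresses; subtracting the network
# and broadcast addresses leaves 65,534 usable ones.
print(net.network_address)    # 172.17.0.0
print(net.num_addresses - 2)  # 65534
```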
> **Note:**
> This document discusses advanced networking configuration
> and options for Docker. In most cases you won't need this information.
> If you're looking to get started with a simpler explanation of Docker
> networking and an introduction to the concept of container linking see
> the [Docker User Guide](/userguide/dockerlinks/).
But `docker0` is no ordinary interface. It is a virtual *Ethernet
bridge* that automatically forwards packets between any other network
interfaces that are attached to it. This lets containers communicate
@@ -29,7 +36,7 @@ container.
The remaining sections of this document explain all of the ways that you
can use Docker options and — in advanced cases — raw Linux networking
commands to tweak, supplement, or entirely replace Dockers default
commands to tweak, supplement, or entirely replace Docker's default
networking configuration.
## Quick Guide to the Options
@@ -53,9 +60,6 @@ server when it starts up, and cannot be changed once it is running:
it tells the Docker server over what channels
it should be willing to receive commands
like “run container” and “stop container.”
To learn about the option,
read [Bind Docker to another host/port or a Unix socket](../basics/#bind-docker-to-another-hostport-or-a-unix-socket)
over in the Basics document.
* `--icc=true|false` — see
[Communication between containers](#between-containers)
@@ -219,7 +223,7 @@ services. If the Docker daemon is running with both `--icc=false` and
`ACCEPT` rules so that the new container can connect to the ports
exposed by the other container — the ports that it mentioned in the
`EXPOSE` lines of its `Dockerfile`. Docker has more documentation on
this subject — see the [Link Containers](working_with_links_names.md)
this subject — see the [linking Docker containers](/userguide/dockerlinks)
page for further details.
> **Note**:
@@ -280,7 +284,7 @@ machine that the Docker server creates when it starts:
But if you want containers to accept incoming connections, you will need
to provide special options when invoking `docker run`. These options
are covered in more detail on the [Redirect Ports](port_redirection.md)
are covered in more detail in the [Docker User Guide](/userguide/dockerlinks)
page. There are two approaches.
First, you can supply `-P` or `--publish-all=true|false` to `docker run`
@@ -329,7 +333,7 @@ option `--ip=IP_ADDRESS`. Remember to restart your Docker server after
editing this setting.
Again, this topic is covered without all of these low-level networking
details in the [Redirect Ports](port_redirection.md) document if you
details in the [Docker User Guide](/userguide/dockerlinks/) document if you
would like to use that as your port redirection reference instead.
## <a name="docker0"></a>Customizing docker0

View file

@@ -38,7 +38,7 @@ of another container. Of course, if the host system is set up
accordingly, containers can interact with each other through their
respective network interfaces — just like they can interact with
external hosts. When you specify public ports for your containers or use
[*links*](/use/working_with_links_names/#working-with-links-names)
[*links*](/userguide/dockerlinks/#working-with-links-names)
then IP traffic is allowed between containers. They can ping each other,
send/receive UDP packets, and establish TCP connections, but that can be
restricted if necessary. From a network architecture point of view, all

View file

@@ -5,10 +5,6 @@ page_keywords: docker, supervisor, process management
# Using Supervisor with Docker
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
@@ -16,8 +12,8 @@ Traditionally a Docker container runs a single process when it is
launched, for example an Apache daemon or a SSH server daemon. Often
though you want to run more than one process in a container. There are a
number of ways you can achieve this ranging from using a simple Bash
script as the value of your container's `CMD`
instruction to installing a process management tool.
script as the value of your container's `CMD` instruction to installing
a process management tool.
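To make the idea concrete, the container's `CMD` can point at a process manager rather than a single daemon. A minimal `supervisord.conf` sketch follows; the program names and paths are illustrative assumptions, not the exact file built later in this example:

```ini
; Illustrative only: run supervisord in the foreground as the
; container's main process, with two managed long-running programs.
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```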
In this example we're going to make use of the process management tool,
[Supervisor](http://supervisord.org/), to manage multiple processes in

View file

@@ -1,33 +1,32 @@
page_title: Trusted Builds on Docker.io
page_description: Docker.io Trusted Builds
page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker.io, docs, documentation, trusted, builds, trusted builds
page_title: Automated Builds on Docker.io
page_description: Docker.io Automated Builds
page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker.io, docs, documentation, trusted, builds, trusted builds, automated builds
# Automated Builds on Docker.io
# Trusted Builds on Docker.io
## Automated Builds
## Trusted Builds
*Trusted Builds* is a special feature allowing you to specify a source
*Automated Builds* is a special feature allowing you to specify a source
repository with a `Dockerfile` to be built by the
[Docker.io](https://index.docker.io) build clusters. The system will
clone your repository and build the `Dockerfile` using the repository as
the context. The resulting image will then be uploaded to the registry
and marked as a *Trusted Build*.
and marked as an *Automated Build*.
Trusted Builds have a number of advantages. For example, users of *your* Trusted
Build can be certain that the resulting image was built exactly how it claims
to be.
Automated Builds have a number of advantages. For example, users of
*your* Automated Build can be certain that the resulting image was built
exactly how it claims to be.
Furthermore, the `Dockerfile` will be available to anyone browsing your repository
on the registry. Another advantage of the Trusted Builds feature is the automated
on the registry. Another advantage of the Automated Builds feature is that
builds are triggered automatically, which makes sure that your repository is always up to date.
Trusted builds are supported for both public and private repositories on
both [GitHub](http://github.com) and
Automated Builds are supported for both public and private repositories
on both [GitHub](http://github.com) and
[BitBucket](https://bitbucket.org/).
### Setting up Trusted Builds with GitHub
### Setting up Automated Builds with GitHub
In order to setup a Trusted Build, you need to first link your [Docker.io](
In order to set up an Automated Build, you need to first link your [Docker.io](
https://index.docker.io) account with a GitHub one. This will allow the registry
to see your repositories.
@@ -35,7 +34,7 @@ to see your repositories.
> https://index.docker.io) needs to set up a GitHub service hook. Although nothing
> else is done with your account, this is how GitHub manages permissions, sorry!
Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
Click on the [Automated Builds tab](https://index.docker.io/builds/) to
get started and then select [+ Add
New](https://index.docker.io/builds/add/).
@@ -45,9 +44,9 @@ service](https://index.docker.io/associate/github/).
Then follow the instructions to authorize and link your GitHub account
to Docker.io.
#### Creating a Trusted Build
#### Creating an Automated Build
You can [create a Trusted Build](https://index.docker.io/builds/github/select/)
You can [create an Automated Build](https://index.docker.io/builds/github/select/)
from any of your public or private GitHub repositories with a `Dockerfile`.
#### GitHub organizations
@@ -59,7 +58,7 @@ organization on GitHub.
#### GitHub service hooks
You can follow the below steps to configure the GitHub service hooks for your
Trusted Build:
Automated Build:
<table class="table table-bordered">
<thead>
@@ -84,13 +83,13 @@ Trusted Build:
</tbody>
</table>
### Setting up Trusted Builds with BitBucket
### Setting up Automated Builds with BitBucket
In order to setup a Trusted Build, you need to first link your
In order to set up an Automated Build, you need to first link your
[Docker.io]( https://index.docker.io) account with a BitBucket one. This
will allow the registry to see your repositories.
Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
Click on the [Automated Builds tab](https://index.docker.io/builds/) to
get started and then select [+ Add
New](https://index.docker.io/builds/add/).
@@ -100,14 +99,14 @@ service](https://index.docker.io/associate/bitbucket/).
Then follow the instructions to authorize and link your BitBucket account
to Docker.io.
#### Creating a Trusted Build
#### Creating an Automated Build
You can [create an Automated
Build](https://index.docker.io/builds/bitbucket/select/)
from any of your public or private BitBucket repositories with a
`Dockerfile`.
### The Dockerfile and Trusted Builds
### The Dockerfile and Automated Builds
During the build process, we copy the contents of your `Dockerfile`. We also
add it to the [Docker.io](https://index.docker.io) for the Docker community
@@ -120,20 +119,19 @@ repository's full description.
> **Warning:**
> If you change the full description after a build, it will be
> rewritten the next time the Trusted Build has been built. To make changes,
> rewritten the next time the Automated Build runs. To make changes,
> modify the README.md from the Git repository. We will look for a README.md
> in the same directory as your `Dockerfile`.
### Build triggers
If you need another way to trigger your Trusted Builds outside of GitHub
If you need another way to trigger your Automated Builds outside of GitHub
or BitBucket, you can setup a build trigger. When you turn on the build
trigger for a Trusted Build, it will give you a URL to which you can
send POST requests. This will trigger the Trusted Build process, which
is similar to GitHub web hooks.
trigger for an Automated Build, it will give you a URL to which you can
send POST requests. This will trigger the Automated Build process, which
is similar to GitHub webhooks.
Build Triggers are available under the Settings tab of each Trusted
Build.
Build Triggers are available under the Settings tab of each Automated Build.
> **Note:**
> You can only trigger one build at a time and no more than one
@@ -144,10 +142,10 @@ Build.
### Webhooks
Also available for Trusted Builds are Webhooks. Webhooks can be called
Also available for Automated Builds are Webhooks. Webhooks can be called
after a successful repository push is made.
The web hook call will generate a HTTP POST with the following JSON
The webhook call will generate an HTTP POST with the following JSON
payload:
```
@@ -181,7 +179,7 @@ payload:
}
```
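As a sketch of consuming such a payload on the receiving end (the field names below are illustrative assumptions; consult the payload shown above for the real schema):

```python
import json

def handle_webhook(body):
    """Parse a webhook POST body and return the pushed repository name.

    The "repository"/"repo_name" keys are assumptions for illustration,
    not a statement of the exact payload schema.
    """
    payload = json.loads(body)
    return payload.get("repository", {}).get("repo_name")

# Hypothetical payload a push might produce:
example = json.dumps({"repository": {"repo_name": "example/hookdemo"}})
print(handle_webhook(example))  # example/hookdemo
```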
Webhooks are available under the Settings tab of each Trusted
Webhooks are available under the Settings tab of each Automated
Build.
> **Note:** If you want to test your webhook out then we recommend using
@@ -190,15 +188,15 @@ Build.
### Repository links
Repository links are a way to associate one Trusted Build with another. If one
gets updated, linking system also triggers a build for the other Trusted Build.
This makes it easy to keep your Trusted Builds up to date.
Repository links are a way to associate one Automated Build with another. If one
gets updated, the linking system also triggers a build for the other Automated Build.
This makes it easy to keep your Automated Builds up to date.
To add a link, go to the settings page of a Trusted Build and click on
To add a link, go to the settings page of an Automated Build and click on
*Repository Links*. Then enter the name of the repository that you want to have
linked.
> **Warning:**
> You can add more than one repository link, however, you should
> be very careful. Creating a two way relationship between Trusted Builds will
> be very careful. Creating a two-way relationship between Automated Builds will
> cause a never ending build loop.

View file

@@ -0,0 +1,8 @@
# Docker.io
## Contents:
- [Accounts](accounts/)
- [Repositories](repos/)
- [Automated Builds](builds/)

View file

@@ -1,25 +1,9 @@
# Examples
## Introduction:
Here are some examples of how to use Docker to create running processes,
starting from a very simple *Hello World* and progressing to more
substantial services like those which you might find in production.
## Contents:
- [Check your Docker install](hello_world/)
- [Hello World](hello_world/#hello-world)
- [Hello World Daemon](hello_world/#hello-world-daemon)
- [Node.js Web App](nodejs_web_app/)
- [Redis Service](running_redis_service/)
- [SSH Daemon Service](running_ssh_service/)
- [CouchDB Service](couchdb_data_volumes/)
- [PostgreSQL Service](postgresql_service/)
- [Building an Image with MongoDB](mongodb/)
- [Riak Service](running_riak_service/)
- [Using Supervisor with Docker](using_supervisord/)
- [Process Management with CFEngine](cfengine_process_management/)
- [Python Web App](python_web_app/)
- [Dockerizing a Node.js Web App](nodejs_web_app/)
- [Dockerizing a Redis Service](running_redis_service/)
- [Dockerizing an SSH Daemon Service](running_ssh_service/)
- [Dockerizing a CouchDB Service](couchdb_data_volumes/)
- [Dockerizing a PostgreSQL Service](postgresql_service/)
- [Dockerizing MongoDB](mongodb/)
- [Dockerizing a Riak Service](running_riak_service/)

View file

@@ -1,14 +1,10 @@
page_title: Running an apt-cacher-ng service
page_title: Dockerizing an apt-cacher-ng service
page_description: Installing and running an apt-cacher-ng service
page_keywords: docker, example, package installation, networking, debian, ubuntu
# Apt-Cacher-ng Service
# Dockerizing an Apt-Cacher-ng Service
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup).
> - **If you're using OS X or docker via TCP** then you shouldn't use

View file

@@ -1,14 +1,10 @@
page_title: Sharing data between 2 couchdb databases
page_title: Dockerizing a CouchDB Service
page_description: Sharing data between 2 couchdb databases
page_keywords: docker, example, package installation, networking, couchdb, data volumes
# CouchDB Service
# Dockerizing a CouchDB Service
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)

View file

@@ -1,8 +0,0 @@
.. note::
* This example assumes you have Docker running in daemon mode. For
more information please see :ref:`running_examples`.
* **If you don't like sudo** then see :ref:`dockergroup`
* **If you're using OS X or docker via TCP** then you shouldn't use `sudo`

View file

@@ -1,162 +0,0 @@
page_title: Hello world example
page_description: A simple hello world example with Docker
page_keywords: docker, example, hello world
# Check your Docker installation
This guide assumes you have a working installation of Docker. To check
your Docker install, run the following command:
# Check that you have a working install
$ sudo docker info
If you get `docker: command not found` or something
like `/var/lib/docker/repositories: permission denied`
you may have an incomplete Docker installation or insufficient
privileges to access docker on your machine.
Please refer to [*Installation*](/installation/)
for installation instructions.
## Hello World
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](#check-your-docker-installation).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
This is the most basic example available for using Docker.
Download the small base image named `busybox`:
# Download a busybox image
$ sudo docker pull busybox
The `busybox` image is a minimal Linux system. You can do the same with
any number of other images, such as `debian`, `ubuntu` or `centos`. The
images can be found and retrieved using the
[Docker.io](http://index.docker.io) registry.
$ sudo docker run busybox /bin/echo hello world
This command will run a simple `echo` command, that
will echo `hello world` back to the console over
standard out.
**Explanation:**
- **"sudo"** execute the following commands as user *root*
- **"docker run"** run a command in a new container
- **"busybox"** is the image we are running the command in.
- **"/bin/echo"** is the command we want to run in the container
- **"hello world"** is the input for the echo command
**Video:**
See the example in action
<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/7658.js&quot;id=&quot;asciicast-7658&quot; async></script></body>"></iframe>
## Hello World Daemon
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](#check-your-docker-installation).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
And now for the most boring daemon ever written!
We will use the Ubuntu image to run a simple hello world daemon that
will just print hello world to standard out every second. It will
continue to do this until we stop it.
**Steps:**
$ container_id=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")
We are going to run a simple hello world daemon in a new container made
from the `ubuntu` image.
- **"sudo docker run -d "** run a command in a new container. We pass
"-d" so it runs as a daemon.
- **"ubuntu"** is the image we want to run the command inside of.
- **"/bin/sh -c"** is the command we want to run in the container
- **"while true; do echo hello world; sleep 1; done"** is the mini
script we want to run, that will just print hello world once a
second until we stop it.
- **$container_id** the output of the run command will return a
container id, we can use in future commands to see what is going on
with this process.
<!-- -->
$ sudo docker logs $container_id
Check the logs make sure it is working correctly.
- **"docker logs**" This will return the logs for a container
- **$container_id** The Id of the container we want the logs for.
<!-- -->
$ sudo docker attach --sig-proxy=false $container_id
Attach to the container to see the results in real-time.
- **"docker attach**" This will allow us to attach to a background
process to see what is going on.
- **"sig-proxy=false"** Do not forward signals to the container;
allows us to exit the attachment using Control-C without stopping
the container.
- **$container_id** The Id of the container we want to attach to.
Exit from the container attachment by pressing Control-C.
$ sudo docker ps
Check the process list to make sure it is running.
- **"docker ps"** this shows all running process managed by docker
<!-- -->
$ sudo docker stop $container_id
Stop the container, since we don't need it anymore.
- **"docker stop"** This stops a container
- **$container_id** The Id of the container we want to stop.
<!-- -->
$ sudo docker ps
Make sure it is really stopped.
**Video:**
See the example in action
<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/2562.js&quot;id=&quot;asciicast-2562&quot; async></script></body>"></iframe>
The next example in the series is a [*Node.js Web App*](
../nodejs_web_app/#nodejs-web-app) example, or you could skip to any of the
other examples:
- [*Node.js Web App*](../nodejs_web_app/#nodejs-web-app)
- [*Redis Service*](../running_redis_service/#running-redis-service)
- [*SSH Daemon Service*](../running_ssh_service/#running-ssh-service)
- [*CouchDB Service*](../couchdb_data_volumes/#running-couchdb-service)
- [*PostgreSQL Service*](../postgresql_service/#postgresql-service)
- [*Building an Image with MongoDB*](../mongodb/#mongodb-image)
- [*Python Web App*](../python_web_app/#python-web-app)

View file

@@ -2,7 +2,7 @@ page_title: Dockerizing MongoDB
page_description: Creating a Docker image with MongoDB pre-installed using a Dockerfile and sharing the image on Docker.io
page_keywords: docker, dockerize, dockerizing, article, example, docker.io, platform, package, installation, networking, mongodb, containers, images, image, sharing, dockerfile, build, auto-building, virtualization, framework
# Dockerizing MongoDB
# Dockerizing MongoDB
## Introduction
@@ -18,17 +18,10 @@ instances will bring several benefits, such as:
- Ready to run and start working within milliseconds;
- Based on globally accessible and shareable images.
> **Note:**
>
> This example assumes you have Docker running in daemon mode. To verify,
> try running `sudo docker info`.
> For more information, please see: [*Check your Docker installation*](
> /examples/hello_world/#running-examples).
> **Note:**
>
> If you do **_not_** like `sudo`, you might want to check out:
> [*Giving non-root access*](installation/binaries/#giving-non-root-access).
> [*Giving non-root access*](/installation/binaries/#giving-non-root-access).
## Creating a Dockerfile for MongoDB
@@ -101,8 +94,7 @@ Now save the file and let's build our image.
> **Note:**
>
> The full version of this `Dockerfile` can be found [here](/
> /examples/mongodb/Dockerfile).
> The full version of this `Dockerfile` can be found [here](/examples/mongodb/Dockerfile).
## Building the MongoDB Docker image
@@ -157,8 +149,6 @@ as daemon process(es).
# Usage: mongo --port <port you get from `docker ps`>
$ mongo --port 12345
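The mapped port can also be looked up with `docker port` instead of scanning the `docker ps` output by eye; a small sketch (the container name `mongo_instance_001` and the sample output line are illustrative, not from this example):

```shell
# Querying the host port mapped to the container's 27017 needs a running
# daemon, so it is shown as the command you would run, with sample output:
#   sudo docker port mongo_instance_001 27017
#   0.0.0.0:49155
# The port to pass to the mongo client is the part after the colon:
MAPPED=$(echo "0.0.0.0:49155" | awk -F: '{ print $2 }')
echo "$MAPPED"    # then: mongo --port $MAPPED
```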
## Learn more
- [Linking containers](/use/working_with_links_names/)
- [Cross-host linking containers](/use/ambassador_pattern_linking/)
- [Linking containers](/userguide/dockerlinks)
- [Cross-host linking containers](/articles/ambassador_pattern_linking/)
- [Creating a Trusted Build](/docker-io/builds/#trusted-builds)


@ -1,14 +1,10 @@
page_title: Running a Node.js app on CentOS
page_description: Installing and running a Node.js app on CentOS
page_title: Dockerizing a Node.js Web App
page_description: Installing and running a Node.js app with Docker
page_keywords: docker, example, package installation, node, centos
# Node.js Web App
# Dockerizing a Node.js Web App
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
@ -187,11 +183,10 @@ Now you can call your app using `curl` (install if needed via:
Content-Length: 12
Date: Sun, 02 Jun 2013 03:53:22 GMT
Connection: keep-alive
Hello World
We hope this tutorial helped you get up and running with Node.js and
CentOS on Docker. You can get the full source code at
[https://github.com/gasi/docker-node-hello](https://github.com/gasi/docker-node-hello).
Continue to [*Redis Service*](../running_redis_service/#running-redis-service).


@ -1,26 +1,22 @@
page_title: PostgreSQL service How-To
page_title: Dockerizing PostgreSQL
page_description: Running and installing a PostgreSQL service
page_keywords: docker, example, package installation, postgresql
# PostgreSQL Service
# Dockerizing PostgreSQL
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
## Installing PostgreSQL on Docker
Assuming there is no Docker image that suits your needs in [the index](
http://index.docker.io), you can create one yourself.
Assuming there is no Docker image that suits your needs on the [Docker
Hub](http://index.docker.io), you can create one yourself.
Start by creating a new Dockerfile:
Start by creating a new `Dockerfile`:
> **Note**:
> This PostgreSQL setup is for development only purposes. Refer to the
> This PostgreSQL setup is for development-only purposes. Refer to the
> PostgreSQL documentation to fine-tune these settings so that it is
> suitably secure.
@ -32,7 +28,7 @@ Start by creating a new Dockerfile:
MAINTAINER SvenDowideit@docker.com
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
@ -87,11 +83,11 @@ And run the PostgreSQL server container (in the foreground):
$ sudo docker run --rm -P --name pg_test eg_postgresql
There are 2 ways to connect to the PostgreSQL server. We can use [*Link
Containers*](/use/working_with_links_names/#working-with-links-names),
or we can access it from our host (or the network).
Containers*](/userguide/dockerlinks), or we can access it from our host
(or the network).
> **Note**:
> The `-rm` removes the container and its image when
> The `--rm` flag removes the container when the
> container exits successfully.
### Using container linking
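A sketch of the linking route (the commands assume the server container `pg_test` and image `eg_postgresql` from above; the alias `pg` is our choice):

```shell
# Running a linked client needs a live Docker daemon, so the commands are
# shown as comments:
#   sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash
# Inside that container, the link injects variables named after the
# upper-cased alias, e.g. PG_PORT_5432_TCP_ADDR and PG_PORT_5432_TCP_PORT:
#   psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -U docker --password
# The alias-to-prefix rule is plain upper-casing:
echo "$(echo pg | tr '[:lower:]' '[:upper:]')_PORT_5432_TCP_ADDR"
```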


@ -1,127 +0,0 @@
page_title: Python Web app example
page_description: Building your own python web app using docker
page_keywords: docker, example, python, web app
# Python Web App
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
While using Dockerfiles is the preferred way to create maintainable and
repeatable images, it's useful to know how you can try things out and
then commit your live changes to an image.
The goal of this example is to show you how you can modify your own
Docker images by making changes to a running container, and then saving
the results as a new image. We will do that by making a simple `hello
world` Flask web application image.
## Download the initial image
Download the `shykes/pybuilder` Docker image from the `http://index.docker.io`
registry.
This image contains a `buildapp` script to download
the web app and then `pip install` any required
modules, and a `runapp` script that finds the
`app.py` and runs it.
$ sudo docker pull shykes/pybuilder
> **Note**:
> This container was built with a very old version of docker (May 2013 -
> see [shykes/pybuilder](https://github.com/shykes/pybuilder) ), when the
> Dockerfile format was different, but the image can
> still be used now.
## Interactively make some modifications
We then start a new container running interactively using the image.
First, we set a `URL` variable that points to a
tarball of a simple helloflask web app, and then we run a command
contained in the image called `buildapp`, passing it
the `$URL` variable. The container is given a name
`pybuilder_run` which we will use in the next steps.
While this example is simple, you could run any number of interactive
commands, try things out, and then exit when you're done.
$ sudo docker run -i -t --name pybuilder_run shykes/pybuilder bash
$$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
$$ /usr/local/bin/buildapp $URL
[...]
$$ exit
## Commit the container to create a new image
Save the changes we just made in the container to a new image called
`/builds/github.com/shykes/helloflask/master`. You
now have 3 different ways to refer to the container: name
`pybuilder_run`, short-id `c8b2e8228f11`, or long-id
`c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9`.
$ sudo docker commit pybuilder_run /builds/github.com/shykes/helloflask/master
c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9
## Run the new image to start the web worker
Use the new image to create a new container with network port 5000
mapped to a local port.
$ sudo docker run -d -p 5000 --name web_worker /builds/github.com/shykes/helloflask/master /usr/local/bin/runapp
- **"docker run -d "** run a command in a new container. We pass "-d"
so it runs as a daemon.
- **"-p 5000"** the web app is going to listen on this port, so it
must be mapped from the container to the host system.
- **/usr/local/bin/runapp** is the command which starts the web app.
## View the container logs
View the logs for the new `web_worker` container and
if everything worked as planned you should see the line
`Running on http://0.0.0.0:5000/` in the log output.
To exit the view without stopping the container, hit Ctrl-C, or open
another terminal and continue with the example while watching the result
in the logs.
$ sudo docker logs -f web_worker
* Running on http://0.0.0.0:5000/
## See the webapp output
Look up the public-facing port which is NAT-ed: query the container's
private port 5000 and store the mapped host port in the `WEB_PORT`
variable.
Access the web app using the `curl` binary. If
everything worked as planned you should see the line
`Hello world!` in your console.
$ WEB_PORT=$(sudo docker port web_worker 5000 | awk -F: '{ print $2 }')
# install curl if necessary, then ...
$ curl http://127.0.0.1:$WEB_PORT
Hello world!
## Clean up example containers and images
$ sudo docker ps --all
The `--all` flag lists all the Docker containers. If this
container had already finished running, it will still be listed here
with a status of `Exit 0`.
$ sudo docker stop web_worker
$ sudo docker rm web_worker pybuilder_run
$ sudo docker rmi /builds/github.com/shykes/helloflask/master shykes/pybuilder:latest
And now stop the running web worker, and delete the containers, so that
we can then delete the images that we used.


@ -1,16 +1,8 @@
page_title: Running a Redis service
page_title: Dockerizing a Redis service
page_description: Installing and running a Redis service
page_keywords: docker, example, package installation, networking, redis
# Redis Service
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
# Dockerizing a Redis Service
A very simple, no-frills Redis service attached to a web application
using a link.


@ -1,30 +1,21 @@
page_title: Running a Riak service
page_title: Dockerizing a Riak service
page_description: Build a Docker image with Riak pre-installed
page_keywords: docker, example, package installation, networking, riak
# Riak Service
> **Note**:
>
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
# Dockerizing a Riak Service
The goal of this example is to show you how to build a Docker image with
Riak pre-installed.
## Creating a Dockerfile
Create an empty file called Dockerfile:
Create an empty file called `Dockerfile`:
$ touch Dockerfile
Next, define the parent image you want to use to build your image on top
of. We'll use [Ubuntu](https://index.docker.io/_/ubuntu/) (tag:
`latest`), which is available on the [docker
index](http://index.docker.io):
`latest`), which is available on [Docker Hub](http://index.docker.io):
# Riak
#
@ -101,7 +92,7 @@ are started:
## Create a supervisord configuration file
Create an empty file called `supervisord.conf`. Make
sure it's at the same directory level as your Dockerfile:
sure it's at the same directory level as your `Dockerfile`:
touch supervisord.conf


@ -1,17 +1,10 @@
page_title: Running an SSH service
page_description: Installing and running an sshd service
page_title: Dockerizing an SSH service
page_description: Installing and running an SSHd service on Docker
page_keywords: docker, example, package installation, networking
# SSH Daemon Service
# Dockerizing an SSH Daemon Service
> **Note:**
> - This example assumes you have Docker running in daemon mode. For
> more information please see [*Check your Docker
> install*](../hello_world/#running-examples).
> - **If you don't like sudo** then see [*Giving non-root
> access*](/installation/binaries/#dockergroup)
The following Dockerfile sets up an sshd service in a container that you
The following `Dockerfile` sets up an SSHd service in a container that you
can use to connect to and inspect other containers' volumes, or to get
quick access to a test container.
@ -27,7 +20,7 @@ quick access to a test container.
RUN apt-get update
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' |chpasswd
EXPOSE 22
@ -37,16 +30,15 @@ Build the image using:
$ sudo docker build --rm -t eg_sshd .
Then run it. You can then use `docker port` to find
out what host port the container's port 22 is mapped to:
Then run it. You can then use `docker port` to find out what host port
the container's port 22 is mapped to:
$ sudo docker run -d -P --name test_sshd eg_sshd
$ sudo docker port test_sshd 22
0.0.0.0:49154
And now you can ssh to port `49154` on the Docker
daemon's host IP address (`ip address` or
`ifconfig` can tell you that):
And now you can ssh to port `49154` on the Docker daemon's host IP
address (`ip address` or `ifconfig` can tell you that):
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
@ -58,3 +50,4 @@ container, and then removing the image.
$ sudo docker stop test_sshd
$ sudo docker rm test_sshd
$ sudo docker rmi eg_sshd


@ -142,12 +142,11 @@ running in parallel.
### How do I connect Docker containers?
Currently the recommended way to link containers is via the link
primitive. You can see details of how to [work with links here](
http://docs.docker.io/use/working_with_links_names/).
primitive. You can see details of how to [work with links
here](/userguide/dockerlinks).
Also useful when enabling more flexible service portability is the
[Ambassador linking pattern](
http://docs.docker.io/use/ambassador_pattern_linking/).
[Ambassador linking pattern](/articles/ambassador_pattern_linking/).
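A minimal sketch of the link primitive mentioned above (image and alias names below are illustrative):

```shell
# Start a named server, then link a client to it under the alias "db"
# (both need a running daemon, so they are shown as comments):
#   sudo docker run -d --name redis_server redis
#   sudo docker run -i -t --link redis_server:db ubuntu /bin/bash
# Inside the client, the link sets DB_PORT to a URL such as the sample
# below; host and port can be split out with parameter expansion:
DB_PORT="tcp://172.17.0.5:6379"    # sample value, for illustration
HOSTPORT=${DB_PORT#tcp://}
echo "host=${HOSTPORT%:*} port=${HOSTPORT##*:}"
```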
### How do I run more than one process in a Docker container?
@ -156,8 +155,7 @@ http://supervisord.org/), runit, s6, or daemontools can do the trick.
Docker will start up the process management daemon which will then fork
to run additional processes. As long as the process manager daemon continues
to run, the container will continue to run as well. You can see a more substantial
example [that uses supervisord here](
http://docs.docker.io/examples/using_supervisord/).
example [that uses supervisord here](/articles/using_supervisord/).
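As a sketch of that pattern (the program names and paths are illustrative, not taken from the linked example): write a supervisord configuration that keeps each process in the foreground, and point the image's `CMD` at supervisord:

```shell
# An illustrative supervisord config running two services in one container:
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/usr/sbin/apache2ctl -DFOREGROUND
EOF
# The Dockerfile would then install supervisor, ADD this file into
# /etc/supervisor/conf.d/, and end with:
#   CMD ["/usr/bin/supervisord"]
```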
### What platforms does Docker run on?
@ -207,5 +205,5 @@ You can find more answers on:
- [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker)
- [Join the conversation on Twitter](http://twitter.com/docker)
Looking for something else to read? Check out the [*Hello World*](
../examples/hello_world/#hello-world) example.
Looking for something else to read? Check out the [User
Guide](/userguide/).


@ -6,8 +6,6 @@ page_keywords: docker, introduction, documentation, about, technology, understan
**Develop, Ship and Run Any Application, Anywhere**
## Introduction
[**Docker**](https://www.docker.io) is a platform for developers and
sysadmins to develop, ship, and run applications. Docker consists of:
@ -78,22 +76,17 @@ section](introduction/understanding-docker.md):
> [Click here to go to the Understanding
> Docker section](introduction/understanding-docker.md).
Next we get [**practical** with the Working with Docker
section](introduction/working-with-docker.md) and you can learn about:
### Installation Guides
- Docker on the command line;
- Get introduced to your first Docker commands;
- Get to know your way around the basics of Docker operation.
> [Click here to go to the Working with
> Docker section](introduction/working-with-docker.md).
If you want to see how to install Docker you can jump to the
Then we'll learn how to install Docker on a variety of platforms in our
[installation](/installation/#installation) section.
> **Note**:
> We know how valuable your time is so you if you want to get started
> with Docker straight away don't hesitate to jump to [Working with
> Docker](introduction/working-with-docker.md). For a fuller
> understanding of Docker though we do recommend you read [Understanding
> Docker]( introduction/understanding-docker.md).
> [Click here to go to the Installation
> section](/installation/#installation).
### Docker User Guide
Once you've gotten Docker installed we recommend you step through our [Docker User Guide](/userguide/), which will give you an in depth introduction to Docker.
> [Click here to go to the Docker User Guide](/userguide/).


@ -53,8 +53,7 @@ add the *ubuntu* user to it so that you don't have to use
`sudo` for every Docker command.
Once you've got Docker installed, you're ready to try it out. Head on
over to the [*First steps with Docker*](/use/basics/) or
[*Examples*](/examples/) section.
over to the [User Guide](/userguide).
## Amazon QuickStart (Release Candidate - March 2014)
@ -94,4 +93,4 @@ QuickStart*](#amazon-quickstart) to pick an image (or use one of your
own) and skip the step with the *User Data*. Then continue with the
[*Ubuntu*](../ubuntulinux/#ubuntu-linux) instructions.
Continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
Continue with the [User Guide](/userguide/).


@ -56,20 +56,17 @@ Linux kernel (it even builds on OSX!).
## Giving non-root access
The `docker` daemon always runs as the root user,
and since Docker version 0.5.2, the `docker` daemon
binds to a Unix socket instead of a TCP port. By default that Unix
socket is owned by the user *root*, and so, by default, you can access
it with `sudo`.
The `docker` daemon always runs as the root user, and the `docker`
daemon binds to a Unix socket instead of a TCP port. By default that
Unix socket is owned by the user *root*, and so, by default, you can
access it with `sudo`.
Starting in version 0.5.3, if you (or your Docker installer) create a
Unix group called *docker* and add users to it, then the
`docker` daemon will make the ownership of the Unix
socket read/writable by the *docker* group when the daemon starts. The
`docker` daemon must always run as the root user,
but if you run the `docker` client as a user in the
*docker* group then you don't need to add `sudo` to
all the client commands.
If you (or your Docker installer) create a Unix group called *docker*
and add users to it, then the `docker` daemon will make the ownership of
the Unix socket read/writable by the *docker* group when the daemon
starts. The `docker` daemon must always run as the root user, but if you
run the `docker` client as a user in the *docker* group then you don't
need to add `sudo` to all the client commands.
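A sketch of that setup (the group and service names are as described above; the commands need root, so they are shown as you would run them):

```shell
# Create the docker group, add yourself, and restart the daemon:
#   sudo groupadd docker
#   sudo gpasswd -a ${USER} docker
#   sudo service docker restart
# After logging out and back in, client commands work without sudo:
#   docker info
```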
> **Warning**:
> The *docker* group (or the group specified with `-G`) is root-equivalent;
@ -93,4 +90,4 @@ Then follow the regular installation steps.
# run a container and open an interactive shell in the container
$ sudo ./docker run -i -t ubuntu /bin/bash
Continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
Continue with the [User Guide](/userguide/).


@ -2,23 +2,12 @@ page_title: Installation on CentOS
page_description: Instructions for installing Docker on CentOS
page_keywords: Docker, Docker documentation, requirements, linux, centos, epel, docker.io, docker-io
# CentOS
# CentOS
> **Note**:
> Docker is still under heavy development! We don't recommend using it in
> production yet, but we're getting closer with each release. Please see
> our blog post, [Getting to Docker 1.0](
> http://blog.docker.io/2013/08/getting-to-docker-1-0/)
> **Note**:
> This is a community contributed installation path. The only `official`
> installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
> installation path. This version may be out of date because it depends on
> some binaries to be updated and published.
The Docker package is available via the EPEL repository. These instructions work
for CentOS 6 and later. They will likely work for other binary compatible EL6
distributions such as Scientific Linux, but they haven't been tested.
The Docker package is available via the EPEL repository. These
instructions work for CentOS 6 and later. They will likely work for
other binary compatible EL6 distributions such as Scientific Linux, but
they haven't been tested.
Please note that this package is part of [Extra Packages for Enterprise
Linux (EPEL)](https://fedoraproject.org/wiki/EPEL), a community effort
@ -27,13 +16,13 @@ to create and maintain additional packages for the RHEL distribution.
Also note that due to the current Docker limitations, Docker is able to
run only on the **64 bit** architecture.
To run Docker, you will need [CentOS6](http://www.centos.org) or higher, with
a kernel version 2.6.32-431 or higher as this has specific kernel fixes
to allow Docker to run.
To run Docker, you will need [CentOS6](http://www.centos.org) or higher,
with a kernel version 2.6.32-431 or higher as this has specific kernel
fixes to allow Docker to run.
## Installation
Firstly, you need to ensure you have the EPEL repository enabled. Please
Firstly, you need to ensure you have the EPEL repository enabled. Please
follow the [EPEL installation instructions](
https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F).
@ -59,7 +48,7 @@ If we want Docker to start at boot, we should also:
$ sudo chkconfig docker on
Now let's verify that Docker is working. First we'll need to get the latest
centos image.
`centos` image.
$ sudo docker pull centos:latest
@ -73,15 +62,15 @@ This should generate some output similar to:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
centos latest 0b443ba03958 2 hours ago 297.6 MB
Run a simple bash shell to test the image:
Run a simple bash shell to test the image:
$ sudo docker run -i -t centos /bin/bash
If everything is working properly, you'll get a simple bash prompt. Type exit to continue.
If everything is working properly, you'll get a simple bash prompt. Type
`exit` to continue.
**Done!**
You can either continue with the [*Hello World*](/examples/hello_world/#hello-world) example,
or explore and build on the images yourself.
**Done!** You can either continue with the [Docker User
Guide](/userguide/) or explore and build on the images yourself.
## Issues?


@ -38,18 +38,18 @@ Which should download the `ubuntu` image, and then start `bash` in a container.
### Giving non-root access
The `docker` daemon always runs as the `root` user, and since Docker
version 0.5.2, the `docker` daemon binds to a Unix socket instead of a
TCP port. By default that Unix socket is owned by the user `root`, and
so, by default, you can access it with `sudo`.
The `docker` daemon always runs as the `root` user and the `docker`
daemon binds to a Unix socket instead of a TCP port. By default that
Unix socket is owned by the user `root`, and so, by default, you can
access it with `sudo`.
Starting in version 0.5.3, if you (or your Docker installer) create a
Unix group called `docker` and add users to it, then the `docker` daemon
will make the ownership of the Unix socket read/writable by the `docker`
group when the daemon starts. The `docker` daemon must always run as the
root user, but if you run the `docker` client as a user in the `docker`
group then you don't need to add `sudo` to all the client commands. From
Docker 0.9.0 you can use the `-G` flag to specify an alternative group.
If you (or your Docker installer) create a Unix group called `docker`
and add users to it, then the `docker` daemon will make the ownership of
the Unix socket read/writable by the `docker` group when the daemon
starts. The `docker` daemon must always run as the root user, but if you
run the `docker` client as a user in the `docker` group then you don't
need to add `sudo` to all the client commands. From Docker 0.9.0 you can
use the `-G` flag to specify an alternative group.
> **Warning**:
> The `docker` group (or the group specified with the `-G` flag) is
@ -70,3 +70,7 @@ Docker 0.9.0 you can use the `-G` flag to specify an alternative group.
# Restart the Docker daemon.
$ sudo service docker restart
## What next?
Continue with the [User Guide](/userguide/).


@ -48,5 +48,7 @@ Now let's verify that Docker is working.
$ sudo docker run -i -t fedora /bin/bash
**Done!**, now continue with the [*Hello
World*](/examples/hello_world/#hello-world) example.
## What next?
Continue with the [User Guide](/userguide/).


@ -40,9 +40,8 @@ virtual machine and run the Docker daemon.
(but least secure) is to just hit [Enter]. This passphrase is used by the
`boot2docker ssh` command.
Once you have an initialized virtual machine, you can `boot2docker stop` and
`boot2docker start` it.
Once you have an initialized virtual machine, you can `boot2docker stop`
and `boot2docker start` it.
## Upgrading
@ -60,29 +59,19 @@ To upgrade:
boot2docker start
```
## Running Docker
From your terminal, you can try the “hello world” example. Run:
$ docker run ubuntu echo hello world
This will download the ubuntu image and print hello world.
This will download the `ubuntu` image and print `hello world`.
# Further details
## Container port redirection
The Boot2Docker management tool provides some commands:
```
$ ./boot2docker
Usage: ./boot2docker [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|delete|download|version} [<args>]
```
## Container port redirection
The latest version of `boot2docker` sets up two network adaptors: one using NAT
The latest version of `boot2docker` sets up two network adapters: one using NAT
to allow the VM to download images and files from the Internet, and one host only
network adaptor to which the container's ports will be exposed on.
network adapter on which the container's ports will be exposed.
If you run a container with an exposed port:
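For instance (the image name is illustrative; the address is boot2docker's usual default, which may differ on your machine):

```shell
# Publish container port 80 on the VM; this needs the boot2docker VM
# running, so the commands are shown as you would type them:
#   docker run --rm -i -t -p 80:80 nginx
# The service is then reachable from your host at the VM's host-only IP:
#   curl http://192.168.59.103/
```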
@ -103,6 +92,17 @@ If you want to share container ports with other computers on your LAN, you will
need to set up [NAT adaptor based port forwarding](
https://github.com/boot2docker/boot2docker/blob/master/doc/WORKAROUNDS.md)
# Further details
The Boot2Docker management tool provides some commands:
```
$ ./boot2docker
Usage: ./boot2docker [<options>]
{help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|delete|download|version}
[<args>]
```
Continue with the [User Guide](/userguide/).
For further information or to report issues, please see the [Boot2Docker site](http://boot2docker.io).


@ -48,5 +48,6 @@ Docker daemon.
$ sudo usermod -G docker <username>
**Done!**
Now continue with the [*Hello World*](
/examples/hello_world/#hello-world) example.
Continue with the [User Guide](/userguide/).


@ -56,7 +56,8 @@ Now let's verify that Docker is working.
$ sudo docker run -i -t fedora /bin/bash
**Done!**
Now continue with the [*Hello World*](/examples/hello_world/#hello-world) example.
Continue with the [User Guide](/userguide/).
## Issues?


@ -24,5 +24,7 @@ page_keywords: IBM SoftLayer, virtualization, cloud, docker, documentation, inst
7. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
instructions.
Continue with the [*Hello World*](
/examples/hello_world/#hello-world) example.
## What next?
Continue with the [User Guide](/userguide/).


@ -111,8 +111,7 @@ Now verify that the installation has worked by downloading the
Type `exit` to exit
**Done!**, now continue with the [*Hello
World*](/examples/hello_world/#hello-world) example.
**Done!** Continue with the [User Guide](/userguide/).
## Ubuntu Raring 13.04 and Saucy 13.10 (64 bit)
@ -159,8 +158,7 @@ Now verify that the installation has worked by downloading the
Type `exit` to exit
**Done!**, now continue with the [*Hello
World*](/examples/hello_world/#hello-world) example.
**Done!** Now continue with the [User Guide](/userguide/).
### Giving non-root access


@ -7,7 +7,7 @@ page_keywords: docker, introduction, documentation, about, technology, understan
**What is Docker?**
Docker is a platform for developing, shipping, and running applications.
Docker is designed to deliver your applications faster. With Docker you
Docker is designed to deliver your applications faster. With Docker you
can separate your applications from your infrastructure AND treat your
infrastructure like a managed application. We want to help you ship code
faster, test faster, deploy faster and shorten the cycle between writing
@ -317,15 +317,12 @@ Zones.
## Next steps
### Learning how to use Docker
Visit [Working with Docker](working-with-docker.md).
### Installing Docker
Visit the [installation](/installation/#installation) section.
### Get the whole story
### The Docker User Guide
[Learn how to use Docker](/userguide/).
[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)


@ -1,292 +0,0 @@
page_title: Introduction to working with Docker
page_description: Introduction to working with Docker and Docker commands.
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
# An Introduction to working with Docker
**Getting started with Docker**
> **Note:**
> If you would like to see how a specific command
> works, check out the glossary of all available client
> commands on our [Commands Reference](/reference/commandline/cli).
## Introduction
In the [Understanding Docker](understanding-docker.md) section we
covered the components that make up Docker, learned about the underlying
technology and saw *how* everything works.
Now, let's get an introduction to the basics of interacting with Docker.
> **Note:**
> This page assumes you have a host with a running Docker
> daemon and access to a Docker client. To see how to install Docker on
> a variety of platforms see the [installation
> section](/installation/#installation).
## How to use the client
The client provides you with a command-line interface to Docker. It is
accessed by running the `docker` binary.
> **Tip:**
> The below instructions can be considered a summary of our
> [interactive tutorial](https://www.docker.io/gettingstarted). If you
> prefer a more hands-on approach without installing anything, why not
> give that a shot and check out the
> [tutorial](https://www.docker.io/gettingstarted).
The `docker` client usage is pretty simple. Each action you can take
with Docker is a command and each command can take a series of
flags and arguments.
# Usage: [sudo] docker [flags] [command] [arguments] ..
# Example:
$ docker run -i -t ubuntu /bin/bash
## Using the Docker client
Let's get started with the Docker client by running our first Docker
command. We're going to use the `docker version` command to return
version information on the currently installed Docker client and daemon.
# Usage: [sudo] docker version
# Example:
$ docker version
This command will not only provide you with the version of the Docker
client and daemon you are using, but also the version of Go (the
programming language powering Docker).
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
Server version: 0.8.0
Git commit (server): cc3a8c8
Go version (server): go1.2
Last stable version: 0.8.0
### Seeing what the Docker client can do
We can see all of the commands available to us with the Docker client by
running the `docker` binary without any options.
# Usage: [sudo] docker
# Example:
$ docker
You will see a list of all currently available commands.
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
. . .
### Seeing Docker command usage
You can also zoom in and review the usage for specific Docker commands.
Try typing `docker` followed by a `[command]` to see the usage for that
command:
# Usage: [sudo] docker [command] [--help]
# Example:
$ docker attach
Help output . . .
Or you can pass the `--help` flag to the `docker` binary.
$ docker attach --help
This will display the help text and all available flags:
Usage: docker attach [OPTIONS] CONTAINER
Attach to a running container
--no-stdin=false: Do not attach stdin
--sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
## Working with images
Let's get started with using Docker by working with Docker images, the
building blocks of Docker containers.
### Docker Images
As we've discovered, a Docker image is a read-only template that we build
containers from. Every Docker container is launched from an image. You can
use both images provided by Docker, such as the base `ubuntu` image,
as well as images built by others. For example we can build an image that
runs Apache and our own web application as a starting point to launch containers.
### Searching for images
To search for Docker images we use the `docker search` command. The
`docker search` command returns a list of all images that match your
search criteria, together with some useful information about that image.
This information includes social metrics like how many other people like
the image: we call these "likes" *stars*. We also tell you if an image
is *trusted*. A *trusted* image is built from a known source and allows
you to introspect in greater detail how the image is constructed.
# Usage: [sudo] docker search [image name]
# Example:
$ docker search nginx
NAME DESCRIPTION STARS OFFICIAL TRUSTED
dockerfile/nginx Trusted Nginx (http://nginx.org/) Build 6 [OK]
paintedfox/nginx-php5 A docker image for running Nginx with PHP5. 3 [OK]
dockerfiles/django-uwsgi-nginx Dockerfile and configuration files to buil... 2 [OK]
. . .
> **Note:**
> To learn more about trusted builds, check out
> [this](http://blog.docker.io/2013/11/introducing-trusted-builds) blog
> post.
### Downloading an image
Once we find an image we'd like to download, we can pull it down from
[Docker.io](https://index.docker.io) using the `docker pull` command.
# Usage: [sudo] docker pull [image name]
# Example:
$ docker pull dockerfile/nginx
Pulling repository dockerfile/nginx
0ade68db1d05: Pulling dependent layers
27cf78414709: Download complete
b750fe79269d: Download complete
. . .
As you can see, Docker will download, one by one, all the layers forming
the image.
### Listing available images
You may already have some images you've pulled down or built yourself
and you can use the `docker images` command to see the images
available to you locally.
# Usage: [sudo] docker images
# Example:
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
myUserName/nginx latest a0d6c70867d2 41 seconds ago 578.8 MB
nginx latest 173c2dd28ab2 3 minutes ago 578.8 MB
dockerfile/nginx latest 0ade68db1d05 3 weeks ago 578.8 MB
### Building our own images
You can build your own images using a `Dockerfile` and the `docker
build` command. The `Dockerfile` is very flexible and provides a
powerful set of instructions for building applications into Docker
images. To learn more about the `Dockerfile` see the [`Dockerfile`
Reference](/reference/builder/) and [tutorial](https://www.docker.io/learn/dockerfile/).
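As a quick sketch, here is a hypothetical `Dockerfile` that layers a web
application on top of the `dockerfile/nginx` image we pulled earlier (the
`src/` directory and the site path are assumptions for illustration only):

    # Start from the nginx image pulled in the example above
    FROM dockerfile/nginx
    # Copy our (hypothetical) site content into the image
    ADD src /var/www/site
    # Run nginx in the foreground when a container starts
    CMD ["/usr/sbin/nginx"]

You would then build and tag the image from the directory containing the
`Dockerfile`:

    $ docker build -t myUserName/nginx .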
## Working with containers
### Docker Containers
Docker containers run your applications and are built from Docker
images. In order to create or start a container, you need an image. This
could be the base `ubuntu` image or an image built and shared with you
or an image you've built yourself.
### Running a new container from an image
The easiest way to create a new container is to *run* one from an image
using the `docker run` command.
# Usage: [sudo] docker run [arguments] ..
# Example:
$ docker run -d --name nginx_web nginx /usr/sbin/nginx
25137497b2749e226dd08f84a17e4b2be114ddf4ada04125f130ebfe0f1a03d3
This will create a new container from an image called `nginx` which will
launch the command `/usr/sbin/nginx` when the container is run. We've
also given our container a name, `nginx_web`. When the container is run
Docker will return a container ID, a long string that uniquely
identifies our container. We can use either the container's name or its
ID to work with it.
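As an aside, you don't need to type the full 64-character ID: Docker also
accepts any unique prefix, conventionally the 12-character short ID shown
by `docker ps`. Deriving the short form from the ID returned in the
example above is trivial (a sketch in plain shell, no Docker required):

```shell
# Full container ID returned by `docker run` in the example above
FULL_ID=25137497b2749e226dd08f84a17e4b2be114ddf4ada04125f130ebfe0f1a03d3

# The short ID is simply the first 12 characters
SHORT_ID=$(echo "$FULL_ID" | cut -c1-12)
echo "$SHORT_ID"    # 25137497b274
```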
Containers can be run in two modes:

* Interactive
* Daemonized
An interactive container runs in the foreground and you can connect to
it and interact with it, for example sign into a shell on that
container. A daemonized container runs in the background.
A container will run as long as the process you have launched inside it
is running, for example if the `/usr/sbin/nginx` process stops running
the container will also stop.
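The two modes look like this in practice (a sketch reusing the `nginx`
image from above; the `-t -i` flags allocate a pseudo-TTY and keep
`stdin` open, and the output shown is illustrative):

    # Interactive: run a shell in the foreground and attach to it
    $ docker run -t -i nginx /bin/bash
    root@687dc04b2e71:/#

    # Daemonized: run in the background; Docker prints the container ID
    $ docker run -d nginx /usr/sbin/nginx
    25137497b2749e226dd08f84a17e4b2be114ddf4ada04125f130ebfe0f1a03d3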
### Listing containers
We can see a list of all the containers on our host using the `docker
ps` command. By default the `docker ps` command only shows running
containers. But we can also add the `-a` flag to show *all* containers:
both running and stopped.
# Usage: [sudo] docker ps [-a]
# Example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
842a50a13032        dockerfile/nginx:latest   nginx   35 minutes ago   Up 30 minutes   0.0.0.0:80->80/tcp   nginx_web
### Stopping a container
You can use the `docker stop` command to stop an active container. This
will gracefully end the active process.
# Usage: [sudo] docker stop [container ID]
# Example:
$ docker stop nginx_web
nginx_web
If the `docker stop` command succeeds it will return the name of
the container it has stopped.
> **Note:**
> If you want to stop a container more aggressively you can use the
> `docker kill` command.
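Using it looks just like `docker stop`, except that the main process in
the container is sent `SIGKILL` immediately rather than being allowed to
shut down gracefully:

    # Usage: [sudo] docker kill [container ID]
    # Example:
    $ docker kill nginx_web
    nginx_web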
### Starting a Container
Stopped containers can be started again.
# Usage: [sudo] docker start [container ID]
# Example:
$ docker start nginx_web
nginx_web
If the `docker start` command succeeds it will return the name of the
freshly started container.
## Next steps
Here we've learned the basics of how to interact with Docker images and
how to run and work with our first container.
### Understanding Docker
Visit [Understanding Docker](understanding-docker.md).
### Installing Docker
Visit the [installation](/installation/#installation) section.
### Get the whole story
[https://www.docker.io/the_whole_story/](https://www.docker.io/the_whole_story/)

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
## 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
## 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
# 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
# 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
# 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -7,9 +7,8 @@ page_keywords: API, Docker, rcli, REST, documentation
# 1. Brief introduction
- The Remote API has replaced rcli
- The daemon listens on `unix:///var/run/docker.sock` but you can
[*Bind Docker to another host/port or a Unix socket*](
/use/basics/#bind-docker).
- The daemon listens on `unix:///var/run/docker.sock` but you can bind
Docker to another host/port or a Unix socket.
- The API tends to be REST, but for some complex commands, like `attach`
or `pull`, the HTTP connection is hijacked to transport `stdout, stdin`
and `stderr`

View file

@ -57,7 +57,7 @@ accelerating `docker build` significantly (indicated by `Using cache`):
When you're done with your build, you're ready to look into
[*Pushing a repository to its registry*](
/use/workingwithrepository/#image-push).
/userguide/dockerrepos/#image-push).
## Format
@ -95,7 +95,7 @@ The `FROM` instruction sets the [*Base Image*](/terms/image/#base-image-def)
for subsequent instructions. As such, a valid Dockerfile must have `FROM` as
its first instruction. The image can be any valid image it is especially easy
to start by **pulling an image** from the [*Public Repositories*](
/use/workingwithrepository/#using-public-repositories).
/userguide/dockerrepos/#using-public-repositories).
`FROM` must be the first non-comment instruction in the Dockerfile.
@ -200,10 +200,8 @@ default specified in CMD.
The `EXPOSE` instructions informs Docker that the container will listen on the
specified network ports at runtime. Docker uses this information to interconnect
containers using links (see
[*links*](/use/working_with_links_names/#working-with-links-names)),
and to setup port redirection on the host system (see [*Redirect Ports*](
/use/port_redirection/#port-redirection)).
containers using links (see the [Docker User
Guide](/userguide/dockerlinks)).
## ENV
@ -380,7 +378,7 @@ and mark it as holding externally mounted volumes from native host or other
containers. The value can be a JSON array, `VOLUME ["/var/log/"]`, or a plain
string, `VOLUME /var/log`. For more information/examples and mounting
instructions via the Docker client, refer to [*Share Directories via Volumes*](
/use/working_with_volumes/#volume-def) documentation.
/userguide/dockervolumes/#volume-def) documentation.
## USER

View file

@ -602,15 +602,6 @@ contains complex json object, so to grab it as JSON, you use
The main process inside the container will be sent SIGKILL, or any
signal specified with option `--signal`.
### Known Issues (kill)
- [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
that `docker kill` may leave directories behind
and make it difficult to remove the container.
- [Issue 3844](https://github.com/dotcloud/docker/issues/3844) lxc
1.0.0 beta3 removed `lcx-kill` which is used by
Docker versions before 0.8.0; see the issue for a workaround.
## load
Usage: docker load
@ -864,11 +855,9 @@ of all containers.
The `docker run` command can be used in combination with `docker commit` to
[*change the command that a container runs*](#commit-an-existing-container).
See [*Redirect Ports*](/use/port_redirection/#port-redirection)
for more detailed information about the `--expose`, `-p`, `-P` and `--link`
parameters, and [*Link Containers*](
/use/working_with_links_names/#working-with-links-names) for specific
examples using `--link`.
See the [Docker User Guide](/userguide/dockerlinks/) for more detailed
information about the `--expose`, `-p`, `-P` and `--link` parameters,
and linking containers.
### Known Issues (run volumes-from)
@ -934,16 +923,16 @@ manipulate the host's docker daemon.
$ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash
This binds port `8080` of the container to port `80` on `127.0.0.1` of the host
machine. [*Redirect Ports*](/use/port_redirection/#port-redirection)
This binds port `8080` of the container to port `80` on `127.0.0.1` of
the host machine. The [Docker User Guide](/userguide/dockerlinks/)
explains in detail how to manipulate ports in Docker.
$ sudo docker run --expose 80 ubuntu bash
This exposes port `80` of the container for use within a link without publishing
the port to the host system's interfaces. [*Redirect Ports*](
/use/port_redirection/#port-redirection) explains in detail how to
manipulate ports in Docker.
This exposes port `80` of the container for use within a link without
publishing the port to the host system's interfaces. The [Docker User
Guide](/userguide/dockerlinks) explains in detail how to manipulate
ports in Docker.
$ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
@ -1097,7 +1086,7 @@ Search [Docker.io](https://index.docker.io) for images
-t, --trusted=false Only show trusted builds
See [*Find Public Images on Docker.io*](
/use/workingwithrepository/#find-public-images-on-dockerio) for
/userguide/dockerrepos/#find-public-images-on-dockerio) for
more details on finding shared images from the commandline.
## start
@ -1130,7 +1119,7 @@ grace period, SIGKILL
You can group your images together using names and tags, and then upload
them to [*Share Images via Repositories*](
/use/workingwithrepository/#working-with-the-repository).
/userguide/dockerrepos/#working-with-the-repository).
## top

View file

@ -11,21 +11,17 @@ The [*Image*](/terms/image/#image-def) which starts the process may
define defaults related to the binary to run, the networking to expose,
and more, but `docker run` gives final control to
the operator who starts the container from the image. That's the main
reason [*run*](/commandline/cli/#cli-run) has more options than any
reason [*run*](/reference/commandline/cli/#cli-run) has more options than any
other `docker` command.
Every one of the [*Examples*](/examples/#example-list) shows
running containers, and so here we try to give more in-depth guidance.
## General Form
As you've seen in the [*Examples*](/examples/#example-list), the
basic run command takes this form:
The basic `docker run` command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
To learn how to interpret the types of `[OPTIONS]`,
see [*Option types*](/commandline/cli/#cli-options).
see [*Option types*](/reference/commandline/cli/#cli-options).
The list of `[OPTIONS]` breaks down into two groups:
@ -75,9 +71,9 @@ default foreground mode:
In detached mode (`-d=true` or just `-d`), all I/O should be done
through network connections or shared volumes because the container is
no longer listening to the commandline where you executed `docker run`.
no longer listening to the command line where you executed `docker run`.
You can reattach to a detached container with `docker`
[*attach*](commandline/cli/#attach). If you choose to run a
[*attach*](/reference/commandline/cli/#attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm` option.
### Foreground
@ -85,7 +81,7 @@ container in the detached mode, then you cannot use the `--rm` option.
In foreground mode (the default when `-d` is not specified), `docker run`
can start the process in the container and attach the console to the process's
standard input, output, and standard error. It can even pretend to be a TTY
(this is what most commandline executables expect) and pass along signals. All
(this is what most command line executables expect) and pass along signals. All
of that is configurable:
-a=[] : Attach to ``stdin``, ``stdout`` and/or ``stderr``
@ -121,11 +117,11 @@ assign a name to the container with `--name` then
the daemon will also generate a random string name too. The name can
become a handy way to add meaning to a container since you can use this
name when defining
[*links*](/use/working_with_links_names/#working-with-links-names)
[*links*](/userguide/dockerlinks/#working-with-links-names)
(or any other place you need to identify a container). This works for
both background and foreground Docker containers.
### PID Equivalent
### PID Equivalent
And finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
@ -256,7 +252,7 @@ familiar with using LXC directly.
## Overriding Dockerfile Image Defaults
When a developer builds an image from a [*Dockerfile*](builder/#dockerbuilder)
When a developer builds an image from a [*Dockerfile*](/reference/builder/#dockerbuilder)
or when she commits it, the developer can set a number of default parameters
that take effect when the image starts up as a container.
@ -425,7 +421,7 @@ mechanism to communicate with a linked container by its alias:
--volumes-from="": Mount all volumes from the given container(s)
The volumes commands are complex enough to have their own documentation in
section [*Share Directories via Volumes*](/use/working_with_volumes/#volume-def).
section [*Share Directories via Volumes*](/userguide/dockervolumes/#volume-def).
A developer can define one or more `VOLUME`s associated with an image, but only the
operator can give access from one container to another (or from a container to a
volume mounted on the host).

View file

@ -8,18 +8,19 @@ page_keywords: containers, lxc, concepts, explanation, image, container
![](/terms/images/docker-filesystems-busyboxrw.png)
Once you start a process in Docker from an [*Image*](image.md), Docker fetches
the image and its [*Parent Image*](image.md), and repeats the process until it
reaches the [*Base Image*](image.md/#base-image-def). Then the
[*Union File System*](layer.md) adds a read-write layer on top. That read-write
layer, plus the information about its [*Parent Image*](image.md) and some
additional information like its unique id, networking configuration, and
resource limits is called a **container**.
Once you start a process in Docker from an [*Image*](/terms/image), Docker
fetches the image and its [*Parent Image*](/terms/image), and repeats the
process until it reaches the [*Base Image*](/terms/image/#base-image-def). Then
the [*Union File System*](/terms/layer) adds a read-write layer on top. That
read-write layer, plus the information about its [*Parent
Image*](/terms/image)
and some additional information like its unique id, networking
configuration, and resource limits is called a **container**.
## Container State
Containers can change, and so they have state. A container may be **running** or
**exited**.
Containers can change, and so they have state. A container may be
**running** or **exited**.
When a container is running, the idea of a "container" also includes a
tree of processes running on the CPU, isolated from the other processes
@ -31,13 +32,13 @@ processes restart from scratch (their memory state is **not** preserved
in a container), but the file system is just as it was when the
container was stopped.
You can promote a container to an [*Image*](image.md) with `docker commit`.
You can promote a container to an [*Image*](/terms/image) with `docker commit`.
Once a container is an image, you can use it as a parent for new containers.
## Container IDs
All containers are identified by a 64 hexadecimal digit string
(internally a 256bit value). To simplify their use, a short ID of the
first 12 characters can be used on the commandline. There is a small
first 12 characters can be used on the command line. There is a small
possibility of short id collisions, so the docker server will always
return the long ID.

View file

@ -8,10 +8,10 @@ page_keywords: containers, lxc, concepts, explanation, image, container
![](/terms/images/docker-filesystems-debian.png)
In Docker terminology, a read-only [*Layer*](../layer/#layer-def) is
In Docker terminology, a read-only [*Layer*](/terms/layer/#layer-def) is
called an **image**. An image never changes.
Since Docker uses a [*Union File System*](../layer/#ufs-def), the
Since Docker uses a [*Union File System*](/terms/layer/#ufs-def), the
processes think the whole file system is mounted read-write. But all the
changes go to the top-most writeable layer, and underneath, the original
file in the read-only image is unchanged. Since images don't change,

View file

@ -7,7 +7,7 @@ page_keywords: containers, lxc, concepts, explanation, image, container
## Introduction
In a traditional Linux boot, the kernel first mounts the root [*File
System*](../filesystem/#filesystem-def) as read-only, checks its
System*](/terms/filesystem/#filesystem-def) as read-only, checks its
integrity, and then switches the whole rootfs volume to read-write mode.
## Layer

View file

@ -6,9 +6,9 @@ page_keywords: containers, concepts, explanation, image, repository, container
## Introduction
A Registry is a hosted service containing [*repositories*](
../repository/#repository-def) of [*images*](../image/#image-def) which
responds to the Registry API.
A Registry is a hosted service containing
[*repositories*](/terms/repository/#repository-def) of
[*images*](/terms/image/#image-def) which responds to the Registry API.
The default registry can be accessed using a browser at
[Docker.io](http://index.docker.io) or using the
@ -16,5 +16,5 @@ The default registry can be accessed using a browser at
## Further Reading
For more information see [*Working with Repositories*](
../use/workingwithrepository/#working-with-the-repository)
For more information see [*Working with
Repositories*](/userguide/dockerrepos/#working-with-the-repository)

View file

@ -7,7 +7,7 @@ page_keywords: containers, concepts, explanation, image, repository, container
## Introduction
A repository is a set of images either on your local Docker server, or
shared, by pushing it to a [*Registry*](../registry/#registry-def)
shared, by pushing it to a [*Registry*](/terms/registry/#registry-def)
server.
Images can be associated with a repository (or multiple) by giving them
@ -31,5 +31,5 @@ If you create a new repository which you want to share, you will need to
set at least the `user_name`, as the `default` blank `user_name` prefix is
reserved for official Docker images.
For more information see [*Working with Repositories*](
../use/workingwithrepository/#working-with-the-repository)
For more information see [*Working with
Repositories*](/userguide/dockerrepos/#working-with-the-repository)

View file

@ -1,13 +0,0 @@
# Use
## Contents:
- [First steps with Docker](basics/)
- [Share Images via Repositories](workingwithrepository/)
- [Redirect Ports](port_redirection/)
- [Configure Networking](networking/)
- [Automatically Start Containers](host_integration/)
- [Share Directories via Volumes](working_with_volumes/)
- [Link Containers](working_with_links_names/)
- [Link via an Ambassador Container](ambassador_pattern_linking/)
- [Using Puppet](puppet/)

View file

@ -1,127 +0,0 @@
page_title: Redirect Ports
page_description: usage about port redirection
page_keywords: Usage, basic port, docker, documentation, examples
# Redirect Ports
## Introduction
Interacting with a service is commonly done through a connection to a
port. When this service runs inside a container, one can connect to the
port after finding the IP address of the container as follows:
# Find IP address of container with ID <container_id>
$ docker inspect --format='{{.NetworkSettings.IPAddress}}' <container_id>
However, this IP address is local to the host system and the container
port is not reachable by the outside world. Furthermore, even if the
port is used locally, e.g. by another container, this method is tedious
as the IP address of the container changes every time it starts.
Docker addresses these two problems and gives a simple and robust way to
access services running inside containers.
To allow non-local clients to reach the service running inside the
container, Docker provides ways to bind the container port to an
interface of the host system. To simplify communication between
containers, Docker provides the linking mechanism.
## Auto map all exposed ports on the host
To bind all the exposed container ports to the host automatically, use
`docker run -P <imageid>`. The mapped host ports will be auto-selected
from a pool of unused ports (49000..49900), and you will need to use
`docker ps`, `docker inspect <container_id>` or `docker port
<container_id> <port>` to determine what they are.
## Binding a port to a host interface
To bind a port of the container to a specific interface of the host
system, use the `-p` parameter of the `docker run` command:
# General syntax
$ docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image> <cmd>
When no host interface is provided, the port is bound to all available
interfaces of the host machine (aka INADDR_ANY, or 0.0.0.0). When no
host port is provided, one is dynamically allocated. The possible
combinations of options for TCP port are the following:
# Bind TCP port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine.
$ docker run -p 127.0.0.1:80:8080 <image> <cmd>
# Bind TCP port 8080 of the container to a dynamically allocated TCP port on 127.0.0.1 of the host machine.
$ docker run -p 127.0.0.1::8080 <image> <cmd>
# Bind TCP port 8080 of the container to TCP port 80 on all available interfaces of the host machine.
$ docker run -p 80:8080 <image> <cmd>
# Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces of the host machine.
$ docker run -p 8080 <image> <cmd>
UDP ports can also be bound by adding a trailing `/udp`. All the
combinations described for TCP work. Here is only one example:
# Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
$ docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
The command `docker port` lists the interface and port on the host
machine bound to a given container port. It is useful when using
dynamically allocated ports:
# Bind to a dynamically allocated port
$ docker run -p 127.0.0.1::8080 --name dyn-bound <image> <cmd>
# Lookup the actual port
$ docker port dyn-bound 8080
127.0.0.1:49160
## Linking a container
Communication between two containers can also be established in a
Docker-specific way called linking.
To briefly present the concept of linking, let us consider two
containers: `server`, containing the service, and `client`, accessing
the service. Once `server` is running, `client` is started and links to
server. Linking sets environment variables in `client` giving it some
information about `server`. In this sense, linking is a method of
service discovery.
Let us now get back to our topic of interest; communication between the
two containers. We mentioned that the tricky part about this
communication was that the IP address of `server` was not fixed.
Therefore, some of the environment variables are going to be used to
inform `client` about this IP address. This process, called exposure, is
possible because the `client` is started after the `server` has been started.
Here is a full example. On `server`, the port of interest is exposed.
The exposure is done either through the `--expose` parameter to the
`docker run` command, or the `EXPOSE` build command in a `Dockerfile`:
# Expose port 80
$ docker run --expose 80 --name server <image> <cmd>
The `client` then links to the `server`:
# Link
$ docker run --name client --link server:linked-server <image> <cmd>
Here `client` locally refers to `server` as `linked-server`. The following
environment variables, among others, are available on `client`:
# The default protocol, ip, and port of the service running in the container
$ LINKED-SERVER_PORT=tcp://172.17.0.8:80
# A specific protocol, ip, and port of various services
$ LINKED-SERVER_PORT_80_TCP=tcp://172.17.0.8:80
$ LINKED-SERVER_PORT_80_TCP_PROTO=tcp
$ LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8
$ LINKED-SERVER_PORT_80_TCP_PORT=80
This tells `client` that a service is running on port 80 of `server` and
that `server` is accessible at the IP address `172.17.0.8`:
> **Note:**
> Using the `-p` parameter also exposes the port.

View file

@ -1,139 +0,0 @@
page_title: Link Containers
page_description: How to create and use both links and names
page_keywords: Examples, Usage, links, linking, docker, documentation, examples, names, name, container naming
# Link Containers
## Introduction
From version 0.6.5 you are now able to `name` a container and `link` it
to another container by referring to its name. This will create a
parent -> child relationship where the parent container can see selected
information about its child.
## Container Naming
You can now name your container by using the `--name` flag. If no name
is provided, Docker will automatically generate a name. You can see this
name using the `docker ps` command.
# format is "sudo docker run --name <container_name> <image_name> <command>"
$ sudo docker run --name test ubuntu /bin/bash
# the "-a" flag shows all containers; only running containers are shown by default
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2522602a0d99 ubuntu:12.04 /bin/bash 14 seconds ago Exit 0 test
## Links: service discovery for docker
Links allow containers to discover and securely communicate with each
other by using the flag `--link name:alias`. Inter-container
communication can be disabled with the daemon flag `--icc=false`. With
this flag set to `false`, Container A cannot access Container B unless
explicitly allowed via a link. This is a huge win for securing your
containers. When two containers are linked together Docker creates a
parent child relationship between the containers. The parent container
will be able to access information via environment variables of the
child such as name, exposed ports, IP and other selected environment
variables.
When linking two containers Docker will use the exposed ports of the
container to create a secure tunnel for the parent to access. If a
database container only exposes port 8080 then the linked container will
only be allowed to access port 8080 and nothing else if inter-container
communication is set to false.
For example, there is an image called `crosbymichael/redis` that exposes
the port 6379 and starts the Redis server. Let's name the container
`redis`, base it on that image, and run it as a daemon.
$ sudo docker run -d --name redis crosbymichael/redis
We can issue all the commands that you would expect using the name
`redis`: start, stop, attach, and so on. The name
also allows us to link other containers into this one.
Next, we can start a new web application that has a dependency on Redis
and apply a link to connect both containers. If you noticed when running
our Redis server we did not use the `-p` flag to publish the Redis port
to the host system. Redis exposed port 6379 and this is all we need to
establish a link.
$ sudo docker run -t -i --link redis:db --name webapp ubuntu bash
When you specified `--link redis:db` you are telling Docker to link the
container named `redis` into this new container with the alias `db`.
Environment variables are prefixed with the alias so that the parent
container can access network and environment information from the
containers that are linked into it.
If we inspect the environment variables of the second container, we
would see all the information about the child container.
root@4c01db0b339c:/# env
HOSTNAME=4c01db0b339c
DB_NAME=/webapp/db
TERM=xterm
DB_PORT=tcp://172.17.0.8:6379
DB_PORT_6379_TCP=tcp://172.17.0.8:6379
DB_PORT_6379_TCP_PROTO=tcp
DB_PORT_6379_TCP_ADDR=172.17.0.8
DB_PORT_6379_TCP_PORT=6379
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
HOME=/
container=lxc
_=/usr/bin/env
root@4c01db0b339c:/#
Accessing the network information along with the environment of the
child container allows us to easily connect to the Redis service on the
specific IP and port in the environment.
> **Note**:
> These Environment variables are only set for the first process in the
> container. Similarly, some daemons (such as `sshd`)
> will scrub them when spawning shells for connection.
You can work around this by storing the initial `env` in a file, or
looking at `/proc/1/environ`.
Running `docker ps` shows the 2 containers, and the `webapp/db` alias
name for the Redis container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp
d7886598dbe2 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db
## Resolving Links by Name
> *Note:* New in version v0.11.
Linked containers can be accessed by hostname. Hostnames are mapped by
appending entries to `/etc/hosts` using the linked container's alias.
For example, linking a container using `--link redis:db` will generate
the following `/etc/hosts` file:
root@6541a75d44a0:/# cat /etc/hosts
172.17.0.3 6541a75d44a0
172.17.0.2 db
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@6541a75d44a0:/#
Using this mechanism, you can communicate with the linked container by
name:
root@6541a75d44a0:/# echo PING | redis-cli -h db
PONG
root@6541a75d44a0:/#


page_title: Share Directories via Volumes
page_description: How to create and share volumes
page_keywords: Examples, Usage, volume, docker, documentation, examples
# Share Directories via Volumes
## Introduction
A *data volume* is a specially-designated directory within one or more
containers that bypasses the [*Union File
System*](/terms/layer/#ufs-def) to provide several useful features for
persistent or shared data:
- **Data volumes can be shared and reused between containers:**
This is the feature that makes data volumes so powerful. You can
use it for anything from hot database upgrades to custom backup or
replication tools. See the example below.
- **Changes to a data volume are made directly:**
Changes are made without the overhead of a copy-on-write mechanism.
This is good for very large files.
- **Changes to a data volume will not be included at the next commit:**
Because they are not recorded as regular filesystem changes in the
top layer of the [*Union File System*](/terms/layer/#ufs-def).
- **Volumes persist until no containers use them:**
Volumes are a reference-counted resource. The container does not need to be
running to share its volumes, but running it can help protect them
against accidental removal via `docker rm`.
Each container can have zero or more data volumes.
## Getting Started
Using data volumes is as simple as adding a `-v` parameter to the
`docker run` command. The `-v` parameter can be used more than once in
order to create more volumes within the new container. To create a new
container with two new volumes:
$ docker run -v /var/volume1 -v /var/volume2 busybox true
This command will create a new container with two new volumes, and the
container will exit instantly (`true` is pretty much the smallest,
simplest program that you can run). You can then mount its volumes in
any other container using the `run` command's `--volumes-from` option,
irrespective of whether the volume container is running or not.
Or, you can use the `VOLUME` instruction in a `Dockerfile` to add one or
more new volumes to any container created from that image:
# BUILD-USING: $ docker build -t data .
# RUN-USING: $ docker run --name DATA data
FROM busybox
VOLUME ["/var/volume1", "/var/volume2"]
CMD ["/bin/true"]
### Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between
containers, or want to use from non-persistent containers, it's best to
create a named Data Volume Container, and then to mount the data from
it.
Create a named container with volumes to share (`/var/volume1` and
`/var/volume2`):
$ docker run -v /var/volume1 -v /var/volume2 --name DATA busybox true
Then mount those data volumes into your application containers:
$ docker run -t -i --rm --volumes-from DATA --name client1 ubuntu bash
You can use multiple `--volumes-from` parameters to bring together
multiple data volumes from multiple containers.
Interestingly, you can mount the volumes that came from the `DATA`
container in yet another container via the `client1` middleman
container:
$ docker run -t -i --rm --volumes-from client1 --name client2 ubuntu bash
This allows you to abstract the actual data source from users of that
data, similar to [*Ambassador Pattern Linking*](
../ambassador_pattern_linking/#ambassador-pattern-linking).
If you remove containers that mount volumes, including the initial DATA
container, or the middleman, the volumes will not be deleted until there
are no containers still referencing those volumes. This allows you to
upgrade, or effectively migrate data volumes between containers.
### Mount a Host Directory as a Container Volume
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
You must specify an absolute path for `host-dir`. If `host-dir` is
missing from the command, then Docker creates a new volume. If
`host-dir` is present but points to a non-existent directory on the
host, Docker will automatically create this directory and use it as the
source of the bind-mount.
Note that this is not available from a `Dockerfile`, because images are
meant to be portable and shareable. `host-dir` volumes are entirely
host-dependent and might not work on any other machine.
For example:
# Usage:
# sudo docker run [OPTIONS] -v /(dir. on host):/(dir. in container):(Read-Write or Read-Only) [ARG..]
# Example:
$ sudo docker run -i -t -v /var/log:/logs_from_host:ro ubuntu bash
The command above mounts the host directory `/var/log` into the
container with *read only* permissions as `/logs_from_host`.
New in version v0.5.0.
### Note for OS X users and remote daemon users
OS X users run `boot2docker` to create a minimalist virtual machine
running the docker daemon. That virtual machine then launches docker
commands on behalf of the OS X command line. This means that `host
directories` refer to directories in the `boot2docker` virtual machine,
not the OS X filesystem.
Similarly, whenever the docker daemon is on a remote machine, the
`host directories` always refer to directories on the daemon's machine.
### Backup, restore, or migrate data volumes
You cannot back up volumes using `docker export`, `docker save` and
`docker cp` because they are external to images. Instead you can use
`--volumes-from` to start a new container that can access the
data-container's volume. For example:
$ sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
- `--rm`:
remove the container when it exits
- `--volumes-from DATA`:
attach to the volumes shared by the `DATA` container
- `-v $(pwd):/backup`:
bind mount the current directory into the container; to write the tar file to
- `busybox`:
a small, simple image - good for quick maintenance
- `tar cvf /backup/backup.tar /data`:
creates an uncompressed tar file of all the files in the `/data` directory
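For illustration, the archive and unpack steps performed by `tar` above can be sketched with Python's `tarfile` module. This is a hypothetical helper, not part of Docker; it assumes the volume's directory is reachable on the local filesystem:

```python
import os
import tarfile
import tempfile

def backup_volume(data_dir, backup_tar):
    """Like `tar cvf /backup/backup.tar /data`: archive data_dir as 'data'."""
    with tarfile.open(backup_tar, "w") as tar:
        tar.add(data_dir, arcname="data")

def restore_volume(backup_tar, target_dir):
    """Like `tar xvf /backup/backup.tar`: unpack the archive into target_dir."""
    with tarfile.open(backup_tar) as tar:
        tar.extractall(target_dir)

# Round-trip demonstration with throwaway directories:
src = tempfile.mkdtemp()
with open(os.path.join(src, "sven.txt"), "w") as f:
    f.write("hello")
tar_path = os.path.join(tempfile.mkdtemp(), "backup.tar")
backup_volume(src, tar_path)
dst = tempfile.mkdtemp()
restore_volume(tar_path, dst)
print(os.listdir(os.path.join(dst, "data")))  # ['sven.txt']
```

The container-based approach above is preferable in practice, since it works even when the volume lives inside the daemon's storage and not at a path you control.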
Then to restore to the same container, or another that you've made
elsewhere:
# create a new data container
$ sudo docker run -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
You can use the basic techniques above to automate backup, migration and
restore testing using your preferred tools.
## Known Issues
- [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
"lxc-start: Permission denied - failed to mount" could indicate a
permissions problem with AppArmor. Please see the issue for a
workaround.
- [Issue 2528](https://github.com/dotcloud/docker/issues/2528): the
busybox container is used to make the resulting container as small
and simple as possible - whenever you need to interact with the data
in the volume you mount it into another container.


page_title: Share Images via Repositories
page_description: Repositories allow users to share images.
page_keywords: repo, repositories, usage, pull image, push image, image, documentation
# Share Images via Repositories
## Introduction
Docker is not only a tool for creating and managing your own
[*containers*](/terms/container/#container-def) **Docker is also a
tool for sharing**. A *repository* is a shareable collection of tagged
[*images*](/terms/image/#image-def) that together create the file
systems for containers. The repository's name is a label that indicates
the provenance of the repository, i.e. who created it and where the
original copy is located.
You can find one or more repositories hosted on a *registry*. There are
two types of *registry*: public and private. There's also a default
*registry* that Docker uses which is called
[Docker.io](http://index.docker.io).
[Docker.io](http://index.docker.io) is the home of "top-level"
repositories and public "user" repositories. The Docker project
provides [Docker.io](http://index.docker.io) to host public and [private
repositories](https://index.docker.io/plans/), namespaced by user. We
provide user authentication and search over all the public repositories.
Docker acts as a client for these services via the `docker search`,
`pull`, `login` and `push` commands.
## Repositories
### Local Repositories
Docker images which have been created and labeled on your local Docker
server need to be pushed to a Public (by default they are pushed to
[Docker.io](http://index.docker.io)) or Private registry to be shared.
### Public Repositories
There are two types of public repositories: *top-level* repositories
which are controlled by the Docker team, and *user* repositories created
by individual contributors. Anyone can read from these repositories;
they really help people get started quickly! You can also use
[*Trusted Builds*](#trusted-builds) if you need to keep control of who
accesses your images.
- Top-level repositories can easily be recognized by **not** having a
`/` (slash) in their name. These repositories represent trusted images
provided by the Docker team.
- User repositories always come in the form of `<username>/<repo_name>`.
This is what your published images will look like if you push to the
public [Docker.io](http://index.docker.io) registry.
- Only the authenticated user can push to their *username* namespace on
a [Docker.io](http://index.docker.io) repository.
- User images are not curated; it is therefore up to you whether or not
you trust the creator of the image.
### Private repositories
You can also create private repositories on
[Docker.io](https://index.docker.io/plans/). These allow you to store
images that you don't want to share publicly. Only authenticated users
can push to private repositories.
## Find Public Images on Docker.io
You can search the [Docker.io](https://index.docker.io) registry via
its website or using the command line interface. Searching can find
images by name, user name or description:
$ sudo docker help search
Usage: docker search NAME
Search the docker index for images
--no-trunc=false: Don't truncate output
$ sudo docker search centos
Found 25 results matching your query ("centos")
NAME DESCRIPTION
centos
slantview/centos-chef-solo CentOS 6.4 with chef-solo.
...
There you can see two example results: `centos` and
`slantview/centos-chef-solo`. The second result shows that it comes from
the public repository of a user, `slantview/`, while the first result
(`centos`) doesn't explicitly list a repository so it comes from the
trusted top-level namespace. The `/` character separates a user's
repository and the image name.
Once you have found the image name, you can download it:
# sudo docker pull <value>
$ sudo docker pull centos
Pulling repository centos
539c0211cd76: Download complete
What can you do with that image? Check out the
[*Examples*](/examples/#example-list) and, when you're ready with your
own image, come back here to learn how to share it.
## Contributing to Docker.io
Anyone can pull public images from the
[Docker.io](http://index.docker.io) registry, but if you would like to
share one of your own images, then you must register a unique user name
first. You can create your username and login on
[Docker.io](https://index.docker.io/account/signup/), or by running
$ sudo docker login
This will prompt you for a username, which will become a public
namespace for your public repositories.
If your username is available then `docker` will also prompt you to
enter a password and your e-mail address. It will then automatically log
you in. Now you're ready to commit and push your own images!
> **Note:**
> Your authentication credentials will be stored in the [`.dockercfg`
> authentication file](#authentication-file).
## Committing a Container to a Named Image
When you make changes to an existing image, those changes get saved to a
container's file system. You can then promote that container to become
an image by making a `commit`. In addition to converting the container
to an image, this is also your opportunity to name the image,
specifically a name that includes your user name from
[Docker.io](http://index.docker.io) (as you did a `login` above) and a
meaningful name for the image.
# format is "sudo docker commit <container_id> <username>/<imagename>"
$ sudo docker commit $CONTAINER_ID myname/kickassapp
## Pushing a repository to its registry
In order to push a repository to its registry you need to have named an
image, or committed your container to a named image (see above).
Now you can push this repository to the registry designated by its name
or tag.
# format is "docker push <username>/<repo_name>"
$ sudo docker push myname/kickassapp
## Trusted Builds
Trusted Builds automate the building and updating of images from GitHub
or BitBucket, directly on Docker.io. It works by adding a commit hook to
your selected repository, triggering a build and update when you push a
commit.
### To setup a trusted build
1. Create a [Docker.io account](https://index.docker.io/) and login.
2. Link your GitHub or BitBucket account through the [`Link Accounts`](https://index.docker.io/account/accounts/) menu.
3. [Configure a Trusted build](https://index.docker.io/builds/).
4. Pick a GitHub or BitBucket project that has a `Dockerfile` that you want to build.
5. Pick the branch you want to build (the default is the `master` branch).
6. Give the Trusted Build a name.
7. Assign an optional Docker tag to the Build.
8. Specify where the `Dockerfile` is located. The default is `/`.
Once the Trusted Build is configured it will automatically trigger a
build, and in a few minutes, if there are no errors, you will see your
new trusted build on the [Docker.io](https://index.docker.io) Registry.
It will stay in sync with your GitHub and BitBucket repository until you
deactivate the Trusted Build.
If you want to see the status of your Trusted Builds you can go to your
[Trusted Builds page](https://index.docker.io/builds/) on the Docker.io,
and it will show you the status of your builds, and the build history.
Once you've created a Trusted Build you can deactivate or delete it. You
cannot however push to a Trusted Build with the `docker push` command.
You can only manage it by committing code to your GitHub or BitBucket
repository.
You can create multiple Trusted Builds per repository and configure them
to point to specific `Dockerfile`'s or Git branches.
## Private Registry
Private registries are possible by hosting [your own
registry](https://github.com/dotcloud/docker-registry).
> **Note**:
> You can also use private repositories on
> [Docker.io](https://index.docker.io/plans/).
To push or pull to a repository on your own registry, you must prefix
the tag with the address of the registry's host (a `.` or `:` is used to
identify a host), like this:
# Tag to create a repository with the full registry location.
# The location (e.g. localhost.localdomain:5000) becomes
# a permanent part of the repository name
$ sudo docker tag 0u812deadbeef localhost.localdomain:5000/repo_name
# Push the new repository to its home location on localhost
$ sudo docker push localhost.localdomain:5000/repo_name
Once a repository has your registry's host name as part of the tag, you
can push and pull it like any other repository, but it will **not** be
searchable (or indexed at all) on [Docker.io](http://index.docker.io),
and there will be no user name checking performed. Your registry will
function completely independently from the
[Docker.io](http://index.docker.io) registry.
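The host-detection rule described above ("a `.` or `:` is used to identify a host") can be sketched as a small helper. This is an illustrative approximation, not Docker's actual parser:

```python
def parse_repository(name):
    """Split a repository name into (registry_host, repo_name).

    The first path component is treated as a registry host only if it
    contains a '.' or ':'; otherwise the whole name is a repository on
    the default registry.
    """
    parts = name.split("/", 1)
    if len(parts) == 2 and ("." in parts[0] or ":" in parts[0]):
        return parts[0], parts[1]
    return None, name

print(parse_repository("localhost.localdomain:5000/repo_name"))
# ('localhost.localdomain:5000', 'repo_name')
print(parse_repository("myname/kickassapp"))
# (None, 'myname/kickassapp')
```

Under this rule, `myname/kickassapp` has no host prefix and therefore resolves against the default Docker.io registry.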
<iframe width="640" height="360" src="//www.youtube.com/embed/CAewZCBT4PI?rel=0" frameborder="0" allowfullscreen></iframe>
See also
[Docker Blog: How to use your own registry](
http://blog.docker.io/2013/07/how-to-use-your-own-registry/)
## Authentication File
The authentication is stored in a JSON file, `.dockercfg`, located in
your home directory. It supports multiple registry URLs.
The `docker login` command will create the
`https://index.docker.io/v1/` key.
The `docker login https://my-registry.com` command will create the
`https://my-registry.com` key.
For example:
{
"https://index.docker.io/v1/": {
"auth": "xXxXxXxXxXx=",
"email": "email@example.com"
},
"https://my-registry.com": {
"auth": "XxXxXxXxXxX=",
"email": "email@my-registry.com"
}
}
The `auth` field represents
base64(<username>:<password>)
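The encoding can be reproduced with a couple of lines of Python, shown here purely to illustrate the format (the credentials are made up):

```python
import base64

def make_auth(username, password):
    """Encode credentials the way the .dockercfg `auth` field expects:
    base64 of 'username:password'."""
    return base64.b64encode(("%s:%s" % (username, password)).encode()).decode()

def read_auth(auth):
    """Decode an `auth` field back into (username, password)."""
    username, password = base64.b64decode(auth).decode().split(":", 1)
    return username, password

token = make_auth("myname", "s3cret")
print(read_auth(token))  # ('myname', 's3cret')
```

Note that base64 is an encoding, not encryption, so the `.dockercfg` file should be kept private.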


page_title: Working with Docker Images
page_description: How to work with Docker images.
page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker.io, collaboration
# Working with Docker Images
In the [introduction](/introduction/) we've discovered that Docker
images are the basis of containers. In the
[previous](/userguide/dockerizing/) [sections](/userguide/usingdocker/)
we've used Docker images that already exist, for example the `ubuntu`
image and the `training/webapp` image.
We've also discovered that Docker stores downloaded images on the Docker
host. If an image isn't already present on the host then it'll be
downloaded from a registry: by default the
[Docker.io](https://index.docker.io) public registry.
In this section we're going to explore Docker images a bit more
including:
* Managing and working with images locally on your Docker host;
* Creating basic images;
* Uploading images to [Docker.io](https://index.docker.io).
## Listing images on the host
Let's start with listing the images we have locally on our host. You can
do this using the `docker images` command like so:
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
training/webapp latest fc77f57ad303 3 weeks ago 280.5 MB
ubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB
ubuntu saucy 5e019ab7bf6d 4 weeks ago 180 MB
ubuntu 12.04 74fe38d11401 4 weeks ago 209.6 MB
ubuntu precise 74fe38d11401 4 weeks ago 209.6 MB
ubuntu 12.10 a7cf8ae4e998 4 weeks ago 171.3 MB
ubuntu quantal a7cf8ae4e998 4 weeks ago 171.3 MB
ubuntu 14.04 99ec81b80c55 4 weeks ago 266 MB
ubuntu latest 99ec81b80c55 4 weeks ago 266 MB
ubuntu trusty 99ec81b80c55 4 weeks ago 266 MB
ubuntu 13.04 316b678ddf48 4 weeks ago 169.4 MB
ubuntu raring 316b678ddf48 4 weeks ago 169.4 MB
ubuntu 10.04 3db9c44f4520 4 weeks ago 183 MB
ubuntu lucid 3db9c44f4520 4 weeks ago 183 MB
We can see the images we've previously used in our [user guide](/userguide/).
Each has been downloaded from [Docker.io](https://index.docker.io) when we
launched a container using that image.
We can see three crucial pieces of information about our images in the listing.
* What repository they came from, for example `ubuntu`.
* The tags for each image, for example `14.04`.
* The image ID of each image.
A repository potentially holds multiple variants of an image. In the case of
our `ubuntu` image we can see multiple variants covering Ubuntu 10.04, 12.04,
12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can
refer to a tagged image like so:
ubuntu:14.04
So when we run a container we refer to a tagged image like so:
$ sudo docker run -t -i ubuntu:14.04 /bin/bash
If instead we wanted to run a container from an Ubuntu 12.04 image we'd use:
$ sudo docker run -t -i ubuntu:12.04 /bin/bash
If you don't specify a variant, for example if you just use `ubuntu`, then Docker
will default to using the `ubuntu:latest` image.
> **Tip:**
> We recommend you always use a specific tagged image, for example
> `ubuntu:12.04`. That way you always know exactly what variant of an image is
> being used.
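The default-tag rule can be sketched as a small helper. This is an illustrative approximation only; Docker's real name parsing also handles registry hosts and other cases:

```python
def normalize_image(name):
    """Apply the default-tag rule: an image reference without an
    explicit tag is treated as `<name>:latest`."""
    repo, sep, tag = name.rpartition(":")
    if sep and "/" not in tag:  # a ':' after the last '/' marks a tag
        return repo, tag
    return name, "latest"

print(normalize_image("ubuntu:14.04"))  # ('ubuntu', '14.04')
print(normalize_image("ubuntu"))        # ('ubuntu', 'latest')
```

The `"/" not in tag` check keeps a registry port such as `localhost:5000/repo` from being mistaken for a tag.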
## Getting a new image
So how do we get new images? Well, Docker will automatically download any image
we use that isn't already present on the Docker host. But this can potentially
add some time to the launch of a container. If we want to pre-load an image we
can download it using the `docker pull` command. Let's say we'd like to
download the `centos` image.
$ sudo docker pull centos
Pulling repository centos
b7de3133ff98: Pulling dependent layers
5cc9e91966f7: Pulling fs layer
511136ea3c5a: Download complete
ef52fb1fe610: Download complete
. . .
We can see that each layer of the image has been pulled down and now we
can run a container from this image and we won't have to wait to
download the image.
$ sudo docker run -t -i centos /bin/bash
bash-4.1#
## Finding images
One of the features of Docker is that a lot of people have created Docker
images for a variety of purposes. Many of these have been uploaded to
[Docker.io](https://index.docker.io). We can search these images on the
[Docker.io](https://index.docker.io) website.
![indexsearch](/userguide/search.png)
We can also search for images on the command line using the `docker search`
command. Let's say our team wants an image with Ruby and Sinatra installed on
which to do our web application development. We can search for a suitable image
by using the `docker search` command to find all the images that contain the
term `sinatra`.
$ sudo docker search sinatra
NAME DESCRIPTION STARS OFFICIAL TRUSTED
training/sinatra Sinatra training image 0 [OK]
marceldegraaf/sinatra Sinatra test app 0
mattwarren/docker-sinatra-demo 0 [OK]
luisbebop/docker-sinatra-hello-world 0 [OK]
bmorearty/handson-sinatra handson-ruby + Sinatra for Hands on with D... 0
subwiz/sinatra 0
bmorearty/sinatra 0
. . .
We can see we've returned a lot of images that use the term `sinatra`. We've
returned a list of image names, descriptions, Stars (which measure the social
popularity of images - if a user likes an image then they can "star" it), and
the Official and Trusted statuses. Official repositories are curated by the
Docker team, and Trusted repositories are [Trusted Builds](/userguide/dockerrepos/)
that allow you to validate the source and content of an image.
We've reviewed the images available to use and we decided to use the
`training/sinatra` image. So far we've seen two types of image repositories.
Images like `ubuntu` are called base or root images. These base images
are provided by Docker, Inc. and are built, validated and supported. They can be
identified by their single-word names.
We've also seen user images, for example the `training/sinatra` image we've
chosen. A user image belongs to a member of the Docker community and is built
and maintained by them. You can identify user images as they are always
prefixed with the user name, here `training`, of the user that created them.
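The naming convention that separates the two kinds of repository can be expressed in a few lines. This is an illustrative sketch of the convention described above, not Docker's own code:

```python
def image_kind(repository):
    """Classify a repository name: a single-word name is a top-level
    (base) image; `<user>/<name>` is a user image."""
    return "user" if "/" in repository else "top-level"

print(image_kind("ubuntu"))            # top-level
print(image_kind("training/sinatra"))  # user
```

This is why you can tell at a glance in `docker search` output which results come from the trusted top-level namespace.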
## Pulling our image
We've identified a suitable image, `training/sinatra`, and now we can download it using the `docker pull` command.
$ sudo docker pull training/sinatra
The team can now use this image by running their own containers from it.
$ sudo docker run -t -i training/sinatra /bin/bash
root@a8cb6ce02d85:/#
## Creating our own images
The team has found the `training/sinatra` image pretty useful, but it's not
quite what we need, so we're going to make some changes to it. There are two
ways we can update and create images.
1. We can update a container created from an image and commit the results to an image.
2. We can use a `Dockerfile` to specify instructions to create an image.
### Updating and committing an image
To update an image we first need to create a container from the image
we'd like to update.
$ sudo docker run -t -i training/sinatra /bin/bash
root@0b2616b0e5a8:/#
> **Note:**
> Take note of the container ID that has been created, `0b2616b0e5a8`, as we'll
> need it in a moment.
Inside our running container let's add the `json` gem.
root@0b2616b0e5a8:/# gem install json
Once this has completed let's exit our container using the `exit`
command.
Now we have a container with the change we want to make. We can then
commit a copy of this container to an image using the `docker commit`
command.
$ sudo docker commit -m="Added json gem" -a="Kate Smith" \
0b2616b0e5a8 ouruser/sinatra:v2
4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c
Here we've used the `docker commit` command. We've specified two flags: `-m`
and `-a`. The `-m` flag allows us to specify a commit message, much like you
would with a commit on a version control system. The `-a` flag allows us to
specify an author for our update.
We've also specified the container we want to create this new image from,
`0b2616b0e5a8` (the ID we recorded earlier) and we've specified a target for
the image:
ouruser/sinatra:v2
Let's break this target down. It consists of a new user, `ouruser`, that we're
writing this image to. We've also specified the name of the image, here we're
keeping the original image name `sinatra`. Finally we're specifying a tag for
the image: `v2`.
We can then look at our new `ouruser/sinatra` image using the `docker images`
command.
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
training/sinatra latest 5bc342fa0b91 10 hours ago 446.7 MB
ouruser/sinatra v2 3c59e02ddd1a 10 hours ago 446.7 MB
ouruser/sinatra latest 5db5f8471261 10 hours ago 446.7 MB
To use our new image to create a container we can then:
$ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash
root@78e82f680994:/#
### Building an image from a `Dockerfile`
Using the `docker commit` command is a pretty simple way of extending an image
but it's a bit cumbersome and it's not easy to share a development process for
images amongst a team. Instead we can use a new command, `docker build`, to
build new images from scratch.
To do this we create a `Dockerfile` that contains a set of instructions that
tell Docker how to build our image.
Let's create a directory and a `Dockerfile` first.
$ mkdir sinatra
$ cd sinatra
$ touch Dockerfile
Each instruction creates a new layer of the image. Let's look at a simple
example now for building our own Sinatra image for our development team.
# This is a comment
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN apt-get -qq update
RUN apt-get -qqy install ruby ruby-dev
RUN gem install sinatra
Let's look at what our `Dockerfile` does. Each instruction is capitalized and is followed by a statement.
INSTRUCTION statement
> **Note:**
> We use `#` to indicate a comment
The first instruction `FROM` tells Docker what the source of our image is, in
this case we're basing our new image on an Ubuntu 14.04 image.
Next we use the `MAINTAINER` instruction to specify who maintains our new image.
Lastly, we've specified three `RUN` instructions. A `RUN` instruction executes
a command inside the image, for example installing a package. Here we're
updating our APT cache, installing Ruby and RubyGems and then installing the
Sinatra gem.
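The `INSTRUCTION statement` format is simple enough that we can sketch a toy parser for it. This is purely illustrative; the real builder also handles line continuations and more:

```python
def parse_dockerfile(text):
    """Split a Dockerfile into (INSTRUCTION, statement) pairs,
    skipping blank lines and `#` comments."""
    steps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        instruction, _, statement = line.partition(" ")
        steps.append((instruction.upper(), statement.strip()))
    return steps

dockerfile = """\
# This is a comment
FROM ubuntu:14.04
RUN gem install sinatra
"""
print(parse_dockerfile(dockerfile))
# [('FROM', 'ubuntu:14.04'), ('RUN', 'gem install sinatra')]
```

Each pair corresponds to one build step, and as we'll see below, each step produces a new image layer.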
> **Note:**
> There are [a lot more instructions available to us in a Dockerfile](/reference/builder).
Now let's take our `Dockerfile` and use the `docker build` command to build an image.
$ sudo docker build -t="ouruser/sinatra:v2" .
Uploading context 2.56 kB
Uploading context
Step 0 : FROM ubuntu:14.04
---> 99ec81b80c55
Step 1 : MAINTAINER Kate Smith <ksmith@example.com>
---> Running in 7c5664a8a0c1
---> 2fa8ca4e2a13
Removing intermediate container 7c5664a8a0c1
Step 2 : RUN apt-get -qq update
---> Running in b07cc3fb4256
---> 50d21070ec0c
Removing intermediate container b07cc3fb4256
Step 3 : RUN apt-get -qqy install ruby ruby-dev
---> Running in a5b038dd127e
Selecting previously unselected package libasan0:amd64.
(Reading database ... 11518 files and directories currently installed.)
Preparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ...
. . .
Setting up ruby (1:1.9.3.4) ...
Setting up ruby1.9.1 (1.9.3.484-2ubuntu1) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...
---> 2acb20f17878
Removing intermediate container a5b038dd127e
Step 4 : RUN gem install sinatra
---> Running in 5e9d0065c1f7
. . .
Successfully installed rack-protection-1.5.3
Successfully installed sinatra-1.4.5
4 gems installed
---> 324104cde6ad
Removing intermediate container 5e9d0065c1f7
Successfully built 324104cde6ad
We've specified our `docker build` command and used the `-t` flag to identify
our new image as belonging to the user `ouruser`, the repository name `sinatra`
and given it the tag `v2`.
We've also specified the location of our `Dockerfile` using the `.` to
indicate a `Dockerfile` in the current directory.
> **Note:**
> You can also specify a path to a `Dockerfile`.
Now we can see the build process at work. The first thing Docker does is
upload the build context: basically the contents of the directory you're
building in. This is done because the Docker daemon does the actual
build of the image and it needs the local context to do it.
Next we can see each instruction in the `Dockerfile` being executed
step-by-step. We can see that each step creates a new container, runs
the instruction inside that container and then commits that change -
just like the `docker commit` work flow we saw earlier. When all the
instructions have executed we're left with the `324104cde6ad` image
(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate
containers will get removed to clean things up.
We can then create a container from our new image.
$ sudo docker run -t -i ouruser/sinatra /bin/bash
root@8196968dac35:/#
> **Note:**
> This is just the briefest introduction to creating images. We've
> skipped a whole bunch of other instructions that you can use. We'll see more of
> those instructions in later sections of the Guide or you can refer to the
> [`Dockerfile`](/reference/builder/) reference for a
> detailed description and examples of every instruction.
## Setting tags on an image
You can also add a tag to an existing image after you commit or build it. We
can do this using the `docker tag` command. Let's add a new tag to our
`ouruser/sinatra` image.
$ sudo docker tag 5db5f8471261 ouruser/sinatra:devel
The `docker tag` command takes the ID of the image, here `5db5f8471261`, and our
user name, the repository name and the new tag.
Let's see our new tag using the `docker images` command.
$ sudo docker images ouruser/sinatra
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ouruser/sinatra latest 5db5f8471261 11 hours ago 446.7 MB
ouruser/sinatra devel 5db5f8471261 11 hours ago 446.7 MB
ouruser/sinatra v2 5db5f8471261 11 hours ago 446.7 MB
## Push an image to Docker.io
Once you've built or created a new image you can push it to [Docker.io](
https://index.docker.io) using the `docker push` command. This allows you to
share it with others, either publicly, or push it into [a private
repository](https://index.docker.io/plans/).
$ sudo docker push ouruser/sinatra
The push refers to a repository [ouruser/sinatra] (len: 1)
Sending image list
Pushing repository ouruser/sinatra (3 tags)
. . .
## Remove an image from the host
You can also remove images on your Docker host in a way [similar to
containers](
/userguide/usingdocker) using the `docker rmi` command.
Let's delete the `training/sinatra` image as we don't need it anymore.
$ docker rmi training/sinatra
Untagged: training/sinatra:latest
Deleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d
Deleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f
Deleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0
> **Note:** In order to remove an image from the host, please make sure
> that there are no containers actively based on it.
# Next steps
Until now we've seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
by linking together multiple Docker containers.
Go to [Linking Containers Together](/userguide/dockerlinks).

page_title: Getting started with Docker.io
page_description: Introductory guide to getting an account on Docker.io
page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, central service, services, how to, container, containers, automation, collaboration, collaborators, registry, repo, repository, technology, github webhooks, trusted builds
# Getting Started with Docker.io
*How do I use Docker.io?*
In this section we're going to give you a quick introduction to
[Docker.io](https://index.docker.io) and create an account.
[Docker.io](https://www.docker.io) is the central hub for Docker. It
helps you to manage Docker and its components. It provides services such
as:
* Hosting images.
* User authentication.
* Automated image builds and work flow tools like build triggers and web
hooks.
* Integration with GitHub and BitBucket.
Docker.io helps you collaborate with colleagues and get the most out of
Docker.
In order to use Docker.io you will need to register an account. Don't
panic! It's totally free and really easy.
## Creating a Docker.io Account
There are two ways you can create a Docker.io account:
* Via the web, or
* Via the command line.
### Sign up via the web!
Fill in the [sign-up form](https://www.docker.io/account/signup/) and
choose your user name and specify some details such as an email address.
![Register using the sign-up page](/userguide/register-web.png)
### Sign up via the command line
You can also create a Docker.io account via the command line using the
`docker login` command.
$ sudo docker login
### Confirm your email
Once you've filled in the form then check your email for a welcome
message and activate your account.
![Confirm your registration](/userguide/register-confirm.png)
### Login!
Then you can login using the web console:
![Login using the web console](/userguide/login-web.png)
Or via the command line and the `docker login` command:
$ sudo docker login
Now your Docker.io account is active and ready for you to use!
## Next steps
Now let's start Dockerizing applications with our "Hello World!" exercise.
Go to [Dockerizing Applications](/userguide/dockerizing).

page_title: Dockerizing Applications: A "Hello World!"
page_description: A simple "Hello World!" exercise that introduces you to Docker.
page_keywords: docker guide, docker, docker platform, virtualization framework, how to, dockerize, dockerizing apps, dockerizing applications, container, containers
# Dockerizing Applications: A "Hello World!"
*So what's this Docker thing all about?*
Docker allows you to run applications inside containers. Running an
application inside a container takes a single command: `docker run`.
## Hello World!
Let's try it now.
$ sudo docker run ubuntu:14.04 /bin/echo "Hello World!"
Hello World!
And you just launched your first container!
So what just happened? Let's step through what the `docker run` command
did.
First we specified the `docker` binary and the command we wanted to
execute, `run`. The `docker run` combination *runs* containers.
Next we specified an image: `ubuntu:14.04`. This is the source of the container
we ran. Docker calls this an image. In this case we used an Ubuntu 14.04
operating system image.
When you specify an image, Docker looks first for the image on your
Docker host. If it can't find it then it downloads the image from the public
image registry: [Docker.io](https://index.docker.io).
Next we told Docker what command to run inside our new container:
/bin/echo "Hello World!"
When our container was launched Docker created a new Ubuntu 14.04
environment and then executed the `/bin/echo` command inside it. We saw
the result on the command line:
Hello World!
So what happened to our container after that? Well Docker containers
only run as long as the command you specify is active. Here, as soon as
`Hello World!` was echoed, the container stopped.
## An Interactive Container
Let's try the `docker run` command again, this time specifying a new
command to run in our container.
$ sudo docker run -t -i ubuntu:14.04 /bin/bash
root@af8bae53bdd3:/#
Here we've again specified the `docker run` command and launched an
`ubuntu:14.04` image. But we've also passed in two flags: `-t` and `-i`.
The `-t` flag assigns a pseudo-tty or terminal inside our new container
and the `-i` flag allows us to make an interactive connection by
grabbing the standard in (`STDIN`) of the container.
We've also specified a new command for our container to run:
`/bin/bash`. This will launch a Bash shell inside our container.
So now when our container is launched we can see that we've got a
command prompt inside it:
root@af8bae53bdd3:/#
Let's try running some commands inside our container:
root@af8bae53bdd3:/# pwd
/
root@af8bae53bdd3:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
You can see we've run the `pwd` command to show our current directory and can
see we're in the `/` root directory. We've also done a directory listing
of the root directory which shows us what looks like a typical Linux
file system.
You can play around inside this container and when you're done you can
use the `exit` command to finish.
root@af8bae53bdd3:/# exit
As with our previous container, once the Bash shell process has
finished, the container is stopped.
## A Daemonized Hello World!
Now a container that runs a command and then exits has some uses but
it's not overly helpful. Let's create a container that runs as a daemon,
like most of the applications we're probably going to run with Docker.
Again we can do this with the `docker run` command:
$ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
Wait, what? Where's our "Hello World!"? Let's look at what we've run here.
It should look pretty familiar. We ran `docker run` but this time we
specified a flag: `-d`. The `-d` flag tells Docker to run the container
and put it in the background, to daemonize it.
We also specified the same image: `ubuntu:14.04`.
Finally, we specified a command to run:
/bin/sh -c "while true; do echo hello world; sleep 1; done"
This is the (hello) world's silliest daemon: a shell script that echoes
`hello world` forever.
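If you're curious, here's a sketch of what that script does on its own, bounded to three iterations (and without the `sleep`) so it exits rather than running forever:

```shell
# The same loop as the daemon runs, bounded so it terminates
i=0
while [ "$i" -lt 3 ]; do
    echo "hello world"
    i=$((i + 1))
done
```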
So why aren't we seeing any `hello world`'s? Instead Docker has returned
a really long string:
1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
This really long string is called a *container ID*. It uniquely
identifies a container so we can work with it.
> **Note:**
> The container ID is a bit long and unwieldy and a bit later
> on we'll see a shorter ID and some ways to name our containers to make
> working with them easier.
We can use this container ID to see what's happening with our `hello
world` daemon.
Firstly let's make sure our container is running. We can
do that with the `docker ps` command. The `docker ps` command queries
the Docker daemon for information about all the containers it knows
about.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e5535038e28 ubuntu:14.04 /bin/sh -c 'while tr 2 minutes ago Up 1 minute insane_babbage
Here we can see our daemonized container. The `docker ps` command has returned some useful
information about it, starting with a shorter variant of its container ID:
`1e5535038e28`.
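As a small sketch, that shorter variant is simply the first 12 characters of the full container ID:

```shell
# The full 64-character container ID returned by `docker run -d`
full_id=1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
# The short form shown by `docker ps` is its first 12 characters
short_id=$(printf '%s' "$full_id" | cut -c1-12)
echo "$short_id"
```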
We can also see the image we used to build it, `ubuntu:14.04`, the command it
is running, its status and an automatically assigned name,
`insane_babbage`.
> **Note:**
> Docker automatically names any containers you start. A little
> later on we'll see how you can specify your own names.
Okay, so we now know it's running. But is it doing what we asked it to do? To see this
we're going to look inside the container using the `docker logs`
command. Let's use the container name Docker assigned.
$ sudo docker logs insane_babbage
hello world
hello world
hello world
. . .
The `docker logs` command looks inside the container and returns its standard
output: in this case the output of our command `hello world`.
Awesome! Our daemon is working and we've just created our first
Dockerized application!
Now we've established we can create our own containers let's tidy up
after ourselves and stop our daemonized container. To do this we use the
`docker stop` command.
$ sudo docker stop insane_babbage
insane_babbage
The `docker stop` command tells Docker to politely stop the running
container. If it succeeds it will return the name of the container it
has just stopped.
Let's check it worked with the `docker ps` command.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Excellent. Our container has been stopped.
# Next steps
Now we've seen how simple it is to get started with Docker let's learn how to
do some more advanced tasks.
Go to [Working With Containers](/userguide/usingdocker).

page_title: Linking Containers Together
page_description: Learn how to connect Docker containers together.
page_keywords: Examples, Usage, user guide, links, linking, docker, documentation, examples, names, name, container naming, port, map, network port, network
# Linking Containers Together
In [the Using Docker section](/userguide/usingdocker) we touched on
connecting to a service running inside a Docker container via a network
port. This is one of the ways that you can interact with services and
applications running inside Docker containers. In this section we're
going to give you a refresher on connecting to a Docker container via a
network port as well as introduce you to the concepts of container
linking.
## Network port mapping refresher
In [the Using Docker section](/userguide/usingdocker) we created a
container that ran a Python Flask application.
$ sudo docker run -d -P training/webapp python app.py
> **Note:**
> Containers have an internal network and an IP address
> (remember we used the `docker inspect` command to show the container's
> IP address in the [Using Docker](/userguide/usingdocker/) section).
> Docker can have a variety of network configurations. You can see more
> information on Docker networking [here](/articles/networking/).
When we created that container we used the `-P` flag to automatically map any
network ports inside that container to a random high port from the range 49000
to 49900 on our Docker host. When we subsequently ran `docker ps` we saw that
port 5000 was bound to port 49155.
$ sudo docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse
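The `PORTS` column follows the pattern `hostaddress:hostport->containerport/protocol`. As a sketch, a shell script could pull the two sides apart like this:

```shell
# A PORTS value as shown by `docker ps` (sample value from the listing above)
mapping="0.0.0.0:49155->5000/tcp"
host_side=${mapping%%->*}       # everything before the arrow: host address and port
container_side=${mapping#*->}   # everything after the arrow: container port and protocol
echo "host=$host_side container=$container_side"
```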
We also saw how we can bind a container's ports to a specific port using
the `-p` flag.
$ sudo docker run -d -p 5000:5000 training/webapp python app.py
And we saw why this isn't such a great idea because it constrains us to
only one container on that specific port.
There are also a few other ways we can configure the `-p` flag. By
default the `-p` flag will bind the specified port to all interfaces on
the host machine. But we can also specify a binding to a specific
interface, for example only to the `localhost`.
$ sudo docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py
This would bind port 5000 inside the container to port 5000 on the
`localhost` or `127.0.0.1` interface on the host machine.
Or to bind port 5000 of the container to a dynamic port but only on the
`localhost` we could:
$ sudo docker run -d -p 127.0.0.1::5000 training/webapp python app.py
We can also bind UDP ports by adding a trailing `/udp`, for example:
$ sudo docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py
We also saw the useful `docker port` shortcut which showed us the
current port bindings. This is also useful for showing us specific port
configurations. For example, if we've bound the container port to the
`localhost` on the host machine then this will be shown in the `docker port`
output.
$ docker port nostalgic_morse 5000
127.0.0.1:49155
> **Note:**
> The `-p` flag can be used multiple times to configure multiple ports.
## Docker Container Linking
Network port mappings are not the only way Docker containers can connect
to one another. Docker also has a linking system that allows you to link
multiple containers together and share connection information between
them. Docker linking will create a parent-child relationship where the
child container can access selected information about its parent.
## Container naming
To perform this linking Docker relies on the names of your containers.
We've already seen that each container we create has an automatically
created name, indeed we've become familiar with our old friend
`nostalgic_morse` during this guide. You can also name containers
yourself. This naming provides two useful functions:
1. It's useful to name containers that do specific functions in a way
that makes it easier for you to remember them, for example naming a
container with a web application in it `web`.
2. It provides Docker with a reference point that allows it to refer to other
containers, for example when linking container `web` to container `db`.
You can name your container by using the `--name` flag, for example:
$ sudo docker run -d -P --name web training/webapp python app.py
You can see we've launched a new container and used the `--name` flag to
call the container `web`. We can see the container's name using the
`docker ps` command.
$ sudo docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154->5000/tcp web
We can also use `docker inspect` to return the container's name.
$ sudo docker inspect -f "{{ .Name }}" aed84ee21bde
/web
> **Note:**
> Container names have to be unique. That means you can only call
> one container `web`. If you want to re-use a container name you must delete the
> old container with the `docker rm` command before you can create a new
> container with the same name. As an alternative you can use the `--rm`
> flag with the `docker run` command. This will delete the container
> immediately after it stops.
## Container Linking
Links allow containers to discover and securely communicate with each
other. To create a link you use the `--link` flag. Let's create a new
container, this one a database.
$ sudo docker run -d --name db training/postgres
Here we've created a new container called `db` using the `training/postgres`
image, which contains a PostgreSQL database.
Now let's create a new `web` container and link it with our `db` container.
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
This will link the new `web` container with the `db` container we created
earlier. The `--link` flag takes the form:
--link name:alias
Where `name` is the name of the container we're linking to and `alias` is an
alias for the link name. We'll see how that alias gets used shortly.
Let's look at our linked containers using `docker ps`.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
349169744e49 training/postgres:latest su postgres -c '/usr About a minute ago Up About a minute 5432/tcp db
aed84ee21bde training/webapp:latest python app.py 16 hours ago Up 2 minutes 0.0.0.0:49154->5000/tcp db/web,web
We can see our named containers, `db` and `web`, and we can see that the `web`
container also shows `db/web` in the `NAMES` column. This tells us that the
`web` container is linked to the `db` container in a parent/child relationship.
So what does linking the containers do? Well we've discovered the link creates
a parent-child relationship between the two containers. The child container,
here `web`, can access information about its parent container, `db`. To do this
Docker creates a secure tunnel between the containers without the need to
expose any ports externally on the container. You'll note when we started the
`db` container we did not use either of the `-P` or `-p` flags. As we're
linking the containers we don't need to expose the PostgreSQL database via the
network.
Docker exposes connectivity information for the parent container inside the
child container in two ways:
* Environment variables,
* Updating the `/etc/hosts` file.
Let's look first at the environment variables Docker sets. Inside the `web`
container let's run the `env` command to list the container's environment
variables.
root@aed84ee21bde:/opt/webapp# env
HOSTNAME=aed84ee21bde
. . .
DB_NAME=/web/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
. . .
> **Note**:
> These environment variables are only set for the first process in the
> container. Similarly, some daemons (such as `sshd`)
> will scrub them when spawning shells for connection.
We can see that Docker has created a series of environment variables with
useful information about our `db` container. Each variable is prefixed with
`DB_`, a prefix derived from the `alias` we specified above. If our `alias`
were `db1` the variables would be prefixed with `DB1_`. You can use these
environment variables to configure your applications to connect to the database
on the `db` container. The connection will be secure, private and only the
linked `web` container will be able to talk to the `db` container.
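As a sketch of putting these variables to work, a shell script inside the `web` container might split the `DB_PORT` value into an address and a port (the value below is the hypothetical one from the listing above):

```shell
# Sample value, as Docker would inject it for a link with alias `db`
DB_PORT=tcp://172.17.0.5:5432
# Strip the protocol prefix, then split into host and port
hostport=${DB_PORT#tcp://}
db_host=${hostport%:*}
db_port=${hostport#*:}
echo "connect to $db_host on port $db_port"
```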
In addition to the environment variables Docker adds a host entry for the
linked parent to the `/etc/hosts` file. Let's look at this file on the `web`
container now.
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 db
We can see two relevant host entries. The first is an entry for the `web`
container that uses the Container ID as a host name. The second entry uses the
link alias to reference the IP address of the `db` container. Let's try to ping
that host now via this host name.
root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping
root@aed84ee21bde:/opt/webapp# ping db
PING db (172.17.0.5): 48 data bytes
56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms
56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms
56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms
> **Note:**
> We had to install `ping` because our container didn't have it.
We've used the `ping` command to ping the `db` container using its host entry
which resolves to `172.17.0.5`. We can make use of this host entry to configure
an application to make use of our `db` container.
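As a small sketch, an application could also look the link alias up in the hosts file directly; here we use a sample file with the entries shown above (a hypothetical `/tmp` path, so it can run anywhere):

```shell
# Build a sample hosts file matching the listing above
cat > /tmp/hosts.sample <<'EOF'
172.17.0.7 aed84ee21bde
172.17.0.5 db
EOF
# Print the IP address associated with the `db` alias
db_ip=$(awk '$2 == "db" { print $1 }' /tmp/hosts.sample)
echo "$db_ip"
```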
> **Note:**
> You can link multiple child containers to a single parent. For
> example, we could have multiple web containers attached to our `db`
> container.
# Next step
Now we know how to link Docker containers together the next step is
learning how to manage data, volumes and mounts inside our containers.
Go to [Managing Data in Containers](/userguide/dockervolumes).

page_title: Working with Docker.io
page_description: Learning how to use Docker.io to manage images and work flow
page_keywords: repo, Docker.io, Docker Hub, registry, index, repositories, usage, pull image, push image, image, documentation
# Working with Docker.io
So far we've seen a lot about how to use Docker on the command line and
your local host. We've seen [how to pull down
images](/userguide/usingdocker/) that you can run your containers from
and we've seen how to [create your own images](/userguide/dockerimages).
Now we're going to learn a bit more about
[Docker.io](https://index.docker.io) and how you can use it to enhance
your Docker work flows.
[Docker.io](https://index.docker.io) is the public registry that Docker
Inc maintains. It contains a huge collection of images, over 15,000,
that you can download and use to build your containers. It also provides
authentication, structure (you can setup teams and organizations), work
flow tools like webhooks and build triggers as well as privacy features
like private repositories for storing images you don't want to publicly
share.
## Docker commands and Docker.io
Docker acts as a client for these services via the `docker search`,
`pull`, `login` and `push` commands.
## Searching for images
As we've already seen we can search the
[Docker.io](https://index.docker.io) registry via its search interface
or using the command line interface. Searching can find images by name,
user name or description:
$ sudo docker search centos
NAME DESCRIPTION STARS OFFICIAL TRUSTED
centos Official CentOS 6 Image as of 12 April 2014 88
tianon/centos CentOS 5 and 6, created using rinse instea... 21
...
There you can see two example results: `centos` and
`tianon/centos`. The second result shows that it comes from
the public repository of a user, `tianon/`, while the first result,
`centos`, doesn't explicitly list a repository so it comes from the
trusted top-level namespace. The `/` character separates a user's
repository and the image name.
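As a sketch, splitting a search result on that `/` separator looks like:

```shell
# Split a repository name like `tianon/centos` into its user and image parts
repo="tianon/centos"
user=${repo%%/*}
image=${repo#*/}
echo "user=$user image=$image"
```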
Once you have found the image you want, you can download it:
$ sudo docker pull centos
Pulling repository centos
0b443ba03958: Download complete
539c0211cd76: Download complete
511136ea3c5a: Download complete
7064731afe90: Download complete
The image is now available to run a container from.
## Contributing to Docker.io
Anyone can pull public images from the [Docker.io](http://index.docker.io)
registry, but if you would like to share your own images, then you must
register a user first as we saw in the [first section of the Docker User
Guide](/userguide/dockerio/).
To refresh your memory, you can create your user name and login either at
[Docker.io](https://index.docker.io/account/signup/) or by running:
$ sudo docker login
This will prompt you for a user name, which will become a public
namespace for your public repositories, for example:
training/webapp
Here `training` is the user name and `webapp` is a repository owned by
that user.
If your user name is available then `docker` will also prompt you to
enter a password and your e-mail address. It will then automatically log
you in. Now you're ready to commit and push your own images!
> **Note:**
> Your authentication credentials will be stored in the [`.dockercfg`
> authentication file](#authentication-file) in your home directory.
## Pushing a repository to Docker.io
In order to push a repository to its registry you need to have named an image,
or committed your container to a named image as we saw
[here](/userguide/dockerimages).
Now you can push this repository to the registry designated by its name
or tag.
$ sudo docker push yourname/newimage
The image will then be uploaded and available for use.
## Features of Docker.io
Now let's look at some of the features of Docker.io. You can find more
information [here](/docker-io/).
* Private repositories
* Organizations and teams
* Automated Builds
* Webhooks
## Private Repositories
Sometimes you have images you don't want to make public and share with
everyone. So Docker.io allows you to have private repositories. You can
sign up for a plan [here](https://index.docker.io/plans/).
## Organizations and teams
One of the useful aspects of private repositories is that you can share
them only with members of your organization or team. Docker.io lets you
create organizations where you can collaborate with your colleagues and
manage private repositories. You can create and manage an organization
[here](https://index.docker.io/account/organizations/).
## Automated Builds
Automated Builds automate the building and updating of images from [GitHub](https://www.github.com)
or [BitBucket](http://bitbucket.com), directly on Docker.io. It works by adding a commit hook to
your selected GitHub or BitBucket repository, triggering a build and update when you push a
commit.
### To setup an Automated Build
1. Create a [Docker.io account](https://index.docker.io/) and login.
2. Link your GitHub or BitBucket account through the [`Link Accounts`](https://index.docker.io/account/accounts/) menu.
3. [Configure an Automated Build](https://index.docker.io/builds/).
4. Pick a GitHub or BitBucket project that has a `Dockerfile` that you want to build.
5. Pick the branch you want to build (the default is the `master` branch).
6. Give the Automated Build a name.
7. Assign an optional Docker tag to the Build.
8. Specify where the `Dockerfile` is located. The default is `/`.
Once the Automated Build is configured it will automatically trigger a
build, and in a few minutes, if there are no errors, you will see your
new Automated Build on the [Docker.io](https://index.docker.io) Registry.
It will stay in sync with your GitHub and BitBucket repository until you
deactivate the Automated Build.
If you want to see the status of your Automated Builds you can go to your
[Automated Builds page](https://index.docker.io/builds/) on Docker.io,
and it will show you the status of your builds and their build history.
Once you've created an Automated Build you can deactivate or delete it. You
cannot however push to an Automated Build with the `docker push` command.
You can only manage it by committing code to your GitHub or BitBucket
repository.
You can create multiple Automated Builds per repository and configure them
to point to specific `Dockerfile`s or Git branches.
### Build Triggers
Automated Builds can also be triggered via a URL on Docker.io. This
allows you to rebuild an Automated Build image on demand.
## Webhooks
Webhooks are attached to your repositories and allow you to trigger an
event when an image or updated image is pushed to the repository. With
a webhook you can specify a target URL and a JSON payload will be
delivered when the image is pushed.
## Next steps
Go and use Docker!

page_title: Managing Data in Containers
page_description: How to manage data inside your Docker containers.
page_keywords: Examples, Usage, volume, docker, documentation, user guide, data, volumes
# Managing Data in Containers
So far we've been introduced to some [basic Docker
concepts](/userguide/usingdocker/), seen how to work with [Docker
images](/userguide/dockerimages/) as well as learned about [networking
and links between containers](/userguide/dockerlinks/). In this section
we're going to discuss how you can manage data inside and between your
Docker containers.
We're going to look at the two primary ways you can manage data in
Docker.
* Data volumes, and
* Data volume containers.
## Data volumes
A *data volume* is a specially-designated directory within one or more
containers that bypasses the [*Union File
System*](/terms/layer/#ufs-def) to provide several useful features for
persistent or shared data:
- Data volumes can be shared and reused between containers
- Changes to a data volume are made directly
- Changes to a data volume will not be included when you update an image
- Volumes persist until no containers use them
### Adding a data volume
You can add a data volume to a container using the `-v` flag with the
`docker run` command. You can use the `-v` flag multiple times in a single
`docker run` to mount multiple data volumes. Let's mount a single volume
now in our web application container.
$ sudo docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at `/webapp`.
> **Note:**
> You can also use the `VOLUME` instruction in a `Dockerfile` to add one or
> more new volumes to any container created from that image.
### Mount a Host Directory as a Data Volume
In addition to creating a volume using the `-v` flag you can also mount a
directory from your own host into a container.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This will mount the local directory, `/src/webapp`, into the container as the
`/opt/webapp` directory. This is very useful for testing, for example we can
mount our source code inside the container and see our application at work as
we change the source code. The directory on the host must be specified as an
absolute path and if the directory doesn't exist Docker will automatically
create it for you.
> **Note:**
> This is not available from a `Dockerfile`, for portability and
> sharing reasons. As the host directory is, by its nature,
> host-dependent, a host directory specified in a `Dockerfile` would
> probably not work on all hosts.
Docker defaults to a read-write volume but we can also mount a directory
read-only.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py
Here we've mounted the same `/src/webapp` directory but we've added the `ro`
option to specify that the mount should be read-only.
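The `-v` argument is made up of up to three colon-separated fields: the host directory, the container directory, and an optional mode. A small sketch of pulling those fields apart:

```shell
# Parse a -v volume specification of the form host-dir:container-dir:mode
spec="/src/webapp:/opt/webapp:ro"
host_dir=${spec%%:*}
rest=${spec#*:}
container_dir=${rest%%:*}
mode=${rest#*:}
echo "$host_dir -> $container_dir ($mode)"
```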
## Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between
containers, or want to use from non-persistent containers, it's best to
create a named Data Volume Container, and then to mount the data from
it.
Let's create a new named container with a volume to share.
$ docker run -d -v /dbdata --name dbdata training/postgres
You can then use the `--volumes-from` flag to mount the `/dbdata` volume in another container.
$ docker run -d --volumes-from dbdata --name db1 training/postgres
And another:
$ docker run -d --volumes-from dbdata --name db2 training/postgres
You can use multiple `--volumes-from` parameters to bring together multiple data
volumes from multiple containers.
You can also extend the chain by mounting the volume that came from the
`dbdata` container in yet another container via the `db1` or `db2` containers.
$ docker run -d --name db3 --volumes-from db1 training/postgres
If you remove containers that mount volumes, including the initial `dbdata`
container, or the subsequent containers `db1` and `db2`, the volumes will not
be deleted until there are no containers still referencing those volumes. This
allows you to upgrade, or effectively migrate data volumes between containers.
## Backup, restore, or migrate data volumes
Another useful function we can perform with volumes is to use them for
backups, restores, or migrations. We do this by using the
`--volumes-from` flag to create a new container that mounts that volume,
like so:
$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
Here we've launched a new container and mounted the volume from the
`dbdata` container. We've then mounted a local host directory as
`/backup`. Finally, we've passed a command that uses `tar` to back up the
contents of the `dbdata` volume to a `backup.tar` file inside our
`/backup` directory. When the command completes and the container stops
we'll be left with a backup of our `dbdata` volume.
You could then restore it to the same container, or to another one that
you've made elsewhere. Create a new container.
$ sudo docker run -v /dbdata --name dbdata2 ubuntu
Then un-tar the backup file in the new container's data volume.
$ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
You can use the techniques above to automate backup, migration, and
restore testing using your preferred tools.
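At its heart this flow is just a `tar` round trip; here's a local sketch of it using throwaway directories under `/tmp` (hypothetical paths, no Docker required):

```shell
# Set up a stand-in for the dbdata volume with one file in it
mkdir -p /tmp/voldemo/dbdata /tmp/voldemo/backup /tmp/voldemo/restore
echo "sample record" > /tmp/voldemo/dbdata/data.txt

# "Back up" the volume to a tar file, as the backup container does
tar cf /tmp/voldemo/backup/backup.tar -C /tmp/voldemo dbdata

# "Restore" it into a fresh location, as the restore container does
tar xf /tmp/voldemo/backup/backup.tar -C /tmp/voldemo/restore
cat /tmp/voldemo/restore/dbdata/data.txt
```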
# Next steps
Now we've learned a bit more about how to use Docker we're going to see how to
combine Docker with the services available on
[Docker.io](https://index.docker.io) including Automated Builds and private
repositories.
Go to [Working with Docker.io](/userguide/dockerrepos).

page_title: The Docker User Guide
page_description: The Docker User Guide home page
page_keywords: docker, introduction, documentation, about, technology, docker.io, user, guide, user's, manual, platform, framework, virtualization, home, intro
# Welcome to the Docker User Guide
In the [Introduction](/) you got a taste of what Docker is and how it
works. In this guide we're going to take you through the fundamentals of
using Docker and integrating it into your environment.
We'll teach you how to use Docker to:
* Dockerize your applications.
* Run your own containers.
* Build Docker images.
* Share your Docker images with others.
* And a whole lot more!
We've broken this guide into major sections that take you through
the Docker life cycle:
## Getting Started with Docker.io
*How do I use Docker.io?*
Docker.io is the central hub for Docker. It hosts public Docker images
and provides services to help you build and manage your Docker
environment. To learn more:
Go to [Using Docker.io](/userguide/dockerio).
## Dockerizing Applications: A "Hello World!"
*How do I run applications inside containers?*
Docker offers a *container-based* virtualization platform to power your
applications. To learn how to Dockerize applications and run them:
Go to [Dockerizing Applications](/userguide/dockerizing).
## Working with Containers
*How do I manage my containers?*
Once you get a grip on running your applications in Docker containers
we're going to show you how to manage those containers. To find out
about how to inspect, monitor and manage containers:
Go to [Working With Containers](/userguide/usingdocker).
## Working with Docker Images
*How can I access, share and build my own images?*
Once you've learnt how to use Docker it's time to take the next step and
learn how to build your own application images with Docker.
Go to [Working with Docker Images](/userguide/dockerimages).
## Linking Containers Together
Until now we've seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
by linking together multiple Docker containers.
Go to [Linking Containers Together](/userguide/dockerlinks).
## Managing Data in Containers
Now that we know how to link Docker containers together, the next step is
learning how to manage data, volumes and mounts inside our containers.
Go to [Managing Data in Containers](/userguide/dockervolumes).
## Working with Docker.io
Now that we've learned a bit more about how to use Docker, we're going to see
how to combine Docker with the services available on Docker.io, including
Trusted Builds and private repositories.
Go to [Working with Docker.io](/userguide/dockerrepos).
## Getting help
* [Docker homepage](http://www.docker.io/)
* [Docker.io](http://index.docker.io)
* [Docker blog](http://blog.docker.io/)
* [Docker documentation](http://docs.docker.io/)
* [Docker Getting Started Guide](http://www.docker.io/gettingstarted/)
* [Docker code on GitHub](https://github.com/dotcloud/docker)
* [Docker mailing
list](https://groups.google.com/forum/#!forum/docker-user)
* Docker on IRC: irc.freenode.net and channel #docker
* [Docker on Twitter](http://twitter.com/docker)
* Get [Docker help](http://stackoverflow.com/search?q=docker) on
StackOverflow
* [Docker.com](http://www.docker.com/)

page_title: Working with Containers
page_description: Learn how to manage and operate Docker containers.
page_keywords: docker, the docker guide, documentation, docker.io, monitoring containers, docker top, docker inspect, docker port, ports, docker logs, log, Logs
# Working with Containers
In the [last section of the Docker User Guide](/userguide/dockerizing)
we launched our first containers. We launched two containers using the
`docker run` command.
* One container we ran interactively in the foreground.
* One container we ran daemonized in the background.
In the process we learned about several Docker commands:
* `docker ps` - Lists containers.
* `docker logs` - Shows us the standard output of a container.
* `docker stop` - Stops running containers.
> **Tip:**
> Another way to learn about `docker` commands is our
> [interactive tutorial](https://www.docker.io/gettingstarted).
The `docker` client is pretty simple. Each action you can take
with Docker is a command and each command can take a series of
flags and arguments.
    # Usage: [sudo] docker [flags] [command] [arguments] ..
    # Example:
    $ docker run -i -t ubuntu /bin/bash
Let's see this in action by using the `docker version` command to return
version information on the currently installed Docker client and daemon.
    $ sudo docker version
This command will not only provide you the version of Docker client and
daemon you are using, but also the version of Go (the programming
language powering Docker).
    Client version: 0.8.0
    Go version (client): go1.2
    Git commit (client): cc3a8c8
    Server version: 0.8.0
    Git commit (server): cc3a8c8
    Go version (server): go1.2
    Last stable version: 0.8.0
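Output like this is easy to script against. As an illustration, here's one way
to pull the client version out in shell. The text below is a captured sample,
not live `docker version` output:

```shell
# Sample of `docker version` output, stored as a string for illustration.
# In real use you would capture it with: output=$(sudo docker version)
output="Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8"

# Split each line on ": " and print the value of the matching field.
client_version=$(printf '%s\n' "$output" | awk -F': ' '/^Client version/ {print $2}')
echo "$client_version"
```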
### Seeing what the Docker client can do
We can see all of the commands available to us with the Docker client by
running the `docker` binary without any options.
    $ sudo docker
You will see a list of all currently available commands.
    Commands:
        attach    Attach to a running container
        build     Build an image from a Dockerfile
        commit    Create a new image from a container's changes
        . . .
### Seeing Docker command usage
You can also zoom in and review the usage for specific Docker commands.
Try typing `docker` followed by a `[command]` to see the usage for that
command:
    $ sudo docker attach
    Help output . . .
Or you can also pass the `--help` flag to the `docker` binary.
    $ sudo docker attach --help
This will display the help text and all available flags:
    Usage: docker attach [OPTIONS] CONTAINER

    Attach to a running container

      --no-stdin=false: Do not attach stdin
      --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
> **Note:**
> You can see a full list of Docker's commands
> [here](/reference/commandline/cli/).
## Running a Web Application in Docker
So now we've learnt a bit more about the `docker` client, let's move on to
the important stuff: running more containers. So far none of the
containers we've run has done anything particularly useful, so let's
build on that experience by running an example web application in
Docker.
For our web application we're going to run a Python Flask application.
Let's start with a `docker run` command.
    $ sudo docker run -d -P training/webapp python app.py
Let's review what our command did. We've specified two flags: `-d` and
`-P`. We've already seen the `-d` flag which tells Docker to run the
container in the background. The `-P` flag is new and tells Docker to
map any required network ports inside our container to our host. This
lets us view our web application.
We've specified an image: `training/webapp`. This image is a
pre-built image we've created that contains a simple Python Flask web
application.
Lastly, we've specified a command for our container to run: `python
app.py`. This launches our web application.
> **Note:**
> You can see more detail on the `docker run` command in the [command
> reference](/reference/commandline/cli/#run) and the [Docker Run
> Reference](/reference/run/).
## Viewing our Web Application Container
Now let's see our running container using the `docker ps` command.
    $ sudo docker ps -l
    CONTAINER ID  IMAGE                   COMMAND        CREATED        STATUS        PORTS                    NAMES
    bc533791f3f5  training/webapp:latest  python app.py  5 seconds ago  Up 2 seconds  0.0.0.0:49155->5000/tcp  nostalgic_morse
You can see we've specified a new flag, `-l`, for the `docker ps`
command. This tells the `docker ps` command to return the details of the
*last* container started.
> **Note:**
> The `docker ps` command only shows running containers. If you want to
> see stopped containers too use the `-a` flag.
We can see the same details we saw [when we first Dockerized a
container](/userguide/dockerizing) with one important addition in the `PORTS`
column.
    PORTS
    0.0.0.0:49155->5000/tcp
When we passed the `-P` flag to the `docker run` command Docker mapped any
ports exposed in our image to our host.
> **Note:**
> We'll learn more about how to expose ports in Docker images when
> [we learn how to build images](/userguide/dockerimages).
In this case Docker has exposed port 5000 (the default Python Flask
port) on port 49155.
Network port bindings are very configurable in Docker. In our last
example the `-P` flag is a shortcut for `-p 5000` that maps port 5000
inside the container to a high port (from the range 49000 to 49900) on
the local Docker host. We can also bind a container's ports to specific
ports using the `-p` flag, for example:
    $ sudo docker run -d -p 5000:5000 training/webapp python app.py
This would map port 5000 inside our container to port 5000 on our local
host. You might be asking by now: why wouldn't we just always use 1:1
port mappings in Docker containers rather than mapping to high ports?
Well, 1:1 mappings have the constraint that only one container can bind
each port on your local host. Let's say you want to test two Python
applications, both bound to port 5000 inside their containers. Without
Docker's port mapping you could only access one at a time.
So let's now browse to port 49155 in a web browser to
see the application.
![Viewing the web application](/userguide/webapp1.png).
Our Python application is live!
## A Network Port Shortcut
Using the `docker ps` command to return the mapped port is a bit clumsy so
Docker has a useful shortcut we can use: `docker port`. To use `docker port` we
specify the ID or name of our container and then the port for which we need the
corresponding public-facing port.
    $ sudo docker port nostalgic_morse 5000
    0.0.0.0:49155
In this case we've looked up what port is mapped externally to port 5000 inside
the container.
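The `ip:port` string that `docker port` prints is easy to split in shell if a
script needs just the port. The address below is a sample of the command's
output rather than live data:

```shell
# Sample output of `docker port nostalgic_morse 5000`; in real use:
#   mapped=$(sudo docker port nostalgic_morse 5000)
mapped="0.0.0.0:49155"

# POSIX parameter expansion: strip from the last ":" either way.
host_ip=${mapped%:*}
host_port=${mapped##*:}
echo "port 5000 is published on ${host_ip} port ${host_port}"
```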
## Viewing the Web Application's Logs
Let's also find out a bit more about what's happening with our application and
use another of the commands we've learnt, `docker logs`.
    $ sudo docker logs -f nostalgic_morse
    * Running on http://0.0.0.0:5000/
    10.0.2.2 - - [23/May/2014 20:16:31] "GET / HTTP/1.1" 200 -
    10.0.2.2 - - [23/May/2014 20:16:31] "GET /favicon.ico HTTP/1.1" 404 -
This time though we've added a new flag, `-f`. This causes the `docker
logs` command to act like the `tail -f` command and watch the
container's standard output. We can see here the logs from Flask showing
the application running on port 5000 and the access log entries for it.
## Looking at our Web Application Container's processes
In addition to the container's logs we can also examine the processes
running inside it using the `docker top` command.
    $ sudo docker top nostalgic_morse
    PID    USER    COMMAND
    854    root    python app.py
Here we can see our `python app.py` command is the only process running inside
the container.
## Inspecting our Web Application Container
Lastly, we can take a low-level dive into our Docker container using the
`docker inspect` command. It returns a JSON hash of useful configuration
and status information about Docker containers.
    $ sudo docker inspect nostalgic_morse
Let's see a sample of that JSON output.
    [{
        "ID": "bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20",
        "Created": "2014-05-26T05:52:40.808952951Z",
        "Path": "python",
        "Args": [
           "app.py"
        ],
        "Config": {
            "Hostname": "bc533791f3f5",
            "Domainname": "",
            "User": "",
    . . .
We can also narrow down the information we want to return by requesting a
specific element, for example to return the container's IP address we would:
    $ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse
    172.17.0.5
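The `-f` flag uses Go's template syntax and the extraction happens inside
Docker itself. As a rough, illustrative client-side equivalent, you could pull
the same field out of the JSON with `sed`. The JSON below is a trimmed sample,
not live `docker inspect` output:

```shell
# Trimmed sample of `docker inspect` JSON; in real use you would capture it:
#   json=$(sudo docker inspect nostalgic_morse)
json='{ "NetworkSettings": { "IPAddress": "172.17.0.5" } }'

# Extract the value of the "IPAddress" key with a sed capture group.
ip=$(printf '%s\n' "$json" | sed -n 's/.*"IPAddress": "\([^"]*\)".*/\1/p')
echo "$ip"
```

In practice the `-f` template is the better tool, since it understands nested
JSON structure rather than pattern-matching on text.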
## Stopping our Web Application Container
Okay, we've seen our web application working. Now let's stop it using the
`docker stop` command and the name of our container: `nostalgic_morse`.
    $ sudo docker stop nostalgic_morse
    nostalgic_morse
We can now use the `docker ps` command to check if the container has
been stopped.
    $ sudo docker ps -l
## Restarting our Web Application Container
Oops! Just after you stopped the container you get a call to say another
developer needs the container back. From here you have two choices: you
can create a new container or restart the old one. Let's look at
starting our previous container back up.
    $ sudo docker start nostalgic_morse
    nostalgic_morse
Now quickly run `docker ps -l` again to see that the running container is
back up, or browse to the container's URL to see if the application
responds.
> **Note:**
> Also available is the `docker restart` command that runs a stop and
> then start on the container.
## Removing our Web Application Container
Your colleague has let you know that they've now finished with the container
and won't need it again. So let's remove it using the `docker rm` command.
    $ sudo docker rm nostalgic_morse
    Error: Impossible to remove a running container, please stop it first or use -f
    2014/05/24 08:12:56 Error: failed to remove one or more containers
What's happened? We can't actually remove a running container. This protects
you from accidentally removing a running container you might need. Let's try
this again by stopping the container first.
    $ sudo docker stop nostalgic_morse
    nostalgic_morse
    $ sudo docker rm nostalgic_morse
    nostalgic_morse
And now our container is stopped and deleted.
> **Note:**
> Always remember that deleting a container is final!
# Next steps
Until now we've only used images that we've downloaded from
[Docker.io](https://index.docker.io). Now let's get introduced to
building and sharing our own images.
Go to [Working with Docker Images](/userguide/dockerimages).
