Updated a bunch of formatting in the docs/sources/use files

Docker-DCO-1.1-Signed-off-by: James Turnbull <james@lovedthanlost.net> (github: jamtur01)
James Turnbull 2014-05-14 19:22:49 +02:00
parent cb0f2a2823
commit 2269472f3a
10 changed files with 261 additions and 260 deletions


@@ -7,53 +7,48 @@ page_keywords: Examples, Usage, links, docker, documentation, examples, names, n

## Introduction

Rather than hardcoding network links between a service consumer and provider, Docker encourages service portability. For example, instead of:

    (consumer) --> (redis)

which requires you to restart the `consumer` to attach it to a different `redis` service, you can add ambassadors:

    (consumer) --> (redis-ambassador) --> (redis)

Or:

    (consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)

When you need to rewire your consumer to talk to a different Redis server, you can just restart the `redis-ambassador` container that the consumer is connected to.

This pattern also allows you to transparently move the Redis server to a different Docker host from the consumer.

Using the `svendowideit/ambassador` container, the link wiring is controlled entirely from the `docker run` parameters.

## Two host Example

Start the actual Redis server on one Docker host:

    big-server $ docker run -d -name redis crosbymichael/redis

Then add an ambassador linked to the Redis server, mapping a port to the outside world:

    big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador

On the other host, you can set up another ambassador, setting environment variables for each remote port we want to proxy to the `big-server`:

    client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

Then on the `client-server` host, you can use a Redis client container to talk to the remote Redis server, just by linking to the local Redis ambassador:

    client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
    redis 172.17.0.160:6379> ping
@@ -61,10 +56,10 @@ linking to the local redis ambassador.

## How it works

The following example shows what the `svendowideit/ambassador` container does automatically (with a tiny amount of `sed`).

On the Docker host (192.168.1.52) that Redis will run on:

    # start actual redis server
    $ docker run -d -name redis crosbymichael/redis
@@ -81,8 +76,8 @@ On the docker host (192.168.1.52) that redis will run on:

    # add redis ambassador
    $ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh

In the `redis_ambassador` container, you can see the linked Redis container's `env`:

    $ env
    REDIS_PORT=tcp://172.17.0.136:6379
@@ -98,8 +93,8 @@ containers'senv

    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    PWD=/

This environment is used by the ambassador `socat` script to expose Redis to the world (via the `-p 6379:6379` port mapping):

    $ docker rm redis_ambassador
    $ sudo ./contrib/mkimage-unittest.sh
@@ -107,16 +102,16 @@ to the world (via the -p 6379:6379 port mapping)

    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379

Now ping the Redis server via the ambassador.

Now go to a different server:

    $ sudo ./contrib/mkimage-unittest.sh
    $ docker run -t -i -expose 6379 -name redis_ambassador docker-ut sh
    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379

And get the `redis-cli` image so we can talk over the ambassador bridge:

    $ docker pull relateiq/redis-cli
    $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
@@ -125,16 +120,16 @@ and get the redis-cli image so we can talk over the ambassador bridge

## The svendowideit/ambassador Dockerfile

The `svendowideit/ambassador` image is a small `busybox` image with `socat` built in. When you start the container, it uses a small `sed` script to parse out the (possibly multiple) link environment variables to set up the port forwarding. On the remote host, you need to set the variable using the `-e` command line option.

    --expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379

This will forward the local `1234` port to the remote IP and port, in this case `192.168.1.52:6379`.
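The `sed`-driven setup is roughly equivalent to the shell pipeline below. This is only an illustrative sketch of the technique, not the exact command baked into the image: it rewrites every `*_PORT_<port>_TCP` link variable into a backgrounded `socat` listener.

    # turn each REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 style variable
    # into: socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 &
    env | grep _TCP= | \
        sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' | sh
    # something else must then keep the container's main process alive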


@@ -12,10 +12,10 @@ your Docker install, run the following command:

    # Check that you have a working install
    $ docker info

If you get `docker: command not found` or something like `/var/lib/docker/repositories: permission denied` you may have an incomplete Docker installation or insufficient privileges to access Docker on your machine.

Please refer to [*Installation*](/installation/#installation-list) for installation instructions.
@@ -26,9 +26,9 @@ for installation instructions.

    $ sudo docker pull ubuntu

This will find the `ubuntu` image by name on [*Docker.io*](../workingwithrepository/#find-public-images-on-dockerio) and download it from [Docker.io](https://index.docker.io) to a local image cache.

> **Note**:
> When the image has successfully downloaded, you will see a 12 character
@@ -50,7 +50,7 @@ cache.

## Bind Docker to another host/port or a Unix socket

> **Warning**:
> Changing the default `docker` daemon binding to a
> TCP port or Unix *docker* user group will increase your security risks
> by allowing non-root users to gain *root* access on the host. Make sure
@@ -58,41 +58,44 @@ cache.

> to a TCP port, anyone with access to that port has full Docker access;
> so it is not advisable on an open network.

With `-H` it is possible to make the Docker daemon listen on a specific IP and port. By default, it will listen on `unix:///var/run/docker.sock` to allow only local connections by the *root* user. You *could* set it to `0.0.0.0:4243` or a specific host IP to give access to everybody, but that is **not recommended** because then it is trivial for someone to gain root access to the host where the daemon is running.

Similarly, the Docker client can use `-H` to connect to a custom port.

`-H` accepts host and port assignment in the following format:

    tcp://[host][:port] or unix://path

For example:

- `tcp://host:4243` -> TCP connection on host:4243
- `unix://path/to/socket` -> Unix socket located at `path/to/socket`

`-H`, when empty, will default to the same value as when no `-H` was passed in.

`-H` also accepts short form for TCP bindings:

    host[:port] or :port

Run Docker in daemon mode:

    $ sudo <path to>/docker -H 0.0.0.0:5555 -d &

Download an `ubuntu` image:

    $ sudo docker -H :5555 pull ubuntu

You can use multiple `-H`, for example, if you want to listen on both TCP and a Unix socket:

    # Run docker in daemon mode
    $ sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -d &


@@ -19,8 +19,8 @@ operating systems.

## Installation

The cookbook is available on the [Chef Community Site](http://community.opscode.com/cookbooks/docker) and can be installed using your favorite cookbook dependency manager.

The source can be found on [GitHub](https://github.com/bflad/chef-docker).
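With Berkshelf, for instance, the dependency can be declared in your `Berksfile`, or the knife plugin can fetch it from the community site. Both lines below are illustrative examples rather than text from the cookbook's own documentation:

    # in a Berksfile
    cookbook 'docker'

    # or with knife
    $ knife cookbook site install docker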
@@ -71,4 +71,4 @@ This is equivalent to running the following command, but under upstart:

    $ docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry

The resources will accept a single string or an array of values for any Docker flags that allow multiple values.
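As a hypothetical sketch of that behaviour (the resource and attribute names below follow the container example this hunk refers to, but are assumptions rather than text copied from the cookbook documentation):

    # a single string ...
    docker_container 'samalba/docker-registry' do
      env 'SETTINGS_FLAVOR=local'
    end

    # ... or an array of values
    docker_container 'samalba/docker-registry' do
      env ['SETTINGS_FLAVOR=local', 'STORAGE_PATH=/docker-storage']
    end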


@@ -10,16 +10,15 @@ You can use your Docker containers with process managers like

## Introduction

If you want a process manager to manage your containers you will need to run the Docker daemon with the `-r=false` flag so that Docker will not automatically restart your containers when the host is restarted.

When you have finished setting up your image and are happy with your running container, you can then attach a process manager to manage it.

When you run `docker start -a`, Docker will automatically attach to the running container, or start it if needed, and forward all signals so that the process manager can detect when a container stops and correctly restart it.

Here are a few sample scripts for systemd and upstart to integrate with Docker.
@@ -27,9 +26,8 @@ docker.

## Sample Upstart Script

In this example we've already created a container to run Redis with `--name redis_server`. To create an upstart script for our container, we create a file named `/etc/init/redis.conf` and place the following into it:

    description "Redis container"


@@ -7,13 +7,13 @@ page_keywords: network, networking, bridge, docker, documentation

## Introduction

Docker uses Linux bridge capabilities to provide network connectivity to containers. The `docker0` bridge interface is managed by Docker for this purpose. When the Docker daemon starts, it:

- Creates the `docker0` bridge if not present
- Searches for an IP address range which doesn't overlap with an existing route
- Picks an IP in the selected range
- Assigns this IP to the `docker0` bridge

<!-- -->
@@ -42,7 +42,7 @@ for the container.

    docker0 8000.fef213db5a66 no vethQCDY1N

Above, `docker0` acts as a bridge for the `vethQCDY1N` interface which is dedicated to the `52f811c5d3d6` container.

## How to use a specific IP address range
@@ -50,16 +50,15 @@ Docker will try hard to find an IP range that is not used by the host.

Even though it works for most cases, it's not bullet-proof and sometimes you need to have more control over the IP addressing scheme.

For this purpose, Docker allows you to manage the `docker0` bridge or your own one using the `-b=<bridgename>` parameter.

In this scenario (a rough command sketch follows the list):

- Ensure Docker is stopped
- Create your own bridge (`bridge0` for example)
- Assign a specific IP to this bridge
- Start Docker with the `-b=bridge0` parameter

<!-- -->
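A rough sketch of those four steps; the interface name and addresses are only examples, and the exact daemon invocation depends on how Docker is installed on your distribution:

    # stop the running daemon first
    $ sudo service docker stop

    # create the bridge and give it an address
    $ sudo brctl addbr bridge0
    $ sudo ifconfig bridge0 192.168.5.1 netmask 255.255.255.0

    # start the daemon against the new bridge
    $ sudo docker -d -b=bridge0 &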
@@ -107,29 +106,27 @@ In this scenario:

## Container intercommunication

The value of the Docker daemon's `icc` parameter determines whether containers can communicate with each other over the bridge network.

- The default, `-icc=true`, allows containers to communicate with each other.
- `-icc=false` means containers are isolated from each other.

Docker uses `iptables` under the hood to either accept or drop communication between containers.
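For example, to run the daemon with inter-container communication disabled (the invocation below is a sketch; adjust it to however your init system starts Docker):

    $ sudo docker -d -icc=false &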
## What is the vethXXXX device?

Well, things get complicated here.

The `vethXXXX` interface is the host side of a point-to-point link between the host and the corresponding container; the other side of the link is the container's `eth0` interface. This pair (host `vethXXXX` and container `eth0`) are connected like a tube. Everything that comes in one side will come out the other side.

All the plumbing is delegated to Linux network capabilities (check the `ip link` command) and the namespaces infrastructure.
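For instance, on the host you can list the interfaces to see the `vethXXXX` side of each pair enslaved to `docker0` (device names vary from run to run):

    $ ip link show
    $ brctl show docker0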
## I want more


@@ -29,10 +29,10 @@ containers, Docker provides the linking mechanism.

## Auto map all exposed ports on the host

To bind all the exposed container ports to the host automatically, use `docker run -P <imageid>`. The mapped host ports will be auto-selected from a pool of unused ports (49000..49900), and you will need to use `docker ps`, `docker inspect <container_id>` or `docker port <container_id> <port>` to determine what they are.
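As a quick sketch, reusing the Redis image mentioned elsewhere in these docs (the container name here is just an example):

    # map all exposed ports to auto-selected host ports
    $ sudo docker run -d -P --name auto_redis crosbymichael/redis

    # then look up which host port was picked for 6379
    $ sudo docker port auto_redis 6379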
## Binding a port to a host interface
@@ -65,9 +65,9 @@ combinations described for TCP work. Here is only one example:

    # Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
    $ docker run -p 127.0.0.1:53:5353/udp <image> <cmd>

The command `docker port` lists the interface and port on the host machine bound to a given container port. It is useful when using dynamically allocated ports:

    # Bind to a dynamically allocated port
    $ docker run -p 127.0.0.1::8080 --name dyn-bound <image> <cmd>
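    # (continuation sketch) look up the host port that was allocated
    $ docker port dyn-bound 8080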
@@ -79,24 +79,25 @@ ports:

## Linking a container

Communication between two containers can also be established in a Docker-specific way called linking.

To briefly present the concept of linking, let us consider two containers: `server`, containing the service, and `client`, accessing the service. Once `server` is running, `client` is started and links to `server`. Linking sets environment variables in `client` giving it some information about `server`. In this sense, linking is a method of service discovery.

Let us now get back to our topic of interest: communication between the two containers. We mentioned that the tricky part about this communication was that the IP address of `server` was not fixed. Therefore, some of the environment variables are going to be used to inform `client` about this IP address. This process, called exposure, is possible because `client` is started after `server` has been started.

Here is a full example. On `server`, the port of interest is exposed. The exposure is done either through the `--expose` parameter to the `docker run` command, or the `EXPOSE` build command in a `Dockerfile`:

    # Expose port 80
    $ docker run --expose 80 --name server <image> <cmd>
@@ -106,7 +107,7 @@ The `client` then links to the `server`:

    # Link
    $ docker run --name client --link server:linked-server <image> <cmd>

Here `client` locally refers to `server` as `linked-server`. The following environment variables, among others, are available on `client`:

    # The default protocol, ip, and port of the service running in the container
@@ -118,7 +119,9 @@ environment variables, among others, are available on `client`:

    $ LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8
    $ LINKED-SERVER_PORT_80_TCP_PORT=80

This tells `client` that a service is running on port 80 of `server` and that `server` is accessible at the IP address `172.17.0.8`.

> **Note:**
> Using the `-p` parameter also exposes the port.


@@ -12,7 +12,7 @@ page_keywords: puppet, installation, usage, docker, documentation

## Requirements

To use this guide you'll need a working installation of Puppet from [Puppet Labs](https://puppetlabs.com).

The module also currently uses the official PPA so only works with Ubuntu.
@@ -26,8 +26,8 @@ installed using the built-in module tool.

    $ puppet module install garethr/docker

It can also be found on [GitHub](https://github.com/garethr/garethr-docker) if you would rather download the source.

## Usage
@@ -88,5 +88,6 @@ Run also contains a number of optional parameters:

    dns => ['8.8.8.8', '8.8.4.4'],
    }

> *Note:*
> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single
> string or as above with an array of values.
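As a hypothetical illustration (the parameter names follow the note above and the `docker::run` example this hunk is taken from; they are not copied verbatim from the module's documentation):

    docker::run { 'helloworld':
      image => 'ubuntu',
      ports => '4444',                    # a single string ...
      env   => ['FOO=BAR', 'FOO2=BAR2'],  # ... or an array of values
    }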


@@ -6,18 +6,16 @@ page_keywords: Examples, Usage, links, linking, docker, documentation, examples,

## Introduction

From version 0.6.5 you are now able to `name` a container and `link` it to another container by referring to its name. This will create a parent -> child relationship where the parent container can see selected information about its child.

## Container Naming

You can now name your container by using the `--name` flag. If no name is provided, Docker will automatically generate a name. You can see this name using the `docker ps` command.

    # format is "sudo docker run --name <container_name> <image_name> <command>"
    $ sudo docker run --name test ubuntu /bin/bash
@@ -29,48 +27,49 @@ using the `docker ps` command.

## Links: service discovery for docker

Links allow containers to discover and securely communicate with each other by using the flag `-link name:alias`. Inter-container communication can be disabled with the daemon flag `-icc=false`. With this flag set to `false`, Container A cannot access Container B unless explicitly allowed via a link. This is a huge win for securing your containers. When two containers are linked together Docker creates a parent-child relationship between the containers. The parent container will be able to access information via environment variables of the child such as name, exposed ports, IP and other selected environment variables.

When linking two containers Docker will use the exposed ports of the container to create a secure tunnel for the parent to access. If a database container only exposes port 8080 then the linked container will only be allowed to access port 8080 and nothing else if inter-container communication is set to false.

For example, there is an image called `crosbymichael/redis` that exposes the port 6379 and starts the Redis server. Let's name the container as `redis` based on that image and run it as a daemon.

    $ sudo docker run -d --name redis crosbymichael/redis

We can issue all the commands that you would expect using the name `redis`; start, stop, attach, using the name for our container. The name also allows us to link other containers into this one.

Next, we can start a new web application that has a dependency on Redis and apply a link to connect both containers. If you noticed when running our Redis server we did not use the `-p` flag to publish the Redis port to the host system. Redis exposed port 6379 and this is all we need to establish a link.

    $ sudo docker run -t -i --link redis:db --name webapp ubuntu bash

When you specified `--link redis:db` you are telling Docker to link the container named `redis` into this new container with the alias `db`. Environment variables are prefixed with the alias so that the parent container can access network and environment information from the containers that are linked into it.

If we inspect the environment variables of the second container, we would see all the information about the child container.

    $ root@4c01db0b339c:/# env
@@ -90,20 +89,20 @@ all the information about the child container.

    _=/usr/bin/env
    root@4c01db0b339c:/#

Accessing the network information along with the environment of the child container allows us to easily connect to the Redis service on the specific IP and port in the environment.

> **Note**:
> These environment variables are only set for the first process in the
> container. Similarly, some daemons (such as `sshd`)
> will scrub them when spawning shells for connection.

You can work around this by storing the initial `env` in a file, or looking at `/proc/1/environ`.

Running `docker ps` shows the 2 containers, and the `webapp/db` alias name for the Redis container.

    $ docker ps
    CONTAINER ID        IMAGE                        COMMAND                CREATED             STATUS              PORTS               NAMES
@@ -112,13 +111,13 @@ the Redis container.

## Resolving Links by Name

> *Note:* New in version v0.11.

Linked containers can be accessed by hostname. Hostnames are mapped by appending entries to `/etc/hosts` using the linked container's alias.

For example, linking a container using `--link redis:db` will generate the following `/etc/hosts` file:

    root@6541a75d44a0:/# cat /etc/hosts
    172.17.0.3      6541a75d44a0
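The listing is cut off by the hunk, but the point is that the alias appears as an extra entry, so the linked container can be reached by name. Roughly (the address below is illustrative):

    172.17.0.5      db

    root@6541a75d44a0:/# ping db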


@@ -8,8 +8,8 @@ page_keywords: Examples, Usage, volume, docker, documentation, examples

A *data volume* is a specially-designated directory within one or more containers that bypasses the [*Union File System*](/terms/layer/#ufs-def) to provide several useful features for persistent or shared data:

- **Data volumes can be shared and reused between containers:**
  This is the feature that makes data volumes so powerful. You can
@@ -28,26 +28,22 @@ for persistent or shared data:

Each container can have zero or more data volumes.

## Getting Started

Using data volumes is as simple as adding a `-v` parameter to the `docker run` command. The `-v` parameter can be used more than once in order to create more volumes within the new container. To create a new container with two new volumes:

    $ docker run -v /var/volume1 -v /var/volume2 busybox true

This command will create a new container with two new volumes that exits instantly (`true` is pretty much the smallest, simplest program that you can run). You can then mount its volumes in any other container using the `run` `--volumes-from` option, irrespective of whether the volume container is running or not.

Or, you can use the `VOLUME` instruction in a `Dockerfile` to add one or more new volumes to any container created from that image:

    # BUILD-USING: $ docker build -t data .
@@ -63,8 +59,8 @@ containers, or want to use from non-persistent containers, it's best to

create a named Data Volume Container, and then to mount the data from it.

Create a named container with volumes to share (`/var/volume1` and `/var/volume2`):

    $ docker run -v /var/volume1 -v /var/volume2 -name DATA busybox true
@@ -72,12 +68,12 @@ Then mount those data volumes into your application containers:

    $ docker run -t -i -rm -volumes-from DATA -name client1 ubuntu bash

You can use multiple `-volumes-from` parameters to bring together multiple data volumes from multiple containers.

Interestingly, you can mount the volumes that came from the `DATA` container in yet another container via the `client1` middleman container:

    $ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash
@@ -94,14 +90,15 @@ upgrade, or effectively migrate data volumes between containers.

    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].

You must specify an absolute path for `host-dir`. If `host-dir` is missing from the command, then Docker creates a new volume. If `host-dir` is present but points to a non-existent directory on the host, Docker will automatically create this directory and use it as the source of the bind-mount.

Note that this is not available from a `Dockerfile` due to the portability and sharing purpose of it. The `host-dir` volumes are entirely host-dependent and might not work on any other machine.

For example:
@@ -110,26 +107,28 @@ For example:

    # Example:
    $ sudo docker run -i -t -v /var/log:/logs_from_host:ro ubuntu bash

The command above mounts the host directory `/var/log` into the container with *read only* permissions as `/logs_from_host`.

New in version v0.5.0.

### Note for OS/X users and remote daemon users:

OS/X users run `boot2docker` to create a minimalist virtual machine running the docker daemon. That virtual machine then launches docker commands on behalf of the OS/X command line. This means that `host directories` refer to directories in the `boot2docker` virtual machine, not the OS/X filesystem.

Similarly, whenever the docker daemon is on a remote machine, the `host directories` always refer to directories on the daemon's machine.

### Backup, restore, or migrate data volumes

You cannot back up volumes using `docker export`, `docker save` and `docker cp` because they are external to images. Instead you can use `--volumes-from` to start a new container that can access the data-container's volume. For example:

    $ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
@@ -144,7 +143,8 @@ start a new container that can access the data-container's volume. For example:

- `tar cvf /backup/backup.tar /data`:
  creates an uncompressed tar file of all the files in the `/data` directory

Then to restore to the same container, or another that you've made elsewhere:

    # create a new data container
    $ sudo docker run -v /data -name DATA2 busybox true
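    # (continuation sketch) untar the backup into the new container's volume
    $ sudo docker run -rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar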


@@ -18,15 +18,14 @@ You can find one or more repositories hosted on a *registry*. There are

two types of *registry*: public and private. There's also a default *registry* that Docker uses which is called [Docker.io](http://index.docker.io).

[Docker.io](http://index.docker.io) is the home of "top-level" repositories and public "user" repositories. The Docker project provides [Docker.io](http://index.docker.io) to host public and [private repositories](https://index.docker.io/plans/), namespaced by user. We provide user authentication and search over all the public repositories.

Docker acts as a client for these services via the `docker search`, `pull`, `login` and `push` commands.

## Repositories
@@ -42,8 +41,8 @@ There are two types of public repositories: *top-level* repositories

which are controlled by the Docker team, and *user* repositories created by individual contributors. Anyone can read from these repositories; they really help people get started quickly! You could also use [*Trusted Builds*](#trusted-builds) if you need to keep control of who accesses your images.

- Top-level repositories can easily be recognized by **not** having a `/` (slash) in their name. These repositories represent trusted images
@@ -83,12 +82,11 @@ user name or description:

    ...

There you can see two example results: `centos` and `slantview/centos-chef-solo`. The second result shows that it comes from the public repository of a user, `slantview/`, while the first result (`centos`) doesn't explicitly list a repository so it comes from the trusted top-level namespace. The `/` character separates a user's repository and the image name.

Once you have found the image name, you can download it:
@@ -98,8 +96,8 @@ Once you have found the image name, you can download it:

    539c0211cd76: Download complete

What can you do with that image? Check out the [*Examples*](/examples/#example-list) and, when you're ready with your own image, come back here to learn how to share it.

## Contributing to Docker.io
@@ -114,10 +112,9 @@ first. You can create your username and login on

This will prompt you for a username, which will become a public namespace for your public repositories.

If your username is available then `docker` will also prompt you to enter a password and your e-mail address. It will then automatically log you in. Now you're ready to commit and push your own images!

> **Note:**
> Your authentication credentials will be stored in the [`.dockercfg`
@@ -150,9 +147,9 @@ or tag.

## Trusted Builds

Trusted Builds automate the building and updating of images from GitHub, directly on Docker.io. It works by adding a commit hook to your selected repository, triggering a build and update when you push a commit.

### To set up a trusted build
@@ -206,9 +203,10 @@ identify a host), like this:

Once a repository has your registry's host name as part of the tag, you can push and pull it like any other repository, but it will **not** be searchable (or indexed at all) on [Docker.io](http://index.docker.io), and there will be no user name checking performed. Your registry will function completely independently from the [Docker.io](http://index.docker.io) registry.
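A sketch of that workflow, using a made-up registry address:

    $ sudo docker tag ubuntu my-registry.example.com:5000/ubuntu
    $ sudo docker push my-registry.example.com:5000/ubuntu
    $ sudo docker pull my-registry.example.com:5000/ubuntu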
<iframe width="640" height="360" src="//www.youtube.com/embed/CAewZCBT4PI?rel=0" frameborder="0" allowfullscreen></iframe>
@@ -219,15 +217,20 @@ http://blog.docker.io/2013/07/how-to-use-your-own-registry/)

## Authentication File

The authentication is stored in a JSON file, `.dockercfg`, located in your home directory. It supports multiple registry URLs.

The `docker login` command will create the [https://index.docker.io/v1/](https://index.docker.io/v1/) key.

The `docker login https://my-registry.com` command will create the [https://my-registry.com](https://my-registry.com) key.

For example:
@@ -243,4 +246,6 @@ For example:

    }

The `auth` field represents:

    base64(<username>:<password>)
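Illustratively, that value can be reproduced with the `base64` tool (the credentials below are obviously fake):

    $ echo -n 'myusername:mypassword' | base64
    bXl1c2VybmFtZTpteXBhc3N3b3Jk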