Updated a bunch of formatting in the docs/sources/use files

Docker-DCO-1.1-Signed-off-by: James Turnbull <james@lovedthanlost.net> (github: jamtur01)
James Turnbull 2014-05-14 19:22:49 +02:00
parent cb0f2a2823
commit 2269472f3a
10 changed files with 261 additions and 260 deletions

## Introduction
Rather than hardcoding network links between a service consumer and
provider, Docker encourages service portability, for example instead of:
(consumer) --> (redis)
Requiring you to restart the `consumer` to attach it to a different
`redis` service, you can add ambassadors:
(consumer) --> (redis-ambassador) --> (redis)
Or
(consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)
When you need to rewire your consumer to talk to a different Redis
server, you can just restart the `redis-ambassador` container that the
consumer is connected to.
This pattern also allows you to transparently move the Redis server to a
different Docker host from the consumer.
Using the `svendowideit/ambassador` container, the link wiring is
controlled entirely from the `docker run` parameters.
## Two host Example
Start actual Redis server on one Docker host
big-server $ docker run -d -name redis crosbymichael/redis
Then add an ambassador linked to the Redis server, mapping a port to the
outside world:
big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador
On the other host, you can set up another ambassador setting environment
variables for each remote port we want to proxy to the `big-server`
client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
Then on the `client-server` host, you can use a Redis client container
to talk to the remote Redis server, just by linking to the local Redis
ambassador.
client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
## How it works
The following example shows what the `svendowideit/ambassador` container
does automatically (with a tiny amount of `sed`):
On the Docker host (192.168.1.52) that Redis will run on:
# start actual redis server
$ docker run -d -name redis crosbymichael/redis
# add redis ambassador
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh
In the `redis_ambassador` container, you can see the linked Redis
containers `env`:
$ env
REDIS_PORT=tcp://172.17.0.136:6379
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
This environment is used by the ambassador `socat` script to expose Redis
to the world (via the `-p 6379:6379` port mapping):
$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
Now ping the Redis server via the ambassador:
Now go to a different server:
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -expose 6379 -name redis_ambassador docker-ut sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
And get the `redis-cli` image so we can talk over the ambassador bridge:
$ docker pull relateiq/redis-cli
$ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
## The svendowideit/ambassador Dockerfile
The `svendowideit/ambassador` image is a small `busybox` image with
`socat` built in. When you start the container, it uses a small `sed`
script to parse out the (possibly multiple) link environment variables
to set up the port forwarding. On the remote host, you need to set the
variable using the `-e` command line option.
--expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379
Will forward the local `1234` port to the remote IP and port, in this
case `192.168.1.52:6379`.

To check your Docker install, run the following command:
# Check that you have a working install
$ docker info
If you get `docker: command not found` or something like
`/var/lib/docker/repositories: permission denied` you may have an
incomplete Docker installation or insufficient privileges to access
Docker on your machine.
Please refer to [*Installation*](/installation/#installation-list)
for installation instructions.
$ sudo docker pull ubuntu
[*Docker.io*](../workingwithrepository/#find-public-images-on-dockerio)
and download it from [Docker.io](https://index.docker.io) to a local
image cache.
> **Note**:
> When the image has successfully downloaded, you will see a 12 character
## Bind Docker to another host/port or a Unix socket
> **Warning**:
> Changing the default `docker` daemon binding to a
> TCP port or Unix *docker* user group will increase your security risks
> by allowing non-root users to gain *root* access on the host. Make sure
> to a TCP port, anyone with access to that port has full Docker access;
> so it is not advisable on an open network.
With `-H` it is possible to make the Docker daemon listen on a
specific IP and port. By default, it will listen on
`unix:///var/run/docker.sock` to allow only local connections by the
*root* user. You *could* set it to `0.0.0.0:4243` or a specific host IP
to give access to everybody, but that is **not recommended** because
then it is trivial for someone to gain root access to the host where the
daemon is running.
Similarly, the Docker client can use `-H` to connect to a custom port.
`-H` accepts host and port assignment in the following format:
`tcp://[host][:port]` or `unix://path`
For example:
- `tcp://host:4243` -> TCP connection on
host:4243
- `unix://path/to/socket` -> Unix socket located
at `path/to/socket`
`-H`, when empty, will default to the same value as
when no `-H` was passed in.
`-H` also accepts short form for TCP bindings:
`host[:port]` or `:port`
Run Docker in daemon mode:
$ sudo <path to>/docker -H 0.0.0.0:5555 -d &
Download an `ubuntu` image:
$ sudo docker -H :5555 pull ubuntu
You can use multiple `-H`, for example, if you want to listen on both
TCP and a Unix socket:
# Run docker in daemon mode
$ sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -d &

## Installation
The cookbook is available on the [Chef Community
Site](http://community.opscode.com/cookbooks/docker) and can be
installed using your favorite cookbook dependency manager.
The source can be found on
[GitHub](https://github.com/bflad/chef-docker).
This is equivalent to running the following command, but under upstart:
$ docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry
The resources will accept a single string or an array of values for any
Docker flags that allow multiple values.

## Introduction
If you want a process manager to manage your containers you will need to
run the Docker daemon with `-r=false` so that Docker will not
automatically restart your containers when the host is restarted.
When you have finished setting up your image and are happy with your
running container, you can then attach a process manager to manage it.
When you run `docker start -a`, Docker will automatically attach to the
running container, or start it if needed, and forward all signals so
that the process manager can detect when a container stops and
correctly restart it.
Here are a few sample scripts for systemd and upstart to integrate with
Docker.
## Sample Upstart Script
In this example we've already created a container to run Redis with
`--name redis_server`. To create an upstart script for our container, we
create a file named `/etc/init/redis.conf` and place the following into
it:
description "Redis container"

## Introduction
Docker uses Linux bridge capabilities to provide network connectivity to
containers. The `docker0` bridge interface is managed by Docker for this
purpose. When the Docker daemon starts it:
- Creates the `docker0` bridge if not present
- Searches for an IP address range which doesn't overlap with an existing route
- Picks an IP in the selected range
- Assigns this IP to the `docker0` bridge
<!-- -->
docker0 8000.fef213db5a66 no vethQCDY1N
Above, `docker0` acts as a bridge for the `vethQCDY1N` interface which
is dedicated to the `52f811c5d3d6` container.
## How to use a specific IP address range
Docker will try hard to find an IP range that is not used by the host.
Even though it works for most cases, it's not bullet-proof and sometimes
you need to have more control over the IP addressing scheme.
For this purpose, Docker allows you to manage the `docker0` bridge or
your own one using the `-b=<bridgename>` parameter.
In this scenario:
- Ensure Docker is stopped
- Create your own bridge (`bridge0` for example)
- Assign a specific IP to this bridge
- Start Docker with the `-b=bridge0` parameter
<!-- -->
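Taken together, the steps above might look like this on the Docker host. The bridge name and address range are illustrative, and the single-dash `-b` daemon flag matches the Docker version this page documents:

```shell
# Ensure Docker is stopped
sudo service docker stop

# Create your own bridge and assign a specific IP to it
sudo brctl addbr bridge0
sudo ip addr add 192.168.227.1/24 dev bridge0
sudo ip link set dev bridge0 up

# Start Docker with the -b=bridge0 parameter
sudo docker -d -b=bridge0 &
```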
## Container intercommunication
The value of the Docker daemon's `icc` parameter determines whether
containers can communicate with each other over the bridge network.
- The default, `-icc=true` allows containers to communicate with each other.
- `-icc=false` means containers are isolated from each other.
Docker uses `iptables` under the hood to either accept or drop
communication between containers.
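As a sketch (using the daemon flag syntax from this page's era), disabling inter-container communication and then inspecting the resulting firewall rules might look like:

```shell
# Start the daemon with inter-container communication disabled
sudo docker -d -icc=false &

# Docker manages rules on the FORWARD chain; list them to see
# whether traffic between containers is accepted or dropped
sudo iptables -n -L FORWARD
```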
## What is the vethXXXX device?
Well. Things get complicated here.
The `vethXXXX` interface is the host side of a point-to-point link
between the host and the corresponding container; the other side of the
link is the container's `eth0` interface. This pair (host `vethXXX` and
container `eth0`) are connected like a tube. Everything that comes in
one side will come out the other side.
All the plumbing is delegated to Linux network capabilities (check the
`ip link` command) and the namespaces infrastructure.
## I want more

## Auto map all exposed ports on the host
To bind all the exposed container ports to the host automatically, use
`docker run -P <imageid>`. The mapped host ports will be auto-selected
from a pool of unused ports (49000..49900), and you will need to use
`docker ps`, `docker inspect <container_id>` or `docker port
<container_id> <port>` to determine what they are.
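For instance (the image name, container name, and exposed port below are illustrative):

```shell
# Map all exposed ports of the image to auto-selected host ports
$ docker run -d -P --name webapp <imageid>

# Discover which host port was picked, e.g. for exposed port 5000
$ docker port webapp 5000

# Or list all mappings
$ docker ps
```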
## Binding a port to a host interface
# Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
$ docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
The command `docker port` lists the interface and port on the host
machine bound to a given container port. It is useful when using
dynamically allocated ports:
# Bind to a dynamically allocated port
$ docker run -p 127.0.0.1::8080 --name dyn-bound <image> <cmd>
## Linking a container
Communication between two containers can also be established in a
Docker-specific way called linking.
To briefly present the concept of linking, let us consider two
containers: `server`, containing the service, and `client`, accessing
the service. Once `server` is running, `client` is started and links to
server. Linking sets environment variables in `client` giving it some
information about `server`. In this sense, linking is a method of
service discovery.
Let us now get back to our topic of interest; communication between the
two containers. We mentioned that the tricky part about this
communication was that the IP address of `server` was not fixed.
Therefore, some of the environment variables are going to be used to
inform `client` about this IP address. This process, called exposure, is
possible because the `client` is started after the `server` has been started.
Here is a full example. On `server`, the port of interest is exposed.
The exposure is done either through the `--expose` parameter to the
`docker run` command, or the `EXPOSE` build command in a `Dockerfile`:
# Expose port 80
$ docker run --expose 80 --name server <image> <cmd>
The `client` then links to the `server`:
# Link
$ docker run --name client --link server:linked-server <image> <cmd>
Here `client` locally refers to `server` as `linked-server`. The following
environment variables, among others, are available on `client`:
# The default protocol, ip, and port of the service running in the container
$ LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8
$ LINKED-SERVER_PORT_80_TCP_PORT=80
This tells `client` that a service is running on port 80 of `server` and
that `server` is accessible at the IP address `172.17.0.8`:
> **Note:**
> Using the `-p` parameter also exposes the port.

## Requirements
To use this guide you'll need a working installation of Puppet from
[Puppet Labs](https://puppetlabs.com).
The module also currently uses the official PPA so only works with
Ubuntu.
$ puppet module install garethr/docker
It can also be found on
[GitHub](https://github.com/garethr/garethr-docker) if you would rather
download the source.
## Usage
Run also contains a number of optional parameters:
dns => ['8.8.8.8', '8.8.4.4'],
}
Note that ports, env, dns and volumes can be set with either a single
string or as above with an array of values.
> *Note:*
> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single
> string or as above with an array of values.

## Introduction
From version 0.6.5 you are now able to `name` a container and `link` it
to another container by referring to its name. This will create a parent
-> child relationship where the parent container can see selected
information about its child.
## Container Naming
New in version v0.6.5.
You can now name your container by using the `--name` flag. If no name
is provided, Docker will automatically generate a name. You can see this
name using the `docker ps` command.
# format is "sudo docker run --name <container_name> <image_name> <command>"
$ sudo docker run --name test ubuntu /bin/bash
## Links: service discovery for Docker
New in version v0.6.5.
other by using the flag `-link name:alias`. Inter-container
communication can be disabled with the daemon flag `-icc=false`. With
this flag set to `false`, Container A cannot access Container B unless
explicitly allowed via a link. This is a huge win for securing your
containers. When two containers are linked together Docker creates a
parent-child relationship between the containers. The parent container
will be able to access information via environment variables of the
child such as name, exposed ports, IP and other selected environment
variables.
When linking two containers Docker will use the exposed ports of the
container to create a secure tunnel for the parent to access. If a
database container only exposes port 8080 then the linked container will
only be allowed to access port 8080 and nothing else if inter-container
communication is set to false.
For example, there is an image called `crosbymichael/redis` that exposes
the port 6379 and starts the Redis server. Let's name the container as
`redis` based on that image and run it as a daemon.
$ sudo docker run -d --name redis crosbymichael/redis
We can issue all the commands that you would expect using the name
`redis`; start, stop, attach, using the name for our container. The name
also allows us to link other containers into this one.
Next, we can start a new web application that has a dependency on Redis
and apply a link to connect both containers. If you noticed when running
our Redis server we did not use the `-p` flag to publish the Redis port
to the host system. Redis exposed port 6379 and this is all we need to
establish a link.
$ sudo docker run -t -i --link redis:db --name webapp ubuntu bash
When you specify `--link redis:db` you are telling Docker to link the
container named `redis` into this new container with the alias `db`.
Environment variables are prefixed with the alias so that the parent
container can access network and environment information from the
containers that are linked into it.
If we inspect the environment variables of the second container, we
would see all the information about the child container.
$ root@4c01db0b339c:/# env
_=/usr/bin/env
root@4c01db0b339c:/#
Accessing the network information along with the environment of the
child container allows us to easily connect to the Redis service on the
specific IP and port in the environment.
> **Note**:
> These Environment variables are only set for the first process in the
> container. Similarly, some daemons (such as `sshd`)
> will scrub them when spawning shells for connection.
You can work around this by storing the initial `env` in a file, or
looking at `/proc/1/environ`.
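For example, `/proc/1/environ` stores the environment as NUL-separated `KEY=value` pairs, so you can make it readable by translating NULs to newlines. Inside the container you would run `tr '\0' '\n' < /proc/1/environ`; the sketch below applies the same translation to a sample string (the variable values are illustrative):

```shell
# Simulate the NUL-separated contents of /proc/1/environ and
# print one variable per line
printf 'DB_PORT=tcp://172.17.0.5:6379\0DB_NAME=/webapp/db\0' |
    tr '\0' '\n'
```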
Running `docker ps` shows the 2 containers, and the `webapp/db` alias
name for the Redis container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
## Resolving Links by Name
> *Note:* New in version v0.11.
Linked containers can be accessed by hostname. Hostnames are mapped by
appending entries to `/etc/hosts` using the linked container's alias.
For example, linking a container using `--link redis:db` will generate
the following `/etc/hosts` file:
root@6541a75d44a0:/# cat /etc/hosts
172.17.0.3 6541a75d44a0

A *data volume* is a specially-designated directory within one or more
containers that bypasses the [*Union File
System*](/terms/layer/#ufs-def) to provide several useful features for
persistent or shared data:
- **Data volumes can be shared and reused between containers:**
This is the feature that makes data volumes so powerful. You can
Each container can have zero or more data volumes.
New in version v0.3.0.
## Getting Started
Using data volumes is as simple as adding a `-v` parameter to the
`docker run` command. The `-v` parameter can be used more than once in
order to create more volumes within the new container. To create a new
container with two new volumes:
$ docker run -v /var/volume1 -v /var/volume2 busybox true
This command will create the new container with two new volumes that
exits instantly (`true` is pretty much the smallest, simplest program
that you can run). You can then mount its volumes in any other container
using the `run` `--volumes-from` option; irrespective of whether the
volume container is running or not.
Or, you can use the `VOLUME` instruction in a `Dockerfile` to add one or
more new volumes to any container created from that image:
# BUILD-USING: $ docker build -t data .
If you have some persistent data that you want to share between
containers, or want to use from non-persistent containers, it's best to
create a named Data Volume Container, and then to mount the data from
it.
Create a named container with volumes to share (`/var/volume1` and
`/var/volume2`):
$ docker run -v /var/volume1 -v /var/volume2 -name DATA busybox true
Then mount those data volumes into your application containers:
$ docker run -t -i -rm -volumes-from DATA -name client1 ubuntu bash
You can use multiple `-volumes-from` parameters to bring together
multiple data volumes from multiple containers.
Interestingly, you can mount the volumes that came from the `DATA`
container in yet another container via the `client1` middleman
container:
$ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
You must specify an absolute path for `host-dir`. If `host-dir` is
missing from the command, then Docker creates a new volume. If
`host-dir` is present but points to a non-existent directory on the
host, Docker will automatically create this directory and use it as the
source of the bind-mount.
Note that this is not available from a Dockerfile due the portability and
sharing purpose of it. The `host-dir` volumes are entirely host-dependent
and might not work on any other machine.
Note that this is not available from a `Dockerfile`, due to its
portability and sharing purpose. The `host-dir` volumes are entirely
host-dependent and might not work on any other machine.
For example:
@@ -110,26 +107,28 @@ For example:
# Example:
$ sudo docker run -i -t -v /var/log:/logs_from_host:ro ubuntu bash
The command above mounts the host directory `/var/log` into the container
with *read only* permissions as `/logs_from_host`.
The command above mounts the host directory `/var/log` into the
container with *read only* permissions as `/logs_from_host`.
New in version v0.5.0.
### Note for OS/X users and remote daemon users:
OS/X users run `boot2docker` to create a minimalist virtual machine running
the docker daemon. That virtual machine then launches docker commands on
behalf of the OS/X command line. The means that `host directories` refer to
directories in the `boot2docker` virtual machine, not the OS/X filesystem.
OS/X users run `boot2docker` to create a minimalist virtual machine
running the docker daemon. That virtual machine then launches docker
commands on behalf of the OS/X command line. This means that `host
directories` refer to directories in the `boot2docker` virtual machine,
not the OS/X filesystem.
Similarly, whenever the docker daemon is on a remote machine, the
`host directories` always refer to directories on the daemon's machine.
### Backup, restore, or migrate data volumes
You cannot back up volumes using `docker export`, `docker save` and `docker cp`
because they are external to images. Instead you can use `--volumes-from` to
start a new container that can access the data-container's volume. For example:
You cannot back up volumes using `docker export`, `docker save` and
`docker cp` because they are external to images. Instead you can use
`--volumes-from` to start a new container that can access the
data-container's volume. For example:
$ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
@@ -144,7 +143,8 @@ start a new container that can access the data-container's volume. For example:
- `tar cvf /backup/backup.tar /data`:
creates an uncompressed tar file of all the files in the `/data` directory
Then to restore to the same container, or another that you`ve made elsewhere:
Then to restore to the same container, or another that you've made
elsewhere:
# create a new data container
$ sudo docker run -v /data -name DATA2 busybox true
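The tar round trip at the heart of this backup/restore flow can be
sketched outside Docker; the paths and file contents below are
hypothetical, chosen only to illustrate the archive and unpack steps:

```shell
# Local sketch of the tar round trip (hypothetical paths, no Docker needed)
mkdir -p /tmp/voldemo/data /tmp/voldemo/restore
echo "app state" > /tmp/voldemo/data/state.txt
# back up: archive the data directory into backup.tar
tar cf /tmp/voldemo/backup.tar -C /tmp/voldemo data
# restore: unpack the archive into a fresh location
tar xf /tmp/voldemo/backup.tar -C /tmp/voldemo/restore
cat /tmp/voldemo/restore/data/state.txt
```

The `-C` flag plays the same role as the `/backup` bind mount above: it
controls where tar reads from and writes to, so the archive can be
created in one place and unpacked in another.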
@@ -18,15 +18,14 @@ You can find one or more repositories hosted on a *registry*. There are
two types of *registry*: public and private. There's also a default
*registry* that Docker uses which is called
[Docker.io](http://index.docker.io).
[Docker.io](http://index.docker.io) is the home of
"top-level" repositories and public "user" repositories. The Docker
project provides [Docker.io](http://index.docker.io) to host public and
[private repositories](https://index.docker.io/plans/), namespaced by
user. We provide user authentication and search over all the public
repositories.
[Docker.io](http://index.docker.io) is the home of "top-level"
repositories and public "user" repositories. The Docker project
provides [Docker.io](http://index.docker.io) to host public and [private
repositories](https://index.docker.io/plans/), namespaced by user. We
provide user authentication and search over all the public repositories.
Docker acts as a client for these services via the `docker search`, `pull`,
`login` and `push` commands.
Docker acts as a client for these services via the `docker search`,
`pull`, `login` and `push` commands.
## Repositories
@@ -42,8 +41,8 @@ There are two types of public repositories: *top-level* repositories
which are controlled by the Docker team, and *user* repositories created
by individual contributors. Anyone can read from these repositories;
they really help people get started quickly! You could also use
[*Trusted Builds*](#trusted-builds) if you need to keep
control of who accesses your images.
[*Trusted Builds*](#trusted-builds) if you need to keep control of who
accesses your images.
- Top-level repositories can easily be recognized by **not** having a
`/` (slash) in their name. These repositories represent trusted images
@@ -83,12 +82,11 @@ user name or description:
...
There you can see two example results: `centos` and
`slantview/centos-chef-solo`. The second result
shows that it comes from the public repository of a user,
`slantview/`, while the first result
(`centos`) doesn't explicitly list a repository so
it comes from the trusted top-level namespace. The `/`
character separates a user's repository and the image name.
`slantview/centos-chef-solo`. The second result shows that it comes from
the public repository of a user, `slantview/`, while the first result
(`centos`) doesn't explicitly list a repository so it comes from the
trusted top-level namespace. The `/` character separates a user's
repository and the image name.
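The split on the first `/` can be sketched in shell; the repository
name below is taken from the search output above:

```shell
# Sketch: separate the user namespace from the image name on the first "/"
image='slantview/centos-chef-solo'
user="${image%%/*}"   # everything before the first "/"
name="${image#*/}"    # everything after the first "/"
echo "$user $name"
```

A name with no `/`, like `centos`, has no user part and therefore comes
from the trusted top-level namespace.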
Once you have found the image name, you can download it:
@@ -98,8 +96,8 @@ Once you have found the image name, you can download it:
539c0211cd76: Download complete
What can you do with that image? Check out the
[*Examples*](/examples/#example-list) and, when you're ready with
your own image, come back here to learn how to share it.
[*Examples*](/examples/#example-list) and, when you're ready with your
own image, come back here to learn how to share it.
## Contributing to Docker.io
@@ -114,10 +112,9 @@ first. You can create your username and login on
This will prompt you for a username, which will become a public
namespace for your public repositories.
If your username is available then `docker` will
also prompt you to enter a password and your e-mail address. It will
then automatically log you in. Now you're ready to commit and push your
own images!
If your username is available then `docker` will also prompt you to
enter a password and your e-mail address. It will then automatically log
you in. Now you're ready to commit and push your own images!
> **Note:**
> Your authentication credentials will be stored in the [`.dockercfg`
@@ -150,9 +147,9 @@ or tag.
## Trusted Builds
Trusted Builds automate the building and updating of images from GitHub,
directly on `docker.io` servers. It works by adding
a commit hook to your selected repository, triggering a build and update
when you push a commit.
directly on Docker.io. It works by adding a commit hook to
your selected repository, triggering a build and update when you push a
commit.
### To setup a trusted build
@@ -206,9 +203,10 @@ identify a host), like this:
Once a repository has your registry's host name as part of the tag, you
can push and pull it like any other repository, but it will **not** be
searchable (or indexed at all) on [Docker.io](http://index.docker.io), and there will be
no user name checking performed. Your registry will function completely
independently from the [Docker.io](http://index.docker.io) registry.
searchable (or indexed at all) on [Docker.io](http://index.docker.io),
and there will be no user name checking performed. Your registry will
function completely independently from the
[Docker.io](http://index.docker.io) registry.
<iframe width="640" height="360" src="//www.youtube.com/embed/CAewZCBT4PI?rel=0" frameborder="0" allowfullscreen></iframe>
@@ -219,15 +217,20 @@ http://blog.docker.io/2013/07/how-to-use-your-own-registry/)
## Authentication File
The authentication is stored in a json file, `.dockercfg`
located in your home directory. It supports multiple registry
urls.
The authentication is stored in a JSON file, `.dockercfg`, located in
your home directory. It supports multiple registry URLs.
`docker login` will create the "[https://index.docker.io/v1/](
https://index.docker.io/v1/)" key.
The `docker login` command will create the:
`docker login https://my-registry.com` will create the
"[https://my-registry.com](https://my-registry.com)" key.
[https://index.docker.io/v1/](https://index.docker.io/v1/)
key.
The `docker login https://my-registry.com` command will create the:
[https://my-registry.com](https://my-registry.com)
key.
For example:
@@ -243,4 +246,6 @@ For example:
}
The `auth` field represents
`base64(<username>:<password>)`
base64(<username>:<password>)
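The `auth` value can be reproduced with standard tools; the username
and password below are hypothetical:

```shell
# Sketch: compute the `auth` field value for user "janedoe", password "s3cret"
# printf avoids the trailing newline that echo would add before encoding
printf '%s' 'janedoe:s3cret' | base64
```

Decoding the stored value with `base64 -d` (or `-D` on OS/X) recovers
the `<username>:<password>` pair, which is why `.dockercfg` should be
kept private.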