
Merge pull request #5287 from jamtur01/tickets/5283

Fixed #5283 - literal leftover from cutover
Sven Dowideit 2014-04-18 09:04:16 +10:00
commit fb2110b465
20 changed files with 88 additions and 128 deletions

View file

@ -49,13 +49,11 @@ For each container, one cgroup will be created in each hierarchy. On
older systems with older versions of the LXC userland tools, the name of
the cgroup will be the name of the container. With more recent versions
of the LXC tools, the cgroup will be `lxc/<container_name>.`
.literal}
For Docker containers using cgroups, the container name will be the full
ID or long ID of the container. If a container shows up as ae836c95b4c3
in `docker ps`, its long ID might be something like
`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`
.literal}. You can look it up with `docker inspect`
`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can look it up with `docker inspect`
or `docker ps -notrunc`.
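As a hedged illustration (the cgroup mount point below is an assumption and varies by distribution), the long ID can be resolved and then used to read the container's memory pseudo-files directly:
# find the long ID of the container shown as ae836c95b4c3
sudo docker ps -notrunc | grep ae836c95b4c3
# read its memory statistics; adjust the path to match your cgroup hierarchy
cat /sys/fs/cgroup/memory/lxc/ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79/memory.stat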
Putting everything together to look at the memory metrics for a Docker
@ -206,14 +204,13 @@ Now that we've covered memory metrics, everything else will look very
simple in comparison. CPU metrics will be found in the
`cpuacct` controller.
For each container, you will find a pseudo-file `cpuacct.stat`
.literal}, containing the CPU usage accumulated by the processes of the
container, broken down between `user` and
`system` time. If you're not familiar with the
distinction, `user` is the time during which the
processes were in direct control of the CPU (i.e. executing process
code), and `system` is the time during which the CPU
was executing system calls on behalf of those processes.
For each container, you will find a pseudo-file `cpuacct.stat`,
containing the CPU usage accumulated by the processes of the container,
broken down between `user` and `system` time. If you're not familiar
with the distinction, `user` is the time during which the processes were
in direct control of the CPU (i.e. executing process code), and `system`
is the time during which the CPU was executing system calls on behalf of
those processes.
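As a rough sketch (the path and the numbers are illustrative, and the cgroup mount point depends on your distribution):
$ cat /sys/fs/cgroup/cpuacct/lxc/<full_container_id>/cpuacct.stat
user 2908
system 1507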
Those times are expressed in ticks of 1/100th of a second. Actually,
they are expressed in "user jiffies". There are `USER_HZ`
@ -407,11 +404,10 @@ Sometimes, you do not care about real time metric collection, but when a
container exits, you want to know how much CPU, memory, etc. it has
used.
Docker makes this difficult because it relies on `lxc-start`
.literal}, which carefully cleans up after itself, but it is still
possible. It is usually easier to collect metrics at regular intervals
(e.g. every minute, with the collectd LXC plugin) and rely on that
instead.
Docker makes this difficult because it relies on `lxc-start`, which
carefully cleans up after itself, but it is still possible. It is
usually easier to collect metrics at regular intervals (e.g. every
minute, with the collectd LXC plugin) and rely on that instead.
But, if you'd still like to gather the stats when a container stops,
here is how:

View file

@ -71,8 +71,7 @@ a local version of a common base:
# docker build -t my_ubuntu .
**Option 2** is good for testing, but will break other HTTP clients
which obey `http_proxy`, such as `curl`
.literal}, `wget` and others:
which obey `http_proxy`, such as `curl`, `wget` and others:
$ sudo docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash

View file

@ -25,9 +25,8 @@ volume.
## Add data to the first database
We're assuming your Docker host is reachable at `localhost`
.literal}. If not, replace `localhost` with the
public IP of your Docker host.
We're assuming your Docker host is reachable at `localhost`. If not,
replace `localhost` with the public IP of your Docker host.
HOST=localhost
URL="http://$HOST:$(sudo docker port $COUCH1 5984 | grep -Po '\d+$')/_utils/"
@ -35,8 +34,7 @@ public IP of your Docker host.
## Create second database
This time, we're requesting shared access to `$COUCH1`
.literal}s volumes.
This time, we're requesting shared access to `$COUCH1`'s volumes.
COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03)

View file

@ -64,10 +64,8 @@ commands, try things out, and then exit when you're done.
Save the changes we just made in the container to a new image called
`/builds/github.com/shykes/helloflask/master`. You
now have 3 different ways to refer to the container: name
`pybuilder_run`, short-id `c8b2e8228f11`
.literal}, or long-id
`c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9`
.literal}.
`pybuilder_run`, short-id `c8b2e8228f11`, or long-id
`c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9`.
$ sudo docker commit pybuilder_run /builds/github.com/shykes/helloflask/master
c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9

View file

@ -101,8 +101,7 @@ are started:
## Create a `supervisord` configuration file
Create an empty file called `supervisord.conf`. Make
sure it's at the same directory level as your `Dockerfile`
.literal}:
sure it's at the same directory level as your `Dockerfile`:
touch supervisord.conf
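As a hedged sketch of what the file might eventually contain (the supervised programs here, sshd and apache2, are only assumptions for illustration):
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
EOF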

View file

@ -86,7 +86,7 @@ page_keywords: faq, questions, documentation, docker
> full traceability from the production server all the way back
> to the upstream developer. Docker also implements incremental
> uploads and downloads, similar to `git pull`
> .literal}, so new versions of a container can be transferred
> , so new versions of a container can be transferred
> by only sending diffs.
>
> - *Component re-use.*

View file

@ -77,7 +77,6 @@ Repository.
left side
- Search for 2014.03 and select one of the Amazon provided AMI,
for example `amzn-ami-pv-2014.03.rc-0.x86_64-ebs`
.literal}
- For testing you can use the default (possibly free)
`t1.micro` instance (more info on
[pricing](http://aws.amazon.com/en/ec2/pricing/)).

View file

@ -26,12 +26,11 @@ bit** architecture.
The `docker-io` package provides Docker on Fedora.
If you have the (unrelated) `docker` package
installed already, it will conflict with `docker-io`
.literal}. There's a [bug
If you have the (unrelated) `docker` package installed already, it will
conflict with `docker-io`. There's a [bug
report](https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for
it. To proceed with `docker-io` installation on
Fedora 19, please remove `docker` first.
it. To proceed with `docker-io` installation on Fedora 19, please remove
`docker` first.
sudo yum -y remove docker
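With the conflicting package removed, the installation and startup would then look roughly like this (assuming the systemd unit shipped with `docker-io`):
sudo yum -y install docker-io
sudo systemctl start docker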

View file

@ -35,8 +35,8 @@ you will need to set the kernel manually.
# install the new kernel
apt-get install linux-generic-lts-raring
Great, now you have the kernel installed in `/boot/`
.literal}, next you need to make it boot next time.
Great, now you have the kernel installed in `/boot/`, next you need to
make it boot next time.
# find the exact names
find /boot/ -name '*3.8*'

View file

@ -172,8 +172,8 @@ than `docker` should own the Unix socket with the
Warning
The *docker* group (or the group specified with `-G`
.literal}) is root-equivalent; see [*Docker Daemon Attack
The *docker* group (or the group specified with `-G`) is
root-equivalent; see [*Docker Daemon Attack
Surface*](../../articles/security/#dockersecurity-daemon) details.
**Example:**

View file

@ -72,10 +72,8 @@ following a link in your application to an OAuth Authorization endpoint.
included, it must be one of the URIs which were submitted when
registering your application.
- **scope** The extent of access permissions you are requesting.
Currently, the scope options are `profile_read`
.literal}, `profile_write`,
`email_read`, and `email_write`
.literal}. Scopes must be separated by a space. If omitted, the
Currently, the scope options are `profile_read`, `profile_write`,
`email_read`, and `email_write`. Scopes must be separated by a space. If omitted, the
default scopes `profile_read email_read` are
used.
- **state** (Recommended) Used by your application to maintain
@ -140,7 +138,6 @@ to get an Access Token.
 
- **grant\_type** MUST be set to `authorization_code`
.literal}
- **code** The authorization code received from the user's
redirect request.
- **redirect\_uri** The same `redirect_uri`
@ -204,7 +201,6 @@ if the user has not revoked access from your application.
 
- **grant\_type** MUST be set to `refresh_token`
.literal}
- **refresh\_token** The `refresh_token`
which was issued to your application.
- **scope** (optional) The scope of the access token to be

View file

@ -254,9 +254,9 @@ All new files and directories are created with mode 0755, uid and gid 0.
Note
if you build using STDIN (`docker build - < somefile`
.literal}), there is no build context, so the Dockerfile can only
contain an URL based ADD statement.
if you build using STDIN (`docker build - < somefile`), there is no
build context, so the Dockerfile can only contain a URL-based ADD
statement.
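A hedged sketch of such a context-less build (the image tag and URL are placeholders):
sudo docker build -t myimage - <<'EOF'
FROM ubuntu:12.04
ADD http://example.com/app.conf /etc/app.conf
EOF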
Note
@ -335,12 +335,11 @@ that you can run as an executable. That is, when you specify an
`ENTRYPOINT`, then the whole container runs as if it
was just that executable.
The `ENTRYPOINT` instruction adds an entry command
that will **not** be overwritten when arguments are passed to
`docker run`, unlike the behavior of `CMD`
.literal}. This allows arguments to be passed to the entrypoint. i.e.
`docker run <image> -d` will pass the "-d" argument
to the ENTRYPOINT.
The `ENTRYPOINT` instruction adds an entry command that will **not** be
overwritten when arguments are passed to `docker run`, unlike the
behavior of `CMD`. This allows arguments to be passed to the entrypoint.
i.e. `docker run <image> -d` will pass the "-d" argument to the
ENTRYPOINT.
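For instance, assuming a hypothetical image whose Dockerfile declares `ENTRYPOINT ["top", "-b"]`:
# "-H" is appended to the entrypoint, so the container runs: top -b -H
sudo docker run -t -i example/topdemo -H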
You can specify parameters either in the ENTRYPOINT JSON array (as in
"like an exec" above), or by using a CMD statement. Parameters in the

View file

@ -19,8 +19,7 @@ no parameters or execute `docker help`:
Single character commandline options can be combined, so rather than
typing `docker run -t -i --name test busybox sh`,
you can write `docker run -ti --name test busybox sh`
.literal}.
you can write `docker run -ti --name test busybox sh`.
### Boolean
@ -92,11 +91,9 @@ To set the DNS server for all Docker containers, use
To set the DNS search domain for all Docker containers, use
`docker -d --dns-search example.com`.
To run the daemon with debug output, use `docker -d -D`
.literal}.
To run the daemon with debug output, use `docker -d -D`.
To use lxc as the execution driver, use `docker -d -e lxc`
.literal}.
To use lxc as the execution driver, use `docker -d -e lxc`.
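Putting several of those daemon flags together (values are illustrative only):
# debug output, lxc execution driver, and custom DNS settings for all containers
sudo docker -d -D -e lxc --dns 8.8.8.8 --dns-search example.com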
The docker client will also honor the `DOCKER_HOST`
environment variable to set the `-H` flag for the
@ -119,8 +116,7 @@ systemd in the [docker source
tree](https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/).
Docker supports softlinks for the Docker data directory
(`/var/lib/docker`) and for `/tmp`
.literal}. TMPDIR and the data directory can be set like this:
(`/var/lib/docker`) and for `/tmp`. TMPDIR and the data directory can be set like this:
TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1
# or
@ -254,8 +250,7 @@ machine and that no parsing of the `Dockerfile`
happens at the client side (where you're running
`docker build`). That means that *all* the files at
`PATH` get sent, not just the ones listed to
[*ADD*](../../builder/#dockerfile-add) in the `Dockerfile`
.literal}.
[*ADD*](../../builder/#dockerfile-add) in the `Dockerfile`.
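One hedged way to keep that transfer small is to build from a directory that contains only what the build needs (paths below are made up):
mkdir -p /tmp/build-context
cp Dockerfile app.tar.gz /tmp/build-context/
sudo docker build -t myapp /tmp/build-context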
The transfer of context from the local machine to the Docker daemon is
what the `docker` client means when you see the
@ -658,9 +653,8 @@ Restores both images and tags.
The `docker logs` command batch-retrieves all logs
present at the time of execution.
The `docker logs --follow` command combines
`docker logs` and `docker attach`
.literal}: it will first return all logs from the beginning and then
The `docker logs --follow` command combines `docker logs` and `docker
attach`: it will first return all logs from the beginning and then
continue streaming new output from the container's stdout and stderr.
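For example (the container name is illustrative):
# return everything logged so far, then keep streaming new output
sudo docker logs --follow mycontainer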
## `port`
@ -957,10 +951,8 @@ container). All three flags, `-e`, `--env`
and `--env-file` can be repeated.
Regardless of the order of these three flags, the `--env-file`
are processed first, and then `-e`
.literal}/`--env` flags. This way, the
`-e` or `--env` will override
variables as needed.
are processed first, and then `-e`, `--env` flags. This way, the
`-e` or `--env` will override variables as needed.
$ cat ./env.list
TEST_FOO=BAR

View file

@ -46,8 +46,8 @@ and nearly all the defaults set by the Docker runtime itself.
## [Operator Exclusive Options](#id4)
Only the operator (the person executing `docker run`
.literal}) can set the following options.
Only the operator (the person executing `docker run`) can set the
following options.
- [Detached vs Foreground](#detached-vs-foreground)
- [Detached (-d)](#detached-d)
@ -72,14 +72,12 @@ default foreground mode:
#### [Detached (-d)](#id3)
In detached mode (`-d=true` or just `-d`
.literal}), all I/O should be done through network connections or shared
volumes because the container is no longer listening to the commandline
where you executed `docker run`. You can reattach to
a detached container with `docker`
In detached mode (`-d=true` or just `-d`), all I/O should be done
through network connections or shared volumes because the container is
no longer listening to the commandline where you executed `docker run`.
You can reattach to a detached container with `docker`
[*attach*](../commandline/cli/#cli-attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm`
option.
container in the detached mode, then you cannot use the `--rm` option.
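A minimal sketch, using a trivial loop as a stand-in for a real long-running process:
CID=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello; sleep 1; done")
sudo docker attach $CID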
#### [Foreground](#id4)
@ -196,12 +194,12 @@ by default a container is not allowed to access any devices, but a
and documentation on [cgroups
devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
When the operator executes `docker run --privileged`
.literal}, Docker will enable to access to all devices on the host as
well as set some configuration in AppArmor to allow the container nearly
all the same access to the host as processes running outside containers
on the host. Additional information about running with
`--privileged` is available on the [Docker
When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host as well as set some configuration
in AppArmor to allow the container nearly all the same access to the
host as processes running outside containers on the host. Additional
information about running with `--privileged` is available on the
[Docker
Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/).
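For example, an operator could start a privileged shell like this (illustrative only; weigh the security implications first):
sudo docker run --privileged -t -i ubuntu /bin/bash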
If the Docker daemon was started using the `lxc`
@ -259,19 +257,17 @@ as arguments to the `ENTRYPOINT`.
--entrypoint="": Overwrite the default entrypoint set by the image
The ENTRYPOINT of an image is similar to a `COMMAND`
because it specifies what executable to run when the container starts,
but it is (purposely) more difficult to override. The
`ENTRYPOINT` gives a container its default nature or
behavior, so that when you set an `ENTRYPOINT` you
can run the container *as if it were that binary*, complete with default
options, and you can pass in more options via the `COMMAND`
.literal}. But, sometimes an operator may want to run something else
inside the container, so you can override the default
`ENTRYPOINT` at runtime by using a string to specify
the new `ENTRYPOINT`. Here is an example of how to
run a shell in a container that has been set up to automatically run
something else (like `/usr/bin/redis-server`):
The ENTRYPOINT of an image is similar to a `COMMAND` because it
specifies what executable to run when the container starts, but it is
(purposely) more difficult to override. The `ENTRYPOINT` gives a
container its default nature or behavior, so that when you set an
`ENTRYPOINT` you can run the container *as if it were that binary*,
complete with default options, and you can pass in more options via the
`COMMAND`. But, sometimes an operator may want to run something else
inside the container, so you can override the default `ENTRYPOINT` at
runtime by using a string to specify the new `ENTRYPOINT`. Here is an
example of how to run a shell in a container that has been set up to
automatically run something else (like `/usr/bin/redis-server`):
docker run -i -t --entrypoint /bin/bash example/redis
@ -330,8 +326,7 @@ port to use.
The operator can **set any environment variable** in the container by
using one or more `-e` flags, even overriding those
already defined by the developer with a Dockerfile `ENV`
.literal}:
already defined by the developer with a Dockerfile `ENV`:
$ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
declare -x HOME="/"
@ -343,8 +338,7 @@ already defined by the developer with a Dockerfile `ENV`
declare -x container="lxc"
declare -x deep="purple"
Similarly the operator can set the **hostname** with `-h`
.literal}.
Similarly the operator can set the **hostname** with `-h`.
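For instance (the hostname value is arbitrary):
$ sudo docker run --rm -h web1 ubuntu hostname
web1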
`--link name:alias` also sets environment variables,
using the *alias* string to define environment variables within the

View file

@ -23,7 +23,6 @@ an image name using one of three different commands:
A Fully Qualified Image Name (FQIN) can be made up of 3 parts:
`[registry_hostname[:port]/][user_name/](repository_name:version_tag)`
.literal}
`username` and `registry_hostname`
default to an empty string. When `registry_hostname`

View file

@ -135,8 +135,7 @@ the `-e` command line option.
`--expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379`
will forward the local `1234` port to the
remote IP and port - in this case `192.168.1.52:6379`
.literal}.
remote IP and port - in this case `192.168.1.52:6379`.
#
#

View file

@ -39,8 +39,7 @@ characters of the full image ID - which can be found using
`docker inspect` or
`docker images --no-trunc=true`
**If you're using OS X** then you shouldn't use `sudo`
.literal}
**If you're using OS X** then you shouldn't use `sudo`.
## Running an interactive shell

View file

@ -27,12 +27,11 @@ managed by Docker for this purpose. When the Docker daemon starts it :
docker0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
At runtime, a [*specific kind of virtual interface*](#what-is-the-vethxxxx-device)
is given to each container which is then bonded to the
`docker0` bridge. Each container also receives a
dedicated IP address from the same range as `docker0`
.literal}. The `docker0` IP address is used as the
default gateway for the container.
At runtime, a [*specific kind of virtual interface*](#vethxxxx-device)
is given to each container which is then bonded to the `docker0` bridge.
Each container also receives a dedicated IP address from the same range
as `docker0`. The `docker0` IP address is used as the default gateway
for the container.
# Run a container
$ sudo docker run -t -i -d base /bin/bash
@ -42,9 +41,8 @@ default gateway for the container.
bridge name bridge id STP enabled interfaces
docker0 8000.fef213db5a66 no vethQCDY1N
Above, `docker0` acts as a bridge for the
`vethQCDY1N` interface which is dedicated to the
52f811c5d3d6 container.
Above, `docker0` acts as a bridge for the `vethQCDY1N` interface which
is dedicated to the 52f811c5d3d6 container.
## How to use a specific IP address range

View file

@ -62,9 +62,8 @@ combinations of options for TCP port are the following:
# Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces of the host machine.
docker run -p 8080 <image> <cmd>
UDP ports can also be bound by adding a trailing `/udp`
.literal}. All the combinations described for TCP work. Here is only one
example:
UDP ports can also be bound by adding a trailing `/udp`. All the
combinations described for TCP work. Here is only one example:
# Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
@ -112,16 +111,14 @@ a Dockerfile:
# Expose port 80
docker run --expose 80 --name server <image> <cmd>
The `client` then links to the `server`
.literal}:
The `client` then links to the `server`:
# Link
docker run --name client --link server:linked-server <image> <cmd>
`client` locally refers to `server`
as `linked-server`. The following
environment variables, among others, are available on `client`
.literal}:
environment variables, among others, are available on `client`:
# The default protocol, ip, and port of the service running in the container
LINKED-SERVER_PORT=tcp://172.17.0.8:80

View file

@ -109,8 +109,7 @@ container. Similarly, some daemons (such as `sshd`)
will scrub them when spawning shells for connection.
You can work around this by storing the initial `env`
in a file, or looking at `/proc/1/environ`
.literal}.
in a file, or looking at `/proc/1/environ`.
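One hedged way to read it back from a shell inside the container:
# print the environment PID 1 was started with, one variable per line
tr '\0' '\n' < /proc/1/environ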
Running `docker ps` shows the 2 containers, and the
`webapp/db` alias name for the Redis container.