Fixes some docs issues with using single-dash arguments where they should be double

I found a bunch of issues where we have "-<opt>" instead of "--<opt>".
Also a couple of other issues, like "-notrunc", which is now "--no-trunc".
Fixes #5963

Docker-DCO-1.1-Signed-off-by: Brian Goff <cpuguy83@gmail.com> (github: cpuguy83)
Brian Goff 2014-05-21 09:35:22 -04:00
parent a94a87778c
commit 6d9e64b27b
8 changed files with 29 additions and 29 deletions

View File

@ -164,7 +164,7 @@ and foreground Docker containers.
Docker container. This is because by default a container is not allowed to
access any devices. A “privileged” container is given access to all devices.
When the operator executes **docker run -privileged**, Docker will enable access
When the operator executes **docker run --privileged**, Docker will enable access
to all devices on the host as well as set some configuration in AppArmor to
allow the container nearly all the same access to the host as processes running
outside of a container on the host.
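As a quick sketch of the difference (illustrative, not taken from this page; it assumes a local fedora image): without **--privileged** most mount and device operations inside the container are denied, while with it they succeed:

$ sudo docker run -t -i --rm fedora bash
# mount -t tmpfs none /mnt        (fails: permission denied)
$ sudo docker run -t -i --rm --privileged fedora bash
# mount -t tmpfs none /mnt        (succeeds)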
@ -317,7 +317,7 @@ fedora-data image:
# docker run --name=data -v /var/volume1 -v /tmp/volume2 -i -t fedora-data true
# docker run --volumes-from=data --name=fedora-container1 -i -t fedora bash
Multiple -volumes-from parameters will bring together multiple data volumes from
Multiple --volumes-from parameters will bring together multiple data volumes from
multiple containers. And it's possible to mount the volumes that came from the
DATA container in yet another container via the fedora-container1 intermediary
container, allowing you to abstract the actual data source from users of that data:
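For example (this is the same command the man page version of this passage uses):

# docker run --volumes-from=fedora-container1 --name=fedora-container2 -i -t fedora bash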

View File

@ -245,7 +245,7 @@ docker run --volumes-from=data --name=fedora-container1 -i -t fedora bash
.RE
.sp
.TP
Multiple -volumes-from parameters will bring together multiple data volumes from multiple containers. And it's possible to mount the volumes that came from the DATA container in yet another container via the fedora-container1 intermediary container, allowing you to abstract the actual data source from users of that data:
Multiple --volumes-from parameters will bring together multiple data volumes from multiple containers. And it's possible to mount the volumes that came from the DATA container in yet another container via the fedora-container1 intermediary container, allowing you to abstract the actual data source from users of that data:
.sp
.RS
docker run --volumes-from=fedora-container1 --name=fedora-container2 -i -t fedora bash

View File

@ -50,7 +50,7 @@ For Docker containers using cgroups, the container name will be the full
ID or long ID of the container. If a container shows up as ae836c95b4c3
in `docker ps`, its long ID might be something like
`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can
look it up with `docker inspect` or `docker ps -notrunc`.
look it up with `docker inspect` or `docker ps --no-trunc`.
Putting everything together, to see the memory metrics for a Docker
container, take a look at `/sys/fs/cgroup/memory/lxc/<longid>/`.
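Concretely, a short sketch (assuming the lxc cgroup layout described here): list the long IDs with `docker ps --no-trunc`, then point `ls` at the matching cgroup directory:

$ sudo docker ps --no-trunc
$ ls /sys/fs/cgroup/memory/lxc/ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79/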

View File

@ -84,7 +84,7 @@ Build an image from the Dockerfile and assign it a name.
And run the PostgreSQL server container (in the foreground):
$ sudo docker run -rm -P -name pg_test eg_postgresql
$ sudo docker run --rm -P --name pg_test eg_postgresql
There are 2 ways to connect to the PostgreSQL server. We can use [*Link
Containers*](/use/working_with_links_names/#working-with-links-names),
@ -101,7 +101,7 @@ Containers can be linked to another container's ports directly using
`docker run`. This will set a number of environment
variables that can then be used to connect:
$ sudo docker run -rm -t -i -link pg_test:pg eg_postgresql bash
$ sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash
postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password
@ -143,7 +143,7 @@ prompt, you can create a table and populate it.
You can use the defined volumes to inspect the PostgreSQL log files and
to backup your configuration and data:
$ docker run -rm --volumes-from pg_test -t -i busybox sh
$ docker run --rm --volumes-from pg_test -t -i busybox sh
/ # ls
bin etc lib linuxrc mnt proc run sys usr
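For instance, a backup of the data volume might look like this (the /var/lib/postgresql path is an assumption; check the actual volumes with `docker inspect pg_test`):

$ docker run --rm --volumes-from pg_test -v $(pwd):/backup busybox tar cvf /backup/pg_backup.tar /var/lib/postgresql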

View File

@ -51,7 +51,7 @@ the `$URL` variable. The container is given a name
While this example is simple, you could run any number of interactive
commands, try things out, and then exit when you're done.
$ sudo docker run -i -t -name pybuilder_run shykes/pybuilder bash
$ sudo docker run -i -t --name pybuilder_run shykes/pybuilder bash
$$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
$$ /usr/local/bin/buildapp $URL

View File

@ -34,23 +34,23 @@ controlled entirely from the `docker run` parameters.
Start actual Redis server on one Docker host
big-server $ docker run -d -name redis crosbymichael/redis
big-server $ docker run -d --name redis crosbymichael/redis
Then add an ambassador linked to the Redis server, mapping a port to the
outside world
big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador
big-server $ docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador
On the other host, you can set up another ambassador, setting environment
variables for each remote port you want to proxy to the `big-server`:
client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
client-server $ docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
Then on the `client-server` host, you can use a Redis client container
to talk to the remote Redis server, just by linking to the local Redis
ambassador.
client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
client-server $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG
@ -62,19 +62,19 @@ does automatically (with a tiny amount of `sed`)
On the Docker host (192.168.1.52) that Redis will run on:
# start actual redis server
$ docker run -d -name redis crosbymichael/redis
$ docker run -d --name redis crosbymichael/redis
# get a redis-cli container for connection testing
$ docker pull relateiq/redis-cli
# test the redis server by talking to it directly
$ docker run -t -i -rm -link redis:redis relateiq/redis-cli
$ docker run -t -i --rm --link redis:redis relateiq/redis-cli
redis 172.17.0.136:6379> ping
PONG
^D
# add redis ambassador
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh
In the `redis_ambassador` container, you can see the linked Redis
containers `env`:
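The output looks roughly like this (illustrative values, assuming the Redis container's address is 172.17.0.136 as in the test above):

REDIS_NAME=/redis_ambassador/redis
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_PORT_6379_TCP_PORT=6379
REDIS_PORT_6379_TCP_PROTO=tcp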
@ -98,7 +98,7 @@ to the world (via the `-p 6379:6379` port mapping):
$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 docker-ut sh
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
@ -107,14 +107,14 @@ Now ping the Redis server via the ambassador:
Now go to a different server:
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -expose 6379 -name redis_ambassador docker-ut sh
$ docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
And get the `redis-cli` image so we can talk over the ambassador bridge.
$ docker pull relateiq/redis-cli
$ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
$ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG
@ -139,9 +139,9 @@ case `192.168.1.52:6379`.
# docker build -t SvenDowideit/ambassador .
# docker tag SvenDowideit/ambassador ambassador
# then to run it (on the host that has the real backend on it)
# docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 ambassador
# docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador
# on the remote host, you can set up another ambassador
# docker run -t -i -name redis_ambassador -expose 6379 sh
# docker run -t -i --name redis_ambassador --expose 6379 sh
FROM docker-ut
MAINTAINER SvenDowideit@home.org.au

View File

@ -47,7 +47,7 @@ Or, you can use the `VOLUME` instruction in a `Dockerfile` to add one or
more new volumes to any container created from that image:
# BUILD-USING: $ docker build -t data .
# RUN-USING: $ docker run -name DATA data
# RUN-USING: $ docker run --name DATA data
FROM busybox
VOLUME ["/var/volume1", "/var/volume2"]
CMD ["/bin/true"]
@ -62,11 +62,11 @@ it.
Create a named container with volumes to share (`/var/volume1` and
`/var/volume2`):
$ docker run -v /var/volume1 -v /var/volume2 -name DATA busybox true
$ docker run -v /var/volume1 -v /var/volume2 --name DATA busybox true
Then mount those data volumes into your application containers:
$ docker run -t -i -rm -volumes-from DATA -name client1 ubuntu bash
$ docker run -t -i --rm --volumes-from DATA --name client1 ubuntu bash
You can use multiple `--volumes-from` parameters to bring together
multiple data volumes from multiple containers.
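For example (a sketch; DATA2 here stands for a second data container, like the one created in the backup section below):

$ docker run -t -i --rm --volumes-from DATA --volumes-from DATA2 --name client3 ubuntu bash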
@ -75,7 +75,7 @@ Interestingly, you can mount the volumes that came from the `DATA`
container in yet another container via the `client1` middleman
container:
$ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash
$ docker run -t -i --rm --volumes-from client1 --name client2 ubuntu bash
This allows you to abstract the actual data source from users of that
data, similar to [*Ambassador Pattern Linking*](
@ -130,7 +130,7 @@ You cannot back up volumes using `docker export`, `docker save` and
`--volumes-from` to start a new container that can access the
data-container's volume. For example:
$ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
$ sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
- `--rm`:
remove the container when it exits
@ -147,13 +147,13 @@ Then to restore to the same container, or another that you've made
elsewhere:
# create a new data container
$ sudo docker run -v /data -name DATA2 busybox true
$ sudo docker run -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run -rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run -rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
You can use the basic techniques above to automate backup, migration and

View File

@ -73,7 +73,7 @@ user name or description:
Search the docker index for images
-notrunc=false: Don't truncate output
--no-trunc=false: Don't truncate output
$ sudo docker search centos
Found 25 results matching your query ("centos")
NAME DESCRIPTION