Remove "secrets" leftovers from docs

f5e1f6f688 replaced "secrets" with "join tokens", which also
removed the "auto-accept" policy.

This removes some remaining references to those features.

Note that there are other references, but those
are already addressed in another pull request.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Author: Sebastiaan van Stijn
Date:   2016-07-22 13:13:30 +02:00
Parent: 771cf83807
Commit: 987511712f

4 changed files with 26 additions and 28 deletions


@@ -3618,11 +3618,6 @@ JSON Parameters:
   the networking interface used for the VXLAN Tunnel Endpoint (VTEP).
 - **ForceNewCluster** Force creating a new Swarm even if already part of one.
 - **Spec** Configuration settings of the new Swarm.
-  - **Policies** An array of acceptance policies.
-    - **Role** The role that policy applies to (`MANAGER` or `WORKER`)
-    - **Autoaccept** A boolean indicating whether nodes joining for that role should be
-      automatically accepted in the Swarm.
-    - **Secret** An optional secret to provide for nodes to join the Swarm.
 - **Orchestration** Configuration settings for the orchestration aspects of the Swarm.
   - **TaskHistoryRetentionLimit** Maximum number of tasks history stored.
 - **Raft** Raft related configuration.
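For orientation, here is a sketch of what a Swarm init call looks like once the acceptance-policy fields are gone. The endpoint path and field values are assumptions based on the parameter list above, not part of this commit:

```bash
# Hypothetical example: initialize a Swarm through the v1.24 API with only
# the parameters that remain after this change (no Policies/Autoaccept/Secret).
# The address and retention limit are illustrative values.
$ curl -s -X POST --unix-socket /var/run/docker.sock \
    -H "Content-Type: application/json" \
    -d '{
          "ListenAddr": "0.0.0.0:2377",
          "ForceNewCluster": false,
          "Spec": {
            "Orchestration": { "TaskHistoryRetentionLimit": 10 },
            "Raft": {}
          }
        }' \
    http://localhost/v1.24/swarm/init
```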


@@ -3619,11 +3619,6 @@ JSON Parameters:
   the networking interface used for the VXLAN Tunnel Endpoint (VTEP).
 - **ForceNewCluster** Force creating a new Swarm even if already part of one.
 - **Spec** Configuration settings of the new Swarm.
-  - **Policies** An array of acceptance policies.
-    - **Role** The role that policy applies to (`MANAGER` or `WORKER`)
-    - **Autoaccept** A boolean indicating whether nodes joining for that role should be
-      automatically accepted in the Swarm.
-    - **Secret** An optional secret to provide for nodes to join the Swarm.
 - **Orchestration** Configuration settings for the orchestration aspects of the Swarm.
   - **TaskHistoryRetentionLimit** Maximum number of tasks history stored.
 - **Raft** Raft related configuration.


@@ -29,12 +29,14 @@ Lists all the nodes that the Docker Swarm manager knows about. You can filter us
 Example output:
 
-    $ docker node ls
-    ID                         HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
-    1bcef6utixb0l0ca7gxuivsj0  swarm-worker2   Ready   Active
-    38ciaotwjuritcdtn9npbnkuz  swarm-worker1   Ready   Active
-    e216jshn25ckzbvmwlnh5jr3g *  swarm-manager1  Ready   Active        Leader
+```bash
+$ docker node ls
+ID                         HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
+1bcef6utixb0l0ca7gxuivsj0  swarm-worker2   Ready   Active
+38ciaotwjuritcdtn9npbnkuz  swarm-worker1   Ready   Active
+e216jshn25ckzbvmwlnh5jr3g *  swarm-manager1  Ready   Active        Leader
+```
 
 ## Filtering
@@ -53,18 +55,23 @@ The `name` filter matches on all or part of a node name.
 The following filter matches the node with a name equal to `swarm-master` string.
 
-    $ docker node ls -f name=swarm-manager1
-    ID                         HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
-    e216jshn25ckzbvmwlnh5jr3g *  swarm-manager1  Ready   Active        Leader
+```bash
+$ docker node ls -f name=swarm-manager1
+ID                         HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
+e216jshn25ckzbvmwlnh5jr3g *  swarm-manager1  Ready   Active        Leader
+```
 
 ### id
 
 The `id` filter matches all or part of a node's id.
 
-    $ docker node ls -f id=1
-    ID                         HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
-    1bcef6utixb0l0ca7gxuivsj0  swarm-worker2  Ready   Active
+```bash
+$ docker node ls -f id=1
+ID                         HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
+1bcef6utixb0l0ca7gxuivsj0  swarm-worker2  Ready   Active
+```
 
 #### label
@@ -75,6 +82,7 @@ The following filter matches nodes with the `usage` label regardless of its valu
 
 ```bash
 $ docker node ls -f "label=foo"
+ID                         HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
 1bcef6utixb0l0ca7gxuivsj0  swarm-worker2  Ready   Active
 ```
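As an aside, the filters shown in these hunks can also be combined by repeating the flag; different filter keys are ANDed together. A small sketch, with illustrative node IDs and names:

```bash
# Hypothetical example: repeat -f/--filter to combine conditions. Only nodes
# whose ID starts with "1" AND whose name contains "swarm-worker" are listed.
$ docker node ls -f id=1 -f name=swarm-worker
ID                         HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
1bcef6utixb0l0ca7gxuivsj0  swarm-worker2  Ready   Active
```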


@@ -37,7 +37,7 @@ run your manager node. For example, the tutorial uses a machine named
     e216jshn25ckzbvmwlnh5jr3g *  manager1  Ready   Active        Leader
     ```
 
-2. If you aren't still running the `redis` service from the [rolling
+3. If you aren't still running the `redis` service from the [rolling
 update](rolling-update.md) tutorial, start it now:
 
     ```bash
@@ -46,7 +46,7 @@ update](rolling-update.md) tutorial, start it now:
     c5uo6kdmzpon37mgj9mwglcfw
     ```
 
-3. Run `docker service tasks redis` to see how the Swarm manager assigned the
+4. Run `docker service tasks redis` to see how the Swarm manager assigned the
 tasks to different nodes:
 
     ```bash
@@ -61,7 +61,7 @@ tasks to different nodes:
 In this case the swarm manager distributed one task to each node. You may
 see the tasks distributed differently among the nodes in your environment.
 
-4. Run `docker node update --availability drain <NODE-ID>` to drain a node that
+5. Run `docker node update --availability drain <NODE-ID>` to drain a node that
 had a task assigned to it:
 
     ```bash
@@ -70,7 +70,7 @@ had a task assigned to it:
     worker1
     ```
 
-5. Inspect the node to check its availability:
+6. Inspect the node to check its availability:
 
     ```bash
     $ docker node inspect --pretty worker1
@@ -85,7 +85,7 @@ had a task assigned to it:
 
 The drained node shows `Drain` for `AVAILABILITY`.
 
-6. Run `docker service tasks redis` to see how the Swarm manager updated the
+7. Run `docker service tasks redis` to see how the Swarm manager updated the
 task assignments for the `redis` service:
 
     ```bash
@@ -101,7 +101,7 @@ task assignments for the `redis` service:
 with `Drain` availability and creating a new task on a node with `Active`
 availability.
 
-7. Run `docker node update --availability active <NODE-ID>` to return the
+8. Run `docker node update --availability active <NODE-ID>` to return the
 drained node to an active state:
 
     ```bash
@@ -110,7 +110,7 @@ drained node to an active state:
     worker1
     ```
 
-8. Inspect the node to see the updated state:
+9. Inspect the node to see the updated state:
 
     ```bash
     $ docker node inspect --pretty worker1
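Condensed, the renumbered steps amount to the following drain-and-restore cycle; a sketch assuming the tutorial's `worker1` node, with illustrative output:

```bash
# Hypothetical condensed run of the tutorial steps: drain the node so the
# manager reschedules its tasks, verify its availability, then return it
# to active scheduling.
$ docker node update --availability drain worker1
worker1

$ docker node inspect --pretty worker1 | grep -i availability
 Availability:          Drain

$ docker node update --availability active worker1
worker1
```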