<!--[metadata]>
+++
title = "Drain a node"
description = "Drain nodes on the Swarm"
keywords = ["tutorial, cluster management, swarm, service, drain"]
[menu.main]
identifier="swarm-tutorial-drain-node"
parent="swarm-tutorial"
weight=21
+++
<![end-metadata]-->

# Drain a node on the swarm

In earlier steps of the tutorial, all the nodes have been running with `ACTIVE`
availability. The swarm manager can assign tasks to any `ACTIVE` node, so up to
now all nodes have been available to receive tasks.

Sometimes, such as during planned maintenance windows, you need to set a node
to `DRAIN` availability. `DRAIN` availability prevents a node from receiving
new tasks from the swarm manager. It also means the manager stops tasks running
on the node and launches replica tasks on a node with `ACTIVE` availability.
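
Both availability states map onto the same `docker node update` command. As a
minimal sketch of a maintenance workflow (the node name `worker1` and the
maintenance work itself are placeholders for your own environment):

```bash
# Stop new task assignments and move this node's replica tasks elsewhere.
docker node update --availability drain worker1

# ...perform maintenance on worker1, for example an engine or OS upgrade...

# Make the node eligible to receive tasks again.
docker node update --availability active worker1
```

The steps below walk through this same sequence by hand so you can watch the
swarm reschedule tasks at each stage.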

1. If you haven't already, open a terminal and ssh into the machine where you
   run your manager node. For example, the tutorial uses a machine named
   `manager1`.

2. Verify that all your nodes are actively available.

    ```bash
    $ docker node ls

    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    1bcef6utixb0l0ca7gxuivsj0    worker2   Ready   Active
    38ciaotwjuritcdtn9npbnkuz    worker1   Ready   Active
    e216jshn25ckzbvmwlnh5jr3g *  manager1  Ready   Active        Leader
    ```

3. If you aren't still running the `redis` service from the [rolling
   update](rolling-update.md) tutorial, start it now:

    ```bash
    $ docker service create --replicas 3 --name redis --update-delay 10s redis:3.0.6

    c5uo6kdmzpon37mgj9mwglcfw
    ```
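
    Before moving on, you can check that all three replicas are up with
    `docker service ls` (the exact output columns vary by Docker version):

    ```bash
    # The redis service should report 3/3 replicas once all tasks are running.
    $ docker service ls
    ```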

4. Run `docker service ps redis` to see how the swarm manager assigned the
   tasks to different nodes:

    ```bash
    $ docker service ps redis

    ID                         NAME     SERVICE  IMAGE        LAST STATE          DESIRED STATE  NODE
    7q92v0nr1hcgts2amcjyqg3pq  redis.1  redis    redis:3.0.6  Running 26 seconds  Running        manager1
    7h2l8h3q3wqy5f66hlv9ddmi6  redis.2  redis    redis:3.0.6  Running 26 seconds  Running        worker1
    9bg7cezvedmkgg6c8yzvbhwsd  redis.3  redis    redis:3.0.6  Running 26 seconds  Running        worker2
    ```

    In this case, the swarm manager distributed one task to each node. You may
    see the tasks distributed differently among the nodes in your environment.
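
    If you only want the tasks scheduled on a particular node, `docker service
    ps` accepts a `node` filter on recent Docker versions (the node name here
    is an example):

    ```bash
    # Show only the redis tasks assigned to worker1.
    $ docker service ps --filter node=worker1 redis
    ```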

5. Run `docker node update --availability drain <NODE-ID>` to drain a node that
   had a task assigned to it:

    ```bash
    $ docker node update --availability drain worker1

    worker1
    ```

6. Inspect the node to check its availability:

    ```bash
    $ docker node inspect --pretty worker1

    ID:            38ciaotwjuritcdtn9npbnkuz
    Hostname:      worker1
    Status:
     State:        Ready
     Availability: Drain
    ...snip...
    ```

    The drained node shows `Drain` for `AVAILABILITY`.
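
    For scripting, you can extract just the availability field with a Go
    template instead of the `--pretty` view (this assumes the
    `Spec.Availability` field path in the node's JSON, which you can confirm
    with a plain `docker node inspect worker1`):

    ```bash
    $ docker node inspect -f '{{ .Spec.Availability }}' worker1

    drain
    ```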

7. Run `docker service ps redis` to see how the swarm manager updated the
   task assignments for the `redis` service:

    ```bash
    $ docker service ps redis

    ID                         NAME     SERVICE  IMAGE        LAST STATE              DESIRED STATE  NODE
    7q92v0nr1hcgts2amcjyqg3pq  redis.1  redis    redis:3.0.6  Running 4 minutes       Running        manager1
    b4hovzed7id8irg1to42egue8  redis.2  redis    redis:3.0.6  Running About a minute  Running        worker2
    9bg7cezvedmkgg6c8yzvbhwsd  redis.3  redis    redis:3.0.6  Running 4 minutes       Running        worker2
    ```

    The swarm manager maintains the desired state by ending the task on a node
    with `Drain` availability and creating a new task on a node with `Active`
    availability.
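
    You can also confirm that nothing is left running on the drained node. On
    Docker 1.13 and later, `docker node ps` lists the tasks assigned to a
    given node:

    ```bash
    # On a drained node, no tasks should show a desired state of Running.
    $ docker node ps worker1
    ```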

8. Run `docker node update --availability active <NODE-ID>` to return the
   drained node to an active state:

    ```bash
    $ docker node update --availability active worker1

    worker1
    ```

9. Inspect the node to see the updated state:

    ```bash
    $ docker node inspect --pretty worker1

    ID:            38ciaotwjuritcdtn9npbnkuz
    Hostname:      worker1
    Status:
     State:        Ready
     Availability: Active
    ...snip...
    ```

    When you set the node back to `Active` availability, it can receive new
    tasks:

    * during a service update to scale up
    * during a rolling update
    * when you set another node to `Drain` availability
    * when a task fails on another active node
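
    For example, scaling the service up gives the scheduler a chance to place
    a new task on the reactivated node (the target replica count here is
    arbitrary):

    ```bash
    # Add a fourth replica; worker1 is now eligible to run it.
    $ docker service scale redis=4

    # Check where the new task landed.
    $ docker service ps redis
    ```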