add overlay networking guide

Signed-off-by: Charles Smith <charles.smith@docker.com>
Charles Smith 2016-08-01 14:17:21 -07:00
parent ccf3dd85f0
commit e56dd0e0e7
7 changed files with 315 additions and 540 deletions

docs/swarm/networking.md Normal file (308 additions)

@@ -0,0 +1,308 @@
<!--[metadata]>
+++
title = "Attach services to an overlay network"
description = "Use swarm mode networking features"
keywords = ["guide", "swarm mode", "swarm", "network"]
[menu.main]
identifier="networking-guide"
parent="engine_swarm"
weight=16
+++
<![end-metadata]-->
# Attach services to an overlay network
Docker Engine swarm mode natively supports **overlay networks**, so you can
enable container-to-container networks. When you use swarm mode, you don't need
an external key-value store. Features of swarm mode overlay networks include the
following:
* You can attach multiple services to the same network.
* By default, **service discovery** assigns a virtual IP address (VIP) and DNS
entry to each service in the swarm, making it available by its service name to
containers on the same network.
* You can configure the service to use DNS round-robin instead of a VIP.
In order to use overlay networks in the swarm, you need to have the following
ports open between the swarm nodes before you enable swarm mode:
* Port `7946` TCP/UDP for container network discovery.
* Port `4789` UDP for the container overlay network.
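For example, if your hosts use `ufw` as the firewall (an assumption; substitute the
equivalent rules for whatever firewall tooling you run), you could open these ports with:

```bash
# Hypothetical firewall rules, assuming ufw; adjust for your environment.
$ sudo ufw allow 7946/tcp   # container network discovery
$ sudo ufw allow 7946/udp   # container network discovery
$ sudo ufw allow 4789/udp   # container overlay network (VXLAN data plane)
```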
## Create an overlay network in a swarm
When you run Docker Engine in swarm mode, you can run `docker network create`
from a manager node to create an overlay network. For instance, to create a
network named `my-network`:
```bash
$ docker network create \
--driver overlay \
--subnet 10.0.9.0/24 \
--opt encrypted \
my-network
273d53261bcdfda5f198587974dae3827e947ccd7e74a41bf1f482ad17fa0d33
```
By default nodes in the swarm encrypt traffic between themselves and other
nodes. The optional `--opt encrypted` flag enables an additional layer of
encryption in the overlay driver for vxlan traffic between containers on
different nodes. For more information, refer to [Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md).
The `--subnet` flag specifies the subnet for use with the overlay network. When
you don't specify a subnet, the swarm manager automatically chooses a subnet and
assigns it to the network. On some older kernels, including kernel 3.10,
automatically assigned addresses may overlap with another subnet in your
infrastructure. Such overlaps can cause connectivity issues or failures with containers connected to the network.
Before you attach a service to the network, the network only extends to manager
nodes. You can run `docker network ls` to view the network:
```bash
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
f9145f09b38b bridge bridge local
..snip..
bd0befxwiva4 my-network overlay swarm
```
The `swarm` scope indicates that the network is available for use with services
deployed to the swarm. After you create a service attached to the network, the
swarm only extends the network to worker nodes where the scheduler places tasks
for the service. On workers without tasks running for a service attached to the
network, `network ls` does not display the network.
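If you want to verify this behavior, one option is to filter the network list by
name on a worker node. Before the scheduler places a task for an attached service
there, the network does not appear (a sketch; this assumes the network is named
`my-network`):

```bash
$ docker network ls --filter name=my-network
NETWORK ID          NAME                DRIVER              SCOPE
```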
## Attach a service to an overlay network
To attach a service to an overlay network, pass the `--network` flag when you
create a service. For example, to create an nginx service attached to a
network called `my-network`:
```bash
$ docker service create \
--replicas 3 \
--name my-web \
--network my-network \
nginx
```
>**Note:** You have to create the network before you can attach a service to it.
The containers for the tasks in the service can connect to one another on the
overlay network. The swarm extends the network to all the nodes with `Running`
tasks for the service.
From a manager node, run `docker service ps <SERVICE>` to view the nodes where
tasks are running for the service:
```bash
$ docker service ps my-web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
63s86gf6a0ms34mvboniev7bs my-web.1 nginx node1 Running Running 58 seconds ago
6b3q2qbjveo4zauc6xig7au10 my-web.2 nginx node2 Running Running 58 seconds ago
66u2hcrz0miqpc8h0y0f3v7aw my-web.3 nginx node3 Running Running about a minute ago
```
![service vip image](images/service-vip.png)
You can inspect the network from any node with a `Running` task for a service
attached to the network:
```bash
$ docker network inspect <NETWORK>
```
The network information includes a list of the containers on the node that are
attached to the network. For instance:
```bash
$ docker network inspect my-network
[
{
"Name": "my-network",
"Id": "7m2rjx0a97n88wzr4nu8772r3",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.9.0/24",
"Gateway": "10.0.9.1"
}
]
},
"Internal": false,
"Containers": {
"404d1dec939a021678132a35259c3604b9657649437e59060621a17edae7a819": {
"Name": "my-web.1.63s86gf6a0ms34mvboniev7bs",
"EndpointID": "3c9588d04db9bc2bf8749cb079689a3072c44c68e544944cbea8e4bc20eb7de7",
"MacAddress": "02:42:0a:00:09:03",
"IPv4Address": "10.0.9.3/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
```
In the example above, the container `my-web.1.63s86gf6a0ms34mvboniev7bs` for the
`my-web` service is attached to the `my-network` network on node2.
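If you only want the list of attached containers rather than the full output, you
can pass a `--format` template to `docker network inspect`. For example:

```bash
# Print only the Containers section of the network as JSON.
$ docker network inspect --format '{{json .Containers}}' my-network
```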
## Use swarm mode service discovery
By default, when you create a service attached to a network, the swarm assigns
the service a VIP. The VIP maps to a DNS alias based upon the service name.
Containers on the network share DNS mappings for the service via gossip, so any
container on the network can access the service via its service name.
You don't need to expose service-specific ports to make the service
available to other services on the same overlay network. The swarm's internal
load balancer automatically distributes requests to the service VIP among the
active tasks.
You can inspect the service to view the virtual IP. For example:
```bash
$ docker service inspect \
--format='{{json .Endpoint.VirtualIPs}}' \
my-web
[{"NetworkID":"7m2rjx0a97n88wzr4nu8772r3" "Addr":"10.0.0.2/24"}]
```
The following example shows how to add a `busybox` service on the same network
as the `nginx` service, and how the busybox service can access `nginx` using
the DNS name `my-web`:
1. From a manager node, deploy a busybox service to the same network as
`my-web`:
```bash
$ docker service create \
--name my-busybox \
--network my-network \
busybox \
sleep 3000
```
2. Look up the node where `my-busybox` is running:
```bash
$ docker service ps my-busybox
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
1dok2cmx2mln5hbqve8ilnair my-busybox.1 busybox node1 Running Running 5 seconds ago
```
3. From the node where the busybox task is running, open an interactive shell to
the busybox container:
```bash
$ docker exec -it my-busybox.1.1dok2cmx2mln5hbqve8ilnair /bin/sh
```
You can deduce the container name as `<TASK-NAME>`+`<ID>`. Alternatively,
you can run `docker ps` on the node where the task is running.
4. From inside the busybox container, query the DNS to view the VIP for the
`my-web` service:
```bash
$ nslookup my-web
Server: 127.0.0.11
Address 1: 127.0.0.11
Name: my-web
Address 1: 10.0.9.2 ip-10-0-9-2.us-west-2.compute.internal
```
>**Note:** The examples here use `nslookup`, but you can use `dig` or any
available DNS query tool.
5. From inside the busybox container, query the DNS using the special name
`tasks.<SERVICE-NAME>` to find the IP addresses of all the containers for the
`my-web` service:
```bash
$ nslookup tasks.my-web
Server: 127.0.0.11
Address 1: 127.0.0.11
Name: tasks.my-web
Address 1: 10.0.9.4 my-web.2.6b3q2qbjveo4zauc6xig7au10.my-network
Address 2: 10.0.9.3 my-web.1.63s86gf6a0ms34mvboniev7bs.my-network
Address 3: 10.0.9.5 my-web.3.66u2hcrz0miqpc8h0y0f3v7aw.my-network
```
6. From inside the busybox container, run `wget` to access the nginx web server
running in the `my-web` service:
```bash
$ wget -O- my-web
Connecting to my-web (10.0.9.2:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...snip...
```
The swarm load balancer automatically routes HTTP requests addressed to the
service's VIP to an active task. It distributes subsequent requests to
other tasks using round-robin selection.
## Use DNS round-robin for a service
You can configure a service to use DNS round-robin directly, without a VIP, by
passing `--endpoint-mode dnsrr` when you create the service. DNS round-robin is
useful when you want to use your own load balancer.
The following example shows a service with `dnsrr` endpoint mode:
```bash
$ docker service create \
--replicas 3 \
--name my-dnsrr-service \
--network my-network \
--endpoint-mode dnsrr \
nginx
```
When you query the DNS for the service name, the DNS service returns the IP
addresses for all the task containers:
```bash
$ nslookup my-dnsrr-service
Server: 127.0.0.11
Address 1: 127.0.0.11
Name:      my-dnsrr-service
Address 1: 10.0.9.8 my-dnsrr-service.1.bd3a67p61by5dfdkyk7kog7pr.my-network
Address 2: 10.0.9.10 my-dnsrr-service.3.0sb1jxr99bywbvzac8xyw73b1.my-network
Address 3: 10.0.9.9 my-dnsrr-service.2.am6fx47p3bropyy2dy4f8hofb.my-network
```
## Confirm VIP connectivity
In general, we recommend you use `dig`, `nslookup`, or another DNS query tool to
test access to the service name via DNS. Because a VIP is a logical IP, `ping`
is not the right tool to confirm VIP connectivity.
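For example, from inside a container attached to the overlay network you could
resolve the service name and the task list with `dig` (a sketch; the stock
`busybox` image does not ship `dig`, so this assumes an image that provides it,
or use `nslookup` as shown earlier):

```bash
# Resolve the service VIP.
$ dig +short my-web

# Resolve the individual task IP addresses behind the service.
$ dig +short tasks.my-web
```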
## Learn More
* [Deploy services to a swarm](services.md)
* [Swarm administration guide](admin_guide.md)
* [Docker Engine command line reference](../reference/commandline/index.md)
* [Swarm mode tutorial](swarm-tutorial/index.md)


@@ -213,8 +213,9 @@ $ docker service create \
The swarm extends `my-network` to each node running the service.
<!-- TODO when overlay-security-model is published
For more information, refer to [Note on Docker 1.12 Overlay Network Security Model](../userguide/networking/overlay-security-model.md).-->
For more information on overlay networking and service discovery, refer to
[Attach services to an overlay network](networking.md). See also
[Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md).
## Configure update behavior


@@ -1,538 +0,0 @@
<!--[metadata]>
+++
title = "Docker container networking"
description = "How do we connect docker containers within and across hosts ?"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight = -5
+++
<![end-metadata]-->
# Understand Docker container networks
To build web applications that act in concert but do so securely, use the Docker
networks feature. Networks, by definition, provide complete isolation for
containers. So, it is important to have control over the networks your
applications run on. Docker container networks give you that control.
This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the type of networks created by default
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.
## Default Networks
When you install Docker, it creates three networks automatically. You can list
these networks using the `docker network ls` command:
```
$ docker network ls
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
cf03ee007fb4 host host
```
Historically, these three networks are part of Docker's implementation. When
you run a container you can use the `--network` flag to specify which network you
want to run a container on. These three networks are still available to you.
The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
--network=<NETWORK>` option, the Docker daemon connects containers to this network
by default. You can see this bridge as part of a host's network stack by using
the `ifconfig` command on the host.
```
$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:47:bc:3a:eb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1100 (1.1 KB) TX bytes:648 (648.0 B)
```
The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this:
```
$ docker attach nonenetcontainer
root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@0cb243cd1293:/# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@0cb243cd1293:/#
```
>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
The `host` network adds a container on the host's network stack. You'll find the
network configuration inside the container is identical to the host.
With the exception of the `bridge` network, you really don't need to
interact with these default networks. While you can list and inspect them, you
cannot remove them. They are required by your Docker installation. However, you
can add your own user-defined networks and these you can remove when you no
longer need them. Before you learn more about creating your own networks, it is
worth looking at the default `bridge` network a bit.
### The default bridge network in detail
The default `bridge` network is present on all Docker hosts. The `docker network inspect`
command returns information about a network:
```
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.17.0.1/16",
"Gateway": "172.17.0.1"
}
]
},
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
}
]
```
The Engine automatically creates a `Subnet` and `Gateway` for the network.
The `docker run` command automatically adds new containers to this network.
```
$ docker run -itd --name=container1 busybox
3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
$ docker run -itd --name=container2 busybox
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```
Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their IDs show up in the "Containers" section of `docker network inspect`:
```
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.17.0.1/16",
"Gateway": "172.17.0.1"
}
]
},
"Containers": {
"3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
"EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
"EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
}
]
```
The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy `docker run --link` option.
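For example, a minimal sketch of the legacy approach (the container names here are illustrative):

```
# Start a "db" container, then link it into a second container.
$ docker run -itd --name=db busybox
$ docker run -itd --name=web --link db:db busybox

# Inside "web", the link adds a hosts entry for "db", so the name resolves.
$ docker exec web ping -w3 db
```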
You can `attach` to a running container and investigate its configuration:
```
$ docker attach container1
root@0cb243cd1293:/# ifconfig
ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
Then use `ping` for about 3 seconds to test the connectivity of the containers on this `bridge` network.
```
root@0cb243cd1293:/# ping -w3 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.083/0.096 ms
```
Finally, use the `cat` command to check the `container1` network configuration:
```
root@0cb243cd1293:/# cat /etc/hosts
172.17.0.2 3386a527aa08
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then, attach to `container2` and repeat these three commands.
```
$ docker attach container2
root@0cb243cd1293:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1166 (1.1 KiB) TX bytes:1026 (1.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@0cb243cd1293:/# ping -w3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.071/0.075 ms
/ # cat /etc/hosts
172.17.0.3 94447ca47985
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communications between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.
## User-defined networks
You can create your own user-defined networks that better isolate containers.
Docker provides some default **network drivers** for creating these
networks. You can create a new **bridge network** or **overlay network**. You
can also create a **network plugin** or **remote network** written to your own
specifications.
You can create multiple networks. You can add containers to more than one
network. Containers can communicate within networks but not across
networks. A container attached to two networks can communicate with member
containers in either network. When a container is connected to multiple
networks, its external connectivity is provided via the first non-internal
network, in lexical order.
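For example, you can attach a running container to an additional network with
`docker network connect` (a sketch; the network and container names are illustrative):

```
# Create two user-defined bridge networks.
$ docker network create --driver bridge frontend
$ docker network create --driver bridge backend

# Start a container on "frontend", then also connect it to "backend".
$ docker run -itd --name=app --network=frontend busybox
$ docker network connect backend app
```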
The next few sections describe each of Docker's built-in network drivers in
greater detail.
### A bridge network
The easiest user-defined network to create is a `bridge` network. This network
is similar to the historical, default `docker0` network. There are some added
features and some old features that aren't available.
```
$ docker network create --driver bridge isolated_nw
1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
"Containers": {},
"Options": {}
}
]
$ docker network ls
NETWORK ID NAME DRIVER
9f904ee27bf5 none null
cf03ee007fb4 host host
7fca4eb8c647 bridge bridge
c5ee82f76de3 isolated_nw bridge
```
After you create the network, you can launch containers on it using the `docker run --network=<NETWORK>` option.
```
$ docker run --network=isolated_nw -itd --name=container3 busybox
8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {
"8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
"EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
```
The containers you launch into this network must reside on the same Docker host.
Each container in the network can immediately communicate with other containers
in the network. However, the network itself isolates the containers from external
networks.
![An isolated network](images/bridge_network.png)
Within a user-defined bridge network, linking is not supported. You can
expose and publish container ports on containers in this network. This is useful
if you want to make a portion of the `bridge` network available to an outside
network.
![Bridge network](images/network_access.png)
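For instance, to publish container port 80 on host port 8080 for a container in
the `isolated_nw` network (a sketch; nginx is just an example image):

```
$ docker run --network=isolated_nw -itd --name=web -p 8080:80 nginx
```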
A bridge network is useful in cases where you want to run a relatively small
network on a single host. You can, however, create significantly larger networks
by creating an `overlay` network.
### An overlay network
Docker's `overlay` network driver supports multi-host networking natively
out-of-the-box. This support is accomplished with the help of `libnetwork`, a
built-in VXLAN-based overlay network driver, and Docker's `libkv` library.
The `overlay` network requires a valid key-value store service. Currently,
Docker's `libkv` supports Consul, Etcd, and ZooKeeper (Distributed store). Before
creating a network you must install and configure your chosen key-value store
service. The Docker hosts that you intend to network and the service must be
able to communicate.
![Key-value store](images/key_value.png)
Each host in the network must run a Docker Engine instance. The easiest way to
provision the hosts is with Docker Machine.
![Engine on each host](images/engine_on_net.png)
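For example, a minimal sketch using Docker Machine's VirtualBox driver (the
driver and host names are assumptions; pick whatever driver matches your
infrastructure):

```
$ docker-machine create -d virtualbox host1
$ docker-machine create -d virtualbox host2
```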
You should open the following ports between each of your hosts.
| Protocol | Port | Description |
|----------|------|-----------------------|
| udp | 4789 | Data plane (VXLAN) |
| tcp/udp | 7946 | Control plane |
Your key-value store service may require additional ports.
Check your vendor's documentation and open any required ports.
Once you have several machines provisioned, you can use Docker Swarm to quickly
form them into a swarm which includes a discovery service as well.
To create an overlay network, you configure options on the `daemon` of each
Docker Engine that participates in the `overlay` network. There are three
options to set (a sketch of a daemon invocation using them follows the table):
<table>
<thead>
<tr>
<th>Option</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><pre>--cluster-store=PROVIDER://URL</pre></td>
<td>Describes the location of the KV service.</td>
</tr>
<tr>
<td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
<td>The IP address or interface of the HOST used for clustering.</td>
</tr>
<tr>
<td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
<td>Options such as a TLS certificate or tuning discovery timers.</td>
</tr>
</tbody>
</table>
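For example, a hedged sketch of a daemon invocation using these options, assuming
a Consul key-value store reachable at `<CONSUL-IP>:8500` and `eth0` as the
interface this host advertises for clustering (adjust both for your setup):

```
$ dockerd \
    --cluster-store=consul://<CONSUL-IP>:8500 \
    --cluster-advertise=eth0:2376
```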
Create an `overlay` network on one of the machines in the swarm.
    $ docker network create --driver overlay my-multi-host-network
This results in a single network spanning multiple hosts. An `overlay` network
provides complete isolation for the containers.
![An overlay network](images/overlay_network.png)
Then, on each host, launch containers making sure to specify the network name.
    $ docker run -itd --network=my-multi-host-network busybox
Once connected, each container has access to all the containers in the network
regardless of which Docker host the container was launched on.
![Published port](images/overlay-network-final.png)
If you would like to try this for yourself, see the [Getting started for
overlay](get-started-overlay.md).
### Custom network plugin
If you like, you can write your own network driver plugin. A network
driver plugin makes use of Docker's plugin infrastructure. In this
infrastructure, a plugin is a process running on the same Docker host as the
Docker `daemon`.
Network plugins follow the same restrictions and installation rules as other
plugins. All plugins make use of the plugin API. They have a lifecycle that
encompasses installation, starting, stopping and activation.
Once you have created and installed a custom network driver, you use it like the
built-in network drivers. For example:
    $ docker network create --driver weave mynet
You can inspect it, add containers to and from it, and so forth. Of course,
different plugins may make use of different technologies or frameworks. Custom
networks can include features not present in Docker's default networks. For more
information on writing plugins, see [Extending Docker](../../extend/index.md) and
[Writing a network driver plugin](../../extend/plugins_network.md).
### Docker embedded DNS server
The Docker daemon runs an embedded DNS server to provide automatic service
discovery for containers connected to user-defined networks. Name resolution
requests from the containers are handled first by the embedded DNS server. If
the embedded DNS server is unable to resolve the request, it is forwarded to any
external DNS servers configured for the container. To facilitate this, when the
container is created, only the embedded DNS server, reachable at `127.0.0.11`,
is listed in the container's `resolv.conf` file. For more information on the
embedded DNS server on user-defined networks, see the
[embedded DNS server in user-defined networks](configure-dns.md).
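For example, you can see the embedded DNS server entry from inside a container on
a user-defined network (a sketch using the `isolated_nw` network created earlier;
exact output varies by host configuration):

```
$ docker run --rm --network=isolated_nw busybox cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
```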
## Links
Before the Docker network feature, you could use the Docker link feature to
allow containers to discover each other. With the introduction of Docker networks,
containers can be discovered by name automatically. You can still create links,
but they behave differently when used in the default `docker0` bridge network
compared to user-defined networks. For more information, refer to
[Legacy Links](default_network/dockerlinks.md) for the link feature in the default `bridge` network,
and to [linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks)
for link functionality in user-defined networks.
## Related information
- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
- [Managing Data in Containers](../../tutorials/dockervolumes.md)
- [Docker Machine overview](https://docs.docker.com/machine)
- [Docker Swarm overview](https://docs.docker.com/swarm)
- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)


@@ -54,6 +54,7 @@ $ $ docker service create --replicas 2 --network my-multi-host-network --name my
Overlay networks for a swarm are not available to unmanaged containers. For more information refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
See also [Attach services to an overlay network](../../swarm/networking.md).
## Overlay networking with an external key-value store


@@ -437,6 +437,8 @@ Overlay networks for a swarm are not available to containers started with
`docker run` that don't run as part of a swarm mode service. For more
information refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
See also [Attach services to an overlay network](../../swarm/networking.md).
### An overlay network with an external key-value store
If you are not using Docker Engine in swarm mode, the `overlay` network requires