
Add design documentation

This is an initial pass at the design docs.
Hopefully, we can merge this and then start accepting PRs to improve it!

Signed-off-by: Dave Tucker <dt@docker.com>
This commit is contained in:
Dave Tucker 2015-04-23 12:46:35 -07:00 committed by Madhu Venugopal
parent db8100743b
commit 4f7eb502bf
5 changed files with 158 additions and 26 deletions

libnetwork/ROADMAP.md

@@ -1,33 +1,24 @@
# libnetwork: what's next?
# Roadmap
This document is a high-level overview of where we want to take libnetwork next.
It is a curated selection of planned improvements which are either important, difficult, or both.
Libnetwork is a young project and is still being defined.
This document defines the high-level goals of the project and describes its release relationship to the Docker Platform.
For a more complete view of planned and requested improvements, see [the Github issues](https://github.com/docker/libnetwork/issues).
* [Goals](#goals)
* [Project Planning](#project-planning): the release relationship to the Docker Platform.
To suggest changes to the roadmap, including additions, please write the change as if it were already in effect, and make a pull request.
## Goals
## Container Network Model (CNM)
- Combine the networking logic in Docker Engine and libcontainer into a single, reusable library
- Replace the networking subsystem of Docker Engine with libnetwork
- Define a flexible model that allows local and remote drivers to provide networking to containers
- Provide a stand-alone tool for using/testing libnetwork
#### Concepts
## Project Planning
1. Sandbox: An isolated environment. This is more or less a standard docker container.
2. Endpoint: An addressable endpoint used for communication over a specific network. Endpoints join exactly one network and are expected to create a method of network communication for a container. Example: a veth pair.
3. Network: A collection of endpoints that are able to communicate with each other. Networks are intended to be isolated from each other and not to communicate across network boundaries.
#### Axioms
The container network model assumes the following axioms about how libnetwork provides network connectivity to containers:
1. All containers on a specific network can communicate with each other freely.
2. Multiple networks are the way to segment traffic between containers and should be supported by all drivers.
3. Multiple endpoints per container are the way to join a container to multiple networks.
4. An endpoint is added to a sandbox to provide it with network connectivity.
## Bridge Driver using CNM
Existing native networking functionality of Docker will be implemented as a Bridge Driver using the above CNM. In order to prove the effectiveness of the Bridge Driver, we will make the necessary modifications to Docker Daemon and LibContainer to replace the existing networking functionality with libnetwork & Bridge Driver.
## Plugin support
The Driver model provides a modular way to allow different networking solutions to be used as the backend, but is static in nature.
Plugins promise to allow dynamic pluggable networking backends for libnetwork.
There are other community efforts implementing Plugin support for the Docker platform, and the libnetwork project intends to make use of such support when it becomes available.
Libnetwork versions do not map 1:1 with Docker Platform releases.
Milestones and Project Pages are used to define the set of features that are included in each release.
| Platform Version | Libnetwork Version | Planning |
|------------------|--------------------|----------|
| Docker 1.7 | [0.3](https://github.com/docker/libnetwork/milestones/0.3) | [Project Page](https://github.com/docker/libnetwork/wiki/Docker-1.7-Project-Page) |
| Docker 1.8 | [1.0](https://github.com/docker/libnetwork/milestones/1.0) | [Project Page](https://github.com/docker/libnetwork/wiki/Docker-1.8-Project-Page) |

libnetwork/docs/bridge.md Normal file

@@ -0,0 +1,13 @@
Bridge Driver
=============
The bridge driver is an implementation that uses Linux Bridging and iptables to provide connectivity for containers.
It creates a single bridge, called `docker0` by default, and attaches a `veth pair` between the bridge and every endpoint.
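
As a rough illustration of this mechanism (not the driver's actual code), the sketch below creates a bridge and a `veth` pair and attaches one end of the pair to the bridge using the `github.com/vishvananda/netlink` package, which libnetwork uses for such operations on Linux. The device names here are made up for the example.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Create a bridge device. The bridge driver names its bridge
	// "docker0" by default; "demo0" is just an example.
	la := netlink.NewLinkAttrs()
	la.Name = "demo0"
	bridge := &netlink.Bridge{LinkAttrs: la}
	if err := netlink.LinkAdd(bridge); err != nil {
		log.Fatalf("failed to create bridge: %v", err)
	}

	// Create a veth pair: one end is enslaved to the bridge, the peer
	// end is what would later be handed to a container's sandbox.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "vethdemo0"},
		PeerName:  "vethdemo1",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatalf("failed to create veth pair: %v", err)
	}
	if err := netlink.LinkSetMaster(veth, bridge); err != nil {
		log.Fatalf("failed to attach veth to bridge: %v", err)
	}
	if err := netlink.LinkSetUp(bridge); err != nil {
		log.Fatalf("failed to bring the bridge up: %v", err)
	}
}
```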
## Configuration
The bridge driver supports configuration through the Docker Daemon flags.
## Usage
This driver is supported only for the default "bridge" network and cannot be used for any other networks.

libnetwork/docs/design.md Normal file

@@ -0,0 +1,116 @@
Design
======
The main goals of libnetwork are highlighted in the [roadmap](../ROADMAP.md).
This document describes how libnetwork has been designed to achieve these goals.
Requirements for individual releases can be found on the [Project Page](https://github.com/docker/libnetwork/wiki).
## Legacy Docker Networking
Prior to libnetwork, a container's networking was handled in both Docker Engine and libcontainer.
Docker Engine was responsible for providing the configuration of the container's networking stack.
Libcontainer would then use this information to create the necessary networking devices and move them into a network namespace.
This namespace would then be used when the container is started.
## The Container Network Model
Libnetwork implements the Container Network Model (CNM), which formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. The CNM is built on three main components.
**Sandbox**
A Sandbox contains the configuration of a container's network stack.
This includes management of the container's interfaces, routing table and DNS settings.
An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail or other similar concept.
A Sandbox may contain *many* endpoints from *multiple* networks.
**Endpoint**
An Endpoint joins a Sandbox to a Network.
An implementation of an Endpoint could be a `veth` pair, an Open vSwitch internal port or similar.
An Endpoint can belong to *only one* Network and to *only one* Sandbox.
**Network**
A Network is a group of Endpoints that are able to communicate with each-other directly.
An implementation of a Network could be a Linux bridge, a VLAN, etc.
Networks consist of *many* endpoints.
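
Purely as an illustration of how these three abstractions relate to one another, the containment could be sketched as below. These are not the actual libnetwork types; they only capture the cardinalities described above.

```go
// Illustrative sketch only; not the real libnetwork definitions.
type Network struct {
	Name      string
	Endpoints []*Endpoint // a Network groups many Endpoints
}

type Endpoint struct {
	Network *Network // exactly one Network
	Sandbox *Sandbox // at most one Sandbox that it has joined
}

type Sandbox struct {
	Key       string      // e.g. a network namespace path on Linux
	Endpoints []*Endpoint // Endpoints from potentially multiple Networks
}
```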
## API
Consumers of the CNM, such as Docker, interact with it through the following APIs.
The `NetworkController` object is created to manage the allocation of Networks and the binding of these Networks to a specific Driver.
Once a Network is created, `network.CreateEndpoint` can be called to create a new Endpoint in a given network.
Once an Endpoint exists, it can be joined to a Sandbox using `endpoint.Join(id)`. If no Sandbox exists, one will be created; if the Sandbox already exists, the Endpoint is added to it.
The result of the Join operation is a Sandbox Key, which identifies the Sandbox to the Operating System (e.g. a path).
This Key can be passed to the container runtime so the Sandbox is used when the container is started.
When the container is stopped, `endpoint.Leave` will be called on each endpoint within the Sandbox.
Finally, `endpoint.Delete` can be called to remove the Endpoint from its Network.
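
A rough sketch of that sequence is shown below. The method names follow the description above, but the constructor, the exact signatures and the error handling are assumptions made for the sake of the example, not the definitive libnetwork API.

```go
package main

import "github.com/docker/libnetwork"

// Illustrative flow only; constructor and signatures are simplified assumptions.
func connectContainer(containerID string) error {
	controller, err := libnetwork.New() // obtain a NetworkController (assumed constructor)
	if err != nil {
		return err
	}

	// Bind a new Network to a specific driver, here the bridge driver.
	network, err := controller.NewNetwork("bridge", "net1")
	if err != nil {
		return err
	}

	// Create an Endpoint in that Network.
	endpoint, err := network.CreateEndpoint("ep1")
	if err != nil {
		return err
	}

	// Join the Endpoint to a Sandbox. The returned Sandbox Key
	// (e.g. a namespace path) is passed to the container runtime.
	sandboxKey, err := endpoint.Join(containerID)
	if err != nil {
		return err
	}
	_ = sandboxKey

	// ...later, when the container stops:
	if err := endpoint.Leave(containerID); err != nil {
		return err
	}
	return endpoint.Delete()
}
```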
## Component Lifecycle
### Sandbox Lifecycle
The Sandbox is created during the first `endpoint.Join` and deleted when `endpoint.Leave` is called on the last endpoint.
<TODO @mrjana or @mavenugo to add more details>
### Endpoint Lifecycle
The Endpoint is created on `network.CreateEndpoint` and removed on `endpoint.Delete`
<TODO @mrjana or @mavenugo to add details on when this is called>
### Network Lifecycle
Networks are created when the CNM API call is invoked and are not cleaned up until a corresponding delete API call is made.
## Implementation
Networks and Endpoints are mostly implemented in drivers. For more information on these details, please see [the drivers section](#drivers).
## Sandbox
Libnetwork provides an implementation of a Sandbox for Linux.
This creates a Network Namespace for each sandbox which is uniquely identified by a path on the host filesystem.
Netlink calls are used to move interfaces from the global namespace to the Sandbox namespace.
Netlink is also used to manage the routing table in the namespace.
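
For illustration only, moving an interface from the global namespace into a sandbox identified by such a path could look like the sketch below, using the `github.com/vishvananda/netns` and `github.com/vishvananda/netlink` packages; the interface name and path are examples, not what libnetwork actually uses.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

func main() {
	// The sandbox is identified by a path on the host filesystem
	// (example path; libnetwork picks its own location).
	ns, err := netns.GetFromPath("/var/run/netns/sandbox-demo")
	if err != nil {
		log.Fatalf("failed to open sandbox namespace: %v", err)
	}
	defer ns.Close()

	// Look up an interface in the global namespace...
	link, err := netlink.LinkByName("vethdemo1")
	if err != nil {
		log.Fatalf("failed to find interface: %v", err)
	}
	// ...and move it into the sandbox's namespace via netlink.
	if err := netlink.LinkSetNsFd(link, int(ns)); err != nil {
		log.Fatalf("failed to move interface into the sandbox: %v", err)
	}
}
```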
# Drivers
## API
The Driver API allows libnetwork to defer to a driver to provide Networking services.
For Networks, drivers are notified on Create and Delete events.
For Endpoints, drivers are also notified on Create and Delete events.
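
A minimal sketch of what such a driver contract could look like is shown below; the names are illustrative, and the real libnetwork driver interface has more methods and different parameters.

```go
// Illustrative sketch only; not the actual libnetwork driver API.
type Driver interface {
	// Network events.
	CreateNetwork(networkID string, options map[string]interface{}) error
	DeleteNetwork(networkID string) error

	// Endpoint events.
	CreateEndpoint(networkID, endpointID string, options map[string]interface{}) error
	DeleteEndpoint(networkID, endpointID string) error
}
```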
## Implementations
Libnetwork includes the following drivers:
- null
- bridge
- overlay
- remote
### Null
The null driver is a `noop` implementation of the driver API, used only in cases where no networking is desired.
### Bridge
The `bridge` driver provides a Linux-specific bridging implementation based on the Linux Bridge.
For more details, please [see the Bridge Driver documentation](bridge.md)
### Overlay
The `overlay` driver implements networking that can span multiple hosts using overlay network encapsulations such as VXLAN.
For more details on its design, please see the [Overlay Driver Design](overlay.md)
### Remote
The `remote` driver provides a means of supporting drivers over a remote transport.
This allows a driver to be written in a language of your choice.
For further details, please see the [Remote Driver Design](remote.md)

libnetwork/docs/overlay.md Normal file

@@ -0,0 +1,6 @@
Overlay Driver
==============
## Configuration
## Usage

libnetwork/docs/remote.md Normal file

@@ -0,0 +1,6 @@
Remote Driver
=============
## Configuration
## Usage