- It specifies whether the network driver can
provide connectivity to containers across hosts.
- Until now, the driver's data scope was
being overloaded with this notion.
- The driver scope information is still valid:
it defines whether allocation of the network
resources can be done globally or only locally.
- With the scope network option, the user can now
force a network to be swarm scoped
regardless of the driver's data scope.
- If the network is configured as swarm scoped
and the network driver is multihost capable,
a network DB instance will be launched for it (see the sketch below).
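A minimal usage sketch of forcing the scope (it assumes the NetworkOptionScope option and the datastore scope constant; exact names may differ from the shipped API):

    package main

    import (
        "github.com/docker/libnetwork"
        "github.com/docker/libnetwork/datastore"
    )

    // createSwarmScopedNetwork forces swarm scope regardless of the
    // driver's data scope; if the driver is multihost capable, a
    // network DB instance will be launched for the network.
    func createSwarmScopedNetwork(c libnetwork.NetworkController) (libnetwork.Network, error) {
        return c.NewNetwork("overlay", "mynet", "",
            libnetwork.NetworkOptionScope(datastore.SwarmScope))
    }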
Signed-off-by: Alessandro Boch <aboch@docker.com>
With the introduction of a generic driver gossip in libnetwork it is no
longer necessary for drivers to run their own gossip protocol (as the
overlay driver currently does); they can instead rely on the gossip
instance run centrally in libnetwork. Achieving this requires certain
enhancements to the driver API, which this API change provides.
The new API provides a way for drivers to register interest in table
names of their choice by returning a list of those names in their
response to CreateNetwork. By doing so they will be notified, via the
newly added EventNotify call, whenever a CRUD operation happens on one
of the tables they registered interest in.
Drivers themselves can add entries to any table during a Join call by
invoking the AddTableEntry method any number of times during that
call. The lifetime of these entries is the same as the endpoint's: as soon
as the container leaves the endpoint, the entries added by the driver
during that endpoint's Join call are automatically removed by
libnetwork. This removal may trigger a deletion notification to every
driver instance in the cluster that has registered interest in that
table.
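A hedged sketch of a driver using both sides of this API (only the relevant methods are shown; the signatures are modeled on driverapi and may differ in detail, and the table name and payload are illustrative):

    package mydriver

    import "github.com/docker/libnetwork/driverapi"

    type driver struct{}

    // Join advertises this endpoint in a driver-chosen table. Entries
    // added here live only as long as the endpoint: libnetwork removes
    // them automatically when the container leaves the endpoint.
    func (d *driver) Join(nid, eid, sboxKey string, jinfo driverapi.JoinInfo, options map[string]interface{}) error {
        return jinfo.AddTableEntry("endpoint_table", eid, []byte("reachable-via-host1"))
    }

    // EventNotify fires when a CRUD operation happens on a table this
    // driver registered interest in via its CreateNetwork response.
    func (d *driver) EventNotify(event driverapi.EventType, nid, tableName, key string, value []byte) {
        if event == driverapi.Create {
            // React to an entry created by a peer driver instance.
        }
    }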
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Because overlay is a builtin driver, the global allocation of overlay
resources is probably going to happen on a different node (a single
node) while the actual plumbing of the network is probably going to happen
on all nodes, so it makes sense to split the allocation functionality
into two different packages. The central component (this package) only
implements the NetworkAllocate/NetworkFree APIs while the distributed
component (the existing overlay driver) implements the rest of the driver
API. This way we can reduce the overall memory footprint.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Added NetworkAllocate and NetworkFree APIs to the list of
driver APIs. The intention of these APIs is to provide a
centralized way of allocating and freeing network resources
for a cross-host network (see the sketch below).
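A hedged sketch of the intended contract (signatures modeled on the description above; they may not match the final driverapi):

    package driverapi

    // NetworkAllocator is the centralized, allocation-only side of the
    // driver split; IPAMData is the existing driverapi IPAM type.
    type NetworkAllocator interface {
        // NetworkAllocate reserves cluster-wide resources (e.g. a VNI)
        // for the network and returns driver-specific options that the
        // CreateNetwork call on every host can consume.
        NetworkAllocate(nid string, options map[string]string, ipV4Data, ipV6Data []IPAMData) (map[string]string, error)

        // NetworkFree releases whatever NetworkAllocate reserved.
        NetworkFree(nid string) error
    }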
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Move DiscoverNew() and DiscoverDelete() methods into the new interface
- Add DatastoreUpdate notification
- Now this interface can be implemented by any driver, not only network drivers (see the sketch below)
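A hedged sketch of the resulting interface (names follow libnetwork's discoverapi layout but may differ in detail):

    package discoverapi

    // DiscoveryType distinguishes the kind of discovery event.
    type DiscoveryType int

    const (
        // NodeDiscovery notifies about a node joining or leaving.
        NodeDiscovery DiscoveryType = iota + 1
        // DatastoreConfig notifies about a datastore update.
        DatastoreConfig
    )

    // Discover can be implemented by any driver, not only network
    // drivers, since it carries no network-specific types.
    type Discover interface {
        // DiscoverNew is a notification of a new discovery event,
        // e.g. a new node joining a cluster or a datastore update.
        DiscoverNew(dType DiscoveryType, data interface{}) error

        // DiscoverDelete is a notification of a discovery delete event.
        DiscoverDelete(dType DiscoveryType, data interface{}) error
    }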
Signed-off-by: Alessandro Boch <aboch@docker.com>
* Integrated the hostdiscovery package with the new Docker Discovery
* Integrated the hostdiscovery package with the libnetwork core
* Removed the libnetwork_discovery tag
* Introduced driver APIs for discovery events
* Moved the overlay driver to make use of the discovery events
* Using the Docker Discovery service
* Changed the integration tests to make use of the new discovery
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Currently the driver configuration is pushed through a separate
API. This makes driver configuration possible at any arbitrary
time, which unnecessarily complicates the driver implementation.
More importantly, the driver does not get access to its
configuration before it does the handshake with libnetwork.
This makes the internal drivers a little different from
external plugins, which can get their configuration before the handshake
with libnetwork.
This PR attempts to fix that mismatch between internal drivers and
external plugins (see the sketch below).
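A hedged sketch of the fixed flow, with configuration handed over at registration time (newDriver and configure are hypothetical helpers; the Init signature may differ):

    package bridge

    import "github.com/docker/libnetwork/driverapi"

    // Init receives the driver's configuration up front, before any
    // handshake-time interaction, mirroring how external plugins
    // obtain their configuration.
    func Init(dc driverapi.DriverCallback, config map[string]interface{}) error {
        d := newDriver() // hypothetical constructor
        if err := d.configure(config); err != nil { // hypothetical helper
            return err
        }
        return dc.RegisterDriver("bridge", d, driverapi.Capability{}) // capability elided in this sketch
    }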
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Maps 1 to 1 with the container's networking stack
- It holds the container-specific network options which
before were incorrectly owned by Endpoint.
- Sandbox creation is no longer coupled with Endpoint Join;
sandbox and endpoint now have separate lifecycles (see the sketch below).
- LeaveAll is naturally replaced by Sandbox.Delete
- Some pkg and file renaming in order to have a clear
mapping between structure name and entity ("sandbox")
- Revisited hosts and resolv.conf handling
  - Removed from the JoinInfo interface the capability of setting the hosts and resolv.conf paths
  - Changed etchosts.Build() to first write the search domains and then the nameservers
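A minimal usage sketch of the decoupled lifecycles (it assumes the NewSandbox/CreateEndpoint/Join calls shown; exact signatures may differ):

    package main

    import "github.com/docker/libnetwork"

    func attach(c libnetwork.NetworkController, n libnetwork.Network, containerID string) error {
        // The sandbox maps 1:1 to the container's networking stack and
        // is created independently of any endpoint.
        sb, err := c.NewSandbox(containerID)
        if err != nil {
            return err
        }

        ep, err := n.CreateEndpoint("ep1")
        if err != nil {
            return err
        }

        // Join attaches the endpoint to the sandbox. Tearing down the
        // sandbox is a separate operation: Sandbox.Delete, the natural
        // replacement for LeaveAll.
        return ep.Join(sb)
    }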
Signed-off-by: Alessandro Boch <aboch@docker.com>
For the moment in 1.7.1, since we provide a resolv.conf set API
to the driver, honor it so that for the host driver we can use
the host's /etc/resolv.conf file as is rather than putting the
contents through the filtering logic.
It should be noted that the driver-side capability to set the
resolv.conf file is most likely going to go away in the future,
but this should be fine for 1.7.1.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the store makes use of a static isReservedNetwork check to decide
whether a network needs to be stored in the distributed store or not. But it
is better if the check is not static and is instead determined by the
capability of the driver that backs the network.
Hence this introduces a new capability mechanism through which a driver can
express its capabilities during registration. The first such capability being
made use of is Scope (see the sketch below); more can be added in the future.
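A hedged sketch of a driver expressing the Scope capability at registration (modeled on driverapi; newDriver is a hypothetical constructor):

    package overlay

    import (
        "github.com/docker/libnetwork/datastore"
        "github.com/docker/libnetwork/driverapi"
    )

    // Init registers the driver together with its capability, letting
    // the store decide where to persist the network from the driver's
    // data scope instead of a static isReservedNetwork check.
    func Init(dc driverapi.DriverCallback, config map[string]interface{}) error {
        c := driverapi.Capability{
            DataScope: datastore.GlobalScope, // cross-host driver
        }
        return dc.RegisterDriver("overlay", newDriver(), c) // newDriver is hypothetical
    }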
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Refactored the driver API so that it aligns well with the design
of the endpoint lifecycle becoming decoupled from the container lifecycle.
Introduced Go interfaces to obtain address information during CreateEndpoint.
Go interfaces are also used to get data from the driver during Join.
This sort of design hides the libnetwork-specific type details from drivers.
Another adjustment is to provide a list of interfaces during CreateEndpoint. The
goal of this is many-fold:
* To indicate to the driver that an IP address has been assigned by some other
entity (like a user wanting to use their own static IP for an endpoint/container)
and to ask the driver to honor it. The driver may reject this configuration
and return an error, but it may not try to allocate an IP address and override
the passed one.
* To indicate to the driver that the IP address has already been allocated once
for this endpoint by an instance of the same driver on some docker host
in the cluster, and that this is merely a notification about that endpoint and the
allocated resources.
* In case the list of interfaces is empty, the driver is required to allocate and
assign IP addresses for this endpoint (see the sketch below).
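A hedged sketch of the resulting CreateEndpoint contract (the InterfaceInfo handling and helpers are illustrative, not the exact API):

    package mydriver

    import (
        "fmt"

        "github.com/docker/libnetwork/driverapi"
    )

    func (d *driver) CreateEndpoint(nid, eid string, ifaces []driverapi.InterfaceInfo, options map[string]interface{}) error {
        if len(ifaces) == 0 {
            // Empty list: the driver must allocate and assign addresses.
            return d.allocateAndAssign(nid, eid) // hypothetical helper
        }
        for _, iface := range ifaces {
            // Addresses were assigned elsewhere (user-supplied static IP
            // or a peer instance of this driver on another host): honor
            // them or reject with an error, but never override them.
            if !d.canHonor(iface) { // hypothetical validation
                return fmt.Errorf("cannot honor preassigned address for endpoint %s", eid)
            }
        }
        return nil
    }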
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
In the present code, each driver package provides a `New()` method
that constructs a driver of its type; the driver is then registered
with the controller.
However, this is not suitable for the `drivers/remote` package, since
it does not provide a (singleton) driver, but a mechanism for drivers
to be added dynamically. As a result, the implementation is oddly
dual-purpose, and a spurious `"remote"` driver is added to the
controller's list of available drivers.
Instead, it is better to provide the registration callback to each
package and let it register its own driver or drivers. That way, the
singleton driver packages can construct one and register it, and the
remote package can hook the callback up with whatever the dynamic
driver mechanism turns out to be.
NB there are some method signature changes; in particular to
controller.New, which can return an error if the built-in driver
packages fail to initialise.
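A hedged sketch of the controller-side wiring (names are illustrative; the built-in packages and their Init signatures may differ):

    package libnetwork

    import (
        "github.com/docker/libnetwork/driverapi"
        "github.com/docker/libnetwork/drivers/bridge"
        "github.com/docker/libnetwork/drivers/remote"
    )

    // initDrivers hands the registration callback to every built-in
    // package; each registers its own driver(s), and the remote package
    // hooks the callback into its dynamic plugin mechanism. An error
    // here surfaces through controller.New.
    func (c *controller) initDrivers() error {
        for _, init := range []func(driverapi.DriverCallback) error{
            bridge.Init,
            remote.Init,
        } {
            if err := init(c); err != nil {
                return err
            }
        }
        return nil
    }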
Signed-off-by: Michael Bridgen <mikeb@squaremobius.net>
This commit brings in functionality for remote drivers to register
with libnetwork. The built-in remote driver is responsible for making
the actual "remote" plugins available.
Having such a mechanism keeps the libnetwork core independent of any
external plugin mechanism and also keeps the libnetwork NB APIs free of the
Driver interface (see the sketch below).
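A hedged sketch of how the built-in remote driver could surface dynamically discovered plugins (the plugins.Handle hook is modeled on Docker's plugin package; newDriver is a hypothetical wrapper around the plugin client):

    package remote

    import (
        "github.com/docker/docker/pkg/plugins"
        "github.com/docker/libnetwork/driverapi"
    )

    // Init registers no driver itself; instead it arranges for each
    // discovered network plugin to be registered under its own name,
    // keeping the "remote" mechanism out of the available-driver list.
    func Init(dc driverapi.DriverCallback) error {
        plugins.Handle(driverapi.NetworkPluginEndpointType, func(name string, client *plugins.Client) {
            _ = dc.RegisterDriver(name, newDriver(name, client)) // error handling elided in this sketch
        })
        return nil
    }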
Signed-off-by: Madhu Venugopal <madhu@docker.com>