The CopyTo function for joininfo does not copy the driver table entries,
which are then missing when the endpoint is re-read from the store cache.
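A minimal sketch of the shape of the fix, using hypothetical type and
field names (tableEntry, driverTableEntries); the point is only that
CopyTo must carry the driver table entries across:

    import "net"

    type tableEntry struct {
        tableName string
        key       string
        value     []byte
    }

    type endpointJoinInfo struct {
        gw                 net.IP
        driverTableEntries []*tableEntry
    }

    func (epj *endpointJoinInfo) CopyTo(dst *endpointJoinInfo) error {
        dst.gw = epj.gw
        // The fix: copy the driver table entries too, so they survive the
        // endpoint being re-read from the store cache.
        dst.driverTableEntries = make([]*tableEntry, len(epj.driverTableEntries))
        copy(dst.driverTableEntries, epj.driverTableEntries)
        return nil
    }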
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
libnetwork agent mode is a mode in which libnetwork acts as a local
agent for network and discovery plumbing alone, while state management
is done elsewhere. This completes the support for making libnetwork and
its associated drivers completely independent of a k/v store (if
needed), working purely on the state information passed along by some
external controller or manager. This does not mean that libnetwork's
support for decentralized state management via a k/v store is removed.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
With the introduction of driver-generic gossip in libnetwork, it is no
longer necessary for drivers to run their own gossip protocol (as the
overlay driver currently does); they can instead rely on the gossip
instance run centrally by libnetwork. Achieving this requires certain
enhancements to the driver api, which this api change provides.
The new api lets drivers register interest in tables of their choice by
returning a list of table names in the response to CreateNetwork. Once
they have done so, they are notified via the newly added EventNotify
call whenever a CRUD operation happens on any of those tables.
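A rough Go sketch of how a driver might use this; the signatures below
are simplified stand-ins rather than the exact driver api, and
"endpoint_table" is just an example table name:

    // EventType describes the CRUD operation performed on a table entry.
    type EventType int

    const (
        Create EventType = iota
        Update
        Delete
    )

    type driver struct{}

    // CreateNetwork returns the tables this driver wants notifications for.
    func (d *driver) CreateNetwork(nid string, options map[string]interface{}) ([]string, error) {
        // ... program the network, then register interest in a table.
        return []string{"endpoint_table"}, nil
    }

    // EventNotify is invoked by libnetwork whenever a CRUD operation
    // happens on a table the driver registered interest in.
    func (d *driver) EventNotify(ev EventType, nid, tableName, key string, value []byte) {
        if tableName != "endpoint_table" {
            return
        }
        // React to the created/updated/deleted entry, e.g. update peer
        // forwarding state.
    }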
Drivers themselves can add entries to any table by invoking the
AddTableEntry method any number of times during the Join call. These
entries share the lifetime of the endpoint itself: as soon as the
container leaves the endpoint, the entries added by the driver during
that endpoint's Join call are automatically removed by libnetwork. This
removal may in turn trigger a deletion notification to every driver
instance in the cluster that has registered interest in that table.
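Continuing the sketch above, the driver side might look roughly as
follows; JoinInfo is only a placeholder for whatever interface the Join
call actually receives, and the table entry value is made up:

    // JoinInfo stands in for the object handed to the driver on Join.
    type JoinInfo interface {
        AddTableEntry(tableName string, key string, value []byte) error
    }

    func (d *driver) Join(nid, eid, sboxKey string, jinfo JoinInfo, options map[string]interface{}) error {
        // Advertise this endpoint to peers. The entry lives only as long
        // as the endpoint's membership: when the container leaves,
        // libnetwork removes it automatically, and peers interested in
        // "endpoint_table" may receive a Delete via EventNotify.
        return jinfo.AddTableEntry("endpoint_table", eid, []byte("driver-specific-state"))
    }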
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Stale sandboxes and endpoints are cleaned up during controller init.
Since we reuse the exact same code path for sandbox and endpoint delete,
the cleanup tries to load the plugin, which causes daemon startup
timeouts because the external plugin containers cannot be loaded at that
time. Since the cleanup only touches libnetwork's core state, we can
force-delete the sandbox and endpoint even if the driver is not loaded.
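A rough sketch of the idea, not the actual libnetwork code; the endpoint
fields and the getDriver/deleteFromStore helpers are hypothetical. With
force set, an unreachable driver no longer aborts the cleanup of the
core state:

    func (ep *endpoint) delete(force bool) error {
        d, err := ep.getDriver() // hypothetical plugin lookup
        if err != nil {
            if !force {
                return err
            }
            // The plugin container is not running yet during daemon
            // startup; clean up libnetwork's own state anyway.
            d = nil
        }
        if d != nil {
            if err := d.DeleteEndpoint(ep.nid, ep.id); err != nil && !force {
                return err
            }
        }
        return ep.deleteFromStore() // hypothetical removal of core state
    }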
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Always-on watching of networks and endpoints can
affect the scalability of the cluster beyond a few nodes.
Remove proactive watching and watch only the objects
you are interested in.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the endpoint data model allows multiple
interfaces per endpoint. This is overkill, since
there is no real use case for it. Remove it to
eliminate unnecessary complexity from the code.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Maps 1:1 with the container's networking stack
- Holds the container-specific network options which were previously
  (and incorrectly) owned by Endpoint
- Sandbox creation is no longer coupled with Endpoint Join; sandbox and
  endpoint now have separate lifecycles (see the sketch after this list)
- LeaveAll is naturally replaced by Sandbox.Delete
- Some package and file renaming to give a clear mapping between
  structure name and entity ("sandbox")
- Revisited hosts and resolv.conf handling
- Removed from the JoinInfo interface the capability of setting hosts
  and resolv.conf paths
- Changed etchosts.Build() to write the search domains first and then
  the nameservers
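A sketch of the resulting flow; option arguments are elided and the
exact signatures may differ slightly at this revision:

    import "github.com/docker/libnetwork"

    func setupContainerNetworking(controller libnetwork.NetworkController, network libnetwork.Network) error {
        // A sandbox now has its own lifecycle, created independently of
        // any endpoint, mapping 1:1 to the container's networking stack.
        sb, err := controller.NewSandbox("container1")
        if err != nil {
            return err
        }
        // Endpoints are created on a network, also independently.
        ep, err := network.CreateEndpoint("ep1")
        if err != nil {
            return err
        }
        // Join wires the endpoint into the sandbox ...
        if err := ep.Join(sb); err != nil {
            return err
        }
        // ... and Sandbox.Delete later tears down the container's
        // networking, naturally replacing the old LeaveAll.
        return sb.Delete()
    }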
Signed-off-by: Alessandro Boch <aboch@docker.com>
* Removed network from being marshalled (it is part of the key anyway)
* Reworked the watch function to handle the container-id on endpoints
* Included ContainerInfo in the marshalled data, since it needs to be
  synchronized
* Resolved multiple race issues by introducing data locks
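The shapes below are purely hypothetical, only to illustrate the
marshalling and locking points above:

    import (
        "encoding/json"
        "sync"
    )

    type network struct{} // placeholder for the parent network object

    type containerInfo struct {
        ID string `json:"id"`
        // ... container state that needs to be synchronized via the store
    }

    type endpoint struct {
        sync.Mutex // the new data lock guarding the fields below

        name      string
        network   *network       // part of the store key; not marshalled
        container *containerInfo // now included in the marshalled form
    }

    func (ep *endpoint) MarshalJSON() ([]byte, error) {
        ep.Lock()
        defer ep.Unlock()
        return json.Marshal(map[string]interface{}{
            "name":      ep.name,
            "container": ep.container,
        })
    }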
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Currently the driver api allows the driver to specify the full name of
the interface inside the container. This is not appropriate, since the
driver does not have a full view of the sandbox and so cannot correctly
allocate an unambiguous interface name. With this PR the driver instead
specifies only a prefix for the name, and the libnetwork and sandbox
layers disambiguate it with an appropriate suffix.
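A sketch with a stand-in interface (InterfaceNameInfo here is only
illustrative, not the exact driver api): the driver proposes a prefix,
and the sandbox layer appends a suffix to make the final name
unambiguous.

    import "fmt"

    // InterfaceNameInfo stands in for whatever the driver is handed
    // during Join to name the container-side interface. The driver might
    // call SetNames("veth0a1b2c", "eth"): the host-side name in full, but
    // only a prefix for the name inside the container.
    type InterfaceNameInfo interface {
        SetNames(srcName, dstPrefix string) error
    }

    // The sandbox layer then disambiguates the prefix with the next free
    // index, e.g. "eth" -> "eth0", "eth1", ...
    func finalName(dstPrefix string, nextIndex int) string {
        return fmt.Sprintf("%s%d", dstPrefix, nextIndex)
    }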
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
This is needed to decouple types from netutils, which has Linux
dependencies. This way, client code that needs network types can just
pull in the types package, which keeps the client code platform
agnostic.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Refactored the driver api so that it aligns with the design of the
endpoint lifecycle becoming decoupled from the container lifecycle.
Introduced Go interfaces to obtain address information during
CreateEndpoint. Go interfaces are also used to get data from the driver
during Join. This design hides libnetwork-specific type details from
the drivers.
Another adjustment is to provide a list of interfaces during
CreateEndpoint. The goals of this are several (see the sketch after
this list):
* To indicate to the driver that an IP address has been assigned by
some other entity (such as a user wanting to use their own static IP
for an endpoint/container) and to ask the driver to honor it. The
driver may reject this configuration and return an error, but it may
not try to allocate an IP address and override the passed one.
* To indicate to the driver that an IP address has already been
allocated for this endpoint by an instance of the same driver on some
docker host in the cluster, and that this is merely a notification
about that endpoint and the allocated resources.
* If the list of interfaces is empty, the driver is required to
allocate and assign IP addresses for this endpoint.
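A sketch of how a driver might honor these rules; InterfaceInfo and the
driver type are simplified stand-ins, and allocateAndAssign, acceptable
and programEndpoint are hypothetical helpers:

    import (
        "fmt"
        "net"
    )

    // InterfaceInfo carries addressing that may already be assigned.
    type InterfaceInfo interface {
        Address() *net.IPNet
        MacAddress() net.HardwareAddr
    }

    type driver struct{}

    func (d *driver) CreateEndpoint(nid, eid string, ifaces []InterfaceInfo, options map[string]interface{}) error {
        if len(ifaces) == 0 {
            // Nothing was passed in: the driver must allocate and assign
            // the IP addresses for this endpoint itself.
            return d.allocateAndAssign(nid, eid) // hypothetical helper
        }
        // Addresses were assigned elsewhere (a user's static IP, or another
        // instance of this driver on a different host). Honor them or
        // reject with an error; never override them with a new allocation.
        for _, iface := range ifaces {
            if !d.acceptable(iface.Address()) { // hypothetical validation
                return fmt.Errorf("cannot use address %s for endpoint %s", iface.Address(), eid)
            }
        }
        return d.programEndpoint(nid, eid, ifaces) // hypothetical helper
    }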
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>