A local endpoint is known to the watch database only
during Join. But the same endpoint can be known to the
watch database as a remote endpoint well before the Join,
because CreateEndpoint updates the endpoint in the store.
So on Join, once we learn that the endpoint is in fact
local, remove it from the remote endpoint list and add it
to the local endpoint list.
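
Roughly, the re-classification looks like the following minimal sketch, assuming a hypothetical watch-database type with maps keyed by endpoint ID (the names are illustrative, not libnetwork's actual internals):

```go
package main

import (
	"fmt"
	"sync"
)

// watchDB stands in for the watch database; the maps and names are
// illustrative, not libnetwork's actual internals.
type watchDB struct {
	mu              sync.Mutex
	remoteEndpoints map[string]string // endpoint ID -> endpoint name
	localEndpoints  map[string]string
}

// onJoin runs when Join reveals that an endpoint is local: if the store
// already delivered it as remote, drop it from the remote list first.
func (db *watchDB) onJoin(id, name string) {
	db.mu.Lock()
	defer db.mu.Unlock()
	if n, ok := db.remoteEndpoints[id]; ok {
		delete(db.remoteEndpoints, id)
		name = n
	}
	db.localEndpoints[id] = name
}

func main() {
	db := &watchDB{
		remoteEndpoints: map[string]string{"ep1": "web"}, // seen via CreateEndpoint
		localEndpoints:  map[string]string{},
	}
	db.onJoin("ep1", "web")
	fmt.Printf("remote=%d local=%d\n", len(db.remoteEndpoints), len(db.localEndpoints))
}
```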
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When we boot up, clean up all dangling local
endpoints, since they are not needed anymore.
The only way such endpoints can exist is if the process
went down ungracefully after an endpoint was
created but before the join was successful.
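
A rough sketch of the boot-time sweep, against an illustrative in-memory stand-in for the local store (the real code walks libnetwork's persisted endpoints):

```go
package main

import "fmt"

// endpoint and localStore are illustrative stand-ins for the persisted
// local-store records.
type endpoint struct {
	id     string
	joined bool // set once Join succeeds
}

type localStore struct {
	endpoints map[string]*endpoint
}

// cleanupDanglingEndpoints deletes endpoints that were created but never
// joined; at boot time such records can only be leftovers of an ungraceful
// shutdown between CreateEndpoint and a successful Join.
func cleanupDanglingEndpoints(s *localStore) {
	for id, ep := range s.endpoints {
		if !ep.joined {
			delete(s.endpoints, id)
			fmt.Println("removed dangling endpoint", id)
		}
	}
}

func main() {
	s := &localStore{endpoints: map[string]*endpoint{
		"a": {id: "a", joined: true},
		"b": {id: "b", joined: false}, // dangling
	}}
	cleanupDanglingEndpoints(s)
}
```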
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the endpoint count is maintained as part of
the network object, and it gets updated frequently
while the rest of the network object is quite stable.
Because of the frequent updates to the endpoint count,
the whole network object is marshalled and unmarshalled
frequently. This causes a lot of churn and transient
memory usage. Fix this by moving the endpoint count into
a separate object so that only that object gets updated.
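
Conceptually, the split looks like this sketch; the field and key names are assumptions, not libnetwork's exact ones:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// endpointCnt carries only the counter, under its own store key, so the
// frequent increments re-marshal a few bytes instead of the whole network.
type endpointCnt struct {
	NetworkID string `json:"network_id"`
	Count     uint64 `json:"count"`
}

// Key keeps the counter adjacent to, but separate from, the network object.
func (ec *endpointCnt) Key() []string {
	return []string{"endpoint_count", ec.NetworkID}
}

// Value is the only payload that gets rewritten on every endpoint add/remove.
func (ec *endpointCnt) Value() []byte {
	b, _ := json.Marshal(ec)
	return b
}

func main() {
	ec := &endpointCnt{NetworkID: "n1", Count: 3}
	fmt.Println(ec.Key(), string(ec.Value()))
}
```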
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Today we try to increment/decrement the endpoint count
only once, even if the update fails with a key-modified
error. In case of a key-modified error we should retry,
to allow the update to succeed.
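
A minimal retry sketch, assuming a store whose atomic update fails with a sentinel key-modified error when the object changed underneath us:

```go
package main

import (
	"errors"
	"fmt"
)

// errKeyModified mimics the datastore's sentinel for a lost atomic update.
var errKeyModified = errors.New("key modified")

// updateEpCnt retries the increment/decrement until it stops racing with
// concurrent writers, instead of attempting it only once.
func updateEpCnt(read func() (uint64, error), write func(old, next uint64) error, delta int64) error {
	for {
		cnt, err := read()
		if err != nil {
			return err
		}
		err = write(cnt, uint64(int64(cnt)+delta))
		if err == nil {
			return nil
		}
		if !errors.Is(err, errKeyModified) {
			return err // a real failure, not a lost race
		}
		// the key changed underneath us: re-read and retry
	}
}

func main() {
	cnt, failures := uint64(5), 1
	read := func() (uint64, error) { return cnt, nil }
	write := func(old, next uint64) error {
		if failures > 0 { // simulate one concurrent modification
			failures--
			return errKeyModified
		}
		cnt = next
		return nil
	}
	if err := updateEpCnt(read, write, +1); err == nil {
		fmt.Println("count is now", cnt) // 6, after one retry
	}
}
```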
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Always-on watching of networks and endpoints can
affect scalability of the cluster beyond a few nodes.
Remove proactive watching and watch only the objects
you are interested in.
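
The shape of on-demand watching, sketched against a toy libkv-style store (the channel-based fake here is purely illustrative):

```go
package main

import "fmt"

type kvPair struct {
	Key   string
	Value []byte
}

// fakeStore stands in for a libkv-style backend that can follow one key.
type fakeStore struct {
	updates chan *kvPair
}

// Watch follows a single key until stop is closed, instead of keeping a
// cluster-wide watch on every network and endpoint alive at all times.
func (s *fakeStore) Watch(key string, stop <-chan struct{}) <-chan *kvPair {
	out := make(chan *kvPair)
	go func() {
		defer close(out)
		for {
			select {
			case kv := <-s.updates:
				if kv.Key == key { // deliver only the object we asked for
					out <- kv
				}
			case <-stop:
				return
			}
		}
	}()
	return out
}

func main() {
	s := &fakeStore{updates: make(chan *kvPair, 1)}
	stop := make(chan struct{})
	ch := s.Watch("endpoints/ep1", stop)
	s.updates <- &kvPair{Key: "endpoints/ep1"}
	fmt.Println("update for", (<-ch).Key)
	close(stop)
}
```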
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
1. Don't save local-scope endpoints to the local store for now.
2. Add common functions updateToStore/deleteFromStore to store KVObjects (see the sketch after this list).
3. Merge `getNetworksFromGlobalStore` and `getNetworksFromLocalStore`.
4. Add an `n.isGlobalScoped` check before `n.watchEndpoints` in `addNetwork`.
5. Fix integration tests.
6. Fix the test failure in drivers/remote/driver_test.go.
7. Restore the network to the store if deleteNetwork failed.
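
The common helpers from item 2 might look like the following sketch; the kvObject and dataStore interfaces here are simplified stand-ins for libnetwork's datastore types:

```go
package main

import "fmt"

// kvObject is the minimal shape shared by networks and endpoints; the real
// interface lives in libnetwork's datastore package and is richer than this.
type kvObject interface {
	Key() []string
	Value() []byte
}

type dataStore interface {
	PutObject(o kvObject) error
	DeleteObject(o kvObject) error
}

// updateToStore gives every KVObject one common persistence path.
func updateToStore(ds dataStore, o kvObject) error {
	if err := ds.PutObject(o); err != nil {
		return fmt.Errorf("failed to update store for %v: %v", o.Key(), err)
	}
	return nil
}

// deleteFromStore is the matching common deletion path.
func deleteFromStore(ds dataStore, o kvObject) error {
	if err := ds.DeleteObject(o); err != nil {
		return fmt.Errorf("failed to delete from store for %v: %v", o.Key(), err)
	}
	return nil
}

type network struct{ id string }

func (n *network) Key() []string { return []string{"network", n.id} }
func (n *network) Value() []byte { return []byte(n.id) }

type memStore struct{ objs map[string][]byte }

func (m *memStore) PutObject(o kvObject) error {
	m.objs[fmt.Sprint(o.Key())] = o.Value()
	return nil
}

func (m *memStore) DeleteObject(o kvObject) error {
	delete(m.objs, fmt.Sprint(o.Key()))
	return nil
}

func main() {
	ds := &memStore{objs: map[string][]byte{}}
	n := &network{id: "n1"}
	_ = updateToStore(ds, n)
	_ = deleteFromStore(ds, n)
	fmt.Println("objects left:", len(ds.objs))
}
```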
- Maps 1:1 with the container's networking stack.
- Holds the container-specific network options which
  were previously (incorrectly) owned by Endpoint.
- Sandbox creation is no longer coupled with Endpoint Join;
  sandbox and endpoint now have separate lifecycles (see the sketch after this list).
- LeaveAll is naturally replaced by Sandbox.Delete.
- Some package and file renaming in order to have a clear
  mapping between structure name and entity ("sandbox").
- Revisited hosts and resolv.conf handling.
- Removed from the JoinInfo interface the capability of setting hosts and resolv.conf paths.
- Changed etchosts.Build() to first write the search domains and then the nameservers.
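
A hedged sketch of the decoupled lifecycle, using the sandbox-era libnetwork API as I recall it (exact signatures and option names may differ across versions):

```go
package main

import (
	"log"

	"github.com/docker/libnetwork"
)

func main() {
	controller, err := libnetwork.New()
	if err != nil {
		log.Fatal(err)
	}

	n, err := controller.NewNetwork("bridge", "net1")
	if err != nil {
		log.Fatal(err)
	}

	ep, err := n.CreateEndpoint("ep1")
	if err != nil {
		log.Fatal(err)
	}

	// Sandbox creation is independent of any endpoint...
	sb, err := controller.NewSandbox("container1")
	if err != nil {
		log.Fatal(err)
	}

	// ...and Join now attaches an endpoint to an existing sandbox.
	if err := ep.Join(sb); err != nil {
		log.Fatal(err)
	}

	// Sandbox.Delete replaces LeaveAll: it detaches all endpoints and
	// destroys the container's networking stack.
	if err := sb.Delete(); err != nil {
		log.Fatal(err)
	}
}
```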
Signed-off-by: Alessandro Boch <aboch@docker.com>
In that commit, AtomicPutCreate takes previous = nil to atomically create keys
that don't exist. We need a create operation that is atomic, to prevent races
between multiple libnetwork instances creating the same object (a sketch follows
the list below).
Previously, we just created new KVs with an index of 0 and wrote them to the
datastore. Consul accepts this behaviour and interprets an index of 0 as
non-existing, but other backends do not.
- Add Exists() to the KV interface. SetIndex() should also modify a KV so
  that it exists.
- Call SetIndex() from within the GetObject() method on the DataStore interface.
- This ensures objects have up-to-date values for exists and index.
- Add SetValue() to the KV interface. This allows implementers to define
  their own methods to marshal and unmarshal (as bitseq and allocator already do).
- Update existing users of the DataStore (endpoint, network, bitseq,
  allocator, ov_network) to the new interfaces.
- Fix unit tests.
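
For illustration, here is the atomic-create pattern expressed directly against libkv; the commit's AtomicPutCreate is libnetwork's own datastore wrapper, so this shows the underlying pattern rather than the exact code path:

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/libkv"
	"github.com/docker/libkv/store"
	"github.com/docker/libkv/store/consul"
)

// atomicCreate creates key only if it does not exist yet: previous == nil
// turns AtomicPut into a compare-and-create, so two libnetwork instances
// racing on the same object cannot both succeed.
func atomicCreate(kv store.Store, key string, value []byte) error {
	_, _, err := kv.AtomicPut(key, value, nil, nil)
	if err == store.ErrKeyExists {
		return fmt.Errorf("%s was already created by another instance", key)
	}
	return err
}

func main() {
	consul.Register()
	kv, err := libkv.NewStore(store.CONSUL, []string{"localhost:8500"}, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := atomicCreate(kv, "network/n1", []byte("{}")); err != nil {
		log.Fatal(err)
	}
}
```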
Currently we rely on the watch to catch up after init. But there could be
a small time window in which we end up in a race condition on
network creates. By reading and populating networks during init, we
avoid any such conditions, especially for default network handling.
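
A sketch of the eager population step, with an illustrative store interface:

```go
package main

import "fmt"

type network struct{ id, name string }

// netStore is an illustrative stand-in for the global datastore.
type netStore interface {
	List() ([]*network, error)
}

type controller struct {
	networks map[string]*network
}

// initNetworks reads every persisted network up front, so a network created
// by another node before our watch is established is still visible locally;
// relying on the watch alone leaves a window where e.g. the default network
// could be created twice.
func (c *controller) initNetworks(s netStore) error {
	nws, err := s.List()
	if err != nil {
		return err
	}
	for _, n := range nws {
		c.networks[n.id] = n
	}
	return nil
}

type memNetStore struct{ nws []*network }

func (m *memNetStore) List() ([]*network, error) { return m.nws, nil }

func main() {
	c := &controller{networks: map[string]*network{}}
	_ = c.initNetworks(&memNetStore{nws: []*network{{id: "n1", name: "bridge"}}})
	fmt.Println("networks known at init:", len(c.networks))
}
```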
Signed-off-by: Madhu Venugopal <madhu@docker.com>
The configuration format for the docker runtime is based on daemon flags,
hence adjust the libnetwork configuration to accommodate it by moving
the TOML-based configuration to the dnet tool.
Also changed the controller configuration to be passed via options.
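
Configuring the controller through options might look like this sketch; the option constructors follow libnetwork's config package of that era and may differ in other versions:

```go
package main

import (
	"log"

	"github.com/docker/libnetwork"
	"github.com/docker/libnetwork/config"
)

func main() {
	// Each option mutates the controller's config; no TOML file is read.
	controller, err := libnetwork.New(
		config.OptionDefaultDriver("bridge"),
		config.OptionKVProvider("consul"),
		config.OptionKVProviderURL("localhost:8500"),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = controller
}
```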
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Currently the store makes use of a static isReservedNetwork check to decide
whether a network needs to be stored in the distributed store or not. But it
is better if the check is not static and is instead determined by the
capability of the driver that backs the network.
Hence introduce a new capability mechanism through which a driver can
express its capabilities during registration, and make use of the first such
capability: Scope. This can be expanded in the future for more such cases.
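
An illustrative sketch of the capability handshake, with simplified stand-ins for libnetwork's driverapi types:

```go
package main

import "fmt"

// capability is a simplified stand-in for the driver-declared capabilities.
type capability struct {
	Scope string // "local" or "global"
}

type driver struct{ name string }

type registry struct {
	caps map[string]capability
}

// RegisterDriver records the driver's self-declared capabilities at
// registration time, replacing the static isReservedNetwork check.
func (r *registry) RegisterDriver(networkType string, d *driver, c capability) {
	r.caps[networkType] = c
}

// storeForNetwork picks the datastore from the backing driver's scope.
func (r *registry) storeForNetwork(networkType string) string {
	if r.caps[networkType].Scope == "global" {
		return "distributed store"
	}
	return "local store"
}

func main() {
	r := &registry{caps: map[string]capability{}}
	r.RegisterDriver("overlay", &driver{name: "overlay"}, capability{Scope: "global"})
	fmt.Println(r.storeForNetwork("overlay"))
}
```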
Signed-off-by: Madhu Venugopal <madhu@docker.com>
* Removed the network from being marshalled (it is part of the key anyway)
* Reworked the watch function to handle the container ID on endpoints
* Included ContainerInfo in the marshalled data, since it needs to be synchronized
* Resolved multiple race issues by introducing data locks (see the sketch after this list)
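
A minimal sketch of the locking pattern, with illustrative names: shared endpoint state, such as the container info being synchronized, is only touched under a mutex:

```go
package main

import (
	"fmt"
	"sync"
)

type containerInfo struct{ id string }

// endpoint embeds a mutex so all access to the shared container info is
// serialized; the field names are illustrative, not libnetwork's.
type endpoint struct {
	sync.Mutex
	container *containerInfo
}

func (ep *endpoint) attach(id string) {
	ep.Lock()
	defer ep.Unlock()
	ep.container = &containerInfo{id: id}
}

func (ep *endpoint) containerID() string {
	ep.Lock()
	defer ep.Unlock()
	if ep.container == nil {
		return ""
	}
	return ep.container.id
}

func main() {
	ep := &endpoint{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ep.attach(fmt.Sprintf("c%d", i)) // concurrent writers, no race
		}(i)
	}
	wg.Wait()
	fmt.Println("last attached:", ep.containerID())
}
```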
Signed-off-by: Madhu Venugopal <madhu@docker.com>