This patch attempts to allow endpoints to complete servicing connections
while being removed from a service. The change adds a flag to the
endpoint.deleteServiceInfoFromCluster() method to indicate whether this
removal should fully remove connectivity through the load balancer
to the endpoint or should just disable directing further connections to
the endpoint. If the flag is 'false', then the load balancer assigns
a weight of 0 to the endpoint but does not remove it as a Linux load
balancing destination. It does remove the endpoint as a Docker load
balancing endpoint but tracks it in a special map of "disabled-but-not-
destroyed" load balancing endpoints. This allows traffic to continue
flowing, at least under Linux. If the flag is 'true', then the code
removes the endpoint entirely as a load balancing destination.
The sandbox.DisableService() method invokes deleteServiceInfoFromCluster()
with the flag set to 'false', while the endpoint.sbLeave() method invokes
it with the flag set to 'true' to complete the removal on endpoint
finalization. Renaming the endpoint invokes deleteServiceInfoFromCluster()
with the flag set to 'true' because renaming attempts to completely
remove and then re-add each endpoint service entry.
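A minimal sketch of the two call sites, with simplified stand-in types and
signatures (this is not the actual libnetwork code):

    package servicesketch

    // Minimal stand-in types; the real libnetwork types carry much more state.
    type endpoint struct{ name string }
    type sandbox struct{ endpoints []*endpoint }

    // deleteServiceInfoFromCluster sketches the new flag: fullRemove == false
    // de-weights the backend so it can keep draining existing connections,
    // while fullRemove == true removes it from load balancing entirely.
    func (ep *endpoint) deleteServiceInfoFromCluster(sb *sandbox, fullRemove bool, method string) error {
        // ... look up the service binding and pass fullRemove down to rmServiceBinding ...
        return nil
    }

    // DisableService path: disable the backends but let connections drain.
    func (sb *sandbox) DisableService() error {
        for _, ep := range sb.endpoints {
            if err := ep.deleteServiceInfoFromCluster(sb, false, "DisableService"); err != nil {
                return err
            }
        }
        return nil
    }

    // sbLeave path (endpoint finalization): complete the removal.
    func (ep *endpoint) sbLeave(sb *sandbox) error {
        return ep.deleteServiceInfoFromCluster(sb, true, "sbLeave")
    }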
The controller.rmServiceBinding() method, which carries out the operation,
similarly gets a new flag for whether to fully remove the endpoint. If
the flag is false, it moves the endpoint from the load balancing set to
the 'disabled' set. It then removes or de-weights the entry in the OS
load balancing table via network.rmLBBackend(), and it removes the
service entirely via that method only if there are no live or disabled
load balancing endpoints left.
Similarly network.addLBBackend() requires slight tweaking to properly
manage the disabled set.
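A bookkeeping sketch of that logic, using simplified, hypothetical types
(not the actual controller code):

    package servicesketch

    // A backend sits in either the active set or the
    // "disabled-but-not-destroyed" set.
    type loadBalancer struct {
        backends map[string]struct{} // active backend endpoint IDs
        disabled map[string]struct{} // disabled-but-not-destroyed endpoint IDs
    }

    func (lb *loadBalancer) rmServiceBinding(eid string, fullRemove bool) {
        if fullRemove {
            // Complete removal: drop the backend from both sets and remove
            // the OS load balancing entry outright.
            delete(lb.backends, eid)
            delete(lb.disabled, eid)
            // ... network.rmLBBackend(...) with service removal only when
            //     len(lb.backends)+len(lb.disabled) == 0
            return
        }
        // Disable only: move the backend to the disabled set; the OS entry
        // is de-weighted (e.g. IPVS weight 0 on Linux) rather than deleted.
        delete(lb.backends, eid)
        lb.disabled[eid] = struct{}{}
        // ... network.rmLBBackend(...) without removing the service
    }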
Finally, this change requires propagating the status of disabled
service endpoints via the networkDB. Accordingly, the patch includes
code to both generate and handle service update messages. It also
augments the service structure with a ServiceDisabled boolean to convey
whether an endpoint should ultimately be removed or just disabled.
This, naturally, required a rebuild of the protocol buffer code as well.
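Schematically, the gossiped service entry now carries the new flag (shown
here as a plain Go struct; the real type is generated from the protocol
buffer definition and the field set is illustrative):

    package servicesketch

    // ServiceDisabled tells receiving nodes whether to merely disable the
    // backend (let connections drain) or to remove it outright.
    type EndpointRecord struct {
        Name            string
        ServiceName     string
        ServiceID       string
        VirtualIP       string
        EndpointIP      string
        ServiceDisabled bool // new field added by this change
    }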
Signed-off-by: Chris Telfer <ctelfer@docker.com>
The golang.org/x/sync package was vendored using the
github.com/golang/sync URL, but this is not the canonical
URL.
Because of this, vendoring failed in Moby, as vndr detects
these to be duplicate imports:
vndr github.com/golang/sync
2018/03/14 11:54:37 Collecting initial packages
2018/03/14 11:55:00 Download dependencies
2018/03/14 11:55:00 Failed to parse config: invalid config format: // FIXME this should be golang.org/x/sync, which is already vendored above
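The fix is to pin the package under its canonical import path in
vendor.conf, schematically (the revision is a placeholder, not the actual
pin):

    golang.org/x/sync <revision>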
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This commit contains fixes for duplicate IP allocation, addressing three
issues:
1) A race condition when no datastore is present, as in the swarmkit case.
2) The byte offset calculation depended on where the requested bit starts
in the bit sequence; the offset gained extra bytes when the starting bit
fell in the middle of one of the instances in a block (see the sketch
after this list).
3) Finding the available bit returned the last bit of the current instance
in a block when the block was not full and the current bit came after the
last available bit.
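For the offset issue, an illustrative sketch of the arithmetic involved
(not the actual bitseq code): the byte offset must depend only on the
block index, not on where inside a block the starting bit happens to fall.

    package bitseqsketch

    const (
        bitsPerBlock  = 32
        bytesPerBlock = bitsPerBlock / 8
    )

    // ordinalToOffsets returns the byte offset of the block holding the bit
    // and the bit's position within that block.
    func ordinalToOffsets(ordinal uint64) (byteOffset, bitInBlock uint64) {
        block := ordinal / bitsPerBlock
        return block * bytesPerBlock, ordinal % bitsPerBlock
    }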
Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
Add a simple check and a summary report for the support script.
Report:
==SUMMARY==
Processed 3 networks
IP overlap found: 1
Processed 167 containers
Overlap found:
*** OVERLAP on Network 0ewr5iqraa8zv9l4qskp93wxo ***
2 "192.168.1.138",
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
We should not delete an ingress network just because its endpoint count
drops to 1 (the endpoint holding the IP address of the ingress sandbox).
This addresses a regression where the ingress sandbox could be deleted on
workers when the last container left that sandbox.
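A hypothetical guard capturing that rule (not the actual libnetwork code):

    package ingresssketch

    type network struct {
        ingress bool
    }

    // canRemove reports whether the endpoint-count heuristic may remove the
    // network. The ingress sandbox on a worker always holds one endpoint of
    // its own, so a count of 1 must not be treated as "network unused".
    func canRemove(n *network, epCount int) bool {
        if n.ingress {
            return false
        }
        return epCount == 0
    }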
Signed-off-by: Chris Telfer <ctelfer@docker.com>
Set a limit on the maximum size of the transient log to avoid
filling up the logs in case of issues.
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Usually a diagnostic session wants to check the local state. Without
this flag the network is joined and left on every iteration, which
actually alters the daemon's state.
Also, if the diagnostic client is used against a live node, the network
leave has a very bad side effect: it kicks the node out of the network,
destroying its internal state.
For the above reasons, introduce the explicit flag -a so that the
current state is always preserved.
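A hypothetical sketch of how such a flag can gate the join/leave cycle;
the flag name, helpers, and output are illustrative, not the actual
diagnostic client code:

    package main

    import (
        "flag"
        "fmt"
    )

    // Stand-ins for the real join/leave/inspect operations.
    func joinNetwork(id string)  { fmt.Println("join", id) }
    func leaveNetwork(id string) { fmt.Println("leave", id) }
    func dumpState(id string)    { fmt.Println("dump state of", id) }

    func main() {
        preserve := flag.Bool("a", false, "read the current state without joining/leaving the network")
        netID := flag.String("n", "", "network ID to inspect")
        flag.Parse()

        if !*preserve {
            joinNetwork(*netID)
            defer leaveNetwork(*netID)
        }
        dumpState(*netID)
    }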
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
This is a new feature that allows the user to specify which subnetwork
Docker should choose from when it creates a bridge network.
This libnetwork commit addresses moby PR 36054.
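For illustration, the moby-side integration exposes this as a daemon
option along these lines (the exact flag name and syntax belong to the
moby change and may differ):

    dockerd --default-address-pool base=10.10.0.0/16,size=24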
Signed-off-by: selansen <elango.siva@docker.com>
Previously, the support script dumped the host iptables filter/nat
tables and, for each overlay network, the output of 'network inspect',
'bridge fdb show', and 'brctl showmacs'. Now we collect much more
information. The support script dumps the iptables filter/nat/mangle
tables, routes and interfaces from iproute2, the bridge fdb table, and
the ipvsadm table, for the host and for the containers/overlay networks
on the host. We also dump a redacted copy of the container health check
status and other debugging information for each container, in JSON
format, and 'docker network inspect -v' for each overlay, if the
client/server support the -v flag.
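Representative commands for the data described above (the exact
invocations and the redaction step in the script may differ):

    iptables -nvL -t filter
    iptables -nvL -t nat
    iptables -nvL -t mangle
    ip -o link; ip -o addr; ip route show table all
    bridge fdb show
    ipvsadm -ln
    docker network inspect -v <overlay-network>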
Signed-off-by: ada mancini <ada@docker.com>
Setting ndots to 0 does not allow search domains to be resolved.
The default will remain ndots:0, which resolves services directly,
but if the user specifies a different ndots value it is simply
propagated into the container.
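For illustration (hypothetical values), a container resolv.conf with a
user-supplied ndots value propagated alongside the embedded DNS server
could look like:

    nameserver 127.0.0.11
    search example.internal
    options ndots:2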
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- the client allows talking to the diagnostic server and decoding the
internal values of overlay networking and service discovery
- the tool also allows remediation in case of orphaned entries
- added a README
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>