After moving libnetwork to this repo, we need to update all the import
paths for libnetwork to point to docker/docker/libnetwork instead of
docker/libnetwork.
This change implements that.
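For illustration, an import of the old standalone module changes to the in-tree path (a sketch; actual files obviously vary):
```go
package example

// old path (standalone repository):
// import "github.com/docker/libnetwork"

// new path (after the move into docker/docker):
import _ "github.com/docker/docker/libnetwork"
```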
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This prevents consumers of the opts package from also having to
depend on daemon/network, and everything related to it.
We can probably change some of the other constants to strings,
for easier concatenation, and need to review the Windows-specific
"127.0.0.1" (instead of "localhost"), which may no longer be
needed.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When failing to destroy a stale sandbox, we logged that the removal
failed, but omitted the original error message.
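A sketch of the kind of change involved, with hypothetical variable names (the daemon logs through logrus):
```go
import "github.com/sirupsen/logrus"

// before: the cause of the failure was dropped
logrus.Warnf("failed to remove stale sandbox %s", sandboxID)

// after: the original error is included in the message
logrus.Warnf("failed to remove stale sandbox %s: %v", sandboxID, err)
```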
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The defer function was checking the local `err` variable, not the
error that was returned by the function. As a result, the sandbox
would never be cleaned up for containers that used "none" networking
if a failure occurred during setup.
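For illustration, a minimal standalone version of the bug and the fix (not the daemon's actual code; names are made up):
```go
type sandbox struct{}

func (s *sandbox) destroy() {}

func createSandbox() (*sandbox, error) { return &sandbox{}, nil }
func setup(s *sandbox) error           { return nil }

// buggy: the deferred cleanup only ever sees the local err captured
// from createSandbox, so a later setup failure never triggers destroy
func allocateBuggy() error {
	sb, err := createSandbox()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil { // always nil here: setup's error never reaches err
			sb.destroy()
		}
	}()
	return setup(sb)
}

// fixed: a named return value lets the defer observe the final error
func allocateFixed() (retErr error) {
	sb, err := createSandbox()
	if err != nil {
		return err
	}
	defer func() {
		if retErr != nil {
			sb.destroy()
		}
	}()
	return setup(sb)
}
```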
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Docker Desktop (on Mac and Windows hosts) allows containers
running inside a Linux VM to connect to the host using
the host.docker.internal DNS name, which is implemented by
VPNkit (a DNS proxy on the host).
This PR allows containers to connect to Linux hosts
by passing the special string "host-gateway" to --add-host,
e.g. "--add-host=host.docker.internal:host-gateway", which adds
a host.docker.internal entry to /etc/hosts and maps it to the
host-gateway IP.
This PR also adds a daemon flag called host-gateway-ip, which
defaults to the default bridge IP.
Docker Desktop will need to set this field to the Host Proxy IP
so DNS requests for host.docker.internal can be routed to VPNkit.
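A rough sketch of how the substitution could work; resolveExtraHosts and its parameters are hypothetical, not the actual daemon code:
```go
package daemon

import (
	"fmt"
	"net"
	"strings"
)

const hostGatewayName = "host-gateway" // the special --add-host value

// resolveExtraHosts rewrites "name:host-gateway" entries to the
// daemon's configured host-gateway-ip before they are written to
// the container's /etc/hosts.
func resolveExtraHosts(extraHosts []string, hostGatewayIP net.IP) ([]string, error) {
	resolved := make([]string, 0, len(extraHosts))
	for _, h := range extraHosts {
		parts := strings.SplitN(h, ":", 2)
		if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
			return nil, fmt.Errorf("invalid extra host %q", h)
		}
		if parts[1] == hostGatewayName {
			parts[1] = hostGatewayIP.String()
		}
		resolved = append(resolved, parts[0]+":"+parts[1])
	}
	return resolved, nil
}
```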
Addresses: https://github.com/docker/for-linux/issues/264
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
```
daemon/container_operations.go:787:2: S1033: unnecessary guard around call to delete (gosimple)
if _, ok := container.NetworkSettings.Networks[n.ID()]; ok {
^
```
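The fix is mechanical: in Go, `delete` on a missing key is already a no-op, so the guard can simply be dropped:
```go
// before: redundant existence check
if _, ok := container.NetworkSettings.Networks[n.ID()]; ok {
	delete(container.NetworkSettings.Networks, n.ID())
}

// after: delete is a no-op when the key is absent
delete(container.NetworkSettings.Networks, n.ID())
```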
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
It has been declared deprecated by the author, and has a knack for
false-positives (as well as giving bad advice when it comes to APIs --
which is quite clear when looking at "nolint: interfacer" comments).
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Handle the case of systemd-resolved, and if it is in use,
use a different resolv.conf source.
Set the option appropriately on libnetwork.
Move Unix-specific code to container_operations_unix.
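A minimal sketch of the detection, assuming the usual stub-resolver heuristic (systemd-resolved points /etc/resolv.conf at its 127.0.0.53 stub and keeps the upstream servers in /run/systemd/resolve/resolv.conf); the daemon's actual check may differ:
```go
package daemon

import (
	"bytes"
	"os"
)

// resolvConfPath returns the resolv.conf source to hand to libnetwork:
// the systemd-resolved upstream file when the stub resolver is in
// place, the system default otherwise.
func resolvConfPath() string {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err == nil && bytes.Contains(data, []byte("127.0.0.53")) {
		return "/run/systemd/resolve/resolv.conf"
	}
	return "/etc/resolv.conf"
}
```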
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
These network operations really don't have anything to do with the
container but rather are setting up the networking.
Ideally these wouldn't get shoved into the daemon package, and instead
we'd do something else (e.g. extract a network service into a new
package), but there's a lot more work to do in that regard.
In reality, this probably simplifies some of that work as it moves all
the network operations to the same place.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This call was added as part of commit a042e5a20 and at the time was
useful. sandbox.DisableService() basically calls
endpoint.deleteServiceInfoFromCluster() for every endpoint in the
sandbox. However, with the libnetwork change, endpoint.sbLeave()
invokes endpoint.deleteServiceInfoFromCluster(). The releaseNetwork()
call invokes sandbox.Delete() immediately after
sandbox.DisableService(). The sandbox.Delete() in turn ultimately
invokes endpoint.sbLeave() for every endpoint in the sandbox which thus
removes the endpoint's load balancing entry via
endpoint.deleteServiceInfoFromCluster(). So the call to
sandbox.DisableService() is now redundant.
It is noteworthy that, while redundant, the presence of the call would
not cause errors. It would just be sub-optimal. The DisableService()
call would cause libnetwork to down-weight the load balancing entries
while the call to sandbox.Delete() would cause it to remove the entries
immediately afterwards. Aside from the wasted computation, the extra
call would also propagate an extra state change in the networkDB gossip
messages. So, overall, it is much better to just avoid the extra
overhead.
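A toy model of the call chain described above (not libnetwork's actual code) shows why sandbox.Delete() alone is enough:
```go
package main

import "fmt"

type endpoint struct{ name string }

func (e *endpoint) deleteServiceInfoFromCluster() {
	fmt.Println("removed load-balancing entry for", e.name)
}

// with the libnetwork change, leaving the sandbox already removes the
// endpoint's load-balancing state
func (e *endpoint) sbLeave() {
	e.deleteServiceInfoFromCluster()
}

type sandbox struct{ endpoints []*endpoint }

func (sb *sandbox) Delete() {
	for _, ep := range sb.endpoints {
		ep.sbLeave()
	}
}

func main() {
	sb := &sandbox{endpoints: []*endpoint{{"ep1"}, {"ep2"}}}
	// a preceding sb.DisableService() would only down-weight the same
	// entries that Delete removes here, so releaseNetwork drops it
	sb.Delete()
}
```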
Signed-off-by: Chris Telfer <ctelfer@docker.com>
Attachable networks are networks created on the cluster which can then
be attached to by non-swarm containers. These networks are lazily
created on the node that wants to attach to that network.
When no container is currently attached to one of these networks on a
node, and then multiple containers which want that network are started
concurrently, this can cause a race condition in the network attachment
where essentially we try to attach the same network to the node twice.
To easily reproduce this issue you must use a multi-node cluster with a
worker node that has lots of CPUs (I used a 36 CPU node).
Repro steps:
1. On manager, `docker network create -d overlay --attachable test`
2. On worker, `docker create --restart=always --network test busybox
top`, many times... 200 is a good number (but not much more due to
subnet size restrictions)
3. Restart the daemon
When the daemon restarts, it will attempt to start all those containers
simultaneously. Note that you could try to do this yourself over the API,
but it's harder to trigger due to the added latency from going over
the API.
The error occurs when the daemon tries to start the container,
upon allocating the network resources:
```
attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
```
What happens here is the worker makes a network attachment request to
the manager. This is an async call which, in the happy case, causes a
task to be placed on the node; the worker waits for this task in order
to get the network configuration.
In the case of this race, the error occurs on the manager like this:
```
task allocation failure" error="failed during network allocation for task n7bwwwbymj2o2h9asqkza8gom: failed to allocate network IP for task n7bwwwbymj2o2h9asqkza8gom network rj4szie2zfauqnpgh4eri1yue: could not find an available IP" module=node node.id=u3489c490fx1df8onlyfo1v6e
```
The task is not created and the worker times out waiting for the task.
---
The mitigation for this is to make sure that only one attachment request
is in flight for a given network at a time *when the network doesn't
already exist on the node*. If the network already exists on the node
there is no need for synchronization because the network is already
allocated and on the node so there is no need to request it from the
manager.
This basically comes down to a race with `Find(network) ||
Create(network)` without any sort of synchronization.
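A minimal sketch of one way to get that behavior, keyed by network ID via golang.org/x/sync/singleflight; the actual fix may use a different primitive, and getNetwork/createNetwork here are stand-ins for the daemon's lookup and manager-attachment calls:
```go
package network

import (
	"errors"

	"golang.org/x/sync/singleflight"
)

type Network struct{ ID string }

var errNotFound = errors.New("network not present on this node")

// stand-ins for the daemon's real lookup and manager-attachment calls
func getNetwork(id string) (*Network, error)    { return nil, errNotFound }
func createNetwork(id string) (*Network, error) { return &Network{ID: id}, nil }

var attachGroup singleflight.Group

// findOrCreateNetwork lets concurrent callers share a single in-flight
// attachment request per network ID.
func findOrCreateNetwork(id string) (*Network, error) {
	if n, err := getNetwork(id); err == nil {
		// already allocated on the node: no synchronization needed
		return n, nil
	}
	v, err, _ := attachGroup.Do(id, func() (interface{}, error) {
		// re-check inside the critical section to close the
		// Find(network) || Create(network) race
		if n, err := getNetwork(id); err == nil {
			return n, nil
		}
		return createNetwork(id)
	})
	if err != nil {
		return nil, err
	}
	return v.(*Network), nil
}
```
The re-check inside the single-flight callback matters: a caller that lost the race may enter Do after the winner has already created the network, and must not trigger a second attachment request.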
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix tries to address the issue raised in 33661, where a
network alias does not work when connecting to a network a second time.
This fix fixes 33661.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This PR contains a fix for moby/moby#30321. There was a moby/moby#31142
PR intending to fix the issue by adding a delay between disabling the
service in the cluster and the shutdown of the tasks. However
disabling the service was not deleting the service info in the cluster.
Added a fix to delete service info from cluster and verified using siege
to ensure there is zero downtime on a rolling update of a service. In
order to support this, and to keep the enable/disable service knob in
the daemon consistent, we need to ensure we disable the service when we
release the network from the container. This makes enabling and
disabling the service less racy. The corresponding libnetwork fix is
part of docker/libnetwork#1824
Signed-off-by: abhi <abhi@docker.com>
This fix is a follow up to 30397, with `FindUniqueNetwork`
changed to `FindNetwork` based on the review feedback.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Instead of having to create a bunch of custom error types that are doing
nothing but wrapping another error in sub-packages, use a common helper
to create errors of the requested type.
e.g. instead of re-implementing this over and over:
```go
type notFoundError struct {
	cause error
}

func (e notFoundError) Error() string {
	return e.cause.Error()
}

func (e notFoundError) NotFound() {}

func (e notFoundError) Cause() error {
	return e.cause
}
```
Packages can instead just do:
```go
errdefs.NotFound(err)
```
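A minimal sketch of what such a helper could look like (illustrative, not the exact errdefs implementation):
```go
package errdefs

type errNotFound struct{ error }

// NotFound marks err as a not-found error while preserving it as the cause.
func NotFound(err error) error {
	if err == nil {
		return nil
	}
	return errNotFound{err}
}

func (errNotFound) NotFound() {}

func (e errNotFound) Cause() error { return e.error }
```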
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix is part of the effort to address 30242, where an issue
arises because multiple networks may share the same name
(within or across local/swarm scopes).
The focus of this fix is to allow the creation of a service
when a network in local scope has the same name as the
service network.
An integration test has been added.
This fix fixes 30242.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Use strongly typed errors to set HTTP status codes.
Error interfaces are defined in the api/errors package and errors
returned from controllers are checked against these interfaces.
Errors can be wrapped in a pkg/errors.Causer, as long as somewhere in the
line of causes one of the interfaces is implemented. The special error
interfaces take precedence over Causer, meaning if both Causer and one
of the new error interfaces are implemented, the Causer is not
traversed.
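A minimal sketch of that precedence rule; the interface and function names are illustrative:
```go
type notFound interface{ NotFound() }

type causer interface{ Cause() error }

// statusCode walks the chain of causes, but a typed interface match at
// any level wins before the Causer is traversed further.
func statusCode(err error) int {
	for err != nil {
		if _, ok := err.(notFound); ok {
			return 404 // typed interface takes precedence
		}
		c, ok := err.(causer)
		if !ok {
			break // nothing matched and no further cause to unwrap
		}
		err = c.Cause()
	}
	return 500
}
```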
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Reuse existing structures and rely on json serialization to deep copy
Container objects.
Also consolidate all "save" operations into container.CheckpointTo,
which now both saves serialized JSON to disk and replicates state to
the ACID in-memory store.
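A minimal sketch of the deep-copy technique, with a stub Container type (a JSON round-trip copies only exported fields):
```go
import "encoding/json"

// Container stands in for the daemon's container type.
type Container struct {
	ID    string
	State string
}

// deepCopy clones a container by serializing it to JSON and decoding
// into a fresh value; the two copies share no mutable state.
func deepCopy(src *Container) (*Container, error) {
	b, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}
	dst := &Container{}
	if err := json.Unmarshal(b, dst); err != nil {
		return nil, err
	}
	return dst, nil
}
```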
Signed-off-by: Fabio Kung <fabio.kung@gmail.com>
Replicate relevant mutations to the in-memory ACID store. Readers will
then be able to query container state without locking.
Signed-off-by: Fabio Kung <fabio.kung@gmail.com>
Both methods try to detach the container from a cluster
network. The code is exactly the same; this removes the duplication.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>