There is a non-negligible performance overhead for all containers (if
containerd is a child of Docker, which is the current setup) when rlimits
are set on the main Docker daemon process, because the limits
propagate to all child processes.
We recommend using cgroups to do container-local accounting.
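For illustration, a minimal sketch of the propagation; the values, the
busybox image, and the inheritance shown are illustrative assumptions,
not part of this change:

    ulimit -n 4096                               # rlimit set on the daemon's parent shell
    docker -d &                                  # the daemon inherits the limit...
    docker run --rm busybox sh -c 'ulimit -n'    # ...and so does the container: prints 4096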
This applies the change added in 8db61095a3
to other init scripts.
Note that nofile cannot be set to unlimited, and the limit
is hardcoded to 1048576 (2^20); see:
http://stackoverflow.com/a/1213069/1811501
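A sketch of the kind of stanza involved (the nproc line is an assumed
companion, not necessarily part of this change):

    # "unlimited" is rejected for nofile, so use the hardcoded 2^20 maximum
    limit nofile 1048576 1048576
    # nproc, by contrast, does accept unlimited
    limit nproc unlimited unlimited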
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Previously, this check only worked if no host was specified and was
hardcoded to check for "/var/run/docker.sock".
This change generalizes that check and captures any specified socket
and waits for it to be created.
Caveat: this only checks the first socket specified, but that is an
improvement over no check at all.
Fixes #185160
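A sketch of the generalized wait (the DOCKER_OPTS parsing and variable
names are assumptions, not the exact script):

    post-start script
        # take the first unix socket passed via -H/--host, falling back
        # to the default path when none is specified
        # (sourcing of /etc/default/$UPSTART_JOB omitted for brevity)
        DOCKER_SOCKET=$(echo "$DOCKER_OPTS" | grep -oE 'unix://[^ ]+' \
            | head -n 1 | sed 's|^unix://||')
        DOCKER_SOCKET=${DOCKER_SOCKET:-/var/run/docker.sock}
        while ! [ -e "$DOCKER_SOCKET" ]; do
            # stop waiting if the main process has already failed
            initctl status $UPSTART_JOB | grep -q "stop/" && exit 1
            echo "Waiting for $DOCKER_SOCKET"
            sleep 0.1
        done
    end script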
Signed-off-by: Andrew Guenther <guenther.andrew.j@gmail.com>
Give Docker more time to kill containers before upstart kills Docker.
The default kill timeout is 5 seconds.
This will decrease, but not eliminate, the chance of orphaned
container processes.
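In upstart this is a single stanza; the value below is an assumption
chosen to illustrate the idea:

    # give the daemon up to 20 seconds to stop its containers before
    # upstart sends SIGKILL (the default is 5 seconds)
    kill timeout 20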
Signed-off-by: David Xia <dxia@spotify.com>
Once the job has failed and is respawned, the status becomes `docker
respawn/post-start` after subsequent failures (as opposed to `docker
stop/post-start`), so the post-start script needs to take this into
account.
I could not find specific documentation on the job transitioning to the
`respawn/post-start` state, but this was observed on Ubuntu 14.04.2.
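A sketch of the adjusted check inside the post-start loop (surrounding
lines omitted):

    # bail out whether the job has stopped ("stop/...") or is being
    # respawned after a failure ("respawn/...")
    initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1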
Signed-off-by: Lewis Marshall <lewis@lmars.net>
Fixes #6647: Other upstart jobs that depend on docker by specifying
"start on started docker" would often start before the docker daemon was
ready, so they'd fail with "Cannot connect to the Docker daemon" or
"dial unix /var/run/docker.sock: no such file or directory".
This is because "docker -d" doesn't daemonize; it runs in the
foreground, so upstart can't know when the daemon is ready to receive
incoming connections. (Traditionally, a daemon will create all necessary
sockets and then fork to signal that it's ready; according to @tianon
this "isn't possible in Go"[1]. See also [2].)
Presumably this isn't a problem with systemd, with its socket
activation. The SysV init scripts may or may not suffer from this
problem, but I have no motivation to fix them.
This commit adds a "post-start" stanza to the upstart configuration
that waits for the socket to be available. Upstart won't emit the
"started" event until the "post-start" script completes.[3]
Note that the system administrator might have specified a different path
for the socket, or a tcp socket instead, by customising
/etc/default/docker. In that case we don't try to figure out what the
new socket is, but at least we don't wait in vain for
/var/run/docker.sock to appear.
If the main script (`docker -d`) fails to start, the `initctl status
$UPSTART_JOB | grep -q "stop/"` line ensures that we don't loop forever.
I stole this idea from Steve Langasek.[4]
If for some reason we *still* end up in an infinite loop (I guess
`docker -d` must have hung), then at least we'll be able to see the
"Waiting for /var/run/docker.sock" debug output in
/var/log/upstart/docker.log.
I considered using inotifywait instead of sleep, but it isn't worth
the complexity & the extra dependency.
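Putting the pieces described above together, a sketch of the stanza
(not necessarily the committed script verbatim):

    post-start script
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
            . /etc/default/$UPSTART_JOB
        fi
        # only wait for the default socket when no custom -H/--host is given
        if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
            while ! [ -e /var/run/docker.sock ]; do
                # don't loop forever if "docker -d" has already failed
                initctl status $UPSTART_JOB | grep -q "stop/" && exit 1
                echo "Waiting for /var/run/docker.sock"
                sleep 0.1
            done
        fi
    end script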
[1] https://github.com/docker/docker/issues/6647#issuecomment-47001613
[2] https://code.google.com/p/go/issues/detail?id=227
[3] http://upstart.ubuntu.com/cookbook/#post-start
[4] https://lists.ubuntu.com/archives/upstart-devel/2013-April/002492.html
Signed-off-by: David Röthlisberger <david@rothlis.net>
This resolves a problem that I have been having where docker starts before networking is up. See issue #5944 for more details.
Docker-DCO-1.1-Signed-off-by: Jeffrey Bolle <jeffreybolle@gmail.com> (github: jeffreybolle)
This changes the upstart init script to start on `local-filesystems`.
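For reference, a sketch of the revised stanza (the stop condition shown
is an assumed companion line, not stated in this change):

    # start once local filesystems are mounted, instead of an earlier event
    start on local-filesystems
    stop on runlevel [!2345]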
Docker-DCO-1.1-Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com> (github: unclejack)