The errors returned from the Mount and Unmount functions are raw
syscall.Errno errors (like EPERM or EINVAL), which provide
no context about what happened and why.
Similar to the os.PathError type, introduce a mount.Error type
that carries some context. The error messages will now look like this:
> mount /tmp/mount-tests/source:/tmp/mount-tests/target, flags: 0x1001: operation not permitted
or
> mount tmpfs:/tmp/mount-test-source-516297835: operation not permitted
Before this patch, it was just
> operation not permitted
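A rough sketch of what such a wrapper type can look like (a sketch only: the
field names, and whether the type is exported, are assumptions rather than the
exact code that was merged; assumes the fmt package is imported):

    // illustrative os.PathError-style wrapper for mount errors
    type mountError struct {
        op             string // "mount" or "umount"
        source, target string
        flags          uintptr
        err            error // the underlying syscall.Errno
    }

    func (e *mountError) Error() string {
        out := e.op + " " + e.source + ":" + e.target
        if e.flags != 0 {
            out += fmt.Sprintf(", flags: %#x", e.flags)
        }
        return out + ": " + e.err.Error()
    }

    // Cause returns the underlying error (for github.com/pkg/errors compatibility).
    func (e *mountError) Cause() error { return e.err }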
[v2: add Cause()]
[v3: rename MountError to Error, document Cause()]
[v4: fixes; audited all users]
[v5: make Error type private; changes after @cpuguy83 reviews]
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This implements chown support on Windows. Built-in accounts as well
as accounts included in the SAM database of the container are supported.
NOTE: IDPair is now named Identity and IDMappings is now named
IdentityMapping.
The following are valid examples:
ADD --chown=Guest . <some directory>
COPY --chown=Administrator . <some directory>
COPY --chown=Guests . <some directory>
COPY --chown=ContainerUser . <some directory>
On Windows, an owner is only granted permission to read the security
descriptor and read/write the discretionary access control list. This
fix also grants the owner read/write and execute permissions.
Signed-off-by: Salahuddin Khan <salah@docker.com>
Now all of the storage drivers use the field "storage-driver" in their log
messages, which is set to the name of the respective driver.
Storage drivers changed:
- Aufs
- Btrfs
- Devicemapper
- Overlay
- Overlay2
- Zfs
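A minimal sketch of how a driver can attach that field (assuming the daemon's
existing logrus dependency; the driver name and message are just examples):

    import "github.com/sirupsen/logrus"

    var logger = logrus.WithField("storage-driver", "overlay2")

    func logUnmountError(dir string, err error) {
        logger.Errorf("failed to unmount %s: %v", dir, err)
    }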
Signed-off-by: Alejandro González Hevia <alejandrgh11@gmail.com>
Move the "unmount and deactivate" code into a separate method, and
optimize it a bit:
1. Do not use filepath.Walk() as there's no requirement to recursively
go into every directory under home/mnt; a list of directories in mnt
is sufficient. With filepath.Walk(), if some container fails to
unmount, it would traverse the whole container filesystem, which is
excessive and useless.
2. Do not use GetMounts() to check whether a directory is mounted; just
unmount it and ignore the "not mounted" error (see the sketch after
this list). Note that the same error is returned when wrong flags are
set, but as the flags are hardcoded we can safely ignore that case.
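A sketch of that approach, assuming golang.org/x/sys/unix (imports such as
io/ioutil, path/filepath and logrus are omitted; directory layout and names
are illustrative):

    func unmountAll(home string) error {
        // list the per-container mount directories under home/mnt; no recursion needed
        entries, err := ioutil.ReadDir(filepath.Join(home, "mnt"))
        if err != nil {
            return err
        }
        for _, entry := range entries {
            dir := filepath.Join(home, "mnt", entry.Name())
            // just unmount; EINVAL means "not mounted" here (the other EINVAL
            // cause, bad flags, cannot happen since the flags are hardcoded)
            if err := unix.Unmount(dir, 0); err != nil && err != unix.EINVAL {
                logrus.Warnf("devmapper: error unmounting %s: %v", dir, err)
            }
        }
        return nil
    }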
While at it, promote "can't unmount" log level from debug to warning.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. Make sure it's clear the error is from unmount.
2. Simplify the code a bit to make it more readable.
[v2: use errors.Wrap]
[v3: use errors.Wrapf]
[v4: lowercase the error message]
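E.g. (a sketch using github.com/pkg/errors; variable names are illustrative):

    // make it obvious that the raw error came from the unmount call
    if err := unix.Unmount(mountPath, 0); err != nil {
        return errors.Wrapf(err, "error unmounting %s", mountPath)
    }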
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. Replace EnsureRemoveAll() with Rmdir(), as here we are removing
the container's mount point, which is already properly unmounted
and is therefore an empty directory.
2. Ignore the Rmdir() error (but log it unless it's ENOENT). This
is a mount point, currently unmounted (i.e. an empty directory),
and an older kernel can return EBUSY if e.g. the mount was
leaked to other mount namespaces.
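Roughly (a sketch; the variable name and log level are illustrative):

    // the mount point is already unmounted, i.e. an empty directory;
    // ignore the Rmdir error, but log anything other than ENOENT
    if err := unix.Rmdir(mountPath); err != nil && err != unix.ENOENT {
        logrus.Warnf("devmapper: error removing mount point %s: %v", mountPath, err)
    }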
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
The idea behind making the graphdrivers private is to prevent leaking
mounts into other namespaces.
Unfortunately this is not really what happens.
There is one case where this does work, and that is when the namespace
was created before the daemon's namespace.
However, with systemd each system service winds up with its own mount
namespace. This causes a race between daemon startup and other system
services as to whether the mount is actually private.
This also means there is a negative impact when other system services
are started while the daemon is running.
Basically there are too many things that the daemon does not have
control over (nor should it) to be able to protect against these kinds
of leakages. One thing is certain: setting the graphdriver roots to
private disconnects the mount ns hierarchy, preventing propagation of
unmounts... new mounts are of course not propagated either, but the
behavior is racy (or just bad in the case of restarting services)... so
it's better to just keep mount propagation intact.
It also does not protect situations like `-v
/var/lib/docker:/var/lib/docker` where all mounts are recursively bound
into the container anyway.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Right now we only log the source and destination (and dmesg) if the mount
operation fails. The fstype and mount options are easily available, and it is
probably a good idea to log these as well, especially since failures can
sometimes be caused by the mount options.
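For example, the error could carry all of it (a sketch, not the exact patch):

    if err := unix.Mount(source, target, fstype, flags, options); err != nil {
        return fmt.Errorf("error mounting %s to %s (fstype=%s, flags=%#x, options=%s): %v",
            source, target, fstype, flags, options, err)
    }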
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This subtle bug keeps lurking around because the error checking for `Mkdir()`
and `MkdirAll()` is slightly different with respect to `EEXIST`/`IsExist`:
- for `Mkdir()`, `IsExist` error should (usually) be ignored
(unless you want to make sure directory was not there before)
as it means "the destination directory was already there"
- for `MkdirAll()`, `IsExist` error should NEVER be ignored.
Mostly, this commit just removes ignoring the IsExist error, as it
should not be ignored.
Also, there are a couple of cases where IsExist is handled as
"the directory already exists", which is wrong. As a result, some code
that never worked as intended is now removed.
NOTE that `idtools.MkdirAndChown()` behaves like `os.MkdirAll()`
rather than `os.Mkdir()` -- so its description is amended accordingly,
and its usage is handled as such (i.e. IsExist error is not ignored).
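In other words, the correct patterns are roughly:

    // Mkdir: "already exists" is usually fine to ignore
    if err := os.Mkdir(dir, 0700); err != nil && !os.IsExist(err) {
        return err
    }

    // MkdirAll (and idtools.MkdirAndChown): any error, including IsExist,
    // means the directory was NOT created and must be reported
    if err := os.MkdirAll(dir, 0700); err != nil {
        return err
    }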
For more details, a quote from my runc commit 6f82d4b (July 2015):
TL;DR: check for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so two reasons to remove it.
Quoting MkdirAll documentation:
> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.
This means two things:
1. If a directory to be created already exists, no error is
returned.
2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name as one MkdirAll needs to use for
a directory. Example: we want to MkdirAll("a/b"), but file "a"
(or "a/b") already exists, so MkdirAll fails.
The above is a theory, based on quoted documentation and my UNIX
knowledge.
3. In practice, though, the current MkdirAll implementation [1] returns
ENOTDIR in most of the cases described in #2, with the exception of when
there is a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In this very case
MkdirAll() will indeed return EEXIST.
Because of #1, an IsExist check after MkdirAll is not needed.
Because of #2 and #3, ignoring the IsExist error is just plain wrong,
as the directory we require is not created. It's cleaner to report
the error right away.
Note this mistake is all over the tree, I guess due to copy-paste,
trying to follow the same usage pattern as for Mkdir(),
or some not-quite-correct examples on the Internet.
[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
In order to avoid reverting our fix for mount leakage in devicemapper,
add a test which checks that devicemapper's Get() and Put() cycle can
survive having a command running in an rprivate mount propagation setup
in-between. While this is quite rudimentary, it should be sufficient.
We have to skip this test for pre-3.18 kernels.
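In outline, the test does something like this (a rough sketch; the use of
unshare(1) with its --propagation option and the names involved are
assumptions, not the real test code; driver is the devicemapper driver
under test):

    if _, err := driver.Get("test-leak", ""); err != nil {
        t.Fatal(err)
    }
    // hold a mount namespace with recursively private propagation in between
    cmd := exec.Command("unshare", "--mount", "--propagation", "private", "sleep", "300")
    if err := cmd.Start(); err != nil {
        t.Fatal(err)
    }
    defer cmd.Process.Kill()
    if err := driver.Put("test-leak"); err != nil {
        t.Fatal(err)
    }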
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Instead of providing a generic message listing all possible reasons
why xfs is not available on the system, let's be specific.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
If mount fails, the reason might be right there in the kernel log ring buffer.
Let's include it in the error message; it might be of great help.
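A sketch of how the ring buffer can be read (assuming golang.org/x/sys/unix;
3 is SYSLOG_ACTION_READ_ALL, and err is the mount error being decorated):

    // append the tail of the kernel log to the mount error
    buf := make([]byte, 64*1024)
    if n, kerr := unix.Klogctl(3, buf); kerr == nil {
        err = fmt.Errorf("%v (recent kernel messages: %s)", err, buf[:n])
    }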
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Since the update to Debian Stretch, the devmapper unit test fails. One
reason is that the combination of a somewhat old (less than 3.16) kernel and
a relatively new xfsprogs leads to creating a filesystem which is not supported
by the kernel:
> [12206.467518] XFS (dm-1): Superblock has unknown read-only compatible features (0x1) enabled.
> [12206.472046] XFS (dm-1): Attempted to mount read-only compatible filesystem read-write.
> Filesystem can only be safely mounted read only.
> [12206.472079] XFS (dm-1): SB validate failed with error 22.
Ideally, that would be automatically and implicitly handled by xfsprogs.
In real life, we have to take care of it here. Sigh.
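One way to take care of it (a sketch only -- which features the real patch
disables and how it detects the old kernel may differ):

    // on pre-3.16 kernels, create the filesystem without the features
    // the kernel cannot read (crc=0 also disables finobt, which appears
    // to be the read-only compatible feature 0x1 from the log above)
    args := []string{"-m", "crc=0", devicePath}
    if out, err := exec.Command("mkfs.xfs", args...).CombinedOutput(); err != nil {
        return fmt.Errorf("mkfs.xfs failed: %v: %s", err, out)
    }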
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This enables docker cp and ADD/COPY docker build support for LCOW.
Originally, the graphdriver.Get() interface returned a local path
to the container root filesystem. This does not work for LCOW, so
the Get() method now returns an interface that LCOW implements to
support copying to and from the container.
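Conceptually, the returned value is now along these lines (a simplified,
illustrative sketch; the interface that was actually added is richer than this):

    // instead of a bare string path, Get() returns an abstraction of the
    // container root filesystem that LCOW can back with a remote filesystem
    type ContainerFS interface {
        // Path returns the root of the container filesystem
        Path() string
    }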
Signed-off-by: Akash Gupta <akagup@microsoft.com>
libdm currently has a fairly substantial DoS bug that makes certain
operations fail on a libdm device if the device has active references
through mountpoints. This is a significant problem with the advent of
mount namespaces and MS_PRIVATE, and can cause certain --volume mounts
to cause libdm to no longer be able to remove containers:
% docker run -d --name testA busybox top
% docker run -d --name testB -v /var/lib/docker:/docker busybox top
% docker rm -f testA
[fails on libdm with dm_task_run errors.]
This also solves the problem of unprivileged users being able to DoS
docker by using unprivileged mount namespaces to preserve mounts that
Docker has dropped.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
I was able to successfully use the device mapper autoconfig feature
(commit 5ef07d79c) but it stopped working after a reboot.
Investigation showed that the dm device was not activated because of
a missing binary, which is not used during the initial setup, but is
needed every time afterwards. Here's the error shown when trying to
manually activate the device:
> kir@kd:~/go/src/github.com/docker/docker$ sudo lvchange -a y /dev/docker/thinpool
> /usr/sbin/thin_check: execvp failed: No such file or directory
> Check of pool docker/thinpool failed (status:2). Manual repair required!
Surely, there is no solution to this other than to have a package that
provides the thin_check binary installed beforehand. Because the
issue revealed itself way later than when the DM setup was performed, it was
somewhat harder to investigate.
With this in mind, let's check for the binary's presence before setting up DM,
refusing to proceed if it is not there, saving the user from later
frustration.
While at it, eliminate the repeated binary checking code. The downside is
that the binary lookup is happening more than once now -- I think the
clarity of the code outweighs this minor de-optimization.
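The check itself boils down to something like this (a sketch; the exact list
of binaries and the error wording are illustrative):

    // fail early, before any LVM/DM setup, if a required tool is missing
    for _, bin := range []string{"lvcreate", "lvconvert", "thin_check"} {
        if _, err := exec.LookPath(bin); err != nil {
            return errors.Wrap(err, bin+" is not available; please install it and try again")
        }
    }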
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
I tried using dm.directlvm_device but it ended up with the following
error:
> Error starting daemon: error initializing graphdriver: error
> writing docker thinp autoextend profile: open
> /etc/lvm/profile/docker-thinpool.profile: no such file or directory
The reason is that the /etc/lvm/profile directory does not exist. I think it is
better to try to create it beforehand.
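Roughly (a sketch):

    // make sure the profile directory exists before writing the thinpool profile
    if err := os.MkdirAll("/etc/lvm/profile", 0755); err != nil {
        return errors.Wrap(err, "error creating lvm profile directory")
    }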
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Make sure the user understands this is about the in-kernel driver
(not the dockerd driver or something like that).
While at it, amend the comment as well.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Switch some more usage of the Stat function and the Stat_t type from the
syscall package to golang.org/x/sys. Those were missing in PR #33399.
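That is, the pattern changes along these lines (a sketch):

    // before: syscall.Stat / syscall.Stat_t
    // after:  golang.org/x/sys/unix
    var st unix.Stat_t
    if err := unix.Stat(path, &st); err != nil {
        return err
    }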
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Changes most references of syscall to golang.org/x/sys.
The ones that aren't changed include Errno, Signal and SysProcAttr,
as they haven't been implemented in x/sys.
Signed-off-by: Christopher Jones <tophj@linux.vnet.ibm.com>
[s390x] switch utsname from unsigned to signed
per 33267e036f
char on s390x in the /x/sys/unix package is now signed, so
change the build tags
Signed-off-by: Christopher Jones <tophj@linux.vnet.ibm.com>
Because we use our own logging callbacks in order to use libdm
effectively, it is quite difficult to debug complicated devicemapper
issues (because any warnings or notices from libdm are muted by our own
callback function). e07d3cd9a ("devmapper: Fix libdm logging") further
reduced this debugging ability by only allowing _LOG_FATAL errors
to be passed to the output.
Unfortunately libdm is very chatty, so in order to avoid making the logs
even more crowded, add a dm.libdm_log_level storage option that allows
people who are debugging the lovely world of libdm to be able to dive in
without recompiling binaries.
The valid values of dm.libdm_log_level map directly to the libdm logging
levels, and are in the range [2,7] as of the time of writing with 7
being _LOG_DEBUG and 2 being _LOG_FATAL. The default is _LOG_FATAL.
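For example, to get the full libdm debug output (assuming dockerd is started
directly):

    dockerd --storage-driver devicemapper --storage-opt dm.libdm_log_level=7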
Signed-off-by: Aleksa Sarai <asarai@suse.de>
LogInit used to act as a manual way of registering the *necessary*
pkg/devicemapper logging callbacks. In addition, it was used to split up
the logic of pkg/devicemapper into daemon/graphdriver/devmapper (such
that some things were logged from libdm).
The manual aspect of this API was completely nonsensical and was just
begging for incorrect usage of pkg/devicemapper, so remove that semantic
and always register our own libdm callbacks.
In addition, recombine the split out logging callbacks into
pkg/devicemapper so that the default logger is local to the library and
also shown to be the recommended logger. This makes the code
substantially easier to read. Also, the new DefaultLogger now has a
configurable upper bound for the log level, which allows for dynamically
changing the logging level.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
There have been some cases where, upon umount, a device can be busy for a very
short duration. Maybe it's udev rules, maybe it is runc-related races,
or perhaps it is something else. We don't know yet.
If deferred removal is enabled but deferred deletion is not, then for the
case of "docker run -ti --rm fedora bash", a container will exit, the device
will be deferred-removed, and then immediately a call will come to delete
the device. It is possible that deletion will fail if the device was busy
at that time.
A device can't be deleted if it can't be removed/deactivated first. There
is only one exception, and that is when deferred deletion is on. In that
case the graph driver will keep track of the deleted device, try to delete it
later, and return success to the caller.
Always make sure that device deactivation is synchronous when the device is
being deleted (except when deferred deletion is enabled).
This should also take care of small races when a device is busy for a short
duration while it is being deleted.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This enables deferred device deletion/removal by default if the driver
version in the kernel is new enough to support the feature.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The test was failing because TarOptions was using a non-pointer for
ChownOpts, which meant the check for nil was never true, and
createTarFile was never using hdr.UID/GID.
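A stripped-down illustration of the bug class (hypothetical names; the real
structs in the tree differ; assumes archive/tar is imported):

    type chownOpts struct{ UID, GID int }

    type tarOptions struct {
        // must be a pointer: nil means "fall back to the uid/gid recorded in
        // the tar header"; a plain struct value can never be nil
        ChownOpts *chownOpts
    }

    func ownerFor(hdr *tar.Header, opts *tarOptions) chownOpts {
        if opts.ChownOpts != nil {
            return *opts.ChownOpts
        }
        return chownOpts{UID: hdr.Uid, GID: hdr.Gid}
    }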
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Before this, if `forceRemove` is set, the container data will be removed
no matter what, including when there are issues with removing the container's
on-disk state (rw layer, container root).
In practice this causes a lot of issues with leaked data sitting on
disk that users are not able to clean up themselves.
This is particularly a problem while the `EBUSY` errors on remove are so
prevalent, so for now let's not keep this behavior.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Instead of forcing users to manually configure a block device to use
with devmapper, this gives the user the option to let the devmapper
driver configure a device for them.
Adds several new options to the devmapper storage-opts:
- dm.directlvm_device="" - path to the block device to configure for
direct-lvm
- dm.thinp_percent=95 - sets the percentage of space to use for
storage from the passed in block device
- dm.thinp_metapercent=1 - sets the percentage of space to use for metadata
storage from the passed-in block device
- dm.thinp_autoextend_threshold=80 - sets the threshold for when `lvm`
should automatically extend the thin pool as a percentage of the total
storage space
- dm.thinp_autoextend_percent=20 - sets the percentage to increase the
thin pool by when an autoextend is triggered.
Defaults are taken from
[here](https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#/configure-direct-lvm-mode-for-production)
The only option that is required is `dm.directlvm_device` for docker to
set everything up.
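For example (the device path is only an example; everything except
dm.directlvm_device can be left out to use the defaults listed above):

    dockerd --storage-driver devicemapper \
      --storage-opt dm.directlvm_device=/dev/xvdf \
      --storage-opt dm.thinp_percent=95 \
      --storage-opt dm.thinp_metapercent=1 \
      --storage-opt dm.thinp_autoextend_threshold=80 \
      --storage-opt dm.thinp_autoextend_percent=20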
Changes to these settings are not currently supported and will error
out.
Future work could support allowing changes to these values.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
If initDevmapper fails after creating the thin-pool, the thin-pool will not be
removed; this makes it impossible to use the same LVM setup to create another
thin-pool.
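One way to express that cleanup (a sketch with hypothetical helper names,
assuming err is the function's named return value):

    // remove the freshly created thin-pool if a later init step fails
    defer func() {
        if err != nil {
            if cerr := deletePool(poolName); cerr != nil {
                logrus.Warnf("devmapper: error cleaning up thin-pool %s: %v", poolName, cerr)
            }
        }
    }()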
Signed-off-by: Lei Jitang <leijitang@huawei.com>