All LVM actions in the devicemapper library are asynchronous, involving a
call to a task enqueue function (dm_run_task) and a wait on a resultant udev
event (UdevWait). Currently devmapper.go defers all calls to UdevWait, which
discards the return value. While an error message is still written to the
log (if debugging is enabled), the calling thread continues as if no error
had occurred, leading to subsequent errors and significant confusion when
debugging. Given that there is no risk of panic between the task submission
and the wait operation, it is more reasonable to perform the UdevWait inline
at the end of any given lvm action, so that errors can be caught and
returned before docker continues and creates additional failures.
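A minimal sketch of the pattern change (the helper names and signatures
below are illustrative stand-ins, not the actual devmapper.go code):

    package main

    import "fmt"

    func runTask(cookie *uint) error  { return nil } // stands in for dm_run_task
    func UdevWait(cookie *uint) error { return nil } // stands in for the udev wait

    // Before: the wait was deferred, so its error was discarded and the
    // caller continued as if the action had succeeded.
    func activateDeviceDeferred(cookie *uint) error {
        defer UdevWait(cookie) // error silently dropped
        return runTask(cookie)
    }

    // After: the wait runs inline at the end of the action, so a failed
    // wait is returned before anything else can build on the bad state.
    func activateDeviceInline(cookie *uint) error {
        if err := runTask(cookie); err != nil {
            return err
        }
        if err := UdevWait(cookie); err != nil {
            return fmt.Errorf("udev wait failed for cookie %d: %v", *cookie, err)
        }
        return nil
    }

    func main() {
        if err := activateDeviceInline(new(uint)); err != nil {
            fmt.Println(err)
        }
    }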
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Currently, the devicemapper library sets cookies to correlate wait
operations, and these must be unique (as the lvm2 library doesn't detect
duplicate cookies). The current method of cookie generation is to take the
address of a cookie variable. However, because the variable is declared on
the stack, execution patterns can lead to the cookie variable being declared
at the same stack location, which results in a high likelihood of duplicate
cookie use. That in turn can lead to various odd lvm behaviors which are
hard to track down (object use before create, duplicate completions, etc.).
Let's guarantee that the cookie we generate is unique by declaring it on the
heap instead. This guarantees that the address of the variable won't be
reused until the UdevWait operation completes and drops its reference to it,
at which point the GC can reclaim it.
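A self-contained sketch of the distinction (setCookie is an illustrative
stand-in for the devicemapper call that records the cookie's address; this
is not the actual code):

    package main

    import "fmt"

    var pending []*uint // references held until the matching UdevWait completes

    func setCookie(cookie *uint) { pending = append(pending, cookie) }

    func lvmAction() *uint {
        // Before: `var cookie uint; setCookie(&cookie)` took the address of
        // a stack-declared variable, which can repeat across calls. After:
        // an explicit allocation, so each action's cookie address stays
        // unique until UdevWait drops its reference and the GC reclaims it.
        cookie := new(uint)
        setCookie(cookie)
        return cookie
    }

    func main() {
        a, b := lvmAction(), lvmAction()
        fmt.Println(a != b) // true: distinct allocations, no duplicate cookies
    }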
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
If a wait event fails while performing a devicemapper operation, it would be
good to report, in addition to the cookie being waited on, the error that
was returned from the lvm2 library.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Before this, if `forceRemove` was set, the container data would be removed
no matter what, even when there were issues removing the container's on-disk
state (rw layer, container root).
In practice this causes a lot of issues with leaked data sitting on disk
that users are not able to clean up themselves.
This is particularly a problem while the `EBUSY` errors on remove are so
prevalent, so for now let's drop this behavior.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This is analogous to `docker run --cidfile=FILE` and writes the digest of
the newly built image to the named file. This is intended to be used by
build systems which want to avoid tagging (perhaps because they are in CI or
otherwise want to avoid fixed names which can clash) by enabling e.g.
Makefile constructs like:
    image.id: Dockerfile
    	docker build --iidfile=image.id .

    do-some-more-stuff: image.id
    	do-stuff-with <image.id
Currently the only way to achieve this is to use `docker build -q` and capture
the stdout, but at the expense of losing the build output.
In non-silent mode (without `-q`) with API >= v1.29 the caller will now see a
`JSONMessage` with the `Aux` field containing a `types.BuildResult` in the
output stream for each image/layer produced during the build, with the final
one being the end product. Having all of the intermediate images might be
interesting in some cases.
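A hedged sketch of how a caller might pull the final image ID out of that
stream (import paths follow the docker/docker source tree; error handling is
trimmed):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/pkg/jsonmessage"
    )

    func main() {
        dec := json.NewDecoder(os.Stdin) // e.g. the build response body
        var imageID string
        for {
            var msg jsonmessage.JSONMessage
            if err := dec.Decode(&msg); err != nil {
                break // io.EOF, or a malformed stream
            }
            if msg.Aux == nil {
                continue // ordinary progress message
            }
            var result types.BuildResult
            if err := json.Unmarshal(*msg.Aux, &result); err == nil {
                imageID = result.ID // the last BuildResult is the end product
            }
        }
        fmt.Println(imageID)
    }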
In silent mode (with `-q`) there is no change: on success the only output is
the resulting image digest, as it was previously.
There was no wrapper to output just an Aux section without enclosing it in a
Progress, so add one here.
Added some tests to the integration CLI suite.
Signed-off-by: Ian Campbell <ian.campbell@docker.com>
Previously, only perm-related bits were preserved when rewriting FileMode in
tar entries on Windows. This had the nasty side effect of tarsum returning
different values for a tar file produced on Windows versus one produced on
Linux.
This fixes the issue, and paves the way for incremental build context to
work in hybrid contexts.
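A sketch of the approach (close in spirit to the pkg/archive change, though
not necessarily the exact code): split the mode into its permission part and
everything else, normalize only the permissions, and keep the type bits
(directory, symlink, setuid, ...) intact:

    package main

    import (
        "fmt"
        "os"
    )

    func chmodTarEntry(perm os.FileMode) os.FileMode {
        permPart := perm & os.ModePerm    // rwx bits only
        noPermPart := perm &^ os.ModePerm // directory/symlink/setuid/... bits
        permPart |= 0111                  // normalize: make everything executable
        permPart &= 0755
        return noPermPart | permPart
    }

    func main() {
        // The old `perm &= 0755` would have dropped os.ModeDir here.
        fmt.Println(chmodTarEntry(os.ModeDir | 0644)) // drwxr-xr-x
    }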
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
Don't error if no group is specified, as this was the prior API. Also, don't
return a docker-specific error message, as this is in `/pkg` and used by
other projects. Just set the default group to the current user/group
consuming the package.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
This fixes issues where the underlying filesystem may be disconnected and
attempting to unmount may cause a hang.
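One common way to avoid blocking on an unreachable filesystem is a lazy
(detaching) unmount; a minimal Linux sketch of that general technique, which
may or may not match the exact change made here:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        target := "/mnt/example" // hypothetical mount point
        // MNT_DETACH detaches the mount point immediately and lets the
        // kernel clean up references lazily, so the call does not block
        // waiting on a disconnected filesystem.
        if err := syscall.Unmount(target, syscall.MNT_DETACH); err != nil {
            fmt.Println("unmount:", err)
        }
    }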
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
I noticed that we're using a homegrown package for assertions. The
functions are extremely similar to testify, but with enough slight
differences to be confusing (for example, Equal takes its arguments in a
different order). We already vendor testify, and it's used in a few
places by tests.
I also found some problems with pkg/testutil/assert. For example, the
NotNil function seems to be broken. It checks the argument against
"nil", which only works for an interface. If you pass in a nil map or
slice, the equality check will fail.
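A minimal illustration of that typed-nil pitfall (standalone, not the
pkg/testutil/assert code):

    package main

    import "fmt"

    func main() {
        var m map[string]int // a nil map
        var v interface{} = m

        // Once stored in an interface, the value carries a (type, nil) pair,
        // so comparing the interface against nil reports false. A NotNil
        // check built on `arg == nil` therefore passes for a nil map/slice.
        fmt.Println(m == nil) // true
        fmt.Println(v == nil) // false
    }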
In the interest of avoiding NIH, I'm proposing replacing
pkg/testutil/assert with testify. The test code looks almost the same,
but we avoid the confusion of having two similar but slightly different
assertion packages, and having to maintain our own package instead of
using a commonly-used one.
In the process, I found a few places where the tests should halt if an
assertion fails, so I've made those cases (that I noticed) use "require"
instead of "assert", and I've vendored the "require" package from
testify alongside the already-present "assert" package.
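For reference, the practical difference in standard testify usage
(doSomething is a hypothetical function under test):

    package example

    import (
        "testing"

        "github.com/stretchr/testify/assert"
        "github.com/stretchr/testify/require"
    )

    func doSomething() error { return nil } // hypothetical

    func TestExample(t *testing.T) {
        err := doSomething()
        assert.NoError(t, err)  // records a failure but the test keeps running
        require.NoError(t, err) // records a failure and halts the test, so
                                // code that depends on err == nil never runs
    }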
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Change "service create" and "service update" to wait until the creation
or update finishes, when --detach=false is specified. Show progress bars
for the overall operation and for each individual task (when there are a
small enough number of tasks), unless "-q" / "--quiet" is specified.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>