Files in the .wh..wh.plnk directory are ignored, but other files
inside the tarfile can be hardlinks to these files. This is not
something that normally happens, as on aufs unmount such files are
supposed to be dropped via the "auplink" tool, yet images on the index
(such as shipyard/shipyard, e.g. layer
f73c835af6d58b6fc827b400569f79a8f28e54f5bb732be063e1aacefbc374d0)
contain such files.
We handle these by extracting such files to a temporary directory
and resolving the hardlinks via the temporary files.
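A minimal sketch of the approach using Go's archive/tar (applyLayer
and the exact handling here are illustrative, not the actual docker
code):

    package archive

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    // applyLayer shows only the plink handling: payloads under
    // .wh..wh.plnk are stashed in a temp dir, and hardlinks pointing
    // at them are resolved by copying the stashed contents instead of
    // creating a link to a file that was never extracted.
    func applyLayer(r io.Reader, dest, tmpDir string) error {
        tr := tar.NewReader(r)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                return err
            }
            switch {
            case strings.HasPrefix(hdr.Name, ".wh..wh.plnk/"):
                // Don't extract into the image, but keep the payload
                // around so later hardlinks can be resolved.
                f, err := os.Create(filepath.Join(tmpDir, filepath.Base(hdr.Name)))
                if err != nil {
                    return err
                }
                _, err = io.Copy(f, tr)
                f.Close()
                if err != nil {
                    return err
                }
            case hdr.Typeflag == tar.TypeLink && strings.HasPrefix(hdr.Linkname, ".wh..wh.plnk/"):
                // Resolve the hardlink from the temporary copy.
                src, err := os.Open(filepath.Join(tmpDir, filepath.Base(hdr.Linkname)))
                if err != nil {
                    return err
                }
                dst, err := os.Create(filepath.Join(dest, hdr.Name))
                if err != nil {
                    src.Close()
                    return err
                }
                _, err = io.Copy(dst, src)
                src.Close()
                dst.Close()
                if err != nil {
                    return err
                }
            default:
                // ... extract all other entries normally ...
            }
        }
        return nil
    }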
This fixes https://github.com/dotcloud/docker/issues/3884
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
All archives that are created from somewhere generally have to be closed,
because at some point there is a file or a pipe or something that backs
them. So, we make archive.Archive a ReadCloser. However, code consuming
archives does not typically close them, so we add an archive.ArchiveReader
and use that when we're only reading.
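Concretely, the two types look roughly like this (a sketch of the
archive package definitions as described above):

    package archive

    import "io"

    // Archive is returned by Tar and friends; the consumer owns it
    // and must close it so the backing file or pipe is released.
    type Archive io.ReadCloser

    // ArchiveReader is what a function should accept when it only
    // reads the archive and is not responsible for closing it.
    type ArchiveReader io.Reader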
We then change all the Tar/Archive places to create ReadClosers, and to properly
close them everywhere.
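A typical call site then looks something like this (the archive.Tar
signature here is assumed for illustration):

    arch, err := archive.Tar(path, archive.Uncompressed)
    if err != nil {
        return err
    }
    defer arch.Close()
    // Pass arch wherever an archive.ArchiveReader is accepted; the
    // read-only consumer never closes it, the creator does.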
As an added bonus we can use ReadCloserWrapper rather than EofReader in
several places, which is good as EofReader doesn't always work right. For
instance, many compression schemes like gzip know they have hit EOF before
having read the EOF from the stream, so the EofReader never sees an EOF.
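For reference, a ReadCloserWrapper is just a reader paired with an
explicit close callback, roughly:

    package utils

    import "io"

    // ReadCloserWrapper turns any io.Reader into an io.ReadCloser
    // with a caller-supplied close function, e.g. so that closing a
    // wrapped gzip reader also closes the underlying stream even when
    // the stream's own EOF was never read.
    type ReadCloserWrapper struct {
        io.Reader
        closer func() error
    }

    func (r *ReadCloserWrapper) Close() error {
        return r.closer()
    }

    func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser {
        return &ReadCloserWrapper{Reader: r, closer: closer}
    }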
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
When pulling from a registry we get a compressed tar archive, so
we need to wrap the stream in the right kind of decompression reader.
Unfortunately Go doesn't have an xz decompression package, but I
don't think any docker layers use that atm anyway.
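A minimal sketch of picking the decompressor by sniffing magic bytes
(DecompressStream is an illustrative name, not necessarily the real
helper):

    package archive

    import (
        "bufio"
        "bytes"
        "compress/bzip2"
        "compress/gzip"
        "fmt"
        "io"
    )

    // DecompressStream peeks at the stream's magic bytes and wraps it
    // in the matching decompression reader.
    func DecompressStream(r io.Reader) (io.Reader, error) {
        buf := bufio.NewReader(r)
        magic, err := buf.Peek(6)
        if err != nil && err != io.EOF {
            return nil, err
        }
        switch {
        case bytes.HasPrefix(magic, []byte{0x1f, 0x8b}): // gzip
            return gzip.NewReader(buf)
        case bytes.HasPrefix(magic, []byte("BZh")): // bzip2
            return bzip2.NewReader(buf), nil
        case bytes.HasPrefix(magic, []byte{0xfd, '7', 'z', 'X', 'Z', 0x00}): // xz
            return nil, fmt.Errorf("xz decompression is not supported")
        default:
            // No known magic; assume an uncompressed tar stream.
            return buf, nil
        }
    }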
The TestLookupImage test seems to use a layer that contains
/etc/postgres/postgres.conf but no entry for the parent directory
/etc/postgres. To handle this we ensure that the parent directory
always exists, creating it if necessary.
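The fix amounts to something like this before creating each entry
(ensureParent is a hypothetical helper and the mode bits are an
assumption):

    package archive

    import (
        "os"
        "path/filepath"
    )

    // ensureParent creates the missing parent directory of path so an
    // entry can be extracted even when the layer tarball lacks an
    // explicit entry for the directory itself.
    func ensureParent(path string) error {
        parent := filepath.Dir(path)
        if _, err := os.Lstat(parent); err != nil && os.IsNotExist(err) {
            return os.MkdirAll(parent, 0755)
        }
        return nil
    }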
Rather than calling out to tar we use the golang tar parser to
directly extract the tar files. This has two major advantages
(illustrated in the sketch after this list):
1) We're able to replace an existing directory with a file in the
new layer. This currently breaks with the external tar, since
it refuses to recursively remove the destination directory in
this case, and there are no options to make it do that.
2) We avoid extracting the whiteout files just to later remove them.
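A sketch of that extraction loop (untarLayer and the surrounding
handling are illustrative; the real code also restores modes, owners
and times):

    package archive

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    func untarLayer(r io.Reader, dest string) error {
        tr := tar.NewReader(r)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                return err
            }
            base := filepath.Base(hdr.Name)
            if strings.HasPrefix(base, ".wh.") {
                // (2) Whiteouts mark deletions: remove the target
                // directly instead of extracting the marker file and
                // deleting it afterwards.
                name := filepath.Join(filepath.Dir(hdr.Name), strings.TrimPrefix(base, ".wh."))
                if err := os.RemoveAll(filepath.Join(dest, name)); err != nil {
                    return err
                }
                continue
            }
            path := filepath.Join(dest, hdr.Name)
            if fi, err := os.Lstat(path); err == nil && fi.IsDir() && hdr.Typeflag != tar.TypeDir {
                // (1) The new layer replaces a directory with a
                // non-directory: recursively remove the old directory
                // first, which external tar refuses to do.
                if err := os.RemoveAll(path); err != nil {
                    return err
                }
            }
            // ... create the file/dir/symlink from hdr and tr ...
        }
        return nil
    }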