mirror of https://github.com/moby/moby.git (synced 2022-11-09 12:21:53 -05:00)

vendor: update buildkit to leases support

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

parent 76dbd884d3
commit fb1601d5ab
15 changed files with 1419 additions and 445 deletions
@@ -26,7 +26,7 @@ github.com/imdario/mergo 1afb36080aec31e0d1528973ebe6
golang.org/x/sync e225da77a7e68af35c70ccbf71af2b83e6acac3c

# buildkit
github.com/moby/buildkit f7042823e340d38d1746aa675b83d1aca431cee3
github.com/moby/buildkit 4f4e03067523b2fc5ca2f17514a5e75ad63e02fb
github.com/tonistiigi/fsutil 3bbb99cdbd76619ab717299830c60f6f2a533a6b
github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746
github.com/opentracing/opentracing-go 1361b9cd60be79c4c3a7fa9841b3c132e40066a7
360 vendor/github.com/moby/buildkit/README.md (generated, vendored)
@@ -1,6 +1,6 @@
[![asciicinema example](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU.png)](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU)

## BuildKit
# BuildKit

[![GoDoc](https://godoc.org/github.com/moby/buildkit?status.svg)](https://godoc.org/github.com/moby/buildkit/client/llb)
[![Build Status](https://travis-ci.org/moby/buildkit.svg?branch=master)](https://travis-ci.org/moby/buildkit)
@@ -25,49 +25,107 @@ Read the proposal from https://github.com/moby/moby/issues/32925

Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317

Join `#buildkit` channel on [Docker Community Slack](http://dockr.ly/slack)

:information_source: If you are visiting this repo for the usage of experimental Dockerfile features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`, please refer to [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).

### Used by

:information_source: [BuildKit has been integrated into `docker build` since Docker 18.06.](https://docs.docker.com/develop/develop-images/build_enhancements/)
You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Used by](#used-by)
- [Quick start](#quick-start)
  - [Starting the `buildkitd` daemon:](#starting-the-buildkitd-daemon)
  - [Exploring LLB](#exploring-llb)
  - [Exploring Dockerfiles](#exploring-dockerfiles)
    - [Building a Dockerfile with `buildctl`](#building-a-dockerfile-with-buildctl)
    - [Building a Dockerfile using external frontend:](#building-a-dockerfile-using-external-frontend)
    - [Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`](#building-a-dockerfile-with-experimental-features-like-run---mounttypebindcachetmpfssecretssh)
  - [Output](#output)
    - [Registry](#registry)
    - [Local directory](#local-directory)
    - [Docker tarball](#docker-tarball)
    - [OCI tarball](#oci-tarball)
    - [containerd image store](#containerd-image-store)
- [Cache](#cache)
  - [Garbage collection](#garbage-collection)
  - [Export cache](#export-cache)
    - [Inline (push image and cache together)](#inline-push-image-and-cache-together)
    - [Registry (push image and cache separately)](#registry-push-image-and-cache-separately)
    - [Local directory](#local-directory-1)
    - [`--export-cache` options](#--export-cache-options)
    - [`--import-cache` options](#--import-cache-options)
  - [Consistent hashing](#consistent-hashing)
- [Expose BuildKit as a TCP service](#expose-buildkit-as-a-tcp-service)
  - [Load balancing](#load-balancing)
- [Containerizing BuildKit](#containerizing-buildkit)
  - [Kubernetes](#kubernetes)
  - [Daemonless](#daemonless)
- [Opentracing support](#opentracing-support)
- [Running BuildKit without root privileges](#running-buildkit-without-root-privileges)
- [Building multi-platform images](#building-multi-platform-images)
- [Contributing](#contributing)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Used by

BuildKit is used by the following projects:

- [Moby & Docker](https://github.com/moby/moby/pull/37151)
- [Moby & Docker](https://github.com/moby/moby/pull/37151) (`DOCKER_BUILDKIT=1 docker build`)
- [img](https://github.com/genuinetools/img)
- [OpenFaaS Cloud](https://github.com/openfaas/openfaas-cloud)
- [container build interface](https://github.com/containerbuilding/cbi)
- [Knative Build Templates](https://github.com/knative/build-templates)
- [Tekton Pipelines](https://github.com/tektoncd/catalog) (formerly [Knative Build Templates](https://github.com/knative/build-templates))
- [the Sanic build tool](https://github.com/distributed-containers-inc/sanic)
- [vab](https://github.com/stellarproject/vab)
- [Rio](https://github.com/rancher/rio)
- [PouchContainer](https://github.com/alibaba/pouch)
- [Docker buildx](https://github.com/docker/buildx)

### Quick start
## Quick start

Dependencies:
:information_source: For Kubernetes deployments, see [`examples/kubernetes`](./examples/kubernetes).

BuildKit is composed of the `buildkitd` daemon and the `buildctl` client.
While the `buildctl` client is available for Linux, macOS, and Windows, the `buildkitd` daemon is currently only available for Linux.

The `buildkitd` daemon requires the following components to be installed:
- [runc](https://github.com/opencontainers/runc)
- [containerd](https://github.com/containerd/containerd) (if you want to use the containerd worker)

The following command installs `buildkitd` and `buildctl` to `/usr/local/bin`:
The latest binaries of BuildKit are available [here](https://github.com/moby/buildkit/releases) for Linux, macOS, and Windows.

```bash
$ make && sudo make install
```
A [Homebrew package](https://formulae.brew.sh/formula/buildkit) (unofficial) is available for macOS.
```console
$ brew install buildkit
```

You can also use `make binaries-all` to prepare `buildkitd.containerd_only` and `buildkitd.oci_only`.
To build BuildKit from source, see [`.github/CONTRIBUTING.md`](./.github/CONTRIBUTING.md).

#### Starting the buildkitd daemon:
### Starting the `buildkitd` daemon:

You need to run `buildkitd` as the root user on the host.

```bash
buildkitd --debug --root /var/lib/buildkit
$ sudo buildkitd
```

To run `buildkitd` as a non-root user, see [`docs/rootless.md`](docs/rootless.md).

The `buildkitd` daemon supports two worker backends: OCI (runc) and containerd.

By default, the OCI (runc) worker is used. You can set `--oci-worker=false --containerd-worker=true` to use the containerd worker.

We are open to adding more backends.

#### Exploring LLB
The `buildkitd` daemon listens for gRPC API requests on `/run/buildkit/buildkitd.sock` by default, but you can also use TCP sockets.
See [Expose BuildKit as a TCP service](#expose-buildkit-as-a-tcp-service).

### Exploring LLB

BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running as part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
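The "LLB as a content-addressed dependency graph" idea described above can be illustrated with a stdlib-only sketch. This is not the real LLB encoding (that lives in `solver/pb/ops.proto` as protobuf messages); the `op` type and `digestOf` helper here are hypothetical stand-ins that only show why identical subgraphs collapse to the same cache key.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// op is a toy stand-in for an LLB vertex: an action plus the digests of the
// vertices it depends on. Real LLB ops are protobuf messages; this only
// illustrates the content-addressed DAG idea.
type op struct {
	action string
	inputs []string // digests of input ops
}

// digestOf derives a stable identifier from the op's content, so two
// identical definitions of the same step collapse to one cache key.
func digestOf(o op) string {
	h := sha256.New()
	fmt.Fprintf(h, "%s|%v", o.action, o.inputs)
	return fmt.Sprintf("sha256:%x", h.Sum(nil))
}

func main() {
	base := digestOf(op{action: "source docker-image://alpine"})
	run1 := digestOf(op{action: "exec apk add git", inputs: []string{base}})
	run2 := digestOf(op{action: "exec apk add git", inputs: []string{base}})
	fmt.Println(run1 == run2) // identical subgraphs share a cache entry: true
}
```

Because a vertex's digest folds in the digests of its inputs, a change anywhere below a vertex changes its identity, which is what makes LLB caching safe.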
@@ -76,49 +134,23 @@ BuildKit builds are based on a binary intermediate format called LLB that is used
- Efficiently cacheable
- Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)

See [`solver/pb/ops.proto`](./solver/pb/ops.proto) for the format definition.
See [`solver/pb/ops.proto`](./solver/pb/ops.proto) for the format definition, and see [`./examples/README.md`](./examples/README.md) for example LLB applications.

Currently, following high-level languages has been implemented for LLB:
Currently, the following high-level languages have been implemented for LLB:

- Dockerfile (See [Exploring Dockerfiles](#exploring-dockerfiles))
- [Buildpacks](https://github.com/tonistiigi/buildkit-pack)
- [Mockerfile](https://matt-rickard.com/building-a-new-dockerfile-frontend/)
- [Gockerfile](https://github.com/po3rin/gockerfile)
- (open a PR to add your own language)

For understanding the basics of LLB, the `examples/buildkit*` directory contains scripts that define how to build different configurations of BuildKit itself and its dependencies using the `client` package. Running one of these scripts generates a protobuf definition of a build graph. Note that the script itself does not execute any steps of the build.
### Exploring Dockerfiles

You can use `buildctl debug dump-llb` to see what data is in this definition. Add `--dot` to generate dot layout.
Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (`gateway.v0`) that allows using any image as a frontend.

```bash
go run examples/buildkit0/buildkit.go \
	| buildctl debug dump-llb \
	| jq .
```
During development, the Dockerfile frontend (`dockerfile.v0`) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.

To start building, use the `buildctl build` command. The example script accepts a `--with-containerd` flag to choose whether containerd binaries and support should be included in the end result as well.

```bash
go run examples/buildkit0/buildkit.go \
	| buildctl build
```

`buildctl build` will show an interactive progress bar by default while the build job is running. If the path to the trace file is specified, the generated trace file will contain all information about the timing of the individual steps and logs.

Different versions of the example scripts show different ways of describing the build definition for this project to show the capabilities of the library. New versions have been added when new features have become available.

- `./examples/buildkit0` - uses only exec operations, defines a full stage per component.
- `./examples/buildkit1` - cloning git repositories has been separated for extra concurrency.
- `./examples/buildkit2` - uses git sources directly instead of running `git clone`, allowing better performance and much safer caching.
- `./examples/buildkit3` - allows using local source files for separate components, e.g. `./buildkit3 --runc=local | buildctl build --local runc-src=some/local/path`
- `./examples/dockerfile2llb` - can be used to convert a Dockerfile to LLB for debugging purposes
- `./examples/gobuild` - shows how to use nested invocation to generate LLB for Go package internal dependencies

#### Exploring Dockerfiles

Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (gateway.v0) that allows using any image as a frontend.

During development, the Dockerfile frontend (dockerfile.v0) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.

##### Building a Dockerfile with `buildctl`
#### Building a Dockerfile with `buildctl`

```bash
buildctl build \
@@ -136,22 +168,7 @@ buildctl build \

`--local` exposes local source files from the client to the builder. `context` and `dockerfile` are the names under which the Dockerfile frontend looks for the build context and the Dockerfile location.

##### build-using-dockerfile utility

For people familiar with the `docker build` command, there is an example wrapper utility in `./examples/build-using-dockerfile` that allows building Dockerfiles with BuildKit using a syntax similar to `docker build`.

```bash
go build ./examples/build-using-dockerfile \
	&& sudo install build-using-dockerfile /usr/local/bin

build-using-dockerfile -t myimage .
build-using-dockerfile -t mybuildkit -f ./hack/dockerfiles/test.Dockerfile .

# build-using-dockerfile will automatically load the resulting image to Docker
docker inspect myimage
```

##### Building a Dockerfile using [external frontend](https://hub.docker.com/r/docker/dockerfile/tags/):
#### Building a Dockerfile using external frontend:

External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in `./frontend/dockerfile/cmd/dockerfile-frontend` but will move out of this repository in the future ([#163](https://github.com/moby/buildkit/issues/163)). For automatic builds from the master branch of this repository, the `docker/dockerfile-upstream:master` or `docker/dockerfile-upstream:master-experimental` image can be used.
@@ -168,7 +185,7 @@ buildctl build \
	--opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
```

##### Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`
#### Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`

See [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).
@@ -176,24 +193,26 @@ See [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md)

By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.

##### Exporting resulting image to containerd

The containerd worker needs to be used.

```bash
buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls
```

##### Push resulting image to registry
#### Registry

```bash
buildctl build ... --output type=image,name=docker.io/username/image,push=true
```

If credentials are required, `buildctl` will attempt to read the Docker configuration file.
To export and import the cache along with the image, you need to specify `--export-cache type=inline` and `--import-cache type=registry,ref=...`.
See [Export cache](#export-cache).

##### Exporting build result back to client
```bash
buildctl build ... \
	--output type=image,name=docker.io/username/image,push=true \
	--export-cache type=inline \
	--import-cache type=registry,ref=docker.io/username/image
```

If credentials are required, `buildctl` will attempt to read the Docker configuration file `$DOCKER_CONFIG/config.json`.
`$DOCKER_CONFIG` defaults to `~/.docker`.

#### Local directory

The local client will copy the files directly to the client. This is useful if BuildKit is being used for building something other than container images.
@@ -222,70 +241,150 @@ buildctl build ... --output type=tar,dest=out.tar
buildctl build ... --output type=tar > out.tar
```

##### Exporting built image to Docker
#### Docker tarball

```bash
# exported tarball is also compatible with OCI spec
buildctl build ... --output type=docker,name=myimage | docker load
```

##### Exporting [OCI Image Format](https://github.com/opencontainers/image-spec) tarball to client
#### OCI tarball

```bash
buildctl build ... --output type=oci,dest=path/to/output.tar
buildctl build ... --output type=oci > output.tar
```
#### containerd image store

### Exporting/Importing build cache (not image itself)

#### To/From registry
The containerd worker needs to be used.

```bash
buildctl build ... --export-cache type=registry,ref=localhost:5000/myrepo:buildcache
buildctl build ... --import-cache type=registry,ref=localhost:5000/myrepo:buildcache
buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls
```

#### To/From local filesystem
To change the containerd namespace, you need to change `worker.containerd.namespace` in [`/etc/buildkit/buildkitd.toml`](./docs/buildkitd.toml.md).

```bash
buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir
```

The directory layout conforms to OCI Image Spec v1.0.
## Cache

#### `--export-cache` options

- `mode=min` (default): only export layers for the resulting image
- `mode=max`: export all the layers of all intermediate steps
- `ref=docker.io/user/image:tag`: reference for the `registry` cache exporter
- `dest=path/to/output-dir`: directory for the `local` cache exporter

#### `--import-cache` options

- `ref=docker.io/user/image:tag`: reference for the `registry` cache importer
- `src=path/to/input-dir`: directory for the `local` cache importer
- `digest=sha256:deadbeef`: digest of the manifest list to import for the `local` cache importer. Defaults to the digest of the "latest" tag in `index.json`

### Other

#### View build cache
To show the local build cache (`/var/lib/buildkit`):

```bash
buildctl du -v
```

#### Show enabled workers

To prune the local build cache:
```bash
buildctl debug workers -v
buildctl prune
```

### Running containerized buildkit
### Garbage collection

BuildKit can also be used by running the `buildkitd` daemon inside a Docker container and accessing it remotely. The client tool `buildctl` is also available for Mac and Windows.
See [`./docs/buildkitd.toml.md`](./docs/buildkitd.toml.md).

We provide `buildkitd` container images as [`moby/buildkit`](https://hub.docker.com/r/moby/buildkit/tags/):
### Export cache

BuildKit supports the following cache exporters:
* `inline`: embed the cache into the image, and push them to the registry together
* `registry`: push the image and the cache separately
* `local`: export to a local directory

In most cases you want to use the `inline` cache exporter.
However, note that the `inline` cache exporter only supports `min` cache mode.
To enable `max` cache mode, push the image and the cache separately by using the `registry` cache exporter.

#### Inline (push image and cache together)

```bash
buildctl build ... \
	--output type=image,name=docker.io/username/image,push=true \
	--export-cache type=inline \
	--import-cache type=registry,ref=docker.io/username/image
```

Note that the inline cache is not imported unless `--import-cache type=registry,ref=...` is provided.

:information_source: Docker-integrated BuildKit (`DOCKER_BUILDKIT=1 docker build`) and `docker buildx` require
`--build-arg BUILDKIT_INLINE_CACHE=1` to be specified to enable the `inline` cache exporter.
However, the standalone `buildctl` does NOT require `--opt build-arg:BUILDKIT_INLINE_CACHE=1`, and the build-arg is simply ignored.

#### Registry (push image and cache separately)

```bash
buildctl build ... \
	--output type=image,name=localhost:5000/myrepo:image,push=true \
	--export-cache type=registry,ref=localhost:5000/myrepo:buildcache \
	--import-cache type=registry,ref=localhost:5000/myrepo:buildcache
```

#### Local directory

```bash
buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir,digest=sha256:deadbeef
```

The directory layout conforms to OCI Image Spec v1.0.

Currently, you need to specify the `digest` of the manifest list to import for the `local` cache importer.
This is planned to default to the digest of the "latest" tag in `index.json` in the future.

#### `--export-cache` options
- `type`: `inline`, `registry`, or `local`
- `mode=min` (default): only export layers for the resulting image
- `mode=max`: export all the layers of all intermediate steps. Not supported for the `inline` cache exporter.
- `ref=docker.io/user/image:tag`: reference for the `registry` cache exporter
- `dest=path/to/output-dir`: directory for the `local` cache exporter

#### `--import-cache` options
- `type`: `registry` or `local`. Use `registry` to import `inline` cache.
- `ref=docker.io/user/image:tag`: reference for the `registry` cache importer
- `src=path/to/input-dir`: directory for the `local` cache importer
- `digest=sha256:deadbeef`: digest of the manifest list to import for the `local` cache importer.

### Consistent hashing

If you have multiple BuildKit daemon instances but you don't want to use a registry for sharing the cache across the cluster,
consider client-side load balancing using consistent hashing.

See [`./examples/kubernetes/consistenthash`](./examples/kubernetes/consistenthash).
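The consistent-hashing idea behind such client-side load balancing can be sketched with the Go standard library. This is an illustrative ring, not the actual `examples/kubernetes/consistenthash` implementation; the daemon names and replica count are made up.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// ring is a minimal consistent-hash ring: each daemon gets several virtual
// points on a 64-bit circle, and a key is routed to the first point at or
// after its own hash. Adding or removing a daemon only remaps nearby keys,
// so most cache keys keep hitting the same buildkitd instance.
type ring struct {
	points []uint64
	owner  map[uint64]string
}

func hash64(s string) uint64 {
	sum := sha256.Sum256([]byte(s))
	return binary.BigEndian.Uint64(sum[:8])
}

func newRing(daemons []string, replicas int) *ring {
	r := &ring{owner: map[uint64]string{}}
	for _, d := range daemons {
		for i := 0; i < replicas; i++ {
			p := hash64(fmt.Sprintf("%s#%d", d, i))
			r.points = append(r.points, p)
			r.owner[p] = d
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// pick returns the daemon responsible for a given cache key.
func (r *ring) pick(key string) string {
	h := hash64(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the circle
	}
	return r.owner[r.points[i]]
}

func main() {
	r := newRing([]string{"buildkitd-0", "buildkitd-1", "buildkitd-2"}, 100)
	fmt.Println(r.pick("github.com/moby/buildkit#Dockerfile"))
}
```

A client would use the build-context identity (repo, Dockerfile path, etc.) as the key so that repeat builds land on the daemon that already holds the cache.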
## Expose BuildKit as a TCP service

The `buildkitd` daemon can listen for gRPC API connections on a TCP socket.

It is highly recommended to create TLS certificates for both the daemon and the client (mTLS).
Enabling TCP without mTLS is dangerous because the executor containers (aka Dockerfile `RUN` containers) can call the BuildKit API as well.

```bash
buildkitd \
	--addr tcp://0.0.0.0:1234 \
	--tlscacert /path/to/ca.pem \
	--tlscert /path/to/cert.pem \
	--tlskey /path/to/key.pem
```

```bash
buildctl \
	--addr tcp://example.com:1234 \
	--tlscacert /path/to/ca.pem \
	--tlscert /path/to/clientcert.pem \
	--tlskey /path/to/clientkey.pem \
	build ...
```
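What the `--tlscacert`/`--tlscert`/`--tlskey` flags above amount to on the daemon side can be sketched with Go's `crypto/tls`. This is an illustrative helper, not buildkitd's actual TLS setup code; the paths are placeholders.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// mutualTLSConfig builds a server-side TLS config that both presents the
// daemon's certificate and requires a client certificate signed by the given
// CA - the mTLS arrangement recommended above.
func mutualTLSConfig(caPath, certPath, keyPath string) (*tls.Config, error) {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certificates found in %s", caPath)
	}
	cert, err := tls.LoadX509KeyPair(certPath, keyPath)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		// Reject any peer without a CA-signed client certificate: without
		// this, anything that can reach the TCP port (including build-time
		// RUN containers) could drive the API.
		ClientAuth: tls.RequireAndVerifyClientCert,
	}, nil
}

func main() {
	_, err := mutualTLSConfig("/path/to/ca.pem", "/path/to/cert.pem", "/path/to/key.pem")
	fmt.Println(err) // fails here unless real certificate files exist
}
```

`ClientAuth: tls.RequireAndVerifyClientCert` is the piece that makes the setup *mutual*; plain server-side TLS would still let any client connect.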
### Load balancing

`buildctl build` can be called against a randomly load-balanced pool of `buildkitd` daemons.

See also [Consistent hashing](#consistent-hashing) for client-side load balancing.

## Containerizing BuildKit

BuildKit can also be used by running the `buildkitd` daemon inside a Docker container and accessing it remotely.

We provide the container images as [`moby/buildkit`](https://hub.docker.com/r/moby/buildkit/tags/):

- `moby/buildkit:latest`: built from the latest regular [release](https://github.com/moby/buildkit/releases)
- `moby/buildkit:rootless`: same as `latest` but runs as an unprivileged user, see [`docs/rootless.md`](docs/rootless.md)
@@ -295,11 +394,17 @@ We provide `buildkitd` container images as [`moby/buildkit`](https://hub.docker.com/r/moby/buildkit/tags/):
To run the daemon in a container:

```bash
docker run -d --privileged -p 1234:1234 moby/buildkit:latest --addr tcp://0.0.0.0:1234
export BUILDKIT_HOST=tcp://0.0.0.0:1234
docker run -d --name buildkitd --privileged moby/buildkit:latest
export BUILDKIT_HOST=docker-container://buildkitd
buildctl build --help
```

### Kubernetes

For Kubernetes deployments, see [`examples/kubernetes`](./examples/kubernetes).

### Daemonless

To run the client and an ephemeral daemon in a single container ("daemonless mode"):

```bash
@@ -335,21 +440,7 @@ docker run \
	--local dockerfile=/tmp/work
```

The images can also be built locally using `./hack/dockerfiles/test.Dockerfile` (or `./hack/dockerfiles/test.buildkit.Dockerfile` if you already have BuildKit). Run `make images` to build the images as `moby/buildkit:local` and `moby/buildkit:local-rootless`.

#### Connection helpers

If you are running `moby/buildkit:master` or `moby/buildkit:master-rootless` as a Docker/Kubernetes container, you can use a special `BUILDKIT_HOST` URL for connecting to the BuildKit daemon in the container:

```bash
export BUILDKIT_HOST=docker-container://<container>
```

```bash
export BUILDKIT_HOST=kube-pod://<pod>
```

### Opentracing support
## Opentracing support

BuildKit supports OpenTracing for the buildkitd gRPC API and buildctl commands. To capture the trace to [Jaeger](https://github.com/jaegertracing/jaeger), set the `JAEGER_TRACE` environment variable to the collection address.
@@ -360,14 +451,15 @@ export JAEGER_TRACE=0.0.0.0:6831
# any buildctl command should be traced to http://127.0.0.1:16686/
```

### Supported runc version

During development, BuildKit is tested with the version of runc that is being used by the containerd repository. Please refer to [runc.md](https://github.com/containerd/containerd/blob/v1.2.1/RUNC.md) for more information.

### Running BuildKit without root privileges
## Running BuildKit without root privileges

Please refer to [`docs/rootless.md`](docs/rootless.md).

### Contributing
## Building multi-platform images

See the [`docker buildx` documentation](https://github.com/docker/buildx#building-multi-platform-images).

## Contributing

Want to contribute to BuildKit? Awesome! You can find information about contributing to this project in [`CONTRIBUTING.md`](/.github/CONTRIBUTING.md).
304 vendor/github.com/moby/buildkit/cache/manager.go (generated, vendored)
@@ -6,13 +6,19 @@ import (
	"sync"
	"time"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/diff"
	"github.com/containerd/containerd/filters"
	"github.com/containerd/containerd/snapshots"
	"github.com/containerd/containerd/gc"
	"github.com/containerd/containerd/leases"
	"github.com/docker/docker/pkg/idtools"
	"github.com/moby/buildkit/cache/metadata"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/identity"
	"github.com/moby/buildkit/snapshot"
	"github.com/opencontainers/go-digest"
	imagespecidentity "github.com/opencontainers/image-spec/identity"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
@@ -25,15 +31,20 @@ var (
)

type ManagerOpt struct {
	Snapshotter snapshot.SnapshotterBase
	Snapshotter snapshot.Snapshotter
	MetadataStore *metadata.Store
	ContentStore content.Store
	LeaseManager leases.Manager
	PruneRefChecker ExternalRefCheckerFunc
	GarbageCollect func(ctx context.Context) (gc.Stats, error)
	Applier diff.Applier
}

type Accessor interface {
	GetByBlob(ctx context.Context, desc ocispec.Descriptor, parent ImmutableRef, opts ...RefOption) (ImmutableRef, error)
	Get(ctx context.Context, id string, opts ...RefOption) (ImmutableRef, error)
	GetFromSnapshotter(ctx context.Context, id string, opts ...RefOption) (ImmutableRef, error)
	New(ctx context.Context, s ImmutableRef, opts ...RefOption) (MutableRef, error)

	New(ctx context.Context, parent ImmutableRef, opts ...RefOption) (MutableRef, error)
	GetMutable(ctx context.Context, id string) (MutableRef, error) // Rebase?
	IdentityMapping() *idtools.IdentityMapping
	Metadata(string) *metadata.StorageItem
@@ -53,7 +64,7 @@ type Manager interface {
type ExternalRefCheckerFunc func() (ExternalRefChecker, error)

type ExternalRefChecker interface {
	Exists(key string) bool
	Exists(string, []digest.Digest) bool
}

type cacheManager struct {
@@ -81,6 +92,159 @@ func NewManager(opt ManagerOpt) (Manager, error) {
	return cm, nil
}

func (cm *cacheManager) GetByBlob(ctx context.Context, desc ocispec.Descriptor, parent ImmutableRef, opts ...RefOption) (ir ImmutableRef, err error) {
	diffID, err := diffIDFromDescriptor(desc)
	if err != nil {
		return nil, err
	}
	chainID := diffID
	blobChainID := imagespecidentity.ChainID([]digest.Digest{desc.Digest, diffID})

	if desc.Digest != "" {
		if _, err := cm.ContentStore.Info(ctx, desc.Digest); err != nil {
			return nil, errors.Wrapf(err, "failed to get blob %s", desc.Digest)
		}
	}

	var p *immutableRef
	var parentID string
	if parent != nil {
		pInfo := parent.Info()
		if pInfo.ChainID == "" || pInfo.BlobChainID == "" {
			return nil, errors.Errorf("failed to get ref by blob on non-adressable parent")
		}
		chainID = imagespecidentity.ChainID([]digest.Digest{pInfo.ChainID, chainID})
		blobChainID = imagespecidentity.ChainID([]digest.Digest{pInfo.BlobChainID, blobChainID})
		p2, err := cm.Get(ctx, parent.ID(), NoUpdateLastUsed)
		if err != nil {
			return nil, err
		}
		if err := p2.Finalize(ctx, true); err != nil {
			return nil, err
		}
		parentID = p2.ID()
		p = p2.(*immutableRef)
	}

	releaseParent := false
	defer func() {
		if releaseParent || err != nil && p != nil {
			p.Release(context.TODO())
		}
	}()

	cm.mu.Lock()
	defer cm.mu.Unlock()

	sis, err := cm.MetadataStore.Search("blobchainid:" + blobChainID.String())
	if err != nil {
		return nil, err
	}

	for _, si := range sis {
		ref, err := cm.get(ctx, si.ID(), opts...)
		if err != nil && errors.Cause(err) != errNotFound {
			return nil, errors.Wrapf(err, "failed to get record %s by blobchainid", si.ID())
		}
		if p != nil {
			releaseParent = true
		}
		return ref, nil
	}

	sis, err = cm.MetadataStore.Search("chainid:" + chainID.String())
	if err != nil {
		return nil, err
	}

	var link ImmutableRef
	for _, si := range sis {
		ref, err := cm.get(ctx, si.ID(), opts...)
		if err != nil && errors.Cause(err) != errNotFound {
			return nil, errors.Wrapf(err, "failed to get record %s by chainid", si.ID())
		}
		link = ref
		break
	}

	id := identity.NewID()
	snapshotID := chainID.String()
	blobOnly := true
	if link != nil {
		snapshotID = getSnapshotID(link.Metadata())
		blobOnly = getBlobOnly(link.Metadata())
		go link.Release(context.TODO())
	}

	l, err := cm.ManagerOpt.LeaseManager.Create(ctx, func(l *leases.Lease) error {
		l.ID = id
		l.Labels = map[string]string{
			"containerd.io/gc.flat": time.Now().UTC().Format(time.RFC3339Nano),
		}
		return nil
	})
	if err != nil {
		return nil, errors.Wrap(err, "failed to create lease")
	}

	defer func() {
		if err != nil {
			if err := cm.ManagerOpt.LeaseManager.Delete(context.TODO(), leases.Lease{
				ID: l.ID,
			}); err != nil {
				logrus.Errorf("failed to remove lease: %+v", err)
			}
		}
	}()
|
||||
if err := cm.ManagerOpt.LeaseManager.AddResource(ctx, l, leases.Resource{
|
||||
ID: snapshotID,
|
||||
Type: "snapshots/" + cm.ManagerOpt.Snapshotter.Name(),
|
||||
}); err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to add snapshot %s to lease", id)
|
||||
}
|
||||
|
||||
if desc.Digest != "" {
|
||||
if err := cm.ManagerOpt.LeaseManager.AddResource(ctx, leases.Lease{ID: id}, leases.Resource{
|
||||
ID: desc.Digest.String(),
|
||||
Type: "content",
|
||||
}); err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to add blob %s to lease", id)
|
||||
}
|
||||
}
|
||||
|
||||
md, _ := cm.md.Get(id)
|
||||
|
||||
rec := &cacheRecord{
|
||||
mu: &sync.Mutex{},
|
||||
cm: cm,
|
||||
refs: make(map[ref]struct{}),
|
||||
parent: p,
|
||||
md: md,
|
||||
}
|
||||
|
||||
if err := initializeMetadata(rec, parentID, opts...); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
queueDiffID(rec.md, diffID.String())
|
||||
queueBlob(rec.md, desc.Digest.String())
|
||||
queueChainID(rec.md, chainID.String())
|
||||
queueBlobChainID(rec.md, blobChainID.String())
|
||||
queueSnapshotID(rec.md, snapshotID)
|
||||
queueBlobOnly(rec.md, blobOnly)
|
||||
queueMediaType(rec.md, desc.MediaType)
|
||||
queueCommitted(rec.md)
|
||||
|
||||
if err := rec.md.Commit(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cm.records[id] = rec
|
||||
|
||||
return rec.ref(true), nil
|
||||
}
|
||||
|
||||
// init loads all snapshots from metadata state and tries to load the records
|
||||
// from the snapshotter. If snaphot can't be found, metadata is deleted as well.
|
||||
func (cm *cacheManager) init(ctx context.Context) error {
|
||||
|
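Not part of the upstream diff: the `imagespecidentity.ChainID` composition used in `GetByBlob` above follows the OCI image-spec identity scheme, where each layer's chain ID is the digest of `parentChainID + " " + diffID`. A rough self-contained sketch of that rule (digests shortened; `chainID` here is an illustration, not the library function):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// chainID mimics the OCI image-spec identity rule: the chain ID of a layer
// list is built by hashing "parentChainID diffID" pairwise, left to right.
func chainID(diffIDs []string) string {
	if len(diffIDs) == 0 {
		return ""
	}
	chain := diffIDs[0]
	for _, d := range diffIDs[1:] {
		chain = fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(chain+" "+d)))
	}
	return chain
}

func main() {
	a := "sha256:aaaa"
	b := "sha256:bbbb"
	fmt.Println(chainID([]string{a})) // single layer: chain ID equals the diff ID
	fmt.Println(chainID([]string{a, b}))
}
```

This is why the diff only needs the parent's `ChainID`/`BlobChainID` plus the current digests to address a layer stack: the composition is associative over the parent chain.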
@@ -90,10 +254,10 @@ func (cm *cacheManager) init(ctx context.Context) error {
	}

	for _, si := range items {
		if _, err := cm.getRecord(ctx, si.ID(), false); err != nil {
			logrus.Debugf("could not load snapshot %s: %v", si.ID(), err)
		if _, err := cm.getRecord(ctx, si.ID()); err != nil {
			logrus.Debugf("could not load snapshot %s: %+v", si.ID(), err)
			cm.md.Clear(si.ID())
			// TODO: make sure content is deleted as well
			cm.LeaseManager.Delete(ctx, leases.Lease{ID: si.ID()})
		}
	}
	return nil

@@ -115,14 +279,7 @@ func (cm *cacheManager) Close() error {
func (cm *cacheManager) Get(ctx context.Context, id string, opts ...RefOption) (ImmutableRef, error) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	return cm.get(ctx, id, false, opts...)
}

// Get returns an immutable snapshot reference for ID
func (cm *cacheManager) GetFromSnapshotter(ctx context.Context, id string, opts ...RefOption) (ImmutableRef, error) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	return cm.get(ctx, id, true, opts...)
	return cm.get(ctx, id, opts...)
}

func (cm *cacheManager) Metadata(id string) *metadata.StorageItem {

@@ -136,8 +293,8 @@ func (cm *cacheManager) Metadata(id string) *metadata.StorageItem {
}

// get requires manager lock to be taken
func (cm *cacheManager) get(ctx context.Context, id string, fromSnapshotter bool, opts ...RefOption) (*immutableRef, error) {
	rec, err := cm.getRecord(ctx, id, fromSnapshotter, opts...)
func (cm *cacheManager) get(ctx context.Context, id string, opts ...RefOption) (*immutableRef, error) {
	rec, err := cm.getRecord(ctx, id, opts...)
	if err != nil {
		return nil, err
	}

@@ -165,7 +322,7 @@ func (cm *cacheManager) get(ctx context.Context, id string, fromSnapshotter bool
}

// getRecord returns record for id. Requires manager lock.
func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotter bool, opts ...RefOption) (cr *cacheRecord, retErr error) {
func (cm *cacheManager) getRecord(ctx context.Context, id string, opts ...RefOption) (cr *cacheRecord, retErr error) {
	if rec, ok := cm.records[id]; ok {
		if rec.isDead() {
			return nil, errors.Wrapf(errNotFound, "failed to get dead record %s", id)

@@ -174,11 +331,11 @@ func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotte
	}

	md, ok := cm.md.Get(id)
	if !ok && !fromSnapshotter {
		return nil, errors.WithStack(errNotFound)
	if !ok {
		return nil, errors.Wrapf(errNotFound, "%s not found", id)
	}
	if mutableID := getEqualMutable(md); mutableID != "" {
		mutable, err := cm.getRecord(ctx, mutableID, fromSnapshotter)
		mutable, err := cm.getRecord(ctx, mutableID)
		if err != nil {
			// check loading mutable deleted record from disk
			if errors.Cause(err) == errNotFound {

@@ -199,14 +356,10 @@ func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotte
		return rec, nil
	}

	info, err := cm.Snapshotter.Stat(ctx, id)
	if err != nil {
		return nil, errors.Wrap(errNotFound, err.Error())
	}

	var parent *immutableRef
	if info.Parent != "" {
		parent, err = cm.get(ctx, info.Parent, fromSnapshotter, append(opts, NoUpdateLastUsed)...)
	if parentID := getParent(md); parentID != "" {
		var err error
		parent, err = cm.get(ctx, parentID, append(opts, NoUpdateLastUsed)...)
		if err != nil {
			return nil, err
		}

@@ -221,7 +374,7 @@ func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotte

	rec := &cacheRecord{
		mu:      &sync.Mutex{},
		mutable: info.Kind != snapshots.KindCommitted,
		mutable: !getCommitted(md),
		cm:      cm,
		refs:    make(map[ref]struct{}),
		parent:  parent,

@@ -236,7 +389,7 @@ func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotte
		return nil, errors.Wrapf(errNotFound, "failed to get deleted record %s", id)
	}

	if err := initializeMetadata(rec, opts...); err != nil {
	if err := initializeMetadata(rec, getParent(md), opts...); err != nil {
		return nil, err
	}

@@ -244,11 +397,12 @@ func (cm *cacheManager) getRecord(ctx context.Context, id string, fromSnapshotte
	return rec, nil
}

func (cm *cacheManager) New(ctx context.Context, s ImmutableRef, opts ...RefOption) (MutableRef, error) {
func (cm *cacheManager) New(ctx context.Context, s ImmutableRef, opts ...RefOption) (mr MutableRef, err error) {
	id := identity.NewID()

	var parent *immutableRef
	var parentID string
	var parentSnapshotID string
	if s != nil {
		p, err := cm.Get(ctx, s.ID(), NoUpdateLastUsed)
		if err != nil {

@@ -257,14 +411,46 @@ func (cm *cacheManager) New(ctx context.Context, s ImmutableRef, opts ...RefOpti
		if err := p.Finalize(ctx, true); err != nil {
			return nil, err
		}
		parentID = p.ID()
		parent = p.(*immutableRef)
		parentSnapshotID = getSnapshotID(parent.md)
		parentID = parent.ID()
	}

	if err := cm.Snapshotter.Prepare(ctx, id, parentID); err != nil {
	if parent != nil {
		defer func() {
			if err != nil && parent != nil {
				parent.Release(context.TODO())
			}
		}()

	l, err := cm.ManagerOpt.LeaseManager.Create(ctx, func(l *leases.Lease) error {
		l.ID = id
		l.Labels = map[string]string{
			"containerd.io/gc.flat": time.Now().UTC().Format(time.RFC3339Nano),
		}
		return nil
	})
	if err != nil {
		return nil, errors.Wrap(err, "failed to create lease")
	}

	defer func() {
		if err != nil {
			if err := cm.ManagerOpt.LeaseManager.Delete(context.TODO(), leases.Lease{
				ID: l.ID,
			}); err != nil {
				logrus.Errorf("failed to remove lease: %+v", err)
			}
		}
	}()

	if err := cm.ManagerOpt.LeaseManager.AddResource(ctx, l, leases.Resource{
		ID:   id,
		Type: "snapshots/" + cm.ManagerOpt.Snapshotter.Name(),
	}); err != nil {
		return nil, errors.Wrapf(err, "failed to add snapshot %s to lease", id)
	}

	if err := cm.Snapshotter.Prepare(ctx, id, parentSnapshotID); err != nil {
		return nil, errors.Wrapf(err, "failed to prepare %s", id)
	}

@@ -279,10 +465,7 @@ func (cm *cacheManager) New(ctx context.Context, s ImmutableRef, opts ...RefOpti
		md: md,
	}

	if err := initializeMetadata(rec, opts...); err != nil {
		if parent != nil {
			parent.Release(context.TODO())
		}
	if err := initializeMetadata(rec, parentID, opts...); err != nil {
		return nil, err
	}

@@ -297,7 +480,7 @@ func (cm *cacheManager) GetMutable(ctx context.Context, id string) (MutableRef,
	cm.mu.Lock()
	defer cm.mu.Unlock()

	rec, err := cm.getRecord(ctx, id, false)
	rec, err := cm.getRecord(ctx, id)
	if err != nil {
		return nil, err
	}

@@ -328,13 +511,22 @@ func (cm *cacheManager) GetMutable(ctx context.Context, id string) (MutableRef,

func (cm *cacheManager) Prune(ctx context.Context, ch chan client.UsageInfo, opts ...client.PruneInfo) error {
	cm.muPrune.Lock()
	defer cm.muPrune.Unlock()

	for _, opt := range opts {
		if err := cm.pruneOnce(ctx, ch, opt); err != nil {
			cm.muPrune.Unlock()
			return err
		}
	}

	cm.muPrune.Unlock()

	if cm.GarbageCollect != nil {
		if _, err := cm.GarbageCollect(ctx); err != nil {
			return err
		}
	}

	return nil
}

@@ -360,10 +552,8 @@ func (cm *cacheManager) pruneOnce(ctx context.Context, ch chan client.UsageInfo,
		return err
	}
	for _, ui := range du {
		if check != nil {
			if check.Exists(ui.ID) {
				continue
			}
		if ui.Shared {
			continue
		}
		totalSize += ui.Size
	}

@@ -418,7 +608,7 @@ func (cm *cacheManager) prune(ctx context.Context, ch chan client.UsageInfo, opt

		shared := false
		if opt.checkShared != nil {
			shared = opt.checkShared.Exists(cr.ID())
			shared = opt.checkShared.Exists(cr.ID(), cr.parentChain())
		}

		if !opt.all {

@@ -577,7 +767,7 @@ func (cm *cacheManager) markShared(m map[string]*cacheUsageInfo) error {
		if m[id].shared {
			continue
		}
		if b := c.Exists(id); b {
		if b := c.Exists(id, m[id].parentChain); b {
			markAllParentsShared(id)
		}
	}

@@ -596,6 +786,7 @@ type cacheUsageInfo struct {
	doubleRef   bool
	recordType  client.UsageRecordType
	shared      bool
	parentChain []digest.Digest
}

func (cm *cacheManager) DiskUsage(ctx context.Context, opt client.DiskUsageInfo) ([]*client.UsageInfo, error) {

@@ -628,6 +819,7 @@ func (cm *cacheManager) DiskUsage(ctx context.Context, opt client.DiskUsageInfo)
			description: GetDescription(cr.md),
			doubleRef:   cr.equalImmutable != nil,
			recordType:  GetRecordType(cr),
			parentChain: cr.parentChain(),
		}
		if c.recordType == "" {
			c.recordType = client.UsageRecordTypeRegular

@@ -769,12 +961,16 @@ func WithCreationTime(tm time.Time) RefOption {
	}
}

func initializeMetadata(m withMetadata, opts ...RefOption) error {
func initializeMetadata(m withMetadata, parent string, opts ...RefOption) error {
	md := m.Metadata()
	if tm := GetCreatedAt(md); !tm.IsZero() {
		return nil
	}

	if err := queueParent(md, parent); err != nil {
		return err
	}

	if err := queueCreatedAt(md, time.Now()); err != nil {
		return err
	}

@@ -882,3 +1078,15 @@ func sortDeleteRecords(toDelete []*deleteRecord) {
			float64(toDelete[j].usageCountIndex)/float64(maxUsageCountIndex)
	})
}

func diffIDFromDescriptor(desc ocispec.Descriptor) (digest.Digest, error) {
	diffIDStr, ok := desc.Annotations["containerd.io/uncompressed"]
	if !ok {
		return "", errors.Errorf("missing uncompressed annotation for %s", desc.Digest)
	}
	diffID, err := digest.Parse(diffIDStr)
	if err != nil {
		return "", errors.Wrapf(err, "failed to parse diffID %q for %s", diffIDStr, desc.Digest)
	}
	return diffID, nil
}
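A pattern that recurs throughout this diff (`GetByBlob`, `New`, and the migration below) is: create a lease labeled `containerd.io/gc.flat`, attach the snapshot and blob as lease resources, and delete the lease in a deferred handler if any later step fails so the garbage collector can reclaim them. A minimal sketch of that shape with stand-in types (not containerd's real API):

```go
package main

import (
	"errors"
	"fmt"
)

// lease and leaseManager are stand-ins for containerd's lease manager,
// just enough to show the create / add-resource / rollback-on-error flow.
type lease struct {
	id        string
	resources []string
}

type leaseManager struct{ leases map[string]*lease }

func (lm *leaseManager) create(id string) *lease {
	l := &lease{id: id}
	lm.leases[id] = l
	return l
}

func (lm *leaseManager) delete(id string) { delete(lm.leases, id) }

func (lm *leaseManager) addResource(l *lease, res string) {
	l.resources = append(l.resources, res)
}

// newRecord mirrors the pattern in the diff: create a lease, attach the
// snapshot and blob to it, and tear the lease down if a later step fails.
func newRecord(lm *leaseManager, id string, fail bool) (err error) {
	l := lm.create(id)
	defer func() {
		if err != nil {
			lm.delete(l.id) // roll back so GC can collect the resources
		}
	}()
	lm.addResource(l, "snapshots/overlayfs/"+id)
	lm.addResource(l, "content/sha256:abcd")
	if fail {
		return errors.New("later step failed")
	}
	return nil
}

func main() {
	lm := &leaseManager{leases: map[string]*lease{}}
	fmt.Println(newRecord(lm, "ok", false), len(lm.leases))
	fmt.Println(newRecord(lm, "bad", true), len(lm.leases))
}
```

The named return `err` is what makes the deferred rollback work, which is why the diff changes `New` to return `(mr MutableRef, err error)`.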
241 vendor/github.com/moby/buildkit/cache/metadata.go generated vendored
@@ -19,13 +19,203 @@ const keyLastUsedAt = "cache.lastUsedAt"
const keyUsageCount = "cache.usageCount"
const keyLayerType = "cache.layerType"
const keyRecordType = "cache.recordType"
const keyCommitted = "snapshot.committed"
const keyParent = "cache.parent"
const keyDiffID = "cache.diffID"
const keyChainID = "cache.chainID"
const keyBlobChainID = "cache.blobChainID"
const keyBlob = "cache.blob"
const keySnapshot = "cache.snapshot"
const keyBlobOnly = "cache.blobonly"
const keyMediaType = "cache.mediatype"

const keyDeleted = "cache.deleted"

func queueDiffID(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create diffID value")
	}
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyDiffID, v)
	})
	return nil
}

func getMediaType(si *metadata.StorageItem) string {
	v := si.Get(keyMediaType)
	if v == nil {
		return si.ID()
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueMediaType(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create mediaType value")
	}
	si.Queue(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyMediaType, v)
	})
	return nil
}

func getSnapshotID(si *metadata.StorageItem) string {
	v := si.Get(keySnapshot)
	if v == nil {
		return si.ID()
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueSnapshotID(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create snapshotID value")
	}
	si.Queue(func(b *bolt.Bucket) error {
		return si.SetValue(b, keySnapshot, v)
	})
	return nil
}

func getDiffID(si *metadata.StorageItem) string {
	v := si.Get(keyDiffID)
	if v == nil {
		return ""
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueChainID(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create chainID value")
	}
	v.Index = "chainid:" + str
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyChainID, v)
	})
	return nil
}

func getBlobChainID(si *metadata.StorageItem) string {
	v := si.Get(keyBlobChainID)
	if v == nil {
		return ""
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueBlobChainID(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create blobChainID value")
	}
	v.Index = "blobchainid:" + str
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyBlobChainID, v)
	})
	return nil
}

func getChainID(si *metadata.StorageItem) string {
	v := si.Get(keyChainID)
	if v == nil {
		return ""
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueBlob(si *metadata.StorageItem, str string) error {
	if str == "" {
		return nil
	}
	v, err := metadata.NewValue(str)
	if err != nil {
		return errors.Wrap(err, "failed to create blob value")
	}
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyBlob, v)
	})
	return nil
}

func getBlob(si *metadata.StorageItem) string {
	v := si.Get(keyBlob)
	if v == nil {
		return ""
	}
	var str string
	if err := v.Unmarshal(&str); err != nil {
		return ""
	}
	return str
}

func queueBlobOnly(si *metadata.StorageItem, b bool) error {
	v, err := metadata.NewValue(b)
	if err != nil {
		return errors.Wrap(err, "failed to create blobonly value")
	}
	si.Queue(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyBlobOnly, v)
	})
	return nil
}

func getBlobOnly(si *metadata.StorageItem) bool {
	v := si.Get(keyBlobOnly)
	if v == nil {
		return false
	}
	var blobOnly bool
	if err := v.Unmarshal(&blobOnly); err != nil {
		return false
	}
	return blobOnly
}

func setDeleted(si *metadata.StorageItem) error {
	v, err := metadata.NewValue(true)
	if err != nil {
		return errors.Wrap(err, "failed to create size value")
		return errors.Wrap(err, "failed to create deleted value")
	}
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyDeleted, v)

@@ -45,6 +235,55 @@ func getDeleted(si *metadata.StorageItem) bool {
	return deleted
}

func queueCommitted(si *metadata.StorageItem) error {
	v, err := metadata.NewValue(true)
	if err != nil {
		return errors.Wrap(err, "failed to create committed value")
	}
	si.Queue(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyCommitted, v)
	})
	return nil
}

func getCommitted(si *metadata.StorageItem) bool {
	v := si.Get(keyCommitted)
	if v == nil {
		return false
	}
	var committed bool
	if err := v.Unmarshal(&committed); err != nil {
		return false
	}
	return committed
}

func queueParent(si *metadata.StorageItem, parent string) error {
	if parent == "" {
		return nil
	}
	v, err := metadata.NewValue(parent)
	if err != nil {
		return errors.Wrap(err, "failed to create parent value")
	}
	si.Update(func(b *bolt.Bucket) error {
		return si.SetValue(b, keyParent, v)
	})
	return nil
}

func getParent(si *metadata.StorageItem) string {
	v := si.Get(keyParent)
	if v == nil {
		return ""
	}
	var parent string
	if err := v.Unmarshal(&parent); err != nil {
		return ""
	}
	return parent
}

func setSize(si *metadata.StorageItem, s int64) error {
	v, err := metadata.NewValue(s)
	if err != nil {
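The metadata helpers added above all share one getter/queuer shape: serialize the value, stage it under a fixed key, and read it back leniently, returning a zero value on missing or malformed entries. An in-memory sketch of that shape (the real store is bolt-backed; `store`, `queue`, and `get` here are simplified stand-ins, not the buildkit API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// store stands in for the bolt-backed metadata.StorageItem: values are
// staged as JSON under string keys.
type store map[string][]byte

// queue serializes a value and stages it under key, like queueParent etc.
func (s store) queue(key string, val interface{}) error {
	b, err := json.Marshal(val)
	if err != nil {
		return err
	}
	s[key] = b
	return nil
}

// get returns the zero value on missing or malformed entries, mirroring the
// lenient behavior of helpers like getParent and getBlobOnly above.
func get[T any](s store, key string) T {
	var out T
	if b, ok := s[key]; ok {
		_ = json.Unmarshal(b, &out)
	}
	return out
}

func main() {
	s := store{}
	s.queue("cache.parent", "sha256:parent")
	s.queue("cache.blobonly", true)
	fmt.Println(get[string](s, "cache.parent"), get[bool](s, "cache.blobonly"))
	fmt.Println(get[string](s, "cache.missing")) // missing key yields the zero value
}
```

The lenient reads are what let `getRecord` treat records with absent fields (e.g. no parent, no blob) as valid defaults instead of hard errors.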
257 vendor/github.com/moby/buildkit/cache/migrate_v2.go generated vendored Normal file
@@ -0,0 +1,257 @@
package cache

import (
	"context"
	"io"
	"os"
	"time"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/leases"
	"github.com/containerd/containerd/snapshots"
	"github.com/moby/buildkit/cache/metadata"
	"github.com/moby/buildkit/snapshot"
	"github.com/opencontainers/go-digest"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

func migrateChainID(si *metadata.StorageItem, all map[string]*metadata.StorageItem) (digest.Digest, digest.Digest, error) {
	diffID := digest.Digest(getDiffID(si))
	if diffID == "" {
		return "", "", nil
	}
	blobID := digest.Digest(getBlob(si))
	if blobID == "" {
		return "", "", nil
	}
	chainID := digest.Digest(getChainID(si))
	blobChainID := digest.Digest(getBlobChainID(si))

	if chainID != "" && blobChainID != "" {
		return chainID, blobChainID, nil
	}

	chainID = diffID
	blobChainID = digest.FromBytes([]byte(blobID + " " + diffID))

	parent := getParent(si)
	if parent != "" {
		pChainID, pBlobChainID, err := migrateChainID(all[parent], all)
		if err != nil {
			return "", "", err
		}
		chainID = digest.FromBytes([]byte(pChainID + " " + chainID))
		blobChainID = digest.FromBytes([]byte(pBlobChainID + " " + blobChainID))
	}

	queueChainID(si, chainID.String())
	queueBlobChainID(si, blobChainID.String())

	return chainID, blobChainID, si.Commit()
}

func MigrateV2(ctx context.Context, from, to string, cs content.Store, s snapshot.Snapshotter, lm leases.Manager) error {
	_, err := os.Stat(to)
	if err != nil {
		if !os.IsNotExist(errors.Cause(err)) {
			return errors.WithStack(err)
		}
	} else {
		return nil
	}

	_, err = os.Stat(from)
	if err != nil {
		if !os.IsNotExist(errors.Cause(err)) {
			return errors.WithStack(err)
		}
		return nil
	}
	tmpPath := to + ".tmp"
	tmpFile, err := os.Create(tmpPath)
	if err != nil {
		return errors.WithStack(err)
	}
	src, err := os.Open(from)
	if err != nil {
		tmpFile.Close()
		return errors.WithStack(err)
	}
	if _, err = io.Copy(tmpFile, src); err != nil {
		tmpFile.Close()
		src.Close()
		return errors.Wrapf(err, "failed to copy db for migration")
	}
	src.Close()
	tmpFile.Close()

	md, err := metadata.NewStore(tmpPath)
	if err != nil {
		return err
	}

	items, err := md.All()
	if err != nil {
		return err
	}

	byID := map[string]*metadata.StorageItem{}
	for _, item := range items {
		byID[item.ID()] = item
	}

	// add committed, parent, snapshot
	for id, item := range byID {
		em := getEqualMutable(item)
		if em == "" {
			info, err := s.Stat(ctx, id)
			if err != nil {
				return err
			}
			if info.Kind == snapshots.KindCommitted {
				queueCommitted(item)
			}
			if info.Parent != "" {
				queueParent(item, info.Parent)
			}
		} else {
			queueCommitted(item)
		}
		queueSnapshotID(item, id)
		item.Commit()
	}

	for _, item := range byID {
		em := getEqualMutable(item)
		if em != "" {
			if getParent(item) == "" {
				queueParent(item, getParent(byID[em]))
				item.Commit()
			}
		}
	}

	type diffPair struct {
		Blobsum string
		DiffID  string
	}
	// move diffID, blobsum to new location
	for _, item := range byID {
		v := item.Get("blobmapping.blob")
		if v == nil {
			continue
		}
		var blob diffPair
		if err := v.Unmarshal(&blob); err != nil {
			return errors.WithStack(err)
		}
		if _, err := cs.Info(ctx, digest.Digest(blob.Blobsum)); err != nil {
			continue
		}
		queueDiffID(item, blob.DiffID)
		queueBlob(item, blob.Blobsum)
		queueMediaType(item, images.MediaTypeDockerSchema2LayerGzip)
		if err := item.Commit(); err != nil {
			return err
		}
	}

	// calculate new chainid/blobsumid
	for _, item := range byID {
		if _, _, err := migrateChainID(item, byID); err != nil {
			return err
		}
	}

	ctx = context.TODO() // no cancellation allowed past this point

	// add new leases
	for _, item := range byID {
		l, err := lm.Create(ctx, func(l *leases.Lease) error {
			l.ID = item.ID()
			l.Labels = map[string]string{
				"containerd.io/gc.flat": time.Now().UTC().Format(time.RFC3339Nano),
			}
			return nil
		})
		if err != nil {
			// if we are running the migration twice
			if errdefs.IsAlreadyExists(err) {
				continue
			}
			return errors.Wrap(err, "failed to create lease")
		}

		if err := lm.AddResource(ctx, l, leases.Resource{
			ID:   getSnapshotID(item),
			Type: "snapshots/" + s.Name(),
		}); err != nil {
			return errors.Wrapf(err, "failed to add snapshot %s to lease", item.ID())
		}

		if blobID := getBlob(item); blobID != "" {
			if err := lm.AddResource(ctx, l, leases.Resource{
				ID:   blobID,
				Type: "content",
			}); err != nil {
				return errors.Wrapf(err, "failed to add blob %s to lease", item.ID())
			}
		}
	}

	// remove old root labels
	for _, item := range byID {
		if _, err := s.Update(ctx, snapshots.Info{
			Name: getSnapshotID(item),
		}, "labels.containerd.io/gc.root"); err != nil {
			if !errdefs.IsNotFound(errors.Cause(err)) {
				return err
			}
		}

		if blob := getBlob(item); blob != "" {
			if _, err := cs.Update(ctx, content.Info{
				Digest: digest.Digest(blob),
			}, "labels.containerd.io/gc.root"); err != nil {
				return err
			}
		}
	}

	// previous implementation can leak views, just clean up all views
	err = s.Walk(ctx, func(ctx context.Context, info snapshots.Info) error {
		if info.Kind == snapshots.KindView {
			if _, err := s.Update(ctx, snapshots.Info{
				Name: info.Name,
			}, "labels.containerd.io/gc.root"); err != nil {
				if !errdefs.IsNotFound(errors.Cause(err)) {
					return err
				}
			}
		}
		return nil
	})
	if err != nil {
		return err
	}

	// switch to new DB
	if err := md.Close(); err != nil {
		return err
	}

	if err := os.Rename(tmpPath, to); err != nil {
		return err
	}

	for _, item := range byID {
		logrus.Infof("migrated %s parent:%q snapshot:%v committed:%v blob:%v diffid:%v chainID:%v blobChainID:%v",
			item.ID(), getParent(item), getSnapshotID(item), getCommitted(item), getBlob(item), getDiffID(item), getChainID(item), getBlobChainID(item))
	}

	return nil
}
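`migrateChainID` above derives the new chain IDs recursively through parent links, so a record's identity depends on its whole ancestry. A stripped-down sketch of that recursion over an in-memory item map (field names simplified; not the vendored code):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type item struct {
	diffID, blob, parent string
}

// chainIDs walks parent links like migrateChainID: a root's chain ID is its
// diff ID; a child's is the hash of "parentChain childChain", and the blob
// chain additionally mixes in the compressed blob digest.
func chainIDs(all map[string]*item, id string) (chain, blobChain string) {
	it := all[id]
	chain = it.diffID
	blobChain = fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(it.blob+" "+it.diffID)))
	if it.parent != "" {
		pc, pbc := chainIDs(all, it.parent)
		chain = fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(pc+" "+chain)))
		blobChain = fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(pbc+" "+blobChain)))
	}
	return chain, blobChain
}

func main() {
	all := map[string]*item{
		"base":  {diffID: "sha256:d1", blob: "sha256:b1"},
		"child": {diffID: "sha256:d2", blob: "sha256:b2", parent: "base"},
	}
	c, bc := chainIDs(all, "child")
	fmt.Println(c, bc)
}
```

The real function also memoizes by committing the computed IDs back to each item, so shared ancestors are only hashed once across the migration.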
274 vendor/github.com/moby/buildkit/cache/refs.go generated vendored
@@ -2,15 +2,24 @@ package cache

import (
	"context"
	"fmt"
	"strings"
	"sync"
	"time"

	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/leases"
	"github.com/containerd/containerd/mount"
	"github.com/containerd/containerd/snapshots"
	"github.com/docker/docker/pkg/idtools"
	"github.com/moby/buildkit/cache/metadata"
	"github.com/moby/buildkit/identity"
	"github.com/moby/buildkit/snapshot"
	"github.com/moby/buildkit/util/flightcontrol"
	"github.com/moby/buildkit/util/leaseutil"
	"github.com/opencontainers/go-digest"
	imagespecidentity "github.com/opencontainers/image-spec/identity"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

@@ -30,6 +39,20 @@ type ImmutableRef interface {
	Parent() ImmutableRef
	Finalize(ctx context.Context, commit bool) error // Make sure reference is flushed to driver
	Clone() ImmutableRef

	Info() RefInfo
	SetBlob(ctx context.Context, desc ocispec.Descriptor) error
	Extract(ctx context.Context) error // +progress
}

type RefInfo struct {
	SnapshotID  string
	ChainID     digest.Digest
	BlobChainID digest.Digest
	DiffID      digest.Digest
	Blob        digest.Digest
	MediaType   string
	Extracted   bool
}

type MutableRef interface {

@@ -65,6 +88,8 @@ type cacheRecord struct {
	// these are filled if multiple refs point to same data
	equalMutable   *mutableRef
	equalImmutable *immutableRef

	parentChainCache []digest.Digest
}

// hold ref lock before calling

@@ -81,6 +106,26 @@ func (cr *cacheRecord) mref(triggerLastUsed bool) *mutableRef {
	return ref
}

func (cr *cacheRecord) parentChain() []digest.Digest {
	if cr.parentChainCache != nil {
		return cr.parentChainCache
	}
	blob := getBlob(cr.md)
	if blob == "" {
		return nil
	}

	var parent []digest.Digest
	if cr.parent != nil {
		parent = cr.parent.parentChain()
	}
	pcc := make([]digest.Digest, len(parent)+1)
	copy(pcc, parent)
	pcc[len(parent)] = digest.Digest(blob)
	cr.parentChainCache = pcc
	return pcc
}

// hold ref lock before calling
func (cr *cacheRecord) isDead() bool {
	return cr.dead || (cr.equalImmutable != nil && cr.equalImmutable.dead) || (cr.equalMutable != nil && cr.equalMutable.dead)

@@ -99,20 +144,32 @@ func (cr *cacheRecord) Size(ctx context.Context) (int64, error) {
		cr.mu.Unlock()
		return s, nil
	}
	driverID := cr.ID()
	driverID := getSnapshotID(cr.md)
	if cr.equalMutable != nil {
		driverID = cr.equalMutable.ID()
		driverID = getSnapshotID(cr.equalMutable.md)
	}
	cr.mu.Unlock()
	usage, err := cr.cm.ManagerOpt.Snapshotter.Usage(ctx, driverID)
	if err != nil {
		cr.mu.Lock()
		isDead := cr.isDead()
		cr.mu.Unlock()
		if isDead {
			return int64(0), nil
	var usage snapshots.Usage
	if !getBlobOnly(cr.md) {
		var err error
		usage, err = cr.cm.ManagerOpt.Snapshotter.Usage(ctx, driverID)
		if err != nil {
			cr.mu.Lock()
			isDead := cr.isDead()
			cr.mu.Unlock()
			if isDead {
				return int64(0), nil
			}
			if !errdefs.IsNotFound(err) {
				return s, errors.Wrapf(err, "failed to get usage for %s", cr.ID())
			}
		}
	}
	if dgst := getBlob(cr.md); dgst != "" {
		info, err := cr.cm.ContentStore.Info(ctx, digest.Digest(dgst))
		if err == nil {
			usage.Size += info.Size
		}
		return s, errors.Wrapf(err, "failed to get usage for %s", cr.ID())
	}
	cr.mu.Lock()
	setSize(cr.md, usage.Size)

@@ -148,7 +205,7 @@ func (cr *cacheRecord) Mount(ctx context.Context, readonly bool) (snapshot.Mount
	defer cr.mu.Unlock()

	if cr.mutable {
		m, err := cr.cm.Snapshotter.Mounts(ctx, cr.ID())
		m, err := cr.cm.Snapshotter.Mounts(ctx, getSnapshotID(cr.md))
		if err != nil {
			return nil, errors.Wrapf(err, "failed to mount %s", cr.ID())
		}

@@ -159,7 +216,7 @@ func (cr *cacheRecord) Mount(ctx context.Context, readonly bool) (snapshot.Mount
	}

	if cr.equalMutable != nil && readonly {
		m, err := cr.cm.Snapshotter.Mounts(ctx, cr.equalMutable.ID())
		m, err := cr.cm.Snapshotter.Mounts(ctx, getSnapshotID(cr.equalMutable.md))
		if err != nil {
			return nil, errors.Wrapf(err, "failed to mount %s", cr.equalMutable.ID())
		}

@@ -170,12 +227,24 @@ func (cr *cacheRecord) Mount(ctx context.Context, readonly bool) (snapshot.Mount
		return nil, err
	}
	if cr.viewMount == nil { // TODO: handle this better
		cr.view = identity.NewID()
		m, err := cr.cm.Snapshotter.View(ctx, cr.view, cr.ID())
		view := identity.NewID()
		l, err := cr.cm.LeaseManager.Create(ctx, func(l *leases.Lease) error {
			l.ID = view
			l.Labels = map[string]string{
				"containerd.io/gc.flat": time.Now().UTC().Format(time.RFC3339Nano),
			}
			return nil
		}, leaseutil.MakeTemporary)
		if err != nil {
			cr.view = ""
			return nil, err
		}
		ctx = leases.WithLease(ctx, l.ID)
|
||||
m, err := cr.cm.Snapshotter.View(ctx, view, getSnapshotID(cr.md))
|
||||
if err != nil {
|
||||
cr.cm.LeaseManager.Delete(context.TODO(), leases.Lease{ID: l.ID})
|
||||
return nil, errors.Wrapf(err, "failed to mount %s", cr.ID())
|
||||
}
|
||||
cr.view = view
|
||||
cr.viewMount = m
|
||||
}
|
||||
return cr.viewMount, nil
|
||||
|
@ -190,7 +259,7 @@ func (cr *cacheRecord) remove(ctx context.Context, removeSnapshot bool) error {
|
|||
}
|
||||
}
|
||||
if removeSnapshot {
|
||||
if err := cr.cm.Snapshotter.Remove(ctx, cr.ID()); err != nil {
|
||||
if err := cr.cm.LeaseManager.Delete(context.TODO(), leases.Lease{ID: cr.ID()}); err != nil {
|
||||
return errors.Wrapf(err, "failed to remove %s", cr.ID())
|
||||
}
|
||||
}
|
||||
|
@ -221,6 +290,134 @@ func (sr *immutableRef) Clone() ImmutableRef {
|
|||
return ref
|
||||
}
|
||||
|
||||
func (sr *immutableRef) Info() RefInfo {
|
||||
return RefInfo{
|
||||
ChainID: digest.Digest(getChainID(sr.md)),
|
||||
DiffID: digest.Digest(getDiffID(sr.md)),
|
||||
Blob: digest.Digest(getBlob(sr.md)),
|
||||
MediaType: getMediaType(sr.md),
|
||||
BlobChainID: digest.Digest(getBlobChainID(sr.md)),
|
||||
SnapshotID: getSnapshotID(sr.md),
|
||||
Extracted: !getBlobOnly(sr.md),
|
||||
}
|
||||
}
|
||||
|
||||
func (sr *immutableRef) Extract(ctx context.Context) error {
|
||||
_, err := sr.sizeG.Do(ctx, sr.ID()+"-extract", func(ctx context.Context) (interface{}, error) {
|
||||
snapshotID := getSnapshotID(sr.md)
|
||||
if _, err := sr.cm.Snapshotter.Stat(ctx, snapshotID); err == nil {
|
||||
queueBlobOnly(sr.md, false)
|
||||
return nil, sr.md.Commit()
|
||||
}
|
||||
|
||||
parentID := ""
|
||||
if sr.parent != nil {
|
||||
if err := sr.parent.Extract(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
parentID = getSnapshotID(sr.parent.md)
|
||||
}
|
||||
info := sr.Info()
|
||||
key := fmt.Sprintf("extract-%s %s", identity.NewID(), info.ChainID)
|
||||
|
||||
err := sr.cm.Snapshotter.Prepare(ctx, key, parentID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mountable, err := sr.cm.Snapshotter.Mounts(ctx, key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
mounts, unmount, err := mountable.Mount()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
_, err = sr.cm.Applier.Apply(ctx, ocispec.Descriptor{
|
||||
Digest: info.Blob,
|
||||
MediaType: info.MediaType,
|
||||
}, mounts)
|
||||
if err != nil {
|
||||
unmount()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := unmount(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := sr.cm.Snapshotter.Commit(ctx, getSnapshotID(sr.md), key); err != nil {
|
||||
if !errdefs.IsAlreadyExists(err) {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
queueBlobOnly(sr.md, false)
|
||||
if err := sr.md.Commit(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return nil, nil
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
// SetBlob associates a blob with the cache record.
|
||||
// A lease must be held for the blob when calling this function
|
||||
// Caller should call Info() for knowing what current values are actually set
|
||||
func (sr *immutableRef) SetBlob(ctx context.Context, desc ocispec.Descriptor) error {
|
||||
diffID, err := diffIDFromDescriptor(desc)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := sr.cm.ContentStore.Info(ctx, desc.Digest); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sr.mu.Lock()
|
||||
defer sr.mu.Unlock()
|
||||
|
||||
if getChainID(sr.md) != "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := sr.finalize(ctx, true); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
p := sr.parent
|
||||
var parentChainID digest.Digest
|
||||
var parentBlobChainID digest.Digest
|
||||
if p != nil {
|
||||
pInfo := p.Info()
|
||||
if pInfo.ChainID == "" || pInfo.BlobChainID == "" {
|
||||
return errors.Errorf("failed to set blob for reference with non-addressable parent")
|
||||
}
|
||||
parentChainID = pInfo.ChainID
|
||||
parentBlobChainID = pInfo.BlobChainID
|
||||
}
|
||||
|
||||
if err := sr.cm.LeaseManager.AddResource(ctx, leases.Lease{ID: sr.ID()}, leases.Resource{
|
||||
ID: desc.Digest.String(),
|
||||
Type: "content",
|
||||
}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
queueDiffID(sr.md, diffID.String())
|
||||
queueBlob(sr.md, desc.Digest.String())
|
||||
chainID := diffID
|
||||
blobChainID := imagespecidentity.ChainID([]digest.Digest{desc.Digest, diffID})
|
||||
if parentChainID != "" {
|
||||
chainID = imagespecidentity.ChainID([]digest.Digest{parentChainID, chainID})
|
||||
blobChainID = imagespecidentity.ChainID([]digest.Digest{parentBlobChainID, blobChainID})
|
||||
}
|
||||
queueChainID(sr.md, chainID.String())
|
||||
queueBlobChainID(sr.md, blobChainID.String())
|
||||
queueMediaType(sr.md, desc.MediaType)
|
||||
if err := sr.md.Commit(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (sr *immutableRef) Release(ctx context.Context) error {
|
||||
sr.cm.mu.Lock()
|
||||
defer sr.cm.mu.Unlock()
|
||||
|
@ -259,8 +456,8 @@ func (sr *immutableRef) release(ctx context.Context) error {
|
|||
|
||||
if len(sr.refs) == 0 {
|
||||
if sr.viewMount != nil { // TODO: release viewMount earlier if possible
|
||||
if err := sr.cm.Snapshotter.Remove(ctx, sr.view); err != nil {
|
||||
return errors.Wrapf(err, "failed to remove view %s", sr.view)
|
||||
if err := sr.cm.LeaseManager.Delete(ctx, leases.Lease{ID: sr.view}); err != nil {
|
||||
return errors.Wrapf(err, "failed to remove view lease %s", sr.view)
|
||||
}
|
||||
sr.view = ""
|
||||
sr.viewMount = nil
|
||||
|
@ -269,7 +466,6 @@ func (sr *immutableRef) release(ctx context.Context) error {
|
|||
if sr.equalMutable != nil {
|
||||
sr.equalMutable.release(ctx)
|
||||
}
|
||||
// go sr.cm.GC()
|
||||
}
|
||||
|
||||
return nil
|
||||
|
@ -298,18 +494,42 @@ func (cr *cacheRecord) finalize(ctx context.Context, commit bool) error {
|
|||
}
|
||||
return nil
|
||||
}
|
||||
err := cr.cm.Snapshotter.Commit(ctx, cr.ID(), mutable.ID())
|
||||
|
||||
_, err := cr.cm.ManagerOpt.LeaseManager.Create(ctx, func(l *leases.Lease) error {
|
||||
l.ID = cr.ID()
|
||||
l.Labels = map[string]string{
|
||||
"containerd.io/gc.flat": time.Now().UTC().Format(time.RFC3339Nano),
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
if !errdefs.IsAlreadyExists(err) { // migrator adds leases for everything
|
||||
return errors.Wrap(err, "failed to create lease")
|
||||
}
|
||||
}
|
||||
|
||||
if err := cr.cm.ManagerOpt.LeaseManager.AddResource(ctx, leases.Lease{ID: cr.ID()}, leases.Resource{
|
||||
ID: cr.ID(),
|
||||
Type: "snapshots/" + cr.cm.ManagerOpt.Snapshotter.Name(),
|
||||
}); err != nil {
|
||||
cr.cm.LeaseManager.Delete(context.TODO(), leases.Lease{ID: cr.ID()})
|
||||
return errors.Wrapf(err, "failed to add snapshot %s to lease", cr.ID())
|
||||
}
|
||||
|
||||
err = cr.cm.Snapshotter.Commit(ctx, cr.ID(), mutable.ID())
|
||||
if err != nil {
|
||||
cr.cm.LeaseManager.Delete(context.TODO(), leases.Lease{ID: cr.ID()})
|
||||
return errors.Wrapf(err, "failed to commit %s", mutable.ID())
|
||||
}
|
||||
mutable.dead = true
|
||||
go func() {
|
||||
cr.cm.mu.Lock()
|
||||
defer cr.cm.mu.Unlock()
|
||||
if err := mutable.remove(context.TODO(), false); err != nil {
|
||||
if err := mutable.remove(context.TODO(), true); err != nil {
|
||||
logrus.Error(err)
|
||||
}
|
||||
}()
|
||||
|
||||
cr.equalMutable = nil
|
||||
clearEqualMutable(cr.md)
|
||||
return cr.md.Commit()
|
||||
|
@ -341,7 +561,11 @@ func (sr *mutableRef) commit(ctx context.Context) (*immutableRef, error) {
|
|||
}
|
||||
}
|
||||
|
||||
if err := initializeMetadata(rec); err != nil {
|
||||
parentID := ""
|
||||
if rec.parent != nil {
|
||||
parentID = rec.parent.ID()
|
||||
}
|
||||
if err := initializeMetadata(rec, parentID); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
|
@ -351,6 +575,7 @@ func (sr *mutableRef) commit(ctx context.Context) (*immutableRef, error) {
|
|||
return nil, err
|
||||
}
|
||||
|
||||
queueCommitted(md)
|
||||
setSize(md, sizeUnknown)
|
||||
setEqualMutable(md, sr.ID())
|
||||
if err := md.Commit(); err != nil {
|
||||
|
@ -401,11 +626,6 @@ func (sr *mutableRef) release(ctx context.Context) error {
|
|||
return err
|
||||
}
|
||||
}
|
||||
if sr.parent != nil {
|
||||
if err := sr.parent.release(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return sr.remove(ctx, true)
|
||||
} else {
|
||||
if sr.updateLastUsed() {
|
||||
|
|
vendor/github.com/moby/buildkit/executor/oci/resolvconf.go (generated, vendored; 19 lines changed)

@@ -16,6 +16,9 @@ var g flightcontrol.Group
 var notFirstRun bool
 var lastNotEmpty bool
 
+// overridden by tests
+var resolvconfGet = resolvconf.Get
+
 type DNSConfig struct {
 	Nameservers []string
 	Options     []string
@@ -59,7 +62,7 @@ func GetResolvConf(ctx context.Context, stateDir string, idmap *idtools.Identity
 	}
 
 	var dt []byte
-	f, err := resolvconf.Get()
+	f, err := resolvconfGet()
 	if err != nil {
 		if !os.IsNotExist(err) {
 			return "", err
@@ -88,14 +91,12 @@ func GetResolvConf(ctx context.Context, stateDir string, idmap *idtools.Identity
 		if err != nil {
 			return "", err
 		}
-	} else {
-		// Logic seems odd here: why are we filtering localhost IPs
-		// only if neither of the DNS configs were specified?
-		// Logic comes from https://github.com/docker/libnetwork/blob/164a77ee6d24fb2b1d61f8ad3403a51d8453899e/sandbox_dns_unix.go#L230-L269
-		f, err = resolvconf.FilterResolvDNS(f.Content, true)
-		if err != nil {
-			return "", err
-		}
+		dt = f.Content
 	}
+
+	f, err = resolvconf.FilterResolvDNS(dt, true)
+	if err != nil {
+		return "", err
+	}
 
 	tmpPath := p + ".tmp"
vendor/github.com/moby/buildkit/frontend/dockerfile/dockerfile2llb/convert.go (generated, vendored; 3 lines changed)

@@ -5,6 +5,7 @@ import (
 	"context"
 	"encoding/json"
 	"fmt"
+	"math"
 	"net/url"
 	"path"
 	"path/filepath"
@@ -1325,7 +1326,7 @@ func prefixCommand(ds *dispatchState, str string, prefixPlatform bool, platform
 		out += ds.stageName + " "
 	}
 	ds.cmdIndex++
-	out += fmt.Sprintf("%d/%d] ", ds.cmdIndex, ds.cmdTotal)
+	out += fmt.Sprintf("%*d/%d] ", int(1+math.Log10(float64(ds.cmdTotal))), ds.cmdIndex, ds.cmdTotal)
 	return out + str
 }
vendor/github.com/moby/buildkit/go.mod (generated, vendored; 2 lines changed)

@@ -9,7 +9,7 @@ require (
 	github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58 // indirect
 	github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601 // indirect
 	github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50
-	github.com/containerd/containerd v1.3.0
+	github.com/containerd/containerd v1.4.0-0.20191014053712-acdcf13d5eaf
 	github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6
 	github.com/containerd/fifo v0.0.0-20190816180239-bda0ff6ed73c // indirect
 	github.com/containerd/go-cni v0.0.0-20190813230227-49fbd9b210f3
vendor/github.com/moby/buildkit/snapshot/blobmapping/snapshotter.go (generated, vendored; file deleted, 151 lines)

@@ -1,151 +0,0 @@
-package blobmapping
-
-import (
-	"context"
-	"time"
-
-	"github.com/containerd/containerd/content"
-	"github.com/containerd/containerd/snapshots"
-	"github.com/moby/buildkit/cache/metadata"
-	"github.com/moby/buildkit/snapshot"
-	digest "github.com/opencontainers/go-digest"
-	"github.com/sirupsen/logrus"
-	bolt "go.etcd.io/bbolt"
-)
-
-const blobKey = "blobmapping.blob"
-
-type Opt struct {
-	Content       content.Store
-	Snapshotter   snapshot.SnapshotterBase
-	MetadataStore *metadata.Store
-}
-
-type Info struct {
-	snapshots.Info
-	Blob string
-}
-
-type DiffPair struct {
-	Blobsum digest.Digest
-	DiffID  digest.Digest
-}
-
-// this snapshotter keeps an internal mapping between a snapshot and a blob
-
-type Snapshotter struct {
-	snapshot.SnapshotterBase
-	opt Opt
-}
-
-func NewSnapshotter(opt Opt) snapshot.Snapshotter {
-	s := &Snapshotter{
-		SnapshotterBase: opt.Snapshotter,
-		opt:             opt,
-	}
-
-	return s
-}
-
-// Remove also removes a reference to a blob. If it is a last reference then it deletes it the blob as well
-// Remove is not safe to be called concurrently
-func (s *Snapshotter) Remove(ctx context.Context, key string) error {
-	_, blob, err := s.GetBlob(ctx, key)
-	if err != nil {
-		return err
-	}
-
-	blobs, err := s.opt.MetadataStore.Search(index(blob))
-	if err != nil {
-		return err
-	}
-
-	if err := s.SnapshotterBase.Remove(ctx, key); err != nil {
-		return err
-	}
-
-	if len(blobs) == 1 && blobs[0].ID() == key { // last snapshot
-		if err := s.opt.Content.Delete(ctx, blob); err != nil {
-			logrus.Errorf("failed to delete blob %v: %+v", blob, err)
-		}
-	}
-	return nil
-}
-
-func (s *Snapshotter) Usage(ctx context.Context, key string) (snapshots.Usage, error) {
-	u, err := s.SnapshotterBase.Usage(ctx, key)
-	if err != nil {
-		return snapshots.Usage{}, err
-	}
-	_, blob, err := s.GetBlob(ctx, key)
-	if err != nil {
-		return u, err
-	}
-	if blob != "" {
-		info, err := s.opt.Content.Info(ctx, blob)
-		if err != nil {
-			return u, err
-		}
-		(&u).Add(snapshots.Usage{Size: info.Size, Inodes: 1})
-	}
-	return u, nil
-}
-
-func (s *Snapshotter) GetBlob(ctx context.Context, key string) (digest.Digest, digest.Digest, error) {
-	md, _ := s.opt.MetadataStore.Get(key)
-	v := md.Get(blobKey)
-	if v == nil {
-		return "", "", nil
-	}
-	var blob DiffPair
-	if err := v.Unmarshal(&blob); err != nil {
-		return "", "", err
-	}
-	return blob.DiffID, blob.Blobsum, nil
-}
-
-// Validates that there is no blob associated with the snapshot.
-// Checks that there is a blob in the content store.
-// If same blob has already been set then this is a noop.
-func (s *Snapshotter) SetBlob(ctx context.Context, key string, diffID, blobsum digest.Digest) error {
-	info, err := s.opt.Content.Info(ctx, blobsum)
-	if err != nil {
-		return err
-	}
-	if _, ok := info.Labels["containerd.io/uncompressed"]; !ok {
-		labels := map[string]string{
-			"containerd.io/uncompressed": diffID.String(),
-		}
-		if _, err := s.opt.Content.Update(ctx, content.Info{
-			Digest: blobsum,
-			Labels: labels,
-		}, "labels.containerd.io/uncompressed"); err != nil {
-			return err
-		}
-	}
-	// update gc.root cause blob might be held by lease only
-	if _, err := s.opt.Content.Update(ctx, content.Info{
-		Digest: blobsum,
-		Labels: map[string]string{
-			"containerd.io/gc.root": time.Now().UTC().Format(time.RFC3339Nano),
-		},
-	}, "labels.containerd.io/gc.root"); err != nil {
-		return err
-	}
-
-	md, _ := s.opt.MetadataStore.Get(key)
-
-	v, err := metadata.NewValue(DiffPair{DiffID: diffID, Blobsum: blobsum})
-	if err != nil {
-		return err
-	}
-	v.Index = index(blobsum)
-
-	return md.Update(func(b *bolt.Bucket) error {
-		return md.SetValue(b, blobKey, v)
-	})
-}
-
-func index(blob digest.Digest) string {
-	return "blobmap::" + blob.String()
-}
vendor/github.com/moby/buildkit/snapshot/containerd/content.go (generated, vendored; new file, 82 lines)

@@ -0,0 +1,82 @@
+package containerd
+
+import (
+	"context"
+
+	"github.com/containerd/containerd/content"
+	"github.com/containerd/containerd/namespaces"
+	"github.com/opencontainers/go-digest"
+	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/pkg/errors"
+)
+
+func NewContentStore(store content.Store, ns string) content.Store {
+	return &nsContent{ns, store}
+}
+
+type nsContent struct {
+	ns string
+	content.Store
+}
+
+func (c *nsContent) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.Info(ctx, dgst)
+}
+
+func (c *nsContent) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.Update(ctx, info, fieldpaths...)
+}
+
+func (c *nsContent) Walk(ctx context.Context, fn content.WalkFunc, filters ...string) error {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.Walk(ctx, fn, filters...)
+}
+
+func (c *nsContent) Delete(ctx context.Context, dgst digest.Digest) error {
+	return errors.Errorf("contentstore.Delete usage is forbidden")
+}
+
+func (c *nsContent) Status(ctx context.Context, ref string) (content.Status, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.Status(ctx, ref)
+}
+
+func (c *nsContent) ListStatuses(ctx context.Context, filters ...string) ([]content.Status, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.ListStatuses(ctx, filters...)
+}
+
+func (c *nsContent) Abort(ctx context.Context, ref string) error {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.Abort(ctx, ref)
+}
+
+func (c *nsContent) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	return c.Store.ReaderAt(ctx, desc)
+}
+
+func (c *nsContent) Writer(ctx context.Context, opts ...content.WriterOpt) (content.Writer, error) {
+	return c.writer(ctx, 3, opts...)
+}
+
+func (c *nsContent) writer(ctx context.Context, retries int, opts ...content.WriterOpt) (content.Writer, error) {
+	ctx = namespaces.WithNamespace(ctx, c.ns)
+	w, err := c.Store.Writer(ctx, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return &nsWriter{Writer: w, ns: c.ns}, nil
+}
+
+type nsWriter struct {
+	content.Writer
+	ns string
+}
+
+func (w *nsWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
+	ctx = namespaces.WithNamespace(ctx, w.ns)
+	return w.Writer.Commit(ctx, size, expected, opts...)
+}
vendor/github.com/moby/buildkit/snapshot/containerd/snapshotter.go (generated, vendored; new file, 63 lines)

@@ -0,0 +1,63 @@
+package containerd
+
+import (
+	"context"
+
+	"github.com/containerd/containerd/mount"
+	"github.com/containerd/containerd/namespaces"
+	"github.com/containerd/containerd/snapshots"
+	"github.com/docker/docker/pkg/idtools"
+	"github.com/moby/buildkit/snapshot"
+	"github.com/pkg/errors"
+)
+
+func NewSnapshotter(name string, snapshotter snapshots.Snapshotter, ns string, idmap *idtools.IdentityMapping) snapshot.Snapshotter {
+	return snapshot.FromContainerdSnapshotter(name, &nsSnapshotter{ns, snapshotter}, idmap)
+}
+
+func NSSnapshotter(ns string, snapshotter snapshots.Snapshotter) snapshots.Snapshotter {
+	return &nsSnapshotter{ns: ns, Snapshotter: snapshotter}
+}
+
+type nsSnapshotter struct {
+	ns string
+	snapshots.Snapshotter
+}
+
+func (s *nsSnapshotter) Stat(ctx context.Context, key string) (snapshots.Info, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Stat(ctx, key)
+}
+
+func (s *nsSnapshotter) Update(ctx context.Context, info snapshots.Info, fieldpaths ...string) (snapshots.Info, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Update(ctx, info, fieldpaths...)
+}
+
+func (s *nsSnapshotter) Usage(ctx context.Context, key string) (snapshots.Usage, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Usage(ctx, key)
+}
+func (s *nsSnapshotter) Mounts(ctx context.Context, key string) ([]mount.Mount, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Mounts(ctx, key)
+}
+func (s *nsSnapshotter) Prepare(ctx context.Context, key, parent string, opts ...snapshots.Opt) ([]mount.Mount, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Prepare(ctx, key, parent, opts...)
+}
+func (s *nsSnapshotter) View(ctx context.Context, key, parent string, opts ...snapshots.Opt) ([]mount.Mount, error) {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.View(ctx, key, parent, opts...)
+}
+func (s *nsSnapshotter) Commit(ctx context.Context, name, key string, opts ...snapshots.Opt) error {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Commit(ctx, name, key, opts...)
+}
+func (s *nsSnapshotter) Remove(ctx context.Context, key string) error {
+	return errors.Errorf("calling snapshotter.Remove is forbidden")
+}
+func (s *nsSnapshotter) Walk(ctx context.Context, fn func(context.Context, snapshots.Info) error) error {
+	ctx = namespaces.WithNamespace(ctx, s.ns)
+	return s.Snapshotter.Walk(ctx, fn)
+}
vendor/github.com/moby/buildkit/snapshot/snapshotter.go (generated, vendored; 17 lines changed)

@@ -9,7 +9,6 @@ import (
 	"github.com/containerd/containerd/mount"
 	"github.com/containerd/containerd/snapshots"
 	"github.com/docker/docker/pkg/idtools"
-	digest "github.com/opencontainers/go-digest"
 )
 
 type Mountable interface {
@@ -18,7 +17,8 @@ type Mountable interface {
 	IdentityMapping() *idtools.IdentityMapping
 }
 
-type SnapshotterBase interface {
+// Snapshotter defines interface that any snapshot implementation should satisfy
+type Snapshotter interface {
 	Name() string
 	Mounts(ctx context.Context, key string) (Mountable, error)
 	Prepare(ctx context.Context, key, parent string, opts ...snapshots.Opt) error
@@ -34,18 +34,7 @@ type Snapshotter interface {
 	IdentityMapping() *idtools.IdentityMapping
 }
 
-// Snapshotter defines interface that any snapshot implementation should satisfy
-type Snapshotter interface {
-	Blobmapper
-	SnapshotterBase
-}
-
-type Blobmapper interface {
-	GetBlob(ctx context.Context, key string) (digest.Digest, digest.Digest, error)
-	SetBlob(ctx context.Context, key string, diffID, blob digest.Digest) error
-}
-
-func FromContainerdSnapshotter(name string, s snapshots.Snapshotter, idmap *idtools.IdentityMapping) SnapshotterBase {
+func FromContainerdSnapshotter(name string, s snapshots.Snapshotter, idmap *idtools.IdentityMapping) Snapshotter {
 	return &fromContainerd{name: name, Snapshotter: s, idmap: idmap}
 }
 
vendor/github.com/moby/buildkit/util/imageutil/config.go (generated, vendored; 54 lines changed)

@@ -3,7 +3,6 @@ package imageutil
 import (
 	"context"
 	"encoding/json"
-	"fmt"
 	"sync"
 	"time"
 
@@ -50,7 +49,7 @@ func Config(ctx context.Context, str string, resolver remotes.Resolver, cache Co
 	}
 
 	if leaseManager != nil {
-		ctx2, done, err := leaseutil.WithLease(ctx, leaseManager, leases.WithExpiration(5*time.Minute))
+		ctx2, done, err := leaseutil.WithLease(ctx, leaseManager, leases.WithExpiration(5*time.Minute), leaseutil.MakeTemporary)
 		if err != nil {
 			return "", nil, errors.WithStack(err)
 		}
@@ -94,12 +93,9 @@ func Config(ctx context.Context, str string, resolver remotes.Resolver, cache Co
 	}
 
 	children := childrenConfigHandler(cache, platform)
-	if m, ok := cache.(content.Manager); ok {
-		children = SetChildrenLabelsNonBlobs(m, children)
-	}
 
 	handlers := []images.Handler{
-		fetchWithoutRoot(remotes.FetchHandler(cache, fetcher)),
+		remotes.FetchHandler(cache, fetcher),
 		children,
 	}
 	if err := images.Dispatch(ctx, images.Handlers(handlers...), nil, desc); err != nil {
@@ -118,16 +114,6 @@ func Config(ctx context.Context, str string, resolver remotes.Resolver, cache Co
 	return desc.Digest, dt, nil
 }
 
-func fetchWithoutRoot(fetch images.HandlerFunc) images.HandlerFunc {
-	return func(ctx context.Context, desc specs.Descriptor) ([]specs.Descriptor, error) {
-		if desc.Annotations == nil {
-			desc.Annotations = map[string]string{}
-		}
-		desc.Annotations["buildkit/noroot"] = "true"
-		return fetch(ctx, desc)
-	}
-}
-
 func childrenConfigHandler(provider content.Provider, platform platforms.MatchComparer) images.HandlerFunc {
 	return func(ctx context.Context, desc specs.Descriptor) ([]specs.Descriptor, error) {
 		var descs []specs.Descriptor
@@ -207,39 +193,3 @@ func DetectManifestBlobMediaType(dt []byte) (string, error) {
 	}
 	return images.MediaTypeDockerSchema2ManifestList, nil
 }
-
-func SetChildrenLabelsNonBlobs(manager content.Manager, f images.HandlerFunc) images.HandlerFunc {
-	return func(ctx context.Context, desc specs.Descriptor) ([]specs.Descriptor, error) {
-		children, err := f(ctx, desc)
-		if err != nil {
-			return children, err
-		}
-
-		if len(children) > 0 {
-			info := content.Info{
-				Digest: desc.Digest,
-				Labels: map[string]string{},
-			}
-			fields := []string{}
-			for i, ch := range children {
-				switch ch.MediaType {
-				case images.MediaTypeDockerSchema2Layer, images.MediaTypeDockerSchema2LayerGzip, specs.MediaTypeImageLayer, specs.MediaTypeImageLayerGzip:
-					continue
-				default:
-				}
-
-				info.Labels[fmt.Sprintf("containerd.io/gc.ref.content.%d", i)] = ch.Digest.String()
-				fields = append(fields, fmt.Sprintf("labels.containerd.io/gc.ref.content.%d", i))
-			}
-
-			if len(info.Labels) > 0 {
-				_, err := manager.Update(ctx, info, fields...)
-				if err != nil {
-					return nil, err
-				}
-			}
-		}
-
-		return children, err
-	}
-}
vendor/github.com/moby/buildkit/util/leaseutil/manager.go (generated, vendored; 35 lines changed)

@@ -27,26 +27,49 @@ func WithLease(ctx context.Context, ls leases.Manager, opts ...leases.Opt) (cont
 	}, nil
 }
 
+func MakeTemporary(l *leases.Lease) error {
+	if l.Labels == nil {
+		l.Labels = map[string]string{}
+	}
+	l.Labels["buildkit/lease.temporary"] = time.Now().UTC().Format(time.RFC3339Nano)
+	return nil
+}
+
 func WithNamespace(lm leases.Manager, ns string) leases.Manager {
-	return &nsLM{Manager: lm, ns: ns}
+	return &nsLM{manager: lm, ns: ns}
}
 
 type nsLM struct {
-	leases.Manager
-	ns string
+	manager leases.Manager
+	ns      string
 }
 
 func (l *nsLM) Create(ctx context.Context, opts ...leases.Opt) (leases.Lease, error) {
 	ctx = namespaces.WithNamespace(ctx, l.ns)
-	return l.Manager.Create(ctx, opts...)
+	return l.manager.Create(ctx, opts...)
 }
 
 func (l *nsLM) Delete(ctx context.Context, lease leases.Lease, opts ...leases.DeleteOpt) error {
 	ctx = namespaces.WithNamespace(ctx, l.ns)
-	return l.Manager.Delete(ctx, lease, opts...)
+	return l.manager.Delete(ctx, lease, opts...)
 }
 
 func (l *nsLM) List(ctx context.Context, filters ...string) ([]leases.Lease, error) {
 	ctx = namespaces.WithNamespace(ctx, l.ns)
-	return l.Manager.List(ctx, filters...)
+	return l.manager.List(ctx, filters...)
 }
+
+func (l *nsLM) AddResource(ctx context.Context, lease leases.Lease, resource leases.Resource) error {
+	ctx = namespaces.WithNamespace(ctx, l.ns)
+	return l.manager.AddResource(ctx, lease, resource)
+}
+
+func (l *nsLM) DeleteResource(ctx context.Context, lease leases.Lease, resource leases.Resource) error {
+	ctx = namespaces.WithNamespace(ctx, l.ns)
+	return l.manager.DeleteResource(ctx, lease, resource)
+}
+
+func (l *nsLM) ListResources(ctx context.Context, lease leases.Lease) ([]leases.Resource, error) {
+	ctx = namespaces.WithNamespace(ctx, l.ns)
+	return l.manager.ListResources(ctx, lease)
+}