From a36ba19ccad8ea551a912ce16921af89d9e59012 Mon Sep 17 00:00:00 2001
From: Sven Dowideit
Date: Wed, 8 Oct 2014 11:23:54 +1000
Subject: [PATCH] Add a best practice to reduce cache invalidations

inspired by https://github.com/docker-training/docker-fundamentals/pull/206

Signed-off-by: Sven Dowideit
---
 .../articles/dockerfile_best-practices.md | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/docs/sources/articles/dockerfile_best-practices.md b/docs/sources/articles/dockerfile_best-practices.md
index 910912bbb5..31f932d651 100644
--- a/docs/sources/articles/dockerfile_best-practices.md
+++ b/docs/sources/articles/dockerfile_best-practices.md
@@ -261,9 +261,23 @@ some features (like local-only tar extraction and remote URL support) that are
 not immediately obvious. Consequently, the best use for `ADD` is local tar file
 auto-extraction into the image, as in `ADD rootfs.tar.xz /`.
 
+If you have multiple `Dockerfile` steps that use different files from your
+context, `COPY` them individually, rather than all at once. This ensures that
+each step's build cache is only invalidated (forcing the step to be re-run) if the
+specifically required files change.
+
+For example:
+
+    COPY requirements.txt /tmp/
+    RUN pip install --requirement /tmp/requirements.txt
+    COPY . /tmp/
+
+This results in fewer cache invalidations for the `RUN` step than if you put
+the `COPY . /tmp/` before it.
+
 Because image size matters, using `ADD` to fetch packages from remote URLs is
 strongly discouraged; you should use `curl` or `wget` instead. That way you can
-delete the files you no longer need after they’ve been extracted and you won't
+delete the files you no longer need after they've been extracted and you won't
 have to add another layer in your image. For example, you should avoid doing
 things like:
 
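
For illustration, here is a slightly fuller sketch of the caching pattern described
in the hunk above, assuming a hypothetical Python application; the `python:2.7` base
image, the `/app` path, and the `app.py` entry point are placeholders, not part of
the patch. Changes to application code invalidate the build cache only from the
final `COPY` onward, while the `pip install` layer stays cached as long as
`requirements.txt` is unchanged.

    # Hypothetical example; image and file names are assumptions for illustration.
    FROM python:2.7

    # Copy only the dependency list first, so the expensive install layer below
    # is reused from cache while application code keeps changing.
    COPY requirements.txt /tmp/
    RUN pip install --requirement /tmp/requirements.txt

    # Copy the rest of the build context last; changes to application files
    # invalidate the cache only from this step onward.
    COPY . /app
    CMD ["python", "/app/app.py"]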