daemon/oci_windows.go
package daemon // import "github.com/docker/docker/daemon"
import (
"encoding/json"
"fmt"
"io/ioutil"
"path/filepath"
"runtime"
"strings"
"github.com/Microsoft/hcsshim/osversion"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/container"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/oci"
"github.com/docker/docker/oci/caps"
"github.com/docker/docker/pkg/sysinfo"
"github.com/docker/docker/pkg/system"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sys/windows/registry"
)
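
// A credential spec may be read from a registry value under the HKLM key
// below, from a JSON file under the daemon root's "CredentialSpecs"
// directory, or passed raw through the API; see setWindowsCredentialSpec.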
const (
	credentialSpecRegistryLocation = `SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs`
	credentialSpecFileLocation     = "CredentialSpecs"
)

func (daemon *Daemon) createSpec(c *container.Container) (*specs.Spec, error) {
	img, err := daemon.imageService.GetImage(string(c.ImageID))
	if err != nil {
		return nil, err
	}

	s := oci.DefaultOSSpec(img.OS)

	linkedEnv, err := daemon.setupLinkedContainers(c)
	if err != nil {
		return nil, err
	}
	// Note, unlike Unix, we do NOT call into SetupWorkingDirectory as
	// this is done in VMCompute. Further, we couldn't do it for Hyper-V
	// containers anyway.

	if err := daemon.setupSecretDir(c); err != nil {
		return nil, err
	}

	if err := daemon.setupConfigDir(c); err != nil {
		return nil, err
	}

	// In s.Mounts
	mounts, err := daemon.setupMounts(c)
	if err != nil {
		return nil, err
	}

	var isHyperV bool
	if c.HostConfig.Isolation.IsDefault() {
		// Container is using the default isolation, so take the default from the daemon configuration.
		isHyperV = daemon.defaultIsolation.IsHyperV()
	} else {
		// Container may be requesting an explicit isolation mode.
		isHyperV = c.HostConfig.Isolation.IsHyperV()
	}
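
	// Example: a daemon started with `--exec-opt isolation=hyperv` makes
	// Hyper-V the default above, while `docker run --isolation=process ...`
	// still selects process isolation for that container.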
	if isHyperV {
		s.Windows.HyperV = &specs.WindowsHyperV{}
	}

	// If the container has not been started, and has configs or secrets,
	// create symlinks to each config and secret. If it has been started
	// before, the symlinks should have already been created. Also, it
	// is important to not mount a Hyper-V container that has been started
	// before, to protect the host from the container; for example, from
	// malicious mutation of NTFS data structures.
	if !c.HasBeenStartedBefore && (len(c.SecretReferences) > 0 || len(c.ConfigReferences) > 0) {
		// The container file system is mounted before this function is called,
		// except for Hyper-V containers, so mount it here in that case.
		if isHyperV {
			if err := daemon.Mount(c); err != nil {
				return nil, err
			}
			defer daemon.Unmount(c)
		}
		if err := c.CreateSecretSymlinks(); err != nil {
			return nil, err
		}
		if err := c.CreateConfigSymlinks(); err != nil {
			return nil, err
		}
	}

	secretMounts, err := c.SecretMounts()
	if err != nil {
		return nil, err
	}
	if secretMounts != nil {
		mounts = append(mounts, secretMounts...)
	}

	configMounts := c.ConfigMounts()
	if configMounts != nil {
		mounts = append(mounts, configMounts...)
	}

	for _, mount := range mounts {
		m := specs.Mount{
			Source:      mount.Source,
			Destination: mount.Destination,
		}
		if !mount.Writable {
			m.Options = append(m.Options, "ro")
		}
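		// When the image OS differs from the host OS (an LCOW container),
		// the mount is a bind into the utility VM; the uvmpath option below
		// tells the GCS where to surface the bind inside that VM.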
		if img.OS != runtime.GOOS {
			m.Type = "bind"
			m.Options = append(m.Options, "rbind")
			m.Options = append(m.Options, fmt.Sprintf("uvmpath=/tmp/gcs/%s/binds", c.ID))
		}
		s.Mounts = append(s.Mounts, m)
	}

	// In s.Process
	s.Process.Cwd = c.Config.WorkingDir
	s.Process.Env = c.CreateDaemonEnvironment(c.Config.Tty, linkedEnv)
	s.Process.Terminal = c.Config.Tty

	if c.Config.Tty {
		s.Process.ConsoleSize = &specs.Box{
			Height: c.HostConfig.ConsoleSize[0],
			Width:  c.HostConfig.ConsoleSize[1],
		}
	}

	s.Process.User.Username = c.Config.User
	s.Windows.LayerFolders, err = daemon.imageService.GetLayerFolders(img, c.RWLayer)
	if err != nil {
		return nil, errors.Wrapf(err, "container %s", c.ID)
	}

	dnsSearch := daemon.getDNSSearchSettings(c)

	// Get endpoints for the libnetwork allocated networks to the container
	var epList []string
	AllowUnqualifiedDNSQuery := false
	gwHNSID := ""
	if c.NetworkSettings != nil {
		for n := range c.NetworkSettings.Networks {
			sn, err := daemon.FindNetwork(n)
			if err != nil {
				continue
			}

			ep, err := getEndpointInNetwork(c.Name, sn)
			if err != nil {
				continue
			}

			data, err := ep.DriverInfo()
			if err != nil {
				continue
			}

			if data["GW_INFO"] != nil {
				gwInfo := data["GW_INFO"].(map[string]interface{})
				if gwInfo["hnsid"] != nil {
					gwHNSID = gwInfo["hnsid"].(string)
				}
			}

			if data["hnsid"] != nil {
				epList = append(epList, data["hnsid"].(string))
			}

			if data["AllowUnqualifiedDNSQuery"] != nil {
				AllowUnqualifiedDNSQuery = true
			}
		}
	}

	var networkSharedContainerID string
	if c.HostConfig.NetworkMode.IsContainer() {
		networkSharedContainerID = c.NetworkSharedContainerID
		epList = append(epList, c.SharedEndpointList...)
	}

	if gwHNSID != "" {
		epList = append(epList, gwHNSID)
	}

	s.Windows.Network = &specs.WindowsNetwork{
		AllowUnqualifiedDNSQuery:   AllowUnqualifiedDNSQuery,
		DNSSearchList:              dnsSearch,
		EndpointList:               epList,
		NetworkSharedContainerName: networkSharedContainerID,
	}

	switch img.OS {
	case "windows":
		if err := daemon.createSpecWindowsFields(c, &s, isHyperV); err != nil {
			return nil, err
		}
	case "linux":
		if !system.LCOWSupported() {
			return nil, fmt.Errorf("Linux containers on Windows are not supported")
		}
		if err := daemon.createSpecLinuxFields(c, &s); err != nil {
			return nil, err
		}
	default:
		return nil, fmt.Errorf("Unsupported platform %q", img.OS)
	}
	if logrus.IsLevelEnabled(logrus.DebugLevel) {
		if b, err := json.Marshal(&s); err == nil {
			logrus.Debugf("Generated spec: %s", string(b))
		}
	}

	return (*specs.Spec)(&s), nil
}

// Sets the Windows-specific fields of the OCI spec.
func (daemon *Daemon) createSpecWindowsFields(c *container.Container, s *specs.Spec, isHyperV bool) error {
	s.Hostname = c.FullHostname()

	if len(s.Process.Cwd) == 0 {
		// We default to C:\ to work around the oddity that the default
		// directory for cmd running as LocalSystem (or ContainerAdministrator)
		// is c:\windows\system32, so `docker run <image> cmd` would otherwise
		// start in c:\windows\system32 rather than at 'root' (/) as on Linux.
		// The related oddity is that a Dockerfile with no WORKDIR and a
		// `COPY file .` interprets `.` as c:\. Defaulting to c:\ keeps the
		// two consistent.
		s.Process.Cwd = `C:\`
	}

	if c.Config.ArgsEscaped {
		s.Process.CommandLine = c.Path
		if len(c.Args) > 0 {
			s.Process.CommandLine += " " + system.EscapeArgs(c.Args)
		}
	} else {
		s.Process.Args = append([]string{c.Path}, c.Args...)
	}
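
	// Illustration: a shell-form `CMD dir "C:\Program Files"` in a Dockerfile
	// commits ArgsEscaped=true, so the original command line flows through
	// Process.CommandLine untouched; the exec form `CMD ["dir", "C:\\Program Files"]`
	// instead arrives as Process.Args and is escaped for CreateProcess by the
	// runtime. (Dockerfile lines shown are illustrative.)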

	s.Root.Readonly = false // Windows does not support a read-only root filesystem
	if !isHyperV {
		if c.BaseFS == nil {
			return errors.New("createSpecWindowsFields: BaseFS of container " + c.ID + " is unexpectedly nil")
		}

		s.Root.Path = c.BaseFS.Path() // This is not set for Hyper-V containers
		if !strings.HasSuffix(s.Root.Path, `\`) {
			s.Root.Path = s.Root.Path + `\` // Ensure a correctly formatted volume GUID path \\?\Volume{GUID}\
		}
	}

	// First boot optimization
	s.Windows.IgnoreFlushesDuringBoot = !c.HasBeenStartedBefore

	setResourcesInSpec(c, s, isHyperV)

	// Read and add credentials from the security options if a credential spec has been provided.
	if err := daemon.setWindowsCredentialSpec(c, s); err != nil {
		return err
	}

	// Do we have any assigned devices?
	if len(c.HostConfig.Devices) > 0 {
		if isHyperV {
			return errors.New("device assignment is not supported for HyperV containers")
		}
		if osversion.Build() < osversion.RS5 {
			return errors.New("device assignment requires Windows builds RS5 (17763+) or later")
		}
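
		// Each mapping is expected in the form "class/<device interface class GUID>",
		// e.g. --device="class/5B45201D-F2F2-4F3B-85BB-30FF1F953599" (GUID is
		// illustrative), which is translated into an OCI WindowsDevice below.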
		for _, deviceMapping := range c.HostConfig.Devices {
			srcParts := strings.SplitN(deviceMapping.PathOnHost, "/", 2)
			if len(srcParts) != 2 {
				return errors.New("invalid device assignment path")
			}
			if srcParts[0] != "class" {
				return errors.Errorf("invalid device assignment type: '%s' should be 'class'", srcParts[0])
			}
			wd := specs.WindowsDevice{
				ID:     srcParts[1],
				IDType: srcParts[0],
			}
			s.Windows.Devices = append(s.Windows.Devices, wd)
		}
	}

	return nil
}

var errInvalidCredentialSpecSecOpt = errdefs.InvalidParameter(fmt.Errorf("invalid credential spec security option - value must be prefixed by 'file://', 'registry://', or 'raw://' followed by a non-empty value"))

// setWindowsCredentialSpec sets the spec's `Windows.CredentialSpec`
// field if relevant.
func (daemon *Daemon) setWindowsCredentialSpec(c *container.Container, s *specs.Spec) error {
	if c.HostConfig == nil || c.HostConfig.SecurityOpt == nil {
		return nil
	}

	// TODO (jrouge/wk8): if provided with several security options, we silently ignore
	// all but the last one (provided they're all valid, otherwise we do return an error);
	// this doesn't seem like a great idea?
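
	// The accepted schemes mirror the `--security-opt` values, e.g.:
	//   credentialspec=file://spec.json        (JSON file under the daemon's "CredentialSpecs" directory)
	//   credentialspec=registry://MyCredSpec   (value under the registry key named in the consts above)
	//   credentialspec=raw://{...}             (the credential spec itself, passed verbatim)
	// (names are illustrative; `config://` additionally works for swarmkit-managed containers)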
credentialSpec := ""
for _, secOpt := range c.HostConfig.SecurityOpt {
optSplits := strings.SplitN(secOpt, "=", 2)
if len(optSplits) != 2 {
return errdefs.InvalidParameter(fmt.Errorf("invalid security option: no equals sign in supplied value %s", secOpt))
}
if !strings.EqualFold(optSplits[0], "credentialspec") {
return errdefs.InvalidParameter(fmt.Errorf("security option not supported: %s", optSplits[0]))
}
credSpecSplits := strings.SplitN(optSplits[1], "://", 2)
if len(credSpecSplits) != 2 || credSpecSplits[1] == "" {
return errInvalidCredentialSpecSecOpt
}
value := credSpecSplits[1]
var err error
switch strings.ToLower(credSpecSplits[0]) {
case "file":
if credentialSpec, err = readCredentialSpecFile(c.ID, daemon.root, filepath.Clean(value)); err != nil {
return errdefs.InvalidParameter(err)
}
case "registry":
if credentialSpec, err = readCredentialSpecRegistry(c.ID, value); err != nil {
return errdefs.InvalidParameter(err)
}
case "config":
// if the container does not have a DependencyStore, then it
// isn't swarmkit managed. In order to avoid creating any
// impression that `config://` is a valid API, return the same
// error as if you'd passed any other random word.
if c.DependencyStore == nil {
return errInvalidCredentialSpecSecOpt
}
csConfig, err := c.DependencyStore.Configs().Get(value)
if err != nil {
return errdefs.System(errors.Wrap(err, "error getting value from config store"))
}
// stuff the resulting secret data into a string to use as the
// CredentialSpec
credentialSpec = string(csConfig.Spec.Data)
case "raw":
credentialSpec = value
default:
return errInvalidCredentialSpecSecOpt
}
}
if credentialSpec != "" {
if s.Windows == nil {
s.Windows = &specs.Windows{}
}
s.Windows.CredentialSpec = credentialSpec
}
return nil
}

// Sets the Linux-specific fields of the OCI spec.
// TODO: LCOW Support. We need to do a lot more pulling in what can
// be pulled in from oci_linux.go.
func (daemon *Daemon) createSpecLinuxFields(c *container.Container, s *specs.Spec) error {
	s.Root = &specs.Root{
		Path:     "rootfs",
		Readonly: c.HostConfig.ReadonlyRootfs,
	}

	s.Hostname = c.Config.Hostname
	setLinuxDomainname(c, s)

	if len(s.Process.Cwd) == 0 {
		s.Process.Cwd = `/`
	}
	s.Process.Args = append([]string{c.Path}, c.Args...)

	// Note these are against the UVM.
	setResourcesInSpec(c, s, true) // LCOW is Hyper-V only
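
	// For example, `docker run --cap-add SYS_ADMIN --cap-drop CHOWN ...`
	// (flags shown for illustration) adjusts the default capability set
	// computed below before it is written into the spec.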
	capabilities, err := caps.TweakCapabilities(caps.DefaultCapabilities(), c.HostConfig.CapAdd, c.HostConfig.CapDrop, c.HostConfig.Capabilities, c.HostConfig.Privileged)
	if err != nil {
		return fmt.Errorf("linux spec capabilities: %v", err)
	}
	if err := oci.SetCapabilities(s, capabilities); err != nil {
		return fmt.Errorf("linux spec capabilities: %v", err)
	}

	devPermissions, err := oci.AppendDevicePermissionsFromCgroupRules(nil, c.HostConfig.DeviceCgroupRules)
	if err != nil {
		return fmt.Errorf("linux runtime spec devices: %v", err)
	}
	s.Linux.Resources.Devices = devPermissions

	return nil
}

func setResourcesInSpec(c *container.Container, s *specs.Spec, isHyperV bool) {
	// In s.Windows.Resources
	cpuShares := uint16(c.HostConfig.CPUShares)
	cpuMaximum := uint16(c.HostConfig.CPUPercent) * 100
	cpuCount := uint64(c.HostConfig.CPUCount)
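
	// Worked example (hypothetical values): `--cpus=1.5` sets NanoCPUs to
	// 1.5e9. Under Hyper-V, cpuCount rounds up to 2 and cpuMaximum becomes
	// 1.5e9/2/1e5 = 7500, i.e. 75.00% of each of the 2 vCPUs. Under process
	// isolation on a 4-CPU host, cpuMaximum becomes 1.5e9/4/1e5 = 3750, i.e.
	// 37.50% of each host CPU (cpuMaximum is in hundredths of a percent).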
	if c.HostConfig.NanoCPUs > 0 {
		if isHyperV {
			cpuCount = uint64(c.HostConfig.NanoCPUs / 1e9)
			leftoverNanoCPUs := c.HostConfig.NanoCPUs % 1e9
			if leftoverNanoCPUs != 0 {
				cpuCount++
				cpuMaximum = uint16(c.HostConfig.NanoCPUs / int64(cpuCount) / (1e9 / 10000))
				if cpuMaximum < 1 {
					// The requested NanoCPUs is so small that we rounded to 0, use 1 instead
					cpuMaximum = 1
				}
			}
		} else {
			cpuMaximum = uint16(c.HostConfig.NanoCPUs / int64(sysinfo.NumCPU()) / (1e9 / 10000))
			if cpuMaximum < 1 {
				// The requested NanoCPUs is so small that we rounded to 0, use 1 instead
				cpuMaximum = 1
			}
		}
	}

	if cpuMaximum != 0 || cpuShares != 0 || cpuCount != 0 {
		if s.Windows.Resources == nil {
			s.Windows.Resources = &specs.WindowsResources{}
		}
		s.Windows.Resources.CPU = &specs.WindowsCPUResources{
			Maximum: &cpuMaximum,
			Shares:  &cpuShares,
			Count:   &cpuCount,
		}
	}

	memoryLimit := uint64(c.HostConfig.Memory)
	if memoryLimit != 0 {
		if s.Windows.Resources == nil {
			s.Windows.Resources = &specs.WindowsResources{}
		}
		s.Windows.Resources.Memory = &specs.WindowsMemoryResources{
			Limit: &memoryLimit,
		}
	}
if c.HostConfig.IOMaximumBandwidth != 0 || c.HostConfig.IOMaximumIOps != 0 {
if s.Windows.Resources == nil {
s.Windows.Resources = &specs.WindowsResources{}
}
s.Windows.Resources.Storage = &specs.WindowsStorageResources{
Bps: &c.HostConfig.IOMaximumBandwidth,
Iops: &c.HostConfig.IOMaximumIOps,
}
}
}
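// Illustrative note (not part of the original source): the storage limits
// above correspond to the CLI's Windows-only I/O flags, along the lines of:
//
//	docker run --io-maxbandwidth 1mb --io-maxiops 100 <image> <command>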
// mergeUlimits merges the Ulimits from HostConfig with daemon defaults, and updates HostConfig.
// It does nothing on non-Linux platforms.
func (daemon *Daemon) mergeUlimits(c *containertypes.HostConfig) {
return
}
// registryKey is an interface wrapper around `registry.Key`,
// listing only the methods we care about here.
// It's mainly useful to easily allow mocking the registry in tests.
type registryKey interface {
GetStringValue(name string) (val string, valtype uint32, err error)
Close() error
}
var registryOpenKeyFunc = func(baseKey registry.Key, path string, access uint32) (registryKey, error) {
return registry.OpenKey(baseKey, path, access)
}
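// Illustrative sketch (not part of the original source): a minimal fake
// implementing registryKey, showing roughly how a unit test (in a _test.go
// file importing "testing") could stub the registry by overriding
// registryOpenKeyFunc. The name fakeRegistryKey and the sample values are
// hypothetical.
//
//	type fakeRegistryKey struct {
//		values map[string]string
//	}
//
//	func (k *fakeRegistryKey) GetStringValue(name string) (string, uint32, error) {
//		v, ok := k.values[name]
//		if !ok {
//			return "", 0, registry.ErrNotExist
//		}
//		return v, registry.SZ, nil
//	}
//
//	func (k *fakeRegistryKey) Close() error { return nil }
//
//	func TestReadCredentialSpecRegistry(t *testing.T) {
//		old := registryOpenKeyFunc
//		defer func() { registryOpenKeyFunc = old }()
//		registryOpenKeyFunc = func(registry.Key, string, uint32) (registryKey, error) {
//			return &fakeRegistryKey{values: map[string]string{"MySpec": `{}`}}, nil
//		}
//		spec, err := readCredentialSpecRegistry("some-container-id", "MySpec")
//		if err != nil || spec != `{}` {
//			t.Fatalf("unexpected result: %q, %v", spec, err)
//		}
//	}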
// readCredentialSpecRegistry is a helper function to read a credential spec from
// the registry. If the registry key cannot be opened, or the named value does
// not exist or cannot be read, an error is returned.
func readCredentialSpecRegistry(id, name string) (string, error) {
key, err := registryOpenKeyFunc(registry.LOCAL_MACHINE, credentialSpecRegistryLocation, registry.QUERY_VALUE)
if err != nil {
return "", errors.Wrapf(err, "failed handling spec %q for container %s - registry key %s could not be opened", name, id, credentialSpecRegistryLocation)
}
defer key.Close()
value, _, err := key.GetStringValue(name)
if err != nil {
if err == registry.ErrNotExist {
return "", fmt.Errorf("registry credential spec %q for container %s was not found", name, id)
}
return "", errors.Wrapf(err, "error reading credential spec %q from registry for container %s", name, id)
}
return value, nil
}
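// Illustrative example (not part of the original source): assuming a string
// value named "MySpec" exists under the registry key named by
// credentialSpecRegistryLocation (below HKEY_LOCAL_MACHINE), a container can
// reference it via the registry:// scheme:
//
//	docker run --security-opt "credentialspec=registry://MySpec" <image> <command>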
// readCredentialSpecFile is a helper function to read a credential spec from
// a file. The location is resolved relative to the daemon's credential spec
// directory; absolute paths and paths escaping that directory are rejected.
func readCredentialSpecFile(id, root, location string) (string, error) {
if filepath.IsAbs(location) {
return "", fmt.Errorf("invalid credential spec - file:// path cannot be absolute")
}
base := filepath.Join(root, credentialSpecFileLocation)
full := filepath.Join(base, location)
if !strings.HasPrefix(full, base) {
return "", fmt.Errorf("invalid credential spec - file:// path must be under %s", base)
}
bcontents, err := ioutil.ReadFile(full)
if err != nil {
return "", errors.Wrapf(err, "credential spec for container %s could not be read from file %q", id, full)
}
	return string(bcontents), nil
}
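// Illustrative note (not part of the original source): filepath.Join cleans
// the joined path, so the HasPrefix guard above rejects traversal attempts.
// Assuming a daemon root of C:\ProgramData\docker and a credentialSpecFileLocation
// of "CredentialSpecs", a call such as:
//
//	readCredentialSpecFile("some-container-id", `C:\ProgramData\docker`, `..\..\secret.json`)
//
// fails with an "invalid credential spec" error rather than reading outside
// the credential spec directory.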