mirror of https://github.com/moby/moby.git synced 2022-11-09 12:21:53 -05:00

Merge pull request #33322 from jsoref/spelling

Spelling
This commit is contained in:
Vincent Demeester 2017-07-04 15:46:34 +02:00 committed by GitHub
commit ff4f700f74
72 changed files with 129 additions and 129 deletions


@@ -190,7 +190,7 @@ be found.
 * Update runc to 54296cf40ad8143b62dbcaa1d90e520a2136ddfe [#31666](https://github.com/docker/docker/pull/31666)
 * Ignore cgroup2 mountpoints [opencontainers/runc#1266](https://github.com/opencontainers/runc/pull/1266)
 * Update containerd to 4ab9917febca54791c5f071a9d1f404867857fcc [#31662](https://github.com/docker/docker/pull/31662) [#31852](https://github.com/docker/docker/pull/31852)
-* Register healtcheck service before calling restore() [docker/containerd#609](https://github.com/docker/containerd/pull/609)
+* Register healthcheck service before calling restore() [docker/containerd#609](https://github.com/docker/containerd/pull/609)
 * Fix `docker exec` not working after unattended upgrades that reload apparmor profiles [#31773](https://github.com/docker/docker/pull/31773)
 * Fix unmounting layer without merge dir with Overlay2 [#31069](https://github.com/docker/docker/pull/31069)
 * Do not ignore "volume in use" errors when force-delete [#31450](https://github.com/docker/docker/pull/31450)
@@ -1087,12 +1087,12 @@ installing docker, please make sure to update them accordingly.
 + Add security options to `docker info` output [#21172](https://github.com/docker/docker/pull/21172) [#23520](https://github.com/docker/docker/pull/23520)
 + Add insecure registries to `docker info` output [#20410](https://github.com/docker/docker/pull/20410)
 + Extend Docker authorization with TLS user information [#21556](https://github.com/docker/docker/pull/21556)
-+ devicemapper: expose Mininum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945)
++ devicemapper: expose Minimum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945)
 * API now returns a JSON object when an error occurs making it more consistent [#22880](https://github.com/docker/docker/pull/22880)
 - Prevent `docker run -i --restart` from hanging on exit [#22777](https://github.com/docker/docker/pull/22777)
 - Fix API/CLI discrepancy on hostname validation [#21641](https://github.com/docker/docker/pull/21641)
 - Fix discrepancy in the format of sizes in `stats` from HumanSize to BytesSize [#21773](https://github.com/docker/docker/pull/21773)
-- authz: when request is denied return forbbiden exit code (403) [#22448](https://github.com/docker/docker/pull/22448)
+- authz: when request is denied return forbidden exit code (403) [#22448](https://github.com/docker/docker/pull/22448)
 - Windows: fix tty-related displaying issues [#23878](https://github.com/docker/docker/pull/23878)
 ### Runtime
@@ -1887,7 +1887,7 @@ by another client (#15489)
 #### Remote API
-- Fix unmarshalling of Command and Entrypoint
+- Fix unmarshaling of Command and Entrypoint
 - Set limit for minimum client version supported
 - Validate port specification
 - Return proper errors when attach/reattach fail
@@ -2572,7 +2572,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
 - Fix ADD caching issue with . prefixed path
 - Fix docker build on devicemapper by reverting sparse file tar option
 - Fix issue with file caching and prevent wrong cache hit
-* Use same error handling while unmarshalling CMD and ENTRYPOINT
+* Use same error handling while unmarshaling CMD and ENTRYPOINT
 #### Documentation


@@ -93,7 +93,7 @@ RUN set -x \
 && rm -rf "$SECCOMP_PATH"
 # Install Go
-# We don't have official binary golang 1.7.5 tarballs for ARM64, eigher for Go or
+# We don't have official binary golang 1.7.5 tarballs for ARM64, either for Go or
 # bootstrap, so we use golang-go (1.6) as bootstrap to build Go from source code.
 # We don't use the official ARMv6 released binaries as a GOROOT_BOOTSTRAP, because
 # not all ARM64 platforms support 32-bit mode. 32-bit mode is optional for ARMv8.


@@ -102,7 +102,7 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
 }
 // doesn't matter what version the client is on, we're using this internally only
-// also do we need size? i'm thinkin no we don't
+// also do we need size? i'm thinking no we don't
 raw, err := s.backend.ContainerInspect(containerName, false, api.DefaultVersion)
 if err != nil {
 return err


@@ -1637,7 +1637,7 @@ definitions:
 may not be applied if the version number has changed from the last read. In other words,
 if two update requests specify the same base version, only one of the requests can succeed.
 As a result, two separate update requests that happen at the same time will not
-unintentially overwrite each other.
+unintentionally overwrite each other.
 type: "object"
 properties:
 Index:


@@ -228,7 +228,7 @@ func TestArgsMatch(t *testing.T) {
 "created": {"tod": true}},
 }: "created",
 {map[string]map[string]bool{
-"created": {"anyting": true, "to*": true}},
+"created": {"anything": true, "to*": true}},
 }: "created",
 }


@@ -2,7 +2,7 @@ package swarm
 import "time"
-// ClusterInfo represents info about the cluster for outputing in "info"
+// ClusterInfo represents info about the cluster for outputting in "info"
 // it contains the same information as "Swarm", but without the JoinTokens
 type ClusterInfo struct {
 ID string


@@ -20,7 +20,7 @@ func TestGetAllAllowed(t *testing.T) {
 })
 buildArgs.AddMetaArg("ArgFromMeta", strPtr("frommeta1"))
-buildArgs.AddMetaArg("ArgFromMetaOverriden", strPtr("frommeta2"))
+buildArgs.AddMetaArg("ArgFromMetaOverridden", strPtr("frommeta2"))
 buildArgs.AddMetaArg("ArgFromMetaNotUsed", strPtr("frommeta3"))
 buildArgs.AddArg("ArgOverriddenByOptions", strPtr("fromdockerfile2"))
@@ -28,7 +28,7 @@ func TestGetAllAllowed(t *testing.T) {
 buildArgs.AddArg("ArgNoDefaultInDockerfile", nil)
 buildArgs.AddArg("ArgNoDefaultInDockerfileFromOptions", nil)
 buildArgs.AddArg("ArgFromMeta", nil)
-buildArgs.AddArg("ArgFromMetaOverriden", strPtr("fromdockerfile3"))
+buildArgs.AddArg("ArgFromMetaOverridden", strPtr("fromdockerfile3"))
 all := buildArgs.GetAllAllowed()
 expected := map[string]string{
@@ -37,7 +37,7 @@ func TestGetAllAllowed(t *testing.T) {
 "ArgWithDefaultInDockerfile": "fromdockerfile1",
 "ArgNoDefaultInDockerfileFromOptions": "fromopt3",
 "ArgFromMeta": "frommeta1",
-"ArgFromMetaOverriden": "fromdockerfile3",
+"ArgFromMetaOverridden": "fromdockerfile3",
 }
 assert.Equal(t, expected, all)
 }


@@ -91,7 +91,7 @@ type Client struct {
 // CheckRedirect specifies the policy for dealing with redirect responses:
 // If the request is non-GET return `ErrRedirect`. Otherwise use the last response.
 //
-// Go 1.8 changes behavior for HTTP redirects (specificlaly 301, 307, and 308) in the client .
+// Go 1.8 changes behavior for HTTP redirects (specifically 301, 307, and 308) in the client .
 // The Docker client (and by extension docker API client) can be made to to send a request
 // like POST /containers//start where what would normally be in the name section of the URL is empty.
 // This triggers an HTTP 301 from the daemon.


@@ -14,7 +14,7 @@ import (
 // indicated by the given condition, either "not-running" (default),
 // "next-exit", or "removed".
 //
-// If this client's API version is beforer 1.30, condition is ignored and
+// If this client's API version is before 1.30, condition is ignored and
 // ContainerWait will return immediately with the two channels, as the server
 // will wait as if the condition were "not-running".
 //
@@ -23,7 +23,7 @@ import (
 // then returns two channels on which the caller can wait for the exit status
 // of the container or an error if there was a problem either beginning the
 // wait request or in getting the response. This allows the caller to
-// sychronize ContainerWait with other calls, such as specifying a
+// synchronize ContainerWait with other calls, such as specifying a
 // "next-exit" condition before issuing a ContainerStart request.
 func (cli *Client) ContainerWait(ctx context.Context, containerID string, condition container.WaitCondition) (<-chan container.ContainerWaitOKBody, <-chan error) {
 if versions.LessThan(cli.ClientVersion(), "1.30") {


@@ -269,7 +269,7 @@ func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfi
 cResources := &container.HostConfig.Resources
 // validate NanoCPUs, CPUPeriod, and CPUQuota
-// Becuase NanoCPU effectively updates CPUPeriod/CPUQuota,
+// Because NanoCPU effectively updates CPUPeriod/CPUQuota,
 // once NanoCPU is already set, updating CPUPeriod/CPUQuota will be blocked, and vice versa.
 // In the following we make sure the intended update (resources) does not conflict with the existing (cResource).
 if resources.NanoCPUs > 0 && cResources.CPUPeriod > 0 {


@@ -185,7 +185,7 @@ const (
 // timeouts, and avoiding goroutine leaks. Wait must be called without holding
 // the state lock. Returns a channel from which the caller will receive the
 // result. If the container exited on its own, the result's Err() method will
-// be nil and its ExitCode() method will return the conatiners exit code,
+// be nil and its ExitCode() method will return the container's exit code,
 // otherwise, the results Err() method will return an error indicating why the
 // wait operation failed.
 func (s *State) Wait(ctx context.Context, condition WaitCondition) <-chan StateStatus {


@@ -343,7 +343,7 @@ func (r *controller) Shutdown(ctx context.Context) error {
 }
 // add a delay for gossip converge
-// TODO(dongluochen): this delay shoud be configurable to fit different cluster size and network delay.
+// TODO(dongluochen): this delay should be configurable to fit different cluster size and network delay.
 time.Sleep(defaultGossipConvergeDelay)
 }


@@ -87,8 +87,8 @@ func TestDiscoveryOpts(t *testing.T) {
 t.Fatalf("Heartbeat - Expected : %v, Actual : %v", expected, heartbeat)
 }
-discaveryTTL := fmt.Sprintf("%d", defaultDiscoveryTTLFactor-1)
-clusterOpts = map[string]string{"discovery.ttl": discaveryTTL}
+discoveryTTL := fmt.Sprintf("%d", defaultDiscoveryTTLFactor-1)
+clusterOpts = map[string]string{"discovery.ttl": discoveryTTL}
 heartbeat, ttl, err = discoveryOpts(clusterOpts)
 if err == nil && heartbeat == 0 {
 t.Fatal("discovery.heartbeat must be positive")


@@ -247,7 +247,7 @@ func TestLoadBufferedEventsOnlyFromPast(t *testing.T) {
 }
 // #13753
-func TestIngoreBufferedWhenNoTimes(t *testing.T) {
+func TestIgnoreBufferedWhenNoTimes(t *testing.T) {
 m1, err := eventstestutils.Scan("2016-03-07T17:28:03.022433271+02:00 container die 0b863f2a26c18557fc6cdadda007c459f9ec81b874780808138aea78a3595079 (image=ubuntu, name=small_hoover)")
 if err != nil {
 t.Fatal(err)


@@ -174,27 +174,27 @@ func writeLVMConfig(root string, cfg directLVMConfig) error {
 func setupDirectLVM(cfg directLVMConfig) error {
 pvCreate, err := exec.LookPath("pvcreate")
 if err != nil {
-return errors.Wrap(err, "error lookuping up command `pvcreate` while setting up direct lvm")
+return errors.Wrap(err, "error looking up command `pvcreate` while setting up direct lvm")
 }
 vgCreate, err := exec.LookPath("vgcreate")
 if err != nil {
-return errors.Wrap(err, "error lookuping up command `vgcreate` while setting up direct lvm")
+return errors.Wrap(err, "error looking up command `vgcreate` while setting up direct lvm")
 }
 lvCreate, err := exec.LookPath("lvcreate")
 if err != nil {
-return errors.Wrap(err, "error lookuping up command `lvcreate` while setting up direct lvm")
+return errors.Wrap(err, "error looking up command `lvcreate` while setting up direct lvm")
 }
 lvConvert, err := exec.LookPath("lvconvert")
 if err != nil {
-return errors.Wrap(err, "error lookuping up command `lvconvert` while setting up direct lvm")
+return errors.Wrap(err, "error looking up command `lvconvert` while setting up direct lvm")
 }
 lvChange, err := exec.LookPath("lvchange")
 if err != nil {
-return errors.Wrap(err, "error lookuping up command `lvchange` while setting up direct lvm")
+return errors.Wrap(err, "error looking up command `lvchange` while setting up direct lvm")
 }
 if cfg.AutoExtendPercent == 0 {


@@ -95,7 +95,7 @@ func GetFSMagic(rootpath string) (FsMagic, error) {
 return FsMagic(buf.Type), nil
 }
-// NewFsChecker returns a checker configured for the provied FsMagic
+// NewFsChecker returns a checker configured for the provided FsMagic
 func NewFsChecker(t FsMagic) Checker {
 return &fsChecker{
 t: t,


@@ -54,7 +54,7 @@ func (c *fsChecker) IsMounted(path string) bool {
 return m
 }
-// NewFsChecker returns a checker configured for the provied FsMagic
+// NewFsChecker returns a checker configured for the provided FsMagic
 func NewFsChecker(t FsMagic) Checker {
 return &fsChecker{
 t: t,


@@ -328,7 +328,7 @@ func makeBackingFsDev(home string) (string, error) {
 }
 backingFsBlockDev := path.Join(home, "backingFsBlockDev")
-// Re-create just in case comeone copied the home directory over to a new device
+// Re-create just in case someone copied the home directory over to a new device
 syscall.Unlink(backingFsBlockDev)
 stat := fileinfo.Sys().(*syscall.Stat_t)
 if err := syscall.Mknod(backingFsBlockDev, syscall.S_IFBLK|0600, int(stat.Dev)); err != nil {


@@ -300,7 +300,7 @@ func (d *Driver) Remove(id string) error {
 //
 // TODO @jhowardmsft - For RS3, we can remove the retries. Also consider
 // using platform APIs (if available) to get this more succinctly. Also
-// consider enlighting the Remove() interface to have context of why
+// consider enhancing the Remove() interface to have context of why
 // the remove is being called - that could improve efficiency by not
 // enumerating compute systems during a remove of a container as it's
 // not required.


@@ -363,7 +363,7 @@ var newTicker = func(freq time.Duration) *time.Ticker {
 // awslogs-datetime-format options have been configured, multiline processing
 // is enabled, where log messages are stored in an event buffer until a multiline
 // pattern match is found, at which point the messages in the event buffer are
-// pushed to CloudWatch logs as a single log event. Multline messages are processed
+// pushed to CloudWatch logs as a single log event. Multiline messages are processed
 // according to the maximumBytesPerPut constraint, and the implementation only
 // allows for messages to be buffered for a maximum of 2*batchPublishFrequency
 // seconds. When events are ready to be processed for submission to CloudWatch


@@ -121,7 +121,7 @@ func (r *RingLogger) run() {
 type messageRing struct {
 mu sync.Mutex
-// singals callers of `Dequeue` to wake up either on `Close` or when a new `Message` is added
+// signals callers of `Dequeue` to wake up either on `Close` or when a new `Message` is added
 wait *sync.Cond
 sizeBytes int64 // current buffer size


@@ -55,7 +55,7 @@ func (daemon *Daemon) createSpec(c *container.Container) (*specs.Spec, error) {
 }
 // If the container has not been started, and has configs or secrets
-// secrets, create symlinks to each confing and secret. If it has been
+// secrets, create symlinks to each config and secret. If it has been
 // started before, the symlinks should have already been created. Also, it
 // is important to not mount a Hyper-V container that has been started
 // before, to protect the host from the container; for example, from


@@ -39,7 +39,7 @@ func (daemon *Daemon) Reload(conf *config.Config) (err error) {
 daemon.reloadPlatform(conf, attributes)
 daemon.reloadDebug(conf, attributes)
-daemon.reloadMaxConcurrentDowloadsAndUploads(conf, attributes)
+daemon.reloadMaxConcurrentDownloadsAndUploads(conf, attributes)
 daemon.reloadShutdownTimeout(conf, attributes)
 if err := daemon.reloadClusterDiscovery(conf, attributes); err != nil {
@@ -74,9 +74,9 @@ func (daemon *Daemon) reloadDebug(conf *config.Config, attributes map[string]str
 attributes["debug"] = fmt.Sprintf("%t", daemon.configStore.Debug)
 }
-// reloadMaxConcurrentDowloadsAndUploads updates configuration with max concurrent
+// reloadMaxConcurrentDownloadsAndUploads updates configuration with max concurrent
 // download and upload options and updates the passed attributes
-func (daemon *Daemon) reloadMaxConcurrentDowloadsAndUploads(conf *config.Config, attributes map[string]string) {
+func (daemon *Daemon) reloadMaxConcurrentDownloadsAndUploads(conf *config.Config, attributes map[string]string) {
 // If no value is set for max-concurrent-downloads we assume it is the default value
 // We always "reset" as the cost is lightweight and easy to maintain.
 if conf.IsValueSet("max-concurrent-downloads") && conf.MaxConcurrentDownloads != nil {


@@ -206,7 +206,7 @@ func TestBackportMountSpec(t *testing.T) {
 BindOptions: &mounttypes.BindOptions{Propagation: "shared"},
 },
 },
-comment: "bind mount with read/write + shared propgation",
+comment: "bind mount with read/write + shared propagation",
 },
 {
 mp: &volume.MountPoint{


@@ -203,7 +203,7 @@ func (serv *v2MetadataService) TagAndAdd(diffID layer.DiffID, hmacKey []byte, me
 return serv.Add(diffID, meta)
 }
-// Remove unassociates a metadata entry from a layer DiffID.
+// Remove disassociates a metadata entry from a layer DiffID.
 func (serv *v2MetadataService) Remove(metadata V2Metadata) error {
 if serv.store == nil {
 // Support a service which has no backend storage, in this case


@@ -185,7 +185,7 @@ func TestLayerAlreadyExists(t *testing.T) {
 expectedRequests: []string{"apple"},
 },
 {
-name: "not matching reposies",
+name: "not matching repositories",
 targetRepo: "busybox",
 maxExistenceChecks: 3,
 metadata: []metadata.V2Metadata{


@@ -52,8 +52,8 @@ func escapeStr(s string, charsToEscape string) string {
 var ret string
 for _, currRune := range s {
 appended := false
-for _, escapeableRune := range charsToEscape {
-if currRune == escapeableRune {
+for _, escapableRune := range charsToEscape {
+if currRune == escapableRune {
 ret += `\` + string(currRune)
 appended = true
 break


@@ -826,7 +826,7 @@ Get `stdout` and `stderr` logs from the container ``id``
 **Query parameters**:
-- **details** - 1/True/true or 0/False/flase, Show extra details provided to logs. Default `false`.
+- **details** - 1/True/true or 0/False/false, Show extra details provided to logs. Default `false`.
 - **follow** 1/True/true or 0/False/false, return stream. Default `false`.
 - **stdout** 1/True/true or 0/False/false, show `stdout` log. Default `false`.
 - **stderr** 1/True/true or 0/False/false, show `stderr` log. Default `false`.


@@ -13,7 +13,7 @@ SCRIPT_VER="Wed Apr 20 18:30:19 UTC 2016"
 # - Error if running 32-bit posix tools. Probably can take from bash --version and check contains "x86_64"
 # - Warn if the CI directory cannot be deleted afterwards. Otherwise turdlets are left behind
 # - Use %systemdrive% ($SYSTEMDRIVE) rather than hard code to c: for TEMP
-# - Consider cross builing the Windows binary and copy across. That's a bit of a heavy lift. Only reason
+# - Consider cross building the Windows binary and copy across. That's a bit of a heavy lift. Only reason
 # for doing that is that it mirrors the actual release process for docker.exe which is cross-built.
 # However, should absolutely not be a problem if built natively, so nit-picking.
 # - Tidy up of images and containers. Either here, or in the teardown script.
@@ -116,7 +116,7 @@ fi
 # Get the commit has and verify we have something
 if [ $ec -eq 0 ]; then
 export COMMITHASH=$(git rev-parse --short HEAD)
-echo INFO: Commmit hash is $COMMITHASH
+echo INFO: Commit hash is $COMMITHASH
 if [ -z $COMMITHASH ]; then
 echo "ERROR: Failed to get commit hash. Are you sure this is a docker repository?"
 ec=1


@@ -24,7 +24,7 @@ func enumerateTestsForBytes(b []byte) ([]string, error) {
 return tests, nil
 }
-// enumareteTests enumerates valid `-check.f` strings for all the test functions.
+// enumerateTests enumerates valid `-check.f` strings for all the test functions.
 // Note that we use regexp rather than parsing Go files for performance reason.
 // (Try `TESTFLAGS=-check.list make test-integration-cli` to see the slowness of parsing)
 // The files needs to be `gofmt`-ed


@@ -36,10 +36,10 @@ func xmain() (int, error) {
 // Should we use cobra maybe?
 replicas := flag.Int("replicas", 1, "Number of worker service replica")
 chunks := flag.Int("chunks", 0, "Number of test chunks executed in batch (0 == replicas)")
-pushWorkerImage := flag.String("push-worker-image", "", "Push the worker image to the registry. Required for distribuetd execution. (empty == not to push)")
+pushWorkerImage := flag.String("push-worker-image", "", "Push the worker image to the registry. Required for distributed execution. (empty == not to push)")
 shuffle := flag.Bool("shuffle", false, "Shuffle the input so as to mitigate makespan nonuniformity")
 // flags below are rarely used
-randSeed := flag.Int64("rand-seed", int64(0), "Random seed used for shuffling (0 == curent time)")
+randSeed := flag.Int64("rand-seed", int64(0), "Random seed used for shuffling (0 == current time)")
 filtersFile := flag.String("filters-file", "", "Path to optional file composed of `-check.f` filter strings")
 dryRun := flag.Bool("dry-run", false, "Dry run")
 keepExecutor := flag.Bool("keep-executor", false, "Do not auto-remove executor containers, which is used for running privileged programs on Swarm")


@@ -175,7 +175,7 @@ Function Execute-Build($type, $additionalBuildTags, $directory) {
if ($Race) { Write-Warning "Using race detector"; $raceParm=" -race"} if ($Race) { Write-Warning "Using race detector"; $raceParm=" -race"}
if ($ForceBuildAll) { $allParm=" -a" } if ($ForceBuildAll) { $allParm=" -a" }
if ($NoOpt) { $optParm=" -gcflags "+""""+"-N -l"+"""" } if ($NoOpt) { $optParm=" -gcflags "+""""+"-N -l"+"""" }
if ($addtionalBuildTags -ne "") { $buildTags += $(" " + $additionalBuildTags) } if ($additionalBuildTags -ne "") { $buildTags += $(" " + $additionalBuildTags) }
# Do the go build in the appropriate directory # Do the go build in the appropriate directory
# Note -linkmode=internal is required to be able to debug on Windows. # Note -linkmode=internal is required to be able to debug on Windows.


@@ -40,7 +40,7 @@ create_index() {
# change IFS locally within subshell so the for loop saves line correctly to L var # change IFS locally within subshell so the for loop saves line correctly to L var
IFS=$'\n'; IFS=$'\n';
# pretty sweet, will mimick the normal apache output. skipping "index" and hidden files # pretty sweet, will mimic the normal apache output. skipping "index" and hidden files
for L in $(find -L . -mount -depth -maxdepth 1 -type f ! -name 'index' ! -name '.*' -prune -printf "<a href=\"%f\">%f|@_@%Td-%Tb-%TY %Tk:%TM @%f@\n"|sort|column -t -s '|' | sed 's,\([\ ]\+\)@_@,</a>\1,g'); for L in $(find -L . -mount -depth -maxdepth 1 -type f ! -name 'index' ! -name '.*' -prune -printf "<a href=\"%f\">%f|@_@%Td-%Tb-%TY %Tk:%TM @%f@\n"|sort|column -t -s '|' | sed 's,\([\ ]\+\)@_@,</a>\1,g');
do do
# file # file


@@ -985,7 +985,7 @@ func (s *DockerSwarmSuite) TestSwarmRepeatedRootRotation(c *check.C) {
if cert != nil { if cert != nil {
c.Assert(clusterTLSInfo.TrustRoot, checker.Equals, expectedCert) c.Assert(clusterTLSInfo.TrustRoot, checker.Equals, expectedCert)
} }
// could take another second or two for the nodes to trust the new roots after the've all gotten // could take another second or two for the nodes to trust the new roots after they've all gotten
// new TLS certificates // new TLS certificates
for j := 0; j < 18; j++ { for j := 0; j < 18; j++ {
mInfo := m.GetNode(c, m.NodeID).Description.TLSInfo mInfo := m.GetNode(c, m.NodeID).Description.TLSInfo


@@ -1712,7 +1712,7 @@ func (s *DockerSuite) TestBuildEntrypoint(c *check.C) {
} }
// #6445 ensure ONBUILD triggers aren't committed to grandchildren // #6445 ensure ONBUILD triggers aren't committed to grandchildren
func (s *DockerSuite) TestBuildOnBuildLimitedInheritence(c *check.C) { func (s *DockerSuite) TestBuildOnBuildLimitedInheritance(c *check.C) {
buildImageSuccessfully(c, "testonbuildtrigger1", build.WithDockerfile(` buildImageSuccessfully(c, "testonbuildtrigger1", build.WithDockerfile(`
FROM busybox FROM busybox
RUN echo "GRANDPARENT" RUN echo "GRANDPARENT"
@@ -3063,7 +3063,7 @@ func (s *DockerSuite) TestBuildFromGitWithContext(c *check.C) {
} }
} }
func (s *DockerSuite) TestBuildFromGitwithF(c *check.C) { func (s *DockerSuite) TestBuildFromGitWithF(c *check.C) {
name := "testbuildfromgitwithf" name := "testbuildfromgitwithf"
git := fakegit.New(c, "repo", map[string]string{ git := fakegit.New(c, "repo", map[string]string{
"myApp/myDockerfile": `FROM busybox "myApp/myDockerfile": `FROM busybox
@@ -3225,7 +3225,7 @@ func (s *DockerSuite) TestBuildCmdJSONNoShDashC(c *check.C) {
} }
} }
func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChild(c *check.C) { func (s *DockerSuite) TestBuildEntrypointCanBeOverriddenByChild(c *check.C) {
buildImageSuccessfully(c, "parent", build.WithDockerfile(` buildImageSuccessfully(c, "parent", build.WithDockerfile(`
FROM busybox FROM busybox
ENTRYPOINT exit 130 ENTRYPOINT exit 130
@@ -3245,7 +3245,7 @@ func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChild(c *check.C) {
}) })
} }
func (s *DockerSuite) TestBuildEntrypointCanBeOverridenByChildInspect(c *check.C) { func (s *DockerSuite) TestBuildEntrypointCanBeOverriddenByChildInspect(c *check.C) {
var ( var (
name = "testbuildepinherit" name = "testbuildepinherit"
name2 = "testbuildepinherit2" name2 = "testbuildepinherit2"
@@ -4472,26 +4472,26 @@ func (s *DockerSuite) TestBuildBuildTimeArgOverrideArgDefinedBeforeEnv(c *check.
imgName := "bldargtest" imgName := "bldargtest"
envKey := "foo" envKey := "foo"
envVal := "bar" envVal := "bar"
envValOveride := "barOverride" envValOverride := "barOverride"
dockerfile := fmt.Sprintf(`FROM busybox dockerfile := fmt.Sprintf(`FROM busybox
ARG %s ARG %s
ENV %s %s ENV %s %s
RUN echo $%s RUN echo $%s
CMD echo $%s CMD echo $%s
`, envKey, envKey, envValOveride, envKey, envKey) `, envKey, envKey, envValOverride, envKey, envKey)
result := buildImage(imgName, result := buildImage(imgName,
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)), cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)),
build.WithDockerfile(dockerfile), build.WithDockerfile(dockerfile),
) )
result.Assert(c, icmd.Success) result.Assert(c, icmd.Success)
if strings.Count(result.Combined(), envValOveride) != 2 { if strings.Count(result.Combined(), envValOverride) != 2 {
c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride)
} }
containerName := "bldargCont" containerName := "bldargCont"
if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) {
c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride)
} }
} }
@@ -4501,25 +4501,25 @@ func (s *DockerSuite) TestBuildBuildTimeArgOverrideEnvDefinedBeforeArg(c *check.
imgName := "bldargtest" imgName := "bldargtest"
envKey := "foo" envKey := "foo"
envVal := "bar" envVal := "bar"
envValOveride := "barOverride" envValOverride := "barOverride"
dockerfile := fmt.Sprintf(`FROM busybox dockerfile := fmt.Sprintf(`FROM busybox
ENV %s %s ENV %s %s
ARG %s ARG %s
RUN echo $%s RUN echo $%s
CMD echo $%s CMD echo $%s
`, envKey, envValOveride, envKey, envKey, envKey) `, envKey, envValOverride, envKey, envKey, envKey)
result := buildImage(imgName, result := buildImage(imgName,
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)), cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)),
build.WithDockerfile(dockerfile), build.WithDockerfile(dockerfile),
) )
result.Assert(c, icmd.Success) result.Assert(c, icmd.Success)
if strings.Count(result.Combined(), envValOveride) != 2 { if strings.Count(result.Combined(), envValOverride) != 2 {
c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride)
} }
containerName := "bldargCont" containerName := "bldargCont"
if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) {
c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride)
} }
} }
@@ -4616,25 +4616,25 @@ func (s *DockerSuite) TestBuildBuildTimeArgExpansionOverride(c *check.C) {
envKey := "foo" envKey := "foo"
envVal := "bar" envVal := "bar"
envKey1 := "foo1" envKey1 := "foo1"
envValOveride := "barOverride" envValOverride := "barOverride"
dockerfile := fmt.Sprintf(`FROM busybox dockerfile := fmt.Sprintf(`FROM busybox
ARG %s ARG %s
ENV %s %s ENV %s %s
ENV %s ${%s} ENV %s ${%s}
RUN echo $%s RUN echo $%s
CMD echo $%s`, envKey, envKey, envValOveride, envKey1, envKey, envKey1, envKey1) CMD echo $%s`, envKey, envKey, envValOverride, envKey1, envKey, envKey1, envKey1)
result := buildImage(imgName, result := buildImage(imgName,
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)), cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envVal)),
build.WithDockerfile(dockerfile), build.WithDockerfile(dockerfile),
) )
result.Assert(c, icmd.Success) result.Assert(c, icmd.Success)
if strings.Count(result.Combined(), envValOveride) != 2 { if strings.Count(result.Combined(), envValOverride) != 2 {
c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride)
} }
containerName := "bldargCont" containerName := "bldargCont"
if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) {
c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride)
} }
} }
@@ -4690,24 +4690,24 @@ func (s *DockerSuite) TestBuildBuildTimeArgDefaultOverride(c *check.C) {
imgName := "bldargtest" imgName := "bldargtest"
envKey := "foo" envKey := "foo"
envVal := "bar" envVal := "bar"
envValOveride := "barOverride" envValOverride := "barOverride"
dockerfile := fmt.Sprintf(`FROM busybox dockerfile := fmt.Sprintf(`FROM busybox
ARG %s=%s ARG %s=%s
ENV %s $%s ENV %s $%s
RUN echo $%s RUN echo $%s
CMD echo $%s`, envKey, envVal, envKey, envKey, envKey, envKey) CMD echo $%s`, envKey, envVal, envKey, envKey, envKey, envKey)
result := buildImage(imgName, result := buildImage(imgName,
cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envValOveride)), cli.WithFlags("--build-arg", fmt.Sprintf("%s=%s", envKey, envValOverride)),
build.WithDockerfile(dockerfile), build.WithDockerfile(dockerfile),
) )
result.Assert(c, icmd.Success) result.Assert(c, icmd.Success)
if strings.Count(result.Combined(), envValOveride) != 1 { if strings.Count(result.Combined(), envValOverride) != 1 {
c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOveride) c.Fatalf("failed to access environment variable in output: %q expected: %q", result.Combined(), envValOverride)
} }
containerName := "bldargCont" containerName := "bldargCont"
if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOveride) { if out, _ := dockerCmd(c, "run", "--name", containerName, imgName); !strings.Contains(out, envValOverride) {
c.Fatalf("run produced invalid output: %q, expected %q", out, envValOveride) c.Fatalf("run produced invalid output: %q, expected %q", out, envValOverride)
} }
} }
@@ -4824,7 +4824,7 @@ func (s *DockerSuite) TestBuildBuildTimeArgEmptyValVariants(c *check.C) {
buildImageSuccessfully(c, imgName, build.WithDockerfile(dockerfile)) buildImageSuccessfully(c, imgName, build.WithDockerfile(dockerfile))
} }
func (s *DockerSuite) TestBuildBuildTimeArgDefintionWithNoEnvInjection(c *check.C) { func (s *DockerSuite) TestBuildBuildTimeArgDefinitionWithNoEnvInjection(c *check.C) {
imgName := "bldargtest" imgName := "bldargtest"
envKey := "foo" envKey := "foo"
dockerfile := fmt.Sprintf(`FROM busybox dockerfile := fmt.Sprintf(`FROM busybox
@@ -5785,7 +5785,7 @@ func (s *DockerSuite) TestBuildWithExtraHostInvalidFormat(c *check.C) {
buildFlag string buildFlag string
}{ }{
{"extra_host_missing_ip", dockerfile, "--add-host=foo"}, {"extra_host_missing_ip", dockerfile, "--add-host=foo"},
{"extra_host_missing_ip_with_delimeter", dockerfile, "--add-host=foo:"}, {"extra_host_missing_ip_with_delimiter", dockerfile, "--add-host=foo:"},
{"extra_host_missing_hostname", dockerfile, "--add-host=:127.0.0.1"}, {"extra_host_missing_hostname", dockerfile, "--add-host=:127.0.0.1"},
{"extra_host_invalid_ipv4", dockerfile, "--add-host=foo:101.10.2"}, {"extra_host_invalid_ipv4", dockerfile, "--add-host=foo:101.10.2"},
{"extra_host_invalid_ipv6", dockerfile, "--add-host=foo:2001::1::3F"}, {"extra_host_invalid_ipv6", dockerfile, "--add-host=foo:2001::1::3F"},


@@ -54,9 +54,9 @@ func (s *DockerSuite) TestCommitPausedContainer(c *check.C) {
} }
func (s *DockerSuite) TestCommitNewFile(c *check.C) { func (s *DockerSuite) TestCommitNewFile(c *check.C) {
dockerCmd(c, "run", "--name", "commiter", "busybox", "/bin/sh", "-c", "echo koye > /foo") dockerCmd(c, "run", "--name", "committer", "busybox", "/bin/sh", "-c", "echo koye > /foo")
imageID, _ := dockerCmd(c, "commit", "commiter") imageID, _ := dockerCmd(c, "commit", "committer")
imageID = strings.TrimSpace(imageID) imageID = strings.TrimSpace(imageID)
out, _ := dockerCmd(c, "run", imageID, "cat", "/foo") out, _ := dockerCmd(c, "run", imageID, "cat", "/foo")


@@ -969,7 +969,7 @@ func (s *DockerDaemonSuite) TestDaemonUlimitDefaults(c *check.C) {
c.Fatalf("expected `ulimit -n` to be `42`, got: %s", nofile) c.Fatalf("expected `ulimit -n` to be `42`, got: %s", nofile)
} }
if nproc != "2048" { if nproc != "2048" {
c.Fatalf("exepcted `ulimit -p` to be 2048, got: %s", nproc) c.Fatalf("expected `ulimit -p` to be 2048, got: %s", nproc)
} }
// Now restart daemon with a new default // Now restart daemon with a new default
@@ -991,7 +991,7 @@ func (s *DockerDaemonSuite) TestDaemonUlimitDefaults(c *check.C) {
c.Fatalf("expected `ulimit -n` to be `43`, got: %s", nofile) c.Fatalf("expected `ulimit -n` to be `43`, got: %s", nofile)
} }
if nproc != "2048" { if nproc != "2048" {
c.Fatalf("exepcted `ulimit -p` to be 2048, got: %s", nproc) c.Fatalf("expected `ulimit -p` to be 2048, got: %s", nproc)
} }
} }
@@ -1412,7 +1412,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartWithSocketAsVolume(c *check.C) {
} }
// os.Kill should kill daemon ungracefully, leaving behind container mounts. // os.Kill should kill daemon ungracefully, leaving behind container mounts.
// A subsequent daemon restart shoud clean up said mounts. // A subsequent daemon restart should clean up said mounts.
func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonAndContainerKill(c *check.C) { func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonAndContainerKill(c *check.C) {
d := daemon.New(c, dockerBinary, dockerdBinary, daemon.Config{ d := daemon.New(c, dockerBinary, dockerdBinary, daemon.Config{
Experimental: testEnv.ExperimentalDaemon(), Experimental: testEnv.ExperimentalDaemon(),


@@ -111,7 +111,7 @@ func (s *DockerExternalGraphdriverSuite) setUpPlugin(c *check.C, name string, ex
} }
respond := func(w http.ResponseWriter, data interface{}) { respond := func(w http.ResponseWriter, data interface{}) {
w.Header().Set("Content-Type", "appplication/vnd.docker.plugins.v1+json") w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
switch t := data.(type) { switch t := data.(type) {
case error: case error:
fmt.Fprintln(w, fmt.Sprintf(`{"Err": %q}`, t.Error())) fmt.Fprintln(w, fmt.Sprintf(`{"Err": %q}`, t.Error()))


@@ -16,7 +16,7 @@ func (s *DockerSuite) TestLoginWithoutTTY(c *check.C) {
// run the command and block until it's done // run the command and block until it's done
err := cmd.Run() err := cmd.Run()
c.Assert(err, checker.NotNil) //"Expected non nil err when loginning in & TTY not available" c.Assert(err, checker.NotNil) //"Expected non nil err when logging in & TTY not available"
} }
func (s *DockerRegistryAuthHtpasswdSuite) TestLoginToPrivateRegistry(c *check.C) { func (s *DockerRegistryAuthHtpasswdSuite) TestLoginToPrivateRegistry(c *check.C) {


@@ -1151,7 +1151,7 @@ func (s *DockerNetworkSuite) TestDockerNetworkHostModeUngracefulDaemonRestart(c
out, err := s.d.Cmd("run", "-d", "--name", cName, "--net=host", "--restart=always", "busybox", "top") out, err := s.d.Cmd("run", "-d", "--name", cName, "--net=host", "--restart=always", "busybox", "top")
c.Assert(err, checker.IsNil, check.Commentf(out)) c.Assert(err, checker.IsNil, check.Commentf(out))
// verfiy container has finished starting before killing daemon // verify container has finished starting before killing daemon
err = s.d.WaitRun(cName) err = s.d.WaitRun(cName)
c.Assert(err, checker.IsNil) c.Assert(err, checker.IsNil)
} }


@@ -475,6 +475,6 @@ func (s *DockerSuite) TestPluginMetricsCollector(c *check.C) {
b, err := ioutil.ReadAll(resp.Body) b, err := ioutil.ReadAll(resp.Body)
c.Assert(err, checker.IsNil) c.Assert(err, checker.IsNil)
// check that a known metric is there... don't epect this metric to change over time.. probably safe // check that a known metric is there... don't expect this metric to change over time.. probably safe
c.Assert(string(b), checker.Contains, "container_actions") c.Assert(string(b), checker.Contains, "container_actions")
} }


@@ -746,7 +746,7 @@ func (s *DockerSuite) TestPsShowMounts(c *check.C) {
fields = strings.Fields(lines[1]) fields = strings.Fields(lines[1])
c.Assert(fields, checker.HasLen, 2) c.Assert(fields, checker.HasLen, 2)
annonymounsVolumeID := fields[1] anonymousVolumeID := fields[1]
fields = strings.Fields(lines[2]) fields = strings.Fields(lines[2])
c.Assert(fields[1], checker.Equals, "ps-volume-test") c.Assert(fields[1], checker.Equals, "ps-volume-test")
@@ -771,7 +771,7 @@ func (s *DockerSuite) TestPsShowMounts(c *check.C) {
c.Assert(lines, checker.HasLen, 2) c.Assert(lines, checker.HasLen, 2)
fields = strings.Fields(lines[0]) fields = strings.Fields(lines[0])
c.Assert(fields[1], checker.Equals, annonymounsVolumeID) c.Assert(fields[1], checker.Equals, anonymousVolumeID)
fields = strings.Fields(lines[1]) fields = strings.Fields(lines[1])
c.Assert(fields[1], checker.Equals, "ps-volume-test") c.Assert(fields[1], checker.Equals, "ps-volume-test")


@@ -212,7 +212,7 @@ func (s *DockerSwarmSuite) TestServiceLogsTaskLogs(c *check.C) {
fmt.Sprintf("--replicas=%v", replicas), fmt.Sprintf("--replicas=%v", replicas),
// which has this the task id as an environment variable templated in // which has this the task id as an environment variable templated in
"--env", "TASK={{.Task.ID}}", "--env", "TASK={{.Task.ID}}",
// and runs this command to print exaclty 6 logs lines // and runs this command to print exactly 6 logs lines
"busybox", "sh", "-c", "for line in $(seq 0 5); do echo $TASK log test $line; done; sleep 100000", "busybox", "sh", "-c", "for line in $(seq 0 5); do echo $TASK log test $line; done; sleep 100000",
)) ))
result.Assert(c, icmd.Expected{}) result.Assert(c, icmd.Expected{})


@@ -1887,7 +1887,7 @@ func (s *DockerSwarmSuite) TestNetworkInspectWithDuplicateNames(c *check.C) {
out, err = d.Cmd("network", "rm", n2.ID) out, err = d.Cmd("network", "rm", n2.ID)
c.Assert(err, checker.IsNil, check.Commentf(out)) c.Assert(err, checker.IsNil, check.Commentf(out))
// Dupliates with name but with different driver // Duplicates with name but with different driver
networkCreateRequest.NetworkCreate.Driver = "overlay" networkCreateRequest.NetworkCreate.Driver = "overlay"
status, body, err = d.SockRequest("POST", "/networks/create", networkCreateRequest) status, body, err = d.SockRequest("POST", "/networks/create", networkCreateRequest)


@@ -34,7 +34,7 @@ func (s *DockerSuite) TestVolumeCLICreate(c *check.C) {
func (s *DockerSuite) TestVolumeCLIInspect(c *check.C) { func (s *DockerSuite) TestVolumeCLIInspect(c *check.C) {
c.Assert( c.Assert(
exec.Command(dockerBinary, "volume", "inspect", "doesntexist").Run(), exec.Command(dockerBinary, "volume", "inspect", "doesnotexist").Run(),
check.Not(check.IsNil), check.Not(check.IsNil),
check.Commentf("volume inspect should error on non-existent volume"), check.Commentf("volume inspect should error on non-existent volume"),
) )
@@ -54,10 +54,10 @@ func (s *DockerSuite) TestVolumeCLIInspectMulti(c *check.C) {
dockerCmd(c, "volume", "create", "test2") dockerCmd(c, "volume", "create", "test2")
dockerCmd(c, "volume", "create", "test3") dockerCmd(c, "volume", "create", "test3")
result := dockerCmdWithResult("volume", "inspect", "--format={{ .Name }}", "test1", "test2", "doesntexist", "test3") result := dockerCmdWithResult("volume", "inspect", "--format={{ .Name }}", "test1", "test2", "doesnotexist", "test3")
c.Assert(result, icmd.Matches, icmd.Expected{ c.Assert(result, icmd.Matches, icmd.Expected{
ExitCode: 1, ExitCode: 1,
Err: "No such volume: doesntexist", Err: "No such volume: doesnotexist",
}) })
out := result.Stdout() out := result.Stdout()
@@ -185,7 +185,7 @@ func (s *DockerSuite) TestVolumeCLILsFilterDangling(c *check.C) {
out, _ = dockerCmd(c, "volume", "ls", "--filter", "name=testisin") out, _ = dockerCmd(c, "volume", "ls", "--filter", "name=testisin")
c.Assert(out, check.Not(checker.Contains), "testnotinuse1\n", check.Commentf("expected volume 'testnotinuse1' in output")) c.Assert(out, check.Not(checker.Contains), "testnotinuse1\n", check.Commentf("expected volume 'testnotinuse1' in output"))
c.Assert(out, checker.Contains, "testisinuse1\n", check.Commentf("execpeted volume 'testisinuse1' in output")) c.Assert(out, checker.Contains, "testisinuse1\n", check.Commentf("expected volume 'testisinuse1' in output"))
c.Assert(out, checker.Contains, "testisinuse2\n", check.Commentf("expected volume 'testisinuse2' in output")) c.Assert(out, checker.Contains, "testisinuse2\n", check.Commentf("expected volume 'testisinuse2' in output"))
} }
@@ -234,7 +234,7 @@ func (s *DockerSuite) TestVolumeCLIRm(c *check.C) {
dockerCmd(c, "volume", "rm", volumeID) dockerCmd(c, "volume", "rm", volumeID)
c.Assert( c.Assert(
exec.Command("volume", "rm", "doesntexist").Run(), exec.Command("volume", "rm", "doesnotexist").Run(),
check.Not(check.IsNil), check.Not(check.IsNil),
check.Commentf("volume rm should fail with non-existent volume"), check.Commentf("volume rm should fail with non-existent volume"),
) )


@@ -155,7 +155,7 @@ func (s *DockerNetworkSuite) TestDockerNetworkMacvlanMultiSubnet(c *check.C) {
_, _, err := dockerCmdWithError("exec", "second", "ping", "-c", "1", strings.TrimSpace(ip)) _, _, err := dockerCmdWithError("exec", "second", "ping", "-c", "1", strings.TrimSpace(ip))
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
// verify ipv6 connectivity to the explicit --ipv6 address second to first // verify ipv6 connectivity to the explicit --ipv6 address second to first
c.Skip("Temporarily skipping while invesitigating sporadic v6 CI issues") c.Skip("Temporarily skipping while investigating sporadic v6 CI issues")
_, _, err = dockerCmdWithError("exec", "second", "ping6", "-c", "1", strings.TrimSpace(ip6)) _, _, err = dockerCmdWithError("exec", "second", "ping6", "-c", "1", strings.TrimSpace(ip6))
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)


@@ -22,7 +22,7 @@ func NewIPOpt(ref *net.IP, defaultVal string) *IPOpt {
} }
// Set sets an IPv4 or IPv6 address from a given string. If the given // Set sets an IPv4 or IPv6 address from a given string. If the given
// string is not parseable as an IP address it returns an error. // string is not parsable as an IP address it returns an error.
func (o *IPOpt) Set(val string) error { func (o *IPOpt) Set(val string) error {
ip := net.ParseIP(val) ip := net.ParseIP(val)
if ip == nil { if ip == nil {


@@ -157,7 +157,7 @@ func TestValidateDNSSearch(t *testing.T) {
`foo.bar-.baz`, `foo.bar-.baz`,
`foo.-bar`, `foo.-bar`,
`foo.-bar.baz`, `foo.-bar.baz`,
`foo.bar.baz.this.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbe`, `foo.bar.baz.this.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbethis.should.fail.on.long.name.because.it.is.longer.thanitshouldbe`,
} }
for _, domain := range valid { for _, domain := range valid {


@@ -180,7 +180,7 @@ func DecompressStream(archive io.Reader) (io.ReadCloser, error) {
} }
} }
// CompressStream compresseses the dest with specified compression algorithm. // CompressStream compresses the dest with specified compression algorithm.
func CompressStream(dest io.Writer, compression Compression) (io.WriteCloser, error) { func CompressStream(dest io.Writer, compression Compression) (io.WriteCloser, error) {
p := pools.BufioWriter32KPool p := pools.BufioWriter32KPool
buf := p.Get(dest) buf := p.Get(dest)


@@ -102,10 +102,10 @@ func createSampleDir(t *testing.T, root string) {
} }
func TestChangeString(t *testing.T) { func TestChangeString(t *testing.T) {
modifiyChange := Change{"change", ChangeModify} modifyChange := Change{"change", ChangeModify}
toString := modifiyChange.String() toString := modifyChange.String()
if toString != "C change" { if toString != "C change" {
t.Fatalf("String() of a change with ChangeModifiy Kind should have been %s but was %s", "C change", toString) t.Fatalf("String() of a change with ChangeModify Kind should have been %s but was %s", "C change", toString)
} }
addChange := Change{"change", ChangeAdd} addChange := Change{"change", ChangeAdd}
toString = addChange.String() toString = addChange.String()


@@ -99,7 +99,7 @@ func TestAuthZResponsePlugin(t *testing.T) {
request := Request{ request := Request{
User: "user", User: "user",
RequestURI: "someting.com/auth", RequestURI: "something.com/auth",
RequestBody: []byte("sample body"), RequestBody: []byte("sample body"),
} }
server.replayResponse = Response{ server.replayResponse = Response{


@@ -373,7 +373,7 @@ func RemoveDeviceDeferred(name string) error {
// semaphores created in `task.setCookie` will be cleaned up in `UdevWait`. // semaphores created in `task.setCookie` will be cleaned up in `UdevWait`.
// So these two function call must come in pairs, otherwise semaphores will // So these two function call must come in pairs, otherwise semaphores will
// be leaked, and the limit of number of semaphores defined in `/proc/sys/kernel/sem` // be leaked, and the limit of number of semaphores defined in `/proc/sys/kernel/sem`
// will be reached, which will eventually make all follwing calls to 'task.SetCookie' // will be reached, which will eventually make all following calls to 'task.SetCookie'
// fail. // fail.
// this call will not wait for the deferred removal's final executing, since no // this call will not wait for the deferred removal's final executing, since no
// udev event will be generated, and the semaphore's value will not be incremented // udev event will be generated, and the semaphore's value will not be incremented


@@ -2,7 +2,7 @@ package filenotify
import "github.com/fsnotify/fsnotify" import "github.com/fsnotify/fsnotify"
// fsNotifyWatcher wraps the fsnotify package to satisfy the FileNotifer interface // fsNotifyWatcher wraps the fsnotify package to satisfy the FileNotifier interface
type fsNotifyWatcher struct { type fsNotifyWatcher struct {
*fsnotify.Watcher *fsnotify.Watcher
} }


@@ -136,7 +136,7 @@ func TestParseWithMultipleFuncs(t *testing.T) {
} }
} }
func TestParseWithUnamedReturn(t *testing.T) { func TestParseWithUnnamedReturn(t *testing.T) {
_, err := Parse(testFixture, "Fooer4") _, err := Parse(testFixture, "Fooer4")
if !strings.HasSuffix(err.Error(), errBadReturn.Error()) { if !strings.HasSuffix(err.Error(), errBadReturn.Error()) {
t.Fatalf("expected ErrBadReturn, got %v", err) t.Fatalf("expected ErrBadReturn, got %v", err)


@@ -60,7 +60,7 @@ func TestGetNames(t *testing.T) {
} }
if !reflect.DeepEqual(names, names2) { if !reflect.DeepEqual(names, names2) {
t.Fatalf("Exepected: %v, Got: %v", names, names2) t.Fatalf("Expected: %v, Got: %v", names, names2)
} }
} }


@@ -16,7 +16,7 @@ func TestNewStdWriter(t *testing.T) {
} }
} }
func TestWriteWithUnitializedStdWriter(t *testing.T) { func TestWriteWithUninitializedStdWriter(t *testing.T) {
writer := stdWriter{ writer := stdWriter{
Writer: nil, Writer: nil,
prefix: byte(Stdout), prefix: byte(Stdout),


@@ -40,7 +40,7 @@ func FollowSymlinkInScope(path, root string) (string, error) {
// //
// Example: // Example:
// If /foo/bar -> /outside, // If /foo/bar -> /outside,
// FollowSymlinkInScope("/foo/bar", "/foo") == "/foo/outside" instead of "/oustide" // FollowSymlinkInScope("/foo/bar", "/foo") == "/foo/outside" instead of "/outside"
// //
// IMPORTANT: it is the caller's responsibility to call evalSymlinksInScope *after* relevant symlinks // IMPORTANT: it is the caller's responsibility to call evalSymlinksInScope *after* relevant symlinks
// are created and not to create subsequently, additional symlinks that could potentially make a // are created and not to create subsequently, additional symlinks that could potentially make a


@@ -12,7 +12,7 @@ import (
// This is used, for example, when validating a user provided path in docker cp. // This is used, for example, when validating a user provided path in docker cp.
// If a drive letter is supplied, it must be the system drive. The drive letter // If a drive letter is supplied, it must be the system drive. The drive letter
// is always removed. Also, it translates it to OS semantics (IOW / to \). We // is always removed. Also, it translates it to OS semantics (IOW / to \). We
// need the path in this syntax so that it can ultimately be contatenated with // need the path in this syntax so that it can ultimately be concatenated with
// a Windows long-path which doesn't support drive-letters. Examples: // a Windows long-path which doesn't support drive-letters. Examples:
// C: --> Fail // C: --> Fail
// C:\ --> \ // C:\ --> \


@@ -20,7 +20,7 @@ import (
// These types of errors do not need to be returned since it's ok for the dir to // These types of errors do not need to be returned since it's ok for the dir to
// be gone we can just retry the remove operation. // be gone we can just retry the remove operation.
// //
// This should not return a `os.ErrNotExist` kind of error under any cirucmstances // This should not return a `os.ErrNotExist` kind of error under any circumstances
func EnsureRemoveAll(dir string) error { func EnsureRemoveAll(dir string) error {
notExistErr := make(map[string]bool) notExistErr := make(map[string]bool)


@@ -30,7 +30,7 @@ var basicFunctions = template.FuncMap{
// HeaderFunctions are used to created headers of a table. // HeaderFunctions are used to created headers of a table.
// This is a replacement of basicFunctions for header generation // This is a replacement of basicFunctions for header generation
// because we want the header to remain intact. // because we want the header to remain intact.
// Some functions like `split` are irrevelant so not added. // Some functions like `split` are irrelevant so not added.
var HeaderFunctions = template.FuncMap{ var HeaderFunctions = template.FuncMap{
"json": func(v string) string { "json": func(v string) string {
return v return v


@@ -53,7 +53,7 @@ type Result struct {
} }
// Assert compares the Result against the Expected struct, and fails the test if // Assert compares the Result against the Expected struct, and fails the test if
// any of the expcetations are not met. // any of the expectations are not met.
func (r *Result) Assert(t testingT, exp Expected) *Result { func (r *Result) Assert(t testingT, exp Expected) *Result {
err := r.Compare(exp) err := r.Compare(exp)
if err == nil { if err == nil {


@@ -292,7 +292,7 @@ func (pm *Manager) save(p *v2.Plugin) error {
return nil return nil
} }
// GC cleans up unrefrenced blobs. This is recommended to run in a goroutine // GC cleans up unreferenced blobs. This is recommended to run in a goroutine
func (pm *Manager) GC() { func (pm *Manager) GC() {
pm.muGC.Lock() pm.muGC.Lock()
defer pm.muGC.Unlock() defer pm.muGC.Unlock()


@@ -68,7 +68,7 @@ func TestIsSettable(t *testing.T) {
     }
 }
 
-func TestUpdateSettinsEnv(t *testing.T) {
+func TestUpdateSettingsEnv(t *testing.T) {
     contexts := []struct {
         env []string
         set settable


@@ -221,7 +221,7 @@ func (store *store) Delete(ref reference.Named) (bool, error) {
 func (store *store) Get(ref reference.Named) (digest.Digest, error) {
     if canonical, ok := ref.(reference.Canonical); ok {
         // If reference contains both tag and digest, only
-        // lookup by digest as it takes precendent over
+        // lookup by digest as it takes precedence over
         // tag, until tag/digest combos are stored.
         if _, ok := ref.(reference.Tagged); ok {
             var err error


@@ -252,7 +252,7 @@ skip:
     return nil
 }
 
-// allowNondistributableArtifacts returns true if the provided hostname is part of the list of regsitries
+// allowNondistributableArtifacts returns true if the provided hostname is part of the list of registries
 // that allow push of nondistributable artifacts.
 //
 // The list can contain elements with CIDR notation to specify a whole subnet. If the subnet contains an IP


@@ -175,7 +175,7 @@ func (e *V1Endpoint) Ping() (PingResult, error) {
         Standalone: true,
     }
     if err := json.Unmarshal(jsonString, &info); err != nil {
-        logrus.Debugf("Error unmarshalling the _ping PingResult: %s", err)
+        logrus.Debugf("Error unmarshaling the _ping PingResult: %s", err)
         // don't stop here. Just assume sane defaults
     }
     if hdr := resp.Header.Get("X-Docker-Registry-Version"); hdr != "" {


@@ -9,7 +9,7 @@ During this meeting, we are talking about the [tasks](https://github.com/moby/mo
 
 ### The CLI split
 
-The Docker CLI was succesfully moved to [https://github.com/docker/cli](https://github.com/docker/cli) last week thanks to @tiborvass
+The Docker CLI was successfully moved to [https://github.com/docker/cli](https://github.com/docker/cli) last week thanks to @tiborvass
 The Docker CLI is now compiled from the [Dockerfile](https://github.com/moby/moby/blob/a762ceace4e8c1c7ce4fb582789af9d8074be3e1/Dockerfile#L248)
 
 ### Mailing list


@@ -27,7 +27,7 @@ breaking up / removing existing packages that likely are not good candidates to
 
 With the removal of the CLI from the moby repository, new pull requests will have to be tested using API tests instead
 of using the CLI. Discussion took place whether or not these tests should use the API `client` package, or be completely
-independend, and make raw HTTP calls.
+independent, and make raw HTTP calls.
 
 A topic was created on the forum to discuss options: [evolution of testing](https://forums.mobyproject.org/t/evolution-of-testing-moby/38)


@@ -102,7 +102,7 @@ func (a *volumeDriverAdapter) getCapabilities() volume.Capability {
     if err != nil {
         // `GetCapabilities` is a not a required endpoint.
         // On error assume it's a local-only driver
-        logrus.Warnf("Volume driver %s returned an error while trying to query its capabilities, using default capabilties: %v", a.name, err)
+        logrus.Warnf("Volume driver %s returned an error while trying to query its capabilities, using default capabilities: %v", a.name, err)
         return volume.Capability{Scope: volume.LocalScope}
     }


@@ -25,7 +25,7 @@ func (NoopVolume) Mount(_ string) (string, error) { return "noop", nil }
 // Unmount unmounts the volume from the container
 func (NoopVolume) Unmount(_ string) error { return nil }
 
-// Status proivdes low-level details about the volume
+// Status provides low-level details about the volume
 func (NoopVolume) Status() map[string]interface{} { return nil }
 
 // CreatedAt provides the time the volume (directory) was created at
@@ -57,7 +57,7 @@ func (FakeVolume) Mount(_ string) (string, error) { return "fake", nil }
 // Unmount unmounts the volume from the container
 func (FakeVolume) Unmount(_ string) error { return nil }
 
-// Status proivdes low-level details about the volume
+// Status provides low-level details about the volume
 func (FakeVolume) Status() map[string]interface{} { return nil }
 
 // CreatedAt provides the time the volume (directory) was created at


@@ -125,7 +125,7 @@ type MountPoint struct {
     Spec mounttypes.Mount
 
     // Track usage of this mountpoint
-    // Specicially needed for containers which are running and calls to `docker cp`
+    // Specifically needed for containers which are running and calls to `docker cp`
     // because both these actions require mounting the volumes.
     active int
 }


@@ -26,7 +26,7 @@ func ConvertTmpfsOptions(opt *mounttypes.TmpfsOptions, readOnly bool) (string, e
     // okay, since API is that way anyways.
     // we do this by finding the suffix that divides evenly into the
-    // value, returing the value itself, with no suffix, if it fails.
+    // value, returning the value itself, with no suffix, if it fails.
     //
     // For the most part, we don't enforce any semantic to this values.
     // The operating system will usually align this and enforce minimum