Commit Graph

23 Commits

Author SHA1 Message Date
Daniel Nephin 231c5cbd50 Remove unused JSONLog marshaling
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-09-25 16:07:25 -04:00
Daniel Nephin f7f101d57e Add gosimple linter
Update gometalinter

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-09-12 12:09:59 -04:00
Derek McGowan 1009e6a40b Update logrus to v1.0.1
Fixes case sensitivity issue

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2017-07-31 13:16:46 -07:00
Aaron Lehmann b642b3f21f Avoid using a map for log attributes
Having a map per log entry seemed heavier than necessary. These
attributes end up being sorted and serialized, so storing them in a map
doesn't add anything (there's no random access element). In SwarmKit,
they originate as a slice, so there's an unnecessary conversion to a map
and back.

This also fixes the sort comparator, which used to inefficiently split
the string on each comparison.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2017-07-18 19:01:20 -07:00
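
A minimal sketch of the idea in b642b3f21f, using invented `KeyValue` and `LogAttributes` types (the real types in the logger package may differ): attributes are kept as a slice, sorted by comparing the `Key` fields directly rather than splitting a `"key=value"` string on every comparison, then serialized.

```
package main

import (
	"fmt"
	"sort"
	"strings"
)

// KeyValue is a hypothetical stand-in for a single log attribute.
type KeyValue struct {
	Key, Value string
}

// LogAttributes stays a slice: the attributes are only ever sorted and
// serialized, so a map buys nothing and forces extra conversions.
type LogAttributes []KeyValue

// Sorted orders the attributes by key, comparing Key fields directly
// instead of splitting "key=value" strings on each comparison.
func (a LogAttributes) Sorted() LogAttributes {
	sort.Slice(a, func(i, j int) bool { return a[i].Key < a[j].Key })
	return a
}

// String serializes the attributes as "k1=v1,k2=v2".
func (a LogAttributes) String() string {
	parts := make([]string, 0, len(a))
	for _, kv := range a.Sorted() {
		parts = append(parts, kv.Key+"="+kv.Value)
	}
	return strings.Join(parts, ",")
}

func main() {
	attrs := LogAttributes{{"hello", "world"}, {"foo", "bar"}}
	fmt.Println(attrs) // foo=bar,hello=world
}
```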
Brian Goff e2209185ed Fix log readers can block writes indefinitely
Before this patch, a log reader is able to block all log writes
indefinitely (and other operations) by simply opening the log stream and
not consuming all the messages.

The reason for this is we protect the read stream from corruption by
ensuring there are no new writes while the log stream is consumed (and
caught up with the live entries).

We can get around this issue because log files are append only, so we
can limit reads to only the section of the file that was written to when
the log stream was first requested.

Now writes are only blocked until all of the files are opened, rather
than until the whole stream has been sent to the client.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2017-07-05 14:04:52 -04:00
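
A rough sketch of the append-only trick described in e2209185ed, using an invented helper rather than the daemon's actual reader: because the log file only grows, the reader can be capped at the size recorded when the stream was requested, so later writes never have to wait for a slow consumer.

```
package main

import (
	"io"
	"os"
)

// openTail returns a reader over only the bytes that existed in the
// append-only log file at the moment the log stream was requested.
// Writers can keep appending past this point without coordinating with
// the reader. (Hypothetical helper, not the real daemon code.)
func openTail(path string) (io.Reader, io.Closer, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	info, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, nil, err
	}
	// Reads are capped at the current size; anything appended later is
	// handled by the "follow" path instead of blocking writes.
	return io.NewSectionReader(f, 0, info.Size()), f, nil
}

func main() {
	rdr, closer, err := openTail("/var/log/container.log")
	if err != nil {
		return
	}
	defer closer.Close()
	io.Copy(os.Stdout, rdr)
}
```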
Victor Vieux 2445e6b99d move multireader out of /pkg
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
2017-06-01 16:22:00 -07:00
Jim Minter 4fdb17c777 Prevent ContainerLogs from hanging if container doesn't run for long
Signed-off-by: Jim Minter <jminter@redhat.com>
2017-04-20 12:27:51 +01:00
Brian Goff 7a179972ff Fix cpu spin waiting for log write events
This loop is never going to return since it never actually sets the
`err` var except on the first iteration.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2017-02-15 21:41:32 -05:00
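
A generic sketch of the failure mode described in 7a179972ff and its fix; this is illustrative only, `waitForEvent` is a made-up stand-in for waiting on a log write event, and the shadowed-variable form shown is just one common way such a loop ends up never updating its exit condition.

```
package main

import (
	"errors"
	"time"
)

var errWatcherClosed = errors.New("watcher closed")

// waitForEvent is a made-up stand-in for waiting on a log write event.
func waitForEvent() error {
	time.Sleep(10 * time.Millisecond)
	return errWatcherClosed
}

// buggyLoop never exits: the `:=` inside the body declares a new,
// shadowed err, so the err checked by the for condition stays nil and
// the loop spins on the CPU.
func buggyLoop() {
	var err error
	for err == nil {
		err := waitForEvent() // BUG: shadows the outer err
		_ = err
	}
}

// fixedLoop assigns to the same err on every iteration, so the loop
// observes each wait result and returns once an error occurs.
func fixedLoop() error {
	var err error
	for err == nil {
		err = waitForEvent()
	}
	return err
}

func main() {
	_ = fixedLoop()
}
```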
Kenfe-Mickael Laventure fb2bb3653e Close logwatcher on context cancellation
This commit addresses 2 issues:

  1. in `tailfile()`, if the `logWatcher.Msg` channel were somehow to become full and the watcher closed before space was made in it, we would get stuck there forever since we were not checking for the watcher being closed
  2. when servicing `docker logs`, if the command was cancelled we were not closing the watcher (and hence not notifying it to stop copying data)

Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
2017-01-17 14:36:13 -08:00
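
A sketch of the two fixes in fb2bb3653e, with invented type and channel names: the send on the message channel also selects on a "watcher closed" channel so a full buffer cannot block forever, and the caller closes the watcher when its context is cancelled.

```
package main

import (
	"context"
	"time"
)

// Message is a simplified stand-in for a decoded log message.
type Message struct{ Line string }

// LogWatcher is a minimal sketch; the real type carries more state.
type LogWatcher struct {
	Msg    chan *Message
	closed chan struct{}
}

func NewLogWatcher() *LogWatcher {
	return &LogWatcher{Msg: make(chan *Message, 1), closed: make(chan struct{})}
}

func (w *LogWatcher) Close() { close(w.closed) }

// send mirrors fix 1: even if w.Msg is full, a closed watcher unblocks us.
func (w *LogWatcher) send(m *Message) bool {
	select {
	case w.Msg <- m:
		return true
	case <-w.closed:
		return false
	}
}

// serveLogs mirrors fix 2: when the request context is cancelled, the
// watcher is closed so the producer stops copying data.
func serveLogs(ctx context.Context, w *LogWatcher) {
	defer w.Close()
	for {
		select {
		case <-ctx.Done():
			return
		case m := <-w.Msg:
			_ = m // write to the client here
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	w := NewLogWatcher()
	go w.send(&Message{Line: "hello"})
	serveLogs(ctx, w)
}
```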
Alexander Morozov f2614f2107 project: use vndr for vendoring
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
2016-11-03 15:31:46 -07:00
Tom Booth a69a59ffc7 Decouple removing the fileWatcher from reading
Fixes #27779

Currently `followLogs` can get into a deadlock if we receive an inotify
IN_MODIFY event while we are trying to close the `fileWatcher`. This is
due to the fact that closing the `fileWatcher` happens in the same block
as consumes events from the `fileWatcher`. We are trying to run
`fileWatcher.Close`, which is waiting for an IN_IGNORE event to come in
over inotify to confirm the watch has been removed. But, because an
IN_MODIFY event has appeared after `Close` was entered but before the
IN_IGNORE, the broadcast never comes. The IN_MODIFY cannot be consumed
as the events channel is unbuffered and the only `select` that reads
from it is busy waiting for the IN_IGNORE event.

In order to try to fix this race condition I've moved the removal of
the `fileWatcher` out to a separate goroutine that waits for a signal
to close, removes the watcher, and then signals completion back to the
selects that were waiting on the close signal.

This has introduced a `fileWatcher.Remove` in the final case, but if we
try and remove a watcher that does not exist it will just return an
error saying so. We are not doing any checking on the return of `Remove`
so this shouldn't cause any side-effects.

Signed-off-by: Tom Booth <tombooth@gmail.com>
2016-10-28 10:53:38 +01:00
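
A condensed sketch of the shape described in a69a59ffc7, using fsnotify directly for illustration (the channel names and structure are invented, not the daemon's actual `followLogs`): removing the watch runs in its own goroutine, so the select that drains events keeps running while the removal waits for inotify's confirmation, and the error from removing a watch that no longer exists is deliberately ignored.

```
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// followFile keeps consuming fsnotify events while the watch removal
// runs in its own goroutine, so a late modify event cannot deadlock
// the removal.
func followFile(watcher *fsnotify.Watcher, name string, done <-chan struct{}) {
	closeWatch := make(chan struct{})
	watchRemoved := make(chan struct{})

	// Removal is decoupled from the event-consuming select below.
	go func() {
		<-closeWatch
		if err := watcher.Remove(name); err != nil {
			// Removing a watch that does not exist just returns an
			// error saying so; we do not act on it.
			log.Printf("remove watch: %v", err)
		}
		close(watchRemoved)
	}()

	for {
		select {
		case ev := <-watcher.Events:
			if ev.Op&fsnotify.Write != 0 {
				// read the newly appended data here
			}
		case <-done:
			close(closeWatch)
			// Keep draining events until the removal has completed so a
			// pending event can never block the removal.
			for {
				select {
				case <-watcher.Events:
				case <-watchRemoved:
					return
				}
			}
		}
	}
}

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	done := make(chan struct{})
	close(done)
	followFile(w, "/tmp/example.log", done)
}
```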
Tom Booth 6314bec641 Do not directly cast io.ReadSeeker to io.Reader
`golint` had the following issue when linting this file:

```
daemon/logger/jsonfilelog/read.go:116:10: should omit type io.Reader
from declaration of var rdr; it will be inferred from the right-hand
side
```

In order to keep it happy, changing this to an indirect assignment
still maintains the same functionality.

Signed-off-by: Tom Booth <tombooth@gmail.com>
2016-10-26 17:37:48 +01:00
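
A tiny illustration of the golint complaint and the workaround described in 6314bec641; `openLog` is a placeholder standing in for whatever returns the `io.ReadSeeker` in the real reader code.

```
package main

import (
	"bytes"
	"io"
)

// openLog is a placeholder returning an io.ReadSeeker, standing in for
// the real jsonfilelog reader setup.
func openLog() io.ReadSeeker {
	return bytes.NewReader([]byte(`{"log":"hello\n"}`))
}

func main() {
	// golint flags the explicit form:
	//
	//     var rdr io.Reader = openLog()
	//
	// saying the type will be inferred from the right-hand side; but the
	// inferred type would then be io.ReadSeeker, not io.Reader. The
	// indirect assignment keeps rdr typed as io.Reader and keeps golint
	// quiet.
	var rdr io.Reader
	rdr = openLog()

	_ = rdr
}
```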
Aaron Lehmann cbd4e4d48a Merge pull request #24514 from cpuguy83/fix_log_follow_rotate
Fix issues with tailing rotated jsonlog file
2016-08-26 09:47:40 -07:00
Brian Goff 84e60a7e10 Fix issues with tailing rotated jsonlog file
Fixes a race where the log reader would get events both for an actual
rotation and from fsnotify (`fsnotify.Rename`).
This issue becomes extremely apparent when rotations are fast, for
example:

```
$ docker run -d --name test --log-opt max-size=1 --log-opt max-file=2
busybox sh -c 'while true; do echo hello; usleep 100000; done'
```

With this change the log reader for jsonlogs can handle rotations that
happen as above.

Instead of listening for both fs events AND rotation events
simultaneously, which could mean seeing 2 rotation events for a single
rotation due to channel buffering, only listen for fs events (like
`Rename`) and then wait to be notified about the rotation by the logger.
This makes sure that we don't see 2 rotations for 1, and that we don't
start trying to read until the logger is actually ready for us to read.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2016-08-15 13:57:28 -04:00
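
A loose sketch of the coordination described in 84e60a7e10, with invented channel names: the follower reacts to the fs rename event, then blocks on a notification from the logger that rotation has finished before reopening the file, so a single rotation is never acted on twice.

```
package main

import (
	"fmt"
	"os"
)

// rotateNotify is an invented stand-in for the logger telling followers
// that a rotation has completed and the new file is ready.
type rotateNotify chan struct{}

// followAcrossRotation reopens the log only after both signals: the fs
// event (e.g. fsnotify.Rename) and the logger's own rotation notice.
func followAcrossRotation(path string, renamed <-chan struct{}, rotated rotateNotify) (*os.File, error) {
	<-renamed // fs event says the file moved
	<-rotated // logger says the new file is in place and safe to read
	return os.Open(path)
}

func main() {
	renamed := make(chan struct{}, 1)
	rotated := make(rotateNotify, 1)
	renamed <- struct{}{}
	rotated <- struct{}{}

	f, err := followAcrossRotation(os.DevNull, renamed, rotated)
	if err != nil {
		fmt.Println("reopen:", err)
		return
	}
	defer f.Close()
	fmt.Println("reopened", f.Name())
}
```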
allencloud 0ead624473 add defer file.Close to avoid potential fd leak
Signed-off-by: allencloud <allen.sun@daocloud.io>
2016-08-10 08:36:09 +08:00
Stefan J. Wernli 54f11b84d2 Fixing file handle leak for "docker logs"
If "docker logs" was used on an offline container, the logger is leaked, leaving it up to the finalizer to close the file handle, which could block removal of the container.  Further, the json file logger could leak an open handle if the logs are read without follow due to an early return without a close.  This change addresses both cases.

Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
2016-06-21 18:40:30 -07:00
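
A generic sketch of the pattern behind 0ead624473 and 54f11b84d2 (the function name is made up, not the daemon's code): close the file deterministically with `defer` instead of leaving it to a finalizer, so the handle is released on every return path, including the non-follow path's early returns.

```
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLogs is a made-up stand-in for the non-follow "docker logs" path.
// The defer guarantees the file handle is released on every return,
// including early error returns, instead of waiting on a finalizer.
func readLogs(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close() // no leaked fd, no blocked container removal

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err()
}

func main() {
	if err := readLogs(os.DevNull); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```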
Brian Goff bd9d14a07b Add support for reading logs extra attrs
The jsonlog logger currently allows specifying envs and labels that
should be propagated to the log message; however, there has been no way
to read them back.

This adds a new API option to enable returning these attrs to the log
reader.

With timestamps, this looks like so:
```
2016-04-08T15:28:09.835913720Z foo=bar,hello=world hello
```

The extra attrs are comma separated before the log message but after
timestamps.

Without timestamps it looks like so:
```
foo=bar,hello=world hello
```

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2016-05-06 20:42:20 -04:00
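
A small sketch of the output format described in bd9d14a07b: the extra attrs are rendered as comma-separated `key=value` pairs after the optional timestamp and before the log line. The helper name and signature here are invented, not the daemon's actual formatter.

```
package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

// formatWithAttrs renders a line as: [timestamp ]k1=v1,k2=v2 message
// (invented helper for illustration).
func formatWithAttrs(ts time.Time, withTimestamp bool, attrs map[string]string, msg string) string {
	pairs := make([]string, 0, len(attrs))
	for k, v := range attrs {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs)

	var b strings.Builder
	if withTimestamp {
		b.WriteString(ts.UTC().Format(time.RFC3339Nano))
		b.WriteString(" ")
	}
	b.WriteString(strings.Join(pairs, ","))
	b.WriteString(" ")
	b.WriteString(msg)
	return b.String()
}

func main() {
	attrs := map[string]string{"foo": "bar", "hello": "world"}
	fmt.Println(formatWithAttrs(time.Now(), true, attrs, "hello"))
	// e.g. 2016-04-08T15:28:09.835913720Z foo=bar,hello=world hello
	fmt.Println(formatWithAttrs(time.Time{}, false, attrs, "hello"))
	// foo=bar,hello=world hello
}
```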
Brian Goff 1ae9dcf97d Fixes some issues with jsonfile write/read
This cleans up some of the use of the filepoller, which makes reading
significantly more robust and gives fewer chances to fall back to the
polling-based watcher.
In a lot of cases, if the file was being rotated while we were adding it
to the watcher, it would return an error that the file doesn't exist and
would fall back.
In some cases this fallback could be triggered multiple times even if we
were already on the fallback/poll-based watcher.

It also fixes an open file leak caused by not closing files properly on
rotate, as well as not closing files that were read via the `tail`
function until after the log reader is completed.

Prior to the above changes, it was relatively simple to cause the log
reader to error out by having quick rotations, for example:
```
$ docker run --name test --log-opt max-size=10b --log-opt max-files=10
-d busybox sh -c 'while true; do usleep 500000; echo hello; done'
$ docker logs -f test
```
After these changes I can run this forever without error.

Another fix removes 2 `os.Stat` calls when rotating files. The stat
calls are not needed since we are just calling `os.Rename` anyway, which
will in turn also just produce the same error that `Stat` would.
These `Stat` calls were also quite expensive.
Removing these stat calls also seemed to resolve an issue causing slow
memory growth on the daemon.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2016-03-21 11:05:58 -04:00
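
A sketch of the `os.Stat` removal mentioned in 1ae9dcf97d, as a generic rotation helper rather than the daemon's actual rotate code: `os.Rename` already reports a missing source file, so a pre-check with `Stat` is redundant.

```
package main

import (
	"fmt"
	"os"
)

// rotate renames base -> base.1 without stat'ing first: os.Rename
// itself returns an error wrapping ENOENT when the source does not
// exist, which is the same information the extra Stat call provided.
func rotate(base string) error {
	if err := os.Rename(base, base+".1"); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := rotate("/tmp/definitely-missing.log"); err != nil {
		fmt.Println("rotate:", err)
		return
	}
	fmt.Println("rotated (or nothing to rotate)")
}
```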
Brian Goff 91fdfdd537 Revert "use pubsub instead of filenotify to follow json logs"
This reverts commit b1594c59f5.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2016-02-23 21:43:52 -05:00
Shijiang Wei b1594c59f5 use pubsub instead of filenotify to follow json logs
An inotify event is triggered immediately when data is written to disk.
But at the time the inotify event is received, the json line might not
be fully saved to disk. If the json decoder tries to decode in such a
case, an io.ErrUnexpectedEOF will be triggered.
We used to retry several times to mitigate the io.ErrUnexpectedEOF
error, but there are still flaky tests caused by the partial log entries.

The daemon knows exactly when there are new log entries emitted. We can
use the pubsub package to notify all the log readers instead of inotify.

Signed-off-by: Shijiang Wei <mountkin@gmail.com>

try to fix broken test. will squash once tests pass

Signed-off-by: Shijiang Wei <mountkin@gmail.com>
2016-02-15 19:25:16 +08:00
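
A minimal sketch of the idea in b1594c59f5 (later reverted in 91fdfdd537), using a tiny invented publisher rather than Docker's pubsub package: the writer publishes only after a complete json line has been written, so readers are never woken up to decode a partial entry.

```
package main

import (
	"fmt"
	"sync"
)

// publisher is a tiny invented stand-in for a pubsub publisher: the log
// writer publishes a message only after the full json line has hit
// disk, so readers never see a partial entry.
type publisher struct {
	mu   sync.Mutex
	subs []chan string
}

func (p *publisher) subscribe() <-chan string {
	p.mu.Lock()
	defer p.mu.Unlock()
	ch := make(chan string, 16)
	p.subs = append(p.subs, ch)
	return ch
}

func (p *publisher) publish(line string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, ch := range p.subs {
		select {
		case ch <- line: // notify the reader of a complete entry
		default: // drop rather than block the writer
		}
	}
}

func main() {
	p := &publisher{}
	reader := p.subscribe()

	// Writer side: publish happens only after the entry is fully written.
	p.publish(`{"log":"hello\n","stream":"stdout"}`)

	fmt.Println(<-reader)
}
```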
Justas Brazauskas 927b334ebf Fix typos found across repository
Signed-off-by: Justas Brazauskas <brazauskasjustas@gmail.com>
2015-12-13 18:04:12 +02:00
daehyeok mun 086c0b4a66 Extract rotate file writer from json log driver
Signed-off-by: Daehyeok Mun <daehyeok@gmail.com>
2015-11-14 16:15:44 -07:00
Brian Goff c136a33c5b Fallback to file polling for jsonlog reader on err
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-11-05 10:05:05 -05:00