Commit graph

7 commits

Yorick Peterse
057eb824b5 Randomize metrics sample intervals
Sampling data at a fixed interval means we can potentially miss data
from events occurring between sampling intervals. For example, say we
sample data every 15 seconds but Unicorn workers get killed after 10
seconds. In this particular case it's possible to miss interesting data
as the sampler never actually gets to submit its data.

To work around this (at least for the most part) the sampling interval
is randomized as follows:

1. Take the user-specified sampling interval (15 seconds by default)
2. Divide it by 2 (referred to as "half" below)
3. Generate a range (using a step of 0.1) from -"half" to "half"
4. Every time the sampler goes to sleep we'll grab the user-provided
   interval and add a randomly chosen "adjustment" to it, while making
   sure we don't pick the same value twice in a row.

For a specified interval of 15 seconds this means the actual intervals
can be anywhere between 7.5 and 22.5 seconds, though the same interval
is never used twice in a row.
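
A minimal Ruby sketch of this scheme (illustrative only, not the
actual implementation):

    # Pool of possible adjustments from -"half" to "half" in steps of
    # 0.1, with a new adjustment picked for every sleep and the same
    # value never used twice in a row.
    class RandomizedInterval
      def initialize(interval = 15)
        @interval = interval

        half = interval / 2.0

        @adjustments = (-half).step(half, 0.1).to_a
        @previous = nil
      end

      # Returns the amount of time to sleep for.
      def sleep_interval
        loop do
          adjustment = @adjustments.sample

          next if adjustment == @previous

          @previous = adjustment

          return @interval + adjustment
        end
      end
    end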

The rationale behind this change is that on dev.gitlab.org I'm sometimes
seeing certain Gitlab::Git/Rugged objects being retained, but only for a
few minutes every 24 hours. Knowing the GitLab codebase and how much
memory it uses/leaks, I suspect we're missing data due to workers
getting terminated before the sampler can write its data to InfluxDB.
2016-01-13 12:57:46 +01:00
Yorick Peterse
2367160015 Make the metrics sampler interval configurable 2016-01-13 12:29:48 +01:00
Yorick Peterse
2ea464bb27 Use separate series for Rails/Sidekiq sample stats
This removes the need for any tags to differentiate between Sidekiq and
Rails statistics while still being able to separate the two.
2016-01-04 12:45:31 +01:00
Yorick Peterse
620e7bb3d6 Write to InfluxDB directly via UDP
This removes the need for Sidekiq and any overhead/problems introduced
by TCP. There are a few things to take into account:

1. When writing data to InfluxDB you may still get an error if the
   server becomes unavailable during the write. Because of this we're
   catching all exceptions and ignoring them (for now).
2. Writing via UDP apparently requires the timestamp to be in
   nanoseconds. Without this, data isn't written properly.
3. Due to the restrictions on UDP buffer sizes we're writing metrics one
   by one, instead of writing all of them at once.
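
A rough sketch of this approach in Ruby (the method and its arguments
are illustrative, not the actual implementation):

    require 'socket'

    # Writes a single data point per datagram using the InfluxDB line
    # protocol, with the timestamp in nanoseconds. All errors are
    # swallowed as writes are best-effort.
    def write_point(host, port, series, value)
      timestamp = (Time.now.to_f * 1_000_000_000).to_i
      line = "#{series} value=#{value} #{timestamp}"

      UDPSocket.new.send(line, 0, host, port)
    rescue StandardError
      # InfluxDB being unavailable shouldn't take down the process.
    end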
2015-12-29 14:53:45 +01:00
Yorick Peterse
f181f05e8a Track object counts using the "allocations" Gem
This allows us to track the counts of actual classes instead of
"T_XXX" nodes. This is only enabled on CRuby as it uses CRuby-specific
APIs.
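
For example (assuming the Gem's Allocations.start /
Allocations.to_hash API; the class being counted is just an
illustration):

    require 'allocations'

    class Foo; end

    Allocations.start

    10.times { Foo.new }

    # A Hash mapping classes to allocation counts, e.g. { Foo => 10 }
    # instead of { :T_OBJECT => 10 }.
    counts = Allocations.to_hash

    Allocations.stop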
2015-12-17 17:25:48 +01:00
Yorick Peterse
09a311568a Track object count types as tags 2015-12-17 17:25:48 +01:00
Yorick Peterse
141e946c3d Storing of application metrics in InfluxDB
This adds the ability to write application metrics (e.g. SQL timings) to
InfluxDB. These metrics can in turn be visualized using Grafana, or
really anything else that can read from InfluxDB. They can be used to
track application performance over time, across different Ruby
versions, different GitLab versions, etc.

== Transaction Metrics

Currently the following is tracked on a per transaction basis (a
transaction is a Rails request or a single Sidekiq job):

* Timings per query along with the raw (obfuscated) SQL and information
  about what file the query originated from.
* Timings per view along with the path of the view and information about
  what file triggered the rendering process.
* The duration of a request itself along with the controller/worker
  class and method name.
* The duration of any instrumented method calls (more below).

== Sampled Metrics

Certain metrics can't be directly associated with a transaction. For
example, a process' total memory usage is unrelated to any running
transactions. While a transaction can result in memory usage going up,
there's no accurate way to determine which transaction is to blame;
this becomes especially problematic in multi-threaded environments.

To solve this problem there's a separate thread that takes samples at a
fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
currently tracks the following:

* The process' total memory usage.
* The number of file descriptors opened by the process.
* The number of Ruby objects (using ObjectSpace.count_objects).
* GC statistics such as timings, heap slots, etc.

The default/current interval is 15 seconds; any smaller interval might
put too much pressure on InfluxDB (especially when running dozens of
processes).
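
In rough terms the sampler boils down to a loop such as the following
(an illustrative sketch, not the actual Gitlab::Metrics::Sampler
code):

    # A background thread that samples process metrics at a fixed
    # interval.
    class Sampler
      def initialize(interval = 15)
        @interval = interval
      end

      def start
        Thread.new do
          loop do
            sample
            sleep(@interval)
          end
        end
      end

      def sample
        # Memory usage, open file descriptors, object counts, GC
        # statistics, etc.
        objects = ObjectSpace.count_objects
        gc_stats = GC.stat

        # ... write the samples to InfluxDB ...
      end
    end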

== Method Instrumentation

While not yet used anywhere, methods can be instrumented to track how
long they take to run. Unlike the likes of New Relic this doesn't
require modifying the source code (e.g. including modules); it all
happens from the outside. For example, to track `User.by_login` we'd
add the following code somewhere in an initializer:

    Gitlab::Metrics::Instrumentation.
      instrument_method(User, :by_login)

To instrument an instance method instead:

    Gitlab::Metrics::Instrumentation.
      instrument_instance_method(User, :save)
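
Conceptually the instrumentation wraps a method from the outside,
roughly like the following (a sketch, not the actual
Gitlab::Metrics::Instrumentation code):

    # Aliases the original method, then redefines it so the call is
    # timed and the duration can be recorded for the current
    # transaction.
    def self.instrument_instance_method(mod, name)
      alias_name = :"_original_#{name}"

      mod.send(:alias_method, alias_name, name)

      mod.send(:define_method, name) do |*args, &block|
        start = Time.now
        result = send(alias_name, *args, &block)

        # ... record (Time.now - start) for the current transaction ...

        result
      end
    end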

Instrumentation for either all public model methods or a few crucial
ones will be added in the near future; I simply haven't gotten to it
yet.

== Configuration

By default metrics are disabled. This means users don't have to bother
setting anything up if they don't want to. Metrics can be enabled by
editing one's gitlab.yml configuration file (see
config/gitlab.yml.example for example settings).

== Writing Data To InfluxDB

Because InfluxDB is still a fairly young product I expect the worst:
data loss, unexpected reboots, the database not responding, you name
it. Because of this, data is _not_ written to InfluxDB directly;
instead it's queued and processed by Sidekiq. This ensures that users
won't notice anything when InfluxDB is having trouble.

The metrics worker can be started in a standalone manner as follows:

    bundle exec sidekiq -q metrics

The corresponding class is called MetricsWorker.
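
The rough shape of this worker is as follows (a sketch, not the exact
implementation):

    # Takes queued metrics from the "metrics" queue and writes them to
    # InfluxDB.
    class MetricsWorker
      include Sidekiq::Worker

      sidekiq_options queue: :metrics

      def perform(points)
        # ... write the queued data points to InfluxDB ...
      end
    end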
2015-12-17 17:25:48 +01:00