Commit graph

11 commits

Yorick Peterse
905f8d763a Reduce instrumentation overhead
This reduces the overhead of the method instrumentation code primarily
by reducing the number of method calls. There are also some other small
optimisations such as not casting timing values to Floats (there's no
particular need for this), using Symbols for method call metric names,
and reducing the number of Hash lookups for instrumented methods.

The exact impact depends on the code being executed. For example, for a
method that's only called once the difference won't be very noticeable.
However, for methods that are called many times the difference can be
more significant.

For example, the loading time of a large commit
(nrclark/dummy_project@81ebdea5df)
was reduced from around 19 seconds to around 15 seconds using these
changes.
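
As a rough illustration of the kind of changes involved (hypothetical
code, not the actual implementation):

    # Illustrative sketch only, not the actual GitLab implementation:
    # precompute a Symbol metric name once at instrumentation time instead
    # of building a String (and casting the timing to a Float) on every call.
    module SimpleInstrumentation
      NAMES = {} # [module, method name] => Symbol, computed once

      def self.instrument(mod, name)
        NAMES[[mod, name]] = :"#{mod.name}.#{name}"
        original = mod.method(name)

        mod.define_singleton_method(name) do |*args, &block|
          start  = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
          result = original.call(*args, &block)
          took   = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start

          # `took` stays an Integer; no .to_f cast and only one Hash lookup
          puts "#{NAMES[[mod, name]]} took #{took} ms"

          result
        end
      end
    end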
2016-07-28 16:56:17 +02:00
Yorick Peterse
be3b878443 Track method call times/counts as a single metric
Previously we'd create a separate Metric instance for every method call
that would exceed the method call threshold. This is problematic because
it makes it hard to accurately determine the _total_ execution time of a
particular method. For example, if the method
"Foo#bar" was called 4 times with a runtime of ~10 milliseconds we'd end
up with 4 different Metric instances. If we were to then get the
average/95th percentile/etc of the timings this would be roughly 10
milliseconds. However, the _actual_ total time spent in this method
would be around 40 milliseconds.

To solve this problem we now create a single Metric instance per method.
This Metric instance contains the _total_ real/CPU time and the call
count for every instrumented method.
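
A minimal sketch of the aggregation idea, using a hypothetical class
name:

    # One accumulator per method name, flushed as a single Metric per
    # method at the end of the transaction.
    class MethodCallStats
      def initialize
        @methods = Hash.new do |hash, name|
          hash[name] = { real_time: 0.0, cpu_time: 0.0, call_count: 0 }
        end
      end

      def observe(name, real_time, cpu_time)
        stats = @methods[name]
        stats[:real_time]  += real_time
        stats[:cpu_time]   += cpu_time
        stats[:call_count] += 1
      end

      attr_reader :methods
    end

With this, four ~10 millisecond calls to "Foo#bar" show up as a single
entry with a call_count of 4 and a real_time of roughly 40 milliseconds,
so the total time is preserved.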
2016-06-17 13:09:55 -04:00
Yorick Peterse
5679ee0120 Track memory allocated during a transaction
This gives a very rough estimate of how much memory is allocated during
a transaction. This only works reliably when using a single-threaded
application server and a Ruby implementation with a GIL as otherwise
memory allocated by other threads might skew the statistics. Sadly
there's no way around this as Ruby doesn't provide a reliable way of
gathering accurate object sizes upon allocation on a per-thread basis.
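
A minimal sketch of the before/after approach, using
GC.stat[:total_allocated_objects] as a stand-in counter (it counts
objects rather than bytes; the actual implementation may differ):

    # Rough estimate only: other threads also bump this counter.
    def with_allocation_estimate
      before = GC.stat[:total_allocated_objects]
      result = yield
      allocated = GC.stat[:total_allocated_objects] - before

      [result, allocated]
    end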
2016-01-12 14:59:30 +01:00
Yorick Peterse
35b501f30a Tag all transaction metrics with an "action" tag
Without this it's impossible to find out what methods/views/queries are
executed by a certain controller or Sidekiq worker. While this will
increase the total number of series, it should stay within reasonable
limits because the number of distinct "actions" is small enough.
2016-01-11 16:51:01 +01:00
Yorick Peterse
7b10cb6f0f Store request methods/URIs as values
Since filtering by these values is very rare (they're mostly just
displayed as-is) we don't need to waste any index space by saving them
as tags. By storing them as values we also greatly reduce the number of
series in InfluxDB.
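
Illustrative only (not the exact schema): the request method and URI end
up in the point's values rather than its tags:

    # Tags are indexed and create new series per unique value; values are
    # not indexed, which is fine for fields we rarely filter on.
    point = {
      series: 'rails_transactions',
      tags:   { action: 'ProjectsController#show' },
      values: { duration: 250, method: 'GET', uri: '/dashboard' }
    }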
2016-01-07 13:05:00 +01:00
Yorick Peterse
364b07cff0 Removed UUIDs from metrics transactions
While useful for finding out what methods/views belong to a transaction,
this might result in too much data being stored in InfluxDB.
2016-01-07 12:44:15 +01:00
Yorick Peterse
2ee8f55599 Automatically prefix transaction series names
This ensures Rails and Sidekiq transactions are split into the series
"rails_transactions" and "sidekiq_transactions" respectively.
2016-01-04 13:17:02 +01:00
Yorick Peterse
96075be6f4 Ability to increment custom transaction values
This will be used to store/increment the total query/view rendering
timings on a per-transaction basis. This in turn can greatly reduce the
number of metrics stored.
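
A hypothetical usage sketch (the names are illustrative, not the actual
API):

    # Each transaction keeps running totals instead of emitting a new
    # metric for every query or rendered view.
    class Transaction
      def initialize
        @values = Hash.new(0)
      end

      # e.g. increment(:sql_duration, 4) for every executed query
      def increment(name, value)
        @values[name] += value
      end
    end

The accumulated totals are then written as values on the single
transaction point when it finishes.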
2016-01-04 11:37:46 +01:00
Yorick Peterse
bd9f86bb8a Use separate series for Rails/Sidekiq transactions
This removes the need for tagging all metrics with a "process_type" tag.
2015-12-31 17:52:51 +01:00
Yorick Peterse
620e7bb3d6 Write to InfluxDB directly via UDP
This removes the need for Sidekiq and any overhead/problems introduced
by TCP. There are a few things to take into account:

1. When writing data to InfluxDB you may still get an error if the
   server becomes unavailable during the write. Because of this we're
   catching all exceptions and simply ignoring them (for now).
2. Writing via UDP apparently requires the timestamp to be in
   nanoseconds; without this the data isn't written properly.
3. Due to the restrictions on UDP buffer sizes, we're writing metrics
   one by one instead of writing all of them at once.
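
A rough sketch of what such a write could look like, assuming the
InfluxDB line protocol over UDP (illustrative, not the exact GitLab
code):

    require 'socket'

    socket    = UDPSocket.new
    timestamp = (Time.now.to_f * 1_000_000_000).to_i # nanoseconds

    # One point per packet to stay well below the UDP buffer size.
    line = "rails_transactions,action=ProjectsController#show duration=250i #{timestamp}"

    begin
      socket.send(line, 0, 'localhost', 8089) # assumed InfluxDB UDP port
    rescue StandardError
      # Ignore errors (for now): losing a metric beats failing a request.
    end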
2015-12-29 14:53:45 +01:00
Yorick Peterse
141e946c3d Storing of application metrics in InfluxDB
This adds the ability to write application metrics (e.g. SQL timings) to
InfluxDB. These metrics can in turn be visualized using Grafana, or
really anything else that can read from InfluxDB. These metrics can be
used to track application performance over time, between different Ruby
versions, different GitLab versions, etc.

== Transaction Metrics

Currently the following is tracked on a per transaction basis (a
transaction is a Rails request or a single Sidekiq job):

* Timings per query along with the raw (obfuscated) SQL and information
  about what file the query originated from.
* Timings per view along with the path of the view and information about
  what file triggered the rendering process.
* The duration of a request itself along with the controller/worker
  class and method name.
* The duration of any instrumented method calls (more below).
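
As an example of how the per-query timings could be captured, a
hypothetical subscriber might look like this (the actual subscriber code
may differ):

    require 'active_support/notifications'

    # sql.active_record is a standard ActiveSupport::Notifications event.
    ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, start, finish, _id, payload|
      duration = (finish - start) * 1000.0 # milliseconds

      # Record `duration` together with payload[:sql] (obfuscated) on the
      # current transaction.
    end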

== Sampled Metrics

Certain metrics can't be directly associated with a transaction. For
example, a process' total memory usage is unrelated to any running
transactions. While a transaction can cause memory usage to go up,
there's no accurate way to determine which transaction is to blame; this
becomes especially problematic in multi-threaded environments.

To solve this problem there's a separate thread that takes samples at a
fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
currently tracks the following:

* The process' total memory usage.
* The number of file descriptors opened by the process.
* The number of Ruby objects (using ObjectSpace.count_objects).
* GC statistics such as timings, heap slots, etc.

The default/current interval is 15 seconds; any smaller interval might
put too much pressure on InfluxDB (especially when running dozens of
processes).
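
A simplified sketch of the sampler loop (the actual
Gitlab::Metrics::Sampler does more than this):

    Thread.new do
      loop do
        sample = {
          object_counts: ObjectSpace.count_objects,
          gc_stats:      GC.stat
        }

        # write `sample` (plus memory/file descriptor stats) to InfluxDB

        sleep(15) # default interval
      end
    end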

== Method Instrumentation

While currently not yet used, methods can be instrumented to track how
long they take to run. Unlike the likes of New Relic, this doesn't
require modifying the source code (e.g. including modules); it all
happens from the outside. For example, to track `User.by_login` we'd add
the following code somewhere in an initializer:

    Gitlab::Metrics::Instrumentation.
      instrument_method(User, :by_login)

To instead instrument an instance method:

    Gitlab::Metrics::Instrumentation.
      instrument_instance_method(User, :save)

Instrumentation for either all public model methods or a few crucial
ones will be added in the near future; I simply haven't gotten to it
just yet.

== Configuration

By default metrics are disabled. This means users don't have to bother
setting anything up if they don't want to. Metrics can be enabled by
editing one's gitlab.yml configuration file (see
config/gitlab.yml.example for example settings).

== Writing Data To InfluxDB

Because InfluxDB is still a fairly young product I expect the worst:
data loss, unexpected reboots, the database not responding, you name it.
Because of this, data is _not_ written to InfluxDB directly; instead
it's queued and processed by Sidekiq. This ensures that users won't
notice anything when InfluxDB is having trouble.

The metrics worker can be started in a standalone manner as follows:

    bundle exec sidekiq -q metrics

The corresponding class is called MetricsWorker.
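
A minimal sketch of what such a worker could look like (the actual
MetricsWorker implementation may differ; the client settings are
placeholders):

    require 'sidekiq'
    require 'influxdb'

    class MetricsWorker
      include Sidekiq::Worker

      sidekiq_options queue: :metrics

      # `metrics` is an Array of point Hashes prepared by the transactions
      def perform(metrics)
        influxdb.write_points(metrics)
      rescue StandardError
        # Ignore write errors so a flaky InfluxDB never affects users.
      end

      def influxdb
        @influxdb ||= InfluxDB::Client.new('gitlab') # placeholder database name
      end
    end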
2015-12-17 17:25:48 +01:00