Commit graph

14 commits

Author SHA1 Message Date
Paco Guzman
330de255b7 RailsCache metrics now include fetch_hit/fetch_miss and read_hit/read_miss info. 2016-07-05 12:28:06 +02:00
Paco Guzman
e9a4d117f2 Instrument cache fetch hits and cache fetch misses 2016-07-05 12:28:06 +02:00
Pablo Carranza
b9306c2e82 Add cache count metrics to rails cache 2016-05-15 19:47:41 +01:00
Yorick Peterse
7e6f0ac0e0
Count the number of SQL queries per transaction
Fixes gitlab-org/gitlab-ce#15335
2016-04-18 14:53:13 +02:00
Yorick Peterse
c56f702ec3
Instrument Rails cache code
This allows us to track how much of a transaction's time is spent
dealing with cached data.
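
As a rough sketch only (not the exact code in this commit), these
timings could be gathered by subscribing to the standard ActiveSupport
cache notifications:

    require 'active_support/notifications'

    %w(cache_read cache_write cache_delete).each do |operation|
      event = "#{operation}.active_support"

      ActiveSupport::Notifications.subscribe(event) do |_, start, finish, _, _|
        duration_ms = (finish - start) * 1000.0

        # `current_transaction` is a hypothetical helper standing in for
        # whatever tracks the transaction this cache call belongs to.
        if current_transaction
          current_transaction.increment(:cache_duration, duration_ms)
        end
      end
    end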
2016-04-08 17:54:52 +02:00
Yorick Peterse
355c341fe7 Stop tracking call stacks for instrumented views
Where a view is called from doesn't matter as much. We already know what
action they belong to and this is more than enough information. By
removing the file/line number from the list of tags we should also be
able to reduce the number of series stored in InfluxDB.
2016-01-12 15:41:22 +01:00
Yorick Peterse
7ed3a5a240 Revert "Store SQL/view timings in milliseconds"
This reverts commit 7549102bb7.

Apparently I was wrong about
ActiveSupport::Notifications::Event#duration returning the duration in
seconds; it already returns the duration in milliseconds.
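
For reference, a minimal example of where that duration comes from (the
subscriber below is illustrative, not code from this repository):

    require 'active_support/notifications'

    ActiveSupport::Notifications.subscribe('sql.active_record') do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)

      # Event#duration is already expressed in milliseconds, so it can be
      # stored as-is without any further conversion.
      event.duration # => e.g. 1.74
    end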
2016-01-07 11:47:06 +01:00
Yorick Peterse
7549102bb7 Store SQL/view timings in milliseconds
Transaction timings are already stored in milliseconds, so this keeps
things consistent.
2016-01-06 16:37:14 +01:00
Yorick Peterse
66a997a914 Track total query/view timings in transactions 2016-01-04 12:14:36 +01:00
Yorick Peterse
a6c60127e3 Removed tracking of raw SQL queries
This particular setup had 3 problems:

1. Storing SQL queries as tags is very inefficient as InfluxDB ends up
   indexing every query (and they can get pretty large). Storing these
   as values instead means we can't always display the SQL as easily.
2. We already instrument ActiveRecord query methods, thus we already
   have timing information about database queries.
3. SQL obfuscation is difficult to get right and I'd rather not expose
   sensitive data by accident.
2015-12-31 17:14:02 +01:00
Yorick Peterse
9f95ff0d90 Track location information as tags
This allows the information to be displayed when using certain functions
(e.g. top()) as well as making it easier to aggregate on a per-file
basis.
2015-12-17 17:25:48 +01:00
Yorick Peterse
1b077d2d81 Use custom code for instrumenting method calls
The use of ActiveSupport would slow down instrumented method calls by
about 180x due to:

1. ActiveSupport itself not being the fastest thing on the planet
2. caller_locations() having quite some overhead

The use of caller_locations() has been removed because it's not _that_
useful since we already know the full namespace of receivers and the
names of the called methods.

The use of ActiveSupport has been replaced with some custom code that's
generated using eval() (which can be quite a bit faster than using
define_method).

This new setup results in instrumented methods only being about 35-40x
slower (compared to non-instrumented methods).
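
A much simplified sketch of the idea (not the actual
Gitlab::Metrics::Instrumentation code): the wrapper method is built as a
string and evaluated, instead of being defined via define_method with a
block:

    module ExampleInstrumentation
      # Wraps a class method such as User.by_login in a timing wrapper.
      def self.instrument_method(mod, name)
        mod.singleton_class.class_eval <<-EOF, __FILE__, __LINE__ + 1
          alias_method :_original_#{name}, :#{name}

          def #{name}(*args, &block)
            start  = Time.now
            retval = _original_#{name}(*args, &block)

            # In the real code the duration would be handed to the current
            # transaction; printing it keeps the sketch self-contained.
            puts "#{mod.name}.#{name}: \#{(Time.now - start) * 1000.0} ms"

            retval
          end
        EOF
      end
    end

    ExampleInstrumentation.instrument_method(User, :by_login)

Generating the wrapper from a string also means no block binding has to
be kept around for every instrumented method, which is what makes this
faster than define_method.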
2015-12-17 17:25:48 +01:00
Yorick Peterse
b66a16c838 Use string evaluation for method instrumentation
This is faster than using define_method since we don't have to keep
block bindings around.
2015-12-17 17:25:48 +01:00
Yorick Peterse
141e946c3d Storing of application metrics in InfluxDB
This adds the ability to write application metrics (e.g. SQL timings) to
InfluxDB. These metrics can in turn be visualized using Grafana, or
really anything else that can read from InfluxDB. These metrics can be
used to track application performance over time, between different Ruby
versions, different GitLab versions, etc.

== Transaction Metrics

Currently the following is tracked on a per transaction basis (a
transaction is a Rails request or a single Sidekiq job):

* Timings per query along with the raw (obfuscated) SQL and information
  about what file the query originated from.
* Timings per view along with the path of the view and information about
  what file triggered the rendering process.
* The duration of a request itself along with the controller/worker
  class and method name.
* The duration of any instrumented method calls (more below).
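
Purely as a sketch of the shape of this data (not the actual
transaction class used here), a per-transaction accumulator could look
roughly like this:

    class ExampleTransaction
      attr_reader :values

      def initialize(action)
        @action     = action   # e.g. "ProjectsController#show"
        @values     = Hash.new(0)
        @started_at = Time.now
      end

      # Called by the various instrumentation hooks, e.g.
      # increment(:sql_duration, 2.3) or increment(:view_duration, 1.8).
      def increment(key, value)
        @values[key] += value
      end

      def finish
        @values[:duration] = (Time.now - @started_at) * 1000.0
        # ...from here the values would be queued for writing to InfluxDB.
      end
    end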

== Sampled Metrics

Certain metrics can't be directly associated with a transaction. For
example, a process' total memory usage is unrelated to any running
transactions. While a transaction can result in the memory usage going
up, there's no accurate way to determine which transaction is to blame;
this becomes especially problematic in multi-threaded environments.

To solve this problem there's a separate thread that takes samples at a
fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
currently tracks the following:

* The process' total memory usage.
* The number of file descriptors opened by the process.
* The number of Ruby objects (using ObjectSpace.count_objects).
* GC statistics such as timings, heap slots, etc.

The default/current interval is 15 seconds; any smaller interval might
put too much pressure on InfluxDB (especially when running dozens of
processes).
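
A rough sketch of that sampling loop (the real implementation lives in
Gitlab::Metrics::Sampler; the code below is only illustrative):

    class ExampleSampler
      def initialize(interval = 15)
        @interval = interval
      end

      def start
        Thread.new do
          loop do
            sleep(@interval)
            sample
          end
        end
      end

      def sample
        # Examples of the kind of data collected on every tick; turning
        # these values into InfluxDB points is omitted here.
        ObjectSpace.count_objects # object counts per type
        GC.stat                   # GC timings, heap slots, etc.
      end
    end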

== Method Instrumentation

While currently not yet used, methods can be instrumented to track how
long they take to run. Unlike the likes of New Relic, this doesn't
require modifying the source code (e.g. including modules); it all
happens from the outside. For example, to track `User.by_login` we'd add
the following code somewhere in an initializer:

    Gitlab::Metrics::Instrumentation.
      instrument_method(User, :by_login)

To instrument an instance method instead:

    Gitlab::Metrics::Instrumentation.
      instrument_instance_method(User, :save)

Instrumentation for either all public model methods or a few crucial
ones will be added in the near future; I simply haven't gotten to doing
so just yet.
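
As an illustration of how the two calls above could be combined for that
(hypothetical usage, not code that exists yet):

    User.singleton_methods(false).each do |name|
      Gitlab::Metrics::Instrumentation.instrument_method(User, name)
    end

    User.public_instance_methods(false).each do |name|
      Gitlab::Metrics::Instrumentation.instrument_instance_method(User, name)
    end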

== Configuration

By default metrics are disabled. This means users don't have to bother
setting anything up if they don't want to. Metrics can be enabled by
editing one's gitlab.yml configuration file (see
config/gitlab.yml.example for example settings).

== Writing Data To InfluxDB

Because InfluxDB is still a fairly young product I expect the worst:
data loss, unexpected reboots, the database not responding, you name it.
Because of this, data is _not_ written to InfluxDB directly; instead it's
queued and processed by Sidekiq. This ensures that users won't notice
anything when InfluxDB is having trouble.

The metrics worker can be started in a standalone manner as follows:

    bundle exec sidekiq -q metrics

The corresponding class is called MetricsWorker.
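
For illustration, such a worker could be structured roughly as follows
(a sketch rather than the actual MetricsWorker; InfluxDB::Client and
write_points come from the influxdb gem):

    require 'sidekiq'
    require 'influxdb'

    class ExampleMetricsWorker
      include Sidekiq::Worker

      sidekiq_options queue: :metrics

      # `metrics` is an Array of point Hashes queued by the application.
      def perform(metrics)
        client.write_points(metrics)
      end

      private

      def client
        # The database name is a placeholder; connection settings would
        # come from the metrics configuration in gitlab.yml.
        @client ||= InfluxDB::Client.new('gitlab_metrics')
      end
    end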
2015-12-17 17:25:48 +01:00