This reduces the overhead of the method instrumentation code primarily
by reducing the number of method calls. There are also some other small
optimisations such as not casting timing values to Floats (there's no
particular need for this), using Symbols for method call metric names,
and reducing the number of Hash lookups for instrumented methods.
The exact impact depends on the code being executed. For example, for a
method that's only called once the difference won't be very noticeable.
However, for methods that are called many times the difference can be
more significant.
For example, the loading time of a large commit
(nrclark/dummy_project@81ebdea5df)
was reduced from around 19 seconds to around 15 seconds using these
changes.
Previously we'd create a separate Metric instance for every method call
that would exceed the method call threshold. This is problematic because
it doesn't provide us with information to accurately get the _total_
execution time of a particular method. For example, if the method
"Foo#bar" was called 4 times, each call taking roughly 10 milliseconds,
we'd end up with 4 different Metric instances. If we were to then get
the average/95th percentile/etc of these timings the result would be
roughly 10 milliseconds. However, the _actual_ total time spent in this
method would be around 40 milliseconds.
To solve this problem we now create a single Metric instance per method.
This Metric instance contains the _total_ real/CPU time and the call
count for every instrumented method.
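Conceptually the aggregation boils down to something like the
following (a rough sketch; the class and field names are illustrative
and not the actual Gitlab::Metrics code):

    class MethodCallStats
      def initialize
        # One entry per method name, e.g. :"Foo#bar".
        @calls = Hash.new do |hash, name|
          hash[name] = { real_time: 0, cpu_time: 0, call_count: 0 }
        end
      end

      # Adds the timings of a single call to the method's running totals.
      def measure(name, real_time, cpu_time)
        stats = @calls[name]

        stats[:real_time]  += real_time
        stats[:cpu_time]   += cpu_time
        stats[:call_count] += 1
      end
    end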
By default instrumentation will instrument public, protected and
private methods, because usually the heavy lifting happens in private
methods, or at least that's what the data shows.
Because method call timings are inclusive (that is, they include the
time of any sub method calls) this would lead to the total method
execution time often being far greater than the total transaction time.
Because this is incredibly confusing it's best to simply _not_ track
the total method execution time; after all, it's not that useful to
begin with.
Fixes gitlab-org/gitlab-ce#17239
By using Module#prepend we can define a Module containing all proxy
methods. This removes the need for setting up crazy method alias chains
and in turn prevents us from having to deal with all that madness (e.g.
methods calling each other recursively).
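As a rough sketch (simplified, not the actual generated proxy code),
prepending a module means the proxy method runs first and reaches the
original implementation via a plain `super` call:

    class Foo
      def bar
        sleep(0.01)
      end
    end

    module FooInstrumentation
      def bar(*args, &block)
        start  = Time.now
        retval = super # runs the original Foo#bar
        puts "Foo#bar took #{Time.now - start} seconds"
        retval
      end
    end

    # No alias chains needed: the proxy module sits in front of Foo in
    # the ancestor chain, so Foo.new.bar goes through the instrumented
    # version automatically.
    Foo.prepend(FooInstrumentation)

    Foo.new.bar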
Fixes gitlab-org/gitlab-ce#15281
This ensures that an instrumented method that doesn't take arguments
reports an arity of 0, instead of -1.
If Ruby had a proper method for finding out the required arguments of a
method (e.g. Method#required_arguments) this would not have been an
issue. Sadly the only two methods we have are Method#parameters and
Method#arity, and both are equally painful to use.
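To illustrate the problem (a sketch of the check involved, not the
actual generator code):

    class Foo
      # Takes no arguments at all.
      def bar; end
    end

    Foo.instance_method(:bar).arity      # => 0
    Foo.instance_method(:bar).parameters # => []

    # A proxy generated with a blanket "def bar(*args, &block)"
    # signature would report an arity of -1 instead, so the generator
    # only emits a splat when the original method actually accepts
    # arguments (illustrative only):
    args = Foo.instance_method(:bar).arity == 0 ? '' : '*args, &block'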
Fixes gitlab-org/gitlab-ce#12450
This ensures we don't end up wasting resources by tracking method calls
that only take a few microseconds. By default the threshold is 10
milliseconds but this can be changed using the gitlab.yml configuration
file.
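For example, a setting along these lines in gitlab.yml would raise the
threshold (the key name is illustrative; config/gitlab.yml.example is
the authoritative reference):

    metrics:
      # Ignore instrumented calls completing in under 25 milliseconds.
      method_call_threshold: 25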
When using instrument_methods/instrument_instance_methods we only want
to instrument methods defined directly in a class, not those included
via mixins (e.g. whatever RSpec throws in during development).
In case an externally included method _has_ to be instrumented we can
still use the regular instrument_method/instrument_instance_method
methods.
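The filtering boils down to checking the owner of every candidate
method, along these lines (a sketch of the approach, not the actual
implementation):

    module Mixin
      def from_mixin; end
    end

    class Blog
      include Mixin

      def from_blog; end
    end

    # Keep only the methods Blog defines itself, dropping anything
    # that was mixed in from elsewhere.
    own = Blog.instance_methods.select do |name|
      Blog.instance_method(name).owner == Blog
    end

    own # => [:from_blog]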
The methods Instrumentation.instrument_methods and
Instrumentation.instrument_instance_methods can be used to instrument
all methods of a module at once.
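For example (assuming these methods take the module or class to
instrument, as described above):

    Gitlab::Metrics::Instrumentation.
      instrument_methods(User)

    Gitlab::Metrics::Instrumentation.
      instrument_instance_methods(User)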
The use of ActiveSupport would slow down instrumented method calls by
about 180x due to:
1. ActiveSupport itself not being the fastest thing on the planet
2. caller_locations() having quite some overhead
The use of caller_locations() has been removed because it's not _that_
useful since we already know the full namespace of receivers and the
names of the called methods.
The use of ActiveSupport has been replaced with some custom code that's
generated using eval() (which can be quite a bit faster than using
define_method).
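The difference, roughly, is defining the proxy with a block versus
compiling an ordinary method definition from a string (a simplified
sketch; the real generated code also handles timings, thresholds and
the like):

    slow_proxy = Module.new
    fast_proxy = Module.new

    # define_method: every call to bar goes through a block/closure.
    slow_proxy.send(:define_method, :bar) do |*args, &block|
      super(*args, &block)
    end

    # class_eval with a string: the result is a plain method
    # definition, which Ruby can dispatch to more cheaply.
    fast_proxy.class_eval <<-EOS, __FILE__, __LINE__ + 1
      def bar(*args, &block)
        super
      end
    EOS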
This new setup results in instrumented methods only being about 35-40x
slower (compared to non-instrumented methods).
This adds the ability to write application metrics (e.g. SQL timings) to
InfluxDB. These metrics can in turn be visualized using Grafana, or
really anything else that can read from InfluxDB. These metrics can be
used to track application performance over time, between different Ruby
versions, different GitLab versions, etc.
== Transaction Metrics
Currently the following is tracked on a per transaction basis (a
transaction is a Rails request or a single Sidekiq job):
* Timings per query along with the raw (obfuscated) SQL and information
about what file the query originated from.
* Timings per view along with the path of the view and information about
what file triggered the rendering process.
* The duration of a request itself along with the controller/worker
class and method name.
* The duration of any instrumented method calls (more below).
== Sampled Metrics
Certain metrics can't be directly associated with a transaction. For
example, a process' total memory usage is unrelated to any running
transactions. While a transaction can result in the memory usage going
up, there's no accurate way to determine which transaction is to blame;
this becomes especially problematic in multi-threaded environments.
To solve this problem there's a separate thread that takes samples at a
fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
currently tracks the following:
* The process' total memory usage.
* The number of file descriptors opened by the process.
* The number of Ruby objects (using ObjectSpace.count_objects).
* GC statistics such as timings, heap slots, etc.
The default/current interval is 15 seconds; any smaller interval might
put too much pressure on InfluxDB (especially when running dozens of
processes).
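In simplified form the sampler boils down to something like this (a
sketch only; the actual Gitlab::Metrics::Sampler gathers more detail
and hands the data to the metrics queue instead of printing it):

    Thread.new do
      loop do
        sample = {
          # Resident set size of the current process, in kilobytes.
          memory_usage:     `ps -o rss= -p #{Process.pid}`.to_i,
          # Open file descriptors (Linux-specific path).
          file_descriptors: Dir.glob('/proc/self/fd/*').length,
          object_counts:    ObjectSpace.count_objects,
          gc_stats:         GC.stat
        }

        puts sample.inspect

        sleep(15) # the default sampling interval
      end
    end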
== Method Instrumentation
While not yet used anywhere, methods can be instrumented to track how
long they take to run. Unlike the likes of New Relic this doesn't
require modifying the source code (e.g. including modules); it all
happens from the outside. For example, to track `User.by_login` we'd
add the following code somewhere in an initializer:
Gitlab::Metrics::Instrumentation.
instrument_method(User, :by_login)
To instead instrument an instance method:
Gitlab::Metrics::Instrumentation.
instrument_instance_method(User, :save)
Instrumentation for either all public model methods or a few crucial
ones will be added in the near future; I simply haven't gotten to doing
so just yet.
== Configuration
By default metrics are disabled. This means users don't have to bother
setting anything up if they don't want to. Metrics can be enabled by
editing one's gitlab.yml configuration file (see
config/gitlab.yml.example for example settings).
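For example, enabling metrics comes down to something along these
lines in gitlab.yml (key names are illustrative;
config/gitlab.yml.example is the authoritative reference):

    metrics:
      enabled: true
      # InfluxDB connection details, the sampling interval, the method
      # call threshold, etc. also live under this key.
      host: localhost
      port: 8089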
== Writing Data To InfluxDB
Because InfluxDB is still a fairly young product I expect the worst:
data loss, unexpected reboots, the database not responding, you name
it. Because of this, data is _not_ written to InfluxDB directly;
instead it's queued and processed by Sidekiq. This ensures that users
won't notice anything when InfluxDB is having trouble.
The metrics worker can be started standalone as follows:
bundle exec sidekiq -q metrics
The corresponding class is called MetricsWorker.