# frozen_string_literal: true

require 'spec_helper'

RSpec.describe Gitlab::Metrics::System do
  context 'when /proc files exist' do
    # Modified column 22 to be 1000 (starttime ticks)
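    # (Field 22 of /proc/[pid]/stat is the process start time, expressed in
    # clock ticks since boot; see proc(5).)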
    let(:proc_stat) do
      <<~SNIP
        2095 (ruby) R 0 2095 2095 34818 2095 4194560 211267 7897 2 0 287 51 10 1 20 0 5 0 1000 566210560 80885 18446744073709551615 94736211292160 94736211292813 140720919612064 0 0 0 0 0 1107394127 0 0 0 17 3 0 0 0 0 0 94736211303768 94736211304544 94736226689024 140720919619473 140720919619513 140720919619513 140720919621604 0
      SNIP
    end

    # Fixtures pulled from:
    # Linux carbon 5.3.0-7648-generic #41~1586789791~19.10~9593806-Ubuntu SMP Mon Apr 13 17:50:40 UTC x86_64 x86_64 x86_64 GNU/Linux
    let(:proc_status) do
      # most rows omitted for brevity
      <<~SNIP
        Name: less
        VmHWM: 2468 kB
        VmRSS: 2468 kB
        RssAnon: 260 kB
      SNIP
    end

    let(:proc_smaps_rollup) do
      # full snapshot
      <<~SNIP
        Rss: 2564 kB
        Pss: 503 kB
        Pss_Anon: 312 kB
        Pss_File: 191 kB
        Pss_Shmem: 0 kB
        Shared_Clean: 2100 kB
        Shared_Dirty: 0 kB
        Private_Clean: 152 kB
        Private_Dirty: 312 kB
        Referenced: 2564 kB
        Anonymous: 312 kB
        LazyFree: 0 kB
        AnonHugePages: 0 kB
        ShmemPmdMapped: 0 kB
        Shared_Hugetlb: 0 kB
        Private_Hugetlb: 0 kB
        Swap: 0 kB
        SwapPss: 0 kB
        Locked: 0 kB
      SNIP
    end

    let(:proc_limits) do
      # full snapshot
      <<~SNIP
        Limit                     Soft Limit           Hard Limit           Units
        Max cpu time              unlimited            unlimited            seconds
        Max file size             unlimited            unlimited            bytes
        Max data size             unlimited            unlimited            bytes
        Max stack size            8388608              unlimited            bytes
        Max core file size        0                    unlimited            bytes
        Max resident set          unlimited            unlimited            bytes
        Max processes             126519               126519               processes
        Max open files            1024                 1048576              files
        Max locked memory         67108864             67108864             bytes
        Max address space         unlimited            unlimited            bytes
        Max file locks            unlimited            unlimited            locks
        Max pending signals       126519               126519               signals
        Max msgqueue size         819200               819200               bytes
        Max nice priority         0                    0
        Max realtime priority     0                    0
        Max realtime timeout      unlimited            unlimited            us
      SNIP
    end

    describe '.memory_usage_rss' do
      it "returns the process' resident set size (RSS) in bytes" do
        mock_existing_proc_file('/proc/self/status', proc_status)

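        # VmRSS (2468 kB) * 1024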
        expect(described_class.memory_usage_rss).to eq(2527232)
      end
    end

    describe '.file_descriptor_count' do
      it 'returns the amount of open file descriptors' do
        expect(Dir).to receive(:glob).and_return(['/some/path', '/some/other/path'])

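        # Dir.glob is stubbed above, so the count is simply the number of entries returned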
        expect(described_class.file_descriptor_count).to eq(2)
      end
    end

    describe '.max_open_file_descriptors' do
      it 'returns the max allowed open file descriptors' do
        mock_existing_proc_file('/proc/self/limits', proc_limits)

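        # 1024 is the "Max open files" soft limit in the proc_limits fixture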
        expect(described_class.max_open_file_descriptors).to eq(1024)
      end
    end

    describe '.memory_usage_uss_pss' do
      it "returns the process' unique and proportional set size (USS/PSS) in bytes" do
        mock_existing_proc_file('/proc/self/smaps_rollup', proc_smaps_rollup)

        # uss: (Private_Clean (152 kB) + Private_Dirty (312 kB) + Private_Hugetlb (0 kB)) * 1024
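        # pss: Pss (503 kB) * 1024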
        expect(described_class.memory_usage_uss_pss).to eq(uss: 475136, pss: 515072)
      end
    end

    describe '.process_runtime_elapsed_seconds' do
      it 'returns the seconds elapsed since the process was started' do
        # sets process starttime ticks to 1000
        mock_existing_proc_file('/proc/self/stat', proc_stat)
        # system clock ticks/sec
        expect(Etc).to receive(:sysconf).with(Etc::SC_CLK_TCK).and_return(100)
        # system uptime in seconds
        expect(::Process).to receive(:clock_gettime).and_return(15)

        # uptime - (starttime_ticks / ticks_per_sec)
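        # 15 - (1000 / 100) = 5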
        expect(described_class.process_runtime_elapsed_seconds).to eq(5)
      end

      context 'when inputs are not available' do
        it 'returns 0' do
          mock_missing_proc_file
          expect(::Process).to receive(:clock_gettime).and_raise(NameError)

          expect(described_class.process_runtime_elapsed_seconds).to eq(0)
        end
      end
    end

    describe '.summary' do
      it 'contains a selection of the available fields' do
        stub_const('RUBY_DESCRIPTION', 'ruby-3.0-patch1')
        mock_existing_proc_file('/proc/self/status', proc_status)
        mock_existing_proc_file('/proc/self/smaps_rollup', proc_smaps_rollup)

        summary = described_class.summary

        expect(summary[:version]).to eq('ruby-3.0-patch1')
        expect(summary[:gc_stat].keys).to eq(GC.stat.keys)
        expect(summary[:memory_rss]).to eq(2527232)
        expect(summary[:memory_uss]).to eq(475136)
        expect(summary[:memory_pss]).to eq(515072)
        expect(summary[:time_cputime]).to be_a(Float)
        expect(summary[:time_realtime]).to be_a(Float)
        expect(summary[:time_monotonic]).to be_a(Float)
      end
    end
  end

  context 'when /proc files do not exist' do
    before do
      mock_missing_proc_file
    end

    describe '.memory_usage_rss' do
      it 'returns 0' do
        expect(described_class.memory_usage_rss).to eq(0)
      end
    end

    describe '.memory_usage_uss_pss' do
      it "returns 0 for all components" do
        expect(described_class.memory_usage_uss_pss).to eq(uss: 0, pss: 0)
      end
    end

    describe '.file_descriptor_count' do
      it 'returns 0' do
        expect(Dir).to receive(:glob).and_return([])

        expect(described_class.file_descriptor_count).to eq(0)
      end
    end

    describe '.max_open_file_descriptors' do
      it 'returns 0' do
        expect(described_class.max_open_file_descriptors).to eq(0)
      end
    end

    describe '.summary' do
      it 'returns only available fields' do
        summary = described_class.summary

        expect(summary[:version]).to be_a(String)
        expect(summary[:gc_stat].keys).to eq(GC.stat.keys)
        expect(summary[:memory_rss]).to eq(0)
        expect(summary[:memory_uss]).to eq(0)
        expect(summary[:memory_pss]).to eq(0)
        expect(summary[:time_cputime]).to be_a(Float)
        expect(summary[:time_realtime]).to be_a(Float)
        expect(summary[:time_monotonic]).to be_a(Float)
      end
    end
  end

  describe '.cpu_time' do
    it 'returns a Float' do
      expect(described_class.cpu_time).to be_a(Float)
    end
  end

  describe '.real_time' do
    it 'returns a Float' do
      expect(described_class.real_time).to be_a(Float)
    end
  end

  describe '.monotonic_time' do
    it 'returns a Float' do
      expect(described_class.monotonic_time).to be_a(Float)
    end
  end

  describe '.thread_cpu_time' do
    it 'returns cpu_time on supported platform' do
      stub_const("Process::CLOCK_THREAD_CPUTIME_ID", 16)

      expect(Process).to receive(:clock_gettime)
        .with(16, kind_of(Symbol)) { 0.111222333 }

      expect(described_class.thread_cpu_time).to eq(0.111222333)
    end

    it 'returns nil on unsupported platform' do
      hide_const("Process::CLOCK_THREAD_CPUTIME_ID")

      expect(described_class.thread_cpu_time).to be_nil
    end
  end

  describe '.thread_cpu_duration' do
    let(:start_time) { described_class.thread_cpu_time }

    it 'returns difference between start and current time' do
      stub_const("Process::CLOCK_THREAD_CPUTIME_ID", 16)

      expect(Process).to receive(:clock_gettime)
        .with(16, kind_of(Symbol))
        .and_return(
          0.111222333,
          0.222333833
        )

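      # 0.222333833 - 0.111222333 = 0.1111115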
      expect(described_class.thread_cpu_duration(start_time)).to eq(0.1111115)
    end

    it 'returns nil on unsupported platform' do
      hide_const("Process::CLOCK_THREAD_CPUTIME_ID")

      expect(described_class.thread_cpu_duration(start_time)).to be_nil
    end
  end

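  # Stubs File.open(path) so that the block File.open receives is invoked with a
  # StringIO wrapping `content`, mimicking a read of that /proc file.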
  def mock_existing_proc_file(path, content)
    allow(File).to receive(:open).with(path) { |_path, &block| block.call(StringIO.new(content)) }
  end

  def mock_missing_proc_file
    allow(File).to receive(:open).and_raise(Errno::ENOENT)
  end
end