gitlab-org--gitlab-foss/lib/gitlab/metrics/system.rb

# frozen_string_literal: true

module Gitlab
  module Metrics
    # Module for gathering system/process statistics such as the memory usage.
    #
    # This module relies on the /proc filesystem being available. If /proc is
    # not available the methods of this module will be stubbed.
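    #
    # A minimal usage sketch; the figures below are invented and assume a
    # Linux host with /proc mounted:
    #
    #   Gitlab::Metrics::System.memory_usage_rss      # => 1474560000 (bytes)
    #   Gitlab::Metrics::System.memory_usage_uss_pss  # => { uss: 102400000, pss: 122880000 }
    #   Gitlab::Metrics::System.file_descriptor_count # => 42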
    module System
      extend self

      PROC_STAT_PATH = '/proc/self/stat'
      PROC_STATUS_PATH = '/proc/%s/status'
      PROC_SMAPS_ROLLUP_PATH = '/proc/%s/smaps_rollup'
      PROC_LIMITS_PATH = '/proc/self/limits'
      PROC_FD_GLOB = '/proc/self/fd/*'

      PRIVATE_PAGES_PATTERN = /^(Private_Clean|Private_Dirty|Private_Hugetlb):\s+(?<value>\d+)/.freeze
      PSS_PATTERN = /^Pss:\s+(?<value>\d+)/.freeze
      RSS_PATTERN = /VmRSS:\s+(?<value>\d+)/.freeze
      MAX_OPEN_FILES_PATTERN = /Max open files\s*(?<value>\d+)/.freeze
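
      # Returns a snapshot of process statistics as a Hash. The shape below is
      # a sketch with invented values:
      #
      #   {
      #     version: "ruby 3.0.5p211 ...",
      #     gc_stat: { count: 12, ... },
      #     memory_rss: 1474560000,
      #     memory_uss: 102400000,
      #     memory_pss: 122880000,
      #     time_cputime: 12.3,
      #     time_realtime: 1670000000.0,
      #     time_monotonic: 1234.5
      #   }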
      def summary
        proportional_mem = memory_usage_uss_pss
        {
          version: RUBY_DESCRIPTION,
          gc_stat: GC.stat,
          memory_rss: memory_usage_rss,
          memory_uss: proportional_mem[:uss],
          memory_pss: proportional_mem[:pss],
          time_cputime: cpu_time,
          time_realtime: real_time,
          time_monotonic: monotonic_time
        }
      end

      # Returns the given process' RSS (resident set size) in bytes.
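      #
      # A sketch of the line this parses from /proc/<pid>/status (value invented):
      #
      #   VmRSS:   1440000 kB
      #
      # RSS_PATTERN captures 1440000 and `.kilobytes` turns it into
      # 1474560000 bytes.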
      def memory_usage_rss(pid: 'self')
        sum_matches(PROC_STATUS_PATH % pid, rss: RSS_PATTERN)[:rss].kilobytes
      end

      # Returns the given process' USS/PSS (unique/proportional set size) in bytes.
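      #
      # Sketch of the relevant /proc/<pid>/smaps_rollup lines (invented values):
      #
      #   Pss:              120000 kB
      #   Private_Clean:     20000 kB
      #   Private_Dirty:     80000 kB
      #   Private_Hugetlb:       0 kB
      #
      # USS sums the Private_* lines, so this example would return
      # { uss: 102400000, pss: 122880000 } (bytes).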
      def memory_usage_uss_pss(pid: 'self')
        sum_matches(PROC_SMAPS_ROLLUP_PATH % pid, uss: PRIVATE_PAGES_PATTERN, pss: PSS_PATTERN)
          .transform_values(&:kilobytes)
      end

      def file_descriptor_count
        Dir.glob(PROC_FD_GLOB).length
      end

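      # Reads the soft limit from the "Max open files" row of /proc/self/limits;
      # an illustrative row (invented numbers):
      #
      #   Max open files            1024                 1048576              files
      #
      # MAX_OPEN_FILES_PATTERN captures the first number, so this would return 1024.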
      def max_open_file_descriptors
        sum_matches(PROC_LIMITS_PATH, max_fds: MAX_OPEN_FILES_PATTERN)[:max_fds]
      end

      def cpu_time
        Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID, :float_second)
      end

      # Returns the current real time in a given precision.
      #
      # Returns the time as a Float for precision = :float_second.
      def real_time(precision = :float_second)
        Process.clock_gettime(Process::CLOCK_REALTIME, precision)
      end

      # Returns the current monotonic clock time as seconds with microsecond precision.
      #
      # Returns the time as a Float.
      def monotonic_time
        Process.clock_gettime(Process::CLOCK_MONOTONIC, :float_second)
      end

      def thread_cpu_time
        # Not all OS kernels support `Process::CLOCK_THREAD_CPUTIME_ID`.
        # Refer: https://gitlab.com/gitlab-org/gitlab/issues/30567#note_221765627
        return unless defined?(Process::CLOCK_THREAD_CPUTIME_ID)

        Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID, :float_second)
      end

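      # Typical pairing with thread_cpu_time (a sketch; `do_work` is a
      # placeholder, and the result is invented):
      #
      #   start = Gitlab::Metrics::System.thread_cpu_time
      #   do_work
      #   Gitlab::Metrics::System.thread_cpu_duration(start) # => 0.042, or nil if unsupported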
      def thread_cpu_duration(start_time)
        end_time = thread_cpu_time
        return unless start_time && end_time

        end_time - start_time
      end

      # Returns the total time the current process has been running in seconds.
      def process_runtime_elapsed_seconds
        # Entry 22 (1-indexed) contains the process `starttime`, see:
        # https://man7.org/linux/man-pages/man5/proc.5.html
        #
        # This value is a fixed timestamp in clock ticks.
        # To obtain an elapsed time in seconds, we divide by the number
        # of ticks per second and subtract from the system uptime.
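        #
        # Worked example with invented numbers: starttime = 44000 ticks and
        # 100 ticks per second give a start offset of 440.0s; with an uptime
        # of 1000.0s the process has been running for 1000.0 - 440.0 = 560.0s.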
        start_time_ticks = proc_stat_entries[21].to_f
        clock_ticks_per_second = Etc.sysconf(Etc::SC_CLK_TCK)

        uptime - (start_time_ticks / clock_ticks_per_second)
      end

      private

      # Given a path to a file in /proc and a hash of (metric, pattern) pairs,
      # sums up all values found for those patterns under the respective metric.
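      #
      # For instance (a sketch with invented figures), summing the Private_*
      # lines of a smaps_rollup file; the result is still in kB, since the
      # callers apply `.kilobytes` themselves:
      #
      #   sum_matches('/proc/self/smaps_rollup', uss: PRIVATE_PAGES_PATTERN)
      #   # => { uss: 100000 }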
      def sum_matches(proc_file, **patterns)
        results = patterns.transform_values { 0 }

        safe_yield_procfile(proc_file) do |io|
          io.each_line do |line|
            patterns.each do |metric, pattern|
              match = line.match(pattern)
              value = match&.named_captures&.fetch('value', 0)
              results[metric] += value.to_i
            end
          end
        end

        results
      end

      def proc_stat_entries
        safe_yield_procfile(PROC_STAT_PATH) do |io|
          io.read.split(' ')
        end || []
      end

      def safe_yield_procfile(path, &block)
        File.open(path, &block)
      rescue Errno::ENOENT
        # This means the procfile we're reading from did not exist;
        # most likely we're on Darwin.
      end

      # Equivalent to reading /proc/uptime on Linux 2.6+.
      #
      # Returns 0 if not supported, e.g. on Darwin.
      def uptime
        Process.clock_gettime(Process::CLOCK_BOOTTIME)
      rescue NameError
        0
      end
    end
  end
end