f1ae1e39ce
Moving the check out of the general requests makes sure we don't add any slowdown to regular requests. To keep the process performing these checks small, the check is still performed inside a Unicorn worker, but it is called from a process running on the same server. Because the checks are now done outside of normal requests, we can use a simpler failure strategy: the check is performed in the background every `circuitbreaker_check_interval`. Failures are logged in Redis and are reset when a check succeeds. Per check, we try `circuitbreaker_access_retries` times within `circuitbreaker_storage_timeout` seconds. When the number of failures exceeds `circuitbreaker_failure_count_threshold`, we block access to the storage. After `failure_reset_time` with no checks, the stored failures are cleared; this can happen when the process that performs the checks is not running.
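For illustration only, here is a minimal sketch of the strategy described above: a loop that probes the storage every `circuitbreaker_check_interval`, retries each probe, counts failures in Redis, and lets the counter expire after `failure_reset_time`. The class name, Redis key, probe path, and `settings` object are assumptions for this sketch, not part of this commit.

    # A minimal sketch of the background check loop described above; the Redis
    # key, the probe, and the settings object are assumptions for illustration,
    # not the actual GitLab implementation.
    require 'redis'
    require 'timeout'

    class StorageCheckSketch
      FAILURE_KEY = 'storage_check:failures'.freeze

      def initialize(settings, redis: Redis.new)
        @settings = settings # responds to the readers from CircuitBreakerSettings
        @redis = redis
      end

      # Probe the storage every `check_interval`, recording the outcome in Redis.
      def run
        loop do
          record_result(storage_accessible?)
          sleep @settings.check_interval
        end
      end

      # The circuit is considered broken once the stored failure count exceeds
      # `failure_count_threshold`; requests would then skip the storage.
      def circuit_broken?
        @redis.get(FAILURE_KEY).to_i > @settings.failure_count_threshold
      end

      private

      # Try the probe `access_retries` times, each attempt bounded by
      # `storage_timeout` seconds.
      def storage_accessible?
        @settings.access_retries.times do
          begin
            return true if Timeout.timeout(@settings.storage_timeout) { probe }
          rescue Timeout::Error
            next
          end
        end
        false
      end

      # Placeholder for the real filesystem probe (assumed path).
      def probe
        File.directory?('/var/opt/gitlab/git-data')
      end

      # A successful check resets the counter; a failure increments it and lets
      # it expire after `failure_reset_time`, so stale failures are cleared when
      # the checking process stops running.
      def record_result(success)
        if success
          @redis.del(FAILURE_KEY)
        else
          @redis.incr(FAILURE_KEY)
          @redis.expire(FAILURE_KEY, @settings.failure_reset_time)
        end
      end
    end

The settings readers used by the sketch come from the module below.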
module Gitlab
  module Git
    module Storage
      module CircuitBreakerSettings
        def failure_count_threshold
          application_settings.circuitbreaker_failure_count_threshold
        end

        def failure_reset_time
          application_settings.circuitbreaker_failure_reset_time
        end

        def storage_timeout
          application_settings.circuitbreaker_storage_timeout
        end

        def access_retries
          application_settings.circuitbreaker_access_retries
        end

        def check_interval
          application_settings.circuitbreaker_check_interval
        end

        def cache_key
          @cache_key ||= "#{Gitlab::Git::Storage::REDIS_KEY_PREFIX}#{storage}:#{hostname}"
        end

        private

        def application_settings
          Gitlab::CurrentSettings.current_application_settings
        end
      end
    end
  end
end
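As a usage sketch (not part of this commit), a class that includes `CircuitBreakerSettings` is expected to provide `storage` and `hostname`, which `cache_key` combines with `REDIS_KEY_PREFIX`; the remaining readers simply delegate to the current application settings. The class name and attributes below are assumptions.

    # Hypothetical including class, showing what CircuitBreakerSettings
    # expects from its host object.
    class StorageHealthExample
      include Gitlab::Git::Storage::CircuitBreakerSettings

      attr_reader :storage, :hostname

      def initialize(storage, hostname)
        @storage = storage
        @hostname = hostname
      end
    end

    health = StorageHealthExample.new('default', 'web-01')
    health.cache_key               # => "#{REDIS_KEY_PREFIX}default:web-01"
    health.failure_count_threshold # value of circuitbreaker_failure_count_threshold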