f1ae1e39ce
Moving the check out of the general request path makes sure we don't slow down regular requests. To keep the process performing these checks small, the check is still performed inside a Unicorn worker, but it is called from a separate process running on the same server. Because the checks are now done outside of normal requests, we can use a simpler failure strategy: the check is performed in the background every `circuitbreaker_check_interval` seconds and failures are logged in Redis. The failures are reset when a check succeeds. Per check we try `circuitbreaker_access_retries` times within `circuitbreaker_storage_timeout` seconds. When the number of failures exceeds `circuitbreaker_failure_count_threshold`, we block access to the storage. After `failure_reset_time` without any checks, we clear the stored failures; this can happen when the process that performs the checks is not running.
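For illustration, here is a minimal sketch of how such a background check loop could count failures in Redis. The class, method names and setting values are hypothetical stand-ins, not the actual GitLab implementation; only the setting names come from the description above.

require 'redis'

class StorageCircuitBreakerSketch
  SETTINGS = {
    circuitbreaker_check_interval: 1,           # seconds between background checks
    circuitbreaker_access_retries: 3,           # attempts per check
    circuitbreaker_storage_timeout: 30,         # seconds allowed for those attempts
    circuitbreaker_failure_count_threshold: 10, # block storage above this count
    failure_reset_time: 1800                    # stale failures expire after this
  }.freeze

  def initialize(storage_path, redis: Redis.new)
    @storage_path = storage_path
    @redis = redis
    @key = "storage_failures:#{storage_path}"
  end

  # Runs in a process on the same server, outside the normal request cycle.
  def run
    loop do
      check_storage
      sleep SETTINGS[:circuitbreaker_check_interval]
    end
  end

  def check_storage
    if accessible_with_retries?
      @redis.del(@key) # a successful check resets the stored failures
    else
      @redis.multi do |multi|
        multi.incr(@key)
        # If no process refreshes the key for `failure_reset_time`, the stored
        # failures expire, covering the case where the checker is not running.
        multi.expire(@key, SETTINGS[:failure_reset_time])
      end
    end
  end

  def circuit_broken?
    @redis.get(@key).to_i > SETTINGS[:circuitbreaker_failure_count_threshold]
  end

  private

  # Try the storage `circuitbreaker_access_retries` times, bounded by
  # `circuitbreaker_storage_timeout` seconds in total.
  def accessible_with_retries?
    deadline = Time.now + SETTINGS[:circuitbreaker_storage_timeout]
    SETTINGS[:circuitbreaker_access_retries].times do
      return false if Time.now > deadline
      return true if File.directory?(@storage_path) # stand-in for the real access check
    end
    false
  end
end

The admin health check controller below surfaces the failing storages and lets an administrator reset the stored failure information: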
class Admin::HealthCheckController < Admin::ApplicationController
  def show
    @errors = HealthCheck::Utils.process_checks(['standard'])
    @failing_storage_statuses = Gitlab::Git::Storage::Health.for_failing_storages
  end

  def reset_storage_health
    Gitlab::Git::Storage::FailureInfo.reset_all!
    redirect_to admin_health_check_path,
      notice: _('Git storage health information has been reset')
  end
end