f30089075f
For reasons unknown, the logs of a web hook were paginated in memory. As a result, the "Edit" page of a web hook would time out once the hook had more than a few thousand log entries. This commit makes the following changes:

1. We use LIMIT/OFFSET to paginate the data, instead of doing this in memory.
2. We limit the logs to the last two days, just like the documentation says (instead of retrieving everything).
3. We change the indexes on "web_hook_logs" so the query to get the data can perform a backwards index scan, without the need for a Filter.

These changes combined ensure that Projects::HooksController#edit no longer times out.
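The pagination change can be illustrated with a minimal sketch (the method names below are hypothetical, not the actual GitLab code): the old approach slices an array that has already been loaded into memory, while the new approach pushes the slicing into SQL via LIMIT/OFFSET so the database only returns one page of rows.

```ruby
# Hypothetical illustration of the two pagination strategies.

# Old behaviour: every row is loaded first, then a page is sliced in Ruby.
def paginate_in_memory(logs, page, per_page)
  logs[(page - 1) * per_page, per_page] || []
end

# New behaviour: build a LIMIT/OFFSET clause so the database does the slicing.
def pagination_clause(page, per_page)
  format('LIMIT %d OFFSET %d', per_page, (page - 1) * per_page)
end
```

With in-memory pagination, the cost of rendering any page includes loading every log row; with LIMIT/OFFSET, at most `per_page` rows are ever transferred.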
28 lines
851 B
Ruby
# frozen_string_literal: true

# See http://doc.gitlab.com/ce/development/migration_style_guide.html
# for more information on how to write migrations for GitLab.

class AlterWebHookLogsIndexes < ActiveRecord::Migration
  include Gitlab::Database::MigrationHelpers

  # Set this constant to true if this migration requires downtime.
  DOWNTIME = false

  disable_ddl_transaction!

  # "created_at" comes first so the Sidekiq worker pruning old webhook logs can
  # use a composite index.
  #
  # We leave the old standalone index on "web_hook_id" in place so future code
  # that doesn't care about "created_at" can still use that index.
  COLUMNS_TO_INDEX = %i[created_at web_hook_id]

  def up
    add_concurrent_index(:web_hook_logs, COLUMNS_TO_INDEX)
  end

  def down
    remove_concurrent_index(:web_hook_logs, COLUMNS_TO_INDEX)
  end
end
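A minimal sketch of what `add_concurrent_index` roughly issues on PostgreSQL (the index name here is illustrative; `Gitlab::Database::MigrationHelpers` derives the real one). CONCURRENTLY builds the index without a write-blocking lock, which is also why the migration calls `disable_ddl_transaction!` — PostgreSQL cannot run CREATE INDEX CONCURRENTLY inside a transaction block.

```ruby
# Illustrative only: an approximation of the statement add_concurrent_index
# emits on PostgreSQL for this migration's COLUMNS_TO_INDEX.
CREATE_INDEX_SQL = <<~SQL
  CREATE INDEX CONCURRENTLY index_web_hook_logs_on_created_at_and_web_hook_id
  ON web_hook_logs (created_at, web_hook_id);
SQL
```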