92b2c74ce1
When I proposed using serializable transactions I was hoping we would be able to refresh the data of individual users concurrently. Unfortunately, upon closer inspection it turned out this was not the case. It could result in a lot of queries failing due to serialization errors, overloading the database in the process (given enough workers trying to update the target table). To work around this we're now using a Redis lease that is cancelled upon completion. This ensures we can update the data of different users concurrently without overloading the database. The code will try to obtain the lease until it succeeds, waiting at least 1 second between retries. This is necessary as we may otherwise end up _not_ updating the data, which is not an option.
34 lines
898 B
Ruby
class AuthorizedProjectsWorker
  include Sidekiq::Worker
  include DedicatedSidekiqQueue

  LEASE_TIMEOUT = 1.minute.to_i

  def self.bulk_perform_async(args_list)
    Sidekiq::Client.push_bulk('class' => self, 'args' => args_list)
  end

  def perform(user_id)
    user = User.find_by(id: user_id)

    refresh(user) if user
  end

  def refresh(user)
    lease_key = "refresh_authorized_projects:#{user.id}"
    lease = Gitlab::ExclusiveLease.new(lease_key, timeout: LEASE_TIMEOUT)

    until uuid = lease.try_obtain
      # Keep trying until we obtain the lease. If we don't do so we may end up
      # not updating the list of authorized projects properly. To prevent
      # hammering Redis too much we'll wait for a bit between retries.
      sleep(1)
    end

    begin
      user.refresh_authorized_projects
    ensure
      Gitlab::ExclusiveLease.cancel(lease_key, uuid)
    end
  end
end
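The obtain/cancel protocol the worker relies on can be sketched without Redis or Sidekiq. `FakeLease` below is a hypothetical in-memory stand-in for `Gitlab::ExclusiveLease`: `try_obtain` returns a UUID only if no one currently holds the key, and `cancel` releases the lease only for the holder whose UUID matches.

```ruby
require 'securerandom'

# Hypothetical in-memory stand-in for Gitlab::ExclusiveLease; the real
# class is backed by Redis and also expires leases after a timeout.
class FakeLease
  def self.held
    @held ||= {}
  end

  # Grant the lease only if no one currently holds this key.
  # Returns a UUID on success, nil otherwise.
  def self.try_obtain(key)
    return nil if held.key?(key)
    held[key] = SecureRandom.uuid
  end

  # Release the lease, but only for the holder that owns the UUID.
  def self.cancel(key, uuid)
    held.delete(key) if held[key] == uuid
  end
end

key = 'refresh_authorized_projects:42'
uuid  = FakeLease.try_obtain(key) # first caller wins, gets a UUID
other = FakeLease.try_obtain(key) # concurrent caller gets nil, must retry
FakeLease.cancel(key, uuid)       # holder releases the lease on completion
again = FakeLease.try_obtain(key) # the key is now obtainable again
```

Cancelling with the UUID rather than unconditionally deleting the key is what makes the `ensure` block in the worker safe: a holder can only release its own lease, never one obtained by another worker.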