Commit graph

6 commits

Author SHA1 Message Date
Stan Hu
32b688e785 Enable 5 lines of Sidekiq backtrace to aid in debugging
Customers often have Sidekiq jobs that fail without much context. Without
Sentry, there's no way to tell where these exceptions were raised. Storing
the additional backtrace lines adds a bit more Redis storage overhead. This
commit adds backtrace logging for the workers that delete groups/projects and
import/export projects.

Closes #27626
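
A minimal sketch of the mechanism, assuming a worker along these lines (the
worker name and body are illustrative; `backtrace: 5` is the Sidekiq option the
change relies on):

    require 'sidekiq'

    # Illustrative worker; only the sidekiq_options line is the change in question.
    class ProjectDestroyWorker
      include Sidekiq::Worker

      # Keep up to 5 lines of the failing job's backtrace in the Redis retry
      # payload so failures can be debugged without Sentry.
      sidekiq_options backtrace: 5

      def perform(project_id)
        # ... destroy the project ...
      end
    end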
2017-08-25 05:27:42 -07:00
Robert Speicher
24244d03b5 Revert "Merge branch 'sh-sidekiq-backtrace' into 'master'"
This reverts merge request !13813
2017-08-25 01:47:37 +00:00
Stan Hu
38bb92197d Enable 5 lines of Sidekiq backtrace to aid in debugging
Customers often have Sidekiq jobs that fail without much context. Without
Sentry, there's no way to tell where these exceptions were raised. Storing
the additional backtrace lines adds a bit more Redis storage overhead. This
commit adds backtrace logging for the workers that delete groups/projects and
import/export projects.

Closes #27626
2017-08-24 12:29:50 -07:00
dixpac
0dacf3c169 Fix inconsistent naming for services that delete things
* Renamed delete_user_service and its worker to use destroy
* Moved and renamed delete_group_service to Groups::DestroyService
* Renamed Notes::DeleteService to Notes::DestroyService
2017-02-08 09:16:43 +01:00
Yorick Peterse
97731760d7 Re-organize queues to use for Sidekiq
Dumping too many jobs into the same queue (e.g. the "default" queue) is a
dangerous setup. Given enough of them, jobs that take a long time to process
can effectively block any other work from being performed.

Furthermore, it becomes harder to monitor the jobs, as a single queue can
contain jobs for different workers. In such a setup, the only reliable way of
getting counts per job type is to iterate over all jobs in a queue, which is a
rather time-consuming process.

By using separate queues for the various workers we have better control over
throughput, we can assign weights to queues, and we can monitor queues more
easily. Workers whose work is related still share a queue; for example, the
various CI pipeline workers all use the same "pipeline" queue.

This commit includes a Rails migration that moves Sidekiq jobs from the
old queues to the new ones. This migration also takes care of doing the
inverse if ever needed. This does require downtime as otherwise new jobs
could be scheduled in the old queues after this migration completes.
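
A hedged sketch of such a migration using Sidekiq's API; the class name, queue
names, and exact mechanics are assumptions, not the commit's code:

    require 'sidekiq/api'

    # Illustrative reversible migration; queue names are examples only.
    # Requires downtime: new jobs must not land in the old queue while it runs.
    class MoveSidekiqJobsToNewQueues < ActiveRecord::Migration
      def up
        move_jobs('default', 'project_destroy')
      end

      def down
        move_jobs('project_destroy', 'default')
      end

      private

      # Re-push every job from one queue onto another, then remove the original.
      def move_jobs(from, to)
        Sidekiq::Queue.new(from).each do |job|
          Sidekiq::Client.push(job.item.merge('queue' => to))
          job.delete
        end
      end
    end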

This commit also includes an RSpec test that blacklists the use of the
"default" queue and ensures cron workers use the "cronjob" queue.

Fixes gitlab-org/gitlab-ce#23370
2016-10-21 18:17:07 +02:00
Stan Hu
cb8a425ba4 Fix bug where destroying a namespace would not always destroy projects
There is a race condition in DestroyGroupService now that projects are deleted asynchronously:

1. User attempts to delete group
2. DestroyGroupService iterates through all projects and schedules a Sidekiq job to delete each Project
3. DestroyGroupService destroys the Group, leaving all its projects without a namespace
4. Projects::DestroyService runs later, but the can?(current_user,
   :remove_project) check returns `false` because the user no longer has
   permission to destroy projects with no namespace.
5. This leaves the project in the pending_delete state with no namespace/group.

Projects without a namespace or group also pose another problem: it's not possible to destroy the container
registry tags, since container_registry_path_with_namespace has the wrong value.

The fix is to destroy the group asynchronously and to call execute directly on Projects::DestroyService.

Closes #17893
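
A sketch of the shape of the fix; the worker name and bodies are illustrative,
but the key point from the message is that Projects::DestroyService#execute
runs inline, inside the asynchronous group deletion, while the namespace still
exists:

    require 'sidekiq'

    # Illustrative worker: the group is destroyed asynchronously, and each project
    # is destroyed by calling the service's execute directly (no further async hop),
    # so the permission check still sees a project with a namespace.
    class GroupDestroyWorker
      include Sidekiq::Worker

      def perform(group_id, user_id)
        group = Group.find(group_id)
        user  = User.find(user_id)

        group.projects.each do |project|
          ::Projects::DestroyService.new(project, user).execute
        end

        group.destroy
      end
    end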
2016-08-11 15:36:35 -07:00