Using Sentry, while useful, forces you to choose between two problems:
1. All errors are reported separately, making it easy to create issues
but next to impossible to see other errors (due to the sheer volume of
threshold errors).
2. Errors can be grouped or merged together, reducing the noise. This,
however, also means it's (as far as I can tell) much harder to
automatically create GitLab issues from Sentry for the offending
controllers.
Since both solutions are terrible, I decided to go with a third option:
not using Sentry for this at all. Instead we'll investigate using
Prometheus alerts and Grafana dashboards, which has the added benefit of
letting us measure the behaviour more accurately over time.
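As a rough illustration of that direction, a per-request query-count metric could be recorded and then alerted on from Prometheus and charted in Grafana. This is a minimal sketch assuming the Ruby prometheus-client gem (0.10+ API); the metric name, labels, and buckets are made up for illustration, not the actual implementation:

```ruby
require 'prometheus/client'

# Minimal sketch, assuming the prometheus-client gem's 0.10+ API. The
# metric name, labels, and buckets are made up for illustration.
registry = Prometheus::Client.registry

sql_queries_per_request = Prometheus::Client::Histogram.new(
  :sql_queries_per_request,
  docstring: 'Number of SQL queries executed per web request',
  labels: [:controller, :action],
  buckets: [10, 25, 50, 100, 200, 500]
)
registry.register(sql_queries_per_request)

# At the end of a request we'd record how many queries it executed,
# e.g. a count collected by a middleware or notification subscriber.
queries_executed = 142
sql_queries_per_request.observe(
  queries_executed,
  labels: { controller: 'ProjectsController', action: 'show' }
)
```

A Prometheus alert could then fire when requests regularly land above the 100-query bucket, and Grafana can chart the per-controller distribution over time.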
Note that raising errors in test environments is still enabled, and
whitelisting is still necessary to prevent that from happening (which in
turn still requires that developers create issues).
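For reference, a whitelist entry in a controller might look roughly like the sketch below. The `Gitlab::QueryLimiting.whitelist` helper and the issue-URL argument are assumptions for illustration, not something this description spells out:

```ruby
class ProjectsController < ApplicationController
  def show
    # Hypothetical whitelist call: the helper name and the tracking-issue
    # URL argument are assumptions for this sketch.
    Gitlab::QueryLimiting.whitelist('https://gitlab.com/gitlab-org/gitlab-ce/issues/...')

    # ... action that currently needs more than the allowed number of queries ...
  end
end
```

Tying each whitelist entry to an issue is what keeps the pressure on developers to eventually fix the offending controllers.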
This ensures that we have more visibility into the number of SQL queries
that are executed in web requests. The current threshold is hardcoded to
100, as we expect to change it only rarely (maybe once or twice).
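A minimal sketch of what counting queries against that threshold can look like in a Rails app, using an `ActiveSupport::Notifications` subscriber; the helper and the example block are illustrative, not the actual implementation:

```ruby
require 'active_support/notifications'

THRESHOLD = 100

# Count the SQL queries executed while the given block runs.
def count_queries
  count = 0
  subscriber = ActiveSupport::Notifications.subscribe('sql.active_record') do |*|
    count += 1
  end

  yield
  count
ensure
  ActiveSupport::Notifications.unsubscribe(subscriber)
end

queries = count_queries do
  # e.g. the code exercised by a single web request or test
  Project.find(1).issues.to_a
end

raise "#{queries} SQL queries executed, limit is #{THRESHOLD}" if queries > THRESHOLD
```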
In production and development we report to Sentry if it is enabled; in
the test environment we raise an error. The feature is also only enabled
in production/staging when running on GitLab.com, as it's not very useful
to other users.