Preloading of project_features mitigates N+1 queries when checking
references in other projects.
When loading projects for resources referenced in comments, it makes sense
to also include the associated project_features, because the following step
(`can_read_reference?(user, projects[node], node)`) uses the project
features to check permissions for the given project.
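A rough sketch of the idea (the `project_feature` association name and the
`referenced_project_ids` variable are assumptions for illustration, not the
exact parser code):

```ruby
# Eager-load the feature settings together with the referenced projects, so
# the later can_read_reference?(user, projects[node], node) checks don't
# trigger one extra project_features query per project.
projects = Project.where(id: referenced_project_ids).includes(:project_feature)
```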
Even where it doesn't save lines of code, using `SafeRequestStore` is worth
it, since people will tend to reuse code they've seen. It is also safer,
since you don't have to remember to check `RequestStore.active?`.
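For illustration, the difference at a call site looks roughly like this
(the cache key is made up, and the exact constant name for the safe store
is assumed):

```ruby
# With RequestStore every caller has to guard against the store being
# inactive, e.g. outside of a web request.
if RequestStore.active?
  RequestStore[:cached_users] ||= {}
end

# With SafeRequestStore the guard goes away: when no request store is
# active, reads and writes fall back to a non-persisting store.
SafeRequestStore[:cached_users] ||= {}
```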
This refactors the Markdown pipeline so it supports the rendering of
multiple documents that may belong to different projects. An example of
where this happens is when displaying the event feed of a group. In this
case we retrieve events for all projects in the group. Previously we
would group events per project and render these chunks separately, but
this would result in many SQL queries being executed. By extending the
Markdown pipeline to support this out of the box we can drastically
reduce the number of SQL queries.
To achieve this we introduce a new object to the pipeline:
Banzai::RenderContext. This object simply wraps two other objects: an
optional Project instance, and an optional User instance. On its own
this wouldn't be very helpful, but a RenderContext can also be used to
associate HTML documents with specific Project instances. This work is
done in Banzai::ObjectRenderer and allows us to reuse as many queries
(and results) as possible.
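A condensed sketch of the concept (the method names `associate_document`
and `project_for_node` are illustrative assumptions rather than the exact
API):

```ruby
module Banzai
  # Sketch of the concept only; method names are assumed for illustration.
  class RenderContext
    attr_reader :current_user

    def initialize(project = nil, current_user = nil)
      @current_user = current_user
      # Documents that are never explicitly associated fall back to the
      # (optional) default project.
      @projects = Hash.new(project)
    end

    # Remember which project a rendered document belongs to, so later
    # pipeline stages can reuse project-scoped queries and results.
    def associate_document(document, project)
      @projects[document] = project
      document
    end

    # Look up the project for any node by going through its document.
    def project_for_node(node)
      @projects[node.document]
    end
  end
end
```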
Using the /unsubscribe slash command means that we check whether the current
user is subscribed to the issuable even when they have no explicit
subscription. That means we use the UserParser to find references to them in
the notes.
The UserParser (like all parsers inheriting from BaseParser) uses
RequestStore to cache ActiveRecord objects, so that we don't need to load
the User objects each time if we're parsing references several times in the
same request.
However, it was always returning _all_ of the previously cached items, not
just the ones matching the IDs passed. This meant that we made two passes
through the UserParser if you were mentioned in a comment, and then mentioned
someone else in your own comment while using /unsubscribe:
1. Because /unsubscribe was used, we see if you were mentioned in any comments.
2. Because you mentioned someone, we find them - but we would also get back your
own user, even though you didn't mention yourself. This would incorrectly
create a mention or directly addressed todo for you.
The fix is simple: only return values from the cache matching the IDs passed.
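Sketched in isolation (the method name and cache shape are assumptions, not
the exact BaseParser code), the change amounts to:

```ruby
# The per-request cache is assumed to be a Hash of id => record.
def cached_users_for_ids(cache, ids)
  ids = ids.map(&:to_i)

  # Load and memoise only the records that aren't cached yet.
  missing = ids - cache.keys
  User.where(id: missing).each { |user| cache[user.id] = user } if missing.any?

  # Before the fix this effectively returned cache.values, i.e. every user
  # cached so far in the request; now only the requested IDs come back.
  cache.values_at(*ids).compact
end
```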
Example: for issues that are closed, the links will now show '[closed]' after
the issue number. This is done as a post-processing step, after the Markdown
has been loaded from the cache, because the status of the issue may change
between the cache being populated and the content being displayed.
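A rough illustration of that post-processing step (the filter name, data
attribute, and context key below are assumptions):

```ruby
require 'html/pipeline'

class IssuableStateFilter < HTML::Pipeline::Filter
  # Runs on the HTML loaded from the cache, so the appended state reflects
  # the issue's current status rather than its status at caching time.
  def call
    issues = context[:issues_by_id] || {}

    doc.css('a[data-issue]').each do |node|
      issue = issues[node.attr('data-issue').to_i]
      node.inner_html += ' [closed]' if issue && issue.closed?
    end

    doc
  end
end
```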
To avoid an N+1 queries problem when rendering notes, ObjectRenderer
populates the cache of referenced issuables for all notes at once, before
the post-processing phase.
As part of this change, the Banzai BaseParser#grouped_objects_for_nodes
method has been refactored to return a Hash keyed by the node itself, since
this was a common usage pattern for this method.
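A hypothetical sketch of the new shape (attribute handling and per-request
caching are simplified away):

```ruby
# Returns { node => record } instead of grouping records by attribute value,
# so callers can simply look up issuables[node] while post-processing.
def grouped_objects_for_nodes(nodes, collection, attribute)
  ids = nodes.map { |node| node.attr(attribute).to_i }
  records = collection.where(id: ids.uniq).index_by(&:id)

  nodes.each_with_object({}) do |node, result|
    record = records[node.attr(attribute).to_i]
    result[node] = record if record
  end
end
```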
Disable markdown in comments when referencing disabled features
Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/23548
This MR prevents the following references when the corresponding feature is disabled:
- issues
- snippets
- commits - when repo is disabled
- commit range - when repo is disabled
- milestones
This MR does not prevent references to repository files, since they are just markdown links and don't leak
information.
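Conceptually, the guard for each reference type looks something like this
sketch (the helper name is made up; the `feature_available?` call is an
assumption about the permission check used):

```ruby
# Sketch only: before rendering an issue reference in a comment, check that
# the issues feature is visible to the current user in that project.
def issue_reference_allowed?(project, user)
  project.feature_available?(:issues, user)
end
```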
See merge request !2011
Signed-off-by: Rémy Coutable <remy@rymai.me>
This splits the Markdown rendering and reference extraction phases into
two distinct code bases. The reference extraction phase no longer relies
on the html-pipeline Gem (and any related code) and allows references to be
extracted from multiple HTML nodes in a single pass. This means that if you
want to extract user references from 200 comments you no longer need to run
200 times N queries; instead only a handful of queries may be needed.
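A rough illustration of the single-pass idea (the `documents` variable,
selector, and data attribute are assumptions):

```ruby
# Gather the user references across all rendered comments first, then
# resolve them with a handful of queries instead of N queries per comment.
nodes    = documents.flat_map { |doc| doc.css('a.gfm[data-user]') }
user_ids = nodes.map { |node| node.attr('data-user').to_i }.uniq
users    = User.where(id: user_ids).index_by(&:id)
```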