If a field is a resolver, its complexity is automatically increased. By default we add extra points for the `sort` and `search` arguments, which are common across resolvers. For specific resolvers we add field-specific complexity; for example, the complexity of Issues is increased when filtering by `labelName`, because the resulting SQL query is more complex. We may want to tune these values in the future based on real-life results. Complexity also depends on the number of loaded nodes, but only when we don't search by specific ID(s). This added complexity is capped (by default at twice the child complexity): although processing more items is more expensive, the increase is not linear (there is little difference between loading 10, 20, or 100 records from the database).
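As a rough sketch of how these pieces could fit together (illustrative only, not the actual implementation): a field's total complexity can be derived from the two resolver hooks defined in the class below, with the node-dependent part capped at twice the child complexity. The `total_field_complexity` helper, its `page_size` argument, and the exact form of the cap are assumptions based on the description above.

# Illustrative only - combines the resolver hooks shown in the class below.
def total_field_complexity(resolver, args, child_complexity, page_size)
  # Base cost: the resolver's own complexity plus its children's.
  base = child_complexity + resolver.resolver_complexity(args)

  # Node-dependent cost: a fraction of the child complexity per loaded node
  # (zero when filtering by specific IDs), capped at twice the child
  # complexity because the cost of loading more rows is not linear.
  per_node = child_complexity * resolver.complexity_multiplier(args) * page_size
  base + [per_node, 2 * child_complexity].min
end

For example, with the defaults below, a search query (`resolver_complexity` of 6) loading 100 nodes with a child complexity of 10 would cost 10 + 6 + min(10, 20) = 26.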
# frozen_string_literal: true

module Resolvers
  class BaseResolver < GraphQL::Schema::Resolver
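    # Returns a memoized subclass of this resolver whose `resolve` returns
    # only the first item of the resolved collection.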
    def self.single
      @single ||= Class.new(self) do
        def resolve(**args)
          super.first
        end
      end
    end

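    # Base complexity for the field: `sort` and `search` arguments make the
    # underlying query more expensive, so they add extra points.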
    def self.resolver_complexity(args)
      complexity = 1
      complexity += 1 if args[:sort]
      complexity += 5 if args[:search]

      complexity
    end

    def self.complexity_multiplier(args)
      # When fetching many items, additional complexity is added to the field
      # depending on how many items are fetched. For each item we add 1% of the
      # original complexity - this means that loading 100 items (our default
      # max_page_size limit) doubles the original complexity.
      #
      # Complexity is not increased when searching by specific ID(s), because
      # the complexity difference is minimal in this case.
      [args[:iid], args[:iids]].any? ? 0 : 0.01
    end
  end
end
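Field-specific complexity (the `labelName` case mentioned above) can then be layered on top by overriding `resolver_complexity` in a concrete resolver. A minimal sketch of how that might look; the extra points added for `labelName` here are illustrative, not the exact values used:

module Resolvers
  class IssuesResolver < BaseResolver
    # Illustrative override: filtering issues by label requires a more
    # complex SQL query, so add extra points on top of the base complexity.
    def self.resolver_complexity(args)
      complexity = super
      complexity += 2 if args[:labelName] # assumed value
      complexity
    end
  end
end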