77c8520e2e
This concern provides an optimized/simplified version of the "cache_key" method. This method is about 9 times faster than the default "cache_key" method. The produced cache keys _are_ different from the previous ones, but this is worth the performance improvement.

To showcase this I set up a benchmark (using benchmark-ips) that compares FasterCacheKeys#cache_key with the regular cache_key. The output of this benchmark was:

    Calculating -------------------------------------
               cache_key     4.825k i/100ms
          cache_key_fast    21.723k i/100ms
    -------------------------------------------------
               cache_key     59.422k (± 7.2%) i/s -    299.150k
          cache_key_fast    543.243k (± 9.2%) i/s -      2.694M

    Comparison:
          cache_key_fast:   543243.4 i/s
               cache_key:    59422.0 i/s - 9.14x slower

To see the impact on real code I applied these changes and benchmarked Issue#referenced_merge_requests. For an issue referencing 10 merge requests, these changes shaved off between 40 and 60 milliseconds.
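The commit message only describes the concern; the actual implementation is in the commit itself. Based on the spec below, which expects `cache_key` to return `"issues/1-#{updated_at}"`, a minimal self-contained sketch of what such a concern could look like is shown here. The `Issue` stand-in model is hypothetical and exists only so the snippet runs without Rails; the real concern is mixed into ActiveRecord models.

```ruby
# A sketch of a FasterCacheKeys-style concern: build the cache key with plain
# string interpolation instead of the more expensive default implementation.
module FasterCacheKeys
  def cache_key
    # Assumption for this sketch: derive the key prefix from the class name.
    # The real concern presumably uses the model's table/cache prefix.
    "#{self.class.name.downcase}s/#{id}-#{updated_at}"
  end
end

# Hypothetical stand-in model so the example runs without Rails.
Issue = Struct.new(:id, :updated_at)

issue = Issue.new(1, '2016-08-08 16:39:00+02')
issue.extend(FasterCacheKeys)
issue.cache_key # => "issues/1-2016-08-08 16:39:00+02"
```

Extending a single instance mirrors what the spec does with `issue.extend(described_class)`; in production the module would be included in the model class instead.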
17 lines
467 B
Ruby
require 'spec_helper'

describe FasterCacheKeys do
  describe '#cache_key' do
    it 'returns a String' do
      # We're using a fixed string here so it's easier to set an expectation for
      # the resulting cache key.
      time = '2016-08-08 16:39:00+02'
      issue = build(:issue, updated_at: time)
      issue.extend(described_class)

      expect(issue).to receive(:id).and_return(1)

      expect(issue.cache_key).to eq("issues/1-#{time}")
    end
  end
end