```ruby
require 'action_pack'
require 'action_dispatch'
require 'allocation_tracer' # provides ObjectSpace::AllocationTracer
require 'benchmark/ips'

route_set = ActionDispatch::Routing::RouteSet.new
routes = ActionDispatch::Routing::Mapper.new route_set

ObjectSpace::AllocationTracer.setup(%i{path line type})

result = ObjectSpace::AllocationTracer.trace do
  500.times do
    routes.resources :foo
  end
end

sorted = ObjectSpace::AllocationTracer.allocated_count_table.sort_by(&:last)

sorted.each do |k,v|
  next if v == 0
  p k => v
end
__END__
Before:
{:T_SYMBOL=>11}
{:T_REGEXP=>17}
{:T_STRUCT=>6500}
{:T_MATCH=>12004}
{:T_OBJECT=>99009}
{:T_DATA=>100088}
{:T_HASH=>122015}
{:T_STRING=>159637}
{:T_IMEMO=>363134}
{:T_ARRAY=>433056}
After:
{:T_SYMBOL=>11}
{:T_REGEXP=>17}
{:T_STRUCT=>6500}
{:T_MATCH=>12004}
{:T_OBJECT=>91009}
{:T_DATA=>100088}
{:T_HASH=>114013}
{:T_STRING=>159637}
{:T_ARRAY=>321056}
{:T_IMEMO=>351133}
```
Eagerly calculate and cache the name of Symbol objects in the path AST.
This drops about 26 string allocations per resource:
```ruby
require 'action_pack'
require 'action_dispatch'
require 'allocation_tracer' # provides ObjectSpace::AllocationTracer
require 'benchmark/ips'

route_set = ActionDispatch::Routing::RouteSet.new
routes = ActionDispatch::Routing::Mapper.new route_set

ObjectSpace::AllocationTracer.setup(%i{path line type})

result = ObjectSpace::AllocationTracer.trace do
  500.times do
    routes.resources :foo
  end
end

sorted = ObjectSpace::AllocationTracer.allocated_count_table.sort_by(&:last)

sorted.each do |k,v|
  next if v == 0
  p k => v
end
__END__
Before:
{:T_SYMBOL=>11}
{:T_REGEXP=>17}
{:T_STRUCT=>6500}
{:T_MATCH=>12004}
{:T_OBJECT=>99009}
{:T_DATA=>116084}
{:T_HASH=>122015}
{:T_STRING=>172647}
{:T_IMEMO=>371132}
{:T_ARRAY=>433056}
After:
{:T_SYMBOL=>11}
{:T_REGEXP=>17}
{:T_STRUCT=>6500}
{:T_MATCH=>12004}
{:T_OBJECT=>99009}
{:T_DATA=>100088}
{:T_HASH=>122015}
{:T_STRING=>159637}
{:T_IMEMO=>363134}
{:T_ARRAY=>433056}
```
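The shape of that change, as a simplified sketch (hypothetical `SymbolNode` class, not the actual `ActionDispatch::Journey::Nodes::Symbol` source): compute the name once when the node is built, instead of allocating a fresh string every time `name` is called.
```ruby
# Simplified sketch of eager name caching; the class name is hypothetical.
class SymbolNode
  attr_reader :left, :name

  def initialize(left)
    @left = left
    # Strip the ":"/"*" sigils once, up front, and reuse the frozen result
    # rather than re-deriving (and re-allocating) the name on each call.
    @name = left.tr(':*', '').freeze
  end
end

node = SymbolNode.new(":id")
node.name.equal?(node.name) # => true, the same String object every time
```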
This commit introduces a functional Path AST visitor and implements
`each` on the AST in terms of the functional visitor. The functional
visitor doesn't maintain state, so we only need to allocate one of them.
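A minimal, self-contained sketch of the idea (hypothetical `Node` and `EachVisitor` classes, not the actual Journey code): because the visitor holds no per-traversal state, a single shared instance can serve every `each` call, with the block threaded through as an argument instead of being stored on a freshly allocated visitor.
```ruby
# Hypothetical sketch of a stateless (functional) visitor backing #each.
Node = Struct.new(:value, :children) do
  def each(&block)
    # Reuse the one shared visitor; nothing is allocated per #each call.
    EachVisitor::INSTANCE.accept(self, block)
  end
end

class EachVisitor
  # The block is passed along as an argument, so the visitor itself needs
  # no instance state and can be shared safely.
  def accept(node, block)
    block.call(node)
    node.children.each { |child| accept(child, block) }
    block
  end

  INSTANCE = new
end

tree = Node.new(:root, [Node.new(:left, []), Node.new(:right, [])])
tree.each { |n| p n.value } # prints :root, :left, :right
```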
Given this benchmark route file:
```ruby
require 'action_pack'
require 'action_dispatch'
require 'allocation_tracer' # provides ObjectSpace::AllocationTracer

route_set = ActionDispatch::Routing::RouteSet.new
routes = ActionDispatch::Routing::Mapper.new route_set

ObjectSpace::AllocationTracer.setup(%i{path line type})

result = ObjectSpace::AllocationTracer.trace do
  500.times { |i|
    routes.resource :omglol
  }
end

result.find_all { |k,v| k.first =~ /git\/rails/ }.sort_by { |k,v|
  v.first
}.each { |k,v|
  p k => v
}
```
node.rb line 17 was one of our top 3 allocation spots:
```
{["/Users/aaron/git/rails/actionpack/lib/action_dispatch/journey/nodes/node.rb", 17, :T_OBJECT]=>[31526, 0, 28329, 0, 2, 1123160]}
{["/Users/aaron/git/rails/actionpack/lib/action_dispatch/routing/mapper.rb", 2080, :T_IMEMO]=>[34002, 0, 30563, 0, 2, 1211480]}
{["/Users/aaron/git/rails/actionpack/lib/action_dispatch/routing/mapper.rb", 2071, :T_IMEMO]=>[121934, 1, 109608, 0, 7, 4344400]}
```
This commit eliminates the allocations at that spot.
I wrote a utility, [let_it_go](https://github.com/schneems/let_it_go), that helps find places where your program could use a frozen string instead of a string literal. After going through its output and adding `.freeze`, I was able to eliminate the creation of 1,114 string objects on EVERY request to [codetriage](codetriage.com). How does this impact execution?
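Before measuring, here is roughly the kind of change let_it_go suggests (a hypothetical method, not an actual Rails or codetriage call site): when a string literal is only read and never mutated, freezing it lets Ruby 2.1+ reuse one shared object instead of allocating a new string on every call.
```ruby
# Hypothetical call site; the method only reads the string, never mutates it.
def content_type_for(type)
  type
end

# A bare literal allocates a new String on every call:
2.times { p content_type_for("text/html").object_id }        # two different ids

# With .freeze, Ruby 2.1+ reuses one shared frozen String:
2.times { p content_type_for("text/html".freeze).object_id } # the same id twice
```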
To look at memory:
```ruby
require 'get_process_mem'

mem = GetProcessMem.new
GC.start
GC.disable

before = mem.mb
1_114.times { " " }
after = mem.mb

GC.enable
puts "Diff: #{after - before} mb"
```
Creating 1,114 string objects results in `Diff: 0.03125 mb` of RAM allocated on every request, or 1 mb every 32 requests.
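The 32-request figure is just the reciprocal of the measured diff:
```ruby
per_request_mb  = 0.03125               # measured diff from above
requests_per_mb = 1.0 / per_request_mb  # => 32.0
```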
To look at raw speed:
```ruby
require 'benchmark/ips'
number_of_objects_reduced = 1_114
Benchmark.ips do |x|
x.report("freeze") { number_of_objects_reduced.times { " ".freeze } }
x.report("no-freeze") { number_of_objects_reduced.times { " " } }
end
```
We get these results:
```
Calculating -------------------------------------
freeze 1.428k i/100ms
no-freeze 609.000 i/100ms
-------------------------------------------------
freeze 14.363k (± 8.5%) i/s - 71.400k
no-freeze 6.084k (± 8.1%) i/s - 30.450k
```
Now we can do some maths:
```ruby
ips = 6_226 # iterations / 1 second
call_time_before = 1.0 / ips # seconds per iteration

ips = 15_254 # iterations / 1 second
call_time_after = 1.0 / ips # seconds per iteration

diff = call_time_before - call_time_after
number_of_objects_reduced * diff * 100
# => 0.4530373333993266 milliseconds saved per request
```
So we're shaving off 1 second of execution time for every 220 requests.
Is this going to be an insane speed boost to any Rails app: nope. Should we merge it: yep.
P.S. If you know of a method call that doesn't modify its string input, such as [String#gsub](b0e2da69f0/lib/let_it_go/core_ext/string.rb (L37)), please [send a pull request to the appropriate file](b0e2da69f0/lib/let_it_go/core_ext/string.rb (L37)) or open an issue in LetItGo so we can track and freeze more strings.
Keep those strings Frozen
![](https://www.dropbox.com/s/z4dj9fdsv213r4v/let-it-go.gif?dl=1)
Move the Journey code underneath the ActionDispatch namespace so
that we don't pollute the global namespace with names that may
be used for models.
Fixes rails/journey#49.
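For illustration only (hypothetical constants, not the actual file layout): nesting Journey under ActionDispatch leaves the top-level `Journey` constant free for application code, such as a model with the same name.
```ruby
# Hypothetical sketch of the namespacing: Journey's classes live under
# ActionDispatch, so the top-level Journey constant stays available.
module ActionDispatch
  module Journey
    class Router
    end
  end
end

# An application can define its own Journey model without colliding with
# the routing internals.
class Journey
end

ActionDispatch::Journey::Router.new # routing internals
Journey.new                         # the app's own model
```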