h2. Performance Testing Rails Applications

This guide covers the various ways of performance testing a Ruby on Rails
application. By referring to this guide, you will be able to:

* Understand the various types of benchmarking and profiling metrics.
* Generate performance and benchmarking tests.
* Install and use a GC-patched Ruby binary to measure memory usage and object allocation.
* Understand the benchmarking information provided by Rails inside the log files.
* Learn about various tools facilitating benchmarking and profiling.

Performance testing is an integral part of the development cycle. It is very
important that you don't make your end users wait too long for a page to load
completely. Ensuring a pleasant browsing experience for end users and cutting
the cost of unnecessary hardware is important for any non-trivial web
application.

endprologue.
h3. Performance Test Cases

Rails performance tests are a special type of integration test, designed for
benchmarking and profiling the test code. With performance tests, you can
determine where your application's memory or speed problems are coming from,
and get a more in-depth picture of those problems.

In a freshly generated Rails application, +test/performance/browsing_test.rb+
contains an example of a performance test:

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class BrowsingTest < ActionDispatch::PerformanceTest
  # Refer to the documentation for all available options
  # self.profile_options = { runs: 5, metrics: [:wall_time, :memory],
  #                          output: 'tmp/performance', formats: [:flat] }

  test "homepage" do
    get '/'
  end
end
</ruby>

This example is a simple performance test case for profiling a GET request to
the application's homepage.
h4. Generating Performance Tests

Rails provides a generator called +performance_test+ for creating new
performance tests:

<shell>
$ rails generate performance_test homepage
</shell>

This generates +homepage_test.rb+ in the +test/performance+ directory:

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class HomepageTest < ActionDispatch::PerformanceTest
  # Refer to the documentation for all available options
  # self.profile_options = { runs: 5, metrics: [:wall_time, :memory],
  #                          output: 'tmp/performance', formats: [:flat] }

  test "homepage" do
    get '/'
  end
end
</ruby>
h4. Examples

Let's assume your application has the following controller and model:

<ruby>
# routes.rb
root to: 'home#dashboard'
resources :posts

# home_controller.rb
class HomeController < ApplicationController
  def dashboard
    @users = User.last_ten.includes(:avatars)
    @posts = Post.all_today
  end
end

# posts_controller.rb
class PostsController < ApplicationController
  def create
    @post = Post.create(params[:post])
    redirect_to(@post)
  end
end

# post.rb
class Post < ActiveRecord::Base
  before_save :recalculate_costly_stats

  def slow_method
    # I fire a gazillion queries sleeping all around
  end

  private

  def recalculate_costly_stats
    # CPU heavy calculations
  end
end
</ruby>
h5. Controller Example

Because performance tests are a special kind of integration test, you can use
the +get+ and +post+ methods in them.

Here's the performance test for +HomeController#dashboard+ and
+PostsController#create+:

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class PostPerformanceTest < ActionDispatch::PerformanceTest
  def setup
    # Application requires logged-in user
    login_as(:lifo)
  end

  test "homepage" do
    get '/dashboard'
  end

  test "creating new post" do
    post '/posts', post: { body: 'lifo is fooling you' }
  end
end
</ruby>

You can find more details about the +get+ and +post+ methods in the
"Testing Rails Applications":testing.html guide.
h5. Model Example

Even though performance tests are integration tests and hence closer to the
request/response cycle by nature, you can still performance test pure model
code.

Performance test for the +Post+ model:

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class PostModelTest < ActionDispatch::PerformanceTest
  test "creation" do
    Post.create body: 'still fooling you', cost: '100'
  end

  test "slow method" do
    # Using posts(:awesome) fixture
    posts(:awesome).slow_method
  end
end
</ruby>
h4. Modes

Performance tests can be run in two modes: Benchmarking and Profiling.

h5. Benchmarking

Benchmarking makes it easy to quickly gather a few metrics about each test run.
By default, each test case is run *4 times* in benchmarking mode.

To run performance tests in benchmarking mode:

<shell>
$ rake test:benchmark
</shell>

h5. Profiling

Profiling allows you to make an in-depth analysis of each of your tests by using
an external profiler. Depending on your Ruby interpreter, this profiler can be
native (Rubinius, JRuby) or not (MRI, which uses RubyProf). By default, each
test case is run *once* in profiling mode.

To run performance tests in profiling mode:

<shell>
$ rake test:profile
</shell>
h4. Metrics

Benchmarking and profiling run performance tests and give you multiple metrics.
The availability of each metric is determined by the interpreter being used—none
of them support all metrics—and by the mode in use. A brief description of each
metric and its availability across interpreters/modes is given below.

h5. Wall Time

Wall time measures the real world time elapsed during the test run. It is
affected by any other processes concurrently running on the system.

h5. Process Time

Process time measures the time taken by the process. It is unaffected by any
other processes running concurrently on the same system. Hence, process time
is likely to be constant for any given performance test, irrespective of the
machine load.

h5. CPU Time

Similar to process time, but leverages the more accurate CPU clock counter
available on the Pentium and PowerPC platforms.

h5. User Time

User time measures the amount of time the CPU spent in user-mode, i.e. within
the process. It is not affected by other processes or by the time the process
possibly spends blocked.

h5. Memory

Memory measures the amount of memory used for the performance test case.

h5. Objects

Objects measures the number of objects allocated for the performance test case.

h5. GC Runs

GC Runs measures the number of times GC was invoked for the performance test case.

h5. GC Time

GC Time measures the amount of time spent in GC for the performance test case.
h5. Metric Availability

h6(#benchmarking_1). Benchmarking

|_.Interpreter|_.Wall Time|_.Process Time|_.CPU Time|_.User Time|_.Memory|_.Objects|_.GC Runs|_.GC Time|
|_.MRI        | yes       | yes          | yes      | no        | yes    | yes     | yes     | yes     |
|_.REE        | yes       | yes          | yes      | no        | yes    | yes     | yes     | yes     |
|_.Rubinius   | yes       | no           | no       | no        | yes    | yes     | yes     | yes     |
|_.JRuby      | yes       | no           | no       | yes       | yes    | yes     | yes     | yes     |

h6(#profiling_1). Profiling

|_.Interpreter|_.Wall Time|_.Process Time|_.CPU Time|_.User Time|_.Memory|_.Objects|_.GC Runs|_.GC Time|
|_.MRI        | yes       | yes          | no       | no        | yes    | yes     | yes     | yes     |
|_.REE        | yes       | yes          | no       | no        | yes    | yes     | yes     | yes     |
|_.Rubinius   | yes       | no           | no       | no        | no     | no      | no      | no      |
|_.JRuby      | yes       | no           | no       | no        | no     | no      | no      | no      |

NOTE: To profile under JRuby you'll need to run +export JRUBY_OPTS="-Xlaunch.inproc=false --profile.api"+
*before* the performance tests.
h4. Understanding the Output

Performance tests generate different outputs inside the +tmp/performance+ directory
depending on their mode and metric.

h5(#output-benchmarking). Benchmarking

In benchmarking mode, performance tests generate two types of outputs.

h6(#output-command-line). Command Line

This is the primary form of output in benchmarking mode. Example:

<shell>
BrowsingTest#test_homepage (31 ms warmup)
           wall_time: 6 ms
              memory: 437.27 KB
             objects: 5,514
             gc_runs: 0
             gc_time: 19 ms
</shell>
h6. CSV Files

Performance test results are also appended to +.csv+ files inside +tmp/performance+.
For example, running the default +BrowsingTest#test_homepage+ will generate the
following five files:

* BrowsingTest#test_homepage_gc_runs.csv
* BrowsingTest#test_homepage_gc_time.csv
* BrowsingTest#test_homepage_memory.csv
* BrowsingTest#test_homepage_objects.csv
* BrowsingTest#test_homepage_wall_time.csv

As the results are appended to these files each time the performance tests are
run in benchmarking mode, you can collect data over a period of time. This can
be very helpful in analyzing the effects of code changes.

Sample output of +BrowsingTest#test_homepage_wall_time.csv+:

<shell>
measurement,created_at,app,rails,ruby,platform
0.00738224999999992,2009-01-08T03:40:29Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00755874999999984,2009-01-08T03:46:18Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00762099999999993,2009-01-08T03:49:25Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00603075000000008,2009-01-08T04:03:29Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00619899999999995,2009-01-08T04:03:53Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00755449999999991,2009-01-08T04:04:55Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00595999999999997,2009-01-08T04:05:06Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00740450000000004,2009-01-09T03:54:47Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00603150000000008,2009-01-09T03:54:57Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00771250000000012,2009-01-09T15:46:03Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
</shell>
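
Because these are plain CSV files, they are easy to post-process with a few
lines of Ruby. The sketch below is not part of Rails (the helper name and file
path are just illustrative); it shows how you could average the wall times
collected so far using the standard +csv+ library:

<ruby>
require 'csv'

# Illustrative helper: averages the 'measurement' column of one of the
# CSV files produced in benchmarking mode (format shown above).
def average_measurement(path)
  rows = CSV.read(path, headers: true)
  values = rows.map { |row| row['measurement'].to_f }
  values.inject(0.0) { |sum, value| sum + value } / values.size
end

puts average_measurement('tmp/performance/BrowsingTest#test_homepage_wall_time.csv')
</ruby>

Pointing such a helper at the files generated before and after a change gives
you a quick, rough comparison of the two runs.
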
h5(#output-profiling). Profiling

In profiling mode, performance tests can generate multiple types of outputs.
The command line output is always presented, but support for the others is
dependent on the interpreter in use. A brief description of each type and
their availability across interpreters is given below.

h6. Command Line

This is a very basic form of output in profiling mode:

<shell>
BrowsingTest#test_homepage (58 ms warmup)
        process_time: 63 ms
              memory: 832.13 KB
             objects: 7,882
</shell>

h6. Flat

Flat output shows the metric—time, memory, etc.—measured in each method.
"Check the Ruby-Prof documentation for a better explanation":http://ruby-prof.rubyforge.org/files/examples/flat_txt.html.

h6. Graph

Graph output shows the metric measured in each method, which methods call it and
which methods it calls. "Check the Ruby-Prof documentation for a better explanation":http://ruby-prof.rubyforge.org/files/examples/graph_txt.html.

h6. Tree

Tree output is profiling information in calltree format for use by "kcachegrind":http://kcachegrind.sourceforge.net/html/Home.html
and similar tools.

h6. Output Availability

|_.        |_.Flat|_.Graph|_.Tree|
|_.MRI     | yes  | yes   | yes  |
|_.REE     | yes  | yes   | yes  |
|_.Rubinius| yes  | yes   | no   |
|_.JRuby   | yes  | yes   | no   |
h4. Tuning Test Runs

Test runs can be tuned by setting the +profile_options+ class variable on your
test class.

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class BrowsingTest < ActionDispatch::PerformanceTest
  self.profile_options = { runs: 5, metrics: [:wall_time, :memory] }

  test "homepage" do
    get '/'
  end
end
</ruby>

In this example, the test would run 5 times and measure wall time and memory.
There are a few configurable options:

|_.Option   |_.Description|_.Default|_.Mode|
|+:runs+    |Number of runs.|Benchmarking: 4, Profiling: 1|Both|
|+:output+  |Directory to use when writing the results.|+tmp/performance+|Both|
|+:metrics+ |Metrics to use.|See below.|Both|
|+:formats+ |Formats to output to.|See below.|Profiling|

Metrics and formats have different defaults depending on the interpreter in use.

|_.Interpreter|_.Mode|_.Default metrics|_.Default formats|
|/2.MRI/REE |Benchmarking|+[:wall_time, :memory, :objects, :gc_runs, :gc_time]+|N/A|
|Profiling  |+[:process_time, :memory, :objects]+|+[:flat, :graph_html, :call_tree, :call_stack]+|
|/2.Rubinius|Benchmarking|+[:wall_time, :memory, :objects, :gc_runs, :gc_time]+|N/A|
|Profiling  |+[:wall_time]+|+[:flat, :graph]+|
|/2.JRuby   |Benchmarking|+[:wall_time, :user_time, :memory, :gc_runs, :gc_time]+|N/A|
|Profiling  |+[:wall_time]+|+[:flat, :graph]+|

As you've probably noticed by now, metrics and formats are specified using a
symbol array with each name "underscored.":http://api.rubyonrails.org/classes/String.html#method-i-underscore
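
For example, if you wanted short profiling runs under MRI that produce flat text
and HTML graph reports, you could combine the options above as in the following
sketch (pick only metrics and formats your interpreter supports, as listed in
the tables):

<ruby>
require 'test_helper'
require 'rails/performance_test_help'

class BrowsingTest < ActionDispatch::PerformanceTest
  # Profile twice, measure process time only, and write flat text and
  # HTML graph reports under tmp/performance.
  self.profile_options = { runs: 2,
                           metrics: [:process_time],
                           formats: [:flat, :graph_html] }

  test "homepage" do
    get '/'
  end
end
</ruby>
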
h4. Performance Test Environment

Performance tests are run in the +test+ environment. However, running performance
tests sets the following configuration parameters:

<shell>
ActionController::Base.perform_caching = true
ActiveSupport::Dependencies.mechanism = :require
Rails.logger.level = ActiveSupport::BufferedLogger::INFO
</shell>

As +ActionController::Base.perform_caching+ is set to +true+, performance tests
will behave much as they do in the +production+ environment.
h4. Installing GC-Patched MRI

To get the best from Rails' performance tests under MRI, you'll need to build
a special Ruby binary with some super powers.

The recommended patches for each MRI version are:

|_.Version|_.Patch|
|1.8.6|ruby186gc|
|1.8.7|ruby187gc|
|1.9.2 and above|gcdata|

All of these can be found in "RVM's _patches_ directory":https://github.com/wayneeseguin/rvm/tree/master/patches/ruby
under each specific interpreter version.

Concerning the installation itself, you can either do this easily by using
"RVM":http://rvm.beginrescueend.com or you can build everything from source,
which is a little bit harder.
h5. Install Using RVM

The process of installing a patched Ruby interpreter is very easy if you let RVM
do the hard work. All of the following RVM commands will provide you with a
patched Ruby interpreter:

<shell>
$ rvm install 1.9.2-p180 --patch gcdata
$ rvm install 1.8.7 --patch ruby187gc
$ rvm install 1.9.2-p180 --patch ~/Downloads/downloaded_gcdata_patch.patch
</shell>

You can even keep your regular interpreter by assigning a name to the patched
one:

<shell>
$ rvm install 1.9.2-p180 --patch gcdata --name gcdata
$ rvm use 1.9.2-p180 # your regular ruby
$ rvm use 1.9.2-p180-gcdata # your patched ruby
</shell>

And it's done! You have installed a patched Ruby interpreter.
h5. Install From Source

This process is a bit more complicated, but straightforward nonetheless. If
you've never compiled a Ruby binary before, follow these steps to build a
Ruby binary inside your home directory.

h6. Download and Extract

<shell>
$ mkdir rubygc
$ wget <the version you want from ftp://ftp.ruby-lang.org/pub/ruby>
$ tar -xzvf <ruby-version.tar.gz>
$ cd <ruby-version>
</shell>

h6. Apply the Patch

<shell>
$ curl http://github.com/wayneeseguin/rvm/raw/master/patches/ruby/1.9.2/p180/gcdata.patch | patch -p0 # if you're on 1.9.2!
$ curl http://github.com/wayneeseguin/rvm/raw/master/patches/ruby/1.8.7/ruby187gc.patch | patch -p0 # if you're on 1.8.7!
</shell>

h6. Configure and Install

The following will install Ruby in your home directory's +/rubygc+ directory.
Make sure to replace +<homedir>+ with the full path to your actual home
directory.

<shell>
$ ./configure --prefix=/<homedir>/rubygc
$ make && make install
</shell>

h6. Prepare Aliases

For convenience, add the following lines to your +~/.profile+:

<shell>
alias gcruby='~/rubygc/bin/ruby'
alias gcrake='~/rubygc/bin/rake'
alias gcgem='~/rubygc/bin/gem'
alias gcirb='~/rubygc/bin/irb'
alias gcrails='~/rubygc/bin/rails'
</shell>

Don't forget to use your aliases from now on.
h4. Using Ruby-Prof on MRI and REE

Add Ruby-Prof to your application's Gemfile if you want to benchmark/profile
under MRI or REE:

<ruby>
gem 'ruby-prof', git: 'git://github.com/wycats/ruby-prof.git'
</ruby>

Now run +bundle install+ and you're ready to go.
h3. Command Line Tools

Writing performance test cases can be overkill when you are looking for one-time
tests. Rails ships with two command line tools that enable quick and dirty
performance testing:

h4. +benchmarker+

Usage:

<shell>
Usage: rails benchmarker 'Ruby.code' 'Ruby.more_code' ... [OPTS]
    -r, --runs N                     Number of runs.
                                     Default: 4
    -o, --output PATH                Directory to use when writing the results.
                                     Default: tmp/performance
    -m, --metrics a,b,c              Metrics to use.
                                     Default: wall_time,memory,objects,gc_runs,gc_time
</shell>

Example:

<shell>
$ rails benchmarker 'Item.all' 'CouchItem.all' --runs 3 --metrics wall_time,memory
</shell>

h4. +profiler+

Usage:

<shell>
Usage: rails profiler 'Ruby.code' 'Ruby.more_code' ... [OPTS]
    -r, --runs N                     Number of runs.
                                     Default: 1
    -o, --output PATH                Directory to use when writing the results.
                                     Default: tmp/performance
        --metrics a,b,c              Metrics to use.
                                     Default: process_time,memory,objects
    -m, --formats x,y,z              Formats to output to.
                                     Default: flat,graph_html,call_tree
</shell>

Example:

<shell>
$ rails profiler 'Item.all' 'CouchItem.all' --runs 2 --metrics process_time --formats flat
</shell>

NOTE: Metrics and formats vary from interpreter to interpreter. Pass +--help+ to
each tool to see the defaults for your interpreter.
h3. Helper Methods

Rails provides various helper methods inside Active Record, Action Controller
and Action View to measure the time taken by a given piece of code. The method
is called +benchmark()+ in all three components.

h4. Model

<ruby>
Project.benchmark("Creating project") do
  project = Project.create("name" => "stuff")
  project.create_manager("name" => "David")
  project.milestones << Milestone.all
end
</ruby>

This benchmarks the code enclosed in the +Project.benchmark("Creating project") do...end+
block and prints the result to the log file:

<ruby>
Creating project (185.3ms)
</ruby>

Please refer to the "API docs":http://api.rubyonrails.org/classes/ActiveSupport/Benchmarkable.html#method-i-benchmark
for additional options to +benchmark()+.
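
For instance, assuming the +:level+ and +:silence+ options described in those
docs, a sketch like the following would log the measurement at the debug level
and silence other log output inside the block:

<ruby>
# Sketch only: relies on the :level and :silence options documented for
# ActiveSupport::Benchmarkable#benchmark.
Project.benchmark("Creating project", level: :debug, silence: true) do
  project = Project.create("name" => "stuff")
  project.create_manager("name" => "David")
end
</ruby>
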
h4. Controller

Similarly, you could use this helper method inside "controllers.":http://api.rubyonrails.org/classes/ActiveSupport/Benchmarkable.html

<ruby>
def process_projects
  benchmark("Processing projects") do
    Project.process(params[:project_ids])
    Project.update_cached_projects
  end
end
</ruby>

NOTE: +benchmark+ is available inside controller actions as shown above; it is
provided by +ActiveSupport::Benchmarkable+.

h4. View

And in "views":http://api.rubyonrails.org/classes/ActiveSupport/Benchmarkable.html:

<erb>
<% benchmark("Showing projects partial") do %>
  <%= render @projects %>
<% end %>
</erb>
h3. Request Logging

Rails log files contain very useful information about the time taken to serve
each request. Here's a typical log file entry:

<shell>
Processing ItemsController#index (for 127.0.0.1 at 2009-01-08 03:06:39) [GET]
Rendering template within layouts/items
Rendering items/index
Completed in 5ms (View: 2, DB: 0) | 200 OK [http://0.0.0.0/items]
</shell>

For this section, we're only interested in the last line:

<shell>
Completed in 5ms (View: 2, DB: 0) | 200 OK [http://0.0.0.0/items]
</shell>

This data is fairly straightforward to understand. Rails uses milliseconds (ms)
as the metric to measure the time taken. The complete request spent 5 ms inside
Rails, out of which 2 ms were spent rendering views and none was spent
communicating with the database. It's safe to assume that the remaining 3 ms
were spent inside the controller.

Michael Koziarski has an "interesting blog post":http://www.therailsway.com/2009/1/6/requests-per-second
explaining the importance of using milliseconds as the metric.
h3. Useful Links

h4. Rails Plugins and Gems

* "Rails Analyzer":http://rails-analyzer.rubyforge.org
* "Palmist":http://www.flyingmachinestudios.com/programming/announcing-palmist
* "Rails Footnotes":https://github.com/josevalim/rails-footnotes/tree/master
* "Query Reviewer":https://github.com/dsboulder/query_reviewer/tree/master

h4. Generic Tools

* "httperf":http://www.hpl.hp.com/research/linux/httperf/
* "ab":http://httpd.apache.org/docs/2.2/programs/ab.html
* "JMeter":http://jakarta.apache.org/jmeter/
* "kcachegrind":http://kcachegrind.sourceforge.net/html/Home.html

h4. Tutorials and Documentation

* "ruby-prof API Documentation":http://ruby-prof.rubyforge.org
* "Request Profiling Railscast":http://railscasts.com/episodes/98-request-profiling - Outdated, but useful for understanding call graphs.

h3. Commercial Products

Rails has been lucky to have a few companies dedicated to Rails-specific
performance tools. A couple of those are:

* "New Relic":http://www.newrelic.com
* "Scout":http://scoutapp.com