
Request body read time metric

Measure the time spent reading the HTTP request body and expose it to the Rack app as `env['puma.request_body_wait']`.

This can be combined with a timestamp from an upstream proxy to get an indication of how long a request was waiting for a Puma thread to become available.

Fixes https://github.com/puma/puma/issues/1541
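As a minimal sketch of how an app might consume the new metric: only the `puma.request_body_wait` key comes from Puma; the app itself is illustrative.

```ruby
# Rack app sketch: read the body-wait metric Puma exposes in the env.
# 'puma.request_body_wait' is set by Puma (milliseconds); everything
# else here is illustrative.
app = lambda do |env|
  wait_ms = env['puma.request_body_wait'] || 0
  [200, { 'Content-Type' => 'text/plain' }, ["body wait: #{wait_ms}ms"]]
end
```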
This commit is contained in:
Rian McGuire 2018-04-24 15:54:51 -03:00
parent 8dbc6eb6ed
commit f2eb2ecfde
4 changed files with 75 additions and 16 deletions

@@ -20,6 +20,7 @@ Clustered mode is shown/discussed here. Single mode is analogous to having a sin
* By default, a single, separate thread is used to receive HTTP requests across the socket.
* When at least one worker thread is available for work, a connection is accepted and placed in this request buffer
* This thread waits for entire HTTP requests to be received over the connection
* The time spent waiting for the HTTP request body to be received is exposed to the Rack app as `env['puma.request_body_wait']` (milliseconds)
* Once received, the connection is pushed into the "todo" set
* Worker threads pop work off the "todo" set for processing
* The thread processes the request via the rack application (which generates the HTTP response)

@@ -54,6 +54,22 @@ up resources.
Watch your CPU utilization over time and aim for about 70% on average. This means
you've still got capacity but aren't starving threads.

**Measuring utilization**
Using a timestamp header from an upstream proxy server (e.g. nginx or haproxy), it's
possible to get an indication of how long requests have been waiting for a Puma
thread to become available.
* Have your upstream proxy set a header with the time it received the request:
* nginx: `proxy_set_header X-Request-Start "${msec}";`
* haproxy: `http-request set-header X-Request-Start "%t";`
* In your Rack middleware, determine the amount of time elapsed since `X-Request-Start`.
* To improve accuracy, you will want to subtract time spent waiting for slow clients:
* `env['puma.request_body_wait']` contains the number of milliseconds Puma spent
waiting for the client to send the request body.
* haproxy: `%Th` (TLS handshake time) and `%Ti` (idle time before request) can
also be added as headers.
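The steps above can be sketched as Rack middleware. This is a minimal sketch, assuming nginx's `${msec}` format (epoch seconds with a millisecond fraction) in `X-Request-Start`; the middleware name and the `app.queue_time_ms` env key are illustrative, not part of Puma.

```ruby
# Illustrative middleware: estimate how long a request queued before
# reaching the app, subtracting the time Puma spent waiting on a slow
# client to send the request body.
class QueueTimeMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    if (start = env['HTTP_X_REQUEST_START'])
      # Assumes nginx ${msec}: epoch seconds with ms fraction, e.g. "1524600000.123"
      elapsed_ms = (Time.now.to_f - start.to_f) * 1000.0
      body_wait_ms = env['puma.request_body_wait'] || 0
      # Clamp at zero in case of clock skew between proxy and app host.
      env['app.queue_time_ms'] = [elapsed_ms - body_wait_ms, 0].max
    end
    @app.call(env)
  end
end
```

Note that haproxy's `%t` is a date/time format rather than an epoch timestamp, so the parsing step would need adjusting for that proxy.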
## Daemonizing
I prefer not to daemonize my servers and use something like `runit` or `upstart` to

@@ -158,8 +158,7 @@ module Puma
@body.rewind
rest = io.read
@buffer = rest.empty? ? nil : rest
@requests_served += 1
@ready = true
set_ready
return true
end
@@ -207,8 +206,7 @@ module Puma
unless chunk
@body.close
@buffer = nil
@requests_served += 1
@ready = true
set_ready
raise EOFError
end
@@ -217,6 +215,8 @@ module Puma
end
def setup_body
@body_read_start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
if @env[HTTP_EXPECT] == CONTINUE
# TODO allow a hook here to check the headers before
# going forward
@@ -241,8 +241,7 @@ module Puma
unless cl
@buffer = body.empty? ? nil : body
@body = EmptyBody
@requests_served += 1
@ready = true
set_ready
return true
end
@@ -251,8 +250,7 @@ module Puma
if remain <= 0
@body = StringIO.new(body)
@buffer = nil
@requests_served += 1
@ready = true
set_ready
return true
end
@@ -287,8 +285,7 @@ module Puma
# No data means a closed socket
unless data
@buffer = nil
@requests_served += 1
@ready = true
set_ready
raise EOFError
end
@@ -324,8 +321,7 @@ module Puma
# No data means a closed socket
unless data
@buffer = nil
@requests_served += 1
@ready = true
set_ready
raise EOFError
end
@@ -402,8 +398,7 @@ module Puma
unless chunk
@body.close
@buffer = nil
@requests_served += 1
@ready = true
set_ready
raise EOFError
end
@@ -412,8 +407,7 @@ module Puma
if remain <= 0
@body.rewind
@buffer = nil
@requests_served += 1
@ready = true
set_ready
return true
end
@@ -422,6 +416,14 @@ module Puma
false
end
def set_ready
if @body_read_start
@env['puma.request_body_wait'] = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - @body_read_start
end
@requests_served += 1
@ready = true
end
def write_400
begin
@io << ERROR_400_RESPONSE

@@ -691,4 +691,44 @@ EOF
assert_equal "HTTP/1.0 200 OK\r\nX-Empty-Header: \r\n\r\n", data
end
def test_request_body_wait
request_body_wait = nil
@server.app = proc { |env|
request_body_wait = env['puma.request_body_wait']
[204, {}, []]
}
@server.add_tcp_listener @host, @port
@server.run
sock = TCPSocket.new @host, @server.connected_port
sock << "POST / HTTP/1.1\r\nHost: test.com\r\nContent-Type: text/plain\r\nContent-Length: 5\r\n\r\nh"
sleep 1
sock << "ello"
sock.gets
assert request_body_wait >= 1000
end
def test_request_body_wait_chunked
request_body_wait = nil
@server.app = proc { |env|
request_body_wait = env['puma.request_body_wait']
[204, {}, []]
}
@server.add_tcp_listener @host, @port
@server.run
sock = TCPSocket.new @host, @server.connected_port
sock << "GET / HTTP/1.1\r\nConnection: close\r\nTransfer-Encoding: chunked\r\n\r\n1\r\nh\r\n"
sleep 1
sock << "4\r\nello\r\n0\r\n"
sock.gets
assert request_body_wait >= 1000
end
end