
Merge pull request #1576 from puma/schneems/doc-threadpoo

Document ThreadPool and Friends
This commit is contained in:
Richard Schneeman 2018-05-01 16:56:42 -05:00 committed by GitHub
commit 119b6eb4ad
7 changed files with 80 additions and 1 deletion


@@ -21,6 +21,17 @@ module Puma
class ConnectionError < RuntimeError; end
# An instance of this class represents a unique request from a client,
# for example a web request from a browser or from curl.
#
# An instance of `Puma::Client` can be used as if it were an IO object;
# for example, it is passed into `IO.select` inside of the `Puma::Reactor`.
# This is accomplished by the `to_io` method, which gets called on any
# non-IO object used with the IO API, such as `IO.select`.
#
# Instances of this class are responsible for knowing if
# the header and body are fully buffered via the `try_to_finish` method.
# They can be used to "time out" a response via the `timeout_at` reader.
class Client
include Puma::Const
extend Puma::Delegation
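
The comment above describes how a `Puma::Client` is treated as an IO object via `to_io`. Below is a minimal, self-contained sketch of that trick (illustrative only: the `ToyClient` class and the pipe are hypothetical stand-ins, not Puma's code): an object that is not an IO can still be monitored by `IO.select` as long as it exposes the underlying handle through `to_io`.

# Hypothetical stand-in for Puma::Client: it wraps an IO, exposes it via
# to_io so IO.select can watch it, and buffers data until a full "request"
# (here just a single line) has arrived.
class ToyClient
  attr_reader :timeout_at

  def initialize(io)
    @io         = io
    @buffer     = +""
    @timeout_at = Time.now + 30 # deadline a caller could use to time out the request
  end

  # IO.select calls to_io on non-IO objects to obtain the real handle.
  def to_io
    @io
  end

  # Returns true once the toy "request" has been fully buffered.
  def try_to_finish
    @buffer << @io.read_nonblock(1024)
    @buffer.include?("\n")
  rescue IO::WaitReadable
    false
  end
end

reader, writer = IO.pipe
client = ToyClient.new(reader)
writer.write("GET / HTTP/1.1\n")

ready, = IO.select([client], nil, nil, 1) # works because ToyClient responds to to_io
puts "fully buffered: #{client.try_to_finish}" if ready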


@@ -5,6 +5,17 @@ require 'puma/plugin'
require 'time'
module Puma
# This class is instantiated by the `Puma::Launcher` and used
# to boot and serve a Ruby application when puma "workers" are needed,
# i.e. when running in multi-process mode. For example: `$ puma -w 5`.
#
# At the core of this class is running an instance of `Puma::Server` which
# gets created via the `start_server` method from the `Puma::Runner` class
# that this inherits from.
#
# An instance of this class will spawn the number of processes passed in
# via the `spawn_workers` method call. Each worker will have its own
# instance of a `Puma::Server`.
class Cluster < Runner
WORKER_CHECK_INTERVAL = 5
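
As a rough sketch of the multi-process model described in that comment (illustrative only, and dependent on `fork`, so not portable to platforms without it), a parent process spawns a fixed number of workers, each of which would host its own server:

worker_count = 5 # analogous to `$ puma -w 5`

pids = worker_count.times.map do |i|
  fork do
    # In Puma, each worker builds its own Puma::Server at this point
    # (via the inherited Runner#start_server); here we just pretend.
    puts "worker #{i} booted in pid #{Process.pid}"
    sleep 1 # stand-in for serving requests
  end
end

pids.each { |pid| Process.wait(pid) }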


@@ -9,6 +9,9 @@ module Puma
# If read buffering is not done, and no other read buffering is performed (such as by an application server
# such as nginx) then the application would be subject to a slow client attack.
#
# Each Puma "worker" process has its own Reactor. For example, if you start puma with `$ puma -w 5`
# then it will have 5 workers and each worker will have its own reactor.
#
# For a graphical representation of how the reactor works see [architecture.md](https://github.com/puma/puma/blob/master/docs/architecture.md#connection-pipeline).
#
# ## Reactor Flow
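
A schematic illustration of the read-buffering idea in that comment, not Puma's Reactor itself: `IO.select` watches many client sockets at once, and a connection is only handed downstream once its request is fully buffered, so a slow client cannot tie up a processing thread. The method name, the 4096-byte read size, and the headers-only completeness check are all assumptions made for the sketch.

# Schematic reactor loop: watch many sockets, buffer what each one sends,
# and only yield a connection downstream once its request is fully buffered.
def toy_reactor(sockets)
  buffers = Hash.new { |hash, sock| hash[sock] = +"" }

  until sockets.empty?
    ready, = IO.select(sockets, nil, nil, 1)
    next unless ready

    ready.each do |sock|
      begin
        buffers[sock] << sock.read_nonblock(4096)
      rescue IO::WaitReadable
        next
      rescue EOFError
        sockets.delete(sock)
        next
      end

      # "\r\n\r\n" ends the headers; a real reactor also has to consider
      # Content-Length and chunked bodies before declaring the request done.
      if buffers[sock].include?("\r\n\r\n")
        sockets.delete(sock)
        yield sock, buffers.delete(sock)
      end
    end
  end
end

In Puma it is the fully buffered `Puma::Client` that gets passed along to the thread pool at this point.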


@@ -2,6 +2,9 @@ require 'puma/server'
require 'puma/const'
module Puma
# Generic class that is used by `Puma::Cluster` and `Puma::Single` to
# serve requests. This class spawns a new instance of `Puma::Server` via
# a call to `start_server`.
class Runner
def initialize(cli, events)
@launcher = cli
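
A tiny sketch of that shared-base-class arrangement (the `Toy*` names are illustrative, not Puma's real signatures): a subclass reuses a helper from the base class to build the server it needs.

class ToyRunner
  # In Puma this is where Runner#start_server builds a Puma::Server.
  def start_server
    "a server instance"
  end
end

class ToySingle < ToyRunner
  def run
    puts "booting #{start_server} in this process"
  end
end

ToySingle.new.run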


@@ -23,6 +23,15 @@ require 'socket'
module Puma
# The HTTP Server itself. Serves out a single Rack app.
#
# This class is used by the `Puma::Single` and `Puma::Cluster` classes
# to generate one or more `Puma::Server` instances capable of handling requests.
# Each Puma process will contain one `Puma::Server` instance.
#
# The `Puma::Server` instance pulls requests from the socket, adds them to a
# `Puma::Reactor`, where they are eventually passed to a `Puma::ThreadPool`.
#
# Each `Puma::Server` will have one reactor and one thread pool.
class Server
include Puma::Const
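
A schematic of that per-process pipeline (socket, then client, then reactor, then thread pool) using plain queues and threads; every name here is a stand-in rather than Puma's API.

require 'socket'

server_socket = TCPServer.new("127.0.0.1", 0) # the listening socket
reactor       = Queue.new                     # stands in for the Puma::Reactor
todo          = Queue.new                     # stands in for the thread pool's @todo

acceptor = Thread.new do
  conn = server_socket.accept # the server pulls a request off the socket
  reactor << conn             # ...and adds it to the reactor
end

buffering = Thread.new do
  conn = reactor.pop          # the reactor buffers the request, then
  todo << conn                # passes it on to the thread pool
end

worker = Thread.new do
  conn = todo.pop             # a pool thread finally processes it
  conn.write("HTTP/1.1 200 OK\r\n\r\n")
  conn.close
end

client = TCPSocket.new("127.0.0.1", server_socket.addr[1])
[acceptor, buffering, worker].each(&:join)
puts client.read # => "HTTP/1.1 200 OK"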


@@ -3,6 +3,13 @@ require 'puma/detect'
require 'puma/plugin'
module Puma
# This class is instantiated by the `Puma::Launcher` and used
# to boot and serve a Ruby application when no puma "workers" are needed,
# i.e. when only the "threaded" mode is used. For example: `$ puma -t 1:5`.
#
# At the core of this class is running an instance of `Puma::Server` which
# gets created via the `start_server` method from the `Puma::Runner` class
# that this inherits from.
class Single < Runner
def stats
b = @server.backlog || 0


@@ -1,8 +1,17 @@
require 'thread'
module Puma
# A simple thread pool management object.
# Internal docs for a simple thread pool management object.
#
# Each Puma "worker" has a thread pool to process requests.
#
# First a connection to a client is made in `Puma::Server`. It is wrapped in a
# `Puma::Client` instance and then passed to the `Puma::Reactor` to ensure
# the whole request is buffered into memory. Once the request is ready, it is passed into
# a thread pool via the `Puma::ThreadPool#<<` operator where it is stored in a `@todo` array.
#
# Each thread in the pool has an internal loop where it pulls a request from the `@todo` array
# and processes it.
class ThreadPool
class ForceShutdown < RuntimeError
end
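
To make the `@todo` / `<<` flow concrete, here is a minimal pool in the same spirit (a sketch only; Puma's real `ThreadPool` also handles auto-trimming, thread accounting, and shutdown). Work is appended with `<<`, and each thread loops, pulling one item at a time off the shared array:

class ToyThreadPool
  def initialize(size, &block)
    @block     = block
    @todo      = []
    @mutex     = Mutex.new
    @not_empty = ConditionVariable.new

    @threads = size.times.map do
      Thread.new do
        loop do
          work = @mutex.synchronize do
            @not_empty.wait(@mutex) while @todo.empty?
            @todo.shift
          end
          break if work == :shutdown
          @block.call(work)
        end
      end
    end
  end

  # Mirrors the `Puma::ThreadPool#<<` idea: append work and wake a thread.
  def <<(work)
    @mutex.synchronize do
      @todo << work
      @not_empty.signal
    end
  end

  def shutdown
    @threads.size.times { self << :shutdown }
    @threads.each(&:join)
  end
end

pool = ToyThreadPool.new(5) { |request| puts "processing #{request}" }
3.times { |i| pool << "request #{i}" }
pool.shutdown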
@@ -153,6 +162,32 @@ module Puma
end
end
# This method is used by `Puma::Server` to let the server know when
# the thread pool can pull more requests from the socket and
# pass them to the reactor.
#
# The general idea is that the thread pool can only work on a fixed
# number of requests at the same time. If it is already processing that
# number of requests then it is at capacity. If another Puma process has
# spare capacity, then the request can be left on the socket so the other
# worker can pick it up and process it.
#
# For example: if there are 5 threads but only 4 are working on
# requests, this method will not wait and the `Puma::Server`
# can pull a request right away.
#
# If there are 5 threads and all 5 of them are busy, then it will
# pause here, and wait until the `not_full` condition variable is
# signaled; usually this indicates that a request has been processed.
#
# It's important to note that even though the server might accept another
# request, it might not be added to the `@todo` array right away.
# For example, if a slow client has only sent a header but not a body,
# then the `@todo` array would stay the same size while the reactor works
# to buffer the request. In that scenario the next call to this
# method would not block, and another request would be added into the reactor
# by the server. This would continue until a fully buffered request
# makes it through the reactor and can then be processed by the thread pool.
def wait_until_not_full
@mutex.synchronize do
while true
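
To illustrate the blocking behaviour described in that comment, here is a schematic capacity gate. The names echo the prose (`@todo`, `@max`, `not_full`), but the bookkeeping is simplified compared to Puma's real accounting of spawned versus waiting threads: the caller blocks while the pool is at capacity and is woken when a slot frees up.

class ToyCapacityGate
  def initialize(max)
    @max      = max
    @todo     = []
    @mutex    = Mutex.new
    @not_full = ConditionVariable.new
  end

  # Called by the "server" thread: block while the pool is at capacity.
  def wait_until_not_full
    @mutex.synchronize do
      @not_full.wait(@mutex) while @todo.size >= @max
    end
  end

  # Hand a buffered request to the pool.
  def <<(work)
    @mutex.synchronize { @todo << work }
  end

  # Called when a thread finishes a request: free the slot and wake the server.
  def done(work)
    @mutex.synchronize do
      @todo.delete(work)
      @not_full.signal
    end
  end
end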