mirror of https://github.com/puma/puma.git synced 2022-11-09 13:48:40 -05:00
Commit graph

21 commits

Author SHA1 Message Date
David Rodríguez
6f99e91d93 Fix uninitialized constant error
Otherwise I continuously get the following error:

```
Error in reactor loop escaped: uninitialized constant
Puma::MiniSSL::SSLError (NameError)
/path/to/puma-2.12.2-java/lib/puma/reactor.rb:72:in `block in run_internal'
/path/to/puma-2.12.2-java/lib/puma/reactor.rb:41:in `each'
/path/to/puma-2.12.2-java/lib/puma/reactor.rb:41:in `run_internal'
/path/to/puma-2.12.2-java/lib/puma/reactor.rb:137:in `block in run_in_thread'
```

It does not seem to affect functionality but it's annoying.
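The failure mode above is a `rescue` clause naming `Puma::MiniSSL::SSLError` on a setup (such as puma-java) where MiniSSL never loads, so the constant lookup itself raises `NameError`. A hedged sketch of one way to fix that, not necessarily this commit's exact change, is to make sure the constant exists up front:

```ruby
# Hedged sketch, not necessarily the commit's exact change: define the
# constant even when MiniSSL's native part is absent, so a later
# `rescue Puma::MiniSSL::SSLError` in the reactor loop cannot itself
# raise "uninitialized constant" (NameError).
module Puma
  module MiniSSL
    SSLError = Class.new(IOError) unless defined?(SSLError)
  end
end
```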
2015-07-17 17:00:55 -03:00
Julian Langschaedel
e8d25b30f3 ssl: Add Client Side Certificate Auth
Add the client-side certificate auth feature and handling to puma's MiniSSL. Also expose SSL errors to puma/apps.

 compatibility notes: MRI only

 shell example:

   puma -b 'ssl://127.0.0.1:9292?key=path_to_key&cert=path_to_cert&ca=path_to_ca&verify_mode=force_peer'

 code example: (examples/client_side_ssl)

    app = proc {|env| p env['puma.peercert']; [200, {}, ["hey"]] }

    events = SSLEvents.new($stdout, $stderr)
    server = Puma::Server.new(app, events)

    admin_context             = Puma::MiniSSL::Context.new
    admin_context.key         = KEY_PATH
    admin_context.cert        = CERT_PATH
    admin_context.ca          = CA_CERT_PATH
    admin_context.verify_mode = Puma::MiniSSL::VERIFY_PEER | Puma::MiniSSL::VERIFY_FAIL_IF_NO_PEER_CERT

    server.add_ssl_listener("0.0.0.0", ADMIN_PORT, admin_context)
    server.min_threads = MIN_THREADS
    server.max_threads = MAX_THREADS
    server.persistent_timeout = IDLE_TIMEOUT
    server.run.join
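 a sketch of consuming `puma.peercert` inside the app (illustrative; whether Puma hands the app a PEM string or an already-parsed `OpenSSL::X509::Certificate` may vary by version, so this normalizes both):

```ruby
require 'openssl'

# Hedged sketch: read the client certificate that the example above
# exposes under env['puma.peercert'] when verify_mode demands a peer
# cert. Accept either a PEM string or a parsed certificate object.
app = proc do |env|
  raw  = env['puma.peercert']
  cert = raw.is_a?(OpenSSL::X509::Certificate) ? raw : OpenSSL::X509::Certificate.new(raw)
  [200, { 'Content-Type' => 'text/plain' }, ["hello #{cert.subject}"]]
end
```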

 additional credits: Andy Alness <andy.alness@gmail.com>
2015-06-06 23:15:00 +02:00
Philip Wiebe
32300e81b4 only send 408 if in the data phase. 2014-01-30 17:37:38 -05:00
Philip Wiebe
ace195c9d4 added 408 status on timeout. 2014-01-30 13:23:01 -05:00
Gustav Munkby
62755750a5 Use real blocks instead of Symbol#to_proc 2013-10-31 17:33:44 +01:00
Gustav Munkby
a0e3474d89 Handle IOError closed stream in IO.select
If the other end of a socket is closed, IO.select raises an IOError.
Attempt to fix this by retrying the operation with any closed sockets
removed.
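The remove-and-retry approach this commit describes can be sketched as follows (names are illustrative, not Puma's actual code):

```ruby
require 'socket'

# Sketch of the approach described above: when IO.select raises IOError
# because one of the monitored sockets has been closed, drop the closed
# sockets from the set and retry the select.
def select_surviving(sockets, timeout = 1)
  ready, = IO.select(sockets, nil, nil, timeout)
  ready || []
rescue IOError
  sockets.reject!(&:closed?)
  retry
end
```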
2013-10-28 14:36:54 +01:00
Gustav Munkby
bd15892046 Rewrite Reactor#run_in_thread
- Log both error message and backtrace to STDERR
- Reintroduce pipe closing when thread exits
2013-10-28 14:27:30 +01:00
Gustav Munkby
74ea16df7f Do not close the pipe sockets when retrying 2013-10-28 13:56:45 +01:00
Evan Phoenix
b61a372ada Synchronize all access to @timeouts. Fixes #208 2013-03-18 16:41:59 -07:00
Evan Phoenix
5d1fd4e74b Cleanup pipes properly. Fixes #182 2013-02-04 22:31:40 -08:00
Evan Phoenix
bd5d824ce5 Write 400 on HTTP parse error. Fixes #142 2012-09-05 22:09:42 -07:00
Evan Phoenix
810144e77f Close kept alive sockets on restart. Fixes #144 2012-09-02 23:33:09 -04:00
Evan Phoenix
b2550acfaf Handle more errors trying to read client data. Fixes #138 2012-08-27 10:56:43 -07:00
Evan Phoenix
cd83b2f304 Minor cleanup, use IOError instead of EOFError 2012-08-10 22:41:35 -07:00
Evan Phoenix
870767a246 Properly shutdown the reactor thread 2012-08-10 10:10:30 -07:00
Evan Phoenix
70bbef66cf Work around JRuby buffering the request inside #accept 2012-08-09 16:54:55 -07:00
Evan Phoenix
6b72885be6 Fix errant closing of sockets 2012-07-30 17:12:23 -06:00
Evan Phoenix
765eed127a Properly update @sleep_for when there are timed-out clients 2012-07-24 17:25:03 -07:00
Evan Phoenix
e13d9ba9e9 Be sure to cleanup and close bad client sockets 2012-07-23 17:08:11 -07:00
Evan Phoenix
44c8c1ab50 Some minor cleanup 2012-07-23 17:00:53 -07:00
Evan Phoenix
6777c771d8 Add separate IO reactor to defeat slow clients
Previously, the app thread would be in charge of reading the request
directly from the client. This resulted in a set of slow clients being
able to completely starve the app thread pool and prevent any further
connections from being handled.

This new organization uses a separate reactor thread that is in charge
of responding when a client has more data, buffering the data and
attempting to parse it. Only when the data represents a fully realized
request is it handed to the app thread pool. This means we trust apps
not to starve the pool, but don't trust clients.
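The buffering loop described above can be sketched as follows (a minimal illustration of the reactor idea, not Puma's actual implementation; names are hypothetical):

```ruby
require 'socket'

# Minimal sketch of the reactor idea described above: one thread waits
# on all pending client sockets with IO.select, buffers whatever bytes
# arrive, and hands a connection off only once a full request (here: a
# blank-line-terminated header block) has been buffered. Slow clients
# therefore tie up only the reactor, never the app thread pool.
def reactor_step(clients, buffers)
  ready, = IO.select(clients, nil, nil, 0.1)
  complete = []
  (ready || []).each do |io|
    buffers[io] << io.read_nonblock(4096)
    if buffers[io].include?("\r\n\r\n")
      clients.delete(io)   # fully buffered: ready for the app pool
      complete << io
    end
  rescue EOFError, IOError
    clients.delete(io)     # client went away; drop it
    io.close
  end
  complete
end
```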
2012-07-23 10:26:52 -07:00