mirror of https://github.com/puma/puma.git synced 2022-11-09 13:48:40 -05:00
Commit graph

7 commits

Author SHA1 Message Date
Nate Berkopec
5f3f489ee8
IO/copy stream (#2923)
* Proof of Concept: Use `IO.copy_stream` to serve files (a minimal sketch of the idea follows the benchmark runs below)

Ref: https://github.com/puma/puma/issues/2697

```
$ benchmarks/wrk/big_response.sh
Puma starting in single mode...
* Puma version: 5.5.0 (ruby 3.0.2-p107) ("Zawgyi")
*  Min threads: 4
*  Max threads: 4
*  Environment: development
*          PID: 17879
* Listening on http://0.0.0.0:9292
Use Ctrl-C to stop
Running 1m test @ http://localhost:9292
  2 threads and 4 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.37ms    5.89ms  48.28ms   94.46%
    Req/Sec     0.88k   148.97     1.07k    82.08%
  Latency Distribution
     50%    2.21ms
     75%    2.78ms
     90%    4.09ms
     99%   35.75ms
  105651 requests in 1.00m, 108.24GB read
Requests/sec:   1758.39
Transfer/sec:      1.80GB
- Gracefully stopping, waiting for requests to finish
```

```
$ benchmarks/wrk/big_file.sh
Puma starting in single mode...
* Puma version: 5.5.0 (ruby 3.0.2-p107) ("Zawgyi")
*  Min threads: 4
*  Max threads: 4
*  Environment: development
*          PID: 18034
* Listening on http://0.0.0.0:9292
Use Ctrl-C to stop
Running 1m test @ http://localhost:9292
  2 threads and 4 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.06ms    1.09ms  20.98ms   97.94%
    Req/Sec     1.85k   150.69     2.03k    89.92%
  Latency Distribution
     50%    0.94ms
     75%    1.03ms
     90%    1.21ms
     99%    4.91ms
  221380 requests in 1.00m, 226.81GB read
Requests/sec:   3689.18
Transfer/sec:      3.78GB
- Gracefully stopping, waiting for requests to finish
```

* Ruby 2.2 compat

* test_puma_server.rb - fixup test_file_body
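
A minimal sketch of the idea (a hypothetical helper, not Puma's actual server internals), assuming a plain client socket and a file path:

```
# Hypothetical sketch: stream a file body straight to the client socket with
# IO.copy_stream instead of reading the whole file into a Ruby String first.
def serve_file(client, path)
  size = File.size(path)
  client.write "HTTP/1.1 200 OK\r\nContent-Length: #{size}\r\n\r\n"
  File.open(path, "rb") do |file|
    # IO.copy_stream can use sendfile(2)/copy_file_range(2) where available,
    # so the body never has to pass through the Ruby heap.
    IO.copy_stream(file, client, size)
  end
ensure
  client.close
end
```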

Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
Co-authored-by: MSP-Greg <Greg.mpls@gmail.com>
2022-09-09 16:30:46 +09:00
Nate Berkopec
87eb88f15d
Make new benchmark output to tmp 2020-05-11 10:41:12 +09:00
Kamil Trzciński
7af9807778 Inject small delay to improve requests distribution
Ruby MRI can execute at most one thread at a time due to the GVL. This results in over-utilisation of some workers when connections are distributed unfavourably.

This change prefers less-busy workers (i.e. those faster to accept the
connection) to improve worker utilisation, as sketched below.
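
An illustrative sketch of the mechanism (the names and delay value are assumptions, not Puma's API):

```
# Sketch: a busier worker waits a little before calling accept, so a
# less-busy worker is more likely to win the accept(2) race.
DELAY_PER_BUSY_THREAD = 0.005 # seconds, illustrative value only

def accept_with_backoff(server_socket, busy_threads)
  # The more threads this worker already has in flight, the longer it waits.
  sleep(DELAY_PER_BUSY_THREAD * busy_threads) if busy_threads > 0
  server_socket.accept
end
```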
2020-05-05 17:06:28 +02:00
Nate Berkopec
94093a6361
SSL benchmarks 2020-03-18 12:53:04 -06:00
Nate Berkopec
c36491756f
Merge pull request from GHSA-84j7-475p-hp8v
A client-controlled header value could contain a CR or LF, allowing an attacker to inject their own HTTP response (see the sketch below).
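
A hedged illustration of the vulnerability class (HTTP response splitting), not Puma's actual patch:

```
# Sketch: a header value containing CR or LF must be rejected (or stripped)
# before it is written to the socket, otherwise an attacker-influenced value
# can terminate the header block and smuggle a second response.
def safe_header(name, value)
  raise ArgumentError, "CR/LF not allowed in header value" if value.match?(/[\r\n]/)
  "#{name}: #{value}\r\n"
end

safe_header("Location", "/ok")                   # => "Location: /ok\r\n"
# safe_header("Location", "/\r\nSet-Cookie: x")  # raises ArgumentError
```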
2020-02-27 11:52:27 -06:00
Nate Berkopec
860c17557c
Add benchmarks for large request bodies and responses 2019-10-13 11:27:58 +02:00
Nate Berkopec
0bbb495236
Start of a benchmark folder 2019-10-02 22:53:03 +02:00