connection_pool

Generic connection pooling for Ruby.

MongoDB has its own connection pool. ActiveRecord has its own connection pool. This is a generic connection pool that can be used with anything, e.g. Redis, Dalli and other Ruby network clients.

Usage

Create a pool of objects to share amongst the fibers or threads in your Ruby application:

$memcached = ConnectionPool.new(size: 5, timeout: 5) { Dalli::Client.new }

Then use the pool in your application:

$memcached.with do |conn|
  conn.get('some-count')
end

If all the objects in the connection pool are in use, with will block until one becomes available. If no object is available within :timeout seconds, with will raise a Timeout::Error.
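
For example, if five threads each hold a connection from the $memcached pool above, a sixth caller will wait up to five seconds for a connection to be checked back in; a minimal sketch:

threads = 6.times.map do
  Thread.new do
    $memcached.with do |conn|
      # Only 5 connections exist, so one of these 6 threads waits here until
      # another thread checks its connection back in (or the timeout elapses).
      conn.get('some-count')
    end
  end
end
threads.each(&:join)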

Optionally, you can specify a timeout override using the with-block semantics:

$memcached.with(timeout: 2.0) do |conn|
  conn.get('some-count')
end

This will only modify the checkout timeout for this particular invocation. This is useful if you want to fail fast on certain non-critical sections when a resource is not available, or, conversely, if you are comfortable blocking longer on a particular resource. Note that this override is not supported by the ConnectionPool::Wrapper class described below.
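
For example, a non-critical read can use a short override and fall back to a default when the pool is saturated; a sketch (the 0.2 second value and the nil fallback are illustrative):

begin
  count = $memcached.with(timeout: 0.2) do |conn|
    conn.get('some-count')
  end
rescue Timeout::Error
  # All connections were busy for 0.2 seconds; fail fast with a fallback
  # instead of making the caller wait out the default timeout.
  count = nil
end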

Migrating to a Connection Pool

You can use ConnectionPool::Wrapper to wrap a single global connection, making it easier to migrate existing connection code over time:

$redis = ConnectionPool::Wrapper.new(size: 5, timeout: 3) { Redis.new }
$redis.sadd('foo', 1)
$redis.smembers('foo')

The wrapper uses method_missing to check out a connection, run the requested method, and then immediately check the connection back into the pool. It's not high-performance, so you'll want to port your performance-sensitive code to use with as soon as possible:

$redis.with do |conn|
  conn.sadd('foo', 1)
  conn.smembers('foo')
end

Once you've ported your entire system to use with, you can simply remove Wrapper and use the simpler and faster ConnectionPool.
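
For example, the end state of the migration might look like this (a sketch; only the constructor changes, and every access goes through with):

$redis = ConnectionPool.new(size: 5, timeout: 3) { Redis.new }

$redis.with do |conn|
  conn.sadd('foo', 1)
  conn.smembers('foo')
end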

Shutdown

You can shut down a ConnectionPool instance once it should no longer be used. Further checkout attempts will immediately raise an error, but existing checkouts will continue to work:

cp = ConnectionPool.new { Redis.new }
cp.shutdown { |conn| conn.quit }

Shutting down a connection pool will block until all connections are checked in and closed. Note that shutting down is completely optional; Ruby's garbage collector will reclaim unreferenced pools under normal circumstances.
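
For example, a checkout attempted after shutdown raises immediately; a sketch (the specific error class, ConnectionPool::PoolShuttingDownError, is an assumption about the version in use):

cp = ConnectionPool.new { Redis.new }
cp.shutdown { |conn| conn.quit }

begin
  cp.with { |conn| conn.get('foo') }
rescue ConnectionPool::PoolShuttingDownError
  # The pool has been shut down, so no further checkouts are allowed.
end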

Current State

There are several methods that return information about a pool.

cp = ConnectionPool.new(size: 10) { Redis.new }
cp.size # => 10
cp.available # => 10

cp.with do |conn|
  cp.size # => 10
  cp.available # => 9
end

Notes

  • Connections are lazily created as needed.
  • There is no provision for repairing or checking the health of a connection; connections should be self-repairing. This is true of the Dalli and Redis clients.
  • WARNING: Don't ever use Timeout.timeout in your Ruby code, or you will see occasional silent corruption and mysterious errors. The Timeout API is unsafe and cannot be used correctly, ever. Use proper socket timeout options as exposed by Net::HTTP, Redis, Dalli, etc., as in the sketch below.
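
For example, instead of wrapping calls in Timeout.timeout, pass timeouts to the client itself; a sketch (the exact option names depend on the client library and version):

$redis = ConnectionPool.new(size: 5, timeout: 5) do
  # redis-rb accepts per-socket timeouts; these values are illustrative.
  Redis.new(connect_timeout: 1, read_timeout: 1, write_timeout: 1)
end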

Author

Mike Perham, @mperham, http://mikeperham.com