* Prepare for upcoming Sidekiq::Config redesign
Adjust the server internals to use a config object rather than referring directly to the Sidekiq module.
Under just the right conditions, we could lose a job:
- Job raises an error
- Retry subsystem catches error and tries to create a retry in Redis but this raises a "Redis down" exception
- Processor catches Redis exception and thinks a retry was created
- Redis comes back online just in time for the job to be acknowledged and lost
That's a very specific and rare set of steps but it can happen.
Instead, have the Retry subsystem raise a specific error signaling that it created a retry. There will be three common cases:
1. Job is successful: job is acknowledged.
2. Job fails, retry is created, Processor rescues specific error: job is acknowledged.
3. Sidekiq::Shutdown is raised: job is not acknowledged.
Now there is another case:
4. Job fails, retry fails, Processor rescues Exception: job is NOT acknowledged. Sidekiq Pro's super_fetch will rescue the orphaned job at some point in the future.
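The acknowledgement decision above can be sketched as follows. This is a hypothetical simplification, not Sidekiq's actual classes: `Handled`, `with_retry_handling`, and `process` are invented names standing in for the retry subsystem's signaling error and the Processor's rescue logic.

```ruby
# Hypothetical sketch of the ack decision described above.
# "Handled" signals that the retry subsystem successfully persisted a retry.
class Handled < RuntimeError; end
class Shutdown < Interrupt; end

# Retry subsystem: run the job; on failure, try to persist a retry in Redis.
# If Redis is down, that failure propagates so the job is NOT acknowledged.
def with_retry_handling(redis_up:)
  yield
rescue Shutdown
  raise
rescue StandardError => e
  # Simulate "Redis down" while persisting the retry: the original ack
  # path must not be reached, so raise instead of signaling Handled.
  raise "Redis down" unless redis_up
  # Retry was persisted; signal the Processor with the specific error.
  raise Handled, e.message
end

# Processor: acknowledge only on success or a Handled failure.
def process(redis_up: true, &job)
  with_retry_handling(redis_up: redis_up, &job)
  :acked
rescue Handled
  :acked      # case 2: retry exists in Redis, safe to ack
rescue Exception
  :not_acked  # cases 3 and 4: super_fetch can recover the orphan later
end
```

With this shape, a Redis outage during retry creation falls through to the generic `rescue Exception` branch and the job stays unacknowledged, rather than being silently lost.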
* Rescue standard errors raised from exception's message
* Set a default message value if it raises an error
* Update default message value
* Add comments for exception_message
* Rework job processing in light of Rails 5's Reloader, see #3221
* Ignore built gems
* Documentation, testing
* Add fallback for 'retry' value server-side, fixes #3234
* Fix job hash reporting in stats
* cleanup
* Add job for AJ testing
* Push jobs with invalid JSON immediately to Dead set, fixes #3296
* Break retry logic into global and local parts, fixes #3306
* fix heisentest