This ensures that keys in the YAML file can be specified as either
symbols or strings, but will always be treated as symbols after loading.
It tries to use ActiveSupport's `deep_symbolize_keys!` method if
ActiveSupport is loaded, otherwise falling back to an inline version.
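A minimal sketch of that fallback, assuming an options hash parsed from the YAML file (the helper name and config path are illustrative):

```ruby
require 'yaml'

opts = YAML.load_file('config/sidekiq.yml')   # hypothetical path

# Inline fallback: recursively convert hash keys to symbols in place.
def symbolize_keys_deep!(hash)
  hash.keys.each do |k|
    symkey = k.respond_to?(:to_sym) ? k.to_sym : k
    hash[symkey] = hash.delete(k)
    symbolize_keys_deep!(hash[symkey]) if hash[symkey].is_a?(Hash)
  end
end

# Prefer ActiveSupport's implementation when it has been loaded.
if opts.respond_to?(:deep_symbolize_keys!)
  opts.deep_symbolize_keys!
else
  symbolize_keys_deep!(opts)
end
```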
Closes #3672.
Back out the previous change to the exception handler signature. Instead,
log the main part of the Redis exception at error level, then use the
original exception handler as-is (at warn level) for the backtrace. This
is a compatibility compromise insofar as the backtrace is now at a lower
log level, but alerting on these errors most likely matches the main
error strings, which are still logged explicitly before the very verbose
backtrace.
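The resulting flow looks roughly like this (a sketch; `retrieve_work` and the message text are assumptions):

```ruby
begin
  work = @strategy.retrieve_work
rescue => ex
  # Alert-friendly summary stays at error level...
  Sidekiq.logger.error("Error fetching job: #{ex.message}")
  # ...while the unchanged handler logs the verbose backtrace at warn level.
  handle_exception(ex)
  sleep(1)
end
```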
The change that fixes mperham/sidekiq#3673 actually broke existing
exception handlers due to differences in the expected parameter count.
This fixes it explicitly. Normally that wouldn't seem to matter, but
various monitoring gems install their own Sidekiq exception handlers,
and I don't want to break them.
Extends the `ExceptionHandler` to support multiple log levels and
additional context text so it can be used when the `Processor`
encounters an exception while fetching new work. The added options keep
the new behavior close to the old behavior (same log level, same fetch
error message).
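A sketch of the extended handler as described (the option names here are assumptions, not the exact API):

```ruby
def handle_exception(ex, ctx = {})
  level = ctx.delete(:level) || :error        # default matches the old behavior
  msg   = ctx.delete(:message) || ex.message  # optional context, e.g. the fetch error text
  logger.public_send(level, msg)
  logger.public_send(level, ex.backtrace.join("\n")) if ex.backtrace
end
```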
Fixesmperham/sidekiq#3673
The current implementation of the #each method uses Redis's ZRANGE to
paginate the iteration into multiple lightweight calls. It performs
this pagination in descending score order, but each page is returned
from Redis in ascending order. The result is that the final iteration
through the whole set is not sorted properly. Here's an example with a
page of size 3:
Redis set: 1, 2, 3, 4, 5, 6, 7, 8, 9
JobSet.to_a: 7, 8, 9, 4, 5, 6, 1, 2, 3
This fixes it at virtually no performance cost (each page is reversed in
Ruby), and all the items end up perfectly sorted in descending score order.
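A sketch of the fixed pagination, simplified from the real #each (it pages backwards through the sorted set with negative indices):

```ruby
page_size = 50
page = -1
loop do
  # Page from the end of the sorted set; Redis returns each page ascending.
  elements = Sidekiq.redis do |conn|
    conn.zrange(name, page * page_size, (page * page_size) + page_size - 1, with_scores: true)
  end
  break if elements.empty?
  # Reverse each page in Ruby so iteration stays in descending score order.
  elements.reverse_each do |element, score|
    yield SortedEntry.new(self, score, element)
  end
  page -= 1
end
```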
* Loosen Redis version dep to `>= 3.3.5, < 5`
* Bump redis-namespace for the looser Redis version dep. Pending a gem
release of https://github.com/resque/redis-namespace/pull/136.
Use `redis.connection` where we can and fall back to `redis._client`
where we need (to inspect timeout and scheme).
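Roughly (a sketch; which methods exist depends on the redis-rb version):

```ruby
info = redis.connection   # public API: { host:, port:, db:, id:, location: }
# Only the (private) client exposes the timeout and scheme we need to inspect.
client = redis.respond_to?(:_client) ? redis._client : redis.client
client.timeout
client.scheme             # e.g. "redis" vs "unix"
```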
References #3617
Currently when a server middleware mutates arguments and then the job
fails, it will get re-queued with the mutated arguments, not the
original arguments.
This is unexpected and (IMO) faulty behavior as that same middleware
will now see the already-mutated arguments when the job gets executed
again.
This changes the behavior to use the pristine, unmutated job hash when
requeueing the failed job. It also includes a failing test case for the
old code.
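A sketch of the fix (method names are assumptions; the deep clone is the important part):

```ruby
# Deep-copy the freshly parsed job hash before any middleware can mutate it.
pristine = Marshal.load(Marshal.dump(job_hash))

begin
  middleware.invoke(worker, job_hash, queue) do
    worker.perform(*job_hash['args'])
  end
rescue Exception => ex
  # Re-enqueue with the original, unmutated arguments.
  attempt_retry(worker, pristine, queue, ex)
  raise
end
```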
* Add test case for sv locale
* Use Rack::Utils to parse locale header
* Take "q" value into account
* Make '*' match the default locale.
* Add test for available_locales
* Correct test case sv -> en
* Add missing test cases for Safari requests
* Add missing require needed to run a single test file
* Reimplement WebHelpers#locale to handle regions in the header (see the sketch after this list)
Implementation inspired by:
https://github.com/iain/http_accept_language/blob/master/lib/http_accept_language/parser.rb
Also see:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4
* Add docs and references
* Add failing test cases for pt-br, pt-pt, pt (examples taken from Chrome & Safari)
* Add more test cases for Mac + Chrome + UK English + US English
* Make test cases for 'pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4' and 'pt-pt' pass
* Make the special case 'ru,en' work (equal q-values)
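Putting the pieces together, the reimplemented lookup behaves roughly like this (a sketch assuming an `available_locales` list and an "en" default):

```ruby
require 'rack/utils'

def locale(header)
  # Rack::Utils.q_values parses "pt-PT,pt;q=0.8,en;q=0.4" into [[lang, q], ...].
  pairs = Rack::Utils.q_values(header)
  # Sort by q descending, preserving header order for ties (the 'ru,en' case).
  sorted = pairs.each_with_index.sort_by { |(_, q), i| [-q, i] }.map(&:first)
  sorted.each do |lang, _|
    return 'en' if lang == '*'              # '*' matches the default locale
    lang = lang.downcase
    return lang if available_locales.include?(lang)
    region_free = lang.split('-').first     # "pt-br" falls back to "pt"
    return region_free if available_locales.include?(region_free)
  end
  'en'
end
```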
Sidekiq will better handle jobs with malformed payloads. Any job which raises a JSON::ParserError will immediately move to the Dead set. Update the API to degrade gracefully when trying to render bad JSON in the queue, scheduled, or dead sets. These payloads most often come from other languages where the JSON is pieced together manually and pushed to Redis.
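Conceptually, the processor-side handling becomes (a sketch; `parse_or_kill` is a hypothetical name and the exact flow differs):

```ruby
def parse_or_kill(jobstr)
  Sidekiq.load_json(jobstr)
rescue JSON::ParserError
  # An unparseable payload can never succeed; skip retries entirely and
  # park it in the Dead set where it can be inspected in the Web UI.
  Sidekiq::DeadSet.new.kill(jobstr)
  nil
end
```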
In #2531, we saw how an IO exception in the logger could cause a job to fail and be deleted before it reached the RetryJobs block, causing job loss. To fix this, we disabled job acknowledgement until job execution starts, but this has the bad side effect of duplicating jobs if the user is running a reliable fetch scheme and the error happens after the RetryJobs middleware but before execution starts.
Instead, we flip the middleware ordering; logging now happens within the retry block. We would lose context-specific logging within retry, so we move the context log setup out of the middleware and into the Processor. With these changes, we can properly retry and acknowledge even if there are errors within the initial server middleware and executor calls.
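The resulting shape (a conceptual sketch of the 4.x flow, not the literal code):

```ruby
stats(worker, job_hash, queue) do
  retry_jobs(job_hash, queue) do   # RetryJobs outermost: failures below still retry/ack
    logging(job_hash) do           # logging now runs inside the retry block
      worker.perform(*job_hash['args'])
    end
  end
end
```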
This code path has been reimplemented in Sidekiq 5.0 so this change only applies to 4.x.
* Integrate Percy.io for visual regression tests
* Configure Travis so Percy runs on latest ruby only.
* Wrap Percy::Capybara in a helper method
* Lock capybara and related testing gems to major versions
* Adjust Percy.io integration so PERCY_ENV=0 prevents Percy from being required
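The helper reduces to something like this (the method name is illustrative):

```ruby
def percy_snapshot(page, name)
  return if ENV['PERCY_ENV'] == '0'   # PERCY_ENV=0 prevents Percy from being required
  require 'percy/capybara'
  Percy::Capybara.snapshot(page, name: name)
end
```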
* Rework job processing in light of Rails 5's Reloader, see #3221
* Ignore built gems
* Documentation, testing
* Add fallback for 'retry' value server-side, fixes #3234
* Fix job hash reporting in stats
* cleanup
* Add job for AJ testing
* Push jobs with invalid JSON immediately to Dead set, fixes #3296
* Break retry logic into global and local parts, fixes #3306 (see the sketch after this list)
* fix heisentest
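The global/local retry split looks roughly like this (a sketch; names modeled on the eventual JobRetry API):

```ruby
def process(jobstr, queue)
  # Global: any failure here (even before the payload parses) still gets
  # retry bookkeeping against the raw job string.
  global_retry(jobstr, queue) do
    job_hash = Sidekiq.load_json(jobstr)
    # Local: job-aware handling that honors the job's own 'retry' option.
    local_retry(job_hash, queue) do
      execute_job(job_hash)
    end
  end
end
```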
To emulate the real behavior, the job obtains an `enqueued_at` attribute
when it is placed in a queue. Since `perform_in` and `perform_at` do not
put the job in a queue immediately, the attribute is not set for those
methods.
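For example (a usage sketch; `MyWorker` is a hypothetical worker class):

```ruby
require 'sidekiq'
require 'sidekiq/testing'
Sidekiq::Testing.fake!

class MyWorker
  include Sidekiq::Worker
end

MyWorker.perform_async(1)
MyWorker.jobs.last['enqueued_at']   # present: the job was placed in a queue

MyWorker.perform_in(3600, 2)
MyWorker.jobs.last['enqueued_at']   # nil: a scheduled job isn't in a queue yet
```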
When I moved the reloader inside the block so that any errors it raised
would be handled properly, the `job` local variable was pushed into a
nested scope, which meant it wasn't accessible from the rescue block any
more. This changed the meaning of `job` in that rescue block from the
local variable to the `attr_reader` with the same name.
We don't need to reload the application before parsing the job payload,
so we can move this work outside the reloader block so that the job hash
is accessible in the rescue block again.
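The fix in sketch form (assuming the processor shape described above; helper names are assumptions):

```ruby
def process(jobstr)
  # Parse before entering the reloader: no reload is needed to read JSON,
  # and the local stays in scope for the rescue clause below.
  job = Sidekiq.load_json(jobstr)
  @reloader.call do
    execute(job)
  end
rescue Exception => ex
  handle_exception(ex, job)   # `job` is the local variable again, not the attr_reader
  raise
end
```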
The reloader might raise an error instead of yielding control. If this
happens, the job will silently fail to run, without being reenqueued and
without logging any indication of what happened.
This change doesn't prevent the job from being lost, but at least if the
error handlers are called it will be possible to diagnose the problem.
* Allow sessions to be disabled
* Set sessions to false or to a hash
* Test session settings
* Remove unnecessary code
* Add instance setters to Sidekiq::Web
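In sketch form (option names follow this PR's description; treat the exact API as an assumption):

```ruby
Sidekiq::Web.set :sessions, false                 # disable sessions entirely
Sidekiq::Web.set :sessions, secret: 'k' * 64      # or configure them with a hash
Sidekiq::Web.sessions = false                     # the new instance-style setter
```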
Situation:
We are using Sidekiq Pro with ActiveSupport.
When I passed an ActiveSupport::TimeWithZone object to perform_at,
`TypeError: can't convert ActiveSupport::TimeWithZone into an exact
number` occurred.
Problem:
A Time can't add an ActiveSupport::TimeWithZone object (Time#+ expects
a numeric).
Solution:
We can convert any Time-like object to a Float and use that for
comparison and arithmetic.
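A minimal sketch of that normalization, modeled on Sidekiq's client-side scheduling (treat method and key names as a sketch rather than the exact patch):

```ruby
# Normalize the timestamp to a Float before any comparison.
# #to_f works uniformly for Time, ActiveSupport::TimeWithZone, and Numeric.
def perform_at(timestamp, *args)
  ts  = timestamp.to_f
  now = Time.now.to_f
  ts  = now + ts if ts < 1_000_000_000    # small values are relative intervals (perform_in)
  item = { 'class' => self, 'args' => args }
  item['at'] = ts if ts > now             # omit 'at' for times in the past: run immediately
  client_push(item)
end
```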