rails/rails: activerecord/test/cases/hot_compatibility_test.rb

require "cases/helper"
require "support/connection_helper"
class HotCompatibilityTest < ActiveRecord::TestCase
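  # These tests create and alter real tables and drive transactions themselves,
  # so they cannot run inside the wrapping per-test transaction.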
  self.use_transactional_tests = false
  include ConnectionHelper

  setup do
    @klass = Class.new(ActiveRecord::Base) do
      connection.create_table :hot_compatibilities, force: true do |t|
        t.string :foo
        t.string :bar
      end

      def self.name; "HotCompatibility"; end
    end
  end

  teardown do
    ActiveRecord::Base.connection.drop_table :hot_compatibilities
  end

  test "insert after remove_column" do
    # warm cache
    @klass.create!

    # we have 3 columns
    assert_equal 3, @klass.columns.length

    # remove one of them
    @klass.connection.remove_column :hot_compatibilities, :bar

    # we still have 3 columns in the cache
    assert_equal 3, @klass.columns.length

    # but we can successfully create a record so long as we don't
    # reference the removed column
    record = @klass.create! foo: "foo"
    record.reload
    assert_equal "foo", record.foo
  end

  test "update after remove_column" do
    record = @klass.create! foo: "foo"
    assert_equal 3, @klass.columns.length
    @klass.connection.remove_column :hot_compatibilities, :bar
    assert_equal 3, @klass.columns.length
    record.reload
    assert_equal "foo", record.foo
    record.foo = "bar"
    record.save!
    record.reload
    assert_equal "bar", record.foo
  end
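  # The expired-statement behaviour below is PostgreSQL-specific: once any
  # statement fails inside a transaction, the whole transaction is poisoned.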
  if current_adapter?(:PostgreSQLAdapter)
    test "cleans up after prepared statement failure in a transaction" do
      with_two_connections do |original_connection, ddl_connection|
        record = @klass.create! bar: "bar"
        # prepare the reload statement in a transaction
        @klass.transaction do
          record.reload
        end
        assert get_prepared_statement_cache(@klass.connection).any?,
          "expected prepared statement cache to have something in it"
        # add a new column
        ddl_connection.add_column :hot_compatibilities, :baz, :string
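        # reloading now hits a stale prepared statement; inside the open
        # transaction it cannot be deallocated, so the adapter raises instead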
        assert_raise(ActiveRecord::PreparedStatementCacheExpired) do
          @klass.transaction do
            record.reload
          end
        end
        assert_empty get_prepared_statement_cache(@klass.connection),
"expected prepared statement cache to be empty but it wasn't"
end
end
test "cleans up after prepared statement failure in nested transactions" do
with_two_connections do |original_connection, ddl_connection|
record = @klass.create! bar: "bar"
        # prepare the reload statement in a transaction
        @klass.transaction do
          record.reload
        end
        assert get_prepared_statement_cache(@klass.connection).any?,
          "expected prepared statement cache to have something in it"
        # add a new column
        ddl_connection.add_column :hot_compatibilities, :baz, :string
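        # same failure, but raised from inside several nested transaction blocks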
        assert_raise(ActiveRecord::PreparedStatementCacheExpired) do
          @klass.transaction do
            @klass.transaction do
              @klass.transaction do
                record.reload
              end
            end
          end
        end
        assert_empty get_prepared_statement_cache(@klass.connection),
"expected prepared statement cache to be empty but it wasn't"
end
end
end
private
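    # Reach into the adapter's StatementPool internals to read the prepared
    # statements cached for the current process.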
    def get_prepared_statement_cache(connection)
      connection.instance_variable_get(:@statements)
        .instance_variable_get(:@cache)[Process.pid]
    end
    # Rails will automatically clear the prepared statements on the connection
    # that runs the migration, so we use two connections to simulate what would
    # actually happen on a production system; we'd have one connection running the
    # migration from the rake task ("ddl_connection" here), and we'd have another
    # connection in a web worker.
    def with_two_connections
      run_without_connection do |original_connection|
        ActiveRecord::Base.establish_connection(original_connection.merge(pool_size: 2))
        begin
          ddl_connection = ActiveRecord::Base.connection_pool.checkout
          begin
            yield original_connection, ddl_connection
          ensure
            ActiveRecord::Base.connection_pool.checkin ddl_connection
          end
        ensure
          ActiveRecord::Base.clear_all_connections!
        end
      end
    end
end