The previous logic did not properly handle the case of a node
failing and joining back within a short period of time.
The issue was in the handling of the network messages.
When a node joins, it syncs with other nodes, and these pass it
the whole list of nodes that, to the best of their knowledge, are
part of the network. At this point, if the joining node learns that
node A is part of the network, it saves that information before
having received the notification that node A is actually alive
(coming from memberlist).
If node A has failed, the source node will receive the failure
notification while the newly joined node will not, because
memberlist never advertised node A as available to it. In this
case the new node will never purge node A from its state and,
worse, will accept any table notification where node A is the
owner, ending up out of sync with the rest of the cluster.
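As a rough sketch of the intended behavior (the names pendingNodes,
activeNodes and markNodeAlive below are illustrative, not the actual
networkdb identifiers): a node learned only through the join-time sync
is kept separate from the nodes confirmed alive by memberlist, so an
entry that never gets confirmed can be purged instead of being trusted
as a table owner.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type nodeState struct {
	name     string
	lastSeen time.Time
}

type nodeDB struct {
	sync.Mutex
	pendingNodes map[string]*nodeState // learned via sync, not yet confirmed alive
	activeNodes  map[string]*nodeState // confirmed alive by memberlist
}

// addNodeFromSync records a node advertised by a peer during the join
// sync. It is NOT treated as alive until memberlist confirms it.
func (db *nodeDB) addNodeFromSync(name string) {
	db.Lock()
	defer db.Unlock()
	if _, ok := db.activeNodes[name]; ok {
		return
	}
	db.pendingNodes[name] = &nodeState{name: name, lastSeen: time.Now()}
}

// markNodeAlive promotes a node once memberlist reports it as joined.
func (db *nodeDB) markNodeAlive(name string) {
	db.Lock()
	defer db.Unlock()
	n, ok := db.pendingNodes[name]
	if !ok {
		n = &nodeState{name: name}
	}
	n.lastSeen = time.Now()
	delete(db.pendingNodes, name)
	db.activeNodes[name] = n
}

// purgeStalePending drops nodes that were learned via sync but never
// confirmed alive, e.g. a node that failed before the local join.
func (db *nodeDB) purgeStalePending(maxAge time.Duration) {
	db.Lock()
	defer db.Unlock()
	for name, n := range db.pendingNodes {
		if time.Since(n.lastSeen) > maxAge {
			delete(db.pendingNodes, name)
		}
	}
}

func main() {
	db := &nodeDB{
		pendingNodes: make(map[string]*nodeState),
		activeNodes:  make(map[string]*nodeState),
	}
	db.addNodeFromSync("nodeA") // learned only from a peer's bulk sync
	db.markNodeAlive("nodeB")   // confirmed alive by memberlist
	db.purgeStalePending(0)     // nodeA was never confirmed, so it is dropped
	fmt.Println(len(db.pendingNodes), len(db.activeNodes)) // prints: 0 1
}
```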
This commit also contains some code cleanup around node management.
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Update Dockerfile; curl is used for the healthcheck
Add /dump for creating the goroutine stack trace
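A minimal sketch of what such a /dump handler can look like (the listen
address and the server wiring here are illustrative assumptions, not the
actual libnetwork code):

```go
package main

import (
	"log"
	"net/http"
	"runtime/pprof"
)

func main() {
	// Returning every goroutine's stack trace makes it possible to
	// inspect a hung daemon, e.g. with: curl http://127.0.0.1:8000/dump
	http.HandleFunc("/dump", func(w http.ResponseWriter, r *http.Request) {
		// Debug level 2 prints the full stack trace of every goroutine.
		pprof.Lookup("goroutine").WriteTo(w, 2)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}
```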
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Diagnose framework that exposes a REST API for db interaction (see the sketch after this list)
- Dockerfile to build the test image
- Periodic print of stats regarding queue size
- Client and server side for integration with testkit
- Added write-delete-leave-join
- Added test write-delete-wait-leave-join
- Added write-wait-leave-join
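A minimal sketch of the shape of such a REST endpoint for db
interaction (paths, parameter names, and the in-memory map are
illustrative assumptions, not the actual diagnose framework API; the
real handlers operate on networkdb tables):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
)

// An in-memory map stands in for the distributed table.
var (
	mu    sync.Mutex
	store = map[string]string{}
)

func main() {
	// Write an entry: curl "http://127.0.0.1:2000/createentry?key=k&value=v"
	http.HandleFunc("/createentry", func(w http.ResponseWriter, r *http.Request) {
		key, value := r.FormValue("key"), r.FormValue("value")
		mu.Lock()
		store[key] = value
		mu.Unlock()
		fmt.Fprintln(w, "OK")
	})
	// Read an entry back: curl "http://127.0.0.1:2000/getentry?key=k"
	http.HandleFunc("/getentry", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		value := store[r.FormValue("key")]
		mu.Unlock()
		fmt.Fprintln(w, value)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:2000", nil))
}
```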
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>