Created a method to handle node state changes together with the
associated cleanup operation.
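A minimal Go sketch of such a handler; the type, field, and method
names (db, handleNodeStateChange) are placeholders, not the actual
networkDB code:

    package main

    import "sync"

    type nodeState int

    const (
        nodeActive nodeState = iota
        nodeLeft
        nodeFailed
    )

    // db stands in for the networkDB state; names are hypothetical.
    type db struct {
        sync.Mutex
        nodes   map[string]nodeState
        entries map[string]string // table key -> owner node
    }

    // handleNodeStateChange records the new state and, when a node
    // leaves or fails, performs the associated cleanup of the table
    // entries that node owned.
    func (d *db) handleNodeStateChange(node string, s nodeState) {
        d.Lock()
        defer d.Unlock()
        d.nodes[node] = s
        if s != nodeLeft && s != nodeFailed {
            return
        }
        for key, owner := range d.entries {
            if owner == node {
                delete(d.entries, key)
            }
        }
        delete(d.nodes, node)
    }

    func main() {
        d := &db{
            nodes:   map[string]nodeState{"n1": nodeActive},
            entries: map[string]string{"k1": "n1"},
        }
        d.handleNodeStateChange("n1", nodeFailed)
    }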
Realign the testing client with the new diagnostic interface
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
This commit introduces the possibility to enable a debug mode
for the networkDB. This allows opening a TCP port on localhost
that exposes the networkDB API for debugging purposes.
The API can be discovered using curl localhost:<port>/help.
It supports JSON output if json is passed as a URL query
parameter, and pretty printing if passing json=pretty.
All binary values are serialized in base64 encoding; this can be
skipped by passing the unsafe option as a URL query parameter.
A simple Go client will follow up.
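As a rough illustration only (not the actual networkDB code), a Go
handler honoring the json, json=pretty, and unsafe query parameters
could look like this; the endpoint list and the port are assumptions:

    package main

    import (
        "encoding/base64"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // help lists the available debug endpoints (illustrative set).
    func help(w http.ResponseWriter, r *http.Request) {
        endpoints := []string{"/help", "/dump", "/networks"}
        q := r.URL.Query()
        switch {
        case q.Get("json") == "pretty":
            out, _ := json.MarshalIndent(endpoints, "", "  ")
            w.Write(out)
        case q.Has("json"):
            out, _ := json.Marshal(endpoints)
            w.Write(out)
        default:
            fmt.Fprintf(w, "%v\n", endpoints)
        }
    }

    // encodeValue base64-encodes binary values unless the caller
    // passed the unsafe query parameter.
    func encodeValue(r *http.Request, v []byte) string {
        if r.URL.Query().Has("unsafe") {
            return string(v)
        }
        return base64.StdEncoding.EncodeToString(v)
    }

    func main() {
        http.HandleFunc("/help", help)
        // Bind to localhost only, as described above; port assumed.
        http.ListenAndServe("127.0.0.1:2000", nil)
    }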
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Update the Dockerfile; curl is used for the healthcheck.
Add /dump for creating the goroutine stack trace.
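A minimal sketch of what a /dump handler can do with Go's
runtime.Stack; serving the trace over HTTP on this port is an
assumption here (the real endpoint may write to a file instead):

    package main

    import (
        "net/http"
        "runtime"
    )

    // dump writes the stack trace of every goroutine to the
    // response, growing the buffer until runtime.Stack fits.
    func dump(w http.ResponseWriter, r *http.Request) {
        buf := make([]byte, 1<<16)
        for {
            n := runtime.Stack(buf, true) // true = all goroutines
            if n < len(buf) {
                buf = buf[:n]
                break
            }
            buf = make([]byte, 2*len(buf))
        }
        w.Write(buf)
    }

    func main() {
        http.HandleFunc("/dump", dump)
        http.ListenAndServe("127.0.0.1:2000", nil)
    }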
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Separate the hostname from the node identifier. All the messages
exchanged on the network contain a nodeName field that today is
hostname-uniqueid. Because these are encoded as strings in the
protobuf without any length restriction, they play a role in the
efficiency of the protocol itself. If the hostname is very long,
the overhead increases and degrades the performance of the
database itself, since each single cycle by default allows a
1400-byte payload.
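By way of illustration, a short fixed-length identifier decouples
the wire-level node name from the hostname and keeps the
per-message overhead constant; this sketch assumes a random hex
ID, not the actual scheme used:

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // newNodeID returns a short fixed-length identifier. Using it
    // instead of hostname-uniqueid as the nodeName on the wire keeps
    // every gossip message small, regardless of hostname length.
    func newNodeID() (string, error) {
        b := make([]byte, 5) // 10 hex chars, a tiny share of the 1400-byte payload
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return hex.EncodeToString(b), nil
    }

    func main() {
        id, err := newNodeID()
        if err != nil {
            panic(err)
        }
        fmt.Println(id) // e.g. "a1b2c3d4e5"
    }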
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Changed the loop to be per network. The previous implementation
took a ReadLock to update the reapTime, but now with the
residualReapTime the bulkSync also uses the same ReadLock,
creating possible issues between concurrent reads and updates of
the value.
The new logic fetches the list of networks and proceeds to clean
up network by network, locking the database and releasing it
after each network, as sketched below. This should ensure fair
locking, avoiding keeping the database blocked for too long.
Note: the ticker does not guarantee that the reap logic runs
precisely every reapTimePeriod; the documentation says that if
the routine takes too long, ticks will be skipped. In case of a
slowdown of the process itself it is possible that the lifetime
of the deleted entries increases. This should still not be a big
problem, because the residual reap time is now propagated among
all the nodes; a slower node will cause the deleted entry to be
re-propagated multiple times, but the state will remain
consistent.
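A minimal sketch of the per-network locking pattern described
above; type and field names are placeholders, not the networkDB
implementation:

    package main

    import (
        "sync"
        "time"
    )

    type entry struct {
        deleting bool
        reapTime time.Duration
    }

    // db stands in for the networkDB state; names are hypothetical.
    type db struct {
        sync.RWMutex
        networks map[string]map[string]*entry // network -> key -> entry
    }

    // reapState decrements the residual reap time network by
    // network, taking and releasing the write lock once per network
    // instead of holding a lock across the whole pass.
    func (d *db) reapState(reapPeriod time.Duration) {
        // Snapshot the network list under a short read lock.
        d.RLock()
        nets := make([]string, 0, len(d.networks))
        for nid := range d.networks {
            nets = append(nets, nid)
        }
        d.RUnlock()

        for _, nid := range nets {
            d.Lock()
            for key, e := range d.networks[nid] {
                if !e.deleting {
                    continue
                }
                e.reapTime -= reapPeriod
                if e.reapTime <= 0 {
                    delete(d.networks[nid], key)
                }
            }
            d.Unlock()
        }
    }

    func main() {
        d := &db{networks: map[string]map[string]*entry{
            "net1": {"k1": {deleting: true, reapTime: 30 * time.Minute}},
        }}
        d.reapState(5 * time.Second)
    }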
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Added a remainingReapTime field in the table event.
Without it, a node that did not have state for the element was
marking the element for deletion with the maximum reapTime.
This created the possibility of the entry being resynced between
nodes forever, defeating the purpose of the reap time itself.
- On broadcast of the table event, the node owner was being
rewritten with the local node name. This was not correct, because
the owner should remain the original one of the message (see the
sketch below).
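A sketch of the two fixes with hypothetical type and field names
(the real message is a protobuf, not this struct):

    package main

    import (
        "fmt"
        "time"
    )

    // tableEvent mirrors the gossip message. ResidualReapTime
    // carries how much reap time is left at the sender, so a
    // receiver without prior state does not restart from the max.
    type tableEvent struct {
        NodeName         string // original owner, must survive rebroadcast
        Key              string
        ResidualReapTime time.Duration
    }

    // rebroadcast forwards an event without rewriting the owner.
    func rebroadcast(ev tableEvent, localNode string) tableEvent {
        // Bug being fixed: ev.NodeName = localNode
        // The owner must remain the node that originated the event.
        return ev
    }

    func main() {
        ev := tableEvent{NodeName: "node-1", Key: "k1", ResidualReapTime: 12 * time.Minute}
        out := rebroadcast(ev, "node-2")
        fmt.Println(out.NodeName, out.ResidualReapTime) // node-1 12m0s
    }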
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Diagnose framework that exposes a REST API for db interaction
- Dockerfile to build the test image
- Periodic print of stats regarding queue size (see the sketch
after this list)
- Client and server side for integration with testkit
- Added test write-delete-leave-join
- Added test write-delete-wait-leave-join
- Added test write-wait-leave-join
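For the periodic stats print, a minimal sketch using a ticker; the
interval, queue type, and log format are assumptions:

    package main

    import (
        "log"
        "sync"
        "time"
    )

    type queue struct {
        sync.Mutex
        items []string
    }

    func (q *queue) len() int {
        q.Lock()
        defer q.Unlock()
        return len(q.items)
    }

    // printStats logs the queue size at a fixed interval.
    func printStats(q *queue, every time.Duration) {
        t := time.NewTicker(every)
        defer t.Stop()
        for range t.C {
            log.Printf("stats: queue size %d", q.len())
        }
    }

    func main() {
        q := &queue{}
        go printStats(q, 10*time.Second)
        time.Sleep(time.Minute) // keep the sketch alive
    }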
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>