- Added a remainingReapTime field to the table event.
Without it, a node that did not have state for the element
was marking the element for deletion with the maximum reapTime.
This made it possible for the entry to keep being resynced
between nodes forever, defeating the purpose of the reap time
itself.
- On broadcast of the table event, the owner was being rewritten
with the local node name. This was not correct, because the owner
should remain the original one carried in the message
(a receive-side sketch of both fixes follows below).
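
A minimal receive-side sketch of both fixes, using simplified
stand-in types; the names (tableEvent, entry, reapEntryInterval) are
illustrative assumptions, not the actual networkdb types:

    package networkdbsketch

    import "time"

    // reapEntryInterval stands in for the maximum tombstone lifetime;
    // the name and value are assumptions for illustration only.
    const reapEntryInterval = 30 * time.Minute

    // tableEvent is a pared-down stand-in for the gossiped table event,
    // carrying the remainingReapTime field described above.
    type tableEvent struct {
        nodeName          string
        key               string
        remainingReapTime time.Duration
    }

    // entry is the locally stored tombstone for a deleted table key.
    type entry struct {
        node     string // original owner, taken from the event
        deleting bool
        reapTime time.Duration
    }

    // handleDeleteEvent builds the tombstone for a delete event the node
    // had no prior state for: it adopts the sender's remaining reap time
    // instead of restarting from the maximum, and it keeps the owner
    // carried in the message rather than the local node name.
    func handleDeleteEvent(ev tableEvent) entry {
        reap := ev.remainingReapTime
        if reap <= 0 || reap > reapEntryInterval {
            reap = reapEntryInterval
        }
        return entry{
            node:     ev.nodeName,
            deleting: true,
            reapTime: reap,
        }
    }
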
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Otherwise the operation will unnecessarily block
for five seconds.
- This is particularly noticeable on graceful
shutdown of the daemon in a one-node cluster.
Signed-off-by: Alessandro Boch <aboch@docker.com>
- Do not run the risk of suppressing meaningful messages
for the rest of the cluster, as many services depend
on it, like the service records and the distributed
load balancers.
Signed-off-by: Alessandro Boch <aboch@docker.com>
Currently, if there is any transient gossip failure in any node, the
recovery process depends on other nodes propagating the information
indirectly. If these transient failures affect all the nodes that
this node has in its memberlist, then this node will be permanently
cut off from the gossip channel. Added node state management code in
networkdb to address these problems by trying to rejoin the cluster via
the failed nodes when there is a failure. This also necessitates adding
new messages, called node event messages, to differentiate
between node leave and node failure.
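
A rough sketch of the rejoin path, assuming a helper that returns the
addresses of nodes currently marked failed and an arbitrary retry
interval; only memberlist.Join is the real API, the rest of the names
are illustrative:

    package networkdbsketch

    import (
        "time"

        "github.com/hashicorp/memberlist"
    )

    // retryInterval is how often to retry; the value is an assumption.
    const retryInterval = 60 * time.Second

    // reconnectLoop keeps a list of nodes that were marked failed (as
    // opposed to having gracefully left) and periodically asks memberlist
    // to re-join through them, so a transient gossip failure on every peer
    // does not leave this node permanently cut off from the gossip channel.
    func reconnectLoop(ml *memberlist.Memberlist, failedNodes func() []string, stopCh <-chan struct{}) {
        ticker := time.NewTicker(retryInterval)
        defer ticker.Stop()
        for {
            select {
            case <-stopCh:
                return
            case <-ticker.C:
                addrs := failedNodes()
                if len(addrs) == 0 {
                    continue
                }
                // Join returns the number of nodes successfully contacted;
                // one successful contact pulls this node back into the cluster.
                if n, err := ml.Join(addrs); err == nil && n > 0 {
                    return
                }
            }
        }
    }
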
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When broadcasting a table event, make sure the broadcast queue is
valid. The network may have been removed while the broadcast was
being sent.
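
A minimal sketch of the guard, with a pared-down network struct
standing in for networkdb's per-network state (the field name is an
assumption); TransmitLimitedQueue and its QueueBroadcast method are
memberlist's real types:

    package networkdbsketch

    import "github.com/hashicorp/memberlist"

    // network is a pared-down stand-in for networkdb's per-network state.
    type network struct {
        tableBroadcasts *memberlist.TransmitLimitedQueue
    }

    // queueTableEvent drops the broadcast if the network was removed (or
    // its queue torn down) while the event was being prepared, instead of
    // dereferencing a nil queue.
    func queueTableEvent(networks map[string]*network, nid string, b memberlist.Broadcast) {
        n, ok := networks[nid]
        if !ok || n == nil || n.tableBroadcasts == nil {
            return
        }
        n.tableBroadcasts.QueueBroadcast(b)
    }
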
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Convert all networkdb core message types from Go types to
protobuf message types. This facilitates future modification of the
message structure without breaking backward compatibility.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Network DB is a network scoped gossip database built
on top of hashicorp/memberlist providing an eventually
consistent state store.
It limits the scope of the gossip and periodic bulk syncing
for table entries to only the nodes which participate in the
network to which the gossip belongs. This design makes the
gossip layer scale better and consumes resources only for the
network state that the node participates in.
Since the complete state for a network is maintained by all nodes
participating in the network, all nodes will eventually converge
to the same state.
NetworkDB also provides facilities for users of the package to
watch any table (or all tables) and get notified of state changes
of interest that happened anywhere in the cluster, once that state
change eventually finds its way to the watcher's node.
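
A hedged usage sketch of the watch facility; the constructor, Config
fields, Watch signature and event type names below are assumptions
drawn from this description and may not match the package exactly:

    package main

    import (
        "fmt"

        "github.com/docker/libnetwork/networkdb"
    )

    func main() {
        // Assumed API shapes: check the package for the exact
        // constructor, Config fields and Watch return values.
        nDB, err := networkdb.New(&networkdb.Config{NodeName: "node1"})
        if err != nil {
            fmt.Println(err)
            return
        }
        if err := nDB.JoinNetwork("net1"); err != nil {
            fmt.Println(err)
            return
        }

        // Subscribe to one table on one network; events originating
        // anywhere in the cluster are delivered once the corresponding
        // state change reaches this node.
        ch, cancel := nDB.Watch("endpoint_table", "net1", "")
        defer cancel()

        for ev := range ch.C {
            switch e := ev.(type) {
            case networkdb.CreateEvent:
                fmt.Printf("created %s/%s\n", e.Table, e.Key)
            case networkdb.UpdateEvent:
                fmt.Printf("updated %s/%s\n", e.Table, e.Key)
            case networkdb.DeleteEvent:
                fmt.Printf("deleted %s/%s\n", e.Table, e.Key)
            }
        }
    }
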
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>