Since the node name randomization fix, we need to make sure that we
purge the old node with the same prefix and same IP from the nodes
database if it is still present, because the stale entry causes
unnecessary reconnect attempts.
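
A minimal sketch of the purge, assuming node names are a host prefix
plus a random suffix; the type and helper below are illustrative, not
the actual networkdb structures:

    import (
        "net"
        "strings"
    )

    // Illustrative only; the real networkdb node bookkeeping differs.
    type nodeEntry struct {
        Name string
        Addr net.IP
    }

    // purgeStale removes any entry that shares the name prefix and IP of a
    // newly seen node, so we stop reconnect attempts to the old identity.
    func purgeStale(nodes map[string]*nodeEntry, incoming *nodeEntry) {
        prefix := strings.SplitN(incoming.Name, "-", 2)[0] // assumed "<host>-<random>" name format
        for name, old := range nodes {
            if name == incoming.Name {
                continue
            }
            if strings.HasPrefix(name, prefix) && old.Addr.Equal(incoming.Addr) {
                delete(nodes, name)
            }
        }
    }
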
Also added a change to avoid unnecessarily updating the local Lamport
time and only do it if we are ready to do a push-pull on a join. A join
should happen only when the node is bootstrapped or when trying to
reconnect with a failed node.
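
A rough sketch of the intended condition, using a hand-rolled Lamport
clock rather than the clock type networkdb actually uses:

    // Illustrative Lamport clock; not the real networkdb clock type.
    type lamportClock struct{ counter uint64 }

    func (l *lamportClock) Increment() uint64 {
        l.counter++
        return l.counter
    }

    // join bumps the local Lamport time only when a push-pull will actually
    // happen, i.e. on bootstrap or when reconnecting to a failed node.
    func join(clock *lamportClock, bootstrapping, reconnecting bool) {
        if !bootstrapping && !reconnecting {
            return // no push-pull, so leave the local Lamport time untouched
        }
        ltime := clock.Increment()
        _ = ltime // ...carry ltime in the join/push-pull messages
    }
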
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
If the user provided a non-zero listen address, honor it and bind only
to that address. Right now it is not honored and we always bind to all
IP addresses on the host.
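
Roughly, honoring the listen address means overriding memberlist's
default bind of 0.0.0.0; the Config fields are hashicorp/memberlist's,
the wrapper function is only a sketch:

    import "github.com/hashicorp/memberlist"

    // newMemberlist binds only to the caller-supplied listen address when one
    // is given, instead of the default 0.0.0.0 (all interfaces on the host).
    func newMemberlist(listenAddr string, port int) (*memberlist.Memberlist, error) {
        config := memberlist.DefaultLANConfig()
        if listenAddr != "" {
            config.BindAddr = listenAddr // honor the user-provided listen address
        }
        config.BindPort = port
        return memberlist.Create(config)
    }
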
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently, if there is any transient gossip failure in any node, the
recovery process depends on other nodes propagating the information
indirectly. If these transient failures affect all the nodes that this
node has in its memberlist, then this node will be permanently cut off
from the gossip channel. Added node state management code in networkdb
to address these problems by trying to rejoin the cluster via the
failed nodes when there is a failure. This also necessitates adding new
messages, called node event messages, to differentiate between node
leave and node failure.
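
The recovery amounts to periodically retrying a join through the nodes
we marked as failed; a sketch under invented names, not the actual
state-management code:

    import (
        "time"

        "github.com/hashicorp/memberlist"
    )

    // retryFailedNodes is illustrative: keep attempting to rejoin the cluster
    // through nodes we saw fail, so a transient failure on our side does not
    // leave this node permanently cut off from the gossip channel.
    func retryFailedNodes(mlist *memberlist.Memberlist, failedAddrs func() []string, interval time.Duration, stop <-chan struct{}) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-stop:
                return
            case <-ticker.C:
                if addrs := failedAddrs(); len(addrs) > 0 {
                    mlist.Join(addrs) // best effort; a failure just means we retry next tick
                }
            }
        }
    }
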
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When a node leaves the cluster and quickly rejoins before its node
entry is expired by the other nodes in the cluster, we fail to add it
back to the quick lookup database on rejoin. Fixed it.
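
In sketch form, the fix is to repopulate the quick lookup maps on
rejoin exactly as on first join (reusing the illustrative nodeEntry
type from the earlier sketch):

    // addNode is illustrative: whether the node is brand new or rejoining
    // before its old entry was expired, it must land back in the lookup maps.
    func addNode(byName map[string]*nodeEntry, byAddr map[string][]*nodeEntry, n *nodeEntry) {
        byName[n.Name] = n
        key := n.Addr.String()
        byAddr[key] = append(byAddr[key], n)
    }
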
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
In networkdb we should ignore delete events for entries which do not
exist in the db. This is safe because if the entry does not exist
locally, it was removed much earlier and got purged after the reap
timer, so such a notification is always very stale.

Also, there were duplicate delete notifications being sent to the
clients: one when the actual delete event was received from gossip and
another when the entry was getting reaped. The second notification is
unnecessary and may cause issues with clients that are not coded for
idempotency.
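
A condensed sketch of both behaviours, with invented names for the
table store and the watcher notification:

    // handleDeleteEvent is illustrative: a delete for an entry we do not hold
    // is stale (it was reaped long ago) and is dropped, and watchers are told
    // exactly once, when the delete event arrives, not again at reap time.
    func handleDeleteEvent(table map[string][]byte, key string, notify func(key string)) bool {
        if _, ok := table[key]; !ok {
            return false // unknown entry: stale delete, ignore it
        }
        delete(table, key) // networkdb would keep a tombstone until the reap timer fires
        notify(key)        // the only delete notification sent to clients
        return true
    }
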
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
With this change, all address auto-detection is removed from libnetwork
and the caller takes on the responsibility of providing a proper
advertise-addr in various scenarios (including an externally facing
public advertise-addr with an internally facing private listen-addr).
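
As a sketch of the resulting split, the caller now supplies both
addresses explicitly; the memberlist Config fields are real, the
surrounding wiring is assumed:

    import "github.com/hashicorp/memberlist"

    // newClusterConfig takes both addresses from the caller: an internally
    // facing listen (bind) address and the advertise address that peers are
    // told to dial. Nothing is auto-detected here any more.
    func newClusterConfig(listenAddr, advertiseAddr string, port int) *memberlist.Config {
        config := memberlist.DefaultLANConfig()
        config.BindAddr = listenAddr
        config.BindPort = port
        config.AdvertiseAddr = advertiseAddr
        config.AdvertisePort = port
        return config
    }
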
Signed-off-by: Madhu Venugopal <madhu@docker.com>
When a node goes away, purge all of its network attachments and make
sure we don't attempt bulk syncing to that node once it has been
removed.
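
A sketch of the cleanup with an illustrative attachment table keyed by
network ID:

    // handleNodeLeave is illustrative: drop the departed node from every
    // network it was attached to. Since bulk-sync targets are picked from
    // these attachment sets, the node can no longer be chosen for syncing.
    func handleNodeLeave(networkNodes map[string]map[string]struct{}, nodeName string) {
        for nid, members := range networkNodes {
            delete(members, nodeName)
            if len(members) == 0 {
                delete(networkNodes, nid)
            }
        }
    }
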
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Convert all networkdb core message types from Go message types to
protobuf message types. This facilitates future modification of the
message structure without breaking backward compatibility.
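
In Go terms, the change means every core message now travels through
the protobuf runtime; the helpers below are only a sketch and assume a
gogo/protobuf-generated type satisfying proto.Message:

    import "github.com/gogo/protobuf/proto"

    // encodeMessage / decodeMessage sketch the wire path: because the payload
    // is a generated protobuf type, new fields can be added later without
    // breaking nodes that still run the older message definition.
    func encodeMessage(msg proto.Message) ([]byte, error) {
        return proto.Marshal(msg)
    }

    func decodeMessage(buf []byte, msg proto.Message) error {
        return proto.Unmarshal(buf, msg)
    }
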
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When a node joins a network it sends out a gossip event before it
updates its own in-memory state. This can create a race where the node
gets the event back from a remote node before we update the in-memory
state, and then treats that as the latest state. To avoid this race,
always generate the gossip after updating the local state.
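
The fix is purely an ordering change; in sketch form, with invented
names:

    // joinNetwork is illustrative of the ordering: commit the attachment to
    // the local in-memory state first, then queue the gossip event, so an
    // echo of the event from a remote node can no longer race our own update.
    func joinNetwork(localNetworks map[string]bool, broadcast func(networkID string), networkID string) {
        localNetworks[networkID] = true // 1. update in-memory state
        broadcast(networkID)            // 2. only then gossip the join event
    }
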
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
NetworkDB is a network-scoped gossip database built on top of
hashicorp/memberlist, providing an eventually consistent state store.
It limits the scope of the gossip and the periodic bulk syncing of
table entries to only the nodes which participate in the network to
which the gossip belongs. This design makes the gossip layer scale
better and consume resources only for the network state that the node
participates in.
Since the complete state for a network is maintained by all nodes
participating in the network, all nodes will eventually converge
to the same state.
NetworkDB also provides facilities for users of the package to watch
any table (or all tables) and get notified of state changes of
interest that happened anywhere in the cluster, once that state change
eventually finds its way to the watcher's node.
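
A rough usage sketch of the watch facility; the constructor,
JoinNetwork, and Watch signatures shown here are assumptions about the
package's API, not a reference:

    import (
        "fmt"
        "log"

        "github.com/docker/libnetwork/networkdb"
    )

    func watchEndpointTable() {
        // Assumed API shape only; concrete signatures may differ.
        nDB, err := networkdb.New(&networkdb.Config{NodeName: "node1", BindAddr: "10.0.0.5"})
        if err != nil {
            log.Fatal(err)
        }
        if err := nDB.JoinNetwork("net1"); err != nil {
            log.Fatal(err)
        }
        ch, cancel, err := nDB.Watch("endpoint_table", "net1", "")
        if err != nil {
            log.Fatal(err)
        }
        defer cancel()
        for ev := range ch.C {
            // Delivered once the remote state change reaches this node.
            fmt.Printf("table event: %#v\n", ev)
        }
    }
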
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>