ipfs-cluster/pstoremgr/pstoremgr_test.go


Feat: emancipate Consensus from the Cluster component (2018-04-28 22:22:23 +00:00)

This commit promotes the Consensus component (and Raft) to a fully independent
component like the others, passed to NewCluster during initialization. Cluster
(the main component) no longer creates the consensus layer internally. This
triggered a number of breaking changes, explained below.

Motivation: future work will require the possibility of running Cluster with a
consensus layer that is not Raft. The "consensus" layer is in charge of
maintaining two things:

* The current cluster peerset, as required by the implementation
* The current cluster pinset (shared state)

While pinset maintenance has always lived in the consensus layer, peerset
maintenance was handled by the main component (via the "peers" key in the
configuration) AND by the Raft component (internally), which generated lots of
confusion: if the user edited the peers in the configuration they would be
greeted with an error. The bootstrap process (adding a peer to an existing
cluster) and its configuration key also complicated things, since the main
component performed it, but only when the consensus was initialized and in
single-peer mode. On top of this, the peerstore (the list of peer addresses in
the libp2p host) was mixed up with the peerset, when the two need not be
linked.

Initializing the consensus layer before calling NewCluster exposed all the
difficulties of keeping the old approach, so the following changes have been
introduced:

* Remove the "peers" and "bootstrap" keys from the configuration: we no longer
  edit or save the configuration files. That practice required the process to
  have write permissions on the file containing the private key, made
  deployments (for example with Puppet) difficult because the configuration
  mutated from its initial version, and required constant maintenance to keep
  "peers" and "bootstrap" correct as peers were added or removed. A loud and
  detailed error message is now shown when starting cluster with an old config,
  along with instructions on how to move forward.
* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in ipfs,
  the peerstore is not persisted because it can be rebuilt from the network
  bootstrappers and the DHT. Cluster should probably also allow discovery of
  peer addresses (when not bootstrapping, since in that case we already have
  them), but in the meantime we read and persist the peerstore addresses for
  cluster peers in this file, separate from the configuration. Note that DNS
  multiaddresses are now fully supported and no IPs are saved when we have DNS
  multiaddresses for a peer.
* The former "peer_manager" code is now a pstoremgr module, providing utilities
  to parse, add, list and generally maintain the libp2p host peerstore,
  including operations on the PeerstoreFile. This "pstoremgr" can also be
  extended to perform address autodiscovery and other things independently from
  Cluster.
* Create and initialize Raft outside of the main Cluster component: since we
  can now launch Raft independently from Cluster, we have more degrees of
  freedom. A new "staging" option when creating the object allows a Raft peer
  to be launched in staging mode, waiting to be added to a running consensus
  and thus not electing itself as leader. This also lets us track when the peer
  has become a Voter, which only happens once it has caught up with the state,
  something that was previously unreliable.
* The Raft configuration now includes an InitPeerset key, which allows
  providing a peerset for new peers and which is ignored when staging==true.
  The whole Raft initialization code is cleaner and more robust now.
* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
  --bootstrap flag works as before (additionally allowing a comma-separated
  list of entries). Bootstrapping initializes Raft with staging == true and
  then calls Join in the main cluster component. Only when the Raft peer
  transitions to Voter does consensus become ready and cluster become Ready.
  This is cleaner, works better and is less complex than before (which
  supported both flags and config values). We also back up and clean the state
  automatically whenever we are bootstrapping.
* ipfs-cluster-service no longer runs the daemon. Starting cluster now requires
  "ipfs-cluster-service daemon". The daemon-specific flags (bootstrap, alloc)
  are now flags of the daemon subcommand. Here we mimic ipfs ("ipfs" does not
  start the daemon but prints help) and pave the path for merging both service
  and ctl in the future.

While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way to
launch a cluster peer, and harder to make mistakes.

As a side effect, the PR also:

* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be a Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

Fix #787: Connectivity fixes (2019-05-23 16:41:33 +00:00)

Currently, unless doing Join() (--bootstrap), we do not connect to any peers on
startup. We do, however, load the peerstore file, and Raft automatically
connects to older peers to figure out who the leader is; DHT bootstrap, once
Raft was working, did the rest. For CRDTs we need to connect to peers on a
normal boot as well, because otherwise, unless bootstrapping, this does not
happen even if the peerstore contains known peers. This introduces a number of
changes:

* Move peerstore file management back inside the Cluster component, which was
  already in charge of saving the peerstore file.
* Keep saving all "known addresses", but load them with a non-permanent TTL so
  that peers we have not connected to for a long time get cleaned up.
* "Bootstrap" (connect) to a small number of peers during Cluster component
  creation.
* Bootstrap the DHT as soon as possible after this, so that other cluster
  components can initialize with a working peer-discovery mechanism.
* The CRDT Trust() method now:
  * protects the trusted peer ID in the connection manager,
  * gives that peer top priority in the PeerManager (see below), and
  * marks its addresses as permanent in the peerstore.

The PeerManager now attaches priorities to peers when importing them and can
order them according to that priority. As a result, peers with high priority
are saved first in the peerstore file, and when the file is loaded the first
entries are given the highest priority. This means that during startup we
connect to "trusted peers" first (because they were tagged with priority in the
previous run and saved at the top of the list). Once connected to a small
number of peers, the background DHT bootstrap process does the rest and
discovers the network. All this makes the peerstore file a "bootstrap" list for
CRDTs, and we will attempt to connect to peers on that list until some of those
connections succeed.
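
The test file below exercises this pstoremgr Manager. As a rough orientation,
here is a minimal sketch of how the Manager might be used outside of a test,
based only on the calls that appear in this file (New, ImportPeer, PeerInfos).
The peer IDs, the "peerstore" file path and the printed fields are
illustrative, and any Manager behavior beyond these calls is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/ipfs/ipfs-cluster/pstoremgr"

	libp2p "github.com/libp2p/go-libp2p"
	peer "github.com/libp2p/go-libp2p-core/peer"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	ctx := context.Background()

	// The host whose peerstore the Manager maintains.
	h, err := libp2p.New(ctx)
	if err != nil {
		panic(err)
	}

	// A second host stands in for a remote cluster peer so that we have a
	// valid peer ID to work with.
	remote, err := libp2p.New(ctx)
	if err != nil {
		panic(err)
	}

	// "peerstore" is the path of the PeerstoreFile where addresses are
	// persisted (the same value used by the test below).
	pm := pstoremgr.New(ctx, h, "peerstore")

	// Import an address without connecting, using a non-permanent TTL,
	// mirroring the ImportPeer call in TestManager.
	addr, _ := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/1234/p2p/" + peer.Encode(remote.ID()))
	if _, err := pm.ImportPeer(addr, false, time.Minute); err != nil {
		panic(err)
	}

	// List what the host peerstore now knows about that peer.
	for _, pi := range pm.PeerInfos([]peer.ID{remote.ID()}) {
		fmt.Println(pi.ID)
	}
}
```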
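The trust-related changes described in the second commit ultimately act on the
libp2p host. The sketch below illustrates, with plain go-libp2p-core calls,
what "protect the peer in the connection manager and mark its addresses as
permanent" amounts to. It is an illustration of the idea only, not the actual
crdt Trust() implementation, and the trust() helper name is hypothetical.

```go
package main

import (
	"context"
	"fmt"

	libp2p "github.com/libp2p/go-libp2p"
	host "github.com/libp2p/go-libp2p-core/host"
	peer "github.com/libp2p/go-libp2p-core/peer"
	peerstore "github.com/libp2p/go-libp2p-core/peerstore"
	ma "github.com/multiformats/go-multiaddr"
)

// trust keeps connections to pid from being pruned by the connection manager
// and records its addresses with a permanent TTL so they never expire from the
// host peerstore.
func trust(h host.Host, pid peer.ID, addrs []ma.Multiaddr) {
	h.ConnManager().Protect(pid, "trusted")
	h.Peerstore().AddAddrs(pid, addrs, peerstore.PermanentAddrTTL)
}

func main() {
	ctx := context.Background()
	h, _ := libp2p.New(ctx)      // error handling elided in this sketch
	remote, _ := libp2p.New(ctx) // stands in for a trusted cluster peer

	trust(h, remote.ID(), remote.Addrs())
	fmt.Println(h.Peerstore().Addrs(remote.ID()))
}
```
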
package pstoremgr

import (
	"context"
	"os"
	"testing"
"time"
"github.com/ipfs/ipfs-cluster/test"
libp2p "github.com/libp2p/go-libp2p"
peer "github.com/libp2p/go-libp2p-core/peer"
ma "github.com/multiformats/go-multiaddr"
)

// makeMgr builds a Manager backed by a fresh libp2p host, persisting peer
// addresses to a local "peerstore" file.
func makeMgr(t *testing.T) *Manager {
	h, err := libp2p.New(context.Background())
	if err != nil {
		t.Fatal(err)
	}
	return New(context.Background(), h, "peerstore")
}

// clean removes the on-disk peerstore file created by the Manager, if any.
func clean(pm *Manager) {
	if path := pm.peerstorePath; path != "" {
		os.RemoveAll(path)
	}
}

// testAddr appends the /p2p/<peer-id> component for pid to the given location
// and returns the resulting multiaddress.
func testAddr(loc string, pid peer.ID) ma.Multiaddr {
	m, _ := ma.NewMultiaddr(loc + "/p2p/" + peer.Encode(pid))
	return m
}

func TestManager(t *testing.T) {
	pm := makeMgr(t)
	defer clean(pm)
loc := "/ip4/127.0.0.1/tcp/1234"
testAddr := testAddr(loc, test.PeerID1)

	// Import the peer address without connecting, using a non-permanent TTL.
	_, err := pm.ImportPeer(testAddr, false, time.Minute)
if err != nil {
t.Fatal(err)
}
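
	// PeerInfos is given our own ID as well, but should only return an
	// entry for the other peer that was added.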
peers := []peer.ID{test.PeerID1, pm.host.ID()}
pinfos := pm.PeerInfos(peers)
if len(pinfos) != 1 {
t.Fatal("expected 1 peerinfo")
}
if pinfos[0].ID != test.PeerID1 {
t.Error("expected same peer as added")
}
if len(pinfos[0].Addrs) != 1 {
t.Fatal("expected an address")
}
if pinfos[0].Addrs[0].String() != loc {
t.Error("expected same address as added")
}
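
	// Removing the peer should leave no PeerInfos for it.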
pm.RmPeer(peers[0])
pinfos = pm.PeerInfos(peers)
if len(pinfos) != 0 {
t.Fatal("expected 0 pinfos")
}
}
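
// TestManagerDNS checks that when an IP and a DNS multiaddress are imported
// for the same peer, only the DNS address is kept.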
func TestManagerDNS(t *testing.T) {
pm := makeMgr(t)
defer clean(pm)
loc1 := "/ip4/127.0.0.1/tcp/1234"
testAddr1 := testAddr(loc1, test.PeerID1)
loc2 := "/dns4/localhost/tcp/1235"
testAddr2 := testAddr(loc2, test.PeerID1)
err := pm.ImportPeers([]ma.Multiaddr{testAddr1, testAddr2}, false, time.Minute)
if err != nil {
t.Fatal(err)
}
pinfos := pm.PeerInfos([]peer.ID{test.PeerID1})
if len(pinfos) != 1 {
t.Fatal("expected 1 pinfo")
}
if len(pinfos[0].Addrs) != 1 {
t.Error("expected a single address")
}
if pinfos[0].Addrs[0].String() != "/dns4/localhost/tcp/1235" {
t.Error("expected the dns address")
}
}
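
// TestPeerstore checks that addresses saved with SavePeerstoreForPeers can be
// read back by a fresh manager via ImportPeersFromPeerstore.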
func TestPeerstore(t *testing.T) {
pm := makeMgr(t)
defer clean(pm)
loc1 := "/ip4/127.0.0.1/tcp/1234"
testAddr1 := testAddr(loc1, test.PeerID1)
loc2 := "/ip4/127.0.0.1/tcp/1235"
testAddr2 := testAddr(loc2, test.PeerID1)
err := pm.ImportPeers([]ma.Multiaddr{testAddr1, testAddr2}, false, time.Minute)
if err != nil {
t.Fatal(err)
}
err = pm.SavePeerstoreForPeers([]peer.ID{test.PeerID1})
if err != nil {
t.Error(err)
}
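
	// A second manager importing from the saved peerstore file should see
	// both addresses for the peer.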
pm2 := makeMgr(t)
defer clean(pm2)
err = pm2.ImportPeersFromPeerstore(false, time.Minute)
if err != nil {
t.Fatal(err)
}
pinfos := pm2.PeerInfos([]peer.ID{test.PeerID1})
if len(pinfos) != 1 {
t.Fatal("expected 1 peer in the peerstore")
}
if len(pinfos[0].Addrs) != 2 {
t.Error("expected 2 addresses")
}
}

func TestPriority(t *testing.T) {
pm := makeMgr(t)
defer clean(pm)
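// Four distinct peers, each with its own address.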
loc1 := "/ip4/127.0.0.1/tcp/1234"
testAddr1 := testAddr(loc1, test.PeerID1)
loc2 := "/ip4/127.0.0.2/tcp/1235"
testAddr2 := testAddr(loc2, test.PeerID2)
loc3 := "/ip4/127.0.0.3/tcp/1234"
testAddr3 := testAddr(loc3, test.PeerID3)
loc4 := "/ip4/127.0.0.4/tcp/1235"
testAddr4 := testAddr(loc4, test.PeerID4)
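// Import all four peers with the same TTL and no explicit priority.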
err := pm.ImportPeers([]ma.Multiaddr{testAddr1, testAddr2, testAddr3, testAddr4}, false, time.Minute)
if err != nil {
t.Fatal(err)
}
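// PeerInfos should return peers ordered by priority (import order here),
// regardless of the order of the query slice.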
pinfos := pm.PeerInfos([]peer.ID{test.PeerID4, test.PeerID2, test.PeerID3, test.PeerID1})
if len(pinfos) != 4 {
t.Fatal("expected 4 pinfos")
}
if pinfos[0].ID != test.PeerID1 ||
pinfos[1].ID != test.PeerID2 ||
pinfos[2].ID != test.PeerID3 ||
pinfos[3].ID != test.PeerID4 {
t.Error("wrong order of peerinfos")
}
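// Give PeerID1 a larger priority value; that should demote it to the end
// of the ordering, as checked below.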
pm.SetPriority(test.PeerID1, 100)
pinfos = pm.PeerInfos([]peer.ID{test.PeerID4, test.PeerID2, test.PeerID3, test.PeerID1})
if len(pinfos) != 4 {
t.Fatal("expected 4 pinfos")
}
if pinfos[3].ID != test.PeerID1 {
t.Fatal("PeerID1 should be last in the list")
}
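// Save the peerstore file; entries are written in priority order, so
// PeerID1 should end up last in the file.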
err = pm.SavePeerstoreForPeers([]peer.ID{test.PeerID4, test.PeerID2, test.PeerID3, test.PeerID1})
if err != nil {
t.Error(err)
}
pm2 := makeMgr(t)
defer clean(pm2)
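// Reload the file into a fresh manager; priorities are assigned by file
// order, so the saved ordering should be preserved.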
err = pm2.ImportPeersFromPeerstore(false, time.Minute)
if err != nil {
t.Fatal(err)
}
pinfos = pm2.PeerInfos([]peer.ID{test.PeerID4, test.PeerID2, test.PeerID3, test.PeerID1})
if len(pinfos) != 4 {
t.Fatal("expected 4 pinfos")
}
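// PeerID2-4 keep their relative order; PeerID1, demoted before saving,
// remains last.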
if pinfos[0].ID != test.PeerID2 ||
pinfos[1].ID != test.PeerID3 ||
pinfos[2].ID != test.PeerID4 ||
pinfos[3].ID != test.PeerID1 {
t.Error("wrong order of peerinfos")
}
}