Previously, the validity of a message was verified against the peer that
relayed it, not against the peer that signed it.
Under certain gossip graph configurations this meant that some nodes would only
get messages via untrusted peers, and thus be unable to sync the chain.
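A minimal sketch of the corrected check, assuming a go-libp2p-pubsub-style validator (package and function names here are illustrative, not the project's actual code):

```go
package gossip

import (
	"context"

	"github.com/libp2p/go-libp2p-core/peer"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// signerValidator accepts a message based on who signed it (msg.GetFrom()),
// not on which peer happened to relay it to us.
func signerValidator(isTrusted func(peer.ID) bool) pubsub.ValidatorEx {
	return func(ctx context.Context, relayer peer.ID, msg *pubsub.Message) pubsub.ValidationResult {
		if isTrusted(msg.GetFrom()) { // the signer, not the relayer
			return pubsub.ValidationAccept
		}
		return pubsub.ValidationIgnore
	}
}
```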
trusted_peers and peer_addresses had the same information appended twice when
re-parsing the config: once when parsing the JSON and once when re-parsing with
env-vars.
Add necessary validators.
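A hedged sketch of the kind of fix involved (type and field names are hypothetical): slice fields are reset before each parsing pass, so applying the JSON config and then the env-var overrides does not append the same entries twice.

```go
package config

// Config holds the slice fields that were being duplicated (illustrative).
type Config struct {
	TrustedPeers  []string
	PeerAddresses []string
}

// apply resets slice fields before filling them, so calling it again
// (e.g. when re-parsing with env-var overrides) does not duplicate entries.
func (cfg *Config) apply(trusted, addrs []string) {
	cfg.TrustedPeers = cfg.TrustedPeers[:0]
	cfg.PeerAddresses = cfg.PeerAddresses[:0]
	cfg.TrustedPeers = append(cfg.TrustedPeers, trusted...)
	cfg.PeerAddresses = append(cfg.PeerAddresses, addrs...)
}
```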
There is currently no way to run a dynamic-mode DHT that supports
LAN-based peers. Therefore, for the time being, we are setting
dht.Mode to "server".
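A small sketch of forcing server mode, assuming go-libp2p-kad-dht's Mode option (exact import paths depend on the library versions in use):

```go
package dhtutil

import (
	"context"

	"github.com/libp2p/go-libp2p-core/host"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

// newDHT creates the DHT in server mode, since dynamic mode does not
// currently support LAN-based peers.
func newDHT(ctx context.Context, h host.Host) (*dht.IpfsDHT, error) {
	return dht.New(ctx, h, dht.Mode(dht.ModeServer))
}
```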
* Libp2p protectors no longer needed, use the PSK directly (see the sketch after this list)
* Generate the cluster's 32-byte secret here (helper gone from pnet)
* Switch to go-log/v2 in all places
* DHT bootstrapping not needed. Adjust DHT options for tests.
* Do not rely on the disappeared CidToDsKey and DsKeyToCid functions from dshelp.
* Disable QUIC (does not support private networks)
* Fix tests: autodiscovery started working properly
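A hedged sketch of the PSK-based setup described in the first two items above (assuming go-libp2p's PrivateNetwork option; in some releases libp2p.New does not take a context):

```go
package clusterhost

import (
	"context"
	"crypto/rand"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/pnet"
)

// generateClusterSecret creates the 32-byte secret previously provided by a
// pnet helper.
func generateClusterSecret() ([]byte, error) {
	secret := make([]byte, 32)
	_, err := rand.Read(secret)
	return secret, err
}

// newHost passes the secret directly as a PSK instead of building a protector.
func newHost(ctx context.Context, secret []byte) (host.Host, error) {
	return libp2p.New(ctx, libp2p.PrivateNetwork(pnet.PSK(secret)))
}
```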
This removes the maptracker and sets the stateless tracker as the default (and only) pintracker component.
Because the stateless tracker matches the cluster state against the ongoing operations kept in memory, plus the additional information provided by ipfs-pin-ls, syncing operations are not necessary. Therefore the Sync/SyncAll operations are removed cluster-wide.
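An illustrative sketch (not the actual implementation) of why Sync is unnecessary: a status can always be derived from the shared state, the operations currently in memory, and what IPFS reports as pinned.

```go
package statelesstracker

// statusAll derives a status for every item in the shared state by combining
// in-flight operations with the pin set reported by the IPFS daemon.
func statusAll(state []string, ongoing map[string]string, ipfsPinned map[string]bool) map[string]string {
	status := make(map[string]string, len(state))
	for _, c := range state {
		switch {
		case ongoing[c] != "":
			status[c] = ongoing[c] // e.g. "pinning" or "unpinning"
		case ipfsPinned[c]:
			status[c] = "pinned"
		default:
			status[c] = "pin_error" // in the state but neither pinned nor in progress
		}
	}
	return status
}
```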
* Update go-ds-crdt to 0.1.5 which adds a return statement in case of error fetching a node.
* Increase DAG-Get timeout to 2 minutes
* Downgrade go-bitswap to 0.1.6.
With this commit, cluster peers observe peer-removal events from the
cluster. When such an event occurs, the cluster peer clears
the removed peer from its peerstore.
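A minimal sketch of the cleanup step (assuming the standard libp2p Peerstore API; the event plumbing is omitted):

```go
package peermanager

import (
	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
)

// onPeerRemoved is called when a peer-removal event is observed and drops the
// removed peer's addresses from the local peerstore.
func onPeerRemoved(h host.Host, removed peer.ID) {
	h.Peerstore().ClearAddrs(removed)
}
```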
* Init should take a list of peers
This commit adds `--peers` option to `ipfs-cluster-service init`
`ipfs-cluster-service init --peers <multiaddress,multiaddress>`
- Adds and writes the given peers to the peerstore file
- For the raft config section, adds the peer IDs to `init_peerset`
- For the crdt config section, adds the peer IDs to `trusted_peers`
Specifying "*" as part of "trusted_peers" in the configuration will
result in trusting all peers.
This is useful for private clusters where we don't want to list every
peer ID in the config.
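A hedged sketch of how such a wildcard check can work (the function name is illustrative):

```go
package crdt

// isTrusted returns true when the peer is listed in trusted_peers or when the
// configuration contains the "*" wildcard, which trusts every peer.
func isTrusted(trustedPeers []string, pid string) bool {
	for _, t := range trustedPeers {
		if t == "*" || t == pid {
			return true
		}
	}
	return false
}
```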
Currently, unless doing Join() (--bootstrap), we do not connect to any peers on startup.
We do, however, load the peerstore file, and Raft automatically connects to
older peers to figure out who the leader is, etc. DHT bootstrap, once Raft
is working, does the rest.
For CRDTs we need to connect to peers on a normal boot as well: otherwise, unless
bootstrapping, this does not happen, even if the peerstore contains known peers.
This introduces a number of changes:
* Move peerstore file management back inside the Cluster component, which was
already in charge of saving the peerstore file.
* We keep saving all "known addresses", but we load them with a non-permanent
TTL, so that peers we have not connected to in a long time are eventually cleaned up.
* "Bootstrap" (connect) to a small number of peers during Cluster component creation.
* Bootstrap the DHT asap after this, so that other cluster components can
initialize with a working peer discovery mechanism.
* CRDT Trust() method will now (see the sketch after this list):
* Protect the trusted Peer ID in the conn manager
* Give top priority in the PeerManager to that Peer (see below)
* Mark addresses as permanent in the Peerstore
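A hedged sketch of the Trust() steps listed above (the ConnManager/Peerstore calls are the standard libp2p ones; the peer-manager priority call is illustrative):

```go
package crdt

import (
	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
	"github.com/libp2p/go-libp2p-core/peerstore"
)

// trust protects the peer's connections, marks its addresses as permanent and
// bumps its priority so it is saved first in the peerstore file.
func trust(h host.Host, pm interface{ SetPriority(peer.ID, int) }, pid peer.ID) {
	h.ConnManager().Protect(pid, "trusted-peer") // keep connections open
	addrs := h.Peerstore().Addrs(pid)
	h.Peerstore().AddAddrs(pid, addrs, peerstore.PermanentAddrTTL) // permanent addresses
	pm.SetPriority(pid, 0) // 0 = highest priority (illustrative)
}
```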
The PeerManager now attaches priorities to peers when importing them and is
able to order them according to that priority. The result is that peers with
high priority are saved first in the peerstore file. When we load the peerstore
file, the first entries in it are given the highest priority.
This means that during startup we will connect to "trusted peers" first
(because they have been tagged with priority in the previous run and saved at
the top of the list). Once connected to a small number of peers, we let the
DHT bootstrap process in the background do the rest and discover the network.
All this makes the peerstore file a "bootstrap" list for CRDTs and we will attempt
to connect to peers on that list until some of those connections succeed.
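An illustrative sketch of the ordering described above (the entry type and field names are assumptions):

```go
package peermanager

import "sort"

// addrEntry is an address as stored in the peerstore file, along with the
// priority it was imported with (lower value = higher priority).
type addrEntry struct {
	Addr     string
	Priority int
}

// sortForSave orders entries so that high-priority (trusted) peers are written
// first and therefore connected to first on the next boot.
func sortForSave(entries []addrEntry) {
	sort.SliceStable(entries, func(i, j int) bool {
		return entries[i].Priority < entries[j].Priority
	})
}
```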
This fixes multiple issues in and around tests while increasing TTLs and
delays by 100ms. Among the fixes: races, tests not running with
consensus-crdt, missing log messages, and better initialization.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
TrustedPeers are specified in the configuration. Additional peers
can be added at runtime with Trust/Distrust functions.
Unfortunately, we cannot use consensus.PeerAdd as a way to trust a peer, since
cluster.PeerAdd+Join can be called by any peer and these call consensus.PeerAdd.
The result is that consensus.PeerAdd does a lot in Raft while consensus.Trust does
nothing, whereas in CRDTs consensus.Trust does something but consensus.PeerAdd
does nothing. But this is more or less consistent.
I had thought about this for a very long time, but there were no compelling
reasons to do it. Specifying RPC endpoint permissions becomes, however,
significantly nicer if each Component is a different RPC Service. This also
fixes some naming issues, like having to prefix methods with the component name
to separate them from identically named methods in some other component
(Pin and IPFSPin).
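A hedged sketch of the per-component registration this enables, assuming go-libp2p-gorpc (the service and type names here are illustrative):

```go
package rpcapi

import (
	"context"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/protocol"
	rpc "github.com/libp2p/go-libp2p-gorpc"
)

// ClusterRPCAPI and PinTrackerRPCAPI wrap two different components. Both can
// expose a method called "Status" without clashing, because each is registered
// as its own RPC service.
type ClusterRPCAPI struct{}

func (api *ClusterRPCAPI) Status(ctx context.Context, in string, out *string) error {
	*out = "cluster status for " + in
	return nil
}

type PinTrackerRPCAPI struct{}

func (api *PinTrackerRPCAPI) Status(ctx context.Context, in string, out *string) error {
	*out = "tracker status for " + in
	return nil
}

func register(h host.Host) (*rpc.Server, error) {
	s := rpc.NewServer(h, protocol.ID("/cluster/rpc"))
	if err := s.RegisterName("Cluster", &ClusterRPCAPI{}); err != nil {
		return nil, err
	}
	if err := s.RegisterName("PinTracker", &PinTrackerRPCAPI{}); err != nil {
		return nil, err
	}
	return s, nil
}
```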
This adds a new "crdt" consensus component using go-ds-crdt.
This implies several refactors to make cluster fully consensus-component
independent:
* Delete mapstate and fully adopt dsstate (after people have migrated).
* Return errors from state methods rather than ignoring them.
* Add a new "datastore" module so that we can configure datastores in the
main configuration like other components.
* Let the consensus components fully define the "state.State". Thus, they do
not receive the state, they receive the storage where we put the state (a
go-datastore).
* Allow customizing how the monitor component obtains Peers() (the current
peerset), including avoiding using the current peerset. At the moment the
crdt consensus uses the monitoring component to define the current peerset.
Therefore the monitor component cannot rely on the consensus component to
produce a peerset.
* Re-factor/re-implementation of the "ipfs-cluster-service state"
operations. This includes the disappearance of the "migrate" one.
The CRDT consensus component creates a crdt-datastore (with ipfs-lite)
and uses it to initialize a dsstate. Thus, the crdt-store is elegantly
wrapped. Any modifications to the state get automatically replicated to other
peers. We store all the CRDT DAG blocks in the local datastore.
The consensus components only expose a ReadOnly state, as any modifications to
the shared state should happen through them.
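An illustrative sketch of that split (interface and method names are assumptions, not the project's exact API): the consensus component hands out a read-only view, while writes go through its own log/commit operations.

```go
package consensus

import "context"

// ReadOnlyState is what the consensus component exposes to the rest of the
// cluster: queries only, no writes.
type ReadOnlyState interface {
	List(ctx context.Context) ([]string, error)
	Has(ctx context.Context, c string) (bool, error)
	Get(ctx context.Context, c string) (string, error)
}

// Writes happen exclusively through the consensus component itself, which
// replicates them (Raft log entries or CRDT updates) before applying them
// to the state.
type Consensus interface {
	State(ctx context.Context) (ReadOnlyState, error)
	LogPin(ctx context.Context, c string) error
	LogUnpin(ctx context.Context, c string) error
}
```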
DHT and PubSub facilities must now be created outside of Cluster and passed in
so they can be re-used by different components.