Tests cover local and sharded adds of files.
The ipfs mock and the ipfs block put/get calls have been cleaned up.
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
Non-recursive cdag pins.
Shards are priority-pinned to their allocated peer.
The cdag, meta and shard pin types are now used.
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
4 PinTypes specify how a CID is pinned (sketched below).
Changes to Pin and Unpin to handle the different PinTypes.
Tests for the different PinTypes.
Migration to the new state format using the new Pin datastructures.
The PinTypes used internally have limited visibility by default.
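A minimal Go sketch of how four such types might be declared (the
identifiers are illustrative, not the exact ones used):

    package api

    // PinType specifies how cluster treats a pinned CID.
    type PinType int

    const (
        // DataType: a regular pin, pinned recursively in ipfs.
        DataType PinType = iota
        // MetaType: the root of sharded content; tracked, not pinned
        // directly.
        MetaType
        // ClusterDAGType: the cluster DAG linking all shards; pinned
        // non-recursively.
        ClusterDAGType
        // ShardType: a shard of the content, pinned by its allocated
        // peer.
        ShardType
    )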
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
A sharder config provides a default shard size (see the sketch below).
getAllocations calls newly exposed RPCs.
ipld cbor cluster DAG node construction.
Logic to handle flushing the final shard.
Logic to initialize and finalize shard sessions.
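As an illustration, the config section could boil down to something like
this (the field name is an assumption):

    package sharder

    // Config is a simplified sketch of the sharder configuration section.
    type Config struct {
        // AllocSize is the default shard size, in bytes, used when a
        // session does not specify its own.
        AllocSize uint64
    }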
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
Write basic scaffolding to include a sharding component in cluster.
Sketch out a high-level implementation in pseudocode.
Share thoughts on upcoming design challenges.
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
Adds an RPC call to put a block in ipfs and the IPFSConnector method
implementing it. In the ctl add handler, the cluster REST API reads
blocks from a channel and puts them into IPFS via RPC, as sketched below.
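A rough sketch of that handler loop, using simplified stand-in types
(the real RPC method and block type may differ):

    package cluster

    import "context"

    // NodeWithMeta stands in for a serialized ipld block sent over RPC.
    type NodeWithMeta struct {
        Cid  string
        Data []byte
    }

    // blockPutter stands in for the RPC client used by the REST API.
    type blockPutter interface {
        IPFSBlockPut(ctx context.Context, n NodeWithMeta) error
    }

    // putBlocks drains the channel filled by the add handler and puts
    // each block into the local ipfs daemon via RPC.
    func putBlocks(ctx context.Context, rpc blockPutter,
        blocks <-chan NodeWithMeta) error {
        for b := range blocks {
            if err := rpc.IPFSBlockPut(ctx, b); err != nil {
                return err
            }
        }
        return nil
    }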
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
The monitor component should be in charge of deciding how best to send
metrics to other peers and what that means.
This adds the PublishMetric() method to the component interface
and moves that functionality from the Cluster main component to the
basic monitor.
There is a behaviour change. Before, the metrics were sent only to
the leader, and the leader was the only peer to broadcast them everywhere.
Now, all peers broadcast all metrics everywhere. This is mostly
because we should not rely on the consensus layer providing a Leader(), so
we are taking the chance to remove this dependency.
Note that, in any case, pubsub monitoring should replace the
existing basic monitor. This is just paving the ground.
Additionally, in order not to duplicate the multiRPC code
in the monitor, I have moved that functionality to go-libp2p-gorpc
and added an rpcutil library to cluster, which includes useful
methods to perform multiRPC requests (some of them existed in
util.go, others are new and help with handling multiple contexts, etc.).
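The kind of helper involved, sketched here with illustrative signatures
(this is not the actual rpcutil API):

    package rpcutil

    import (
        "context"
        "sync"
    )

    // multiCall fans the same call out to several peers, pairing each
    // destination with its own context, and returns one error per peer.
    func multiCall(ctxs []context.Context, peers []string,
        call func(context.Context, string) error) []error {

        errs := make([]error, len(peers))
        var wg sync.WaitGroup
        for i := range peers {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                errs[i] = call(ctxs[i], peers[i])
            }(i)
        }
        wg.Wait()
        return errs
    }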
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This commit promotes the Consensus component (and Raft) to become a fully
independent thing like other components, passed to NewCluster during
initialization. Cluster (main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes
that I will explain below.
Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
* The current cluster peerset, as required by the implementation
* The current cluster pinset (shared state)
While the pinset maintenance has always been in the consensus layer, the
peerset maintenance was handled by the main component (starting with the
"peers" key in the configuration) AND the Raft component (internally),
and this generated lots of confusion: if the user edited the peers in the
configuration they would be greeted with an error.
The bootstrap process (adding a peer to an existing cluster) and configuration
key also complicated many things, since the main component did it, but only
when the consensus was initialized and in single peer mode.
In all this we also mixed the peerstore (the list of peer addresses in the
libp2p host) with the peerset, when the two need not be linked.
By initializing the consensus layer before calling NewCluster, all the
difficulties in maintaining the current implementation in the same way
have come to light. Thus, the following changes have been introduced:
* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.
* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be re-built from the
network bootstrappers and the DHT. Cluster should probably also allow
discoverability of peer addresses (when not bootstrapping, as in that case
we have them), but in the meantime, we read and persist the peerstore
addresses for cluster peers in this file, separate from the configuration
(a sample appears after this list). Note that dns multiaddresses are now
fully supported and no IPs are saved when we have DNS multiaddresses for
a peer.
* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.
* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a raft peer
to be launched in Staging mode, waiting to be added to a running consensus
and thus not electing itself as leader, as happened before. This
additionally allows us to track when the peer has become a Voter, which
only happens when it has caught up with the state, something that was
wonky previously.
* The raft configuration now includes an InitPeerset key, which allows
providing a peerset for new peers and which is ignored when staging==true.
The whole Raft initialization code is way cleaner and more robust now.
* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated
list of entries). What bootstrap does is initialize Raft with staging == true
and then call Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready, and cluster becomes Ready.
This is cleaner, works better and is less complex than before (when we
supported both flags and config values). We also automatically back up and
clean the state whenever we are bootstrapping.
* ipfs-cluster-service no longer runs the daemon. Starting cluster now
requires "ipfs-cluster-service daemon" (see the example after this list).
The daemon-specific flags (bootstrap, alloc) are now flags of the daemon
subcommand. Here we mimic ipfs ("ipfs" does not start the daemon but prints
help) and pave the path for merging both service and ctl in the future.
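For illustration, the peerstore file is just a list of multiaddresses,
one per line (these entries are made up):

    /dns4/cluster1.example.com/tcp/9096/ipfs/QmPeerID1
    /ip4/192.0.2.10/tcp/9096/ipfs/QmPeerID2

And launching the daemon looks like this (the bootstrap address is,
again, made up):

    ipfs-cluster-service daemon
    ipfs-cluster-service daemon --bootstrap /ip4/192.0.2.10/tcp/9096/ipfs/QmPeerID1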
While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way
to launch a cluster peer, and more difficult to make mistakes.
As a side effect, the PR also:
* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This adds context support to the ipfscluster.IPFSConnector interface and
then to the implementation of that interface in ipfsconn/ipfshttp.
This allows calls from MapPinTracker to cancel requests made
to the local IPFS node.
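A simplified sketch of the idea (the real interface has more methods and
richer signatures):

    package ipfscluster

    import "context"

    // IPFSConnector methods take a context, so a caller such as
    // MapPinTracker can cancel an in-flight request to the ipfs daemon.
    type IPFSConnector interface {
        Pin(ctx context.Context, cid string) error
        Unpin(ctx context.Context, cid string) error
    }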
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
Added the ConnectGraph type and its serialization (sketched below).
Added a CLI command hitting the cluster API.
Added a cluster API client method + endpoint calling into RPC.
Added RPC calling into the main cluster component.
Added the cluster component's function to collect the ConnectGraph.
Added functionality in ipfsconn to retrieve ipfs swarm peers.
Added dot file printing given a ConnectGraphSerial.
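A simplified stand-in for the graph type (field names are illustrative):

    package api

    // ConnectGraph captures who is connected to whom, as seen by the
    // peer that assembles it.
    type ConnectGraph struct {
        ClusterID string
        // ClusterLinks: cluster peer -> cluster peers it is connected to.
        ClusterLinks map[string][]string
        // IPFSLinks: ipfs peer -> its swarm peers.
        IPFSLinks map[string][]string
        // ClusterToIPFS: cluster peer -> the ipfs daemon it runs against.
        ClusterToIPFS map[string]string
    }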
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
This adds API and RPC calls to support RecoverAllLocal() (and exposes
RecoverLocal() on the REST API too). cluster-ctl is updated accordingly.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This change removes duplicated responsibilities from the PeerManager component:
* No more committing PeerAdd and PeerRm log entries
* The Raft peer set is the source of truth
* Basic broadcasting is used to communicate peer multiaddresses
in the cluster
* A peer can only be added in a healthy cluster
* A peer can be removed from any cluster which can still commit
* This also adds support for multiple multiaddresses per peer
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
The disk informer uses "ipfs repo stat" to fetch the RepoSize value and
uses it as a metric.
The numpinalloc allocator is now a generalized ascendalloc, which
sorts metrics in ascending order and returns the ones with the lowest
values.
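A minimal sketch of that strategy, with a simplified metric type (names
are illustrative):

    package ascendalloc

    import "sort"

    // metric is a simplified stand-in for an informer metric.
    type metric struct {
        Peer  string
        Value uint64 // e.g. RepoSize reported by "ipfs repo stat"
    }

    // allocate returns candidate peers sorted by ascending metric value,
    // so peers with the lowest values (e.g. most free space) come first.
    func allocate(candidates []metric) []string {
        sort.Slice(candidates, func(i, j int) bool {
            return candidates[i].Value < candidates[j].Value
        })
        peers := make([]string, len(candidates))
        for i, m := range candidates {
            peers[i] = m.Peer
        }
        return peers
    }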
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This adds snapshot and restore methods to the state and uses the snapshot
one to save a copy of the state when shutting down. Right now, this is
not used for anything else.
Some lines sketch a migration, but this is only an idea of how it could
work.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
CidArg used to be an internal name for an argument that carried a Cid.
Now it has surfaced to the API level, where that name makes no sense. It
is a Pin: it represents a Pin (Cid, Allocations, Replication Factor), as
sketched below.
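Sketched as a struct (simplified; the real definition lives in the api
package and differs in detail):

    package api

    // Pin represents a pinned item in the shared state.
    type Pin struct {
        Cid               string   // the content to pin
        Allocations       []string // peer IDs allocated to pin it
        ReplicationFactor int      // how many copies to keep
    }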
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
New PeerManager, Allocator, Informer components have been added along
with a new "replication_factor" configuration option.
First, cluster peers collect and push metrics (Informer) to the Cluster
leader regularly. The Informer is an interface that can be implemented
in custom ways to support custom metrics.
Second, on a pin operation, using the information from the collected metrics,
an Allocator can provide a list of preferences as to where the new pin
should be assigned. The Allocator is an interface that allows providing
different allocation strategies.
Both Allocator and Informer are Cluster Components and have access
to the RPC API.
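The two interfaces, sketched with simplified, illustrative method sets:

    package ipfscluster

    // Metric is a simplified stand-in for the metrics pushed by informers.
    type Metric struct {
        Name  string
        Peer  string
        Value string
    }

    // Informer collects the local metric that is pushed to the leader.
    type Informer interface {
        Name() string
        GetMetric() Metric
    }

    // Allocator returns candidate peers for a CID, ordered by preference,
    // given the last known metric from each peer.
    type Allocator interface {
        Allocate(cid string, metrics map[string]Metric) ([]string, error)
    }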
The allocations are kept in the shared state. Cluster peer failure
detection and automatic re-allocation are still missing, although
re-pinning something when a node is down (or its metrics are missing)
does re-allocate the pin somewhere else.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
At the beginning we opted for native types which were
serializable (PinInfo had a CidStr field instead of a Cid).
Now we provide types in two versions: native and serializable.
Go methods use the native versions. The other APIs (REST/RPC) always use
the serializable versions. Methods are provided to convert between the
two, as sketched below.
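For example, a native/serializable pair and its conversion might look
like this (a sketch with stand-in types, not the exact definitions):

    package api

    // Cid stands in for the native go-cid type.
    type Cid struct{ str string }

    func (c Cid) String() string { return c.str }

    // PinInfo is the native version used by Go methods.
    type PinInfo struct {
        Cid Cid
    }

    // PinInfoSerial is the serializable version used by REST/RPC and
    // encoded as JSON in API responses.
    type PinInfoSerial struct {
        Cid string `json:"cid"`
    }

    // ToSerial converts the native version to the serializable one.
    func (pi PinInfo) ToSerial() PinInfoSerial {
        return PinInfoSerial{Cid: pi.Cid.String()}
    }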
The reason for moving these out of the way is to be able to re-use
type definitions when parsing API responses in `ipfs-cluster-ctl` or
any other clients that come up. API responses are just the serializable
version of types in JSON encoding. This also reduces having
duplicate types defs and parsing methods everywhere.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This is the third implementation attempt. This time, rather than
broadcasting PeerAdd/Join requests to the whole cluster, we use the
consensus log to broadcast new peers joining.
This makes it easier to recover from errors and to know exactly who is a
member of a cluster and who is not. The consensus is, after all,
meant to agree on things, and the list of cluster peers is something
everyone has to agree on.
Raft itself uses a special log operation to maintain the peer set.
The tests are almost unchanged from the previous attempts, so coverage
should be the same, except that it doesn't seem possible to bootstrap a
bunch of nodes at the same time using different bootstrap nodes; it works
when they all use the same one. I'm not sure this worked before either,
but the code is simpler than recursively contacting peers, and scales
better for larger clusters.
Nodes have to be careful about joining clusters while keeping the state
from a different cluster (disjoint logs). This may cause problems with
Raft.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This commit adds PeerAdd() and PeerRemove() endpoints, CLI support and
tests. Peer management is a delicate issue because of how the consensus
works underneath and the places that need to track such peers.
When adding a peer, the procedure is as follows (condensed in the sketch
after this list):
* Try to open a connection to the new peer and abort if not reachable
* Broadcast a PeerManagerAddPeer operation which tells all cluster members
to add the new Peer. The Raft leader will add it to Raft's peerset and
the multiaddress will be saved in the ClusterPeers configuration key.
* If the above fails because some cluster node is not responding,
broadcast a PeerRemove() and try to undo any damage.
* If the broadcast succeeds, send our ClusterPeers to the new Peer along with
the local multiaddress we are using in the connection opened in the
first step (that is, the multiaddress through which the other peer can reach us)
* The new peer updates its configuration with the new list and joins
the consensus
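Condensed as code, the procedure reads roughly like this (every helper
is a placeholder for the real broadcast/RPC machinery):

    package cluster

    import "errors"

    // Cluster is a stub carrying only what the sketch needs.
    type Cluster struct{}

    func (c *Cluster) reachable(addr string) bool         { return true }
    func (c *Cluster) broadcastAddPeer(addr string) error { return nil }
    func (c *Cluster) broadcastRmPeer(addr string) error  { return nil }
    func (c *Cluster) sendPeersTo(addr string) error      { return nil }

    // PeerAdd condenses the procedure described above.
    func (c *Cluster) PeerAdd(addr string) error {
        // 1. Abort if the new peer is not reachable.
        if !c.reachable(addr) {
            return errors.New("new peer not reachable")
        }
        // 2. Broadcast the add; on failure, undo with a remove.
        if err := c.broadcastAddPeer(addr); err != nil {
            c.broadcastRmPeer(addr)
            return err
        }
        // 3. Send our ClusterPeers plus the multiaddress through which
        //    the new peer can reach us, so it can join the consensus.
        return c.sendPeersTo(addr)
    }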
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
I have renamed "members" to "peers".
Added the IPFS daemon ID and addresses to the ID object, and Peers() now
returns the collection of ID() objects from the cluster.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This includes adding a new API endpoint and a CLI command.
I have also changed some API endpoints. I find:
POST /pins/<cid>/sync
POST /pins/<cid>/recover
GET /pins/<cid>
GET /pins
better. The problem is that it makes the pin list /pinlist, but in
general it's more consistent.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This has implied changes to the PinTracker API, to the IPFSConnector API
and a few renames of some PinTracker-related constants.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>