ipfs-cluster/ipfscluster.go

// Package ipfscluster implements a wrapper for the IPFS daemon which
// allows orchestrating pinning operations among several IPFS nodes.
//
// IPFS Cluster peers form a separate libp2p swarm. A Cluster peer uses
// multiple Cluster Components which perform different tasks like managing
// the underlying IPFS daemons, or providing APIs for external control.
package ipfscluster

import (
	"context"

	"github.com/ipfs/ipfs-cluster/api"
	"github.com/ipfs/ipfs-cluster/state"

	peer "github.com/libp2p/go-libp2p-core/peer"
	rpc "github.com/libp2p/go-libp2p-gorpc"
)

// Component represents a piece of ipfscluster. Cluster components
// usually run their own goroutines (an HTTP server, for example). They
// communicate with the main Cluster component and other components
// (both local and remote), using an instance of rpc.Client.
type Component interface {
	SetClient(*rpc.Client)
	Shutdown(context.Context) error
}
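
// As an illustration only: a minimal, do-nothing Component sketch. The
// type name "baseComponent" is hypothetical and not part of this package;
// real components additionally run their own goroutines and use the
// rpc.Client to talk to the rest of the Cluster.
type baseComponent struct {
	rpcClient *rpc.Client
}

// SetClient stores the rpc.Client handed over by the main Cluster component.
func (bc *baseComponent) SetClient(c *rpc.Client) {
	bc.rpcClient = c
}

// Shutdown releases resources; this sketch holds none.
func (bc *baseComponent) Shutdown(ctx context.Context) error {
	return nil
}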

// Consensus is a component which keeps a shared state in
// IPFS Cluster and triggers actions on updates to that state.
// Currently, Consensus needs to be able to elect/provide a
// Cluster Leader and the implementation is tightly coupled to
// the Cluster main component.
type Consensus interface {
	Component
	// Returns a channel to signal that the consensus layer is ready,
	// allowing the main component to wait for it during start.
	Ready(context.Context) <-chan struct{}
	// Logs a pin operation.
	LogPin(context.Context, api.Pin) error
	// Logs an unpin operation.
	LogUnpin(context.Context, api.Pin) error
	AddPeer(context.Context, peer.ID) error
	RmPeer(context.Context, peer.ID) error
	State(context.Context) (state.ReadOnly, error)
	// Leader provides the peer which is responsible for performing
	// specific tasks which must only run in one cluster peer.
	Leader(context.Context) (peer.ID, error)
	// Only returns when the consensus state has all log
	// updates applied to it.
	WaitForSync(context.Context) error
	// Clean removes all consensus data.
	Clean(context.Context) error
	// Peers returns the peerset participating in the Consensus.
	Peers(context.Context) ([]peer.ID, error)
	// IsTrustedPeer returns true if the given peer is "trusted".
	// Trusted peers are granted access to more RPC endpoints than
	// non-trusted ones. This should be fast as it will be
	// called repeatedly for every remote RPC request.
	IsTrustedPeer(context.Context, peer.ID) bool
	// Trust marks a peer as "trusted".
	Trust(context.Context, peer.ID) error
	// Distrust removes a peer from the "trusted" set.
	Distrust(context.Context, peer.ID) error
}
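
// As an illustration only: a sketch of how a caller might block until the
// consensus layer reports readiness. The helper name "waitForConsensus" is
// hypothetical; it relies solely on the Ready() method declared above.
func waitForConsensus(ctx context.Context, c Consensus) error {
	select {
	case <-c.Ready(ctx):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}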

// API is a component which offers an API for Cluster. This is
// a base component.
type API interface {
	Component
}

// IPFSConnector is a component which allows cluster to interact with
// an IPFS daemon. This is a base component.
type IPFSConnector interface {
	Component
	ID(context.Context) (api.IPFSID, error)
	Pin(context.Context, api.Pin) error
	Unpin(context.Context, api.Cid) error
	PinLsCid(context.Context, api.Pin) (api.IPFSPinStatus, error)
	// PinLs returns pins in the pinset of the given types (recursive, direct...)
	PinLs(ctx context.Context, typeFilters []string, out chan<- api.IPFSPinInfo) error
	// ConnectSwarms makes sure this peer's IPFS daemon is connected to
	// other peers' IPFS daemons.
	ConnectSwarms(context.Context) error
	// SwarmPeers returns the IPFS daemon's swarm peers.
	SwarmPeers(context.Context) ([]peer.ID, error)
	// ConfigKey returns the value for a configuration key.
	// Subobjects are reached with keypaths as "Parent/Child/GrandChild...".
	ConfigKey(keypath string) (interface{}, error)
	// RepoStat returns the current repository size and max limit as
	// provided by "repo stat".
	RepoStat(context.Context) (api.IPFSRepoStat, error)
	// RepoGC performs a garbage collection sweep on the IPFS repo.
	RepoGC(context.Context) (api.RepoGC, error)
	// Resolve returns a cid given a path.
	Resolve(context.Context, string) (api.Cid, error)
	// BlockStream adds a stream of blocks to IPFS.
	BlockStream(context.Context, <-chan api.NodeWithMeta) error
	// BlockGet retrieves the raw data of an IPFS block.
	BlockGet(context.Context, api.Cid) ([]byte, error)
}
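
// As an illustration only: a sketch of pinning through the connector and
// then asking the IPFS daemon for the resulting pin status. The helper name
// "pinAndCheck" is hypothetical; it only uses methods declared above.
func pinAndCheck(ctx context.Context, ipfs IPFSConnector, p api.Pin) (api.IPFSPinStatus, error) {
	if err := ipfs.Pin(ctx, p); err != nil {
		var unknown api.IPFSPinStatus
		return unknown, err
	}
	return ipfs.PinLsCid(ctx, p)
}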

// Peered represents a component which needs to be aware of the peers
// in the Cluster and of any changes to the peer set.
type Peered interface {
	AddPeer(ctx context.Context, p peer.ID)
	RmPeer(ctx context.Context, p peer.ID)
	//SetPeers(peers []peer.ID)
}
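
// As an illustration only: a minimal Peered sketch which just remembers the
// current peer set. The type name "peerSet" is hypothetical; a real
// implementation would guard the map with a lock and react to the changes.
type peerSet struct {
	peers map[peer.ID]struct{}
}

func (ps *peerSet) AddPeer(ctx context.Context, p peer.ID) {
	if ps.peers == nil {
		ps.peers = make(map[peer.ID]struct{})
	}
	ps.peers[p] = struct{}{}
}

func (ps *peerSet) RmPeer(ctx context.Context, p peer.ID) {
	delete(ps.peers, p)
}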

// PinTracker represents a component which tracks the status of
// the pins in this cluster and ensures they are in sync with the
// IPFS daemon. This component should be thread-safe.
type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision.
	// The tracker may decide to perform an IPFS pin.
	Track(context.Context, api.Pin) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(context.Context, api.Cid) error
	// StatusAll returns the list of pins with their local status. Takes a
	// filter to specify which statuses to report.
	StatusAll(context.Context, api.TrackerStatus, chan<- api.PinInfo) error
	// Status returns the local status of a given Cid.
	Status(context.Context, api.Cid) api.PinInfo
	// RecoverAll calls Recover() for all pins tracked.
	RecoverAll(context.Context, chan<- api.PinInfo) error
	// Recover retriggers a Pin/Unpin operation on Cids with error status.
	Recover(context.Context, api.Cid) (api.PinInfo, error)
}
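
// As an illustration only: a sketch of draining StatusAll into a slice. The
// helper name "collectStatuses" is hypothetical, and it assumes that the
// PinTracker implementation closes the out channel once it has finished
// sending pin information.
func collectStatuses(ctx context.Context, t PinTracker, filter api.TrackerStatus) ([]api.PinInfo, error) {
	out := make(chan api.PinInfo, 1024)
	done := make(chan struct{})
	var infos []api.PinInfo
	go func() {
		for pi := range out {
			infos = append(infos, pi)
		}
		close(done)
	}()
	err := t.StatusAll(ctx, filter, out)
	<-done
	return infos, err
}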

// Informer provides Metric information from a peer. The metrics produced by
// informers are then passed to a PinAllocator which will use them to
// determine where to pin content. The metric is agnostic to the rest of
// Cluster.
type Informer interface {
	Component
	Name() string
	// GetMetrics returns the metrics obtained by this Informer. It must
	// always return at least one metric.
	GetMetrics(context.Context) []api.Metric
}
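
// As an illustration only: a sketch of an Informer which always reports the
// same metric. The type name "constInformer" and the metric name "const" are
// hypothetical, and the api.Metric fields used (Name, Value, Valid) are
// assumed from the api package.
type constInformer struct {
	client *rpc.Client
}

func (ci *constInformer) SetClient(c *rpc.Client) {
	ci.client = c
}

func (ci *constInformer) Shutdown(ctx context.Context) error {
	return nil
}

func (ci *constInformer) Name() string {
	return "const"
}

// GetMetrics returns a single, always-valid metric.
func (ci *constInformer) GetMetrics(ctx context.Context) []api.Metric {
	return []api.Metric{{
		Name:  "const",
		Value: "1",
		Valid: true,
	}}
}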

// PinAllocator decides where to pin certain content. In order to make
// such a decision, it receives the pin arguments, the peers which are
// currently allocated to the content and the metrics available for all
// peers which could allocate the content.
type PinAllocator interface {
	Component
	// Allocate returns the list of peers that should be assigned to
	// Pin content in order of preference (from the most preferred to the
	// least). The "current" map contains valid metrics for peers
	// which are currently pinning the content. The candidates map
	// contains the metrics for all peers which are eligible for pinning
	// the content.
	Allocate(ctx context.Context, c api.Cid, current, candidates, priority api.MetricsSet) ([]peer.ID, error)
	// Metrics returns the list of metrics that the allocator needs.
	Metrics() []string
}
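
// As an illustration only: a sketch of a PinAllocator which returns every
// candidate peer without expressing a preference. The type name
// "anyAllocator" and the metric name it requests are hypothetical, and it
// assumes api.MetricsSet maps metric names to slices of api.Metric exposing
// a Peer field.
type anyAllocator struct{}

func (a *anyAllocator) SetClient(c *rpc.Client) {}

func (a *anyAllocator) Shutdown(ctx context.Context) error {
	return nil
}

// Metrics declares which metrics this allocator wants to receive.
func (a *anyAllocator) Metrics() []string {
	return []string{"freespace"}
}

// Allocate collects the peers present in the candidate metrics, deduplicates
// them and returns them in no particular order.
func (a *anyAllocator) Allocate(ctx context.Context, c api.Cid, current, candidates, priority api.MetricsSet) ([]peer.ID, error) {
	seen := make(map[peer.ID]struct{})
	var peers []peer.ID
	for _, metrics := range candidates {
		for _, m := range metrics {
			if _, ok := seen[m.Peer]; ok {
				continue
			}
			seen[m.Peer] = struct{}{}
			peers = append(peers, m.Peer)
		}
	}
	return peers, nil
}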

// PeerMonitor is a component in charge of publishing a peer's metrics and
// reading metrics from other peers in the cluster. The PinAllocator will
// use the metrics provided by the monitor as candidates for Pin allocations.
//
// The PeerMonitor component also provides an Alert channel which is signaled
// when a metric is no longer received and the monitor identifies it
// as a problem.
type PeerMonitor interface {
	Component
	// LogMetric stores a metric. It can be used to manually inject
	// a metric to a monitor.
	LogMetric(context.Context, api.Metric) error
	// PublishMetric sends a metric to the rest of the peers.
	// How to send it, and to whom, is to be decided by the implementation.
	PublishMetric(context.Context, api.Metric) error
	// LatestMetrics returns the latest valid metrics of matching
	// name for the current cluster peers. The result should contain
	// at most one metric per peer.
	LatestMetrics(ctx context.Context, name string) []api.Metric
	// LatestForPeer returns the latest metric received from a peer.
	// It may be expired.
	LatestForPeer(ctx context.Context, name string, pid peer.ID) api.Metric
	// MetricNames returns a list of metric names.
	MetricNames(ctx context.Context) []string
	// Alerts delivers alerts generated when this peer monitor detects
	// a problem (i.e. metrics not arriving as expected). Alerts can be used
	// to trigger self-healing measures or re-pinnings of content.
	Alerts() <-chan api.Alert
}
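
// As an illustration only: a sketch of consuming the monitor's alert channel.
// The helper name "watchAlerts" is hypothetical; a real consumer might
// trigger re-pinning of content allocated to the alerting peer.
func watchAlerts(ctx context.Context, mon PeerMonitor) {
	alerts := mon.Alerts()
	for {
		select {
		case alert, ok := <-alerts:
			if !ok {
				return
			}
			_ = alert // e.g. log it or kick off recovery for affected pins
		case <-ctx.Done():
			return
		}
	}
}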

// Tracer implements Component as a way
// to shutdown and flush any remaining traces.
type Tracer interface {
	Component
}