ipfs-cluster/consensus/raft/config.go
Hector Sanjuan 33d9cdd3c4 Feat: emancipate Consensus from the Cluster component
This commit promotes the Consensus component (and Raft) to a fully
independent component like the others, passed to NewCluster during
initialization. Cluster (the main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes,
which I explain below.

Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
  * The current cluster peerset, as required by the implementation
  * The current cluster pinset (shared state)

While pinset maintenance has always lived in the consensus layer, peerset
maintenance was handled by both the main component (starting with the "peers"
key in the configuration) AND the Raft component (internally),
which generated lots of confusion: if the user edited the peers in the
configuration they would be greeted with an error.

The bootstrap process (adding a peer to an existing cluster) and its
configuration key also complicated many things, since the main component
performed it, but only when the consensus was initialized and in single peer
mode.

On top of all this, we also mixed the peerstore (the list of peer addresses in
the libp2p host) with the peerset, when they need not be linked.

By initializing the consensus layer before calling NewCluster, all the
difficulties in maintaining the current implementation in the same way
have come to light. Thus, the following changes have been introduced:

* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.

* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be re-built from the
network bootstrappers and the DHT. Cluster should probably also allow
discoverability of peer addresses (when not bootstrapping, as in that case we
already have them), but in the meantime we read and persist the addresses of
cluster peers in this file, separate from the configuration. Note that dns
multiaddresses are now fully supported and no IPs are saved when we have DNS
multiaddresses for a peer.
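
For illustration only, and assuming the file simply keeps one multiaddress
per line (the hostnames, IPs and peer IDs below are placeholders), its
contents might look like:

    /dns4/cluster1.example.org/tcp/9096/ipfs/<peer-ID-1>
    /ip4/192.0.2.10/tcp/9096/ipfs/<peer-ID-2>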

* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.
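
As a minimal sketch of the kind of operation such a module performs (the
importAddress helper below is illustrative, not the actual pstoremgr API),
standard libp2p calls can parse a multiaddress and register it in the host
peerstore with a permanent TTL:

    import (
        host "github.com/libp2p/go-libp2p-host"
        pstore "github.com/libp2p/go-libp2p-peerstore"
        ma "github.com/multiformats/go-multiaddr"
    )

    // importAddress registers a peer's multiaddress in the libp2p host
    // peerstore so that it does not expire.
    func importAddress(h host.Host, addr string) error {
        m, err := ma.NewMultiaddr(addr) // e.g. "/dns4/.../tcp/9096/ipfs/<peer-ID>"
        if err != nil {
            return err
        }
        // Split the peer ID from the transport part of the multiaddress.
        pinfo, err := pstore.InfoFromP2pAddr(m)
        if err != nil {
            return err
        }
        h.Peerstore().AddAddrs(pinfo.ID, pinfo.Addrs, pstore.PermanentAddrTTL)
        return nil
    }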

* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a Raft peer to
be launched in staging mode, waiting to be added to a running consensus, and
thus not electing itself as leader or acting on its own as it did before. This
additionally allows us to track when the peer has become a Voter, which only
happens once it has caught up with the state, something that was wonky
previously.
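
At the hashicorp/raft level, voter status can be inspected through
GetConfiguration(). The following is a sketch of such a check (not the actual
code added in this PR):

    import hraft "github.com/hashicorp/raft"

    // isVoter reports whether the server with the given ID appears in the
    // current Raft configuration with Voter suffrage.
    func isVoter(r *hraft.Raft, id hraft.ServerID) (bool, error) {
        future := r.GetConfiguration()
        if err := future.Error(); err != nil {
            return false, err
        }
        for _, srv := range future.Configuration().Servers {
            if srv.ID == id && srv.Suffrage == hraft.Voter {
                return true, nil
            }
        }
        return false, nil
    }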

* The Raft configuration now includes an InitPeerset key, which allows
providing a peerset for new peers and which is ignored when staging==true. The
whole Raft initialization code is way cleaner and more robust now.
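
For example (a sketch with placeholder peer IDs, which in practice must be
valid base58 peer IDs, and using the package defined in the file below), the
new key is loaded like any other part of the "raft" configuration section:

    import raft "github.com/ipfs/ipfs-cluster/consensus/raft"

    cfg := &raft.Config{}
    err := cfg.LoadJSON([]byte(`{
        "init_peerset": ["<peer-ID-1>", "<peer-ID-2>"],
        "wait_for_leader_timeout": "15s",
        "commit_retries": 1,
        "commit_retry_delay": "200ms"
    }`))
    if err != nil {
        // handle the error
    }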

* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated list
of entries). What bootstrap does is initialize Raft with staging == true and
then call Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready, and cluster becomes Ready.
This is cleaner, works better and is less complex than before (which supported
both flags and config values). We also back up and clean the state
automatically whenever we are bootstrapping.

* ipfs-cluster-service no longer runs the daemon by default. Starting cluster
now requires "ipfs-cluster-service daemon". The daemon-specific flags
(bootstrap, alloc) are now flags of the daemon subcommand. Here we mimic ipfs
("ipfs" does not start the daemon but prints help) and pave the way for merging
both service and ctl in the future.
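
For example, under the new flow a peer could be started and bootstrapped to an
existing cluster with something like (the multiaddress is a placeholder):

    ipfs-cluster-service daemon --bootstrap /dns4/cluster1.example.org/tcp/9096/ipfs/<peer-ID>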

While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way to
launch a cluster peer, and more difficult to make mistakes.

As a side effect, the PR also:

* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-07 07:39:41 +02:00

package raft

import (
	"encoding/json"
	"errors"
	"io/ioutil"
	"path/filepath"
	"time"

	"github.com/ipfs/ipfs-cluster/api"
	"github.com/ipfs/ipfs-cluster/config"

	hraft "github.com/hashicorp/raft"
	peer "github.com/libp2p/go-libp2p-peer"
)

// configKey is the default configuration key for holding this component's
// configuration section.
var configKey = "raft"

// Configuration defaults
var (
	DefaultDataSubFolder        = "ipfs-cluster-data"
	DefaultWaitForLeaderTimeout = 15 * time.Second
	DefaultCommitRetries        = 1
	DefaultNetworkTimeout       = 10 * time.Second
	DefaultCommitRetryDelay     = 200 * time.Millisecond
	DefaultBackupsRotate        = 6
)

// Config allows configuring the Raft Consensus component for ipfs-cluster.
// The component's configuration section is represented by jsonConfig.
// Config implements the ComponentConfig interface.
type Config struct {
	config.Saver

	// will shutdown libp2p host on shutdown. Useful for testing
	hostShutdown bool

	// A folder to store Raft's data.
	DataFolder string

	// InitPeerset provides the list of initial cluster peers for new Raft
	// peers (with no prior state). It is ignored when Raft was already
	// initialized or when starting in staging mode.
	InitPeerset []peer.ID

	// WaitForLeaderTimeout specifies how long to wait for a leader before
	// failing an operation.
	WaitForLeaderTimeout time.Duration

	// NetworkTimeout specifies how long before a Raft network
	// operation is timed out.
	NetworkTimeout time.Duration

	// CommitRetries specifies how many times we retry a failed commit until
	// we give up.
	CommitRetries int

	// How long to wait between retries
	CommitRetryDelay time.Duration

	// BackupsRotate specifies the maximum number of Raft's DataFolder
	// copies that we keep as backups (renaming) after cleanup.
	BackupsRotate int

	// A Hashicorp Raft's configuration object.
	RaftConfig *hraft.Config
}

// jsonConfig represents a human-friendly Config
// object which can be saved to JSON. Most configuration keys are converted
// into simple types like strings, and key names aim to be self-explanatory
// for the user.
// Check https://godoc.org/github.com/hashicorp/raft#Config for an extended
// description of all Raft-specific keys.
type jsonConfig struct {
	// Storage folder for snapshots, log store etc. Used by
	// the Raft.
	DataFolder string `json:"data_folder,omitempty"`

	// InitPeerset provides the list of initial cluster peers for new Raft
	// peers (with no prior state). It is ignored when Raft was already
	// initialized or when starting in staging mode.
	InitPeerset []string `json:"init_peerset"`

	// How long to wait for a leader before failing
	WaitForLeaderTimeout string `json:"wait_for_leader_timeout"`

	// How long to wait before timing out network operations
	NetworkTimeout string `json:"network_timeout"`

	// How many retries to make upon a failed commit
	CommitRetries int `json:"commit_retries"`

	// How long to wait between commit retries
	CommitRetryDelay string `json:"commit_retry_delay"`

	// BackupsRotate specifies the maximum number of Raft's DataFolder
	// copies that we keep as backups (renaming) after cleanup.
	BackupsRotate int `json:"backups_rotate"`

	// HeartbeatTimeout specifies the time in follower state without
	// a leader before we attempt an election.
	HeartbeatTimeout string `json:"heartbeat_timeout,omitempty"`

	// ElectionTimeout specifies the time in candidate state without
	// a leader before we attempt an election.
	ElectionTimeout string `json:"election_timeout,omitempty"`

	// CommitTimeout controls the time without an Apply() operation
	// before we heartbeat to ensure a timely commit.
	CommitTimeout string `json:"commit_timeout,omitempty"`

	// MaxAppendEntries controls the maximum number of append entries
	// to send at once.
	MaxAppendEntries int `json:"max_append_entries,omitempty"`

	// TrailingLogs controls how many logs we leave after a snapshot.
	TrailingLogs uint64 `json:"trailing_logs,omitempty"`

	// SnapshotInterval controls how often we check if we should perform
	// a snapshot.
	SnapshotInterval string `json:"snapshot_interval,omitempty"`

	// SnapshotThreshold controls how many outstanding logs there must be
	// before we perform a snapshot.
	SnapshotThreshold uint64 `json:"snapshot_threshold,omitempty"`

	// LeaderLeaseTimeout is used to control how long the "lease" lasts
	// for being the leader without being able to contact a quorum
	// of nodes. If we reach this interval without contact, we will
	// step down as leader.
	LeaderLeaseTimeout string `json:"leader_lease_timeout,omitempty"`

	// The unique ID for this server across all time. When running with
	// ProtocolVersion < 3, you must set this to be the same as the network
	// address of your transport.
	// LocalID string `json:local_id`
}

// ConfigKey returns a human-friendly identifier for this Config.
func (cfg *Config) ConfigKey() string {
	return configKey
}

// Validate checks that this configuration has working values,
// at least in appearance.
func (cfg *Config) Validate() error {
	if cfg.RaftConfig == nil {
		return errors.New("no hashicorp/raft.Config")
	}
	if cfg.WaitForLeaderTimeout <= 0 {
		return errors.New("wait_for_leader_timeout <= 0")
	}
	if cfg.NetworkTimeout <= 0 {
		return errors.New("network_timeout <= 0")
	}
	if cfg.CommitRetries < 0 {
		return errors.New("commit_retries is invalid")
	}
	if cfg.CommitRetryDelay <= 0 {
		return errors.New("commit_retry_delay is invalid")
	}
	if cfg.BackupsRotate <= 0 {
		return errors.New("backups_rotate should be larger than 0")
	}
	return hraft.ValidateConfig(cfg.RaftConfig)
}

// LoadJSON parses a json-encoded configuration (see jsonConfig).
// The Config will have default values for all fields not explicitly
// set in the given json object.
func (cfg *Config) LoadJSON(raw []byte) error {
	jcfg := &jsonConfig{}
	err := json.Unmarshal(raw, jcfg)
	if err != nil {
		logger.Error("Error unmarshaling raft config")
		return err
	}

	cfg.Default()

	parseDuration := func(txt string) time.Duration {
		d, _ := time.ParseDuration(txt)
		if txt != "" && d == 0 {
			logger.Warningf("%s is not a valid duration. Default will be used", txt)
		}
		return d
	}

	// Parse durations. We ignore errors as 0 will take Default values.
	waitForLeaderTimeout := parseDuration(jcfg.WaitForLeaderTimeout)
	networkTimeout := parseDuration(jcfg.NetworkTimeout)
	commitRetryDelay := parseDuration(jcfg.CommitRetryDelay)
	heartbeatTimeout := parseDuration(jcfg.HeartbeatTimeout)
	electionTimeout := parseDuration(jcfg.ElectionTimeout)
	commitTimeout := parseDuration(jcfg.CommitTimeout)
	snapshotInterval := parseDuration(jcfg.SnapshotInterval)
	leaderLeaseTimeout := parseDuration(jcfg.LeaderLeaseTimeout)

	// Set all values in config. For some, take defaults if they are 0.
	// Set values from jcfg if they are not 0 values.

	// Own values
	config.SetIfNotDefault(jcfg.DataFolder, &cfg.DataFolder)
	config.SetIfNotDefault(waitForLeaderTimeout, &cfg.WaitForLeaderTimeout)
	config.SetIfNotDefault(networkTimeout, &cfg.NetworkTimeout)
	cfg.CommitRetries = jcfg.CommitRetries
	config.SetIfNotDefault(commitRetryDelay, &cfg.CommitRetryDelay)
	config.SetIfNotDefault(jcfg.BackupsRotate, &cfg.BackupsRotate)

	// Raft values
	config.SetIfNotDefault(heartbeatTimeout, &cfg.RaftConfig.HeartbeatTimeout)
	config.SetIfNotDefault(electionTimeout, &cfg.RaftConfig.ElectionTimeout)
	config.SetIfNotDefault(commitTimeout, &cfg.RaftConfig.CommitTimeout)
	config.SetIfNotDefault(jcfg.MaxAppendEntries, &cfg.RaftConfig.MaxAppendEntries)
	config.SetIfNotDefault(jcfg.TrailingLogs, &cfg.RaftConfig.TrailingLogs)
	config.SetIfNotDefault(snapshotInterval, &cfg.RaftConfig.SnapshotInterval)
	config.SetIfNotDefault(jcfg.SnapshotThreshold, &cfg.RaftConfig.SnapshotThreshold)
	config.SetIfNotDefault(leaderLeaseTimeout, &cfg.RaftConfig.LeaderLeaseTimeout)

	cfg.InitPeerset = api.StringsToPeers(jcfg.InitPeerset)

	return cfg.Validate()
}

// ToJSON returns the pretty JSON representation of a Config.
func (cfg *Config) ToJSON() ([]byte, error) {
	jcfg := &jsonConfig{
		DataFolder:           cfg.DataFolder,
		InitPeerset:          api.PeersToStrings(cfg.InitPeerset),
		WaitForLeaderTimeout: cfg.WaitForLeaderTimeout.String(),
		NetworkTimeout:       cfg.NetworkTimeout.String(),
		CommitRetries:        cfg.CommitRetries,
		CommitRetryDelay:     cfg.CommitRetryDelay.String(),
		BackupsRotate:        cfg.BackupsRotate,
		HeartbeatTimeout:     cfg.RaftConfig.HeartbeatTimeout.String(),
		ElectionTimeout:      cfg.RaftConfig.ElectionTimeout.String(),
		CommitTimeout:        cfg.RaftConfig.CommitTimeout.String(),
		MaxAppendEntries:     cfg.RaftConfig.MaxAppendEntries,
		TrailingLogs:         cfg.RaftConfig.TrailingLogs,
		SnapshotInterval:     cfg.RaftConfig.SnapshotInterval.String(),
		SnapshotThreshold:    cfg.RaftConfig.SnapshotThreshold,
		LeaderLeaseTimeout:   cfg.RaftConfig.LeaderLeaseTimeout.String(),
	}

	return config.DefaultJSONMarshal(jcfg)
}

// Default initializes this configuration with working defaults.
func (cfg *Config) Default() error {
	cfg.DataFolder = "" // empty so it gets omitted
	cfg.InitPeerset = []peer.ID{}
	cfg.WaitForLeaderTimeout = DefaultWaitForLeaderTimeout
	cfg.NetworkTimeout = DefaultNetworkTimeout
	cfg.CommitRetries = DefaultCommitRetries
	cfg.CommitRetryDelay = DefaultCommitRetryDelay
	cfg.BackupsRotate = DefaultBackupsRotate
	cfg.RaftConfig = hraft.DefaultConfig()

	// These options are imposed over any Default Raft Config.
	cfg.RaftConfig.ShutdownOnRemove = false
	cfg.RaftConfig.LocalID = "will_be_set_automatically"

	// Set up logging
	cfg.RaftConfig.LogOutput = ioutil.Discard
	cfg.RaftConfig.Logger = raftStdLogger // see logging.go
	return nil
}

// GetDataFolder returns the Raft data folder that we are using.
func (cfg *Config) GetDataFolder() string {
	if cfg.DataFolder == "" {
		return filepath.Join(cfg.BaseDir, DefaultDataSubFolder)
	}
	return cfg.DataFolder
}