ipfs-cluster/cmd/ipfs-cluster-service/main.go


// The ipfs-cluster-service application.
package main
import (
"bufio"
"context"
"errors"
"fmt"
"io"
"os"
"os/user"
"path/filepath"
"strings"
ipfslite "github.com/hsanjuan/ipfs-lite"
ipfscluster "github.com/ipfs-cluster/ipfs-cluster"
"github.com/ipfs-cluster/ipfs-cluster/api"
"github.com/ipfs-cluster/ipfs-cluster/cmdutils"
"github.com/ipfs-cluster/ipfs-cluster/consensus/crdt"
"github.com/ipfs-cluster/ipfs-cluster/pstoremgr"
"github.com/ipfs-cluster/ipfs-cluster/version"
peer "github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
semver "github.com/blang/semver"
"github.com/ipfs/go-datastore"
"github.com/ipfs/go-datastore/namespace"
dscrdt "github.com/ipfs/go-ds-crdt"
logging "github.com/ipfs/go-log/v2"
cli "github.com/urfave/cli"
)
// programName of this application
const programName = "ipfs-cluster-service"
// flag defaults
const (
defaultLogLevel = "info"
defaultConsensus = "crdt"
defaultDatastore = "pebble"
)
const (
stateCleanupPrompt = "The peer state will be removed. Existing pins may be lost."
configurationOverwritePrompt = "The configuration file will be overwritten."
)
// We store a commit id here
var commit string
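// The value is typically injected at build time via linker flags, e.g.
// "go build -ldflags '-X main.commit=<git-sha>'" (assumed; see the project's
// build scripts for the exact invocation).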
// Description provides a short summary of the functionality of this tool
var Description = fmt.Sprintf(`
%s runs an IPFS Cluster peer.
A peer participates in the cluster consensus, follows a distributed log
of pinning and unpinning requests and manages pinning operations to a
configured IPFS daemon.
This peer also provides an API for cluster management, an IPFS Proxy API which
forwards requests to IPFS and a number of components for internal communication
using LibP2P. This is a simplified view of the components:
+------------------+
| ipfs-cluster-ctl |
+---------+--------+
|
| HTTP(s)
ipfs-cluster-service | HTTP
+----------+--------+--v--+----------------------+ +-------------+
| RPC | Peer 1 | API | IPFS Connector/Proxy +------> IPFS daemon |
+----^-----+--------+-----+----------------------+ +-------------+
| libp2p
|
+----v-----+--------+-----+----------------------+ +-------------+
| RPC | Peer 2 | API | IPFS Connector/Proxy +------> IPFS daemon |
+----^-----+--------+-----+----------------------+ +-------------+
|
|
+----v-----+--------+-----+----------------------+ +-------------+
| RPC | Peer 3 | API | IPFS Connector/Proxy +------> IPFS daemon |
+----------+--------+-----+----------------------+ +-------------+
%s needs valid configuration and identity files to run.
These are independent from IPFS. The identity includes its own
libp2p key-pair. They can be initialized with "init" and their
default locations are ~/%s/%s
and ~/%s/%s.
For feedback, bug reports or any additional information, visit
https://github.com/ipfs-cluster/ipfs-cluster.
EXAMPLES:
Initial configuration:
$ ipfs-cluster-service init
Launch a cluster:
$ ipfs-cluster-service daemon
Launch a peer and join existing cluster:
$ ipfs-cluster-service daemon --bootstrap /ip4/192.168.1.2/tcp/9096/p2p/QmPSoSaPXpyunaBwHs1rZBKYSqRV4bLRk32VGYLuvdrypL
Customize logs using the --loglevel flag. To customize component-level
logging, pass a comma-separated list of component-identifier:log-level
pairs, or a single level without an identifier to set the overall log level.
Valid log levels are critical, error, warning, notice, info and debug.
$ ipfs-cluster-service --loglevel info,cluster:debug,pintracker:debug daemon
`,
programName,
programName,
DefaultFolder,
DefaultConfigFile,
DefaultFolder,
DefaultIdentityFile,
)
var logger = logging.Logger("service")
// Default location for the configurations and data
var (
// DefaultFolder is the name of the cluster folder
DefaultFolder = ".ipfs-cluster"
// DefaultPath is set on init() to $HOME/DefaultFolder
// and holds all the ipfs-cluster data
DefaultPath string
// The name of the configuration file inside DefaultPath
DefaultConfigFile = "service.json"
// The name of the identity file inside DefaultPath
DefaultIdentityFile = "identity.json"
)
var (
configPath string
identityPath string
)
func init() {
// Set build information.
if build, err := semver.NewBuildVersion(commit); err == nil {
version.Version.Build = []string{"git" + build}
}
// We try guessing user's home from the HOME variable. This
// allows HOME hacks for things like Snapcraft builds. HOME
// should be set in all UNIX by the OS. Alternatively, we fall back to
// usr.HomeDir (which should work on Windows etc.).
home := os.Getenv("HOME")
if home == "" {
usr, err := user.Current()
if err != nil {
panic(fmt.Sprintf("cannot get current user: %s", err))
}
home = usr.HomeDir
}
DefaultPath = filepath.Join(home, DefaultFolder)
}
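// out prints a formatted message to standard error.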
func out(m string, a ...interface{}) {
fmt.Fprintf(os.Stderr, m, a...)
}
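// checkErr aborts the program when err is non-nil: it prints what was being
// done ("doing", optionally formatted with args) together with the error,
// releases the execution lock and exits with status 1.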
func checkErr(doing string, err error, args ...interface{}) {
if err != nil {
if len(args) > 0 {
doing = fmt.Sprintf(doing, args...)
}
out("error %s: %s\n", doing, err)
err = locker.tryUnlock()
if err != nil {
out("error releasing execution lock: %s\n", err)
}
os.Exit(1)
}
}
func main() {
app := cli.NewApp()
app.Name = programName
app.Usage = "IPFS Cluster peer"
app.Description = Description
//app.Copyright = "© Protocol Labs, Inc."
app.Version = version.Version.String()
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "config, c",
Value: DefaultPath,
Usage: "path to the configuration and data `FOLDER`",
EnvVar: "IPFS_CLUSTER_PATH",
},
cli.BoolFlag{
Name: "force, f",
Usage: "forcefully proceed with some actions. i.e. overwriting configuration",
},
cli.BoolFlag{
Name: "debug, d",
Usage: "enable full debug logging (very verbose)",
},
cli.StringFlag{
Name: "loglevel, l",
EnvVar: "IPFS_CLUSTER_LOG_LEVEL",
Usage: "set overall and component-wise log levels",
},
}
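// Before resolves the configuration folder to an absolute path, derives the
// configuration and identity file locations, applies the debug/loglevel flags
// and prepares the execution lock for that folder.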
app.Before = func(c *cli.Context) error {
absPath, err := filepath.Abs(c.String("config"))
if err != nil {
return err
}
configPath = filepath.Join(absPath, DefaultConfigFile)
identityPath = filepath.Join(absPath, DefaultIdentityFile)
err = setupLogLevel(c.Bool("debug"), c.String("loglevel"))
if err != nil {
return err
}
locker = &lock{path: absPath}
return nil
}
app.Commands = []cli.Command{
{
Name: "init",
Usage: "Creates a configuration and generates an identity",
Description: fmt.Sprintf(`
This command will initialize a new %s configuration file and, if it
does not already exist, generate a new %s for %s.
If the optional [source-url] is given, the generated configuration file
will refer to it. The source configuration will be fetched from its source
URL during the launch of the daemon. If not, a default standard configuration
file will be created.
In the latter case, a cluster secret will be generated as required
by %s. Alternatively, this secret can be manually
provided with --custom-secret (in which case you will be prompted for it), or
by setting the CLUSTER_SECRET environment variable.
The --consensus flag allows selecting an alternative consensus component
for the newly-generated configuration.
Note that the --force flag allows overwriting an existing
configuration with default values. To generate a new identity, please
remove the %s file first and clean any Raft state.
By default, an empty peerstore file will be created too. Initial contents can
be provided with the --peers flag. Depending on the chosen consensus, the
"trusted_peers" list in the "crdt" configuration section and the
"init_peerset" list in the "raft" configuration section will be prefilled to
the peer IDs in the given multiaddresses.
`,
DefaultConfigFile,
DefaultIdentityFile,
programName,
programName,
DefaultIdentityFile,
),
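// Illustrative invocation (the flags are defined below; the multiaddress is
// only an example, reused from the daemon help text above):
//
//	ipfs-cluster-service init --consensus crdt --peers /ip4/192.168.1.2/tcp/9096/p2p/QmPSoSaPXpyunaBwHs1rZBKYSqRV4bLRk32VGYLuvdrypL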
ArgsUsage: "[http-source-url]",
Flags: []cli.Flag{
cli.StringFlag{
Name: "consensus",
Usage: "select consensus: 'crdt' or 'raft'",
Value: defaultConsensus,
},
cli.StringFlag{
Name: "datastore",
Usage: "select datastore: 'badger', 'badger3', 'leveldb' or 'pebble'",
Value: defaultDatastore,
},
cli.BoolFlag{
Name: "custom-secret, s",
Usage: "prompt for the cluster secret (when no source specified)",
},
cli.StringFlag{
Name: "peers",
Usage: "comma-separated list of multiaddresses to init with (see help)",
},
cli.BoolFlag{
Name: "force, f",
Usage: "overwrite configuration without prompting",
},
cli.BoolFlag{
Name: "randomports",
Usage: "configure random ports to listen on instead of defaults",
},
},
Action: func(c *cli.Context) error {
consensus := c.String("consensus")
switch consensus {
case "raft", "crdt":
default:
checkErr("choosing consensus", errors.New("flag value must be set to 'raft' or 'crdt'"))
}
datastore := c.String("datastore")
switch datastore {
case "leveldb", "badger", "badger3", "pebble":
default:
checkErr("choosing datastore", errors.New("flag value must be set to 'leveldb', 'badger', 'badger3' or 'pebble'"))
}
cfgHelper := cmdutils.NewConfigHelper(configPath, identityPath, consensus, datastore)
defer cfgHelper.Manager().Shutdown() // wait for saves
configExists := false
if _, err := os.Stat(configPath); !os.IsNotExist(err) {
configExists = true
}
identityExists := false
if _, err := os.Stat(identityPath); !os.IsNotExist(err) {
identityExists = true
}
if configExists || identityExists {
// cluster might be running
// acquire lock for config folder
locker.lock()
defer locker.tryUnlock()
}
if configExists {
confirm := fmt.Sprintf(
"%s Continue? [y/n]:",
configurationOverwritePrompt,
)
// --force allows override of the prompt
if !c.Bool("force") {
if !yesNoPrompt(confirm) {
return nil
}
}
}
// Set url. If exists, it will be the only thing saved.
cfgHelper.Manager().Source = c.Args().First()
// Generate defaults for all registered components
err := cfgHelper.Manager().Default()
checkErr("generating default configuration", err)
if c.Bool("randomports") {
cfgs := cfgHelper.Configs()
cfgs.Cluster.ListenAddr, err = cmdutils.RandomizePorts(cfgs.Cluster.ListenAddr)
checkErr("randomizing ports", err)
cfgs.Restapi.HTTPListenAddr, err = cmdutils.RandomizePorts(cfgs.Restapi.HTTPListenAddr)
checkErr("randomizing ports", err)
cfgs.Ipfsproxy.ListenAddr, err = cmdutils.RandomizePorts(cfgs.Ipfsproxy.ListenAddr)
checkErr("randomizing ports", err)
cfgs.Pinsvcapi.HTTPListenAddr, err = cmdutils.RandomizePorts(cfgs.Pinsvcapi.HTTPListenAddr)
checkErr("randomizing ports", err)
}
err = cfgHelper.Manager().ApplyEnvVars()
checkErr("applying environment variables to configuration", err)
userSecret, userSecretDefined := userProvidedSecret(c.Bool("custom-secret") && !c.Args().Present())
// Set user secret
if userSecretDefined {
cfgHelper.Configs().Cluster.Secret = userSecret
}
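// Parse the optional comma-separated --peers multiaddresses. Their peer IDs
// are used below to prefill crdt.trusted_peers and raft.init_peerset, and the
// addresses themselves are written to the peerstore file further down.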
peersOpt := c.String("peers")
var multiAddrs []ma.Multiaddr
if peersOpt != "" {
addrs := strings.Split(peersOpt, ",")
for _, addr := range addrs {
addr = strings.TrimSpace(addr)
multiAddr, err := ma.NewMultiaddr(addr)
checkErr("parsing peer multiaddress: "+addr, err)
multiAddrs = append(multiAddrs, multiAddr)
}
peers := ipfscluster.PeersFromMultiaddrs(multiAddrs)
cfgHelper.Configs().Crdt.TrustAll = false
cfgHelper.Configs().Crdt.TrustedPeers = peers
cfgHelper.Configs().Raft.InitPeerset = peers
}
// Save config. Creates the folder.
// Sets BaseDir in components.
checkErr("saving default configuration", cfgHelper.SaveConfigToDisk())
out("configuration written to %s.\n", configPath)
if !identityExists {
ident := cfgHelper.Identity()
err := ident.Default()
checkErr("generating an identity", err)
err = ident.ApplyEnvVars()
checkErr("applying environment variables to the identity", err)
err = cfgHelper.SaveIdentityToDisk()
checkErr("saving "+DefaultIdentityFile, err)
out("new identity written to %s\n", identityPath)
}
// Initialize peerstore file - even if empty
peerstorePath := cfgHelper.Configs().Cluster.GetPeerstorePath()
peerManager := pstoremgr.New(context.Background(), nil, peerstorePath)
addrInfos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
checkErr("getting AddrInfos from peer multiaddresses", err)
err = peerManager.SavePeerstore(addrInfos)
checkErr("saving peers to peerstore", err)
if l := len(multiAddrs); l > 0 {
out("peerstore written to %s with %d entries.\n", peerstorePath, len(multiAddrs))
} else {
out("new empty peerstore written to %s.\n", peerstorePath)
}
return nil
},
},
{
Name: "daemon",
Usage: "Runs the IPFS Cluster peer (default)",
Flags: []cli.Flag{
cli.BoolFlag{
Name: "upgrade, u",
Usage: "run state migrations before starting (deprecated/unused)",
},
cli.StringFlag{
Name: "bootstrap, j",
Usage: "join a cluster providing a comma-separated list of existing peers multiaddress(es)",
},
cli.BoolFlag{
Name: "leave, x",
Usage: "remove peer from cluster on exit. Overrides \"leave_on_shutdown\"",
Hidden: true,
},
cli.BoolFlag{
Name: "stats",
Usage: "enable stats collection",
},
cli.BoolFlag{
Name: "tracing",
Usage: "enable tracing collection",
},
cli.BoolFlag{
Name: "no-trust",
Usage: "do not trust bootstrap peers (only for \"crdt\" consensus)",
},
},
Action: daemon,
},
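// Illustrative daemon invocation (the multiaddress is a placeholder, not a
// real peer): join an existing cluster on startup by passing one or more
// comma-separated peer multiaddresses to --bootstrap:
//
//	ipfs-cluster-service daemon --bootstrap /ip4/192.0.2.1/tcp/9096/p2p/<peerID>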
{
Name: "state",
Usage: "Manages the peer's persistent state (pinset)",
Subcommands: []cli.Command{
{
Name: "crdt",
Usage: "CRDT-state commands",
Before: func(c *cli.Context) error {
// Load all the configurations and identity
cfgHelper, err := cmdutils.NewLoadedConfigHelper(configPath, identityPath)
checkErr("loading configurations", err)
defer cfgHelper.Manager().Shutdown()
cfgs := cfgHelper.Configs()
if cfgHelper.GetConsensus() != cfgs.Crdt.ConfigKey() {
checkErr("", errors.New("crdt subcommands can only be run on peers initialized with crdt consensus"))
}
return nil
},
Subcommands: []cli.Command{
{
Name: "info",
Usage: "Print information about the CRDT store",
Description: `
This command prints basic information: current heads, dirty flag, etc.
`,
Flags: []cli.Flag{},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
crdt := getCrdt()
info := crdt.InternalStats()
fmt.Printf(
"Number of heads: %d. Current max-height: %d. Dirty: %t\nHeads: %s",
len(info.Heads),
info.MaxHeight,
crdt.IsDirty(),
info.Heads,
)
return nil
},
},
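// Example: inspect the local CRDT store. Like the other state commands, this
// takes the daemon lock above, so it is meant to run while the peer itself is
// not running:
//
//	ipfs-cluster-service state crdt info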
{
Name: "dot",
Usage: "Write the CRDT-DAG as DOT file",
Description: `
This command generates a DOT file representing the CRDT-DAG of this node.
The DOT file can then be visualized, converted to SVG etc.
This is a debugging command to visualize what the DAG looks like and whether
there is a lot of branching. Large DAGs will generate large DOT files.
Use with caution!
`,
Flags: []cli.Flag{
cli.StringFlag{
Name: "file, f",
Value: "",
Usage: "writes to file instead of stdout",
},
},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
crdt := getCrdt()
var err error
var w io.WriteCloser
outputPath := c.String("file")
if outputPath == "" {
// Output to stdout
w = os.Stdout
} else {
// Create the export file
w, err = os.Create(outputPath)
checkErr("creating output file", err)
}
// 256KiB of buffer size.
buf := bufio.NewWriterSize(w, 1<<18)
defer buf.Flush()
logger.Info("initiating CDRT-DAG DOT file export. Export might take a long time on large graphs")
checkErr("generating graph", crdt.DotDAG(buf))
logger.Info("dot file ")
return nil
},
},
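// A possible workflow (file names are only examples): dump the CRDT-DAG to a
// DOT file and render it with Graphviz, which consumes the output written above:
//
//	ipfs-cluster-service state crdt dot -f crdt.dot
//	dot -Tsvg crdt.dot -o crdt.svg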
{
Name: "mark-dirty",
Usage: "Marks the CRDT-store as dirty",
Description: `
Marking the CRDT store as dirty will force-run a Repair operation on the next
run (i.e. next time the cluster peer is started).
`,
Flags: []cli.Flag{},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
crdt := getCrdt()
crdt.MarkDirty()
fmt.Println("Datastore marked 'dirty'")
return nil
},
},
{
Name: "mark-clean",
Usage: "Marks the CRDT-store as clean",
Description: `
This command removes the dirty-mark on the CRDT-store, which means no
DAG-repair operations will be run on the next start.
`,
Flags: []cli.Flag{},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
crdt := getCrdt()
crdt.MarkClean()
fmt.Println("Datastore marked 'clean'")
return nil
},
},
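// Example usage of the two marker commands above (no arguments needed):
// force a repair operation on the next peer start, or cancel a pending one:
//
//	ipfs-cluster-service state crdt mark-dirty
//	ipfs-cluster-service state crdt mark-clean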
},
},
{
Name: "export",
Usage: "save the state to a JSON file",
Description: `
This command dumps the current cluster pinset (state) as a JSON file. The
resulting file can be used to migrate, restore, or back up a Cluster peer.
By default, the state will be printed to stdout.
`,
Flags: []cli.Flag{
cli.StringFlag{
Name: "file, f",
Value: "",
Usage: "writes to an output file",
},
},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
mgr := getStateManager()
var w io.WriteCloser
var err error
outputPath := c.String("file")
if outputPath == "" {
// Output to stdout
w = os.Stdout
} else {
// Create the export file
w, err = os.Create(outputPath)
checkErr("creating output file", err)
}
buf := bufio.NewWriter(w)
defer func() {
buf.Flush()
w.Close()
}()
checkErr("exporting state", mgr.ExportState(buf))
logger.Info("state successfully exported")
return nil
},
},
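// Example usage (hypothetical file name): write the pinset to a JSON file for
// backup or migration; omitting -f streams the same JSON to stdout:
//
//	ipfs-cluster-service state export -f pinset.json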
{
Name: "import",
Usage: "load the state from a file produced by 'export'",
Description: `
This command reads in an exported pinset (state) file and replaces the
existing one. This can be used, for example, to restore a Cluster peer from a
backup.
If an argument is provided, it will be treated as the path of the file
to import. If no argument is provided, stdin will be used.
`,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "skips confirmation prompt",
},
cli.IntFlag{
Name: "replication-min, rmin",
Value: 0,
Usage: "Overwrite replication-factor-min for all pins on import",
},
cli.IntFlag{
Name: "replication-max, rmax",
Value: 0,
Usage: "Overwrite replication-factor-max for all pins on import",
},
cli.StringFlag{
Name: "allocations, allocs",
Usage: "Overwrite allocations for all pins on import. Comma-separated list of peer IDs",
},
},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
confirm := "The pinset (state) of this peer "
confirm += "will be replaced. Continue? [y/n]:"
if !c.Bool("force") && !yesNoPrompt(confirm) {
return nil
}
// importState allows overwriting of some options on import
opts := api.PinOptions{
ReplicationFactorMin: c.Int("replication-min"),
ReplicationFactorMax: c.Int("replication-max"),
UserAllocations: api.StringsToPeers(strings.Split(c.String("allocations"), ",")),
}
mgr := getStateManager()
// Get the importing file path
importFile := c.Args().First()
var r io.ReadCloser
var err error
if importFile == "" {
r = os.Stdin
fmt.Println("reading from stdin, Ctrl-D to finish")
} else {
r, err = os.Open(importFile)
checkErr("reading import file", err)
}
defer r.Close()
buf := bufio.NewReader(r)
checkErr("importing state", mgr.ImportState(buf, opts))
logger.Info("state successfully imported. Make sure all peers have consistent states")
return nil
},
},
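// Example usage (placeholder file name and peer IDs): restore a previously
// exported pinset, optionally overriding the allocations of every imported pin:
//
//	ipfs-cluster-service state import pinset.json
//	ipfs-cluster-service state import --allocations <peerID1>,<peerID2> pinset.json
//
// With no argument, the state is read from stdin as handled above.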
{
Name: "cleanup",
Usage: "remove persistent data",
Description: `
This command removes any persisted consensus data in this peer, including the
current pinset (state). The next start of the peer will be like a first start
in all respects. Peers may need to bootstrap and sync from scratch after this.
`,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "skip confirmation prompt",
},
},
Action: func(c *cli.Context) error {
locker.lock()
defer locker.tryUnlock()
confirm := fmt.Sprintf(
"%s Continue? [y/n]:",
stateCleanupPrompt,
)
if !c.Bool("force") && !yesNoPrompt(confirm) {
return nil
}
mgr := getStateManager()
checkErr("cleaning state", mgr.Clean())
logger.Info("data correctly cleaned up")
return nil
},
},
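// Example usage: wipe the local consensus data without the interactive
// prompt; the peer will have to bootstrap and sync its state again on the
// next start:
//
//	ipfs-cluster-service state cleanup -f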
},
},
{
Name: "version",
Usage: "Prints the ipfs-cluster version",
Action: func(c *cli.Context) error {
fmt.Printf("%s\n", version.Version)
return nil
},
},
}
app.Action = run
app.Run(os.Args)
}
// run is the default action: it prints the help and exits with an error,
// since the daemon must be started explicitly with the "daemon" subcommand.
func run(c *cli.Context) error {
cli.ShowAppHelp(c)
os.Exit(1)
return nil
}
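
// setupLogLevel configures the logging facilities from the command-line
// arguments. When debug is true, every facility is set to DEBUG. Otherwise
// l is a comma-separated list of "identifier:loglevel" entries, or a bare
// "loglevel" that applies to all facilities (e.g. "info" or "service:debug").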
func setupLogLevel(debug bool, l string) error {
// if debug is set to true, log everything in debug level
if debug {
ipfscluster.SetFacilityLogLevel("*", "DEBUG")
return nil
}
compLogLevel := strings.Split(l, ",")
var logLevel string
compLogFacs := make(map[string]string)
// get overall log level and component-wise log levels from arguments
for _, cll := range compLogLevel {
if cll == "" {
continue
}
identifierToLevel := strings.Split(cll, ":")
var lvl string
var comp string
switch len(identifierToLevel) {
case 1:
lvl = identifierToLevel[0]
comp = "all"
case 2:
lvl = identifierToLevel[1]
comp = identifierToLevel[0]
default:
return errors.New("log level not in expected format \"identifier:loglevel\" or \"loglevel\"")
}
_, ok := compLogFacs[comp]
if ok {
fmt.Printf("overwriting existing %s log level\n", comp)
}
compLogFacs[comp] = lvl
}
logLevel, ok := compLogFacs["all"]
if !ok {
logLevel = defaultLogLevel
} else {
delete(compLogFacs, "all")
}
// log service with logLevel
ipfscluster.SetFacilityLogLevel("service", logLevel)
logfacs := make(map[string]string)
// fill component-wise log levels
for identifier, level := range compLogFacs {
logfacs[identifier] = level
}
// Set the values for things not set by the user or for
// things set by "all".
for key := range ipfscluster.LoggingFacilities {
if _, ok := logfacs[key]; !ok {
logfacs[key] = logLevel
}
}
// For Extra facilities, set the defaults per logging.go unless
// manually set
for key, defaultLvl := range ipfscluster.LoggingFacilitiesExtra {
if _, ok := logfacs[key]; !ok {
logfacs[key] = defaultLvl
}
}
for identifier, level := range logfacs {
ipfscluster.SetFacilityLogLevel(identifier, level)
}
return nil
}
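
// userProvidedSecret asks the user for a 32-byte hex cluster secret on
// stdin when enterSecret is true. It returns the decoded secret and whether
// one was provided.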
func userProvidedSecret(enterSecret bool) ([]byte, bool) {
if enterSecret {
secret := promptUser("Enter cluster secret (32-byte hex string): ")
decodedSecret, err := ipfscluster.DecodeClusterSecret(secret)
checkErr("parsing user-provided secret", err)
return decodedSecret, true
}
return nil, false
}
func promptUser(msg string) string {
scanner := bufio.NewScanner(os.Stdin)
fmt.Print(msg)
scanner.Scan()
return scanner.Text()
}
// Lifted from go-ipfs/cmd/ipfs/daemon.go
func yesNoPrompt(prompt string) bool {
var s string
for i := 0; i < 3; i++ {
fmt.Printf("%s ", prompt)
fmt.Scanf("%s", &s)
switch s {
case "y", "Y":
return true
case "n", "N":
return false
case "":
return false
}
fmt.Println("Please press either 'y' or 'n'")
}
return false
}
func getStateManager() cmdutils.StateManager {
cfgHelper, err := cmdutils.NewLoadedConfigHelper(
configPath,
identityPath,
)
checkErr("loading configurations", err)
cfgHelper.Manager().Shutdown()
mgr, err := cmdutils.NewStateManagerWithHelper(cfgHelper)
checkErr("creating state manager", err)
return mgr
}
func getCrdt() *dscrdt.Datastore {
// Load all the configurations and identity
cfgHelper, err := cmdutils.NewLoadedConfigHelper(configPath, identityPath)
checkErr("loading configurations", err)
defer cfgHelper.Manager().Shutdown()
// Get a state manager and the datastore
mgr, err := cmdutils.NewStateManagerWithHelper(cfgHelper)
checkErr("creating state manager", err)
store, err := mgr.GetStore()
checkErr("opening datastore", err)
batching, ok := store.(datastore.Batching)
if !ok {
checkErr("", errors.New("no batching store"))
}
crdtNs := cfgHelper.Configs().Crdt.DatastoreNamespace
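// Wrap the datastore so that ipfs-lite reads and writes blocks in the
// same namespace the CRDT component uses for them.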
var blocksDatastore datastore.Batching = namespace.Wrap(
batching,
datastore.NewKey(crdtNs).ChildString(crdt.BlocksNs),
)
ipfs, err := ipfslite.New(
context.Background(),
blocksDatastore,
nil,
nil,
nil,
&ipfslite.Config{
Offline: true,
},
)
checkErr("creating ipfs-lite offline node", err)
opts := dscrdt.DefaultOptions()
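// A RepairInterval of 0 disables the periodic DAG repair walk, which is
// not needed for this short-lived instance.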
opts.RepairInterval = 0
crdt, err := dscrdt.New(
batching,
datastore.NewKey(crdtNs),
ipfs,
nil,
opts,
)
checkErr("creating crdt node", err)
return crdt
}