The following commit reimplements ipfs-cluster configuration under the following premises:

* Each component is initialized with a configuration object defined by its module
* Each component decides what the JSON representation of its configuration looks like
* Each component parses and validates its own configuration
* Each component exposes its own defaults
* Component configurations make up the sections of a central JSON configuration file (which replaces the current JSON format)
* Component configurations implement a common interface (config.ComponentConfig) with a set of common operations
* The central configuration file is managed by a config.ConfigManager which:
  * Registers ComponentConfigs
  * Assigns the corresponding sections from the JSON file to each component and delegates the parsing
  * Delegates the JSON generation for each section
  * Can be notified when the configuration is updated and must be saved to disk

The new service.json would then look as follows:

```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```

This new format aims to be easily extensible per component. As such, it already surfaces quite a few new options which were hardcoded before. Additionally, since the Go APIs have changed, some redundant methods have been removed and some small refactoring has been done to take advantage of the new approach.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
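The component/manager split described above can be sketched as follows. This is an illustrative Go sketch, not the actual ipfs-cluster API: the interface method names, the `Manager` type, and the toy `ClusterConfig` component are assumptions made for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ComponentConfig sketches the common interface described above.
// Method names are illustrative and may differ from the real
// config.ComponentConfig in ipfs-cluster.
type ComponentConfig interface {
	// ConfigKey returns this component's section name in service.json.
	ConfigKey() string
	// LoadJSON parses and validates the component's own section.
	LoadJSON(raw json.RawMessage) error
	// ToJSON generates the JSON representation of this section.
	ToJSON() (json.RawMessage, error)
}

// Manager plays the role of the central configuration manager: it
// registers component configs and delegates each file section to them.
type Manager struct {
	components map[string]ComponentConfig
}

func NewManager() *Manager {
	return &Manager{components: make(map[string]ComponentConfig)}
}

func (m *Manager) RegisterComponent(c ComponentConfig) {
	m.components[c.ConfigKey()] = c
}

// LoadJSON splits the top-level object into raw sections and hands each
// one to the component registered under that key.
func (m *Manager) LoadJSON(raw []byte) error {
	var sections map[string]json.RawMessage
	if err := json.Unmarshal(raw, &sections); err != nil {
		return err
	}
	for key, c := range m.components {
		section, ok := sections[key]
		if !ok {
			continue // missing section: component keeps its defaults
		}
		if err := c.LoadJSON(section); err != nil {
			return fmt.Errorf("error parsing %q section: %w", key, err)
		}
	}
	return nil
}

// ClusterConfig is a toy component covering two of the "cluster" fields.
type ClusterConfig struct {
	ID                 string `json:"id"`
	ListenMultiaddress string `json:"listen_multiaddress"`
}

func (cc *ClusterConfig) ConfigKey() string { return "cluster" }

func (cc *ClusterConfig) LoadJSON(raw json.RawMessage) error {
	return json.Unmarshal(raw, cc)
}

func (cc *ClusterConfig) ToJSON() (json.RawMessage, error) {
	return json.Marshal(cc)
}

func main() {
	cfg := &ClusterConfig{}
	mgr := NewManager()
	mgr.RegisterComponent(cfg)

	serviceJSON := []byte(`{"cluster": {"id": "QmTest", "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096"}}`)
	if err := mgr.LoadJSON(serviceJSON); err != nil {
		panic(err)
	}
	fmt.Println(cfg.ID, cfg.ListenMultiaddress)
}
```

The point of the design is visible in `Manager.LoadJSON`: the manager never interprets a section's contents, it only routes raw JSON to the component that owns that key.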
# ipfs-cluster-service

> IPFS cluster peer launcher

`ipfs-cluster-service` runs a full IPFS Cluster peer.
## Usage

Usage information can be obtained with:

```
$ ipfs-cluster-service -h
```
## Initialization

Before running `ipfs-cluster-service` for the first time, initialize a configuration file with:

```
$ ipfs-cluster-service init
```

`init` will randomly generate a `cluster_secret` (unless it is specified via the `CLUSTER_SECRET` environment variable, or `init` is run with `--custom-secret`, which prompts for it interactively).

All peers in a cluster must share the same cluster secret. Using an empty secret may compromise the security of your cluster (see the documentation for more information).
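The secret shown in the example configuration is 64 hexadecimal characters (32 bytes). One way to generate such a value and pass it through the environment variable might be the following; the use of `od` over `/dev/urandom` is just one illustrative source of random bytes, assuming a Unix-like system:

```shell
# Generate 32 random bytes encoded as 64 hex characters.
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo "$CLUSTER_SECRET"

# Then initialize the configuration with it:
# ipfs-cluster-service init
```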
## Configuration

After initialization, the configuration will be placed in `~/.ipfs-cluster/service.json` by default.

You can add the multiaddresses for the other cluster peers to the `cluster.peers` or `cluster.bootstrap` variables (see below). A configuration example with explanations is provided in *A guide to running IPFS Cluster*.

The configuration file should probably be identical among all cluster peers, except for the `id` and `private_key` fields. Once every cluster peer has the configuration in place, you can run `ipfs-cluster-service` to start the cluster.
### Clusters using `cluster.peers`

The `peers` configuration variable holds a list of the current cluster members. If you know the members of the cluster in advance, or you want to start a cluster fully in parallel, set `peers` in all configurations so that every peer knows the rest upon boot. Leave `bootstrap` empty. A cluster peer address looks like: `/ip4/1.2.3.4/tcp/9096/<id>`.
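For this fixed-membership mode, the relevant part of `service.json` might look like the fragment below; the addresses and peer IDs are placeholders, not real values:

```json
{
  "cluster": {
    "peers": [
      "/ip4/192.168.1.2/tcp/9096/QmPeerID1",
      "/ip4/192.168.1.3/tcp/9096/QmPeerID2"
    ],
    "bootstrap": []
  }
}
```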
### Clusters using `cluster.bootstrap`

When the `peers` variable is empty, the multiaddresses in `bootstrap` can be used to have a peer join an existing cluster. The peer will contact those addresses (in order) until one of them succeeds in joining it to the cluster. When the peer is shut down, it will save the current cluster peers in the `peers` configuration variable for future use (unless `leave_on_shutdown` is true, in which case it will save them in `bootstrap`).

Bootstrapping is a convenient method, but more error-prone than using a fixed set of peers. It can also be triggered with `ipfs-cluster-service --bootstrap <multiaddress>`. Note that bootstrapping nodes whose state is old or diverges from the state running in the cluster may lead to problems with the consensus, so usually you will want to bootstrap clean nodes.
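A bootstrap-style configuration joining an existing peer would leave `peers` empty; the address and peer ID below are placeholders:

```json
{
  "cluster": {
    "peers": [],
    "bootstrap": [
      "/ip4/192.168.1.2/tcp/9096/QmPeerID1"
    ]
  }
}
```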
## Debugging

`ipfs-cluster-service` offers two debugging options:

* `--debug` enables debug logging from the `ipfs-cluster`, `go-libp2p-raft` and `go-libp2p-rpc` layers. This produces very verbose log output, but at the same time it is the most informative.
* `--loglevel` sets the log level (`error`, `warning`, `info`, `debug`) for `ipfs-cluster` only, allowing you to get an overview of what the cluster is doing. The default log level is `info`.