// ipfs-cluster/cluster_test.go

package ipfscluster

import (
"context"
"errors"
"mime/multipart"
"os"
"path/filepath"
"sync"
"testing"
"time"
"github.com/ipfs/ipfs-cluster/adder/sharding"
"github.com/ipfs/ipfs-cluster/allocator/ascendalloc"
"github.com/ipfs/ipfs-cluster/api"
"github.com/ipfs/ipfs-cluster/consensus/raft"
"github.com/ipfs/ipfs-cluster/informer/numpin"
"github.com/ipfs/ipfs-cluster/state"
"github.com/ipfs/ipfs-cluster/state/mapstate"
"github.com/ipfs/ipfs-cluster/test"
"github.com/ipfs/ipfs-cluster/version"
cid "github.com/ipfs/go-cid"
rpc "github.com/libp2p/go-libp2p-gorpc"
peer "github.com/libp2p/go-libp2p-peer"
)
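// mockComponent provides the SetClient/Shutdown boilerplate shared by the
// mock cluster components below.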
type mockComponent struct {
rpcClient *rpc.Client
}
func (c *mockComponent) Shutdown(ctx context.Context) error {
return nil
}
func (c *mockComponent) SetClient(client *rpc.Client) {
c.rpcClient = client
}
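// mockAPI and mockProxy are no-op API components.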
type mockAPI struct {
mockComponent
}
type mockProxy struct {
mockComponent
}
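// mockConnector stands in for the IPFS connector, keeping pins and blocks
// in in-memory maps instead of talking to an IPFS daemon.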
type mockConnector struct {
mockComponent
pins sync.Map
blocks sync.Map
}
func (ipfs *mockConnector) ID(ctx context.Context) (api.IPFSID, error) {
return api.IPFSID{
ID: test.TestPeerID1,
}, nil
}
func (ipfs *mockConnector) Pin(ctx context.Context, c cid.Cid, maxDepth int) error {
ipfs.pins.Store(c.String(), maxDepth)
return nil
}
func (ipfs *mockConnector) Unpin(ctx context.Context, c cid.Cid) error {
ipfs.pins.Delete(c.String())
return nil
}
func (ipfs *mockConnector) PinLsCid(ctx context.Context, c cid.Cid) (api.IPFSPinStatus, error) {
dI, ok := ipfs.pins.Load(c.String())
if !ok {
return api.IPFSPinStatusUnpinned, nil
}
depth := dI.(int)
if depth == 0 {
return api.IPFSPinStatusDirect, nil
}
return api.IPFSPinStatusRecursive, nil
}
func (ipfs *mockConnector) PinLs(ctx context.Context, filter string) (map[string]api.IPFSPinStatus, error) {
m := make(map[string]api.IPFSPinStatus)
var st api.IPFSPinStatus
ipfs.pins.Range(func(k, v interface{}) bool {
switch v.(int) {
case 0:
st = api.IPFSPinStatusDirect
default:
st = api.IPFSPinStatusRecursive
}
m[k.(string)] = st
return true
})
return m, nil
}
func (ipfs *mockConnector) SwarmPeers(ctx context.Context) (api.SwarmPeers, error) {
return []peer.ID{test.TestPeerID4, test.TestPeerID5}, nil
}
func (ipfs *mockConnector) RepoStat(ctx context.Context) (api.IPFSRepoStat, error) {
return api.IPFSRepoStat{RepoSize: 100, StorageMax: 1000}, nil
}
func (ipfs *mockConnector) ConnectSwarms(ctx context.Context) error { return nil }
func (ipfs *mockConnector) ConfigKey(keypath string) (interface{}, error) { return nil, nil }
func (ipfs *mockConnector) BlockPut(ctx context.Context, nwm api.NodeWithMeta) error {
ipfs.blocks.Store(nwm.Cid, nwm.Data)
return nil
}
func (ipfs *mockConnector) BlockGet(ctx context.Context, c cid.Cid) ([]byte, error) {
d, ok := ipfs.blocks.Load(c.String())
if !ok {
return nil, errors.New("block not found")
}
return d.([]byte), nil
}
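// mockTracer is a no-op Tracer component.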
type mockTracer struct {
mockComponent
}
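// testingCluster assembles a single-peer Cluster wired to mock components
// and a real Raft consensus, and blocks until it is Ready.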
func testingCluster(t *testing.T) (*Cluster, *mockAPI, *mockConnector, state.State, PinTracker) {
clusterCfg, _, _, _, consensusCfg, maptrackerCfg, statelesstrackerCfg, bmonCfg, psmonCfg, _, _ := testingConfigs()
host, err := NewClusterHost(context.Background(), clusterCfg)
if err != nil {
t.Fatal(err)
}
api := &mockAPI{}
proxy := &mockProxy{}
ipfs := &mockConnector{}
st := mapstate.NewMapState()
tracker := makePinTracker(t, clusterCfg.ID, maptrackerCfg, statelesstrackerCfg, clusterCfg.Peername)
tracer := &mockTracer{}
raftcon, _ := raft.NewConsensus(host, consensusCfg, st, false)
bmonCfg.CheckInterval = 2 * time.Second
psmonCfg.CheckInterval = 2 * time.Second
mon := makeMonitor(t, host, bmonCfg, psmonCfg)
alloc := ascendalloc.NewAllocator()
numpinCfg := &numpin.Config{}
numpinCfg.Default()
inf, _ := numpin.NewInformer(numpinCfg)
ReadyTimeout = consensusCfg.WaitForLeaderTimeout + 1*time.Second
cl, err := NewCluster(
host,
clusterCfg,
raftcon,
[]API{api, proxy},
ipfs,
st,
tracker,
mon,
alloc,
inf,
tracer,
)
if err != nil {
t.Fatal("cannot create cluster:", err)
}
<-cl.Ready()
return cl, api, ipfs, st, tracker
}
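// cleanRaft removes the Raft data folders that tests leave behind.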
func cleanRaft() {
raftDirs, _ := filepath.Glob("raftFolderFromTests*")
for _, dir := range raftDirs {
os.RemoveAll(dir)
}
}
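// testClusterShutdown checks that Shutdown succeeds and can be called more
// than once on the same cluster.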
func testClusterShutdown(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
err := cl.Shutdown(ctx)
if err != nil {
t.Error("cluster shutdown failed:", err)
}
cl.Shutdown(ctx) // shutting down an already-stopped cluster should be a no-op
cl, _, _, _, _ = testingCluster(t)
err = cl.Shutdown(ctx)
if err != nil {
t.Error("cluster shutdown failed:", err)
}
}
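// TestClusterStateSync exercises StateSync on an empty state, after a pin,
// and after the state has been modified behind the cluster's back.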
func TestClusterStateSync(t *testing.T) {
ctx := context.Background()
cleanRaft()
cl, _, _, st, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
err := cl.StateSync(ctx)
if err == nil {
t.Fatal("expected an error as there is no state to sync")
}
c, _ := cid.Decode(test.TestCid1)
err = cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
err = cl.StateSync(ctx)
if err != nil {
t.Fatal("sync after pinning should have worked:", err)
}
// Modify state on the side so the sync does not
// happen on an empty state
st.Rm(ctx, c)
err = cl.StateSync(ctx)
if err != nil {
t.Fatal("sync with recover should have worked:", err)
}
}
func TestClusterID(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
id := cl.ID(ctx)
if len(id.Addresses) == 0 {
t.Error("expected more addresses")
}
if id.ID == "" {
t.Error("expected a cluster ID")
}
if id.Version != version.Version.String() {
t.Error("version should match current version")
}
//if id.PublicKey == nil {
// t.Error("publicKey should not be empty")
//}
}
func TestClusterPin(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
c, _ := cid.Decode(test.TestCid1)
err := cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
// test an error case
cl.consensus.Shutdown(ctx)
pin := api.PinCid(c)
pin.ReplicationFactorMax = 1
pin.ReplicationFactorMin = 1
err = cl.Pin(ctx, pin)
if err == nil {
t.Error("expected an error but things worked")
}
}
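// TestAddFile adds the same file tree with sharding disabled and enabled,
// verifying the root CID and pin status in each case.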
func TestAddFile(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
sth := test.NewShardingTestHelper()
defer sth.Clean(t)
t.Run("local", func(t *testing.T) {
params := api.DefaultAddParams()
params.Shard = false
params.Name = "testlocal"
mfr, closer := sth.GetTreeMultiReader(t)
defer closer.Close()
r := multipart.NewReader(mfr, mfr.Boundary())
c, err := cl.AddFile(r, params)
if err != nil {
t.Fatal(err)
}
if c.String() != test.ShardingDirBalancedRootCID {
t.Fatal("unexpected root CID for local add")
}
pinDelay()
pin := cl.StatusLocal(ctx, c)
if pin.Error != "" {
t.Fatal(pin.Error)
}
if pin.Status != api.TrackerStatusPinned {
t.Error("cid should be pinned")
}
cl.Unpin(ctx, c) // unpin so we can pin the shard in next test
pinDelay()
})
t.Run("shard", func(t *testing.T) {
params := api.DefaultAddParams()
params.Shard = true
params.Name = "testshard"
mfr, closer := sth.GetTreeMultiReader(t)
defer closer.Close()
r := multipart.NewReader(mfr, mfr.Boundary())
c, err := cl.AddFile(r, params)
if err != nil {
t.Fatal(err)
}
if c.String() != test.ShardingDirBalancedRootCID {
t.Fatal("unexpected root CID for local add")
}
pinDelay()
// We know that this produces 14 shards.
sharding.VerifyShards(t, c, cl, cl.ipfs, 14)
})
}
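// TestUnpinShard checks that shards and the cluster DAG cannot be unpinned
// directly, while unpinning the root releases everything.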
func TestUnpinShard(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
sth := test.NewShardingTestHelper()
defer sth.Clean(t)
params := api.DefaultAddParams()
params.Shard = true
params.Name = "testshard"
mfr, closer := sth.GetTreeMultiReader(t)
defer closer.Close()
r := multipart.NewReader(mfr, mfr.Boundary())
root, err := cl.AddFile(r, params)
if err != nil {
t.Fatal(err)
}
pinDelay()
// We know that this produces 14 shards.
sharding.VerifyShards(t, root, cl, cl.ipfs, 14)
// skipping errors: VerifyShards has already checked them
pinnedCids := []cid.Cid{}
pinnedCids = append(pinnedCids, root)
metaPin, _ := cl.PinGet(ctx, root)
cDag, _ := cl.PinGet(ctx, metaPin.Reference)
pinnedCids = append(pinnedCids, cDag.Cid)
cDagBlock, _ := cl.ipfs.BlockGet(ctx, cDag.Cid)
cDagNode, _ := sharding.CborDataToNode(cDagBlock, "cbor")
for _, l := range cDagNode.Links() {
pinnedCids = append(pinnedCids, l.Cid)
}
t.Run("unpin clusterdag should fail", func(t *testing.T) {
err := cl.Unpin(ctx, cDag.Cid)
if err == nil {
t.Fatal("should not allow unpinning the cluster DAG directly")
}
t.Log(err)
})
t.Run("unpin shard should fail", func(t *testing.T) {
err := cl.Unpin(ctx, cDagNode.Links()[0].Cid)
if err == nil {
t.Fatal("should not allow unpinning shards directly")
}
t.Log(err)
})
t.Run("normal unpin", func(t *testing.T) {
err := cl.Unpin(ctx, root)
if err != nil {
t.Fatal(err)
}
pinDelay()
for _, c := range pinnedCids {
st := cl.StatusLocal(ctx, c)
if st.Status != api.TrackerStatusUnpinned {
t.Errorf("%s should have been unpinned but is %s", c, st.Status)
}
st2, err := cl.ipfs.PinLsCid(context.Background(), c)
if err != nil {
t.Fatal(err)
}
if st2 != api.IPFSPinStatusUnpinned {
t.Errorf("%s should have been unpinned in ipfs but is %d", c, st2)
}
}
})
}
// func singleShardedPin(t *testing.T, cl *Cluster) {
// cShard, _ := cid.Decode(test.TestShardCid)
// cCdag, _ := cid.Decode(test.TestCdagCid)
// cMeta, _ := cid.Decode(test.TestMetaRootCid)
// pinMeta(t, cl, []cid.Cid{cShard}, cCdag, cMeta)
// }
// func pinMeta(t *testing.T, cl *Cluster, shardCids []cid.Cid, cCdag, cMeta cid.Cid) {
// for _, cShard := range shardCids {
// shardPin := api.Pin{
// Cid: cShard,
// Type: api.ShardType,
// MaxDepth: 1,
// PinOptions: api.PinOptions{
// ReplicationFactorMin: -1,
// ReplicationFactorMax: -1,
// },
// }
// err := cl.Pin(shardPin)
// if err != nil {
// t.Fatal("shard pin should have worked:", err)
// }
// }
// parents := cid.NewSet()
// parents.Add(cMeta)
// cdagPin := api.Pin{
// Cid: cCdag,
// Type: api.ClusterDAGType,
// MaxDepth: 0,
// PinOptions: api.PinOptions{
// ReplicationFactorMin: -1,
// ReplicationFactorMax: -1,
// },
// }
// err := cl.Pin(cdagPin)
// if err != nil {
// t.Fatal("pin should have worked:", err)
// }
// metaPin := api.Pin{
// Cid: cMeta,
// Type: api.MetaType,
// Clusterdag: cCdag,
// }
// err = cl.Pin(metaPin)
// if err != nil {
// t.Fatal("pin should have worked:", err)
// }
// }
// func TestClusterPinMeta(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// singleShardedPin(t, cl)
// }
// func TestClusterUnpinShardFail(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// singleShardedPin(t, cl)
// // verify pins
// if len(cl.Pins()) != 3 {
// t.Fatal("should have 3 pins")
// }
// // Unpinning metadata should fail
// cShard, _ := cid.Decode(test.TestShardCid)
// cCdag, _ := cid.Decode(test.TestCdagCid)
// err := cl.Unpin(cShard)
// if err == nil {
// t.Error("should error when unpinning shard")
// }
// err = cl.Unpin(cCdag)
// if err == nil {
// t.Error("should error when unpinning cluster dag")
// }
// }
// func TestClusterUnpinMeta(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// singleShardedPin(t, cl)
// // verify pins
// if len(cl.Pins()) != 3 {
// t.Fatal("should have 3 pins")
// }
// // Unpinning from root should work
// cMeta, _ := cid.Decode(test.TestMetaRootCid)
// err := cl.Unpin(cMeta)
// if err != nil {
// t.Error(err)
// }
// }
// func pinTwoParentsOneShard(t *testing.T, cl *Cluster) {
// singleShardedPin(t, cl)
// cShard, _ := cid.Decode(test.TestShardCid)
// cShard2, _ := cid.Decode(test.TestShardCid2)
// cCdag2, _ := cid.Decode(test.TestCdagCid2)
// cMeta2, _ := cid.Decode(test.TestMetaRootCid2)
// pinMeta(t, cl, []cid.Cid{cShard, cShard2}, cCdag2, cMeta2)
// shardPin, err := cl.PinGet(cShard)
// if err != nil {
// t.Fatal("pin should be in state")
// }
// if shardPin.Parents.Len() != 2 {
// t.Fatal("unexpected parent set in shared shard")
// }
// shardPin2, err := cl.PinGet(cShard2)
// if shardPin2.Parents.Len() != 1 {
// t.Fatal("unexpected parent set in unshared shard")
// }
// if err != nil {
// t.Fatal("pin should be in state")
// }
// }
// func TestClusterPinShardTwoParents(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// pinTwoParentsOneShard(t, cl)
// cShard, _ := cid.Decode(test.TestShardCid)
// shardPin, err := cl.PinGet(cShard)
// if err != nil {
// t.Fatal("double pinned shard should be pinned")
// }
// if shardPin.Parents == nil || shardPin.Parents.Len() != 2 {
// t.Fatal("double pinned shard should have two parents")
// }
// }
// func TestClusterUnpinShardSecondParent(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// pinTwoParentsOneShard(t, cl)
// if len(cl.Pins()) != 6 {
// t.Fatal("should have 6 pins")
// }
// cMeta2, _ := cid.Decode(test.TestMetaRootCid2)
// err := cl.Unpin(cMeta2)
// if err != nil {
// t.Error(err)
// }
// pinDelay()
// if len(cl.Pins()) != 3 {
// t.Fatal("should have 3 pins")
// }
// cShard, _ := cid.Decode(test.TestShardCid)
// cCdag, _ := cid.Decode(test.TestCdagCid)
// shardPin, err := cl.PinGet(cShard)
// if err != nil {
// t.Fatal("double pinned shard node should still be pinned")
// }
// if shardPin.Parents == nil || shardPin.Parents.Len() != 1 ||
// !shardPin.Parents.Has(cCdag) {
// t.Fatalf("shard node should have single original parent %v", shardPin.Parents.Keys())
// }
// }
// func TestClusterUnpinShardFirstParent(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// pinTwoParentsOneShard(t, cl)
// if len(cl.Pins()) != 6 {
// t.Fatal("should have 6 pins")
// }
// cMeta, _ := cid.Decode(test.TestMetaRootCid)
// err := cl.Unpin(cMeta)
// if err != nil {
// t.Error(err)
// }
// if len(cl.Pins()) != 4 {
// t.Fatal("should have 4 pins")
// }
// cShard, _ := cid.Decode(test.TestShardCid)
// cShard2, _ := cid.Decode(test.TestShardCid2)
// cCdag2, _ := cid.Decode(test.TestCdagCid2)
// shardPin, err := cl.PinGet(cShard)
// if err != nil {
// t.Fatal("double pinned shard node should still be pinned")
// }
// if shardPin.Parents == nil || shardPin.Parents.Len() != 1 ||
// !shardPin.Parents.Has(cCdag2) {
// t.Fatal("shard node should have single original parent")
// }
// _, err = cl.PinGet(cShard2)
// if err != nil {
// t.Fatal("other shard shoud still be pinned too")
// }
// }
// func TestClusterPinTwoMethodsFail(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// // First pin normally then sharding pin fails
// c, _ := cid.Decode(test.TestMetaRootCid)
// err := cl.Pin(api.PinCid(c))
// if err != nil {
// t.Fatal("pin should have worked:", err)
// }
// cCdag, _ := cid.Decode(test.TestCdagCid)
// cMeta, _ := cid.Decode(test.TestMetaRootCid)
// metaPin := api.Pin{
// Cid: cMeta,
// Type: api.MetaType,
// Clusterdag: cCdag,
// }
// err = cl.Pin(metaPin)
// if err == nil {
// t.Fatal("pin should have failed:", err)
// }
// err = cl.Unpin(c)
// if err != nil {
// t.Fatal("unpin should have worked:", err)
// }
// singleShardedPin(t, cl)
// err = cl.Pin(api.PinCid(c))
// if err == nil {
// t.Fatal("pin should have failed:", err)
// }
// }
// func TestClusterRePinShard(t *testing.T) {
// cl, _, _, _, _ := testingCluster(t)
// defer cleanRaft()
// defer cl.Shutdown()
// cCdag, _ := cid.Decode(test.TestCdagCid)
// cShard, _ := cid.Decode(test.TestShardCid)
// shardPin := api.Pin{
// Cid: cShard,
// Type: api.ShardType,
// ReplicationFactorMin: -1,
// ReplicationFactorMax: -1,
// Recursive: true,
// }
// err := cl.Pin(shardPin)
// if err != nil {
// t.Fatal("shard pin should have worked:", err)
// }
// parents := cid.NewSet()
// parents.Add(cCdag)
// shardPin.Parents = parents
// err = cl.Pin(shardPin)
// if err != nil {
// t.Fatal("repinning shard pin with different parents should have worked:", err)
// }
// shardPin.ReplicationFactorMin = 3
// shardPin.ReplicationFactorMax = 5
// err = cl.Pin(shardPin)
// if err == nil {
// t.Fatal("repinning shard pin with different repl factors should have failed:", err)
// }
// }
func TestClusterPins(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
c, _ := cid.Decode(test.TestCid1)
err := cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
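	// The new pin should appear in the state with the default (-1) replication factors.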
pins := cl.Pins(ctx)
if len(pins) != 1 {
t.Fatal("pin should be part of the state")
}
if !pins[0].Cid.Equals(c) || pins[0].ReplicationFactorMin != -1 || pins[0].ReplicationFactorMax != -1 {
t.Error("the Pin does not look as expected")
}
}
func TestClusterPinGet(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
c, _ := cid.Decode(test.TestCid1)
err := cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
pin, err := cl.PinGet(ctx, c)
if err != nil {
t.Fatal(err)
}
if !pin.Cid.Equals(c) || pin.ReplicationFactorMin != -1 || pin.ReplicationFactorMax != -1 {
t.Error("the Pin does not look as expected")
}
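	// Requesting a CID that was never pinned should return an error.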
c2, _ := cid.Decode(test.TestCid2)
_, err = cl.PinGet(ctx, c2)
if err == nil {
t.Fatal("expected an error")
}
}
func TestClusterUnpin(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
c, _ := cid.Decode(test.TestCid1)
// Unpin should error without pin being committed to state
err := cl.Unpin(ctx, c)
if err == nil {
t.Error("unpin should have failed")
}
// Unpin after pin should succeed
err = cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
err = cl.Unpin(ctx, c)
if err != nil {
t.Error("unpin should have worked:", err)
}
	// Unpin should also fail once the consensus component is shut down.
cl.consensus.Shutdown(ctx)
err = cl.Unpin(ctx, c)
if err == nil {
t.Error("expected an error but things worked")
}
}
func TestClusterPeers(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
peers := cl.Peers(ctx)
if len(peers) != 1 {
t.Fatal("expected 1 peer")
}
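	// The single peer should be ourselves; compare against the ID in the testing configuration.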
clusterCfg := &Config{}
clusterCfg.LoadJSON(testingClusterCfg)
if peers[0].ID != clusterCfg.ID {
t.Error("bad member")
}
}
func TestVersion(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
if cl.Version() != version.Version.String() {
t.Error("bad Version()")
}
}
func TestClusterRecoverAllLocal(t *testing.T) {
ctx := context.Background()
cl, _, _, _, _ := testingCluster(t)
defer cleanRaft()
defer cl.Shutdown(ctx)
c, _ := cid.Decode(test.TestCid1)
err := cl.Pin(ctx, api.PinCid(c))
if err != nil {
t.Fatal("pin should have worked:", err)
}
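	// Wait for the pin operation to settle before recovering.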
pinDelay()
recov, err := cl.RecoverAllLocal(ctx)
if err != nil {
t.Error("did not expect an error")
}
if len(recov) != 1 {
t.Fatalf("there should be only one pin, got = %d", len(recov))
}
if recov[0].Status != api.TrackerStatusPinned {
t.Errorf("the pin should have been recovered, got = %v", recov[0].Status)
}
}
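// TestClusterPinUnpinRoundTrip is an illustrative sketch (not part of the
// original suite) that exercises the full pin lifecycle using only helpers
// already present in this file. It assumes that, once the unpin commits,
// the shared state no longer lists the CID.
func TestClusterPinUnpinRoundTrip(t *testing.T) {
	ctx := context.Background()
	cl, _, _, _, _ := testingCluster(t)
	defer cleanRaft()
	defer cl.Shutdown(ctx)

	c, _ := cid.Decode(test.TestCid1)
	if err := cl.Pin(ctx, api.PinCid(c)); err != nil {
		t.Fatal("pin should have worked:", err)
	}
	pinDelay()

	// The pin must be retrievable from the shared state while pinned.
	if _, err := cl.PinGet(ctx, c); err != nil {
		t.Fatal("pin should be in state:", err)
	}

	if err := cl.Unpin(ctx, c); err != nil {
		t.Fatal("unpin should have worked:", err)
	}
	pinDelay()

	if len(cl.Pins(ctx)) != 0 {
		t.Error("state should be empty after unpin")
	}
}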