Commit Graph

17 Commits

Hector Sanjuan
7d16108751 Start using libp2p/go-libp2p-gorpc
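
go-libp2p-gorpc exposes Go RPC services over libp2p streams. As a rough
illustration of the pattern (a minimal sketch against the library's public
API using current import paths; the Ping service and protocol ID are made up
and are not cluster's actual RPC setup):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/libp2p/go-libp2p"
	gorpc "github.com/libp2p/go-libp2p-gorpc"
	"github.com/libp2p/go-libp2p/core/peerstore"
	"github.com/libp2p/go-libp2p/core/protocol"
)

// PingService is an illustrative RPC service; cluster registers its own
// services instead.
type PingArgs struct{ Data []byte }
type PingReply struct{ Data []byte }
type PingService struct{}

func (s *PingService) Ping(ctx context.Context, in PingArgs, out *PingReply) error {
	out.Data = in.Data // echo the payload back
	return nil
}

func main() {
	proto := protocol.ID("/example/rpc/1.0.0") // hypothetical protocol tag

	h1, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h1.Close()
	h2, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h2.Close()

	// Server side: register the service on h1.
	server := gorpc.NewServer(h1, proto)
	if err := server.Register(&PingService{}); err != nil {
		log.Fatal(err)
	}

	// Client side: tell h2 how to reach h1, then call the remote method.
	h2.Peerstore().AddAddrs(h1.ID(), h1.Addrs(), peerstore.PermanentAddrTTL)
	client := gorpc.NewClient(h2, proto)

	var reply PingReply
	if err := client.Call(h1.ID(), "PingService", "Ping",
		PingArgs{Data: []byte("hi")}, &reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(reply.Data))
}
```
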
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-10-17 15:28:03 +02:00
Hector Sanjuan
5bbc699bb4 Issue #340: Fix some data races
Unfortunately, there are still some data races in yamux
(https://github.com/libp2p/go-libp2p/issues/396), so we can't
enable this by default.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-08-15 12:27:01 +02:00
Hector Sanjuan
33d9cdd3c4 Feat: emancipate Consensus from the Cluster component
This commit promotes the Consensus component (and Raft) to a fully
independent component like the others, passed to NewCluster during
initialization. Cluster (the main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes,
which I explain below.

Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
  * The current cluster peerset, as required by the implementation
  * The current cluster pinset (shared state)

While pinset maintenance has always lived in the consensus layer, peerset
maintenance was handled by both the main component (starting with the "peers"
key in the configuration) AND the Raft component (internally),
which generated lots of confusion: if users edited the peers in the
configuration, they would be greeted with an error.

The bootstrap process (adding a peer to an existing cluster) and its
configuration key complicated things further, since the main component handled
it, but only when consensus was initialized and in single-peer mode.

In all this we also mixed the peerstore (the list of peer addresses in the
libp2p host) with the peerset, when they need not be linked.

Initializing the consensus layer before calling NewCluster brought to light
all the difficulties of keeping the existing implementation working as before.
Thus, the following changes have been introduced:

* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.

* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be rebuilt from the
network bootstrappers and the DHT. Cluster should probably also allow
discovery of peer addresses (when not bootstrapping, in which case we already
have them), but in the meantime we read and persist the peerstore addresses
for cluster peers in this file, separate from the configuration.
Note that DNS multiaddresses are now fully supported, and no IPs are saved
when we have DNS multiaddresses for a peer.

* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.

* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a raft peer to
be launched in Staging mode, waiting to be added to a running consensus rather
than electing itself as leader as before (see the sketch after this list of
changes). This additionally allows us to track when the peer has become a
Voter, which only happens once it has caught up with the state, something that
was wonky previously.

* The raft configuration now includes an InitPeerset key, which allows
providing a peerset for new peers and which is ignored when staging==true. The
whole Raft initialization code is much cleaner and more robust now.

* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated list
of entries). Bootstrapping initializes Raft with staging == true and then
calls Join in the main cluster component. Only when the Raft peer transitions
to Voter does consensus become ready, and only then does cluster become Ready.
This is cleaner, works better and is less complex than before (which supported
both flags and config values). We also automatically back up and clean the
state whenever we bootstrap.

* ipfs-cluster-service no longer runs the daemon. Starting cluster now
requires "ipfs-cluster-service daemon". The daemon-specific flags (bootstrap,
alloc) are now flags of the daemon subcommand. Here we mimic ipfs ("ipfs"
does not start the daemon but prints help) and pave the way for merging both
service and ctl in the future.
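
Schematically, the new wiring looks like the following self-contained Go
sketch. All names here (raftConsensus, NewCluster's signature, the bootstrap
address) are hypothetical simplifications; the real code lives in
ipfs-cluster-service and passes many more components to NewCluster:

```go
package main

import (
	"errors"
	"fmt"
)

// Consensus is the component that maintains the peerset and the shared pinset.
type Consensus interface {
	Ready() <-chan struct{} // closed once the raft peer has become a Voter
	Shutdown() error
}

// raftConsensus stands in for the Raft-backed implementation.
type raftConsensus struct {
	ready chan struct{}
}

// newRaftConsensus creates the consensus layer outside of Cluster. With
// staging == true the peer waits to be added to a running consensus instead
// of electing itself as leader.
func newRaftConsensus(initPeerset []string, staging bool) *raftConsensus {
	r := &raftConsensus{ready: make(chan struct{})}
	if !staging {
		// InitPeerset / single-peer start: nothing to wait for.
		close(r.ready)
	}
	return r
}

func (r *raftConsensus) Ready() <-chan struct{} { return r.ready }
func (r *raftConsensus) Shutdown() error        { return nil }

// Cluster receives a ready-made Consensus instead of building it internally.
type Cluster struct {
	consensus Consensus
}

// NewCluster is handed the consensus component at initialization time.
func NewCluster(c Consensus) (*Cluster, error) {
	if c == nil {
		return nil, errors.New("a consensus component is required")
	}
	return &Cluster{consensus: c}, nil
}

// Join contacts an existing peer when bootstrapping; Cluster only becomes
// Ready once the consensus layer does.
func (c *Cluster) Join(addr string) error {
	fmt.Println("joining via", addr)
	return nil
}

func main() {
	bootstrap := "/dns4/cluster0.example.com/tcp/9096" // hypothetical address
	staging := bootstrap != ""                         // bootstrapping => staging mode

	consensus := newRaftConsensus(nil, staging)
	cluster, err := NewCluster(consensus)
	if err != nil {
		panic(err)
	}
	if staging {
		_ = cluster.Join(bootstrap)
	}
}
```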

While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way to
launch a cluster peer, and harder to make mistakes.

As a side effect, the PR also:

* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-07 07:39:41 +02:00
Hector Sanjuan
2b6dfa45cd cluster-service: add version subcommand and change some startup logging
The --version flag comes by default from our cli library, so I left it. The
version subcommand prints only the version number + the short commit,
so it's a bit easier to parse.

I have additionally reduced the amount of output on startup by converting
some messages to debug. I wish there were a level between INFO and DEBUG,
though.
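
For illustration, with a urfave/cli-style app such a subcommand looks roughly
like the sketch below (names, version values and the ldflags hint are
hypothetical, not the actual ipfs-cluster-service code):

```go
package main

import (
	"fmt"
	"os"

	cli "github.com/urfave/cli"
)

// version and commit would typically be injected at build time, e.g. with
// -ldflags "-X main.version=... -X main.commit=..." (values made up here).
var (
	version = "dev"
	commit  = "unknown"
)

func main() {
	app := cli.NewApp()
	app.Name = "ipfs-cluster-service"
	app.Version = version // keeps the library's default --version flag working

	app.Commands = []cli.Command{
		{
			Name:  "version",
			Usage: "print the version number and short commit",
			Action: func(c *cli.Context) error {
				// Easy to parse: a single line, no banner.
				fmt.Printf("%s-%s\n", version, commit)
				return nil
			},
		},
	}

	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```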

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-12-13 10:25:01 +01:00
Hector Sanjuan
145dced3e8 Cluster: Fix libp2p host getting shutdown in the middle of peer removal
This is likely what was causing PeerRemove tests to fail randomly
but very often. We cancelled the Cluster context before shutting down
the Consensus component. This killed networking and aborted
the peer removal operations when the leader was removing itself.

As a result, it would error with "leadership lost", which would
trigger a retry that set the final error to "context cancelled",
because the shutdown of the consensus component proceeds during the
retry, cancelling the consensus context.

This did not only affect tests; it might also have affected operations when
running cluster.
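
In other words, the consensus component must be shut down while its context
(and networking) is still alive. A minimal sketch of the corrected ordering,
with hypothetical types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// consensus stands in for the Raft-backed component (hypothetical sketch).
type consensus struct{ ctx context.Context }

// Shutdown may still need networking to commit a final operation (e.g.
// removing ourselves from the peerset), so it must run before the Cluster
// context is cancelled.
func (c *consensus) Shutdown() error {
	select {
	case <-c.ctx.Done():
		return errors.New("context cancelled: peer removal aborted")
	default:
		fmt.Println("peer removal committed, consensus stopped")
		return nil
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cons := &consensus{ctx: ctx}

	// Correct order: shut down consensus first, cancel the Cluster context last.
	if err := cons.Shutdown(); err != nil {
		fmt.Println("shutdown error:", err)
	}
	cancel() // previously this ran first and aborted the removal above
}
```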

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-15 02:33:46 +01:00
Hector Sanjuan
417f30c9ea Avoid shutting down consensus in the middle of a commit
I think this will prevent some random test failures
that happen when we realize that we are no longer in the peerset
and trigger a shutdown, but Raft has not fully finished
committing the operation, which then triggers an error
and a retry. But the contexts are cancelled during the retry,
so it won't find a leader and will finally error
with that message.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-14 23:29:56 +01:00
Hector Sanjuan
b852dfa892 Fix #219: WIP: Remove duplicate peer accounting
This change removes the duplicated peer accounting in the PeerManager
component:

* No more committing PeerAdd and PeerRm log entries
* The Raft peer set is the source of truth
* Basic broadcasting is used to communicate peer multiaddresses
  in the cluster
* A peer can only be added in a healthy cluster
* A peer can be removed from any cluster which can still commit
* This also adds support for multiple multiaddresses per peer

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-08 20:04:04 +01:00
Hector Sanjuan
c912cfd205 Issue #131: Destroy raft data when the peer has been removed
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 13:25:28 +01:00
Hector Sanjuan
7a5f8f184b Issue #131: Improvements adding and removing
This works on the remove+shutdown procedure and fixes a few small
issues.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 13:00:32 +01:00
Hector Sanjuan
74ed634653 Raft: add cachestore for the log store
Just like Consul does it.
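
With hashicorp/raft this roughly amounts to wrapping the on-disk log store in
raft.NewLogCache (a sketch; the path and cache size below are illustrative):

```go
package main

import (
	"log"

	"github.com/hashicorp/raft"
	raftboltdb "github.com/hashicorp/raft-boltdb"
)

func main() {
	// Durable log/stable store backed by BoltDB.
	boltStore, err := raftboltdb.NewBoltStore("raft.db") // illustrative path
	if err != nil {
		log.Fatal(err)
	}

	// Wrap the log store in a small in-memory cache of the most recent
	// entries, so hot reads during replication skip the disk.
	cachedLogs, err := raft.NewLogCache(512, boltStore) // 512: illustrative size
	if err != nil {
		log.Fatal(err)
	}

	// cachedLogs is then passed to raft.NewRaft as the LogStore, while the
	// bolt store continues to serve as the StableStore.
	_ = cachedLogs
}
```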

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
199dbb944a Raft/PeerRm: attempt more orderly peer removal
Wait until FSM has applied the operation.
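
One way to express that wait on top of hashicorp/raft (a hedged sketch, not
the actual implementation):

```go
package consensus

import (
	"context"
	"time"

	"github.com/hashicorp/raft"
)

// waitForApplied blocks until the local FSM has applied up to the index at
// which an operation (e.g. a peer removal) was committed. Sketch only.
func waitForApplied(ctx context.Context, r *raft.Raft, future raft.IndexFuture) error {
	if err := future.Error(); err != nil {
		return err // the operation itself failed (e.g. leadership lost)
	}
	target := future.Index()
	for r.AppliedIndex() < target {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(50 * time.Millisecond): // poll; illustrative interval
		}
	}
	return nil
}
```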

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
7df2277684 Consensus: only log pins committed on the leader.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
18dbf1a93b Issue #131: Do not abort on bad peerset
Print a warning instead.

Shut down raft on peerRm.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
848023e381 Fix #139: Update cluster to Raft 1.0.0
The main difference is that the new version of Raft is stricter
about starting raft peers which already contain configurations.

For a start, cluster will fail to start if the configured cluster
peers are different from the Raft peers. The user will have to
manually cleanup Raft (TODO: an ipfs-cluster-service command for it).
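
For example, with hashicorp/raft 1.0.0 a node can only be bootstrapped with a
peerset when no previous state exists on disk; otherwise the stored
configuration is what counts. A sketch of that kind of check (not cluster's
actual code):

```go
package consensus

import (
	"github.com/hashicorp/raft"
)

// maybeBootstrap bootstraps a brand-new Raft node only when no previous state
// exists on disk. With existing state, Raft uses the stored peer
// configuration, and cluster separately checks that it matches the configured
// peers, refusing to start otherwise. Sketch only.
func maybeBootstrap(
	cfg *raft.Config,
	logs raft.LogStore,
	stable raft.StableStore,
	snaps raft.SnapshotStore,
	trans raft.Transport,
	servers []raft.Server,
) error {
	hasState, err := raft.HasExistingState(logs, stable, snaps)
	if err != nil {
		return err
	}
	if hasState {
		// Nothing to do; a mismatching peerset requires cleaning up the
		// old Raft data manually before starting again.
		return nil
	}
	return raft.BootstrapCluster(cfg, logs, stable, snaps, trans,
		raft.Configuration{Servers: servers})
}
```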

Additionally, this commit adds extra options to the consensus/raft
configuration section, adds tests and improves existing ones, and
cleans up certain code sections.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
8f06baa1bf Issue #162: Rework configuration format
This commit reimplements the ipfs-cluster configuration under
the following premises:

  * Each component is initialized with a configuration object
  defined by its module
  * Each component decides what the JSON representation of its
  configuration looks like
  * Each component parses and validates its own configuration
  * Each component exposes its own defaults
  * Component configurations make up the sections of a
  central JSON configuration file (which replaces the current
  JSON format)
  * Component configurations implement a common interface
  (config.ComponentConfig) with a set of common operations
  (sketched after this list)
  * The central configuration file is managed by a
  config.ConfigManager which:
    * Registers ComponentConfigs
    * Assigns the corresponding section of the JSON file to each
    component and delegates the parsing
    * Delegates the JSON generation for each section
    * Can be notified when the configuration is updated and must be
    saved to disk
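
The common interface could look roughly like this (a sketch; method names are
illustrative of the operations described above, not necessarily the exact
ones in the code):

```go
// Package config (sketch). Method names are illustrative of the common
// operations each component configuration must support.
package config

// ComponentConfig is implemented by every component's configuration object.
type ComponentConfig interface {
	// ConfigKey returns the name of the section this component owns
	// inside the central JSON file (e.g. "restapi", "raft").
	ConfigKey() string
	// Default resets the configuration to its component-defined defaults.
	Default() error
	// LoadJSON parses and validates the component's own JSON section.
	LoadJSON(raw []byte) error
	// ToJSON generates the JSON representation of this section.
	ToJSON() ([]byte, error)
	// Validate checks that the current values are usable.
	Validate() error
	// SaveCh signals the manager that the configuration changed and the
	// central file should be written to disk.
	SaveCh() chan struct{}
}
```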

The new service.json would then look as follows:

```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```

This new format aims to be easily extensible per component. As such,
it already surfaces quite a few new options which were hardcoded
before.

Additionally, since the Go APIs have changed, some redundant methods have
been removed, and some small refactoring has been done to take advantage of
the new approach.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-18 00:00:12 +02:00
Hector Sanjuan
e2efef8469 go lint, go vet, put the Consensus component behind an interface.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-03-14 16:37:29 +01:00
Hector Sanjuan
c2faf48177 Issue #18: Move Consensus and PeerMonitor to their own submodules
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-03-13 18:40:35 +01:00