Commit Graph

32 Commits

Author SHA1 Message Date
Kishan Mohanbhai Sagathiya
05a4661145 Use the updated sharness library 2019-08-19 16:49:27 +05:30
Hector Sanjuan
063c5f1b78 Service: Select consensus on "init" (not on "daemon")
Fixes #865.

This makes the necessary changes so that consensus is selected on "init" via a
flag that defaults to "crdt". This generates only a "crdt" or a "raft"
section, not both.

If the configuration file has a "raft" section, "raft" will be used to start
the daemon. If it has a "crdt" section, "crdt" will be used. If it has none or
both sections, an error will happen.

This also affects "state *" commands, which will now autoselect how to work
from the existing configuration.
2019-08-09 19:20:53 +02:00
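
To make the selection rule concrete, here is a minimal Go sketch (illustrative only, not the actual ipfs-cluster code) of picking the consensus from whichever section exists in the configuration and failing when both or neither are present:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// selectConsensus mimics the rule described above: exactly one of the
// "raft" or "crdt" sections must be present in the consensus config.
func selectConsensus(consensusSection []byte) (string, error) {
	var sections map[string]json.RawMessage
	if err := json.Unmarshal(consensusSection, &sections); err != nil {
		return "", err
	}
	_, hasRaft := sections["raft"]
	_, hasCRDT := sections["crdt"]
	switch {
	case hasRaft && hasCRDT:
		return "", errors.New("both raft and crdt sections present: remove one")
	case hasRaft:
		return "raft", nil
	case hasCRDT:
		return "crdt", nil
	default:
		return "", errors.New("no consensus section found: re-run init")
	}
}

func main() {
	// A config generated with the default flag would contain only "crdt".
	consensus, err := selectConsensus([]byte(`{"crdt": {}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println("starting daemon with consensus:", consensus)
}
```
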
Hector Sanjuan
153e4f97c9 Sharness: run state import/export with both crdt and raft 2019-07-30 13:48:48 +02:00
Hector Sanjuan
6188d6ff52 service: Make --consensus a mandatory flag
Launching a raft peer when you wanted to do crdt (and vice-versa) has a few implications.

The same applies when importing and exporting states.

Users starting cluster peers should be explicit about their consensus choice.

Also, if we ever want to make `crdt` the default, we can't do that before
making `raft` non-default first. I don't like breaking things, but otherwise
the experience for new users wanting to try crdts might be awful.
2019-07-26 18:15:41 +02:00
Kishan Mohanbhai Sagathiya
654c376a57 Fixed sharness test with new identity
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2019-05-09 00:13:29 +05:30
Kishan Sagathiya
61f86be96c We are using https://github.com/chriscool/sharness
We aren't using https://github.com/mlafeldt/sharness, code reference
below
fdfe8def94/Makefile (L67)

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-12-21 08:24:40 +05:30
Hector Sanjuan
031ff7183b Fix sharness
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-08-09 02:41:30 +02:00
Hector Sanjuan
7638fabe1e Fix sharness
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-25 09:54:52 +02:00
Hector Sanjuan
33d9cdd3c4 Feat: emancipate Consensus from the Cluster component
This commit promotes the Consensus component (and Raft) to become a fully
independent thing like other components, passed to NewCluster during
initialization. Cluster (main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes
that I will explain below.

Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
  * The current cluster peerset, as required by the implementation
  * The current cluster pinset (shared state)

While the pinset maintenance has always been in the consensus layer, the
peerset maintenance was handled by the main component (starting with the "peers"
key in the configuration) AND the Raft component (internally),
and this generated lots of confusion: if the user edited the peers in the
configuration they would be greeted with an error.

The bootstrap process (adding a peer to an existing cluster) and configuration
key also complicated many things, since the main component handled it, but only
when the consensus was initialized and in single-peer mode.

In all this we also mixed the peerstore (list of peer addresses in the libp2p
host) with the peerset, when they need not be linked.

By initializing the consensus layer before calling NewCluster, all the
difficulties in maintaining the current implementation in the same way
have come to light. Thus, the following changes have been introduced:

* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.

* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be rebuilt from the
network bootstrappers and the DHT. Cluster should probably also allow
discoverability of peer addresses (when not bootstrapping, since in that case
we have them), but in the meantime, we will read and persist the peerstore
addresses for cluster peers in this file, separate from the configuration.
Note that DNS multiaddresses are now fully supported and no IPs are saved
when we have DNS multiaddresses for a peer.

* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things independently of Cluster.

* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently of Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a raft peer to
be launched in Staging mode, waiting to be added to a running consensus rather
than electing itself as leader right away, as it did before. This additionally
allows us to track when the peer has become a
Voter, which only happens when it's caught up with the state, something that
was wonky previously.

* The raft configuration now includes an InitPeerset key, which allows
providing a peerset for new peers and which is ignored when staging==true. The
whole Raft initialization code is way cleaner and stronger now.

* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated list
of entries). What bootstrap does is initialize Raft with staging == true,
and then call Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready and cluster become Ready.
This is cleaner, works better and is less complex than before (which supported
both flags and config values). We also automatically back up and clean the
state whenever we are bootstrapping.

* ipfs-cluster-service no longer runs the daemon. Starting cluster now
requires "ipfs-cluster-service daemon". The daemon-specific flags (bootstrap,
alloc) are now flags for the daemon subcommand. Here we mimic ipfs ("ipfs"
does not start the daemon but prints help) and pave the way for merging both
service and ctl in the future.

While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to the user the right way
to launch a cluster peer, and more difficult to make mistakes.

As a side effect, the PR also:

* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-07 07:39:41 +02:00
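
As an illustration of the peerstore-file idea above, the following hedged Go sketch parses such a file assuming one multiaddress per line (an assumed layout; loadPeerstoreFile is a hypothetical helper, not the pstoremgr API), keeping DNS multiaddresses as written:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	ma "github.com/multiformats/go-multiaddr"
)

// loadPeerstoreFile reads peer multiaddresses, one per line, skipping
// blank lines. DNS multiaddresses (e.g. /dns4/peer1.example.com/tcp/9096)
// are stored as-is rather than being resolved to IPs.
func loadPeerstoreFile(path string) ([]ma.Multiaddr, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var addrs []ma.Multiaddr
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		addr, err := ma.NewMultiaddr(line)
		if err != nil {
			return nil, fmt.Errorf("invalid multiaddress %q: %w", line, err)
		}
		addrs = append(addrs, addr)
	}
	return addrs, scanner.Err()
}

func main() {
	addrs, err := loadPeerstoreFile("peerstore")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}
```
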
Hector Sanjuan
b013850f94 Feat #277: Add test about wanted < 0
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-01-19 22:24:03 +01:00
Wyatt Daviau
a99b403273 Sharness hardening:
Better error messages in test-lib init functions and ipfshttp
Cleanup functions are now called after every test file

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-01-09 14:17:21 -05:00
Wyatt Daviau
8361b8afe4 Add and refine cli interface for cluster state
Added import, export, cleanup.
Changed state interface.
New sharness tests.

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2017-12-28 09:06:28 -05:00
Wyatt
47b744f1c0 ipfs-cluster-service state upgrade cli command
ipfs-cluster-service now has a migration subcommand that upgrades
persistent state snapshots with an out-of-date format version to the
newest version of the raft state. If all cluster members shut down with
consistent state, upgrade ipfs-cluster, and run the state upgrade command,
the new version of cluster will be compatible with persistent storage.
ipfs-cluster now validates its persistent state upon loading it and exits
with a clear error in case the state format version is not up to date.

Raft snapshotting is enforced on all shutdowns and the JSON backup is no
longer run. This commit makes use of recent changes to libp2p-raft
allowing raft states to implement their own marshaling strategies. Now
mapstate handles the logic for its (de)serialization. In the interest of
supporting various potential upgrade formats, the state serialization
begins with a varint (right now one byte) describing the version.

Some Go tests are modified and a Go test is added to cover the new
ipfs-cluster raft snapshot reading functions. Sharness tests are added to
cover the state upgrade command.
2017-11-28 22:35:48 -05:00
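
The varint version prefix mentioned above is a generic technique; a minimal sketch (names are illustrative, not the actual mapstate code) could look like this:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

const stateFormatVersion = 1

// marshalWithVersion prefixes the serialized state with a uvarint
// format version (a single byte for small version numbers).
func marshalWithVersion(state []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, stateFormatVersion)
	return append(buf[:n], state...)
}

// unmarshalWithVersion reads the version back so the loader can refuse
// (or upgrade) snapshots written in an older format.
func unmarshalWithVersion(data []byte) (uint64, []byte, error) {
	r := bytes.NewReader(data)
	version, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, nil, err
	}
	return version, data[len(data)-r.Len():], nil
}

func main() {
	blob := marshalWithVersion([]byte(`{"pins":{}}`))
	v, state, err := unmarshalWithVersion(blob)
	if err != nil {
		panic(err)
	}
	fmt.Printf("format version %d, state %s\n", v, state)
}
```
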
Hector Sanjuan
8f06baa1bf Issue #162: Rework configuration format
The following commit reimplements ipfs-cluster configuration under
the following premises:

  * Each component is initialized with a configuration object
  defined by its module
  * Each component decides what the JSON representation of its
  configuration looks like
  * Each component parses and validates its own configuration
  * Each component exposes its own defaults
  * Component configurations make up the sections of a
  central JSON configuration file (which replaces the current
  JSON format)
  * Component configurations implement a common interface
  (config.ComponentConfig) with a set of common operations
  * The central configuration file is managed by a
  config.ConfigManager which:
    * Registers ComponentConfigs
    * Assigns the corresponding sections of the JSON file to each
    component and delegates the parsing
    * Delegates the JSON generation for each section
    * Can be notified when the configuration is updated and must be
    saved to disk

The new service.json would then look as follows:

```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```

This new format aims to be easily extensible per component. As such,
it already surfaces quite a few new options which were hardcoded
before.

Additionally, since the Go APIs have changed, some redundant methods have been
removed and some small refactoring has been done to take advantage of the new
approach.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-18 00:00:12 +02:00
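
A hedged Go sketch of the component-configuration pattern described in this commit; the method set below is illustrative and may not match the real config.ComponentConfig interface exactly:

```go
// Package config sketch: every component exposes a configuration object
// implementing this interface so a central manager can register it,
// hand it its JSON section, and ask it to produce one.
package config

// ComponentConfig is the common interface for per-component configurations.
type ComponentConfig interface {
	// ConfigKey returns the JSON section name for this component
	// (e.g. "raft", "restapi", "ipfshttp").
	ConfigKey() string
	// Default fills in the component's default values.
	Default() error
	// LoadJSON parses the component's section of the central file.
	LoadJSON(raw []byte) error
	// ToJSON produces the JSON for this component's section.
	ToJSON() ([]byte, error)
	// Validate checks that the loaded values are usable.
	Validate() error
}
```
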
dgrisham
25a910faad BasicAuth implementation -- CLI, server, and tests. 2017-10-14 15:55:21 -06:00
Wyatt
534da6c923 Moving exec call to sh as bash is deprecated in go-ipfs 2017-10-11 15:51:17 -04:00
Hector Sanjuan
fcb6a98d03 More test fixing
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-08-01 14:25:20 +02:00
Hector Sanjuan
dac6fcd18d Fix tests
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-08-01 13:50:32 +02:00
dgrisham
aaefccb8a9 Bash -> sh-style conditional 2017-07-29 10:55:04 -06:00
dgrisham
a8fedc5c29 Optional test_cluster_init arg to modify config files 2017-07-28 12:30:10 -06:00
Hector Sanjuan
69ffd20dbf Generate secret by default.
This:

* Takes CLUSTER_SECRET as the secret whenever it is defined
* Generates the secret by default in other cases
* Only prompts with -s, -custom-secret.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-07-25 00:16:07 +02:00
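
A minimal sketch of the default-secret behaviour, assuming a hypothetical clusterSecret helper (not the actual ipfs-cluster-service code): use CLUSTER_SECRET when defined, otherwise generate a random 32-byte hex-encoded secret.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os"
)

// clusterSecret returns CLUSTER_SECRET if defined and otherwise
// generates a random 32-byte, hex-encoded secret.
func clusterSecret() (string, error) {
	if s := os.Getenv("CLUSTER_SECRET"); s != "" {
		return s, nil
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	secret, err := clusterSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println("cluster secret:", secret)
}
```
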
Hector Sanjuan
4ea1777050 Fix sharness tests
Use docker to run ipfs
Improve initialization of daemons
Fix a bunch of tests
Improve run script
Make sure everything is shell-compatible (remove bash syntax)
Fit to run in Travis
2017-07-21 11:24:17 +02:00
Wyatt
633c97961c Fixing whitespace and minor Make issues 2017-05-05 10:02:07 -07:00
Wyatt
8286cd0ae2 Address 2nd round of comments 2017-05-05 10:00:17 -07:00
Wyatt
6f4c139cf9 Sharness Tests updated
I have attempted to address all of the comments on the original PR.
The sharness tests now make use of prereqs; see test-lib for details
and helper functions.
2017-05-05 09:59:23 -07:00
Wyatt
eac6c08a22 Sharness tests and library precondition setting fns updated 2017-05-05 09:59:22 -07:00
Wyatt
4a231b746f ctl basic cleanup 2017-05-05 09:59:22 -07:00
Wyatt
568b69b721 GPL compatibility, fully automatic sharness build and clean 2017-05-05 09:59:22 -07:00
Wyatt
f547077700 Sharness Tests updated
I have attempted to address all of the comments on the original PR.
The sharness tests now make use of prereqs; see test-lib for details
and helper functions.
2017-05-05 09:09:48 -07:00
Wyatt
916b7e8849 Sharness tests and library precondition setting fns updated 2017-05-05 09:09:48 -07:00
Wyatt
148b08ffb8 ctl basic cleanup 2017-05-05 09:09:48 -07:00
Wyatt
1a966aee53 GPL compatibility, fully automatic sharness build and clean 2017-05-05 09:09:47 -07:00