This adds a new "crdt" consensus component using go-ds-crdt.
This implies several refactors to make cluster fully consensus-component
independent:
* Delete mapstate and fully adopt dsstate (after people have migrated).
* Return errors from state methods rather than ignoring them.
* Add a new "datastore" module so that we can configure datastores in the
main configuration like other components.
* Let the consensus components fully define the "state.State". Thus, they do
not receive the state, they receive the storage where we put the state (a
go-datastore). See the sketch after this list.
* Allow customizing how the monitor component obtains Peers() (the current
peerset), including not using the current peerset at all. At the moment the
crdt consensus uses the monitoring component to define the current peerset.
Therefore the monitor component cannot rely on the consensus component to
produce a peerset.
* Re-factor/re-implement the "ipfs-cluster-service state"
operations. This includes the disappearance of the "migrate" one.
The CRDT consensus component creates a crdt-datastore (with ipfs-lite)
and uses it to initialize a dsstate. Thus the crdt-store is elegantly
wrapped. Any modifications to the state get automatically replicated to other
peers. We store all the CRDT DAG blocks in the local datastore.
The consensus components only expose a ReadOnly state, as any modifications to
the shared state should happen through them.
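A rough sketch of this wiring with go-ds-crdt, assuming an ipfs-lite peer
acts as the DAG syncer; exact signatures (notably the dsstate constructor,
which is only hinted at in a comment) vary across versions:

```go
package main

import (
	"context"

	ds "github.com/ipfs/go-datastore"
	crdt "github.com/ipfs/go-ds-crdt"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// newReplicatedStore assembles the crdt-datastore that the crdt consensus
// component wraps in a dsstate. dagSyncer stands for the ipfs-lite peer.
func newReplicatedStore(ctx context.Context, local ds.Batching, psub *pubsub.PubSub, dagSyncer crdt.DAGSyncer) (*crdt.Datastore, error) {
	// Broadcast CRDT heads to other peers over pubsub.
	bcast, err := crdt.NewPubSubBroadcaster(ctx, psub, "cluster-crdt")
	if err != nil {
		return nil, err
	}
	// All CRDT DAG blocks end up in the local datastore.
	store, err := crdt.New(local, ds.NewKey("/crdt"), dagSyncer, bcast, crdt.DefaultOptions())
	if err != nil {
		return nil, err
	}
	// A dsstate initialized on top of `store` now replicates every state
	// modification; the component exposes it only as a read-only state.
	return store, nil
}
```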
DHT and PubSub facilities must now be created outside of Cluster and passed in
so they can be re-used by different components.
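For example, the facilities could be created once with the standard
go-libp2p constructors and then handed in; the wiring into Cluster itself is
illustrative, and import paths vary with the libp2p version in use:

```go
package main

import (
	"context"

	"github.com/libp2p/go-libp2p-core/host"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// buildFacilities creates the DHT and PubSub outside of Cluster so that
// different components can re-use them.
func buildFacilities(ctx context.Context, h host.Host) (*dht.IpfsDHT, *pubsub.PubSub, error) {
	idht, err := dht.New(ctx, h)
	if err != nil {
		return nil, nil, err
	}
	psub, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		return nil, nil, err
	}
	// Both values would then be passed to the Cluster constructor and down
	// to the components that need them (e.g. the crdt consensus).
	return idht, psub, nil
}
```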
Remove basic monitor
This commit removes the `basic` monitor component, as it has not been used
by default since the pubsub monitor was introduced a few releases ago.
Issue #689
Snap builds have broken again. It seems the credentials expired without
warning, even though they were not very old. As promised, the next time
snaps broke, they would be removed.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Snaps won't publish with classic confinement or to non-edge channels.
For the moment this works around that so that, at least,
there are releases on 'edge' with strict confinement.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This enables support for testing in Jenkins.
Several minor adjustments have been made to improve the probability
that the tests pass, but there are still some random
problems with libp2p connections not becoming available or
ceasing to work (similar to Travis, but perhaps more often).
macOS and Windows builds are broken in worse ways (those issues will
need to be addressed in the future).
Thanks to @zenground0 and @victorbjelkholm for support!
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Raft will fail to take a snapshot when the applied index is
different from the last index. Therefore, we wait for
all updates to be applied before snapshotting.
If it still fails, we retry a few times.
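A sketch of that logic against hashicorp/raft; the polling interval, backoff
and retry count are illustrative:

```go
package main

import (
	"errors"
	"time"

	hraft "github.com/hashicorp/raft"
)

// snapshotWithRetries waits until the applied index catches up with the
// last index (a snapshot fails while they differ) and then snapshots,
// retrying a few times on failure.
func snapshotWithRetries(r *hraft.Raft, retries int) error {
	for i := 0; i < retries; i++ {
		// Wait for all log entries to be applied to the FSM.
		for r.AppliedIndex() < r.LastIndex() {
			time.Sleep(100 * time.Millisecond)
		}
		if err := r.Snapshot().Error(); err == nil {
			return nil
		}
		time.Sleep(time.Second) // back off and retry
	}
	return errors.New("could not take snapshot after retries")
}
```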
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
ipfs-cluster-service now locks before running the daemon and state
upgrade commands. The locking mechanism is heavily inspired by ipfs; see
go-ipfs' fsrepo. Unlock is called on exit to free up the repo. There is
one lockfile per repo. A very simple sharness test checks that two service
invocations cannot occur.
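A simplified sketch of the idea (the real code follows go-ipfs' fsrepo
locking; the lockfile name here is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// repoLock is a minimal one-lockfile-per-repo guard, acquired before the
// daemon and state upgrade commands run and released on exit.
type repoLock struct {
	path string
}

func lockRepo(repoPath string) (*repoLock, error) {
	p := filepath.Join(repoPath, "cluster.lock") // hypothetical filename
	// O_EXCL makes creation fail if another invocation holds the lock.
	f, err := os.OpenFile(p, os.O_CREATE|os.O_EXCL, 0o644)
	if err != nil {
		return nil, fmt.Errorf("repo at %s is already locked: %w", repoPath, err)
	}
	f.Close()
	return &repoLock{path: p}, nil
}

// unlock frees the repo for the next ipfs-cluster-service invocation.
func (l *repoLock) unlock() error {
	return os.Remove(l.path)
}
```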
A longstanding sharness/CI logging issue is addressed by exporting
verbose=t into the Travis environment. Now the output of commands from
within sharness test strings is displayed during Travis runs.
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
Use docker to run ipfs
Improve initialization of daemons
Fix a bunch of tests
Improve run script
Make sure everything is shell-compatible (remove bash syntax)
Fit to run in travis
* Added travis sharness build
* ipfs-cluster-ctl help text width overflow fixed
* Expect success from fixed sharness test
* First travis-sharness PR touch up
Undoes gx rewrite of cluster-ctl dependencies
Undoes linebreaking of usage descriptions
Changes test to check for lines in excess of 120 chars instead of 80
run-sharness script now correctly tracks exit codes and exits with error
The former RPC code had become a monster: it was really hard to get an
overview of the RPC API's capabilities, and there was lots of magic.
go-libp2p-rpc allows having a clearly defined RPC API which
shows which methods every component can use. A dedicated component for
performing remote requests, as well as the convoluted LeaderRPC and
BroadcastRPC methods, are no longer necessary.
Things are much simpler now: fewer goroutines are needed, the central
channel-handling bottleneck is gone, and RPC requests are very streamlined
in form. In the future, it would be straightforward to have components
living on different libp2p hosts, and it is much clearer how to plug into
the advanced cluster RPC API.
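As an illustration of the new style, assuming the go-libp2p-gorpc package
and made-up service names (not ipfs-cluster's actual RPC API):

```go
package main

import (
	"context"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
	gorpc "github.com/libp2p/go-libp2p-gorpc"
)

// PinSvc is a made-up RPC service: each component's capabilities become
// plain methods on a registered service, so the RPC surface is explicit
// and easy to survey.
type PinSvc struct{}

type PinArgs struct{ Cid string }
type PinReply struct{ OK bool }

// Pin follows the gorpc method shape: (ctx, args, *reply) error.
func (s *PinSvc) Pin(ctx context.Context, in PinArgs, out *PinReply) error {
	out.OK = true // a real component would pin in.Cid here
	return nil
}

func serve(h host.Host) error {
	srv := gorpc.NewServer(h, "/example-cluster-rpc")
	return srv.Register(&PinSvc{})
}

func callPin(h host.Host, dest peer.ID, c string) (bool, error) {
	client := gorpc.NewClient(h, "/example-cluster-rpc")
	var reply PinReply
	err := client.Call(dest, "PinSvc", "Pin", PinArgs{Cid: c}, &reply)
	return reply.OK, err
}
```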
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>