This commit promotes the Consensus component (and Raft) to a fully
independent component like the others, passed to NewCluster during
initialization. Cluster (the main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes
that I will explain below.
Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
* The current cluster peerset, as required by the implementation
* The current cluster pinset (shared state)
While the pinset maintenance has always been in the consensus layer, the
peerset maintenance was handled by the main component (starting with the "peers"
key in the configuration) AND by the Raft component (internally),
and this generated lots of confusion: if the user edited the peers in the
configuration they would be greeted with an error.
The bootstrap process (adding a peer to an existing cluster) and its configuration
key also complicated many things, since the main component performed it, but only
after the consensus layer was initialized and only in single-peer mode.
On top of all this, we also mixed the peerstore (list of peer addresses in the libp2p
host) with the peerset, when they need not be linked.
By initializing the consensus layer before calling NewCluster, all the
difficulties of keeping the previous approach working in the same way
have come to light. Thus, the following changes have been introduced:
* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.
* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be re-built from the
network bootstrappers and the DHT. Cluster should probably also allow
discoverability of peer addresses (when not bootstrapping, as in that case
we already have them), but in the meantime we will read and persist the peerstore
addresses for cluster peers in this file, which is separate from the configuration
(a sample file is shown after this list). Note that DNS multiaddresses are now
fully supported and no IPs are saved when we have DNS multiaddresses for a peer.
* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.
* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a Raft peer to
be launched in Staging mode, waiting to be added to a running consensus, and
thus not electing itself as leader right away, as was happening before.
This additionally allows us to track when the peer has become a
Voter, which only happens when it has caught up with the state, something that
was wonky previously.
* The raft configuration now includes an InitPeerset key, which allows providing
a peerset for new peers, and which is ignored when staging==true. The
whole Raft initialization code is way cleaner and more robust now.
* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated list
of entries). What bootstrap does is initialize Raft with staging == true,
and then call Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready, and cluster becomes Ready.
This is cleaner, works better and is less complex than before (when both
flags and config values were supported). We also back up and clean the state
automatically whenever we are bootstrapping (see the usage example after this list).
* ipfs-cluster-service no longer runs the daemon by default. Starting cluster
now requires "ipfs-cluster-service daemon". The daemon-specific flags (bootstrap,
alloc) are now flags of the daemon subcommand. Here we mimic ipfs ("ipfs"
prints help rather than starting the daemon) and pave the path for merging both
service and ctl in the future.
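As an illustration of the peerstore file mentioned above, the format is assumed
from the description to be a plain list of multiaddresses, one per line; the
addresses and peer IDs below are made-up placeholders:

    /dns4/cluster1.example.com/tcp/9096/ipfs/QmPlaceholderPeerID1
    /ip4/192.168.1.10/tcp/9096/ipfs/QmPlaceholderPeerID2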
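And as a usage sketch of the new daemon subcommand together with --bootstrap,
as described above (the multiaddresses are placeholders):

    # run a cluster peer
    ipfs-cluster-service daemon

    # run a new peer and bootstrap it to an existing cluster
    # (a comma-separated list of addresses is accepted)
    ipfs-cluster-service daemon --bootstrap /ip4/192.168.1.10/tcp/9096/ipfs/QmPlaceholderPeerID1,/dns4/cluster1.example.com/tcp/9096/ipfs/QmPlaceholderPeerID2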
While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way
to launch a cluster peer, and more difficult to make mistakes.
As a side effect, the PR also:
* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Added an HTTPS server and client in restapi_test.go, with a sample unit test in TestRestAPIIDEndpoint
License: MIT
Signed-off-by: Liang Gao <lianggao91@hotmail.com>
The --wait flag was being completely ignored unless --no-status was also passed,
which makes no sense, because then the wait status would get printed anyway.
Now we wait when --wait is given and print the status when --no-status is not passed.
If we have been waiting, the status comes from the wait; otherwise we request it.
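For example, with the fix the behavior looks like this (the subcommand and the
CID placeholder are illustrative assumptions; only the flags are taken from
the description above):

    # wait for the pin to reach a final status, then print that status
    ipfs-cluster-ctl pin add --wait <cid>

    # wait for the pin to reach a final status, but print nothing
    ipfs-cluster-ctl pin add --wait --no-status <cid>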
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
address feedback and add WaitForPinnedStatus
address feedback and rework WaitFor
implement StatusFilter
address feedback and rework StatusFilter
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
The IPFS() method returns an ipfs Shell pointing to the ipfs-cluster
proxy endpoint. The location can be customized (via the ProxyAddr configuration
option) or it is assumed to be the same as the PeerAddr/APIAddr, with
a different port (the default).
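A minimal sketch of what this enables, assuming the Shell is the go-ipfs-api
one (construction of the rest/client instance itself is omitted here; only the
IPFS() accessor is taken from the description above):

    package clusterexample

    import (
        "fmt"
        "log"

        shell "github.com/ipfs/go-ipfs-api"
    )

    // PrintPeerID receives the go-ipfs-api Shell returned by the client's
    // IPFS() method and performs a regular ipfs call, which is served
    // through the ipfs-cluster proxy endpoint.
    func PrintPeerID(sh *shell.Shell) {
        id, err := sh.ID()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(id.ID)
    }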
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This adds support for libp2p-tunneled HTTP to the rest api component.
If PeerAddr is specified in the configuration, then we will create a
libp2p host and communicate with the API using that.
Tests now run in both HTTP and libp2p modes.
Note: pnet support not included, but coming up
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This commit allows restapi to serve/tunnel HTTP on a libp2p stream.
NewWithHost(...) allows providing a libp2p host during initialization,
which is then used to obtain a listener with go-libp2p-gostream.
Alternatively, if the configuration provides an ID, PrivateKey and Libp2pListenAddr,
a host is created directly by us and used to get the listener.
The protocol tag used is provided by the p2phttp library, which will
be used by the client.
All tests now run against the libp2p node too.
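A rough, standalone sketch of the technique (plain HTTP served over a libp2p
stream listener). The host construction and the literal protocol tag here are
assumptions for illustration: in the actual code the host may be provided or
built from configuration, and the tag comes from the p2phttp library.

    package main

    import (
        "net/http"

        libp2p "github.com/libp2p/go-libp2p"
        "github.com/libp2p/go-libp2p/core/protocol"
        gostream "github.com/libp2p/go-libp2p-gostream"
    )

    func main() {
        // Create a libp2p host (in restapi it can instead be passed in,
        // or built from the configured ID/PrivateKey/listen address).
        h, err := libp2p.New()
        if err != nil {
            panic(err)
        }

        // Get a net.Listener backed by libp2p streams tagged with an
        // HTTP protocol ID; the real tag is defined by the p2phttp library.
        l, err := gostream.Listen(h, protocol.ID("/libp2p-http"))
        if err != nil {
            panic(err)
        }

        // Serve plain HTTP over that listener.
        http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over libp2p\n"))
        }))
    }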
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This adds support for parameters to create a libp2p host
in the REST API configuration: ID, PrivateKey and ListenMultiaddr.
These parameters default to nil/empty and are omitted in the default
configuration. They are only supposed to be used when the user wants
the REST API to use a different libp2p host than a provided one (upcoming
changes).
The pnet protector is not supported yet in this case. Underlying basic auth
should cover that front. Will implement if someone has a use case.
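For illustration, the restapi section of the configuration could then look
roughly like this; the exact JSON key spellings are an assumption and the
values are placeholders:

    "restapi": {
      "id": "QmPlaceholderAPIPeerID",
      "private_key": "<base64-encoded private key>",
      "libp2p_listen_multiaddress": "/ip4/0.0.0.0/tcp/9097",
      ...
    }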
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Change the rest/client's and ipfs-cluster-ctl's default
timeout from 60 to 120 seconds.
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
Added go tests
Refactored cluster connect graph to new file
Refactored dot file printing to new repo
Fixed code climate issues
Added sharness test
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
added ConnectGraph type and serialization
added cli command hitting cluster api
added cluster api client method + endpoint calling into rpc
added rpc calling into main cluster component
added the cluster component's function to collect the ConnectGraph
added functionality in ipfsconn to retrieve ipfs swarm peers
added dot file printing given ConnectGraphSerial
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
Raft will fail to take a snapshot when the applied index is
different from the last index. Therefore, we wait for
all updates to be applied before snapshotting.
If it still doesn't work, we retry a few times.
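A rough sketch of this approach against the hashicorp/raft API (not the actual
cluster code; the wait loop, retry count and delays are made up for illustration):

    package clusterexample

    import (
        "time"

        hraft "github.com/hashicorp/raft"
    )

    // snapshotWhenCaughtUp waits until the applied index has caught up with
    // the last index before requesting a snapshot, and retries the snapshot
    // a few times if it still fails.
    func snapshotWhenCaughtUp(r *hraft.Raft) error {
        for i := 0; i < 20 && r.AppliedIndex() < r.LastIndex(); i++ {
            time.Sleep(100 * time.Millisecond) // wait for pending updates to be applied
        }

        var err error
        for attempt := 0; attempt < 3; attempt++ {
            if err = r.Snapshot().Error(); err == nil {
                return nil
            }
            time.Sleep(time.Second) // back off briefly before retrying
        }
        return err
    }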
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
We had a problem when assigning the returned *api.Error
to the standard 'error' interface type.
Things like "if err != nil" would not work even when the *api.Error was nil.
I'm not sure why this happens, but it is very confusing for users
integrating on top. It is better that we just return plain Go errors.
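For reference, this is the well-known Go gotcha of a typed nil pointer stored
in an interface value; a standalone illustration (apiError below is a stand-in,
not the real api.Error type):

    package main

    import "fmt"

    type apiError struct{ Message string }

    func (e *apiError) Error() string { return e.Message }

    // mayFail returns a typed nil pointer instead of a plain nil error.
    func mayFail() *apiError { return nil }

    func main() {
        var err error = mayFail() // interface holds (type=*apiError, value=nil)
        fmt.Println(err == nil)   // prints "false", so "if err != nil" triggers
    }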
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Apparently, cancelling the request context closes the response body
prematurely, before it has been fully read.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This allows taking advantage of connection keep-alive by having the
API client re-use the same connection. Additionally, an option
to close connections after every request is provided.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
* Set default logging facility
* Remove old keep-alive comment in tests
* Use a port for TestPeersWithErrors which is not default
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This adds the package api/rest/client, which implements a Go client
for the REST API component. It also updates the ipfs-cluster-ctl
tool to rely on it.
Originally, I wanted this to live in its own separate repository,
but the API client uses /api/types.go, which is part of cluster.
Therefore it would need to import all of cluster as a dependency.
ipfs-cluster-ctl would also need to import go-ipfs-cluster-api-client
as a dependency, creating circular gx deps which would be a mess to
maintain.
Only splitting cluster into multiple repositories (at least for
api, rest, ipfs-cluster-ctl, rest/client and test) would allow better
dependency management, by letting rest/client and the ctl tool
import only what is needed, but that brings maintenance costs
and can probably wait until cluster is more stable.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>