This modifies the Makefile so that gx and gx-go downloads are first
attempted from the local ipfs gateway (127.0.0.1:8080) and then
from the official gateway (ipfs.io).
Make and wget don't deal well with files in subfolders, so I have
moved the deptools-related rules into their own Makefile inside the
deptools folder.
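The fallback order is roughly equivalent to this Go sketch (illustrative
only: the actual Makefile rules use wget, and the function name here is
hypothetical):

    package deptools

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetch retrieves an /ipfs/... path, trying the local gateway first
    // and falling back to the official one, mirroring the wget logic.
    func fetch(ipfsPath string) ([]byte, error) {
        gateways := []string{"http://127.0.0.1:8080", "https://ipfs.io"}
        var lastErr error
        for _, gw := range gateways {
            resp, err := http.Get(gw + ipfsPath)
            if err != nil {
                lastErr = err
                continue
            }
            body, err := io.ReadAll(resp.Body)
            resp.Body.Close()
            if err != nil {
                lastErr = err
                continue
            }
            if resp.StatusCode != http.StatusOK {
                lastErr = fmt.Errorf("%s: %s", gw, resp.Status)
                continue
            }
            return body, nil
        }
        return nil, lastErr
    }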
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This commit moves `ipfs-cluster-service` and `ipfs-cluster-ctl` to
`cmd/` directory to follow "standard" project structure.
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
It's not supported in all subpackages. `make test` is not really used
in our CI, which is why we didn't see this before.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
A test target should not modify the running system. It is safest to run
without test_sharness, thus avoiding the install step.
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
`make all` assumes that your dependencies have already been rewritten,
which can lead to unexpected failures/errors.
This commit changes `make all` to update dependencies as well.
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
This commit adds a single make target, `make prcheck`, which runs
everything needed to check whether a PR is ready to push.
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
By using `gx install --local`, dependencies are copied into
the build context, and therefore only stale dependencies
get pulled from the network during the image build.
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
This commit promotes the Consensus component (and Raft) to a fully
independent component like the others, passed to NewCluster during
initialization. Cluster (the main component) no longer creates the
consensus layer internally. This has triggered a number of breaking
changes that I will explain below.
Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
* The current cluster peerset, as required by the implementation
* The current cluster pinset (shared state)
While pinset maintenance has always lived in the consensus layer, peerset
maintenance was handled by the main component (starting with the "peers"
key in the configuration) AND by the Raft component (internally).
This generated lots of confusion: if the user edited the peers in the
configuration, they would be greeted with an error.
The bootstrap process (adding a peer to an existing cluster) and its
configuration key complicated things further, since the main component
performed it, but only once the consensus was initialized and only in
single-peer mode.
On top of all this, we also mixed the peerstore (the list of peer
addresses in the libp2p host) with the peerset, when they need not be
linked.
Initializing the consensus layer before calling NewCluster brought to
light all the difficulties of maintaining the implementation as it was.
Thus, the following changes have been introduced:
* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.
* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be re-built from the
network bootstrappers and the DHT. Cluster should probably also allow
discoverability of peer addresses (when not bootstrapping; in that case we
have them already), but in the meantime we will read and persist the
addresses of cluster peers in this file, separate from the configuration.
Note that DNS multiaddresses are now fully supported, and no IPs are saved
when we have a DNS multiaddress for a peer.
* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.
* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a Raft peer
to be launched in staging mode, waiting to be added to a running consensus
and thus not electing itself as leader or otherwise acting on its own, as
happened before (see the sketch after this list). This additionally allows
us to track when the peer has become a Voter, which only happens once it has
caught up with the state, something that was wonky previously.
* The Raft configuration now includes an InitPeerset key, which allows
providing a peerset for new peers and which is ignored when staging == true.
The whole Raft initialization code is much cleaner and more robust now.
* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated
list of entries). Bootstrapping initializes Raft with staging == true and
then calls Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready, and only then does
cluster become Ready. This is cleaner, works better and is less complex
than before (which supported both flags and config values). We also
automatically back up and clean the state whenever we are bootstrapping.
* ipfs-cluster-service no longer runs the daemon directly. Starting cluster
now requires "ipfs-cluster-service daemon". The daemon-specific flags
(bootstrap, alloc) are now flags of the daemon subcommand. Here we mimic
ipfs (running "ipfs" does not start the daemon but prints help) and pave
the way for merging service and ctl in the future.
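A rough sketch of the new initialization and bootstrap flow (the type and
function names below are simplified stand-ins, not the real signatures):

    package sketch

    import "fmt"

    // Consensus stands in for the Raft-based consensus component, now
    // created outside of Cluster.
    type Consensus struct {
        staging bool
        voter   bool
    }

    // NewRaftConsensus launches the Raft layer independently. With
    // staging == true the peer waits to be added to a running consensus
    // (and InitPeerset is ignored) instead of electing itself as leader.
    func NewRaftConsensus(staging bool) *Consensus {
        return &Consensus{staging: staging}
    }

    type Cluster struct {
        consensus *Consensus
    }

    // NewCluster receives the already-initialized consensus component
    // instead of creating it internally.
    func NewCluster(c *Consensus) *Cluster {
        return &Cluster{consensus: c}
    }

    // Join contacts an existing peer. Consensus only becomes ready once
    // the Raft peer has transitioned to Voter (i.e. it has caught up
    // with the state), and only then does the cluster become Ready.
    func (cl *Cluster) Join(addr string) {
        cl.consensus.voter = true // in reality, set after catching up
        fmt.Println("joined via", addr)
    }

On the command line, the bootstrap case then looks something like:

    ipfs-cluster-service daemon --bootstrap <peer-multiaddress1>,<peer-multiaddress2>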
While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to users the right way
to launch a cluster peer, and more difficult to make mistakes.
As a side effect, the PR also:
* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be a Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This enables support for testing in Jenkins.
Several minor adjustments have been made to improve the probability
that the tests pass, but some random problems still appear, with libp2p
connections not becoming available or ceasing to work (similar to Travis,
but perhaps more often).
The macOS and Windows builds are broken in worse ways (those issues will
need to be addressed in the future).
Thanks to @zenground0 and @victorbjelkholm for support!
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
New PeerManager, Allocator and Informer components have been added, along
with a new "replication_factor" configuration option.
First, cluster peers collect and push metrics (Informer) to the Cluster
leader regularly. The Informer is an interface that can be implemented
in custom ways to support custom metrics.
Second, on a pin operation, an Allocator can use the information from the
collected metrics to provide a list of preferences as to where the new pin
should be assigned. The Allocator is an interface allowing different
allocation strategies to be plugged in.
Both Allocator and Informer are Cluster Components and have access
to the RPC API.
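A minimal sketch of the two interfaces (the type and method names are
illustrative, not the exact cluster API):

    package sketch

    // Metric is a single measurement that an Informer pushes to the
    // cluster leader at regular intervals.
    type Metric struct {
        Name  string
        Value string
        Peer  string // ID of the reporting peer
    }

    // Informer collects metrics; implementing it in custom ways enables
    // custom metrics.
    type Informer interface {
        Name() string
        GetMetric() Metric
    }

    // Allocator turns the collected metrics into an ordered list of
    // preferred peers for a new pin, so that different allocation
    // strategies can be plugged in.
    type Allocator interface {
        Allocate(cid string, metrics map[string]Metric) ([]string, error)
    }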
The allocations are kept in the shared state. Cluster peer failure
detection and automatic re-allocation are still missing, although
re-pinning an item when a node is down or its metrics are missing does
re-allocate the pin somewhere else.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
The former RPC code had become a monster: it was really hard to get an
overview of the RPC API's capabilities, and it involved lots of magic.
go-libp2p-rpc allows a clearly defined RPC API which shows which methods
every component can use. A dedicated component for performing remote
requests, along with the convoluted LeaderRPC and BroadcastRPC methods,
is no longer necessary.
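The resulting pattern looks roughly like this (the service and method
names are illustrative; go-libp2p-rpc methods follow a net/rpc-style
convention of typed args and reply):

    package sketch

    // PinArgs and PinReply are the typed arguments of one RPC method.
    type PinArgs struct {
        Cid string
    }

    type PinReply struct {
        OK bool
    }

    // ClusterRPC groups the methods one component exposes. A remote peer
    // invokes them by service and method name, so the whole RPC surface
    // is explicit rather than hidden behind a central channel.
    type ClusterRPC struct{}

    // Pin is one clearly defined RPC method: args in, reply out, error back.
    func (r *ClusterRPC) Pin(args PinArgs, reply *PinReply) error {
        reply.OK = true
        return nil
    }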
Things are much simpler now: fewer goroutines are needed, the central
channel-handling bottleneck is gone, and RPC requests have a very
streamlined form.
In the future, it would be straightforward to have components living on
different libp2p hosts, and it is much clearer how to plug into the
advanced cluster RPC API.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>