This is a preparatory PR to make it easy to add additional APIs (such as the Pinning Service API) to cluster.
Instead of copy-pasting most of what the REST API does, I have refactored it so that the configuration, routing and request-handling utilities can all be re-used.
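As a rough sketch of the intended shape (the names and layout here are illustrative, not the actual cluster packages), a new API component would only have to register its own routes on top of the shared core:

```go
package pinsvcapi

import (
	"net/http"

	"github.com/gorilla/mux"
)

// commonAPI stands in for the shared core factored out of the REST API:
// it owns the configuration, the HTTP and libp2p listeners, and the
// request-handling plumbing. Name and shape are hypothetical.
type commonAPI struct {
	router *mux.Router
}

// registerRoutes is the only piece a concrete API (REST, Pinning
// Service, ...) needs to supply on top of the common core.
func registerRoutes(c *commonAPI) {
	c.router.HandleFunc("/pins", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusAccepted) // endpoint-specific logic here
	}).Methods("POST")
}
```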
The hardest part was splitting the tests between those covering core (common.API) functionality and those covering specific REST API endpoints. I could not avoid adding a common/test package providing functions used from both places. This is a side effect of testing both HTTP and libp2p endpoints for every request.
Fixes #1427. Currently, if --wait is used when pinning, it waits until all statuses reported for a pin are either Pinned or Remote. A peer lagging behind and not syncing the state properly (reporting "unpinned", for example) is enough to block the wait.
This modifies the behaviour of --wait to return as soon as replication_factor_min is reached, regardless of the remaining statuses.
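A minimal sketch of the new stopping condition (function and status names are assumed for illustration, not the actual cluster code):

```go
package main

// waitDone reports whether --wait can return: once at least
// replicationFactorMin peers report "pinned", the remaining statuses no
// longer matter, so a lagging peer still reporting "unpinned" cannot
// block the wait indefinitely.
func waitDone(statuses map[string]string, replicationFactorMin int) bool {
	pinned := 0
	for _, st := range statuses { // statuses keyed by peer ID
		if st == "pinned" {
			pinned++
		}
	}
	return pinned >= replicationFactorMin
}
```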
The restapi component supports filters for the pinset. This was done so that, once sharding was fully supported, "internal" pins could be filtered out to keep the expected output.
However this filter requires looping over the full pinset and re-allocating it, and usually does nothing. The useless copy is significant for really big pinsets.
Additionally, ipfs-cluster-ctl sets the filter to "pins" by default. By setting it to "all" instead we can skip the whole filtering step and, in practice, get the
same results.
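A hedged sketch of the shortcut (the pin type and filter semantics are simplified here): when the filter is "all", the pinset is returned untouched, skipping the loop and the copy entirely:

```go
package main

// pinEntry is a minimal stand-in for a cluster pin (illustrative only).
type pinEntry struct {
	Internal bool // e.g. sharding "meta" pins would be internal
}

// filterPinset drops internal pins unless the filter is "all", in which
// case it returns the original slice and avoids re-allocating a copy of
// a potentially huge pinset.
func filterPinset(pins []pinEntry, filter string) []pinEntry {
	if filter == "all" {
		return pins // nothing to exclude: skip the loop entirely
	}
	out := make([]pinEntry, 0, len(pins))
	for _, p := range pins {
		if !p.Internal {
			out = append(out, p)
		}
	}
	return out
}
```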
When using SSL and not talking to libp2p-http endpoints, we should not resolve the DNS names in the multiaddresses, as otherwise we cannot verify the HTTPS certificates used by the remote endpoint.
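A sketch of the rule using the go-multiaddr-dns resolver (the ssl/libp2pHTTP flags are assumed to come from the client configuration; this is not the exact cluster code):

```go
package main

import (
	"context"

	ma "github.com/multiformats/go-multiaddr"
	madns "github.com/multiformats/go-multiaddr-dns"
)

// maybeResolve resolves /dns* multiaddresses only when it is safe. With
// TLS and a plain HTTP endpoint (no libp2p tunnel), the hostname must
// survive so the client can verify it against the server certificate;
// resolving to an IP first would break that verification.
func maybeResolve(ctx context.Context, addr ma.Multiaddr, ssl, libp2pHTTP bool) ([]ma.Multiaddr, error) {
	if ssl && !libp2pHTTP {
		return []ma.Multiaddr{addr}, nil // keep the DNS name for cert checks
	}
	return madns.Resolve(ctx, addr)
}
```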
This goes back to badger as the default. Testing performed on 0.14.0-rc2
showed very successful GC behaviour on large clusters.
Given that we run badger at scale and that the main problem seems resolved, it seems appropriate to default to badger.
Badger can take 1000x the needed space if not GC'ed or compacted (#1320), even under non-heavy usage. Cluster has no provisions to run datastore GC operations, and while they could be added, they are not guaranteed to help. Improvements in Badger v3 might help, but GC would still need to be triggered explicitly.
Cluster was, however, designed to support any go-datastore as backend.
This commit adds LevelDB support. The LevelDB go-datastore wrapper is mature, does not need GC and should work well for most cluster use cases, which are not overly demanding.
A new `--datastore` flag has been added to init. The store backend is selected based on the value in the configuration, similar to how raft/crdt is selected. The default is set to leveldb. From now on it should be easier to add additional backends, e.g. badgerv3.
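A minimal sketch of the backend selection (option handling is simplified; the real code reads the value from the service configuration written by init):

```go
package main

import (
	"fmt"

	ds "github.com/ipfs/go-datastore"
	badger "github.com/ipfs/go-ds-badger"
	leveldb "github.com/ipfs/go-ds-leveldb"
)

// newDatastore opens the backend named in the configuration, much like
// the consensus component is picked between raft and crdt. Default
// options are used here for brevity.
func newDatastore(backend, path string) (ds.Datastore, error) {
	switch backend {
	case "", "leveldb": // leveldb is the new default
		return leveldb.NewDatastore(path, nil)
	case "badger":
		return badger.NewDatastore(path, nil)
	default:
		return nil, fmt.Errorf("unsupported datastore backend: %q", backend)
	}
}
```

With a switch like this in place, adding a badgerv3 case later is a small, local change.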
This commit adds a new add option: "format".
This option specifies how IPFS Cluster is expected to build the DAG when adding content. By default it takes the value "unixfs", which chunks and DAG-ifies the content as before, resulting in a UnixFSv1 DAG.
Alternatively, it can be set to "car". In this case, Cluster will directly
read blocks from the CAR file and add them.
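For illustration, this is roughly what that looks like with the go-car reader (assuming the CarReader API; the wiring into the add/pin pipeline is omitted):

```go
package main

import (
	"fmt"
	"io"
	"os"

	car "github.com/ipld/go-car"
)

// addCar iterates the blocks of a CAR file as-is, with no chunking or
// DAG building, enforcing the single-root limitation described below.
func addCar(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cr, err := car.NewCarReader(f)
	if err != nil {
		return err
	}
	if len(cr.Header.Roots) != 1 {
		return fmt.Errorf("expected a single root, got %d", len(cr.Header.Roots))
	}
	for {
		blk, err := cr.Next()
		if err == io.EOF {
			return nil // done; cr.Header.Roots[0] is the CID to pin
		}
		if err != nil {
			return err
		}
		_ = blk // hand the raw block to the add/pin pipeline here
	}
}
```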
Whether content is added from a CAR file or via normal processing is independent of whether cluster does sharding. If sharding is ever enabled, Cluster could potentially shard a large CAR file among peers.
Currently, importing CAR files is limited to a single CAR file with a single
root (the one that is pinned). Future iterations may support multiple CARs
and/or multiple roots by transparently wrapping them.
The Allocations of a pin added with the default replication factor were kept even when the replication factor turned out to be -1.
This resulted in the Status(cid) code skipping calls to a number of peers and setting the pin directly as REMOTE for them.
The fix, on one side, makes sure Allocations is always nil when the replication factor is -1. On the other side, it lets the globalPinInfoCid method check the replication factor value, rather than the number of allocations, to decide whether any peers are bound to be remote.
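A sketch of the second half of the fix (struct and method names are simplified from the actual code):

```go
package main

// pin is a trimmed-down stand-in for a cluster pin.
type pin struct {
	ReplicationFactorMin int      // -1 means "pin everywhere"
	Allocations          []string // peer IDs; nil when the factor is -1
}

// isRemote decides from the replication factor, not from
// len(p.Allocations), whether a peer is bound to be remote for this
// pin, so stale allocations can no longer cause peers to be skipped.
func isRemote(p pin, peerID string) bool {
	if p.ReplicationFactorMin < 0 {
		return false // pinned everywhere: no peer is remote
	}
	for _, alloc := range p.Allocations {
		if alloc == peerID {
			return false
		}
	}
	return true
}
```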
On the plus side, the pin tracker used the IsRemotePin method, which relies on the replication factor, so things were actually pinned even though the Status(cid) method showed them as remote.
A "simplest thing that could work" implementation of adding a `--wait` flag to the ipfs-cluster-ctl add command.
Allows CI to wait for cluster to fully replicate the files just added before continuting, or fail if replication fails.
Fixes#1285
License: MIT
Signed-off-by: Oli Evans <oli@tableflip.io>