This commit introduces an api.Cid type and replaces the usage of cid.Cid
everywhere.
The main motivation here is to override MarshalJSON so that Cids are
JSON-ified as '"Qm...."' instead of '{ "/": "Qm....." }', as this IPLD-link
representation of CIDs is horrible to work with, and our APIs are not issuing
IPLD objects to start with.
Unfortunately, there is no way to do this cleanly, and the best way is to just
switch everything to our own type.
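For illustration, a minimal sketch of such a wrapper type (assumed shape; the
actual api.Cid code may differ in details):

    package api

    import (
        "encoding/json"

        cid "github.com/ipfs/go-cid"
    )

    // Cid embeds cid.Cid but serializes as a plain string.
    type Cid struct {
        cid.Cid
    }

    // MarshalJSON produces `"Qm..."` instead of `{ "/": "Qm..." }`.
    func (c Cid) MarshalJSON() ([]byte, error) {
        return json.Marshal(c.Cid.String())
    }

    // UnmarshalJSON parses the plain-string form back into a Cid.
    func (c *Cid) UnmarshalJSON(b []byte) error {
        var s string
        if err := json.Unmarshal(b, &s); err != nil {
            return err
        }
        parsed, err := cid.Decode(s)
        if err != nil {
            return err
        }
        c.Cid = parsed
        return nil
    }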
This commit makes all the changes to make Peers() a streaming call.
While Peers() is usually not a problematic call, for consistency, all calls
returning collections assembled by broadcasting to cluster peers are now
streaming calls.
This commit continues the work of taking advantage of the streaming
capabilities in go-libp2p-gorpc by improving the ipfsconnector and pintracker
components.
StatusAll and RecoverAll methods are now streaming methods, with the REST API
output changing accordingly to produce a stream of GlobalPinInfos rather than
a JSON array.
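As a generic illustration (not the actual restapi code), a handler can encode
and flush one object at a time instead of buffering an array:

    package main

    import (
        "context"
        "encoding/json"
        "net/http"
    )

    // globalPinInfo is a minimal stand-in for api.GlobalPinInfo.
    type globalPinInfo struct {
        Cid string `json:"cid"`
    }

    // fetchStatuses simulates results trickling in from the tracker.
    func fetchStatuses(ctx context.Context) <-chan globalPinInfo {
        out := make(chan globalPinInfo, 1)
        go func() {
            defer close(out)
            out <- globalPinInfo{Cid: "Qm..."}
        }()
        return out
    }

    // statusAllHandler writes one JSON object per result, so the full
    // response never needs to exist in memory at once.
    func statusAllHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        enc := json.NewEncoder(w)
        flusher, _ := w.(http.Flusher)
        for gpi := range fetchStatuses(r.Context()) {
            if err := enc.Encode(gpi); err != nil { // newline-delimited
                return
            }
            if flusher != nil {
                flusher.Flush() // push each item out as it is ready
            }
        }
    }

    func main() {
        http.HandleFunc("/pins", statusAllHandler) // hypothetical path
        http.ListenAndServe(":8080", nil)
    }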
pin/ls requests to the IPFS daemon now use ?stream=true and avoid having to
load the full pinset map in memory. StatusAllLocal and RecoverAllLocal
requests to the pintracker stream all the way and no longer store the full
pinset, nor the full PinInfo status slice, before sending it out.
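For illustration, consuming a streamed pin/ls response looks roughly like
this (the field names are an assumption based on the go-ipfs HTTP API, and
this is not the actual connector code):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // pinLsObject mirrors one entry of a streamed pin/ls response
    // (assumed shape).
    type pinLsObject struct {
        Cid  string
        Type string
    }

    func main() {
        // Ask the IPFS daemon for the pinset as a stream of objects.
        resp, err := http.Post(
            "http://127.0.0.1:5001/api/v0/pin/ls?stream=true&type=recursive",
            "application/json", nil)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        dec := json.NewDecoder(resp.Body)
        for {
            var p pinLsObject
            if err := dec.Decode(&p); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(p.Cid, p.Type) // one pin at a time; no full map
        }
    }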
We have additionally switched to a pattern where streaming methods receive
the channel as an argument, allowing the caller to decide whether to launch a
goroutine, do buffering, etc.
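A hedged sketch of that pattern, with illustrative names rather than the real
cluster signatures:

    package main

    import (
        "context"
        "fmt"
    )

    // pinInfo is a stand-in for api.PinInfo (hypothetical shape).
    type pinInfo struct {
        Cid    string
        Status string
    }

    // statusAllLocal follows the channel-as-argument pattern: the callee
    // writes results and closes the channel when done.
    func statusAllLocal(ctx context.Context, out chan<- pinInfo) error {
        defer close(out)
        for _, pi := range []pinInfo{{"Qm1", "pinned"}, {"Qm2", "pinning"}} {
            select {
            case out <- pi:
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    }

    func main() {
        // The caller decides on buffering and whether to use a goroutine.
        out := make(chan pinInfo, 1024)
        go statusAllLocal(context.Background(), out)
        for pi := range out {
            fmt.Println(pi.Cid, pi.Status)
        }
    }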
This commit introduces the new go-libp2p-gorpc streaming capabilities for
Cluster. The main aim is to work towards heavily reducing memory usage when
working with very large pinsets.
As a side-effect, it takes the chance to revamp the types of all public
methods so that pointers to what should be static objects are no longer used.
This should heavily reduce heap allocations and GC activity.
The main change is that state.List now returns a channel from which to read
the pins, rather than all pins being loaded into a huge slice.
Everything that reads pins has been updated to iterate over the channel
rather than over a slice. The full pinset is no longer fully loaded into
memory by things that run regularly, like StateSync().
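A hedged sketch of the shape described (hypothetical names; not the actual
state package code):

    package main

    import (
        "context"
        "fmt"
    )

    // pin is a stand-in for api.Pin (hypothetical).
    type pin struct{ Cid string }

    // list mimics the described state.List shape: it returns a channel
    // fed from a goroutine, so the pinset never sits in one huge slice.
    func list(ctx context.Context) (<-chan pin, error) {
        out := make(chan pin, 1024)
        go func() {
            defer close(out)
            for _, p := range []pin{{"Qm1"}, {"Qm2"}} {
                select {
                case out <- p:
                case <-ctx.Done():
                    return
                }
            }
        }()
        return out, nil
    }

    func main() {
        pins, _ := list(context.Background())
        for p := range pins { // consumers iterate instead of indexing
            fmt.Println(p.Cid)
        }
    }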
Additionally, the /allocations endpoint of the REST API no longer returns an
array of pins, but rather streams JSON-encoded pin objects directly. This
change extends to the restapi client (which puts pins into a channel as
they arrive) and to ipfs-cluster-ctl.
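Illustrative client-side handling (a sketch with hypothetical names, not the
actual restapi client code):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "strings"
    )

    // pin is a stand-in for api.Pin (hypothetical).
    type pin struct {
        Cid string `json:"cid"`
    }

    // decodeStream reads JSON-encoded pin objects from a response body
    // and forwards them over a channel as they arrive.
    func decodeStream(body io.Reader, out chan<- pin) error {
        defer close(out)
        dec := json.NewDecoder(body)
        for {
            var p pin
            if err := dec.Decode(&p); err == io.EOF {
                return nil
            } else if err != nil {
                return err
            }
            out <- p
        }
    }

    func main() {
        body := strings.NewReader(`{"cid":"Qm1"} {"cid":"Qm2"}`)
        out := make(chan pin)
        go decodeStream(body, out)
        for p := range out {
            fmt.Println(p.Cid)
        }
    }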
There are still pending improvements, like StatusAll() calls, which should
also stream responses, and especially BlockPut calls, which should stream
blocks directly into IPFS in a single call.
These are coming up in future commits.
Fixes #1427. Currently, if --wait is used when pinning, it will wait until
all statuses reported for a pin are either Pinned or Remote. If a peer is
lagging behind and not syncing the state properly (reporting "unpinned", for
example), that is enough to block the wait.
This modifies the behaviour of --wait so that it returns when
replication_factor_min is reached, regardless of the other statuses.
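A hedged sketch of the new condition (illustrative names and status strings,
not the actual cluster code):

    package main

    import "fmt"

    // pinInfo is a minimal stand-in for a per-peer status report.
    type pinInfo struct{ Status string }

    // waitDone reports whether the wait can end: it is satisfied once
    // replicationFactorMin peers report "pinned", regardless of what
    // the remaining peers report.
    func waitDone(statuses map[string]pinInfo, replicationFactorMin int) bool {
        pinned := 0
        for _, pi := range statuses {
            if pi.Status == "pinned" {
                pinned++
            }
        }
        return pinned >= replicationFactorMin
    }

    func main() {
        statuses := map[string]pinInfo{
            "peerA": {"pinned"},
            "peerB": {"pinned"},
            "peerC": {"unpinned"}, // a lagging peer no longer blocks
        }
        fmt.Println(waitDone(statuses, 2)) // true
    }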
The restapi component supports filters for the pinset. This was done to keep
the expected output once sharding was fully supported, by filtering out
"internal" pins.
However, this filter requires looping over the full pinset and re-allocating
it, and usually does nothing. The useless copy is significant for really big
pinsets.
Additionally, ipfs-cluster-ctl set the filter to "pins" by default. By
setting it to "all" instead, we can skip the whole filtering step and, in
practice, get the same results.
When using SSL and not talking to libp2p-http endpoints, we should not
resolve the DNS names in the multiaddresses, as otherwise we cannot verify
the HTTPS certificates used by the remote endpoint.
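Illustrative logic using go-multiaddr-dns (a sketch, not the exact client
code):

    package main

    import (
        "context"
        "fmt"

        ma "github.com/multiformats/go-multiaddr"
        madns "github.com/multiformats/go-multiaddr-dns"
    )

    // resolveUnlessTLS only resolves DNS multiaddrs when TLS is not in
    // use: resolving /dns4 names to IPs beforehand would break HTTPS
    // certificate verification against the original hostname.
    func resolveUnlessTLS(ctx context.Context, addr ma.Multiaddr, useTLS bool) ([]ma.Multiaddr, error) {
        if useTLS {
            return []ma.Multiaddr{addr}, nil
        }
        return madns.Resolve(ctx, addr)
    }

    func main() {
        addr, _ := ma.NewMultiaddr("/dns4/cluster.example.com/tcp/9094")
        resolved, err := resolveUnlessTLS(context.Background(), addr, true)
        fmt.Println(resolved, err)
    }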
This commit adds a new add option: "format".
This option specifies how IPFS Cluster is expected to build the DAG when
adding content. By default, it takes the value "unixfs", which chunks and
DAG-ifies the content as before, resulting in a UnixFSv1 DAG.
Alternatively, it can be set to "car". In this case, Cluster will directly
read blocks from the CAR file and add them.
Adding CAR files or doing normal processing is independent of whether
Cluster does sharding or not. If sharding is ever enabled, Cluster could
potentially shard a large CAR file among peers.
Currently, importing CAR files is limited to a single CAR file with a single
root (the one that is pinned). Future iterations may support multiple CARs
and/or multiple roots by transparently wrapping them.
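As a hedged illustration of the CAR path (using the go-car reader; the file
name and error handling are illustrative, not the actual cluster code):

    package main

    import (
        "fmt"
        "io"
        "os"

        car "github.com/ipld/go-car"
    )

    func main() {
        f, err := os.Open("content.car") // hypothetical file name
        if err != nil {
            panic(err)
        }
        defer f.Close()

        cr, err := car.NewCarReader(f)
        if err != nil {
            panic(err)
        }
        // A single root is expected; that root is what ends up pinned.
        fmt.Println("root:", cr.Header.Roots[0])

        // Blocks are read and added as-is, with no re-chunking or
        // re-DAG-ifying of the content.
        for {
            blk, err := cr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Println("block:", blk.Cid())
        }
    }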