This implements committing batches on shutdown properly.
Now the batchWorker will only finish when there is nothing left queued to be
included in the final batch(es).
LogPin/Unpin operations arriving while we are shutting down will fail and
will not be included in the batch.
This attempts to commit any pending batches when the crdt component is being
shut down. A commit should succeed once the new DAG node is created and the
heads are replaced and broadcast.
The latest version of the CRDT library ensures that the datastore does not
unnecessarily get marked as dirty when a broadcast head cannot be
fetched/processed, so the side effects of publishing a head right before
shutting down should at least be under control.
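As a rough sketch of the intended shutdown behavior (the batcher type, its
queue and its helpers below are made up for illustration, not the actual
cluster code):

```go
package crdt

import "context"

// op stands in for a queued pin/unpin operation (illustrative only).
type op struct{}

type batcher struct {
	queue   chan op
	pending []op
}

func (b *batcher) addToBatch(o op) { b.pending = append(b.pending, o) }

// commit stands in for creating the new DAG node and replacing and
// broadcasting the heads.
func (b *batcher) commit() { b.pending = nil }

// batchWorker only returns once nothing is left queued for the final
// batch(es).
func (b *batcher) batchWorker(ctx context.Context) {
	for {
		select {
		case o := <-b.queue:
			b.addToBatch(o)
		case <-ctx.Done():
			// Shutting down: drain anything still queued and
			// commit the final batch before returning.
			for {
				select {
				case o := <-b.queue:
					b.addToBatch(o)
				default:
					b.commit()
					return
				}
			}
		}
	}
}
```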
The Pinning Services API standard mandates Bearer token authentication.
This adds JWT bearer token authentication to the IPFS Cluster REST and PINSVC
APIs.
The basic_auth_credentials configuration option must not be null and must
contain at least one username/password entry.
A user authenticated via Basic Auth can then "POST /token" and obtain a JSON
object:

```json
{ "token": "<JWTtoken>" }
```
The JWT token has the "iss" (issuer) field set to the Basic Auth user that
authorized its creation and is HMAC-signed with that user's password.
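For illustration, token issuance along these lines might look as follows with
the golang-jwt library (the generateToken helper is made up for the sketch,
not the actual cluster code):

```go
package main

import (
	"fmt"

	jwt "github.com/golang-jwt/jwt/v4"
)

// generateToken sketches issuing a JWT for a Basic Auth user: the "iss"
// claim carries the username and the token is HMAC-signed with that
// user's password.
func generateToken(username, password string) (string, error) {
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"iss": username,
	})
	return token.SignedString([]byte(password))
}

func main() {
	s, err := generateToken("alice", "secret")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```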
When basic_auth_credentials is set, the APIs will verify that requests carry
either a Basic Auth or a Bearer token Authorization header.
Bearer tokens will be decoded and the signature will be verified against the
password of the issuer.
At the moment we provide no support for revoking tokens or for setting
"expiration date", "not before" and similar claims, but this may come in the
future.
The operation tracker was not setting some fields correctly when producing
PinInfo objects. Additionally, recover operations were submitted with empty
pin objects, which caused the status reported for pins sent on recover
operations to be missing fields.
These changes fix a few bugs in the PinSVC API that were discovered by
running the compliance tool at https://github.com/ipfs-shipyard/pinning-service-compliance.
In particular, the error objects did not respect the spec, and filtering and
counting were not implemented correctly. An additional PR will follow with
fixes to the pintracker, because some pin status information was not
correctly set.
This deprecates the Metadata protobuf map and starts serializing metadata as an
array that is always sorted by the metadata key.
This should resolve the issue where importing a state export file could
result in two different CRDT DAGs: the binary representation of a pin could
change arbitrarily depending on the order in which the map keys were listed,
which in turn affected the CID of a delta.
So with this, a state export should result in exactly the same DAG regardless
of where it is imported.
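A minimal sketch of the idea (the type and function names are illustrative):
sorting the map keys before emitting the entries makes the serialized bytes,
and therefore the delta CIDs, deterministic.

```go
package main

import (
	"fmt"
	"sort"
)

// metadataEntry mirrors the idea of a repeated key/value message that
// replaces the protobuf map.
type metadataEntry struct {
	Key   string
	Value string
}

// sortedMetadata converts a metadata map into an array sorted by key,
// so that serializing the same map always yields the same bytes.
func sortedMetadata(m map[string]string) []metadataEntry {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	entries := make([]metadataEntry, 0, len(m))
	for _, k := range keys {
		entries = append(entries, metadataEntry{Key: k, Value: m[k]})
	}
	return entries
}

func main() {
	fmt.Println(sortedMetadata(map[string]string{"b": "2", "a": "1"}))
}
```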
I think this should fix the issue. As a solution, every retry now uses a
temporary channel whose results are copied to the final channel, which is
only closed by us. This only affects streaming methods.
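A sketch of the pattern with made-up names (not the actual cluster code):
each attempt writes into its own temporary channel, and only the outer
function ever closes the channel handed to the caller, so a failed attempt
cannot close it prematurely.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// streamWithRetry copies each attempt's results from a temporary channel
// to out. Only this function closes out.
func streamWithRetry(ctx context.Context, attempts int, try func(context.Context, chan<- int) error, out chan<- int) error {
	defer close(out) // the final channel is only closed by us

	var err error
	for i := 0; i < attempts; i++ {
		tmp := make(chan int)
		done := make(chan error, 1)
		go func() {
			done <- try(ctx, tmp)
			close(tmp) // closing tmp does not touch out
		}()
		for v := range tmp { // copy results to the final channel
			out <- v
		}
		if err = <-done; err == nil {
			return nil
		}
	}
	return err
}

func main() {
	out := make(chan int)
	go func() {
		calls := 0
		_ = streamWithRetry(context.Background(), 3, func(ctx context.Context, ch chan<- int) error {
			calls++
			ch <- calls
			if calls < 2 {
				return errors.New("transient failure")
			}
			return nil
		}, out)
	}()
	for v := range out {
		fmt.Println(v)
	}
}
```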
This new component broadcasts metrics about the current size of the pinqueue,
which can in turn be used to inform allocations.
It has a weight_bucket_size option that divides the actual queue size by a
given factor, so that peers with similar queue sizes are considered to have
the same weight.
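For example, the broadcast weight could be derived from the queue size
roughly like this (a sketch, not the actual informer code):

```go
package informer

// weight buckets the raw pinqueue size: with weightBucketSize = 100,
// queue sizes 0-99 all map to weight 0, 100-199 to weight 1, etc.,
// so peers with similar queue sizes end up with the same weight.
func weight(queueSize, weightBucketSize int64) int64 {
	if weightBucketSize <= 0 {
		return queueSize
	}
	return queueSize / weightBucketSize
}
```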
Additionally, some changes have been made to the balanced allocator so that a
combination of tags, pinqueue sizes and free space can be used. When
allocating by [<tag>, pinqueue, freespace], the allocator will prioritize
peers with the smallest pin queue weight first and, among those with the same
weight, it will allocate based on free space.
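Conceptually, and with made-up types, the resulting ordering among peers
within the same tag group could look like this:

```go
package allocator

import "sort"

// peerMetrics is an illustrative stand-in for the metrics the balanced
// allocator sees for each candidate peer.
type peerMetrics struct {
	Peer           string
	PinqueueWeight int64 // bucketed pinqueue size (lower is better)
	FreeSpace      int64 // bytes free (higher is better)
}

// sortCandidates orders peers by smallest pinqueue weight first and,
// among peers with the same weight, by largest free space.
func sortCandidates(peers []peerMetrics) {
	sort.Slice(peers, func(i, j int) bool {
		if peers[i].PinqueueWeight != peers[j].PinqueueWeight {
			return peers[i].PinqueueWeight < peers[j].PinqueueWeight
		}
		return peers[i].FreeSpace > peers[j].FreeSpace
	})
}
```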
* Gauge for total number of ipfs pins
* Counter for pin/add
* Counter for pin/add errors
* Counter for block/puts
* Counter for blocks added
* Counter for block/added size
* Counter for block/added errors
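Hypothetically, registering a few of these with the Prometheus Go client
could look like the sketch below (the actual implementation may use a
different metrics library, and the metric names here are illustrative, not
the ones actually exported):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// Gauge for the total number of ipfs pins.
	pinsTotal = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "ipfs_pins",
		Help: "Total number of pins on the ipfs daemon.",
	})
	// Counters for pin/add calls and their errors.
	pinAdds = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "ipfs_pin_add_total",
		Help: "Total number of pin/add calls.",
	})
	pinAddErrors = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "ipfs_pin_add_errors_total",
		Help: "Total number of failed pin/add calls.",
	})
)

func init() {
	prometheus.MustRegister(pinsTotal, pinAdds, pinAddErrors)
}
```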