I've changed the peer identifier to 'peername'. I've also
modified TestLoadJSON to check that it is correctly read from
the config and set to a default when empty.
Also added 'peername' fields to the configurations of various tests.
This allows calling the REST API's status and sync endpoints with a
"?local=true" parameter. This will trigger the operations, but only on
the local peer. Cluster *Local and RPC *Local methods have been added
accordingly, even though they are mostly aliases for the PinTracker
methods (otherwise they would not be exposed in external APIs).
ipfs-cluster-ctl has been updated to support the new flag.
The rationale behind this feature is that sometimes a single cluster peer
(or the ipfs daemon in it) is misbehaving. The user then wants to Sync,
Recover, or see Status for that single peer. This is especially relevant
when working with big pinsets in larger clusters, as a Status() call is
considerably more expensive when broadcast everywhere.
Note that the REST API keeps returning GlobalPinInfo objects even for
local=true calls. This ensures that the user always gets the same
datatype from an endpoint.
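For illustration, a minimal sketch of such a call in Go, assuming the
default REST API listen address (localhost:9094) and the /pins status
endpoint; the exact route and port may differ in a given deployment:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// With ?local=true the status operation runs only on the receiving
	// peer, but the response still carries GlobalPinInfo-shaped objects.
	resp, err := http.Get("http://localhost:9094/pins?local=true")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```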
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
I renamed Hostname to simply Name so as not to imply a relation to DNS.
Removed quotes from the formatter, used a helper function for setting
the config, and added a defensive error check.
URL encoding was added for `ipfs-cluster-ctl pin add --name`. The
Map State version was reverted to version 2. I did not notice errors,
but if there are any, version 2 is still not part of a stable release.
I added an option called --name (or -n) which specifies the name under
which the added pin is stored in consensus. Also added quotes around the
name in the allocations formatter.
When not specified, the pin was being assigned whichever name had been
assigned to the previous pin. I am not sure exactly what was happening,
but it works after removing omitempty. The idea was to make Name
optional, but now it will always be assigned.
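For context, a plausible mechanism for that bug, sketched under the
assumption that pins were decoded into a reused struct (the struct below
is illustrative, not the real one): with omitempty, an empty Name is
dropped from the encoded JSON, and json.Unmarshal leaves absent fields
untouched, so the previous pin's name survives.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pinSerial is a stand-in for the serialized pin; omitempty drops an
// empty Name from the encoded JSON entirely.
type pinSerial struct {
	Cid  string `json:"cid"`
	Name string `json:"name,omitempty"`
}

func main() {
	a, _ := json.Marshal(pinSerial{Cid: "QmA", Name: "backup"})
	b, _ := json.Marshal(pinSerial{Cid: "QmB"}) // {"cid":"QmB"}: no "name" key

	var decoded pinSerial
	json.Unmarshal(a, &decoded)
	fmt.Println(decoded.Name) // "backup"

	// Decoding into the same struct again: since "name" is absent from
	// b, Unmarshal leaves the previous value in place.
	json.Unmarshal(b, &decoded)
	fmt.Println(decoded.Name) // still "backup", not ""
}
```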
Added a Name field as part of the stored state of a Pin. This
allows the name to be kept as part of the consensus state if needed.
This is the bare minimum to be usable. Some other calls, like status,
still need to be checked.
This adds API and RPC calls to support RecoverAllLocal() (and exposes
RecoverLocal() on the REST API too). ipfs-cluster-ctl is updated accordingly.
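As a rough illustration of what a RecoverAllLocal-style operation does,
with hypothetical types and method names standing in for the real
PinTracker interfaces: walk the local pin statuses and re-trigger
operations for errored pins.

```go
package main

import "fmt"

// pinInfo and tracker are illustrative stand-ins, not the real types.
type pinInfo struct {
	Cid    string
	Status string // e.g. "pinned", "pin_error", "unpin_error"
}

type tracker struct{ pins []pinInfo }

func (t *tracker) statusAll() []pinInfo { return t.pins }

func (t *tracker) recoverPin(cid string) error {
	fmt.Println("re-triggering operation for", cid)
	return nil
}

// recoverAllLocal recovers every pin in an error state, on this peer only.
func recoverAllLocal(t *tracker) []pinInfo {
	var recovered []pinInfo
	for _, p := range t.statusAll() {
		if p.Status == "pin_error" || p.Status == "unpin_error" {
			if err := t.recoverPin(p.Cid); err == nil {
				recovered = append(recovered, p)
			}
		}
	}
	return recovered
}

func main() {
	t := &tracker{pins: []pinInfo{{"QmA", "pinned"}, {"QmB", "pin_error"}}}
	fmt.Println(recoverAllLocal(t))
}
```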
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
I added a "hostname" property to a node's configuration file. Its
value defaults to the hostname provided by the OS, but can be
modified to anything besides an empty string in the config file.
The "hostname" was added to the output of the "id" call. Thus, peer
hostnames are available when listing peers.
This also generates a default configuration section when one doesn't
exist, so it's backwards compatible.
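A minimal sketch of that defaulting behaviour, assuming a hypothetical
Config struct; the real field and validation live in the cluster
configuration code:

```go
package main

import (
	"fmt"
	"os"
)

// Config is a hypothetical stand-in for the cluster configuration section.
type Config struct {
	Hostname string `json:"hostname"`
}

// applyDefaults fills Hostname with the OS-provided hostname when the
// config file left it empty, mirroring the behaviour described above.
func (cfg *Config) applyDefaults() error {
	if cfg.Hostname == "" {
		hn, err := os.Hostname()
		if err != nil {
			return err
		}
		cfg.Hostname = hn
	}
	return nil
}

func main() {
	cfg := &Config{}
	if err := cfg.applyDefaults(); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Hostname)
}
```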
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
The main container will now run only ipfs-cluster-service.
A new ipfs-cluster-bundle container is built from Dockerfile-bundle
and provides ipfs-cluster plus ipfs.
Fixes #197
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Allows us to test against the latest ipfs (either it's broken and this
helps us find out, or it's not broken anymore and we don't lose time
trying to figure out why).
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
ipfs-cluster-service now has a migration subcommand that upgrades
persistent state snapshots with an out-of-date format version to the
newest version of the raft state. If all cluster members shut down with
a consistent state, upgrade ipfs-cluster, and run the state upgrade
command, the new version of cluster will be compatible with the
persistent storage. ipfs-cluster now validates its persistent state upon
loading it and exits with a clear error in case the state format version
is not up to date.
Raft snapshotting is enforced on all shutdowns and the JSON backup is no
longer run. This commit makes use of recent changes to libp2p-raft
allowing raft states to implement their own marshaling strategies. Now
mapstate handles the logic for its own (de)serialization. In the
interest of supporting various potential upgrade formats, the state
serialization begins with a varint (right now one byte) describing the
version.
Some Go tests are modified and a Go test is added to cover the new
ipfs-cluster raft snapshot reading functions. Sharness tests are added
to cover the state upgrade command.
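For illustration, a minimal sketch of such version-prefixed
serialization, with a hypothetical payload and version constant; the
real logic lives in mapstate's marshaling code:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// stateVersion is a hypothetical format version for this sketch.
const stateVersion = 2

// marshalState prefixes the payload with a uvarint format version so
// that future readers can dispatch on it before decoding the rest.
func marshalState(payload []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, stateVersion)
	return append(buf[:n], payload...)
}

// unmarshalState reads the version back and rejects unknown formats
// with a clear error.
func unmarshalState(data []byte) ([]byte, error) {
	r := bytes.NewReader(data)
	v, err := binary.ReadUvarint(r)
	if err != nil {
		return nil, err
	}
	if v != stateVersion {
		return nil, fmt.Errorf("unsupported state format version %d", v)
	}
	return data[len(data)-r.Len():], nil
}

func main() {
	blob := marshalState([]byte(`{"pins":{}}`))
	payload, err := unmarshalState(blob)
	fmt.Println(string(payload), err)
}
```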
Make sure we save a new config if the new peerset
is different from the one in the configuration at
boot.
Hopefully this fixes a race condition in the PeerAdd test.
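A minimal sketch of that check, with hypothetical helpers: persist the
configuration only when the current peerset differs from the one loaded
at boot.

```go
package main

import (
	"fmt"
	"sort"
)

// samePeers compares two peersets regardless of order, without
// mutating its inputs.
func samePeers(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	as := append([]string(nil), a...)
	bs := append([]string(nil), b...)
	sort.Strings(as)
	sort.Strings(bs)
	for i := range as {
		if as[i] != bs[i] {
			return false
		}
	}
	return true
}

func main() {
	bootPeers := []string{"peerA", "peerB"}
	current := []string{"peerB", "peerC"}
	if !samePeers(bootPeers, current) {
		fmt.Println("peerset changed: saving configuration")
		// the real code would persist the config here
	}
}
```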
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
SwarmConnect on the ipfs connector calls the Peers() RPC, which
requests IDs from every peer member. If that peer member
is booting, it might get the request after RPC is set up
but before consensus is initialized, in which case
a panic happens. The probability that this happens is small, but still.
Also increase the connect swarms delay to 30 seconds, which
should be a bit longer than the default wait_for_leader timeout;
otherwise we might connect swarms while there isn't even a leader.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This is likely what was causing PeerRemove tests to fail randomly
but very often. We cancelled the Cluster context before shutting down
the Consensus component. This killed networking and aborted
the peer remove operations when the leader was removing itself.
As a result, it would error with "leadership lost", which would
trigger a retry, which would set the final error to "context cancelled",
because the shutdown of the consensus component proceeds during the
retry, cancelling the consensus context.
This does not only affect tests; it might have affected operations when
running cluster.
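A minimal sketch of the ordering fix, with a hypothetical component
type: shut down the consensus component first, and only cancel the
shared context once consensus no longer needs networking.

```go
package main

import (
	"context"
	"fmt"
)

// consensusComponent is a hypothetical stand-in for the Consensus component.
type consensusComponent struct{}

// Shutdown may still need networking, e.g. while the leader commits its
// own removal from the peerset, so it must run before the context is
// cancelled.
func (c *consensusComponent) Shutdown(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return fmt.Errorf("leadership lost: %w", ctx.Err())
	default:
		return nil // the final log entries get committed here
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	c := &consensusComponent{}

	// Buggy order: calling cancel() before c.Shutdown(ctx) kills
	// networking and aborts the in-flight peer-remove operation.

	// Fixed order: shut consensus down first, then cancel the context.
	if err := c.Shutdown(ctx); err != nil {
		fmt.Println("shutdown error:", err)
	}
	cancel()
}
```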
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Sometimes tests fail because the returned api.ID for a new peer
does not include the current cluster peers. This is because the
new peerset has not yet been committed in the new peer at the time
of the request.
This commit retries the request until the correct peerset
comes in, or gives up after two seconds of retrying.
Beyond the failing tests, note that the returned ID is very
user-facing and should contain the current cluster peers after adding,
and not the former peerset, at least while the PeerAdd operation is
allowed.
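A minimal sketch of that retry-with-timeout pattern, with a
hypothetical fetch function standing in for the real RPC:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPeerset polls fetch until the returned peerset reaches the
// expected size, giving up after two seconds as described above.
func waitForPeerset(fetch func() []string, want int) ([]string, error) {
	deadline := time.Now().Add(2 * time.Second)
	for {
		peers := fetch()
		if len(peers) == want {
			return peers, nil
		}
		if time.Now().After(deadline) {
			return peers, errors.New("peerset not updated in time")
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	calls := 0
	fetch := func() []string { // simulates the peerset arriving late
		calls++
		if calls < 3 {
			return []string{"peerA"}
		}
		return []string{"peerA", "peerB"}
	}
	fmt.Println(waitForPeerset(fetch, 2))
}
```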
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
I think this will prevent some random test failures
that happen when we realize that we are no longer in the peerset
and trigger a shutdown, but Raft has not finished fully
committing the operation. That triggers an error
and a retry, but the contexts are cancelled during the retry,
so it won't find a leader and will finally error
with that message.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Also, start removing mentions of the `PeerAdd` operation, as things
should happen with `--bootstrap` (Join). PeerAdd should not be part of
the user workflow and might disappear or be hidden in the future.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>