Allows us to test against the latest ipfs (either it's broken and this
helps us find out, or it's not broken anymore and we don't lose time
trying to figure out why).
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
ipfs-cluster-service now has a migration subcommand that upgrades
persistent state snapshots with an out-of-date format version to the
newest version of the raft state. If all cluster members shut down with
a consistent state, upgrading ipfs-cluster and running the state upgrade
command will make the new version of cluster compatible with the
persisted storage.
ipfs-cluster now validates its persistent state upon loading it and exits
with a clear error if the state format version is not up to date.
Raft snapshotting is enforced on all shutdowns and the JSON backup is no
longer run. This commit makes use of recent changes to libp2p-raft
that allow raft states to implement their own marshaling strategies. Now
mapstate handles the logic for its (de)serialization. In the interest of
supporting various potential upgrade formats, the state serialization
begins with a varint (currently one byte) describing the version.
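As a rough illustration of that format, here is a minimal Go sketch of a
version-prefixed (de)serialization. The `mapState` type, its fields and the
version constant are simplified stand-ins, not the real mapstate API:

```go
package state

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"errors"
	"io/ioutil"
)

// stateVersion stands in for the current format version (assumed here).
const stateVersion = 1

// mapState is a simplified stand-in for the real mapstate.
type mapState struct {
	Pins map[string]string `json:"pins"`
}

// Marshal writes a uvarint version prefix followed by the JSON-encoded state.
func (s *mapState) Marshal() ([]byte, error) {
	var buf bytes.Buffer
	vb := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(vb, stateVersion)
	buf.Write(vb[:n])
	if err := json.NewEncoder(&buf).Encode(s); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// Unmarshal checks the version prefix and refuses out-of-date formats,
// which is when the state upgrade command would be needed.
func (s *mapState) Unmarshal(data []byte) error {
	r := bytes.NewReader(data)
	v, err := binary.ReadUvarint(r)
	if err != nil {
		return err
	}
	if v != stateVersion {
		return errors.New("out-of-date state format: run the state upgrade command")
	}
	rest, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}
	return json.Unmarshal(rest, s)
}
```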
Some Go tests are modified and a Go test is added to cover the new
ipfs-cluster raft snapshot reading functions. Sharness tests are added to
cover the state upgrade command.
make sure we save a new config if the new peerset
is different from the one in the configuration at
boot.
Hopefully this fixes a race condition in the PeerAdd test
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
SwarmConnect on the ipfs connector calls the Peers() RPC, which
requests IDs from every peer member. If a peer member
is booting, it might get the request after RPC is set up
but before consensus is initialized, in which case
a panic happens. The probability that this happens is small, but still.
Also, increase the connect swarms delay to 30 seconds, which
should be a bit longer than the default wait_for_leader timeout;
otherwise we might connect swarms while there isn't even a leader yet.
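For illustration, a minimal sketch of the boot window in which such a panic
can happen and one way of guarding against it; the names are simplified
stand-ins and this is not necessarily the exact fix applied here:

```go
package cluster

import "errors"

// consensusComponent is an assumed minimal interface for the piece that
// may not be ready yet while a peer is booting.
type consensusComponent interface {
	Peers() ([]string, error)
}

// Cluster is a simplified stand-in: RPC may already be serving requests
// while consensus is still nil during boot.
type Cluster struct {
	consensus consensusComponent
}

// ID answers the RPC request used by Peers()/SwarmConnect. Checking for a
// nil consensus avoids a panic when the request arrives too early.
func (c *Cluster) ID() ([]string, error) {
	if c.consensus == nil {
		return nil, errors.New("consensus not initialized yet")
	}
	return c.consensus.Peers()
}
```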
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This is likely what was causing PeerRemove tests to fail randomly
but very often. We cancelled the Cluster context before shutting down
the Consensus component. This killed networking and aborted
peer removal operations when the leader was removing itself.
As a result, it would error with "leadership lost", which would
trigger a retry that would set the final error to "context cancelled",
because the shutdown of the consensus component proceeds during the
retry, cancelling the consensus context.
This does not only affect tests; it might affect operations when
running cluster.
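A minimal sketch of the corrected ordering, with a simplified `Cluster` type
and `component` interface standing in for the real ones: shut down consensus
while networking is still alive, and only then cancel the shared context.

```go
package cluster

import "context"

// component is an assumed minimal interface for anything with a Shutdown step.
type component interface {
	Shutdown() error
}

// Cluster is a simplified stand-in; only the fields relevant to the
// shutdown order are shown.
type Cluster struct {
	cancel    context.CancelFunc
	consensus component
}

// Shutdown stops the Consensus component before cancelling the shared
// context, so an in-flight operation such as a leader removing itself
// is not aborted by losing networking halfway through.
func (c *Cluster) Shutdown() error {
	if err := c.consensus.Shutdown(); err != nil {
		return err
	}
	c.cancel()
	return nil
}
```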
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Sometimes tests fail because the returned api.ID for a new peer
does not include the current cluster peers. This is because the
new peerset has not yet been committed in the new peer at the time
of the request.
This commit retries the request until the correct peerset
comes in, or gives up after two seconds of retrying.
This is not just about the tests failing: the returned ID is very
user-facing and should contain the current cluster peers after adding,
and not the former peerset, at least while the PeerAdd operation is allowed.
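A rough sketch of what such a test-side retry can look like, using an
assumed `idProvider` interface instead of the real Cluster and RPC types:

```go
package cluster

import (
	"testing"
	"time"
)

// idProvider is a stand-in for whatever exposes the peer's view of the
// cluster peerset (the real tests would query ID() over RPC).
type idProvider interface {
	ClusterPeerCount() int
}

// waitForPeerset polls until the reported peerset reaches the expected
// size, or fails the test after two seconds of retrying.
func waitForPeerset(t *testing.T, p idProvider, want int) {
	deadline := time.Now().Add(2 * time.Second)
	for p.ClusterPeerCount() != want {
		if time.Now().After(deadline) {
			t.Fatalf("peerset did not reach %d peers within 2s", want)
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```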
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
I think this will prevent some random test failures
that happen when we realize that we are no longer in the peerset
and trigger a shutdown, but Raft has not fully finished
committing the operation, which then triggers an error
and a retry. But the contexts are cancelled in the retry,
so it won't find a leader and will finally error
with that message.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Also, start removing mentions of the `PeerAdd` operation, as things should
happen with `--bootstrap` (Join). PeerAdd should not be part of the user
workflow and might disappear or be hidden in the future.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
It seems more logical that the new peer should know how to contact
everyone before it needs to start doing it, but it probably does not
matter much. Still, more logical.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Review documentation to be in line with latest updates to Raft and
any other feature introduced since 0.12.0.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Snaps set a custom $HOME, but we were using /etc/passwd.
There might be other cases where using a custom $HOME might be
handy.
On UNIX systems, $HOME should always be set. For all the rest,
we fall back to the original os/user.HomeDir method.
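A minimal sketch of that lookup order, with an illustrative function name
rather than the real one: prefer $HOME when it is set, and fall back to
os/user otherwise.

```go
package config

import (
	"os"
	"os/user"
)

// homeDir prefers the $HOME environment variable (which snaps customize)
// and falls back to the os/user lookup when it is unset.
func homeDir() (string, error) {
	if home := os.Getenv("HOME"); home != "" {
		return home, nil
	}
	u, err := user.Current()
	if err != nil {
		return "", err
	}
	return u.HomeDir, nil
}
```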
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This commit changes the way that consensus.Clean() works. Before,
it deleted the whole data folder. Now it renames it to <name>.old.0
and leaves it there. When Clean() is called again, it renames <name>.old.0
to <name>.old.1, and the actual data becomes <name>.old.0. A higher number
means an older backup. The number of backups is fixed at 5. When 5 backups
exist and a new one comes in, the oldest one is discarded.
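A sketch of that rotation, with illustrative names (`cleanupDataFolder`,
`maxBackups`) rather than the real implementation:

```go
package raft

import (
	"fmt"
	"os"
)

// maxBackups is the fixed number of kept copies.
const maxBackups = 5

// cleanupDataFolder rotates the data folder into numbered ".old" copies,
// where a higher suffix means an older backup; the copy beyond the limit
// is discarded.
func cleanupDataFolder(dataFolder string) error {
	// Drop the oldest backup if it exists.
	os.RemoveAll(fmt.Sprintf("%s.old.%d", dataFolder, maxBackups-1))

	// Shift the remaining backups one position towards "older".
	for i := maxBackups - 2; i >= 0; i-- {
		src := fmt.Sprintf("%s.old.%d", dataFolder, i)
		dst := fmt.Sprintf("%s.old.%d", dataFolder, i+1)
		if _, err := os.Stat(src); err == nil {
			if err := os.Rename(src, dst); err != nil {
				return err
			}
		}
	}

	// The current data folder becomes the newest backup.
	if _, err := os.Stat(dataFolder); err == nil {
		return os.Rename(dataFolder, dataFolder+".old.0")
	}
	return nil
}
```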
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
Up to now, we hardcoded progress to "false" in the proxy, regardless
of what the original request said. We now leave it as it is and
simply ignore any progress updates when processing the response.
Since the response is buffered and sent back all at once, they are
still useless, but at least clients (the ipfs CLI) won't show a 0%
progress bar when successfully adding a file.
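For illustration, a sketch of skipping progress-only updates while decoding
the buffered response of "add"; the `ipfsAddResp` shape is an assumption
based on go-ipfs's newline-delimited JSON output:

```go
package ipfsproxy

import (
	"encoding/json"
	"io"
)

// ipfsAddResp approximates one line of the "add" response stream:
// progress updates carry Bytes but no Hash.
type ipfsAddResp struct {
	Name  string `json:"Name"`
	Hash  string `json:"Hash,omitempty"`
	Bytes int64  `json:"Bytes,omitempty"`
}

// parseAddResponses decodes the newline-delimited JSON stream and keeps
// only the final per-file entries, dropping progress-only updates.
func parseAddResponses(r io.Reader) ([]ipfsAddResp, error) {
	dec := json.NewDecoder(r)
	var out []ipfsAddResp
	for {
		var resp ipfsAddResp
		if err := dec.Decode(&resp); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		if resp.Hash == "" { // progress update: ignore it
			continue
		}
		out = append(out, resp)
	}
	return out, nil
}
```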
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>