Commit Graph

22 Commits

Author SHA1 Message Date
Hector Sanjuan
6447ea51d2 Remove *Serial types. Use pointers for all types.
This takes advantage of the latest features in go-cid, peer.ID and
go-multiaddr and makes the Go types serializable by default.

This means we no longer need to copy between Pin <-> PinSerial, or ID <->
IDSerial etc. We can now efficiently binary-encode these types using short
field keys and without parsing/stringifying (in many cases it is just a cast).

We still get the same json output as before (with minor modifications for
Cids).

This should greatly improve Cluster performance and memory usage when dealing
with large collections of items.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2019-02-27 17:04:35 +00:00
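For illustration, here is a minimal sketch of the kind of type this change enables: a Pin-like struct built directly on the native cid.Cid and peer.ID types, which marshals to JSON without a parallel PinSerial copy. The struct and field names are hypothetical placeholders, not the actual cluster types, and import paths vary between library versions.

```go
package main

import (
	"encoding/json"
	"fmt"

	cid "github.com/ipfs/go-cid"
	peer "github.com/libp2p/go-libp2p-core/peer"
)

// Pin is a hypothetical example: it embeds the native cid.Cid and peer.ID
// types directly. Recent versions of those types provide their own
// (un)marshalers, so no *Serial intermediate struct is needed.
type Pin struct {
	Cid         cid.Cid   `json:"cid"`
	Allocations []peer.ID `json:"allocations"`
	MaxDepth    int       `json:"max_depth"`
}

func main() {
	c, _ := cid.Decode("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG")
	p := Pin{Cid: c, MaxDepth: -1}
	out, _ := json.Marshal(p)
	// The Cid field serializes via cid.Cid's own MarshalJSON,
	// e.g. {"cid":{"/":"QmYw..."},"allocations":null,"max_depth":-1}
	fmt.Println(string(out))
}
```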
Adrian Lanzafame
31474f6490
update go-cid and go-libp2p
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
2018-09-24 11:35:38 +10:00
Hector Sanjuan
623120fd50 Start cluster tests
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-08-07 20:12:05 +02:00
Hector Sanjuan
a96241941e WIP
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-08-07 20:12:05 +02:00
Wyatt Daviau
238f3726f3 Pin datastructure updated to support sharding
4 PinTypes specify how a CID is pinned
Changes to Pin and Unpin to handle different PinTypes
Tests for different PinTypes
Migration for new state format using new Pin datastructures
Visibility of the PinTypes used internally limited by default

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
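As a rough illustration of the PinType idea in the commit above, the sketch below defines four pin types as Go constants. The names and doc comments are illustrative placeholders, not necessarily the identifiers used in the cluster codebase.

```go
package main

import "fmt"

// PinType is a hypothetical enumeration describing how a CID is pinned.
type PinType int

const (
	// DataType: a regular pin, recursively pinned by the IPFS daemon.
	DataType PinType = iota
	// MetaType: root of a sharded DAG; its content is not pinned directly.
	MetaType
	// ClusterDAGType: the cluster-internal DAG referencing all shards.
	ClusterDAGType
	// ShardType: a single shard of a sharded DAG, pinned to a given depth.
	ShardType
)

func (t PinType) String() string {
	return [...]string{"data", "meta", "cluster-dag", "shard"}[t]
}

func main() {
	fmt.Println(DataType, ShardType) // data shard
}
```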
Adrian Lanzafame
c89508035a Maptracker: extract optracker and make improvements
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-28 11:59:26 +02:00
Hector Sanjuan
3c3341e491 Monitor: add PublishMetric() to component interface
The monitor component should be in charge of deciding how best to send
metrics to other peers and what that means.

This adds the PublishMetric() method to the component interface
and moves that functionality from Cluster main component to the
basic monitor.

There is a behaviour change. Before, the metrics were sent only to
the leader, while the leader was the only peer to broadcast them everywhere.
Now, all peers broadcast all metrics everywhere. This is mostly
because we should not rely on the consensus layer providing a Leader(), so
we are taking the chance to remove this dependency.

Note that, in any case, pubsub monitoring should replace the
existing basic monitor. This is just paving the ground.

Additionally, in order to not duplicate the multiRPC code
in the monitor, I have moved that functionality to go-libp2p-gorpc
and added an rpcutil library to cluster which includes useful
methods to perform multiRPC requests (some of them existed in
util.go, others are new and help with handling multiple contexts, etc.).

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-07 14:26:06 +02:00
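A minimal sketch of the interface change described above, with simplified type and method names assumed for illustration (the real component interfaces are richer and take RPC clients):

```go
package monitor

import "context"

// Metric is a simplified stand-in for the metric type peers exchange.
type Metric struct {
	Name  string
	Peer  string // ID of the peer the metric describes
	Value string
	Valid bool
}

// PeerMonitor sketches the monitor component interface after the change:
// the monitor itself decides how metrics reach other peers.
type PeerMonitor interface {
	// LogMetric stores a metric received from some peer.
	LogMetric(ctx context.Context, m Metric) error
	// PublishMetric broadcasts a locally generated metric to the rest of
	// the cluster (previously this logic lived in the main Cluster
	// component and targeted only the leader).
	PublishMetric(ctx context.Context, m Metric) error
	// LatestMetrics returns the freshest valid metrics with a given name.
	LatestMetrics(ctx context.Context, name string) []Metric
}
```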
Hector Sanjuan
09f4c9fce3 rest/libp2p-http: address lanzafame's review
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-20 19:35:42 +01:00
Hector Sanjuan
a73d7e6f7e Relocate multiaddrJoin and multiaddrSplit to api/types.go
So they can serve as multi-module helpers without having circular deps.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-16 13:37:32 +01:00
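A hedged sketch of what a multiaddr-splitting helper like the one mentioned above might look like, using go-multiaddr's public API; the function name and return types here are illustrative, not the actual cluster helpers.

```go
package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

// splitPeerAddr separates "/ip4/1.2.3.4/tcp/9096/ipfs/Qm..." into the
// peer ID string and the transport part of the multiaddress.
func splitPeerAddr(addr ma.Multiaddr) (string, ma.Multiaddr, error) {
	pid, err := addr.ValueForProtocol(ma.P_IPFS)
	if err != nil {
		return "", nil, err
	}
	ipfsPart, err := ma.NewMultiaddr("/ipfs/" + pid)
	if err != nil {
		return "", nil, err
	}
	return pid, addr.Decapsulate(ipfsPart), nil
}

func main() {
	addr, err := ma.NewMultiaddr("/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ")
	if err != nil {
		panic(err)
	}
	pid, transport, _ := splitPeerAddr(addr)
	fmt.Println(pid)       // Qma...
	fmt.Println(transport) // /ip4/104.131.131.82/tcp/4001
}
```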
Hector Sanjuan
a180f1a5c5
Merge pull request #291 from ipfs/feat/connectivity-graph
Feat/connectivity graph
2018-01-26 12:42:18 +01:00
Wyatt Daviau
eafc747305 fix/297 Resolve the lack of snapshot pushes:
Snapshot-saving state commands (upgrade and import)
now save raft config peers as consensus peers in the snapshot.
The snapshot index goes from 1 to 2 when saving from a fresh import, to force
replication when bootstrapping.

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-01-25 16:47:12 -05:00
ZenGround0
4b26ccd144
Merge branch 'master' into feat/connectivity-graph
2018-01-23 08:34:43 -05:00
Wyatt Daviau
d2ef32f48f Testing and polishing connection graph
Added go tests
Refactored cluster connect graph to new file
Refactored dot file printing to new repo
Fixed code climate issues
Added sharness test

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-01-22 10:03:37 -05:00
Hector Sanjuan
4549282cba Fix #277: Introduce maximum and minimum replication factor
This PR replaces ReplicationFactor with ReplicationFactorMax
and ReplicationFactorMin.

This allows a CID to be pinned even though the desired
replication factor (max) is not reached, and prevents triggering
re-pinnings when the replication factor has not crossed the
lower threshold (min).

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-01-16 16:36:06 +01:00
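To make the min/max semantics above concrete, here is a small hedged sketch of the kind of check this enables; the function name and messages are illustrative, not lifted from the cluster code.

```go
package main

import "fmt"

// allocationStatus illustrates the min/max semantics: it reports whether
// a pin with `valid` current allocations is healthy given
// ReplicationFactorMin/Max-style thresholds.
func allocationStatus(valid, min, max int) string {
	switch {
	case valid < min:
		// Below the lower threshold: under-replicated, trigger a re-pin.
		return "under-replicated: trigger re-pin"
	case valid < max:
		// Between min and max: acceptable even though the desired
		// (max) replication factor has not been reached.
		return "acceptable: above min, below desired max; no re-pin"
	default:
		return "fully replicated"
	}
}

func main() {
	fmt.Println(allocationStatus(2, 2, 3)) // acceptable: above min, below desired max; no re-pin
	fmt.Println(allocationStatus(1, 2, 3)) // under-replicated: trigger re-pin
}
```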
Hector Sanjuan
c0628e43ff fix golint: Address a few golint warnings
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-12-06 15:15:38 +01:00
Wyatt
47b744f1c0 ipfs-cluster-service state upgrade cli command
ipfs-cluster-service now has a migration subcommand that upgrades
    persistent state snapshots with an out-of-date format version to the
    newest version of the raft state. If all cluster members shut down with
    consistent state, upgrade ipfs-cluster, and run the state upgrade command,
    the new version of cluster will be compatible with persistent storage.
    ipfs-cluster now validates its persistent state upon loading it and exits
    with a clear error in case the state format version is not up to date.

    Raft snapshotting is enforced on all shutdowns and the json backup is no
    longer run.  This commit makes use of recent changes to libp2p-raft
    allowing raft states to implement their own marshaling strategies. Now
    mapstate handles the logic for its (de)serialization.  In the interest of
    supporting various potential upgrade formats, the state serialization
    begins with a varint (right now one byte) describing the version.

    Some go tests are modified and a go test is added to cover new ipfs-cluster
    raft snapshot reading functions.  Sharness tests are added to cover the
    state upgrade command.
2017-11-28 22:35:48 -05:00
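The varint version prefix mentioned above can be sketched roughly as follows, using only the standard library; this is a hedged example, not the actual mapstate code.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// marshalWithVersion prefixes the serialized state with a varint format
// version so future formats can be detected before decoding the payload.
func marshalWithVersion(version uint64, payload []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, version)
	return append(buf[:n], payload...)
}

// unmarshalVersion reads the varint version prefix back and returns the
// remaining payload.
func unmarshalVersion(data []byte) (uint64, []byte, error) {
	r := bytes.NewReader(data)
	v, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, nil, err
	}
	return v, data[len(data)-r.Len():], nil
}

func main() {
	blob := marshalWithVersion(1, []byte(`{"pins":{}}`))
	v, payload, _ := unmarshalVersion(blob)
	fmt.Println(v, string(payload)) // 1 {"pins":{}}
}
```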
Hector Sanjuan
081384fb7f cluster: Make peersFromMultiaddrs remove any duplicates.
Use it to find out the number of peers in the config and prevent
peerAdd test failures.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-11-15 18:55:55 +01:00
Hector Sanjuan
faa755f43a Re-allocate pins on peer removal
PeerRm now triggers re-pinning of all the Cids allocated to
the removed peer.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-07-05 16:38:36 +02:00
Hector Sanjuan
2512ecb701 Issue #41: Add Replication factor
New PeerManager, Allocator, Informer components have been added along
with a new "replication_factor" configuration option.

First, cluster peers collect and push metrics (Informer) to the Cluster
leader regularly. The Informer is an interface that can be implemented
in custom ways to support custom metrics.

Second, on a pin operation, using the information from the collected metrics,
an Allocator can provide a list of preferences as to where the new pin
should be assigned. The Allocator is an interface that allows providing
different allocation strategies.

Both Allocator and Informer are Cluster Components, and have access
to the RPC API.

The allocations are kept in the shared state. Cluster peer failure
detection and automatic re-allocation are still missing, although
re-pinning an item when a node is down or its metrics are missing does
re-allocate the pin somewhere else.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-14 19:13:08 +01:00
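A rough sketch of the Informer/Allocator split described above, with simplified signatures assumed for illustration; the real interfaces also take RPC clients and richer metric types.

```go
package informer

// Metric is a simplified stand-in for the metrics peers push to the leader.
type Metric struct {
	Name  string
	Peer  string // ID of the peer the metric describes
	Value string
}

// Informer collects a metric about the local peer (e.g. free disk space or
// number of pins) so it can be pushed to the cluster leader regularly.
type Informer interface {
	Name() string
	GetMetric() Metric
}

// Allocator ranks candidate peers for a new pin using the collected
// metrics, returning peer IDs ordered by preference. Different
// implementations can provide different allocation strategies.
type Allocator interface {
	Allocate(cid string, metrics map[string]Metric) ([]string, error)
}
```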
Hector Sanjuan
1b3d04e18b Move all API-related types to the /api subpackage.
At the beginning we opted for native types which were
serializable (PinInfo had a CidStr field instead of Cid).

Now we provide types in two versions: native and serializable.

Go methods use the native versions. The rest of the APIs (REST/RPC) always
use the serializable versions. Methods are provided to convert between the
two.

The reason for moving these out of the way is to be able to re-use
type definitions when parsing API responses in `ipfs-cluster-ctl` or
any other clients that come up. API responses are just the serializable
version of the types in JSON encoding. This also reduces duplicate
type defs and parsing methods everywhere.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-09 16:30:53 +01:00
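As an illustration of the native/serializable pairing this commit describes, the sketch below shows a hypothetical PinInfo with its serializable counterpart and conversion methods; the names are placeholders, not the exact cluster types.

```go
package api

import (
	cid "github.com/ipfs/go-cid"
)

// PinInfo is the hypothetical native version, using rich Go types.
type PinInfo struct {
	Cid    cid.Cid
	Status string
}

// PinInfoSerial is the serializable counterpart used by the REST/RPC APIs
// and by clients such as ipfs-cluster-ctl: everything is a plain string.
type PinInfoSerial struct {
	Cid    string `json:"cid"`
	Status string `json:"status"`
}

// ToSerial converts the native type into its serializable version.
func (pi PinInfo) ToSerial() PinInfoSerial {
	return PinInfoSerial{Cid: pi.Cid.String(), Status: pi.Status}
}

// ToPinInfo parses the serializable version back into native types.
func (pis PinInfoSerial) ToPinInfo() (PinInfo, error) {
	c, err := cid.Decode(pis.Cid)
	if err != nil {
		return PinInfo{}, err
	}
	return PinInfo{Cid: c, Status: pis.Status}, nil
}
```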
Hector Sanjuan
34fdc329fc Fix #24: Auto-join and auto-leave operations for Cluster
This is the third implementation attempt. This time, rather than
broadcasting PeerAdd/Join requests to the whole cluster, we use the
consensus log to broadcast new peers joining.

This makes it easier to recover from errors and to know who exactly
is member of a cluster and who is not. The consensus is, after all,
meant to agree on things, and the list of cluster peers is something
everyone has to agree on.

Raft itself uses a special log operation to maintain the peer set.

The tests are almost unchanged from the previous attempts, so behaviour should
be the same, except that it doesn't seem possible to bootstrap a bunch of nodes
at the same time using different bootstrap nodes. It works when they all use
the same one. I'm not sure this worked before either, but the code is
simpler than recursively contacting peers, and scales better for
larger clusters.

Nodes have to be careful about joining clusters while keeping the state
from a different cluster (disjoint logs). This may cause problems with
Raft.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-07 18:46:09 +01:00
Hector Sanjuan
6c18c02106 Issue #10: peers/add and peers/rm feature + tests
This commit adds PeerAdd() and PeerRemove() endpoints, CLI support,
tests. Peer management is a delicate issue because of how the consensus
works underneath and the places that need to track such peers.

When adding a peer the procedure is as follows:

* Try to open a connection to the new peer and abort if not reachable
* Broadcast a PeerManagerAddPeer operation which tells all cluster members
to add the new Peer. The Raft leader will add it to Raft's peerset and
the multiaddress will be saved in the ClusterPeers configuration key.
* If the above fails because some cluster node is not responding,
broadcast a PeerRemove() and try to undo any damage.
* If the broadcast succeeds, send our ClusterPeers to the new Peer along with
the local multiaddress we are using in the connection opened in the
first step (that is the multiaddress through which the other peer can reach us)
* The new peer updates its configuration with the new list and joins
the consensus

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-02 13:51:49 +01:00
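The peer-add procedure above can be summarized in a short hedged sketch; the function and type names are illustrative, and the error handling is simplified compared to the real implementation.

```go
package cluster

import (
	"context"
	"errors"
)

// rpcAPI is an illustrative stand-in for the cluster's RPC layer.
type rpcAPI interface {
	Connect(ctx context.Context, addr string) error          // open a connection to the new peer
	BroadcastAddPeer(ctx context.Context, addr string) error // tell all members to add the peer
	BroadcastRmPeer(ctx context.Context, addr string) error  // undo on failure
	SendClusterPeers(ctx context.Context, to string, peers []string) error
}

// peerAdd sketches the steps described in the commit message above.
func peerAdd(ctx context.Context, rpc rpcAPI, newPeer string, clusterPeers []string, localAddr string) error {
	// 1. Try to open a connection to the new peer; abort if unreachable.
	if err := rpc.Connect(ctx, newPeer); err != nil {
		return errors.New("new peer is not reachable: " + err.Error())
	}
	// 2. Broadcast the "add peer" operation to every cluster member.
	if err := rpc.BroadcastAddPeer(ctx, newPeer); err != nil {
		// 3. Some member did not respond: broadcast a removal to undo damage.
		_ = rpc.BroadcastRmPeer(ctx, newPeer)
		return err
	}
	// 4. Send our ClusterPeers, plus the local multiaddress the new peer
	//    can reach us on, so it can update its config and join consensus.
	return rpc.SendClusterPeers(ctx, newPeer, append(clusterPeers, localAddr))
}
```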