Commit Graph

16 Commits

Hector Sanjuan
39fb193eaf Peerstore: support dns multiaddresses
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-11-29 10:34:03 +01:00
Hector Sanjuan
b6ba6d5a1e Issue #219: Clean up peer manager. Rename Peers RPC call
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-14 12:26:42 +01:00
Hector Sanjuan
b852dfa892 Fix #219: WIP: Remove duplicate peer accounting
This change removes duplicate peer accounting in the PeerManager
component (see the sketch after the list below):

* No more committing PeerAdd and PeerRm log entries
* The Raft peer set is the source of truth
* Basic broadcasting is used to communicate peer multiaddresses
  in the cluster
* A peer can only be added in a healthy cluster
* A peer can be removed from any cluster which can still commit
* This also adds support for multiple multiaddresses per peer
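
As an illustration of the new approach, here is a minimal Go sketch of treating the Raft peer set as the source of truth while broadcast multiaddresses are kept on the side. All names (`peerstore`, `raftPeers`, `Peers`) are hypothetical stand-ins, not the actual ipfs-cluster code.

```go
// Illustrative sketch only: hypothetical names, not the real ipfs-cluster
// code. The Raft peer set is the source of truth for membership, while a
// simple broadcast fills in (possibly several) multiaddresses per peer.
package main

import "fmt"

// peerstore keeps the multiaddresses that were broadcast for each peer.
type peerstore struct {
	addrs map[string][]string // peer ID -> multiaddresses
}

// raftPeers stands in for reading the peer set from the Raft configuration.
func raftPeers() []string {
	return []string{"QmPeerA", "QmPeerB"}
}

// Peers no longer replays PeerAdd/PeerRm log entries; it reads the Raft
// peer set and attaches whatever addresses were broadcast for each peer.
func (ps *peerstore) Peers() map[string][]string {
	out := make(map[string][]string)
	for _, id := range raftPeers() {
		out[id] = ps.addrs[id]
	}
	return out
}

func main() {
	ps := &peerstore{addrs: map[string][]string{
		"QmPeerA": {"/ip4/10.0.0.1/tcp/9096", "/dns4/a.example.com/tcp/9096"},
	}}
	fmt.Println(ps.Peers())
}
```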

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-08 20:04:04 +01:00
Hector Sanjuan
bff1ec3635 Issue #131: rename addFromMultiaddrs to setFromMultiaddrs
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 19:38:46 +01:00
Hector Sanjuan
073c43e291 Issue #131: Make sure peers are moved to bootstrap when leaving
Also, do not shut down when seeing our own departure during bootstrap.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 13:58:57 +01:00
Hector Sanjuan
c912cfd205 Issue #131: Destroy raft data when the peer has been removed
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 13:25:28 +01:00
Hector Sanjuan
7a5f8f184b Issue #131: Improvements adding and removing
This works on the remove+shutdown procedure and fixes a few small
issues.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 13:00:32 +01:00
Hector Sanjuan
09e9b05ed0 Peer add: do not print "new peer" info messages on boot
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-11-01 12:17:33 +01:00
Hector Sanjuan
746ce00d31 Issue #213: Peer addresses should never be a nil slice
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-27 22:11:14 +02:00
Hector Sanjuan
828236dcc0 Issue #213: Make sure we wait for configuration to be saved
There might be a case where the program is terminated before the
configuration is saved.

Also, avoid calling save() multiple times on shutdown.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-27 21:44:02 +02:00
Hector Sanjuan
8f06baa1bf Issue #162: Rework configuration format
This commit reimplements the ipfs-cluster configuration under the
following premises:

  * Each component is initialized with a configuration object
  defined by its module
  * Each component decides what the JSON representation of its
  configuration looks like
  * Each component parses and validates its own configuration
  * Each component exposes its own defaults
  * Component configurations make up the sections of a
  central JSON configuration file (which replaces the current
  format)
  * Component configurations implement a common interface
  (config.ComponentConfig) with a set of common operations
  * The central configuration file is managed by a
  config.ConfigManager which:
    * Registers ComponentConfigs
    * Assigns the corresponding section of the JSON file to each
    component and delegates the parsing
    * Delegates the JSON generation for each section
    * Can be notified when the configuration is updated and must be
    saved to disk

The new service.json would then look as follows:

```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```
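
To picture how the component-based configuration could hang together, here is a minimal Go sketch of the ComponentConfig/ConfigManager split described above. The method names (ConfigKey, Default, LoadJSON, ToJSON, Validate) and the single-level sections are illustrative assumptions, not the real ipfs-cluster API.

```go
// Minimal sketch (hypothetical names, flattened sections) of the
// per-component configuration contract described above.
package config

import "encoding/json"

// ComponentConfig is what every component's configuration object would
// implement so a central manager can delegate parsing and serialization.
type ComponentConfig interface {
	// ConfigKey returns the section name used in the central JSON file,
	// e.g. "restapi" or "raft".
	ConfigKey() string
	// Default resets the configuration to its default values.
	Default() error
	// LoadJSON parses the raw JSON section belonging to this component.
	LoadJSON(raw []byte) error
	// ToJSON generates the JSON section for this component.
	ToJSON() ([]byte, error)
	// Validate checks that the loaded values are usable.
	Validate() error
}

// Manager registers component configurations and splits the central
// service.json among them.
type Manager struct {
	sections map[string]ComponentConfig
}

// NewManager returns an empty Manager.
func NewManager() *Manager {
	return &Manager{sections: make(map[string]ComponentConfig)}
}

// RegisterComponent adds a component configuration under its key.
func (m *Manager) RegisterComponent(c ComponentConfig) {
	m.sections[c.ConfigKey()] = c
}

// LoadJSON hands each JSON section to the component that owns it.
func (m *Manager) LoadJSON(raw []byte) error {
	var file map[string]json.RawMessage
	if err := json.Unmarshal(raw, &file); err != nil {
		return err
	}
	for key, cfg := range m.sections {
		if section, ok := file[key]; ok {
			if err := cfg.LoadJSON(section); err != nil {
				return err
			}
		}
	}
	return nil
}
```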

This new format aims to be easily extensible per component. As such,
it already surfaces quite a few new options which were hardcoded
before.

Additionally, since the Go APIs have changed, some redundant methods have
been removed, and small refactorings were made to take advantage of the
new approach.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-18 00:00:12 +02:00
Wyatt
e3ccc1b8f4 Using unshadow to save bootstrappers without changing other functionality 2017-10-11 16:12:21 -04:00
Wyatt
67d38a06c4 Using shadow to actually save bootstrapper, updating cluster restart to respect saved config for tests 2017-10-11 11:09:39 -04:00
Wyatt
a1ec459b30 Peers saved in bootstrapper upon peer rm 2017-10-07 20:27:36 +03:00
Hector Sanjuan
34fdc329fc Fix #24: Auto-join and auto-leave operations for Cluster
This is the third implementation attempt. This time, rather than
broadcasting PeerAdd/Join requests to the whole cluster, we use the
consensus log to broadcast new peers joining.

This makes it easier to recover from errors and to know exactly who
is a member of a cluster and who is not. The consensus is, after all,
meant to agree on things, and the list of cluster peers is something
everyone has to agree on.

Raft itself uses a special log operation to maintain the peer set.

The tests are almost unchanged from the previous attempts, so behaviour
should be the same, except that it does not seem possible to bootstrap a
bunch of nodes at the same time using different bootstrap nodes; it works
when they all use the same one. I'm not sure this worked before either,
but the code is simpler than recursively contacting peers and scales
better for larger clusters.

Nodes have to be careful about joining clusters while keeping the state
from a different cluster (disjoint logs). This may cause problems with
Raft.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-07 18:46:09 +01:00
Hector Sanjuan
6c18c02106 Issue #10: peers/add and peers/rm feature + tests
This commit adds PeerAdd() and PeerRemove() endpoints, CLI support, and
tests. Peer management is a delicate issue because of how the consensus
works underneath and the number of places that need to track such peers.

When adding a peer the procedure is as follows (see the sketch after
this list):

* Try to open a connection to the new peer and abort if not reachable
* Broadcast a PeerManagerAddPeer operation which tells all cluster members
to add the new Peer. The Raft leader will add it to Raft's peerset and
the multiaddress will be saved in the ClusterPeers configuration key.
* If the above fails because some cluster node is not responding,
broadcast a PeerRemove() and try to undo any damage.
* If the broadcast succeeds, send our ClusterPeers to the new Peer along with
the local multiaddress we are using in the connection opened in the
first step (that is the multiaddress through which the other peer can reach us)
* The new peer updates its configuration with the new list and joins
the consensus
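
A minimal Go sketch of this add procedure, assuming hypothetical helpers (connectTo, broadcastAddPeer, broadcastRmPeer, sendClusterPeers) in place of the real RPC layer:

```go
// Illustrative sketch of the peer-add flow described above. All types and
// helpers here are hypothetical stand-ins, not real ipfs-cluster functions.
package main

import (
	"errors"
	"fmt"
)

// Conn stands in for an open connection to the new peer.
type Conn struct{ localAddr string }

// Cluster holds the known peer multiaddresses (ClusterPeers).
type Cluster struct {
	peers []string
}

// Placeholders for the RPC layer; they only illustrate the flow.
func (c *Cluster) connectTo(addr string) (*Conn, error) {
	if addr == "" {
		return nil, errors.New("empty multiaddress")
	}
	return &Conn{localAddr: "/ip4/192.0.2.1/tcp/9096"}, nil
}
func (c *Cluster) broadcastAddPeer(addr string) error                 { return nil }
func (c *Cluster) broadcastRmPeer(addr string)                        {}
func (c *Cluster) sendClusterPeers(conn *Conn, peers []string) error  { return nil }

// PeerAdd sketches the procedure from the commit message.
func (c *Cluster) PeerAdd(addr string) error {
	// 1. Try to open a connection to the new peer; abort if unreachable.
	conn, err := c.connectTo(addr)
	if err != nil {
		return fmt.Errorf("new peer not reachable: %w", err)
	}

	// 2. Broadcast an "add peer" operation so every member updates its
	//    peerset; the Raft leader adds it to Raft's peer set.
	if err := c.broadcastAddPeer(addr); err != nil {
		// 3. A member did not respond: broadcast a removal to undo damage.
		c.broadcastRmPeer(addr)
		return err
	}

	// 4. Send our ClusterPeers plus the local multiaddress used on this
	//    connection, so the new peer can reach us back and join consensus.
	c.peers = append(c.peers, addr)
	return c.sendClusterPeers(conn, append(c.peers, conn.localAddr))
}

func main() {
	c := &Cluster{peers: []string{"/ip4/10.0.0.1/tcp/9096"}}
	if err := c.PeerAdd("/ip4/10.0.0.2/tcp/9096"); err != nil {
		fmt.Println("peer add failed:", err)
	}
}
```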

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-02-02 13:51:49 +01:00