Commit Graph

35 Commits

Author SHA1 Message Date
Wyatt Daviau
77a61890ff Sharding rough draft:
sharding passes manual tests on a single-node cluster,
adding the shards of a directory and pinning the
clusterDAG to the cluster/ipfs state

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
fa74bc230d ipfs block put call now format aware
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
11e8e9d62c Addressing second round of comments
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
9696336491 addressing first round of feedback
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
d3bca1f022 context cancellation on error
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
40f8eeedb5 cluster-ctl add to ipfs
RPC call to put a block in ipfs
IPFSConnector method to implement the RPC call
cluster restapi reads from channel and puts
blocks into IPFS via RPC in ctl add handler

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
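
A self-contained sketch of the flow this commit describes, where the add handler drains a channel of blocks and forwards each one over RPC so the IPFSConnector can put it into ipfs. The Block type and BlockPutter interface below are illustrative stand-ins, not the actual ipfs-cluster RPC API.

```go
package main

import (
	"context"
	"fmt"
)

// Block is an illustrative stand-in for a serialized IPLD block.
type Block struct {
	Cid  string
	Data []byte
}

// BlockPutter is an illustrative stand-in for the RPC method that asks the
// IPFSConnector to put a block into the local ipfs daemon.
type BlockPutter interface {
	IPFSBlockPut(ctx context.Context, b Block) error
}

// drainBlocks mimics the add handler: it reads blocks from the channel
// produced by the importer and forwards each one over RPC.
func drainBlocks(ctx context.Context, blocks <-chan Block, rpc BlockPutter) error {
	for b := range blocks {
		if err := rpc.IPFSBlockPut(ctx, b); err != nil {
			return fmt.Errorf("block put failed for %s: %v", b.Cid, err)
		}
	}
	return nil
}

// printPutter is a dummy implementation used to exercise the pipeline.
type printPutter struct{}

func (printPutter) IPFSBlockPut(_ context.Context, b Block) error {
	fmt.Printf("putting block %s (%d bytes)\n", b.Cid, len(b.Data))
	return nil
}

func main() {
	ch := make(chan Block, 2)
	ch <- Block{Cid: "block-1", Data: []byte("hello")}
	ch <- Block{Cid: "block-2", Data: []byte("world")}
	close(ch)
	_ = drainBlocks(context.Background(), ch, printPutter{})
}
```
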
Wyatt Daviau
c12685a518 Bring importer interface to cluster
dependence on dex replaced with ipld-importer subpackage
slight improvement to importer tests and formatting
gx deps update: new raft, ipld format and ipfs added

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
029ee3454e addressing feedback
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
65eb7b3775 printing importer works
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
20b81f9f3d calling into dex importer
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
5725e34e3d polishing up print on restapi
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
Wyatt Daviau
9025baa12a streaming bugs fixed
dummy echo server implemented on cluster service api
manual tests with files work
Wireshark examination shows Transfer-Encoding: chunked

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-08-07 20:11:23 +02:00
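
For reference, a dummy echo server like the one described above needs nothing beyond net/http: streaming the request body straight back and flushing as you go keeps Go from setting a Content-Length, so the response goes out with Transfer-Encoding: chunked, matching the Wireshark observation. This is a generic sketch, not the actual cluster service API code, and the listen address is arbitrary.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// Echo handler: copy the request body back to the client as it arrives.
	// Flushing after each write prevents the response from being buffered
	// whole, so net/http emits it with Transfer-Encoding: chunked.
	http.HandleFunc("/echo", func(w http.ResponseWriter, r *http.Request) {
		flusher, _ := w.(http.Flusher)
		buf := make([]byte, 32*1024)
		for {
			n, err := r.Body.Read(buf)
			if n > 0 {
				if _, werr := w.Write(buf[:n]); werr != nil {
					return
				}
				if flusher != nil {
					flusher.Flush()
				}
			}
			if err == io.EOF {
				return
			}
			if err != nil {
				log.Printf("echo read error: %v", err)
				return
			}
		}
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:9999", nil))
}
```
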
Hector Sanjuan
33d9cdd3c4 Feat: emancipate Consensus from the Cluster component
This commit promotes the Consensus component (and Raft) to a fully
independent component like the others, passed to NewCluster during
initialization. Cluster (main component) no longer creates the consensus
layer internally. This has triggered a number of breaking changes
that I will explain below.

Motivation: Future work will require the possibility of running Cluster
with a consensus layer that is not Raft. The "consensus" layer is in charge
of maintaining two things:
  * The current cluster peerset, as required by the implementation
  * The current cluster pinset (shared state)

While the pinset maintenance has always been in the consensus layer, the
peerset maintenance was handled by the main component (starting with the "peers"
key in the configuration) AND by the Raft component (internally),
and this generated lots of confusion: if the user edited the peers in the
configuration they would be greeted with an error.

The bootstrap process (adding a peer to an existing cluster) and its configuration
key also complicated many things, since the main component handled it, but only
when the consensus was initialized and only in single-peer mode.

In all this we also mixed the peerstore (the list of peer addresses in the libp2p
host) with the peerset, when they need not be linked.

By initializing the consensus layer before calling NewCluster, all the
difficulties in maintaining the current implementation in the same way
have come to light. Thus, the following changes have been introduced:

* Remove "peers" and "bootstrap" keys from the configuration: we no longer
edit or save the configuration files. This was a very bad practice, requiring
write permissions by the process to the file containing the private key and
additionally made things like Puppet deployments of cluster difficult as
configuration would mutate from its initial version. Needless to say all the
maintenance associated to making sure peers and bootstrap had correct values
when peers are bootstrapped or removed. A loud and detailed error message has
been added when staring cluster with an old config, along with instructions on
how to move forward.

* Introduce a PeerstoreFile ("peerstore") which stores peer addresses: in
ipfs, the peerstore is not persisted because it can be re-built from the
network bootstrappers and the DHT. Cluster should probably also allow
discovery of peer addresses (when not bootstrapping, as in that case
we have them), but in the meantime, we will read and persist the peerstore
addresses for cluster peers in this file, separate from the configuration.
Note that DNS multiaddresses are now fully supported and no IPs are saved
when we have DNS multiaddresses for a peer.

* The former "peer_manager" code is now a pstoremgr module, providing utilities
to parse, add, list and generally maintain the libp2p host peerstore, including
operations on the PeerstoreFile. This "pstoremgr" can now also be extended to
perform address autodiscovery and other things indepedently from Cluster.

* Create and initialize Raft outside of the main Cluster component: since we
can now launch Raft independently from Cluster, we have more degrees of
freedom. A new "staging" option when creating the object allows a raft peer to
be launched in Staging mode, waiting to be added to a running consensus and
thus not electing itself as leader as it did before. This additionally allows
us to track when the peer has become a Voter, which only happens once it has
caught up with the state, something that was wonky previously. (A simplified
sketch of this construction order follows this commit entry.)

* The raft configuration now includes an InitPeerset key, which allows providing
a peerset for new peers and which is ignored when staging==true. The
whole Raft initialization code is way cleaner and stronger now.

* Cluster peer bootstrapping is now an ipfs-cluster-service feature. The
--bootstrap flag works as before (additionally allowing a comma-separated list
of entries). What bootstrap does is initialize Raft with staging == true
and then call Join in the main cluster component. Only when the Raft peer
transitions to Voter does consensus become ready and cluster become Ready.
This is cleaner, works better and is less complex than before (which supported
both flags and config values). We also automatically back up and clean the
state whenever we are bootstrapping.

* ipfs-cluster-service no longer runs the daemon. Starting cluster now
requires "ipfs-cluster-service daemon". The daemon-specific flags (bootstrap,
alloc) are now flags of the daemon subcommand. Here we mimic ipfs ("ipfs"
does not start the daemon but prints help) and pave the way for merging both
service and ctl in the future.

While this brings some breaking changes, it significantly reduces the
complexity of the configuration, the code and, most importantly, the
documentation. It should now be easier to explain to the user the right way
to launch a cluster peer, and harder to make mistakes.

As a side effect, the PR also:

* Fixes #381 - peers with dynamic addresses
* Fixes #371 - peers should be Raft configuration option
* Fixes #378 - waitForUpdates may return before state fully synced
* Fixes #235 - config option shadowing (no cfg saves, no need to shadow)

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-05-07 07:39:41 +02:00
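
A minimal, self-contained sketch of the structural change described in this commit: the consensus component is constructed first (optionally in staging mode) and then handed to the cluster constructor, instead of being created internally. All names here are illustrative and do not match the real ipfs-cluster API.

```go
package main

import "fmt"

// Consensus is an illustrative stand-in for the component that keeps
// the shared pinset and the cluster peerset.
type Consensus interface {
	Peers() []string
	Ready() <-chan struct{}
}

type raftConsensus struct {
	peers   []string
	staging bool
	ready   chan struct{}
}

// newRaftConsensus is built *before* the cluster: in staging mode the peer
// waits to be added to a running consensus instead of electing itself.
func newRaftConsensus(initPeerset []string, staging bool) *raftConsensus {
	c := &raftConsensus{peers: initPeerset, staging: staging, ready: make(chan struct{})}
	if !staging {
		// Known peerset and not staging: usable right away.
		close(c.ready)
	}
	return c
}

func (c *raftConsensus) Peers() []string        { return c.peers }
func (c *raftConsensus) Ready() <-chan struct{} { return c.ready }

// Cluster no longer builds its consensus internally; it receives it.
type Cluster struct{ consensus Consensus }

func NewCluster(c Consensus) *Cluster { return &Cluster{consensus: c} }

func main() {
	cons := newRaftConsensus([]string{"QmPeerA"}, false)
	cluster := NewCluster(cons)
	<-cons.Ready() // cluster only becomes Ready once consensus is ready
	fmt.Println("cluster ready with peers:", cluster.consensus.Peers())
}
```
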
Hector Sanjuan
62bdbaa9f3 Use iff iff it is necessary
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-21 00:34:57 +01:00
Hector Sanjuan
6777122abf rest/libp2p-http: address @zenground0 comments
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-20 19:51:57 +01:00
Hector Sanjuan
b989e1726d api/rest: refactor run() (codeclimate)
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-16 13:53:07 +01:00
Hector Sanjuan
f74c7b2c6e Feat #305: Golinting
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-16 13:08:11 +01:00
Hector Sanjuan
4bed9d0076 api/rest/client: add private networks support.
Also, re-use some convenient functions provided by pnet.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-15 17:14:22 +01:00
Hector Sanjuan
fb4bda8880 restapi: Expose the http endpoints on libp2p
This commit allows restapi to serve/tunnel HTTP on a libp2p stream.

NewWitHost(...) allows providing a libp2p host during initialization,
which is then used to obtain a listener with go-libp2p-gostream.

Alternatively, if the configuration provides an ID, PrivateKey and Libp2pListenAddr,
a host is created directly by us and used to get the listener.

The protocol tag used is provided by the p2phttp library which will
be used by the client.

All tests now run against the libp2p node too.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-15 17:14:22 +01:00
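
A rough sketch of the serving side of this change: go-libp2p-gostream turns a libp2p host plus a protocol tag into a net.Listener, which net/http can serve on directly. Import paths assume a recent go-libp2p layout (they differed at the time of this commit), and the protocol ID shown is a placeholder; the matching client registers a transport from the p2phttp (go-libp2p-http) library, as the commit notes.

```go
package libp2phttpexample

import (
	"net/http"

	gostream "github.com/libp2p/go-libp2p-gostream"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/protocol"
)

// ServeHTTPOverLibp2p exposes an http.Handler over libp2p streams instead of
// a TCP socket: gostream.Listen wraps the host's streams for the given
// protocol tag in a net.Listener that the standard HTTP server can use.
func ServeHTTPOverLibp2p(h host.Host, proto protocol.ID, handler http.Handler) error {
	listener, err := gostream.Listen(h, proto)
	if err != nil {
		return err
	}
	defer listener.Close()
	return http.Serve(listener, handler)
}

// Example usage (assuming h is a libp2p host built elsewhere):
//
//	mux := http.NewServeMux()
//	mux.HandleFunc("/id", func(w http.ResponseWriter, r *http.Request) {
//		fmt.Fprintln(w, h.ID())
//	})
//	err := ServeHTTPOverLibp2p(h, "/my-http/1.0.0", mux)
```
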
Hector Sanjuan
3b715041ac RestAPI: support libp2p host parameters in configuration
This adds support for parameters to create a libp2p host
in the REST API configuration: ID, PrivateKey and ListenMultiaddr.

These parameters default to nil/empty and are omitted in the default
configuration. They are only supposed to be used when the user wants
the REST API to use a different libp2p host than a provided one (upcoming
changes).

The pnet protector is not supported yet in this case. The underlying basic auth
should cover that front. Will implement if someone has a use case.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-03-15 00:04:54 +01:00
Wyatt Daviau
1b7b9185e2 support for recursive pins
extension to api types
modifications to ipfsconnector

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-03-12 11:06:42 -04:00
Hector Sanjuan
a180f1a5c5
Merge pull request #291 from ipfs/feat/connectivity-graph
Feat/connectivity graph
2018-01-26 12:42:18 +01:00
Hector Sanjuan
f4c57d8581 Tests: make api/rest/client bind on random port
For jenkins

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-01-25 12:54:55 +01:00
ZenGround0
4b26ccd144
Merge branch 'master' into feat/connectivity-graph
2018-01-23 08:34:43 -05:00
Wyatt Daviau
d2ef32f48f Testing and polishing connection graph
Added go tests
Refactored cluster connect graph to new file
Refactored dot file printing to new repo
Fixed code climate issues
Added sharness test

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-01-22 10:03:37 -05:00
Wyatt
e712c87570 First draft of ConnectGraph collection:
added ConnectGraph type and serialization
added cli command hitting cluster api
added cluster api client method + endpoint calling into rpc
added rpc calling into main cluster component
added clustercomponent's function to collect ConnectGraph
added functionality in ipfsconn to retrieve ipfs swarm peers
added dot file printing given ConnectGraphSerial

License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
2018-01-22 09:07:12 -05:00
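
To make the last two steps concrete (collecting a connectivity graph and printing it as a dot file), here is a self-contained sketch; the connectGraph fields are an illustrative shape and do not match the real ConnectGraph/ConnectGraphSerial types.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// connectGraph is an illustrative shape for the collected connectivity data:
// which cluster peers see each other, which ipfs daemon each peer uses, and
// which swarm connections those daemons report.
type connectGraph struct {
	ClusterLinks  map[string][]string // cluster peer -> cluster peers it is connected to
	ClusterToIPFS map[string]string   // cluster peer -> its ipfs daemon
	IPFSLinks     map[string][]string // ipfs daemon -> ipfs swarm peers
}

// writeDot prints the graph in Graphviz DOT format, one edge per connection.
func writeDot(g connectGraph, out io.Writer) {
	fmt.Fprintln(out, "digraph cluster {")
	for from, tos := range g.ClusterLinks {
		for _, to := range tos {
			fmt.Fprintf(out, "  %q -> %q;\n", from, to)
		}
	}
	for peer, ipfs := range g.ClusterToIPFS {
		fmt.Fprintf(out, "  %q -> %q [style=dashed];\n", peer, ipfs)
	}
	for from, tos := range g.IPFSLinks {
		for _, to := range tos {
			fmt.Fprintf(out, "  %q -> %q [color=gray];\n", from, to)
		}
	}
	fmt.Fprintln(out, "}")
}

func main() {
	g := connectGraph{
		ClusterLinks:  map[string][]string{"peerA": {"peerB"}, "peerB": {"peerA"}},
		ClusterToIPFS: map[string]string{"peerA": "ipfsA", "peerB": "ipfsB"},
		IPFSLinks:     map[string][]string{"ipfsA": {"ipfsB"}},
	}
	writeDot(g, os.Stdout)
}
```
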
Hector Sanjuan
5f45ba8bce Fix #277: Add min and max replication factor to rest API
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-01-16 16:36:06 +01:00
Hector Sanjuan
89b8fe106e Fix #275: Wait for Raft updates before snapshotting on shutdown
Raft will fail to take a snapshot when the applied index is
different from the last index. Therefore, we wait for
all updates to be applied before snapshotting.

If it still doesn't work, we retry a few times.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-01-09 15:02:47 +01:00
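
The wait-then-retry logic this commit describes can be sketched against the hashicorp/raft API roughly as below; this is an illustration of the idea, not the actual ipfs-cluster shutdown code, and the timeout/retry values are arbitrary.

```go
package consensus

import (
	"fmt"
	"time"

	"github.com/hashicorp/raft"
)

// snapshotOnShutdown waits until Raft has applied every entry it knows about
// (the applied index catches up with the last index) before asking for a
// snapshot, and retries the snapshot a few times if it still fails.
func snapshotOnShutdown(r *raft.Raft, waitTimeout time.Duration, retries int) error {
	deadline := time.Now().Add(waitTimeout)
	for r.AppliedIndex() < r.LastIndex() && time.Now().Before(deadline) {
		time.Sleep(100 * time.Millisecond)
	}

	var err error
	for i := 0; i < retries; i++ {
		if err = r.Snapshot().Error(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("could not snapshot on shutdown: %v", err)
}
```
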
Hector Sanjuan
84816729ff
Merge pull request #252 from te0d/feat/add-pin-name
Feat/add pin name
2017-12-04 14:25:39 +01:00
Hector Sanjuan
d6a7caf7a4 Issue #259: Address CR comments
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-12-04 13:59:48 +01:00
Hector Sanjuan
4922c95589 Support --local parameter for Status[Local] and Sync[Local] operations
This allows calling the Rest API's status and sync endpoints with a
"?local=true" parameter, which triggers the operations on the
local peer only. Cluster *Local and RPC *Local methods have been added accordingly,
although they are aliases for the PinTracker methods (otherwise they
would not be exposed in external APIs). ipfs-cluster-ctl has been updated to
support the new flag.

The rationale behind this feature is that sometimes a single cluster peer
(or the ipfs daemon in it) misbehaves. The user then wants to Sync,
Recover, or see Status for that single peer. This is especially relevant
when working with big pinsets in larger clusters, as a Status() call is
considerably more expensive when broadcast everywhere.

Note that the Rest API keeps returning GlobalPinInfo objects even on local=true
calls. This ensures that the user always gets the same datatype from an endpoint.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-12-01 12:56:26 +01:00
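
As a usage illustration, hitting the status endpoint with the new parameter could look like the sketch below. The "?local=true" query parameter comes from the commit itself; the host, port and "/pins" path are assumptions (taken from the default REST API listen address shown elsewhere in this log), so adjust them to your deployment.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Ask only the local peer for pin status by adding ?local=true.
	resp, err := http.Get("http://127.0.0.1:9094/pins?local=true")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The response is still a list of GlobalPinInfo objects, as noted above.
	fmt.Println(string(body))
}
```
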
Tom O'Donnell (te0d)
3f28da4415 Fixed Pin's Name Attribute when not Specified
When not specified, a pin was being given whichever name had been
assigned to the previous pin. Not sure exactly what was
happening, but it works after removing omitempty. The idea was to
make Name optional, but here it will always be assigned.
2017-11-30 10:14:24 -05:00
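
One plausible mechanism for the behavior described above, shown with a self-contained example: with omitempty, an unset Name is dropped from the encoded JSON, and decoding that JSON into a struct that previously held another pin leaves the stale name in place. The pin type here is a simplified stand-in for the real Pin type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pin mirrors the situation described above: with `omitempty`, an empty
// Name is dropped from the encoded JSON entirely.
type pin struct {
	Cid  string `json:"cid"`
	Name string `json:"name,omitempty"`
}

func main() {
	// Encode a pin whose Name was never set: the "name" key is omitted.
	unnamed, _ := json.Marshal(pin{Cid: "QmSecond"})
	fmt.Println(string(unnamed)) // {"cid":"QmSecond"}

	// Decode it into a variable that previously held another pin. Because
	// the key is absent, Unmarshal leaves the old Name untouched, so the
	// new pin silently inherits the previous pin's name. Without omitempty
	// the empty string is encoded and overwrites the stale value.
	p := pin{Cid: "QmFirst", Name: "first-pin"}
	_ = json.Unmarshal(unnamed, &p)
	fmt.Printf("%+v\n", p) // {Cid:QmSecond Name:first-pin}
}
```
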
Tom O'Donnell (te0d)
bd829c3f31 Added Name to Pinned Items
Added Name field to be a part of the stored state of a Pin. This
allows the name to be stored as a part of the consensus if needed.

This is the bare minimum needed to use it. Some other calls, like status,
need to be checked out.
2017-11-30 10:14:24 -05:00
Hector Sanjuan
e824aea55e RecoverAll: Implement RecoverAllLocal() which recovers all pins in a peer
This adds API and RPC calls to support RecoverAllLocal() (and exposes RecoverLocal()
on the Rest API too). cluster-ctl is updated accordingly.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2017-11-30 01:53:31 +01:00
Hector Sanjuan
8f06baa1bf Issue #162: Rework configuration format
This commit reimplements ipfs-cluster configuration under
the following premises:

  * Each component is initialized with a configuration object
  defined by its module
  * Each component decides how the JSON representation of its
  configuration looks like
  * Each component parses and validates its own configuration
  * Each component exposes its own defaults
  * Component configurations make up the sections of a
  central JSON configuration file (which replaces the current
  JSON format)
  * Component configurations implement a common interface
  (config.ComponentConfig) with a set of common operations
  * The central configuration file is managed by a
  config.ConfigManager which:
    * Registers ComponentConfigs
    * Assigns the corresponding sections from the JSON file to each
    component and delegates the parsing
    * Delegates the JSON generation for each section
    * Can be notified when the configuration is updated and must be
    saved to disk

The new service.json would then look as follows:

```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```

This new format aims to be easily extensible per component. As such,
it already surfaces quite a few new options which were hardcoded
before.

Additionally, since the Go APIs have changed, some redundant methods have been
removed and some small refactoring has been done to take advantage of the new
approach.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2017-10-18 00:00:12 +02:00
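
To make the premises above concrete, here is a minimal sketch of what a component-owned configuration section could look like; the interface below is an illustrative reduction, not the exact config.ComponentConfig API, and the "monbasic" example simply mirrors the section shown in the JSON above.

```go
package config

import (
	"encoding/json"
	"errors"
	"time"
)

// ComponentConfig is an illustrative reduction of the common interface each
// component's configuration implements: it owns its defaults, its JSON
// section, and its own validation.
type ComponentConfig interface {
	ConfigKey() string       // section name in the central service.json
	Default() error          // reset to default values
	LoadJSON([]byte) error   // parse this component's JSON section
	ToJSON() ([]byte, error) // generate this component's JSON section
	Validate() error         // check that current values make sense
}

// monbasicConfig is a sample component config for the "monbasic" section.
type monbasicConfig struct {
	CheckInterval time.Duration
}

// monbasicJSON is the on-disk representation of monbasicConfig.
type monbasicJSON struct {
	CheckInterval string `json:"check_interval"`
}

func (c *monbasicConfig) ConfigKey() string { return "monbasic" }

func (c *monbasicConfig) Default() error {
	c.CheckInterval = 15 * time.Second
	return nil
}

func (c *monbasicConfig) LoadJSON(raw []byte) error {
	var j monbasicJSON
	if err := json.Unmarshal(raw, &j); err != nil {
		return err
	}
	d, err := time.ParseDuration(j.CheckInterval)
	if err != nil {
		return err
	}
	c.CheckInterval = d
	return c.Validate()
}

func (c *monbasicConfig) ToJSON() ([]byte, error) {
	return json.Marshal(monbasicJSON{CheckInterval: c.CheckInterval.String()})
}

func (c *monbasicConfig) Validate() error {
	if c.CheckInterval <= 0 {
		return errors.New("monbasic.check_interval must be positive")
	}
	return nil
}
```
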