Added Go tests
Refactored cluster connect graph to new file
Refactored dot file printing to new repo
Fixed Code Climate issues
Added sharness test
License: MIT
Signed-off-by: Wyatt Daviau <wdaviau@cs.stanford.edu>
This PR replaces ReplicationFactor with ReplicationFactorMax
and ReplicationFactorMin.
This allows a CID to be pinned even when the desired replication
factor (max) is not reached, and prevents triggering re-pins as long
as the replication factor has not crossed the lower threshold (min).
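As a minimal sketch of the intended semantics (the function names
below are illustrative, not the actual cluster code):

```go
package repin

// needsRepin reports whether a pin must be re-allocated given its
// current number of valid allocations: only falling below the
// minimum triggers a re-pin; falling short of the maximum does not.
func needsRepin(validAllocs, min int) bool {
	return validAllocs < min
}

// pinOK reports whether a pin operation can be considered successful:
// reaching the minimum is enough, even if the desired maximum
// replication factor was not achieved.
func pinOK(allocs, min int) bool {
	return allocs >= min
}
```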
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This enables support for testing in Jenkins.
Several minor adjustments have been made to improve the probability
that the tests pass, but some random problems still appear, with
libp2p connections not becoming available or ceasing to work
(similar to Travis, but perhaps more often).
The macOS and Windows builds are broken in worse ways (those issues
will need to be addressed in the future).
Thanks to @zenground0 and @victorbjelkholm for support!
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
I've modified the peer identifier to be 'peername'. I've also
modified TestLoadJSON to check that it is correctly read from the
config and set to a default if empty.
Also added 'peername' fields to the configurations of various tests.
This also generates a default configuration section when it
doesn't exist, so it's backwards compatible.
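For illustration, the cluster section of the configuration gains a
field along these lines (the value shown here is made up):

```json
{
  "cluster": {
    "peername": "mypeer"
  }
}
```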
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
The following commit reimplements ipfs-cluster configuration under
the following premises:
* Each component is initialized with a configuration object
defined by its module
* Each component decides what the JSON representation of its
configuration looks like
* Each component parses and validates its own configuration
* Each component exposes its own defaults
* Component configurations make up the sections of a
central JSON configuration file (which replaces the current
JSON format)
* Component configurations implement a common interface
(config.ComponentConfig) with a set of common operations (see the
sketch after this list)
* The central configuration file is managed by a
config.ConfigManager which:
* Registers ComponentConfigs
* Assigns the corresponding sections of the JSON file to each
component and delegates the parsing
* Delegates the JSON generation for each section
* Can be notified when the configuration is updated and must be
saved to disk
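As a rough sketch of what such an interface could look like (the
method names are illustrative, not necessarily the real ones):

```go
package config

// ComponentConfig is a sketch of the common interface implemented by
// every component configuration; the method names here are
// illustrative, not necessarily the actual ones.
type ComponentConfig interface {
	// ConfigKey returns the name of the JSON section this
	// component owns in the central configuration file.
	ConfigKey() string
	// Default sets the configuration to its default values.
	Default() error
	// LoadJSON parses and validates the component's JSON section.
	LoadJSON(raw []byte) error
	// ToJSON generates the JSON representation of the section.
	ToJSON() ([]byte, error)
}
```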
The new service.json would then look as follows:
```json
{
  "cluster": {
    "id": "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2",
    "private_key": "<...>",
    "secret": "00224102ae6aaf94f2606abf69a0e278251ecc1d64815b617ff19d6d2841f786",
    "peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "state_sync_interval": "1m0s",
    "ipfs_sync_interval": "2m10s",
    "replication_factor": -1,
    "monitor_ping_interval": "15s"
  },
  "consensus": {
    "raft": {
      "heartbeat_timeout": "1s",
      "election_timeout": "1s",
      "commit_timeout": "50ms",
      "max_append_entries": 64,
      "trailing_logs": 10240,
      "snapshot_interval": "2m0s",
      "snapshot_threshold": 8192,
      "leader_lease_timeout": "500ms"
    }
  },
  "api": {
    "restapi": {
      "listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
      "read_timeout": "30s",
      "read_header_timeout": "5s",
      "write_timeout": "1m0s",
      "idle_timeout": "2m0s"
    }
  },
  "ipfs_connector": {
    "ipfshttp": {
      "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
      "node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
      "connect_swarms_delay": "7s",
      "proxy_read_timeout": "10m0s",
      "proxy_read_header_timeout": "5s",
      "proxy_write_timeout": "10m0s",
      "proxy_idle_timeout": "1m0s"
    }
  },
  "monitor": {
    "monbasic": {
      "check_interval": "15s"
    }
  },
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```
This new format aims to be easily extensible per component. As such,
it already surfaces quite a few new options which were hardcoded
before.
Additionally, since the Go APIs have changed, some redundant methods
have been removed and some small refactoring has happened to take
advantage of the new approach.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
We were setting a nil Cid on the PinInfo object for errors.
Cleaned up the logic a bit and added comments.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
The disk informer runs "ipfs repo stat" to fetch the RepoSize value
and exposes it as a metric.
The numpinalloc allocator is now a generalized ascendalloc, which
sorts metrics in ascending order and returns the ones with the
lowest values.
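A minimal sketch of the ascending-order strategy, assuming a simple
candidate type (names and types are illustrative):

```go
package ascendalloc

import "sort"

// candidate pairs a peer with its metric value.
type candidate struct {
	peer  string
	value uint64 // e.g. RepoSize or number of pins
}

// allocate sorts candidates by metric value in ascending order and
// returns the peers with the lowest values first. This is a sketch of
// the ascendalloc strategy, not the actual implementation.
func allocate(candidates []candidate) []string {
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].value < candidates[j].value
	})
	peers := make([]string, 0, len(candidates))
	for _, c := range candidates {
		peers = append(peers, c.peer)
	}
	return peers
}
```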
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
It turns out they only worked in round seconds. Tests send a metric
every second, so sometimes they were expired right away.
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This is a better approach than broadcasting everything all the time
(see 6ee0f3bead). A couple of delays
have been adjusted in order to make tests less likely to fail randomly.
(Issue #65).
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
There was a bug in the test for re-allocation which hid another
bug in the re-allocation process: a pin that needed to be
re-allocated would be assigned only to the new destinations, and no
longer to the valid allocations it previously had.
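A sketch of the corrected behavior, with illustrative names and
assuming the candidates are disjoint from the existing valid
allocations:

```go
package main

import "fmt"

// reallocate keeps the still-valid allocations and only adds new
// destinations until the wanted replication factor is reached.
// This is a sketch of the fixed behavior, not the actual code.
func reallocate(valid, candidates []string, want int) []string {
	allocs := append([]string{}, valid...) // keep existing valid peers
	for _, c := range candidates {
		if len(allocs) >= want {
			break
		}
		allocs = append(allocs, c)
	}
	return allocs
}

func main() {
	// peerA stays allocated; only one new destination is added.
	fmt.Println(reallocate([]string{"peerA"}, []string{"peerB", "peerC"}, 2))
	// Output: [peerA peerB]
}
```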
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
This adds a replication_factor query argument to the API
endpoint, which allows setting a replication factor per pin.
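For illustration, assuming the REST API pins a CID via
POST /pins/<cid> (the exact endpoint path is an assumption here), the
option could be passed like this:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumption: pinning happens via POST /pins/<cid> on the REST
	// API, with replication_factor passed as a query argument.
	cid := "QmTVW8NoRxC5wBhV7WtAYtRn7itipEESfozWN5KmXUQnk2"
	url := fmt.Sprintf("http://127.0.0.1:9094/pins/%s?replication_factor=2", cid)
	resp, err := http.Post(url, "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```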
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
An initial, simple approach to this. The PeerMonitor will
check its metrics, compare them to the current set of peers and put
an alert in the alerts channel if the metrics for a peer have expired.
Cluster reads this channel looking for "ping" alerts. The leader
is in charge of triggering re-pins for all the CIDs allocated to
a given peer.
Also, metrics are now broadcast to the whole cluster instead of
pushed only to the leader. Since this happens every few seconds, it
should scale acceptably. The main problem was that, if the leader is
the node going down, the new leader will not know about it, as it has
no metrics for that peer and therefore never triggers an alert. Acting
on this would require the component to know whether it is the leader,
or cluster would have to handle alerts in complicated ways when
leadership changes. Detecting leadership changes, or letting a
component know who the leader is, adds another dependency on the
consensus algorithm that should be avoided. Therefore we broadcast,
for the moment.
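A minimal sketch of the expiry-check loop described above (types and
names are illustrative, not the actual PeerMonitor):

```go
package main

import (
	"fmt"
	"time"
)

// Alert is a simplified peer alert; field names are illustrative.
type Alert struct {
	Peer   string
	Metric string // e.g. "ping"
}

// watchMetrics periodically checks when a metric was last seen for
// each peer and emits an alert once it has expired. This only sketches
// the approach; the real PeerMonitor works on full metric objects.
func watchMetrics(lastSeen map[string]time.Time, ttl time.Duration, alerts chan<- Alert) {
	for range time.Tick(ttl) {
		now := time.Now()
		for peer, seen := range lastSeen {
			if now.Sub(seen) > ttl {
				alerts <- Alert{Peer: peer, Metric: "ping"}
			}
		}
	}
}

func main() {
	alerts := make(chan Alert, 1)
	lastSeen := map[string]time.Time{"peerA": time.Now().Add(-time.Minute)}
	go watchMetrics(lastSeen, time.Second, alerts)
	fmt.Println("alert:", <-alerts) // the expired peer triggers an alert
}
```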
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
New PeerManager, Allocator, Informer components have been added along
with a new "replication_factor" configuration option.
First, cluster peers collect and regularly push metrics (Informer)
to the Cluster leader. The Informer is an interface that can be
implemented in custom ways to support custom metrics.
Second, on a pin operation, using the information from the collected
metrics, an Allocator can provide a list of preferences as to where
the new pin should be assigned. The Allocator is an interface that
allows different allocation strategies to be provided.
Both Allocator and Informer are Cluster Components and have access
to the RPC API.
The allocations are kept in the shared state. Cluster peer failure
detection and re-allocation are still missing, although re-pinning
something when a node is down or its metrics are missing does
re-allocate the pin somewhere else.
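As a rough sketch of the shape of these two interfaces (signatures
simplified and illustrative, not the actual component APIs):

```go
package cluster

// Metric is a simplified metric as pushed by an Informer.
type Metric struct {
	Name  string
	Peer  string
	Value string
}

// Informer collects a metric about the local peer (for example, free
// disk space or the number of pins) for the leader to use when
// allocating pins.
type Informer interface {
	Name() string
	GetMetric() Metric
}

// Allocator ranks candidate peers for a new pin based on their last
// known metrics, implementing one particular allocation strategy.
type Allocator interface {
	Allocate(metrics map[string]Metric) ([]string, error)
}
```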
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>