This removes the map pintracker and sets the stateless tracker as the default (and only) pintracker component.
Because the stateless tracker follows the cluster state, with only ongoing operations kept in memory, and relies on the additional information provided by ipfs-pin-ls, syncing operations are not necessary. Therefore the Sync/SyncAll operations are removed cluster-wide.
This introduces the possibility of running Cluster with multiple informer components. The first one in the list is the one used for "allocations". This is just groundwork for working with several informers in the future.
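As a sketch (using the familiar `disk` and `numpin` informer sections; the field values are illustrative, not defaults introduced by this change), a configuration carrying two informers might look like:
```
{
  "informer": {
    "disk": {
      "metric_ttl": "30s",
      "metric_type": "freespace"
    },
    "numpin": {
      "metric_ttl": "10s"
    }
  }
}
```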
- introduce a new flag to set log levels for individual components
- handle all log-related flags together so that we can maintain priority among the log level flags (loglevel < component-loglevel < debug)
Setting up mDNS outside the Cluster is dirtier and allows for less configuration.
This adds MDNSInterval to the cluster config options and allows disabling mDNS when the option is set to 0.
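A minimal sketch, assuming the option is serialized as `mdns_interval` in the main `cluster` section as a duration string, where "0s" disables mDNS:
```
{
  "cluster": {
    "mdns_interval": "0s"
  }
}
```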
Align with previous behaviour and make sure it is logged that
the configuration file was written.
Do not say "peerstore written with 0 entries" as that might be
taken as an error.
Fixes #865.
This makes the necessary changes so that the consensus is selected on "init" with a
flag set, by default, to "crdt". This generates only a "crdt" or a "raft"
section, not both.
If the configuration file has a "raft" section, "raft" will be used to start
the daemon. If it has a "crdt" section, "crdt" will be used. If it has none or
both sections, an error will happen.
This also affects the "state *" commands, which will now automatically select how to operate based on the existing configuration.
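For example, a `service.json` generated with the default would carry only a `crdt` section under `consensus` (a sketch with most fields omitted; the `trusted_peers` value is illustrative):
```
{
  "consensus": {
    "crdt": {
      "cluster_name": "ipfs-cluster",
      "trusted_peers": []
    }
  }
}
```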
* Daemon: support remote configuration
This:
* Adds support for fetching the configuration from a remote HTTP location:
`ipfs-cluster-service init http://localhost:8080/ipfs/Qm...` will instruct
cluster to read the configuration file from ipfs on start (potentially making
use of ipns and dnslink).
This is done by creating a `service.json` like `{ "source": <url> }` (see the example below).
The source is then read when loading that configuration every time the daemon starts.
This allows users to always work with a mutating remote configuration, potentially
adding/removing trusted peers from the list or adjusting other things.
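For instance, after the `init` command above, the written `service.json` contains just:
```
{
  "source": "http://localhost:8080/ipfs/Qm..."
}
```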
* Configuration and state helpers from ipfs-cluster-service have been extracted
into their own cmdutils package. This will help support something like an
`ipfs-cluster-follow` command in the next releases.
* Allows disabling the REST API by not defining it in the configuration (I thought
this was already the case, but apparently it only affected the ipfsproxy).
* Removes the informer/allocator configurations from the daemon (--alloc). No one used
a non-default pair. In fact, using the reposize one was potentially buggy.
Launching a raft peer when you wanted to do crdt, and vice versa, has a few implications.
The same applies when importing and exporting states.
Users starting cluster peers should be explicit about their consensus choice.
Also, if we ever want to make `crdt` the default, we can't do that before
making `raft` non-default first. I don't like to break things, but otherwise
the experience for new users wanting to try crdts might be awful.
* Init should take a list of peers
This commit adds a `--peers` option to `ipfs-cluster-service init`:
`ipfs-cluster-service init --peers <multiaddress,multiaddress>`
- Adds and writes the given peers to the peerstore file
- For the raft config section, adds the peer IDs to the `init_peerset`
- For the crdt config section, adds the peer IDs to the `trusted_peers` (see the sketch below)
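A sketch of the affected config sections after such an init (the peer IDs are placeholders, and a real configuration carries only one of the two consensus sections):
```
{
  "consensus": {
    "raft": {
      "init_peerset": ["QmPeerID1", "QmPeerID2"]
    },
    "crdt": {
      "trusted_peers": ["QmPeerID1", "QmPeerID2"]
    }
  }
}
```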
* Do not load API components removed from the config
This commit introduces a map that keeps track of whether a component's section
was missing from the JSON config file. This map can be checked while creating
the cluster, to avoid loading a component. A component is considered missing
only when its section is fully absent from the config.
Say the component in question is `ipfsproxy`, which lives under the `api`
section. The following would use defaults for `ipfsproxy` and load the IPFS proxy:
```
{
  "api": {
    "ipfsproxy": {}
  }
}
```
However, this would not load the IPFS proxy:
```
{
  "api": {}
}
```
Currently, unless doing Join() (--bootstrap), we do not connect to any peers on startup.
We did, however, load the peerstore file, and Raft would automatically connect to
older peers to figure out who the leader is, etc. The DHT bootstrap, once Raft
was working, did the rest.
For CRDTs we need to connect to peers on a normal boot as well: otherwise, unless
bootstrapping, no connections happen, even if the peerstore contains known peers.
This introduces a number of changes:
* Move peerstore file management back inside the Cluster component, which was
already in charge of saving the peerstore file.
* We keep saving all "known addresses", but we load them with a non-permanent
TTL, so that peers we have not connected to in a long time get cleaned up.
* "Bootstrap" (connect) to a small number of peers during Cluster component creation.
* Bootstrap the DHT asap after this, so that other cluster components can
initialize with a working peer discovery mechanism.
* CRDT Trust() method will now:
* Protect the trusted Peer ID in the conn manager
* Give top priority in the PeerManager to that Peer (see below)
* Mark addresses as permanent in the Peerstore
The PeerManager now attaches priorities to peers when importing them and is
able to order them according to that priority. The result is that peers with
high priority are saved first in the peerstore file. When we load the peerstore
file, the first entries in it are given the highest priority.
This means that during startup we will connect to "trusted peers" first
(because they have been tagged with priority in the previous run and saved at
the top of the list). Once connected to a small number of peers, we let the
DHT bootstrap process in the background do the rest and discover the network.
All this makes the peerstore file a "bootstrap" list for CRDTs and we will attempt
to connect to peers on that list until some of those connections succeed.
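For illustration, a peerstore file written under this scheme might look like the following, one multiaddress per line, with the trusted (high-priority) peers at the top (the addresses and peer IDs are placeholders):
```
/ip4/192.0.2.1/tcp/9096/p2p/QmTrustedPeer1
/ip4/192.0.2.2/tcp/9096/p2p/QmTrustedPeer2
/ip4/198.51.100.7/tcp/9096/p2p/QmOtherPeer
```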
* Fix error messages (they must be in the form "doing something")
* Improve/reword some error messages
* Document the identity.json existence in the cli docs
* Fix a bunch of typos
* Fix missing folder path in the --help
* Fix cluster not locking when configuration is not there but folder is
* Fix force flag not overriding the config overwrite prompt
* Fix deletion of Raft state on re-init (not necessary if identity persists)
* Fix overwriting of identity (it should not be overwritten if it already exists)
Much of this paves the way to being able to run without service.json:
* Either taking default values (and using env vars) - maybe someday
* Or getting a configuration template from somewhere (ipfs, http)
at runtime - sooner.
Resave the configuration if identity.json did not exist. This removes the
identity from service.json.
License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>