ipfs-cluster/package.json

{
  "author": "hsanjuan",
"bugs": {
"url": "https://github.com/ipfs/ipfs-cluster"
},
"gx": {
"dvcsimport": "github.com/ipfs/ipfs-cluster"
  },
  "gxDependencies": [
    {
      "author": "hsanjuan",
      "hash": "QmZ88KbrvZMJpXaNwAGffswcYKz8EbeafzAFGMCA6MEZKt",
      "name": "go-libp2p-consensus",
      "version": "0.0.3"
    },
    {
      "author": "whyrusleeping",
      "hash": "QmWRUZmLb9qEpwuHTtrzbdE5LQxm64qftncw5o8tBVPobL",
"name": "go-libp2p",
"version": "6.0.38"
    },
    {
      "author": "hsanjuan",
      "hash": "QmX73JLtJ92tDcZajRrYtQDVSLQ5LPnADHwwQLXkTzNRhE",
"name": "go-libp2p-raft",
"version": "1.2.20"
},
{
"author": "urfave",
"hash": "Qmc1AtgBdoUHP8oYSqU81NRYdzohmF45t5XNwVMvhCxsBA",
"name": "cli",
"version": "1.19.1"
},
{
"author": "hashicorp",
"hash": "QmZa48BnsaEMVNf1hT2HYP2ak97fqyTnadXu6xSu2Y8xui",
"name": "raft-boltdb",
"version": "2017.10.24"
},
{
"author": "gorilla",
"hash": "QmXEPZmhs4r1rab3e2LqnrLvTFKCMEwC5SyEa3xTFJDqtU",
"name": "mux",
"version": "1.6.2"
},
{
"author": "hsanjuan",
"hash": "QmcJCApoEsCJJap2iS1os9GFX5EuRrfuPeZdjCopz2SyPm",
"name": "go-libp2p-gorpc",
"version": "1.1.4"
},
{
"author": "libp2p",
"hash": "QmTwDsJUPioMKoiuXkAmiPxL1i4tjuG5vkxJgNpiHpXb3Y",
"name": "go-libp2p-pnet",
"version": "3.0.5"
},
{
"author": "dignifiedquire",
"hash": "QmdDpQpe8RHu9qBiFWPaBvSAUr2kRLWipEjzDqAMfWqwFQ",
"name": "go-fs-lock",
"version": "0.1.11"
},
{
"author": "hsanjuan",
"hash": "QmUgYx5qgavtQFAUtgcfFJZdXZfYY7hAN3EUF4yrPhjJnb",
"name": "go-libp2p-http",
"version": "1.1.16"
},
{
"author": "ipfs",
"hash": "QmSM3chHm3ZggBZsY2BuJbvpD9VF2mzdgR5JBQ78KnsbDw",
"name": "go-ipfs-api",
"version": "1.4.8"
},
{
"author": "whyrusleeping",
"hash": "QmTbxNB1NwDesLmKTscr4udL2tVP7MaxvXnD1D9yX7g3PN",
"name": "go-cid",
"version": "0.9.3"
},
{
"author": "hsanjuan",
"hash": "QmYmZ81dU5nnmBFy5MmktXLZpt8QCWhRJd6M1uxVF6vke8",
"name": "go-ipfs-chunker",
"version": "0.1.6"
},
{
"author": "hector",
"hash": "QmdiZuFuiFD1Gbuu8PdqmsfrCR3z4QKSR2bN1NAvnJgTY7",
"name": "go-ipfs-posinfo",
"version": "0.1.5"
},
{
"author": "why",
"hash": "QmSbCXEwpsog4vBf53YntmGk9uHsgZNuU5oBKv3o2kkTSe",
"name": "go-unixfs",
"version": "1.3.11"
},
{
"author": "why",
"hash": "QmP9i4G9nRcfKBnpk1A7CwU7ppLkSn2j6vJeWn2AJ8rfcN",
"name": "go-merkledag",
"version": "1.1.36"
},
{
"hash": "QmUTc27ifFbaTWZBCKFxuMfWfB1jy88MtYtB37vZ9saaXo",
"name": "go-libp2p-kad-dht",
"version": "4.4.30"
},
{
"author": "hsanjuan",
"hash": "QmZuXacgXW4YkAveAQWvFUyLW9vzPtWKADjeoqtk22GcEK",
"name": "go-mfs",
"version": "0.1.48"
},
{
"author": "blang",
"hash": "QmYRGECuvQnRX73fcvPnGbYijBcGN2HbKZQ7jh26qmLiHG",
"name": "semver",
"version": "3.5.1"
},
{
"author": "magik6k",
"hash": "QmQmhotPUzVrMEWNK3x1R5jQ5ZHWyL7tVUrmRPjrBrvyCb",
"name": "go-ipfs-files",
"version": "2.0.6"
},
{
"author": "lanzafame",
"hash": "QmYgGtLm9WJRgh6iuaZap8qVC1gqixFbZCNfhjLNBhWMCm",
"name": "envconfig",
"version": "1.3.1"
},
{
"author": "whyrusleeping",
"hash": "Qmdmn9FrkJbz6SdmxceJs4nXFRzbM9iAefyQwveGSBejXT",
"name": "go-libp2p-pubsub",
"version": "0.11.14"
    },
    {
      "author": "hsanjuan",
      "hash": "QmNNk4iczWp8Q4R1mXQ2mrrjQvWisYqMqbW1an8qGbJZsM",
      "name": "cors",
      "version": "1.6.0"
    },
    {
      "author": "ZenGround0",
      "hash": "QmPuuqyMyoadGDkefg7L11kAwmvQykrHiRkuLjQRpa1bqF",
      "name": "go-dot",
      "version": "0.0.1"
    },
    {
      "author": "hsanjuan",
      "hash": "QmNVpHFt7QmabuVQyguf8AbkLDZoFh7ifBYztqijYT1Sd2",
      "name": "go.opencensus.io",
      "version": "0.19.0"
    },
    {
      "author": "lanzafame",
      "hash": "QmYe4hq5UmoR4LNYHtxNuhHtTYzjgU1FdFoKL8fWj1uMf4",
      "name": "go-libp2p-ocgorpc",
      "version": "0.1.8"
    },
    {
      "author": "google",
      "hash": "QmSSeQqc5QeuefkaM6JFV5tSF9knLUkXKVhW1eYRiqe72W",
      "name": "uuid",
      "version": "0.1.0"
    },
    {
      "author": "hsanjuan",
      "hash": "QmWqSMkd9LSYahr9NQvrxoZ4sCzkGctQstqfAYKepzukS6",
      "name": "go-ds-crdt",
      "version": "0.0.1"
    }
  ],
  "gxVersion": "0.11.0",
"language": "go",
"license": "MIT",
"name": "ipfs-cluster",
"releaseCmd": "git commit -S -a -m \"gx publish $VERSION\"",
"version": "0.10.0"
}