Commit Graph

33 Commits

Author SHA1 Message Date
Hector Sanjuan
0dfa9ca185
(chore) Upgrade dependencies
Upgrade dependencies and bump to go1.15.
2020-08-27 14:10:58 +02:00
Hector Sanjuan
6c60097421 ipfsproxy: support direct pin mode 2020-04-21 17:23:55 +02:00
Hector Sanjuan
f83ff9b655 staticcheck: fix all staticcheck warnings in the project 2020-04-14 20:16:10 +02:00
Hector Sanjuan
b3853caf36 Dependency upgrade: changes needed
* Libp2p protectors no longer needed, use PSK directly
* Generate the 32-byte cluster secret here (helper gone from pnet); see the sketch after this entry
* Switch to go-log/v2 in all places
* DHT bootstrapping not needed. Adjust DHT options for tests.
* Do not rely on the disappeared CidToDsKey and DsKeyToCid functions from dshelp.
* Disable QUIC (does not support private networks)
* Fix tests: autodiscovery started working properly
2020-03-22 14:50:25 +01:00
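With the helper gone from pnet, the 32-byte cluster secret can be generated directly; a minimal standard-library sketch (not the exact ipfs-cluster-service code):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateClusterSecret returns 32 random bytes, hex-encoded, suitable as
// the shared secret of a libp2p private network (PSK).
func generateClusterSecret() (string, error) {
	secret := make([]byte, 32)
	if _, err := rand.Read(secret); err != nil {
		return "", err
	}
	return hex.EncodeToString(secret), nil
}

func main() {
	s, err := generateClusterSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```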
Hector Sanjuan
531379b1d9
Feature: Support multiple listeners in configuration
* add ipv6 listening addresses to the default config

* ipfsproxy: support multiple listeners. Add default ipv6.

* restapi: support multiple listen addresses. enable ipv6

* cluster_config: format default listen addresses

* commands: update for multiple listeners. Fix randomports for udp and ipv6.

* ipfs-cluster-service: fix randomports test

* multiple listeners: fix remaining tests

* golint

* Disable ipv6 in defaults

It is not supported by Docker by default, nor in Travis CI build environments.
Users can enable it manually now.

* proxy: disable ipv6 in test

* ipfshttp: fix test

Co-authored-by: @RubenKelevra <cyrond@gmail.com>
2020-02-28 11:16:16 -05:00
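The general pattern behind supporting multiple listeners, sketched with the standard library (addresses and setup are illustrative, not the actual restapi/ipfsproxy code):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	// Hypothetical defaults: one IPv4 and one IPv6 listen address.
	listenAddrs := []string{"127.0.0.1:9095", "[::1]:9095"}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from", r.Host)
	})

	// Start one listener per configured address, all serving the same handler.
	for _, addr := range listenAddrs {
		l, err := net.Listen("tcp", addr)
		if err != nil {
			// e.g. environments (Docker, CI) without IPv6 support.
			log.Printf("skipping %s: %s", addr, err)
			continue
		}
		go http.Serve(l, mux)
	}
	select {} // block forever while the listeners serve
}
```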
Kishan Sagathiya
e1faf12bae ipfsproxy: hijack repo/gc and trigger cluster-wide GC
This adds hijacking of the repo/gc endpoint to the proxy to do cluster-wide gc.
2019-12-06 13:08:57 +01:00
Kishan Mohanbhai Sagathiya
bc357400f6 API logs for IPFS Proxy 2019-09-10 13:59:03 +07:00
Hector Sanjuan
1eade4ae58 Fix #732: Introduce native pin/update
This introduces a pin/update operation which allows pinning a new item to the
cluster while indicating that said pin is an update to an already-existing pin.

When this is the case, all the configuration for the existing pin is copied to
the new one (including allocations). The IPFS connector will then trigger
pin/update directly in IPFS, allowing efficient pinning based on
DAG differences. Since the allocations were the same for both pins,
the pin/update can proceed.

PinUpdate does not unpin the previous pin (it is not possible to do this
atomically in cluster like it happens in IPFS). The user can manually do it
after the pin/update is done.

Internally, after a lot of deliberation on what the optimal way to do this is,
I opted for adding a `PinUpdate` option to the `PinOptions` type (it carries the
CID to update from). In order to carry this option from the REST API to the
IPFS Connector, it is serialized in the Protobuf (and stored in the
datastore). There is no other way to do this in a simple fashion, since the Pin
object is the piece of information that is sent around.

Additionally, making it a PinOption plays well with the Pin/PinPath APIs which
need little changes. Effectively, you are pinning a new thing. You are just
indicating that it should be configured from an existing one.

Fixes #732
2019-08-09 16:11:52 +02:00
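A rough sketch of carrying the update source as a pin option; the types below are hypothetical simplifications of the real api.PinOptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, simplified stand-in for the real api.PinOptions.
type PinOptions struct {
	ReplicationFactorMin int    `json:"replication_factor_min"`
	ReplicationFactorMax int    `json:"replication_factor_max"`
	Name                 string `json:"name"`
	// PinUpdate carries the CID of an existing pin whose configuration
	// (including allocations) is copied to the new pin.
	PinUpdate string `json:"pin_update,omitempty"`
}

func main() {
	opts := PinOptions{
		ReplicationFactorMin: -1,
		ReplicationFactorMax: -1,
		Name:                 "updated-item",
		PinUpdate:            "<cid-of-existing-pin>", // placeholder CID
	}
	b, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(b))
}
```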
Hector Sanjuan
7c636061bd
Improve pin/unpin method signatures (#843)
* Improve pin/unpin method signatures:

These changes the following Cluster Go API methods:

* -> Cluster.Pin(ctx, cid, options) (pin, error)
* -> Cluster.Unpin(ctx, cid) (pin, error)
* -> Cluster.PinPath(ctx, path, opts) (pin,error)

Pin and Unpin now return the pinned object.

The signatures of the methods now match those of the API Client, make it
clearer which options the user can set, and are aligned with PinPath and
UnpinPath, which already returned Pin objects.

The REST API now returns the Pinned/Unpinned object rather than 204-Accepted.

This was necessary for a cleaner pin/update approach, which I'm working on in
another branch.

Most of the changes here update tests to the new signatures.

* Adapt load-balancing client to new Pin/Unpin signatures

* cluster.go: Fix typo

Co-Authored-By: Kishan Sagathiya <kishansagathiya@gmail.com>

* cluster.go: Fix typo

Co-Authored-By: Kishan Sagathiya <kishansagathiya@gmail.com>
2019-07-22 15:39:11 +02:00
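The new signatures, sketched as a hypothetical interface (the real methods live on the Cluster type and use the api package types):

```go
package main

import (
	"context"

	cid "github.com/ipfs/go-cid"
)

// Hypothetical stand-ins for the real api.Pin and api.PinOptions types.
type Pin struct{ Cid cid.Cid }
type PinOptions struct{}

// The shape of the changed methods described above: Pin and Unpin now
// return the pinned object, like PinPath and UnpinPath already did.
type ClusterPinner interface {
	Pin(ctx context.Context, c cid.Cid, opts PinOptions) (Pin, error)
	Unpin(ctx context.Context, c cid.Cid) (Pin, error)
	PinPath(ctx context.Context, path string, opts PinOptions) (Pin, error)
}

func main() {}
```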
Hector Sanjuan
b804e61ef0 Update deps along with go-libp2p-core refactor
Lots of rewrites in imports...
2019-06-14 13:10:45 +02:00
Hector Sanjuan
a86c7cae2b rpc auth: handle some auth errors gracefully
Particularly, we will ignore authorization errors for some broadcasts and simply
not include those responses in the assembled one.
2019-05-09 21:23:49 +02:00
Hector Sanjuan
3d49ac26a5 Feat: Split components into RPC Services
I had thought of this for a very long time but there were no compelling
reasons to do it. However, specifying RPC endpoint permissions becomes
significantly nicer if each Component is a different RPC Service. This also
fixes some naming issues, like having to prefix methods with the component name
to separate them from identically-named methods in some other component
(Pin and IPFSPin).
2019-05-04 21:36:10 +01:00
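The concept, sketched with the standard library's net/rpc for brevity (ipfs-cluster itself uses go-libp2p-gorpc): registering each component under its own service name removes the need for prefixes such as IPFSPin:

```go
package main

import (
	"fmt"
	"log"
	"net/rpc"
)

type Args struct{ Cid string }
type Reply struct{}

// One receiver per component, each registered as its own RPC service.
type ClusterRPCAPI struct{}

func (c *ClusterRPCAPI) Pin(in Args, out *Reply) error {
	fmt.Println("cluster-level pin:", in.Cid)
	return nil
}

type IPFSConnectorRPCAPI struct{}

func (i *IPFSConnectorRPCAPI) Pin(in Args, out *Reply) error {
	fmt.Println("ipfs pin:", in.Cid)
	return nil
}

func main() {
	s := rpc.NewServer()
	// "Cluster.Pin" and "IPFSConnector.Pin" no longer collide, and
	// per-service/per-method permissions are easier to express.
	if err := s.RegisterName("Cluster", &ClusterRPCAPI{}); err != nil {
		log.Fatal(err)
	}
	if err := s.RegisterName("IPFSConnector", &IPFSConnectorRPCAPI{}); err != nil {
		log.Fatal(err)
	}
}
```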
Hector Sanjuan
036e3da7f1 Proxy pin/update: Respond with BadRequest when arguments missing 2019-05-02 10:32:13 +01:00
Hector Sanjuan
da24114ae0 Proxy: hijack pin/update
The IPFS pin/update endpoint takes two arguments and usually
unpins the first and pins the second. It is a bit more efficient
to do this in a single operation than in two separate ones.

This will make the proxy endpoint hijack pin/update requests.

First, the FROM pin is fetched from the state. If present, we
set the options (replication factors, actual allocations) from
that pin to the new one. Then we pin the TO item and proceed
to unpin the FROM item when `unpin` is not false.

We need to support path resolving, just like IPFS, therefore
it was necessary to expose IPFSResolve() via RPC.
2019-04-29 16:36:40 +02:00
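The hijacked flow in miniature; the state, types and CIDs below are placeholders for the real cluster state and RPC calls:

```go
package main

import (
	"errors"
	"fmt"
)

// Placeholder pin and state types standing in for the cluster state and RPC.
type Pin struct {
	Cid     string
	Options string // replication factors, allocations, ...
}

type state map[string]Pin

// pinUpdate mirrors the hijacked flow: copy the FROM pin's options to the
// TO pin, pin TO, and unpin FROM unless unpin is false.
func pinUpdate(st state, from, to string, unpin bool) error {
	fromPin, ok := st[from]
	if !ok {
		return errors.New("from-pin is not part of the cluster state")
	}
	st[to] = Pin{Cid: to, Options: fromPin.Options} // pin TO with FROM's options
	if unpin {
		delete(st, from) // unpin FROM
	}
	return nil
}

func main() {
	st := state{"<from-cid>": {Cid: "<from-cid>", Options: "rmin=2,rmax=3"}}
	if err := pinUpdate(st, "<from-cid>", "<to-cid>", true); err != nil {
		panic(err)
	}
	fmt.Println(st)
}
```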
Alexey Novikov
53d624e701 fix #636: mitigate long header attack
License: MIT
Signed-off-by: Alexey Novikov <alexey@novikov.io>
2019-03-10 21:16:26 +00:00
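The commit message does not show the fix, but the usual Go-side mitigation is to bound header size and header read time on the server; a sketch (not necessarily the exact change in this commit):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:    "127.0.0.1:9095", // illustrative address
		Handler: http.NewServeMux(),
		// Cap header size and the time allowed to send headers, so that
		// very long or slowly-dripped headers cannot exhaust the server.
		MaxHeaderBytes:    4096,
		ReadHeaderTimeout: 5 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```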
Hector Sanjuan
0008f69162 Types: make AddedOutput carry a cid.Cid
This was a leftover. For consistency, all CIDs coming out of the API
should have the canonical CID form ( { "/": "cid" } ); otherwise
we are inconsistent.
2019-03-06 18:48:25 +00:00
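go-cid's Cid type marshals to the { "/": "cid" } link form in JSON, which is what makes the output canonical once AddedOutput carries a cid.Cid; a small sketch with a simplified, hypothetical AddedOutput:

```go
package main

import (
	"encoding/json"
	"fmt"

	cid "github.com/ipfs/go-cid"
)

// Simplified, hypothetical fragment of AddedOutput carrying a cid.Cid.
type AddedOutput struct {
	Name string  `json:"name"`
	Cid  cid.Cid `json:"cid"`
}

func main() {
	c, err := cid.Decode("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG")
	if err != nil {
		panic(err)
	}
	out, _ := json.Marshal(AddedOutput{Name: "file.txt", Cid: c})
	// Prints the canonical link form: {"name":"file.txt","cid":{"/":"QmYwAPJ..."}}
	fmt.Println(string(out))
}
```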
Hector Sanjuan
23db807b87 ipfsproxy: use PinPath to match IPFS behaviour
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2019-03-04 15:54:34 +00:00
Hector Sanjuan
c4b18cd5f6 Address issues from self-review
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2019-02-27 18:43:29 +00:00
Hector Sanjuan
6447ea51d2 Remove *Serial types. Use pointers for all types.
This takes advantage of the latest features in go-cid, peer.ID and
go-multiaddr and makes the Go types serializable by default.

This means we no longer need to copy between Pin <-> PinSerial, or ID <->
IDSerial etc. We can now efficiently binary-encode these types using short
field keys and without parsing/stringifying (in many cases it is just a cast).

We still get the same json output as before (with minor modifications for
Cids).

This should greatly improve Cluster performance and memory usage when dealing
with large collections of items.

License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
2019-02-27 17:04:35 +00:00
Adrian Lanzafame
3b3f786d68
add opencensus tracing and metrics
This commit adds support for OpenCensus tracing
and metrics collection. This required support for
context.Context propagation throughout the cluster
codebase and, in particular, the ipfscluster component
interfaces.

The tracing propagates across RPC and HTTP boundaries.
The current default tracing backend is Jaeger.

Metrics collection currently exports the metrics exposed by
the OpenCensus HTTP plugin, as well as the pprof metrics,
to a Prometheus endpoint for scraping.
The current default metrics backend is Prometheus.

Metrics are exposed by default because of their low
overhead and can be turned off if desired. Tracing
is off by default as it has a much higher performance
overhead, though the extent of the performance hit can be
adjusted with smaller sampling rates.

License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
2019-02-04 18:53:21 +10:00
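A minimal sketch of how the Jaeger and Prometheus exporters are typically wired with OpenCensus (endpoints, namespace and service name are illustrative):

```go
package main

import (
	"log"
	"net/http"

	"contrib.go.opencensus.io/exporter/jaeger"
	"contrib.go.opencensus.io/exporter/prometheus"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
)

func main() {
	// Tracing: export spans to Jaeger, sampled at a low rate because of
	// the higher performance overhead.
	je, err := jaeger.NewExporter(jaeger.Options{
		AgentEndpoint: "localhost:6831",
		Process:       jaeger.Process{ServiceName: "cluster-daemon"},
	})
	if err != nil {
		log.Fatal(err)
	}
	trace.RegisterExporter(je)
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(0.1)})

	// Metrics: expose registered views on a Prometheus scrape endpoint.
	pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "ipfscluster"})
	if err != nil {
		log.Fatal(err)
	}
	view.RegisterExporter(pe)

	mux := http.NewServeMux()
	mux.Handle("/metrics", pe)
	log.Fatal(http.ListenAndServe("127.0.0.1:8888", mux))
}
```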
Hector Sanjuan
6bda1e340b ipfsproxy: fix typos in comments
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2019-01-11 13:36:56 +01:00
Hector Sanjuan
a0185fac2a Fix #382 (again): A better strategy for handling proxy headers
This changes the current strategy for extracting headers from the IPFS daemon
so they can be used in hijacked endpoints in the proxy. The ipfs daemon is a bit
of a mess and what we were doing was not really reliable, especially when it
comes to setting CORS headers right (which we were not doing).

The new approach is:

* For every hijacked request, make an OPTIONS request with the given Origin to
the same path on the IPFS daemon and extract some CORS headers from
that. Use those in the hijacked response.

* Avoid hijacking OPTIONS requests; they should always go through so the IPFS
daemon controls all the CORS-preflight things as it wants.

* Similar to before, have an only-once-triggered request to extract other
interesting or custom headers from a fixed IPFS endpoint. This allows us to
have the proxy forward other custom headers and to catch
`Access-Control-Expose-Methods`. The difference is that the endpoint used for
this and the additional headers are configurable by the user (but with hidden
configuration options, because this is quite exotic compared to regular usage).

Now the implementation:

* Replaced the standard Muxer with gorilla/mux (I have also taken the chance
to update the gxed version to the latest tag). This gives us much better
matching control over routes and allows us to not handle OPTIONS requests.

* This also allows removing the extractArgument code and having proper handlers
for the endpoints that pass command arguments as the last segment of the URL. A
very simple handler that wraps the default ones can extract the
argument from the URL and put it in the query. Overall much cleaner this way.

* No longer capture interesting headers from any random proxied request.  This
made things complicated with a wrapping handler. We will just trigger the one
request to do it when we need it.

* When preparing the headers for the hijacked responses:
  * Trigger the OPTIONS request and figure out which CORS things we should set
  * Set the additional headers (perhaps triggering a POST request to fetch them)
  * Set our own headers.

* Moved all the headers stuff to a new headers.go file.

* Added configuration options (hidden by default) to:
  * Customize the extract headers endpoint
  * Customize what additional headers are extracted
  * Use HTTPS when talking to the IPFS API
    * I haven't tested this, but I did not want to have hardcoded 'http://' URLs
      around, as before.

* Added extra testing for this, and tested manually a lot, comparing the
daemon's original output with our hijacked endpoint outputs while looking
at the API traffic with ngrep and making sure the requests happen as expected.
Also tested with IPFS Companion in FF and Chrome.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2019-01-10 21:35:44 +01:00
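The core of the new strategy, sketched with the standard library (the IPFS API address and header list are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
)

// copyCORSHeaders issues an OPTIONS request for the same path and Origin to
// the IPFS API and copies the relevant CORS headers to our response, so the
// daemon stays in control of its CORS configuration.
func copyCORSHeaders(w http.ResponseWriter, hijacked *http.Request) error {
	req, err := http.NewRequest(http.MethodOptions,
		"http://127.0.0.1:5001"+hijacked.URL.Path, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Origin", hijacked.Header.Get("Origin"))
	req.Header.Set("Access-Control-Request-Method", hijacked.Method)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	for _, h := range []string{
		"Access-Control-Allow-Origin",
		"Access-Control-Allow-Methods",
		"Access-Control-Allow-Credentials",
	} {
		if v := resp.Header.Get(h); v != "" {
			w.Header().Set(h, v)
		}
	}
	return nil
}

func main() {
	fmt.Println("wire copyCORSHeaders into the hijacked handlers")
}
```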
Hector Sanjuan
159243d845 proxy: Check all headers to decide if we need to request
And do it by loading them from the sync.Map only once.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-12-20 14:44:32 +01:00
Hector Sanjuan
3896858a83 proxy: Rename helper to ipfsHeadersKnown()
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-12-20 14:24:06 +01:00
Hector Sanjuan
862c1eb3ea Fix #382: Extract headers from IPFS API requests & apply them to hijacked ones.
This commit makes the proxy extract useful fixed headers (like CORS) from
the IPFS daemon API responses and then apply them to the responses
from hijacked endpoints like /add or /repo/stat.

It does this by caching a list of headers from the first IPFS API
response which has them. If no proxied request has been performed yet, or we
have not managed to obtain the headers we're interested in, this will try
triggering a request to "/api/v0/version" to obtain them first.

This should fix the issues with using Cluster proxy with IPFS Companion and
Chrome.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
2018-12-18 16:05:12 +01:00
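A sketch of the caching-plus-fallback idea; the URL, header list and use of sync.Once are illustrative, not the actual proxy code:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

var (
	headerOnce    sync.Once
	cachedHeaders = make(http.Header)
)

// interestingHeaders returns the cached IPFS API headers, triggering a
// one-off request to /api/v0/version when nothing has been captured yet.
func interestingHeaders() http.Header {
	headerOnce.Do(func() {
		resp, err := http.Post("http://127.0.0.1:5001/api/v0/version", "", nil)
		if err != nil {
			return
		}
		defer resp.Body.Close()
		for _, h := range []string{"Access-Control-Allow-Origin", "Server"} {
			if v := resp.Header.Get(h); v != "" {
				cachedHeaders.Set(h, v)
			}
		}
	})
	return cachedHeaders
}

func main() {
	fmt.Println(interestingHeaders())
}
```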
Kishan Sagathiya
8b33dbec03 Remove proxy_ and Proxy from proxy config
Remove proxy_ and Proxy from proxy config objects without breaking
compatibility with previous revisions

Fixes #616

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-12-13 00:21:21 +05:30
Kishan Sagathiya
56cc3b88d7 Issue #453 Extract the IPFS Proxy from ipfshttp
Added test for ipfsproxy

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-11-04 08:57:09 +05:30
Kishan Sagathiya
411eea664c Merge branch 'master' of github.com:ipfs/ipfs-cluster into issue_453 2018-11-03 20:24:15 +05:30
Kishan Sagathiya
3a5ad6111a Issue #453 Extract the IPFS Proxy from ipfshttp
Changes as required
Rename the IPFSProxy struct to Server

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-11-01 15:54:05 +05:30
Kishan Sagathiya
fdb573c96f Issue #453 Extract the IPFS Proxy from ipfshttp
Add more missing pieces in config
Use the right package (not the built-in one)
Set up the RPC client for the proxy in the cluster
Add back SetClient and Shutdown into the Connector, as they are required to
implement the Component interface
Add `ipfsproxy` to the list of logging identifiers and add its default
log level

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-10-14 22:42:50 +05:30
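The Component interface is approximately the following, which is why SetClient and Shutdown had to come back (a sketch; the real interface lives in the ipfscluster package):

```go
package main

import (
	"context"

	rpc "github.com/libp2p/go-libp2p-gorpc"
)

// Component approximates the interface every cluster component satisfies.
type Component interface {
	SetClient(*rpc.Client)
	Shutdown(context.Context) error
}

// Server stands in for the ipfsproxy server type.
type Server struct {
	rpcClient *rpc.Client
}

func (s *Server) SetClient(c *rpc.Client) { s.rpcClient = c }

func (s *Server) Shutdown(ctx context.Context) error {
	// Stop listeners, wait for pending work, etc.
	return nil
}

// Compile-time check that Server implements Component.
var _ Component = (*Server)(nil)

func main() {}
```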
Kishan Sagathiya
6803df4e97 Issue #453 Extract the IPFS Proxy from ipfshttp
Go through `ipfsproxy` repo and change things that are inappropriate

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-10-14 18:00:19 +05:30
Kishan Sagathiya
d338f8de53 Issue #453 Extract the IPFS Proxy from ipfshttp
Added config
Change functions accordingly to add the new `ipfsproxy` component

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-10-14 16:07:42 +05:30
Kishan Sagathiya
155a65cac3 Issue #453 Extract the IPFS Proxy from ipfshttp
Extract the IPFS Proxy from ipfshttp and make it an api module

The `ipfshttp` IPFSConnector implementation includes the so-called IPFS
Proxy: an endpoint which offers an IPFS API, hijacking some interesting
requests and forwarding the rest to the ipfs daemon.

`ipfshttp` should contain an implementation of IPFSConnector whose only
task is to talk to IPFS.
A new module, `api/ipfsproxy`, should be created: an API Component
implementation for Cluster. The whole proxy code should be moved there.

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>
2018-10-13 19:58:28 +05:30
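The hijack-or-forward pattern in miniature, using the standard library's reverse proxy (addresses and the hijacked endpoint body are illustrative):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Forward everything to the local IPFS API by default...
	ipfsAPI, err := url.Parse("http://127.0.0.1:5001")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(ipfsAPI)

	mux := http.NewServeMux()
	mux.Handle("/", proxy)

	// ...but hijack the interesting endpoints and answer them with
	// cluster logic instead of letting them reach the ipfs daemon.
	mux.HandleFunc("/api/v0/pin/add", func(w http.ResponseWriter, r *http.Request) {
		// The real proxy triggers a cluster-wide pin via RPC here.
		fmt.Fprintln(w, `{"Pins":["<cid>"]}`)
	})

	log.Fatal(http.ListenAndServe("127.0.0.1:9095", mux))
}
```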