* cluster and restapi configs can also get values from environment variables
* other config components don't read any values from the environment
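For illustration, a minimal sketch of how a component configuration could apply
such environment overrides on top of the JSON values; the variable name, field
and applyEnvVars helper are hypothetical, not the actual keys or code used by
the cluster and restapi components:

```go
package config

import "os"

type restAPIConfig struct {
	ListenAddr string
}

// applyEnvVars overrides fields with values from the environment, leaving the
// values loaded from the JSON config untouched when the variables are unset.
func (cfg *restAPIConfig) applyEnvVars() {
	if v := os.Getenv("CLUSTER_RESTAPI_LISTENMULTIADDRESS"); v != "" {
		cfg.ListenAddr = v
	}
}
```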
License: MIT
Signed-off-by: Robert Ignat <robert.ignat91@gmail.com>
jsonConfigs.getSection() returned a value when it needed to
return a pointer to the jsonSection field inside the struct.
Even though the jsonSection type is a map, and therefore heap-allocated,
returning it by value (non-pointer) resulted in it becoming
disassociated from the overarching jsonConfigs struct.
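A reduced Go sketch of the fix, with approximated type and field names, showing
why the pointer return matters:

```go
package config

import "encoding/json"

type jsonSection map[string]*json.RawMessage

type jsonConfigs struct {
	Cluster jsonSection
	API     jsonSection
}

// getSection must return a pointer to the field: callers may need to
// initialize the map (`*s = make(jsonSection)`) or re-assign it, and only a
// pointer keeps that change attached to the parent jsonConfigs struct.
// Returning the jsonSection by value hands back a copy of the map header,
// so any later re-assignment would be lost.
func (jc *jsonConfigs) getSection(name string) *jsonSection {
	switch name {
	case "cluster":
		return &jc.Cluster
	default:
		return &jc.API
	}
}
```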
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
This commit adds support for OpenCensus tracing
and metrics collection. This required support for
context.Context propagation throughout the cluster
codebase, and in particular, in the ipfscluster component
interfaces.
Tracing propagates across RPC and HTTP boundaries.
The current default tracing backend is Jaeger.
The metrics collection currently exports the metrics exposed by
the OpenCensus HTTP plugin, as well as the pprof metrics,
to a Prometheus endpoint for scraping.
The current default metrics backend is Prometheus.
Metrics are currently exposed by default due to their low
overhead and can be turned off if desired, whereas tracing
is off by default as it has a much higher performance
overhead, though the extent of the performance hit can be
reduced with smaller sampling rates.
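As a rough, non-authoritative sketch, wiring OpenCensus to Jaeger and
Prometheus in Go typically looks like the following; the endpoints, namespace,
sampling fraction and setup function are placeholders rather than the values
used by the cluster configuration:

```go
package observability

import (
	"net/http"

	"contrib.go.opencensus.io/exporter/jaeger"
	"contrib.go.opencensus.io/exporter/prometheus"
	"go.opencensus.io/plugin/ochttp"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
)

func setup() error {
	// Tracing: export spans to a Jaeger collector and sample only a small
	// fraction of requests to keep the overhead low.
	je, err := jaeger.NewExporter(jaeger.Options{
		CollectorEndpoint: "http://localhost:14268/api/traces", // placeholder
		Process:           jaeger.Process{ServiceName: "cluster"},
	})
	if err != nil {
		return err
	}
	trace.RegisterExporter(je)
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(0.01)})

	// Metrics: register the ochttp server views and expose everything on a
	// /metrics endpoint for Prometheus to scrape.
	pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "cluster"})
	if err != nil {
		return err
	}
	view.RegisterExporter(pe)
	if err := view.Register(ochttp.DefaultServerViews...); err != nil {
		return err
	}

	mux := http.NewServeMux()
	mux.Handle("/metrics", pe)
	go http.ListenAndServe(":8888", mux) // placeholder address
	return nil
}
```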
License: MIT
Signed-off-by: Adrian Lanzafame <adrianlanzafame92@gmail.com>
Snap builds have broken again. It seems the credentials expired without
warning, even though they were not that old anyway. As promised,
snaps would be removed the next time they broke.
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
Fix the PID extraction in the test.
Co-Authored-By: hsanjuan <hsanjuan@users.noreply.github.com>
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
They should not be interpreted as 0, since that may overwrite
defaults which are not 0. We simply need to do nothing.
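A minimal sketch of the idea, assuming the values in question are empty JSON
config fields; the field names and the applyJSON helper are invented for
illustration:

```go
package config

import "time"

type jsonConfig struct {
	ConnectTimeout string `json:"connect_timeout,omitempty"`
	MaxRetries     *int   `json:"max_retries,omitempty"`
}

type Config struct {
	ConnectTimeout time.Duration
	MaxRetries     int
}

// applyJSON copies values from the parsed JSON onto a Config that already
// holds defaults; empty or absent fields are skipped so the defaults survive.
func (cfg *Config) applyJSON(jcfg *jsonConfig) error {
	if jcfg.ConnectTimeout != "" {
		d, err := time.ParseDuration(jcfg.ConnectTimeout)
		if err != nil {
			return err
		}
		cfg.ConnectTimeout = d
	}
	if jcfg.MaxRetries != nil { // pointer distinguishes "absent" from a real 0
		cfg.MaxRetries = *jcfg.MaxRetries
	}
	return nil
}
```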
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
The JSON parsing of the config could error, but we skipped error checking and
used Validate() at the end. This meant that some JSON parsing errors
were logged, but the final error when validating the configuration came from
somewhere different, creating very confusing error messages for the user.
This commit changes that, along with removing hardcoded section lists. It also
removes the Sharder component section because, AFAIK, it was a leftover
from the sharding branch and, in the end, there is no separate sharding
component that needs a config.
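A minimal sketch of the new behaviour, with invented names for the manager and
the component interface: parse errors are returned immediately instead of being
logged and deferred to Validate(), and the registered sections are iterated
rather than kept in a hardcoded list:

```go
package config

import (
	"encoding/json"
	"fmt"
)

// ComponentConfig is a minimal stand-in for the interface that each component
// configuration implements.
type ComponentConfig interface {
	LoadJSON([]byte) error
	Validate() error
}

type Manager struct {
	sections map[string]ComponentConfig
}

// loadSections hands each registered component its raw JSON and fails fast on
// parse errors, so the user sees the real problem rather than an unrelated
// validation message later on.
func (m *Manager) loadSections(raw map[string]json.RawMessage) error {
	for name, comp := range m.sections {
		sec, ok := raw[name]
		if !ok {
			continue
		}
		if err := comp.LoadJSON(sec); err != nil {
			return fmt.Errorf("error parsing the %q section: %s", name, err)
		}
	}
	return nil
}
```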
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This adds support for handling preflight requests in the REST API
and fixes the currently mostly broken CORS support.
Before, we just let the user add custom response headers via the
configuration's "headers" key, but this is not the best approach because
CORS headers and requests need special handling, and doing it wrong
has security implications.
Therefore, I have added specific CORS-related configuration options
which control CORS behaviour. We are forced to change the "headers"
defaults and will notify users about this in the changelog.
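For reference, a hedged sketch of what such CORS options and their wiring could
look like; the field names are approximations and the rs/cors middleware is
just one way to answer preflight requests, not necessarily what the REST API
component ends up using:

```go
package restapi

import (
	"net/http"
	"time"

	"github.com/rs/cors"
)

// CORS-related options; names are illustrative and may differ from the real
// configuration keys.
type Config struct {
	CORSAllowedOrigins   []string
	CORSAllowedMethods   []string
	CORSAllowedHeaders   []string
	CORSExposedHeaders   []string
	CORSAllowCredentials bool
	CORSMaxAge           time.Duration
}

// corsHandler wraps the API router with a middleware that answers
// CORS-preflight (OPTIONS) requests and sets response headers according to
// the configuration.
func corsHandler(cfg *Config, router http.Handler) http.Handler {
	c := cors.New(cors.Options{
		AllowedOrigins:   cfg.CORSAllowedOrigins,
		AllowedMethods:   cfg.CORSAllowedMethods,
		AllowedHeaders:   cfg.CORSAllowedHeaders,
		ExposedHeaders:   cfg.CORSExposedHeaders,
		AllowCredentials: cfg.CORSAllowCredentials,
		MaxAge:           int(cfg.CORSMaxAge.Seconds()),
	})
	return c.Handler(router)
}
```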
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
This changes the current strategy of extracting headers from the IPFS daemon
to use them for hijacked endpoints in the proxy. The IPFS daemon is a bit of a
mess, and what we were doing is not really reliable, especially when it comes
to setting CORS headers right (which we were not doing).
The new approach is:
* For every hijacked request, make an OPTIONS request to the same path, with
the given Origin, to the IPFS daemon and extract some CORS headers from
that. Use those in the hijacked response
* Avoid hijacking OPTIONS requests; they should always go through so the IPFS
daemon controls all the CORS-preflight handling as it wants.
* Similar to before, have an only-once-triggered request to extract other
interesting or custom headers from a fixed IPFS endpoint. This allows us to
have the proxy forward other custom headers and to catch
`Access-Control-Expose-Methods`. The difference is that the endpoint used for
this and the additional headers are configurable by the user (but as hidden
configuration options, because this is quite exotic for regular usage).
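A sketch of the per-request extraction described above; the setCORSHeaders
helper and the exact header list are illustrative, not the actual proxy code:

```go
package ipfsproxy

import "net/http"

// corsHeadersToCopy are the headers taken from the IPFS daemon's
// CORS-preflight answer and reused in the hijacked response.
var corsHeadersToCopy = []string{
	"Access-Control-Allow-Origin",
	"Access-Control-Allow-Methods",
	"Access-Control-Allow-Headers",
	"Access-Control-Allow-Credentials",
}

// setCORSHeaders performs an OPTIONS request against the IPFS API for the
// same path and Origin as the hijacked request and copies the CORS headers
// it gets back into the response we are about to write.
func setCORSHeaders(ipfsAPIURL string, dest http.Header, hijacked *http.Request) error {
	req, err := http.NewRequest(http.MethodOptions, ipfsAPIURL+hijacked.URL.Path, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Origin", hijacked.Header.Get("Origin"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	for _, h := range corsHeadersToCopy {
		if v := resp.Header.Get(h); v != "" {
			dest.Set(h, v)
		}
	}
	return nil
}
```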
Now the implementation:
* Replaced the standard Muxer with gorilla/mux (I have also taken the chance
to update the gxed version to the latest tag). This gives us much better
matching control over routes and allows us to not handle OPTIONS requests.
* This also allows removing the extractArgument code and having proper handlers
for the endpoints that pass command arguments as the last segment of the URL. A
very simple handler that wraps the default ones extracts the argument from the
URL and puts it in the query (see the sketch after this list). Overall, much
cleaner this way.
* No longer capture interesting headers from any random proxied request. This
made things complicated with a wrapping handler. We now just trigger the one
request to do it when we need it.
* When preparing the headers for the hijacked responses:
  * Trigger the OPTIONS request and figure out which CORS headers we should set.
  * Set the additional headers (perhaps triggering a POST request to fetch them).
  * Set our own headers.
* Moved all the headers stuff to a new headers.go file.
* Added configuration options (hidden by default) to:
  * Customize the extract-headers endpoint.
  * Customize which additional headers are extracted.
  * Use HTTPS when talking to the IPFS API. I haven't tested this, but I did
    not want to have hardcoded 'http://' URLs around, as before.
* Added extra testing for this, and tested manually a lot, comparing the
daemon's original output with our hijacked endpoint outputs while looking
at the API traffic with ngrep and making sure the requests happen as expected.
Also tested with IPFS Companion in Firefox and Chrome.
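To make the routing part concrete, here is a hedged sketch of gorilla/mux
routes with a small wrapper that moves the trailing URL segment into the query,
as referenced in the list above; the paths, handler names and the withURLArg
helper are illustrative only:

```go
package ipfsproxy

import (
	"net/http"

	"github.com/gorilla/mux"
)

// withURLArg wraps a hijacked handler and moves the final URL segment
// (captured by gorilla/mux as {arg}) into the "arg" query parameter, which is
// what the wrapped handler expects.
func withURLArg(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		q.Set("arg", mux.Vars(r)["arg"])
		r.URL.RawQuery = q.Encode()
		next(w, r)
	}
}

func routes(pinHandler, defaultHandler http.HandlerFunc) *mux.Router {
	router := mux.NewRouter()
	// Hijacked endpoints, with and without a trailing argument. OPTIONS
	// requests are not registered here, so they fall through to the default
	// proxy handler and the IPFS daemon answers its own CORS preflights.
	router.HandleFunc("/api/v0/pin/add/{arg}", withURLArg(pinHandler)).Methods(http.MethodPost)
	router.HandleFunc("/api/v0/pin/add", pinHandler).Methods(http.MethodPost)
	// Everything else is proxied untouched.
	router.PathPrefix("/").HandlerFunc(defaultHandler)
	return router
}
```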
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>