Enable spell checking and fix spelling errors (using US locale)

Hector Sanjuan 2022-06-15 12:16:05 +02:00
parent 6260b11e8c
commit 755cebbe0d
32 changed files with 73 additions and 68 deletions

.github/config.yml
@@ -56,7 +56,7 @@ newPRWelcomeComment: >
* The PR is merged by maintainers when it has been approved and comments addressed.
We currently aim to provide initial feedback/triaging within **two business
days**. Please keep an eye on any labelling actions, as these will indicate
days**. Please keep an eye on any labeling actions, as these will indicate
priorities and status of your contribution.
We are very grateful for your contribution!

@@ -62,7 +62,7 @@ jobs:
run: go test -v -timeout 15m -failfast -datastore leveldb .
tests-check:
name: "Build and syntax checks"
name: "Build, syntax and spelling checks"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
@@ -77,6 +77,9 @@ jobs:
- name: Install staticcheck
run: go install honnef.co/go/tools/cmd/staticcheck@latest
- name: Install misspell
run: go install github.com/client9/misspell/cmd/misspell@latest
- name: Check
run: make check

@@ -185,7 +185,7 @@ The full list of additional features and bug fixes can be found below.
* Dependency upgrades | [ipfs-cluster/ipfs-cluster#1613](https://github.com/ipfs-cluster/ipfs-cluster/issues/1613) | [ipfs-cluster/ipfs-cluster#1617](https://github.com/ipfs-cluster/ipfs-cluster/issues/1617) | [ipfs-cluster/ipfs-cluster#1627](https://github.com/ipfs-cluster/ipfs-cluster/issues/1627)
* Bump RPC protocol version | [ipfs-cluster/ipfs-cluster#1615](https://github.com/ipfs-cluster/ipfs-cluster/issues/1615)
* Replace cid.Cid with api.Cid wrapper type | [ipfs-cluster/ipfs-cluster#1626](https://github.com/ipfs-cluster/ipfs-cluster/issues/1626)
* Provide string JSON marshalling for PinType | [ipfs-cluster/ipfs-cluster#1628](https://github.com/ipfs-cluster/ipfs-cluster/issues/1628)
* Provide string JSON marshaling for PinType | [ipfs-cluster/ipfs-cluster#1628](https://github.com/ipfs-cluster/ipfs-cluster/issues/1628)
* ipfs-cluster-ctl should exit with status 1 when an argument error happens | [ipfs-cluster/ipfs-cluster#1633](https://github.com/ipfs-cluster/ipfs-cluster/issues/1633) | [ipfs-cluster/ipfs-cluster#1634](https://github.com/ipfs-cluster/ipfs-cluster/issues/1634)
* Revamp and fix basic exported metrics: pins, queued, pinning, pin errors | [ipfs-cluster/ipfs-cluster#1187](https://github.com/ipfs-cluster/ipfs-cluster/issues/1187) | [ipfs-cluster/ipfs-cluster#1470](https://github.com/ipfs-cluster/ipfs-cluster/issues/1470) | [ipfs-cluster/ipfs-cluster#1637](https://github.com/ipfs-cluster/ipfs-cluster/issues/1637)
@@ -363,7 +363,7 @@ the latest metric of a certain type received by a peer.
Before, adding content using the `local=true` option would add the blocks to
the peer receiving the request and then allocate the pin normally (i.e. to the
peers with most free space avaiable, which may or not be the local peer). Now,
peers with most free space available, which may or not be the local peer). Now,
"local add" requests will always allocate the pin to the local peer since it
already has the content.
@@ -520,7 +520,7 @@ The second question is addressed by enriching pin metadata. Pins will now
store the time that they were added to the cluster. The pin tracker will
additionally keep track of how many times an operation has been retried. Using
these two items, we can prioritize pinning of items that are new and have not
repeteadly failed to pin. The max age and max number of retries used to
repeatedly failed to pin. The max age and max number of retries used to
prioritize a pin can be controlled in the configuration.
Please see the information below for more details about how to make use and
@@ -606,7 +606,7 @@ tags allocator with a "group:default" tag will not be present).
This asks the allocator to allocate pins first by the value of the "group"
tag-metric, as produced by the tag informer, and then by the value of the
"freespace" metric. Allocating solely by the "freespace" is the equivalent of
the cluster behaviour on previous versions. This default assumes the default
the cluster behavior on previous versions. This default assumes the default
`informer/tags` configuration section mentioned above is present.
##### REST API
@@ -686,7 +686,7 @@ redirect). Clients do keep the HTTP method when following 307 redirects.
The parameters object to the RestAPI client `WaitFor` function now has a
`Limit` field. This allows to return as soon as a number of peers have reached
the target status. When unset, previous behaviour should be maintained.
the target status. When unset, previous behavior should be maintained.
##### Other
@@ -719,7 +719,7 @@ constrained disk I/O it will be surely noticed, at least in the first GC
cycle, since the datastore was never GC'ed before.
Badger is the datastore we are more familiar with and the most scalable choice
(chosen by both IPFS and Filecoin). However, it may be that badger behaviour
(chosen by both IPFS and Filecoin). However, it may be that badger behavior
and GC-needs are not best suited or not preferred, or more downsides are
discovered in the future. For those cases, we have added the option to run
with a leveldb backend as an alternative. Level DB does not need GC and it
@@ -741,7 +741,7 @@ connect to the `origins` of a pin before pinning. Note that for the moment
[ipfs will keep connected to those peers permanently](https://github.com/ipfs-cluster/ipfs-cluster/issues/1376).
Please read carefully through the notes below, as the release includes subtle
changes in configuration, defaults and behaviours which may in some cases
changes in configuration, defaults and behaviors which may in some cases
affect you (although probably will not).
#### List of changes
@@ -871,10 +871,10 @@ state between nodes, creating a new root. Batching allows to group multiple
updates in a single crdt DAG-node. This reduces the number of broadcasts, the
depth of the DAG, the breadth of the DAG and the syncing times when the
Cluster is ingesting many pins, removing most of the overhead in the
process. The batches are automatically commited when reaching a certain age or
process. The batches are automatically committed when reaching a certain age or
a certain size, both configurable.
Additionally, improvements to timeout behaviours have been introduced.
Additionally, improvements to timeout behaviors have been introduced.
For more details, check the list below and the latest documentation on the
[website](https://ipfscluster.io).
@@ -1119,7 +1119,7 @@ The IPFS proxy `/pin/add` endpoint now supports `recursive=false` for direct pin
The `/pins` endpoint now return `GlobalPinInfo` objects that include a `name`
field for the pin name. The same objects do not embed redundant information
anymore for each peer in the `peer_map`: `cid` and `peer` are ommitted.
anymore for each peer in the `peer_map`: `cid` and `peer` are omitted.
##### Go APIs
@@ -1236,7 +1236,7 @@ how to setup and join these clusters
* A new `peer_addresses` key allows specifying additional peer addresses in the configuration (similar to the `peerstore` file). These are treated as libp2p bootstrap addreses (do not mix with Raft bootstrap process). This setting is mostly useful for CRDT collaborative clusters, as template configurations can be distributed including bootstrap peers (usually the same as trusted peers). The values are the full multiaddress of these peers: `/ip4/x.x.x.x/tcp/1234/p2p/Qmxxx...`.
* `listen_multiaddress` can now be set to be an array providing multiple listen multiaddresses, the new defaults being `/tcp/9096` and `/udp/9096/quic`.
* `enable_relay_hop` (true by default), lets the cluster peer act as a relay for other cluster peers behind NATs. This is only for the Cluster network. As a reminder, while this setting is problematic on IPFS (due to the amount of traffic the HOP peers start relaying), the cluster-peers networks are smaller and do not move huge amounts of content around.
* The `ipfs_sync_interval` option dissappears as the stateless tracker does not keep a state that can lose synchronization with IPFS.
* The `ipfs_sync_interval` option disappears as the stateless tracker does not keep a state that can lose synchronization with IPFS.
* `ipfshttp` section:
* A new `repogc_timeout` key specifies the timeout for garbage collection operations on IPFS. It is set to 24h by default.
@@ -1582,7 +1582,7 @@ longer used and the maintenance of Gx dependencies has been dropped. The
#### Summary
As we get ready to introduce a new CRDT-based "consensus" component to replace
Raft, IPFS Cluster 0.10.0 prepares the ground with substancial under-the-hood
Raft, IPFS Cluster 0.10.0 prepares the ground with substantial under-the-hood
changes. many performance improvements and a few very useful features.
First of all, this release **requires** users to run `state upgrade` (or start
@@ -1611,7 +1611,7 @@ items to specific Cluster peers, overriding the default allocation policy.
query arguments to the Pin or PinPath endpoints: `POST
/pins/<cid-or-path>?meta-key1=value1&meta-key2=value2...`
Note that on this release we have also removed a lot of backwards-compatiblity
Note that on this release we have also removed a lot of backwards-compatibility
code for things older than version 0.8.0, which kept things working but
printed respective warnings. If you're upgrading from an old release, consider
comparing your configuration with the new default one.
@@ -2019,7 +2019,7 @@ Note that the REST API response format for the `/add` endpoint has changed. Thus
##### Bug fixes
* `/add` endpoints improvements and IPFS Companion compatiblity | [ipfs-cluster/ipfs-cluster#582](https://github.com/ipfs-cluster/ipfs-cluster/issues/582) | [ipfs-cluster/ipfs-cluster#569](https://github.com/ipfs-cluster/ipfs-cluster/issues/569)
* `/add` endpoints improvements and IPFS Companion compatibility | [ipfs-cluster/ipfs-cluster#582](https://github.com/ipfs-cluster/ipfs-cluster/issues/582) | [ipfs-cluster/ipfs-cluster#569](https://github.com/ipfs-cluster/ipfs-cluster/issues/569)
* Fix adding with spaces in the name parameter | [ipfs-cluster/ipfs-cluster#583](https://github.com/ipfs-cluster/ipfs-cluster/issues/583)
* Escape filter query parameter | [ipfs-cluster/ipfs-cluster#586](https://github.com/ipfs-cluster/ipfs-cluster/issues/586)
* Fix some race conditions | [ipfs-cluster/ipfs-cluster#597](https://github.com/ipfs-cluster/ipfs-cluster/issues/597)
@@ -2352,7 +2352,7 @@ APIs have not changed in this release. The `/health/graph` endpoint has been add
This release includes a number of bufixes regarding the upgrade and import of state, along with two important features:
* Commands to export and import the internal cluster state: these allow to perform easy and human-readable dumps of the shared cluster state while offline, and eventually restore it in a different peer or cluster.
* The introduction of `replication_factor_min` and `replication_factor_max` parameters for every Pin (along with the deprecation of `replication_factor`). The defaults are specified in the configuration. For more information on the usage and behavour of these new options, check the IPFS cluster guide.
* The introduction of `replication_factor_min` and `replication_factor_max` parameters for every Pin (along with the deprecation of `replication_factor`). The defaults are specified in the configuration. For more information on the usage and behavior of these new options, check the IPFS cluster guide.
* Features
* New `ipfs-cluster-service state export/import/cleanup` commands | [ipfs-cluster/ipfs-cluster#240](https://github.com/ipfs-cluster/ipfs-cluster/issues/240) | [ipfs-cluster/ipfs-cluster#290](https://github.com/ipfs-cluster/ipfs-cluster/issues/290)

@@ -30,6 +30,7 @@ follow:
check:
go vet ./...
staticcheck --checks all ./...
misspell -locale US .
test:
go test -v ./...

@@ -4,6 +4,7 @@
[![Made by](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](https://protocol.ai)
[![Main project](https://img.shields.io/badge/project-ipfs--cluster-blue.svg?style=flat-square)](http://github.com/ipfs-cluster)
[![Matrix channel](https://img.shields.io/badge/matrix-%23ipfs--cluster-blue.svg?style=flat-square)](https://app.element.io/#/room/#ipfs-cluster:ipfs.io)
[![pkg.go.dev](https://pkg.go.dev/badge/github.com/ipfs-cluster/ipfs-cluster)](https://pkg.go.dev/github.com/ipfs-cluster/ipfs-cluster)
[![Go Report Card](https://goreportcard.com/badge/github.com/ipfs-cluster/ipfs-cluster)](https://goreportcard.com/report/github.com/ipfs-cluster/ipfs-cluster)
[![codecov](https://codecov.io/gh/ipfs-cluster/ipfs-cluster/branch/master/graph/badge.svg)](https://codecov.io/gh/ipfs-cluster/ipfs-cluster)

@@ -125,7 +125,7 @@ func TestAdder_ContextCancelled(t *testing.T) {
defer wg.Done()
_, err := adder.FromMultipart(ctx, r)
if err == nil {
t.Error("expected a context cancelled error")
t.Error("expected a context canceled error")
}
t.Log(err)
}()

@@ -12,8 +12,8 @@ import (
"fmt"
"sort"
logging "github.com/ipfs/go-log/v2"
api "github.com/ipfs-cluster/ipfs-cluster/api"
logging "github.com/ipfs/go-log/v2"
peer "github.com/libp2p/go-libp2p-core/peer"
rpc "github.com/libp2p/go-libp2p-gorpc"
)
@@ -281,7 +281,7 @@ func (a *Allocator) Allocate(
// the types for all the peers. There cannot be a metric of one type
// for a peer that does not appear in the other types.
//
// Removing such occurences is done in allocate.go, before the
// Removing such occurrences is done in allocate.go, before the
// allocator is called.
//
// Otherwise, the sorting might be funny.

@@ -182,7 +182,7 @@ func AddParamsFromQuery(query url.Values) (AddParams, error) {
return params, err
}
// This mimics go-ipfs behaviour.
// This mimics go-ipfs behavior.
if params.CidVersion > 0 {
params.RawLeaves = true
}

@@ -28,7 +28,7 @@ const minMaxHeaderBytes = 4096
const defaultMaxHeaderBytes = minMaxHeaderBytes
// Config provides common API configuration values and allows to customize its
// behaviour. It implements most of the config.ComponentConfig interface
// behavior. It implements most of the config.ComponentConfig interface
// (except the Default() and ConfigKey() methods). Config should be embedded
// in a Config object that implements the missing methods and sets the
// meta options.

@@ -202,7 +202,7 @@ func MakeGet(t *testing.T, api API, url string, resp interface{}) {
CheckHeaders(t, api.Headers(), url, httpResp.Header)
}
// MakePost performs a POST request agains the API with the given body.
// MakePost performs a POST request against the API with the given body.
func MakePost(t *testing.T, api API, url string, body []byte, resp interface{}) {
MakePostWithContentType(t, api, url, body, "application/json", resp)
}

@@ -38,7 +38,7 @@ const (
DefaultMaxHeaderBytes = minMaxHeaderBytes
)
// Config allows to customize behaviour of IPFSProxy.
// Config allows to customize behavior of IPFSProxy.
// It implements the config.ComponentConfig interface.
type Config struct {
config.Saver

@@ -43,7 +43,7 @@ var logger = logging.Logger(loggingFacility)
// Client interface defines the interface to be used by API clients to
// interact with the ipfs-cluster-service. All methods take a
// context.Context as their first parameter, this allows for
// timing out and cancelling of requests as well as recording
// timing out and canceling of requests as well as recording
// metrics and tracing of requests through the API.
type Client interface {
// ID returns information about the cluster Peer.

@@ -452,7 +452,7 @@ func (pi PinInfo) ToGlobal() GlobalPinInfo {
return gpi
}
// Defined retuns if the PinInfo is not zero.
// Defined returns if the PinInfo is not zero.
func (pi PinInfo) Defined() bool {
return pi.Cid.Defined()
}
@@ -684,7 +684,7 @@ const (
PinModeDirect PinMode = 1
)
// PinModeFromString converst a string to PinMode.
// PinModeFromString converts a string to PinMode.
func PinModeFromString(s string) PinMode {
switch s {
case "recursive", "":
@@ -1386,7 +1386,7 @@ func (m Metric) Discard() bool {
}
// GetWeight returns the weight of the metric.
// This is for compatiblity.
// This is for compatibility.
func (m Metric) GetWeight() int64 {
return m.Weight
}

@@ -261,7 +261,7 @@ func (c *Cluster) watchPinset() {
recoverTimer := time.NewTimer(0) // 0 so that it does an initial recover right away
// This prevents doing an StateSync while doing a RecoverAllLocal,
// which is intended behaviour as for very large pinsets
// which is intended behavior as for very large pinsets
for {
select {
case <-stateSyncTimer.C:
@@ -777,7 +777,7 @@ func (c *Cluster) Shutdown(ctx context.Context) error {
logger.Info("shutting down Cluster")
// Cancel discovery service (this shutdowns announcing). Handling
// entries is cancelled along with the context below.
// entries is canceled along with the context below.
if c.discovery != nil {
c.discovery.Close()
}
@@ -1179,7 +1179,7 @@ func (c *Cluster) StateSync(ctx context.Context) error {
// StatusAll returns the GlobalPinInfo for all tracked Cids in all peers on
// the out channel. This is done by broacasting a StatusAll to all peers. If
// an error happens, it is returned. This method blocks until it finishes. The
// operation can be aborted by cancelling the context.
// operation can be aborted by canceling the context.
func (c *Cluster) StatusAll(ctx context.Context, filter api.TrackerStatus, out chan<- api.GlobalPinInfo) error {
_, span := trace.StartSpan(ctx, "cluster/StatusAll")
defer span.End()
@@ -1249,7 +1249,7 @@ func (c *Cluster) localPinInfoOp(
// RecoverAll triggers a RecoverAllLocal operation on all peers and returns
// GlobalPinInfo objets for all recovered items. This method blocks until
// finished. Operation can be aborted by cancelling the context.
// finished. Operation can be aborted by canceling the context.
func (c *Cluster) RecoverAll(ctx context.Context, out chan<- api.GlobalPinInfo) error {
_, span := trace.StartSpan(ctx, "cluster/RecoverAll")
defer span.End()
@@ -1310,7 +1310,7 @@ func (c *Cluster) RecoverLocal(ctx context.Context, h api.Cid) (api.PinInfo, err
// are managed and their allocation, but does not indicate if the item is
// successfully pinned. For that, use the Status*() methods.
//
// The operation can be aborted by cancelling the context. This methods blocks
// The operation can be aborted by canceling the context. This methods blocks
// until the operation has completed.
func (c *Cluster) Pins(ctx context.Context, out chan<- api.Pin) error {
_, span := trace.StartSpan(ctx, "cluster/Pins")

@@ -167,7 +167,7 @@ func ErrorOut(m string, a ...interface{}) {
}
// WaitForIPFS hangs until IPFS API becomes available or the given context is
// cancelled. The IPFS API location is determined by the default ipfshttp
// canceled. The IPFS API location is determined by the default ipfshttp
// component configuration and can be overridden using environment variables
// that affect that configuration. Note that we have to do this in the blind,
// since we want to wait for IPFS before we even fetch the IPFS component

@@ -320,7 +320,7 @@ func (css *Consensus) setup() {
css.readyCh <- struct{}{}
}
// Shutdown closes this component, cancelling the pubsub subscription and
// Shutdown closes this component, canceling the pubsub subscription and
// closing the datastore.
func (css *Consensus) Shutdown(ctx context.Context) error {
css.shutdownLock.Lock()
@@ -335,7 +335,7 @@ func (css *Consensus) Shutdown(ctx context.Context) error {
css.cancel()
// Only close crdt after cancelling the context, otherwise
// Only close crdt after canceling the context, otherwise
// the pubsub broadcaster stays on and locks it.
if crdt := css.crdt; crdt != nil {
crdt.Close()
@@ -357,7 +357,7 @@ func (css *Consensus) SetClient(c *rpc.Client) {
css.rpcReady <- struct{}{}
}
// Ready returns a channel which is signalled when the component
// Ready returns a channel which is signaled when the component
// is ready to use.
func (css *Consensus) Ready(ctx context.Context) <-chan struct{} {
return css.readyCh
@@ -491,7 +491,7 @@ func (css *Consensus) batchWorker() {
}
if err := css.batchingState.Commit(css.ctx); err != nil {
logger.Errorf("error commiting batch after reaching max size: %s", err)
logger.Errorf("error committing batch after reaching max size: %s", err)
continue
}
logger.Infof("batch commit (size): %d items", maxSize)
@@ -506,7 +506,7 @@ func (css *Consensus) batchWorker() {
case <-batchTimer.C:
// Commit
if err := css.batchingState.Commit(css.ctx); err != nil {
logger.Errorf("error commiting batch after reaching max age: %s", err)
logger.Errorf("error committing batch after reaching max age: %s", err)
continue
}
logger.Infof("batch commit (max age): %d items", batchCurSize)
@@ -575,7 +575,7 @@ func (css *Consensus) RmPeer(ctx context.Context, pid peer.ID) error {
}
// State returns the cluster shared state. It will block until the consensus
// component is ready, shutdown or the given context has been cancelled.
// component is ready, shutdown or the given context has been canceled.
func (css *Consensus) State(ctx context.Context) (state.ReadOnly, error) {
select {
case <-ctx.Done():

@@ -451,7 +451,7 @@ func TestBatching(t *testing.T) {
t.Error("the added pin should be in the state")
}
// Pin 4 things, and check that 3 are commited
// Pin 4 things, and check that 3 are committed
for _, c := range []api.Cid{test.Cid2, test.Cid3, test.Cid4, test.Cid5} {
err = cc.LogPin(ctx, testPin(c))
if err != nil {

@@ -246,7 +246,7 @@ func makeServerConf(peers []peer.ID) hraft.Configuration {
}
// WaitForLeader holds until Raft says we have a leader.
// Returns if ctx is cancelled.
// Returns if ctx is canceled.
func (rw *raftWrapper) WaitForLeader(ctx context.Context) (string, error) {
ctx, span := trace.StartSpan(ctx, "consensus/raft/WaitForLeader")
defer span.End()

@@ -71,7 +71,7 @@ type Config struct {
}
// badgerOptions is a copy of options.BadgerOptions but
// without the Logger as it cannot be marshalled to/from
// without the Logger as it cannot be marshaled to/from
// JSON.
type badgerOptions struct {
Dir string `json:"dir"`

@@ -12,7 +12,7 @@ version: '3.4'
# it from the container. "ipfs-cluster-ctl peers ls" should show all 3 peers a few
# seconds after start.
#
# For persistance, a "compose" folder is created and used to store configurations
# For persistence, a "compose" folder is created and used to store configurations
# and states. This can be used to edit configurations in subsequent runs. It looks
# as follows:
#

@@ -29,7 +29,7 @@ const (
)
// Config is used to initialize a Connector and allows to customize
// its behaviour. It implements the config.ComponentConfig interface.
// its behavior. It implements the config.ComponentConfig interface.
type Config struct {
config.Saver

@@ -448,7 +448,7 @@ func (ipfs *Connector) pinProgress(ctx context.Context, hash api.Cid, maxDepth a
for {
var pins ipfsPinsResp
if err := dec.Decode(&pins); err != nil {
// If we cancelled the request we should tell the user
// If we canceled the request we should tell the user
// (in case dec.Decode() exited cleanly with an EOF).
select {
case <-ctx.Done():
@@ -849,7 +849,7 @@ func (ipfs *Connector) RepoGC(ctx context.Context) (api.RepoGC, error) {
resp := ipfsRepoGCResp{}
if err := dec.Decode(&resp); err != nil {
// If we cancelled the request we should tell the user
// If we canceled the request we should tell the user
// (in case dec.Decode() exited cleanly with an EOF).
select {
case <-ctx.Done():
@@ -933,7 +933,7 @@ func (ipfs *Connector) SwarmPeers(ctx context.Context) ([]peer.ID, error) {
return swarm, nil
}
// chanDirectory implementes the files.Directory interace
// chanDirectory implements the files.Directory interface
type chanDirectory struct {
iterator files.DirIterator
}

@@ -143,7 +143,7 @@ func (mc *Checker) Alerts() <-chan api.Alert {
}
// Watch will trigger regular CheckPeers on the given interval. It will call
// peersF to obtain a peerset. It can be stopped by cancelling the context.
// peersF to obtain a peerset. It can be stopped by canceling the context.
// Usually you want to launch this in a goroutine.
func (mc *Checker) Watch(ctx context.Context, peersF func(context.Context) ([]peer.ID, error), interval time.Duration) {
ticker := time.NewTicker(interval)

@@ -129,7 +129,7 @@ func (mon *Monitor) logFromPubsub() {
return
default:
msg, err := mon.subscription.Next(ctx)
if err != nil { // context cancelled enters here
if err != nil { // context canceled enters here
continue
}
@@ -158,7 +158,7 @@ func (mon *Monitor) logFromPubsub() {
}
}
debug("recieved", metric)
debug("received", metric)
err = mon.LogMetric(ctx, metric)
if err != nil {

@@ -232,10 +232,10 @@ func (op *Operation) Timestamp() time.Time {
return ts
}
// Cancelled returns whether the context for this
// operation has been cancelled.
func (op *Operation) Cancelled() bool {
ctx, span := trace.StartSpan(op.ctx, "optracker/Cancelled")
// Canceled returns whether the context for this
// operation has been canceled.
func (op *Operation) Canceled() bool {
ctx, span := trace.StartSpan(op.ctx, "optracker/Canceled")
_ = ctx
defer span.End()
select {

@@ -39,12 +39,12 @@ func TestOperation(t *testing.T) {
}
if op.Cancelled() {
t.Error("should not be cancelled")
t.Error("should not be canceled")
}
op.Cancel()
if !op.Cancelled() {
t.Error("should be cancelled")
t.Error("should be canceled")
}
if op.ToTrackerStatus() != api.TrackerStatusUnpinning {

@@ -73,7 +73,7 @@ func NewOperationTracker(ctx context.Context, pid peer.ID, peerName string) *Ope
// one already exists to do the same thing, in which case nil is returned.
//
// If an operation exists it is of different type, it is
// cancelled and the new one replaces it in the tracker.
// canceled and the new one replaces it in the tracker.
func (opt *OperationTracker) TrackNewOperation(ctx context.Context, pin api.Pin, typ OperationType, ph Phase) *Operation {
ctx = trace.NewContext(opt.ctx, trace.FromContext(ctx))
ctx, span := trace.StartSpan(ctx, "optracker/TrackNewOperation")
@@ -94,7 +94,7 @@ func (opt *OperationTracker) TrackNewOperation(ctx context.Context, pin api.Pin,
op2 := newOperation(ctx, pin, typ, ph, opt)
if ok && op.Type() == typ {
// Carry over the attempt count when doing an operation of the
// same type. The old operation exists and was cancelled.
// same type. The old operation exists and was canceled.
op2.attemptCount = op.AttemptCount() // carry the count
}
logger.Debugf("'%s' on cid '%s' has been created with phase '%s'", typ, pin.Cid, ph)

@@ -32,7 +32,7 @@ func TestOperationTracker_TrackNewOperation(t *testing.T) {
}
if op.Cancelled() != false {
t.Error("should not be cancelled")
t.Error("should not be canceled")
}
if op.ToTrackerStatus() != api.TrackerStatusPinQueued {
@@ -54,7 +54,7 @@ func TestOperationTracker_TrackNewOperation(t *testing.T) {
}
if !op.Cancelled() {
t.Fatal("should have cancelled the original operation")
t.Fatal("should have canceled the original operation")
}
})

@@ -535,7 +535,7 @@ func TestTrackUntrackWithCancel(t *testing.T) {
case <-ctx.Done():
return
case <-time.Tick(150 * time.Millisecond):
t.Errorf("operation context should have been cancelled by now")
t.Errorf("operation context should have been canceled by now")
}
} else {
t.Error("slowPin should be pinning and is:", pInfo.Status)

@@ -145,7 +145,7 @@ func (spt *Tracker) opWorker(pinF func(*optracker.Operation) error, prioCh, norm
// applyPinF returns true if the operation can be considered "DONE".
func applyPinF(pinF func(*optracker.Operation) error, op *optracker.Operation) bool {
if op.Cancelled() {
// operation was cancelled. Move on.
// operation was canceled. Move on.
// This saves some time, but not 100% needed.
return false
}
@@ -155,7 +155,7 @@ func applyPinF(pinF func(*optracker.Operation) error, op *optracker.Operation) b
if err != nil {
if op.Cancelled() {
// there was an error because
// we were cancelled. Move on.
// we were canceled. Move on.
return false
}
op.SetError(err)

@@ -227,7 +227,7 @@ func TestTrackUntrackWithCancel(t *testing.T) {
case <-spt.optracker.OpContext(ctx, slowPinCid).Done():
return
case <-time.Tick(100 * time.Millisecond):
t.Errorf("operation context should have been cancelled by now")
t.Errorf("operation context should have been canceled by now")
}
} else {
t.Error("slowPin should be pinning and is:", pInfo.Status)
@@ -238,7 +238,7 @@ func TestTrackUntrackWithCancel(t *testing.T) {
// Because we are pinning the slow CID, the fast one will stay
// queued. We proceed to untrack it then. Since it was never
// "pinning", it should simply be unqueued (or ignored), and no
// cancelling of the pinning operation happens (unlike on WithCancel).
// canceling of the pinning operation happens (unlike on WithCancel).
func TestTrackUntrackWithNoCancel(t *testing.T) {
ctx := context.Background()
spt := testStatelessPinTracker(t)
@@ -333,7 +333,7 @@ func TestUntrackTrackWithCancel(t *testing.T) {
case <-spt.optracker.OpContext(ctx, slowPinCid).Done():
return
case <-time.Tick(100 * time.Millisecond):
t.Errorf("operation context should have been cancelled by now")
t.Errorf("operation context should have been canceled by now")
}
} else {
t.Error("slowPin should be in unpinning")

@@ -408,7 +408,7 @@ func (mock *mockCluster) SendInformerMetrics(ctx context.Context, in struct{}, o
func (mock *mockCluster) Alerts(ctx context.Context, in struct{}, out *[]api.Alert) error {
*out = []api.Alert{
api.Alert{
{
Metric: api.Metric{
Name: "ping",
Peer: PeerID2,