_id · stringlengths · 2–7
text · stringlengths · 6–2.61k
title · stringclasses · 1 value
c177100
// cephRBDVolumeMarkDeleted marks an RBD storage volume as being in "zombie" // state // An RBD storage volume that is in zombie state is not tracked in LXD's // database anymore but still needs to be kept around for the sake of any // dependent storage entities in the storage pool. This usually happens when an // RBD ...
c177101
// cephRBDVolumeUnmarkDeleted unmarks an RBD storage volume as being in "zombie" // state // - An RBD storage volume that is in zombie state is not tracked in LXD's database // anymore but still needs to be kept around for the sake of any dependent // storage entities in the storage pool. // - This function is mostly use...
c177102
// cephRBDVolumeRename renames a given RBD storage volume // Note that this usually requires that the image be unmapped under its original // name, then renamed, and finally will be remapped again. If it is not unmapped // under its original name and the callers maps it under its new name the image // will be mapped tw...
c177103
// cephRBDSnapshotRename renames a given RBD snapshot // Note that if the snapshot is mapped - which it usually shouldn't be - this // usually requires that the snapshot be unmapped under its original name, then // renamed, and finally will be remapped again. If it is not unmapped under its // original name and the...
c177104
// cephRBDSnapshotDelete deletes an RBD snapshot // This requires that the snapshot does not have any clones and is unmapped and // unprotected.
c177105
// cephRBDVolumeCopy copies an RBD storage volume // This is a non-sparse copy which doesn't introduce any dependency relationship // between the source RBD storage volume and the target RBD storage volume. The // operation is similar to creating an empty RBD storage volume and rsyncing // the contents of the source R...
c177106
// cephRBDVolumeListSnapshots retrieves the snapshots of an RBD storage volume // The format of the snapshot names is simply the part after the @. So given a // valid RBD path relative to a pool // <osd-pool-name>/<rbd-storage-volume>@<rbd-snapshot-name> // this will only return // <rbd-snapshot-name>
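The naming scheme above can be sketched in Go: given a full RBD path of the form `<osd-pool-name>/<rbd-storage-volume>@<rbd-snapshot-name>`, only the part after the `@` is the snapshot name. `snapshotName` below is a hypothetical helper for illustration, not part of the LXD code.

```go
package main

import (
	"fmt"
	"strings"
)

// snapshotName returns the snapshot portion of a full RBD path, i.e. the
// part after the '@'. It returns "" when the path is not a snapshot path.
func snapshotName(rbdPath string) string {
	idx := strings.IndexByte(rbdPath, '@')
	if idx < 0 {
		return "" // not a snapshot path
	}
	return rbdPath[idx+1:]
}

func main() {
	fmt.Println(snapshotName("mypool/container_c1@snap0")) // prints "snap0"
}
```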
c177107
// getRBDSize returns the size the RBD storage volume is supposed to be created // with
c177108
// getRBDFilesystem returns the filesystem the RBD storage volume is supposed to // be created with
c177109
// copyWithoutSnapshotsFull creates a non-sparse copy of a container // This does not introduce a dependency relation between the source RBD storage // volume and the target RBD storage volume.
c177110
// copyWithoutSnapshotsSparse creates a sparse copy of a container // This introduces a dependency relation between the source RBD storage volume // and the target RBD storage volume.
c177111
// GetConfigCmd returns a cobra command that lets the caller see the configured // auth backends in Pachyderm
c177112
// SetConfigCmd returns a cobra command that lets the caller configure auth // backends in Pachyderm
c177113
// NewSharder creates a Sharder using a discovery client.
c177114
// NewRouter creates a Router.
c177115
// renewUserCredentials extends the TTL of the Pachyderm authentication token // 'userToken', using the vault plugin's Admin credentials. 'userToken' belongs // to the user who is calling vault, and would like to extend their Pachyderm // session.
c177116
// NewLocalClient returns a Client that stores data on the local file system
c177117
// AddSpanToAnyExisting checks 'ctx' for Jaeger tracing information, and if // tracing metadata is present, it generates a new span for 'operation', marks // it as a child of the existing span, and returns it.
c177118
// InstallJaegerTracerFromEnv installs a Jaeger client as the opentracing // global tracer, relying on environment variables to configure the client. It // returns the address used to initialize the global tracer, if any // initialization occurred.
c177119
// UnaryClientInterceptor returns a GRPC interceptor for non-streaming GRPC RPCs
c177120
// StreamClientInterceptor returns a GRPC interceptor for streaming GRPC RPCs
c177121
// UnaryServerInterceptor returns a GRPC interceptor for non-streaming GRPC RPCs
c177122
// StreamServerInterceptor returns a GRPC interceptor for streaming GRPC RPCs
c177123
// CloseAndReportTraces tries to close the global tracer, which, in the case of // the Jaeger tracer, causes it to send any unreported traces to the collector
c177124
// newWriter creates a new Writer.
c177125
// For sets b.MaxElapsedTime to 'maxElapsed' and returns b
c177126
// Helper function used to log requests and responses from our GRPC method // implementations
c177127
// Format proxies the closure in order to satisfy `logrus.Formatter`'s // interface.
c177128
// NewGRPCLogWriter creates a new GRPC log writer. `logger` specifies the // underlying logger, and `source` specifies where these logs are coming from; // it is added as an entry field for all log messages.
c177129
// Read loads the Pachyderm config on this machine. // If an existing configuration cannot be found, it sets up the defaults. Read // returns a nil Config if and only if it returns a non-nil error.
c177130
// Write writes the configuration in 'c' to this machine's Pachyderm config // file.
c177131
// Read reads val from r.
c177132
// Write writes val to r.
c177133
// NewReadWriter returns a new ReadWriter with rw as both its source and its sink.
c177134
// RunGitHookServer starts the webhook server
c177135
// newLoggingPipe initializes a loggingPipe
c177136
// Read implements the corresponding method of net.Conn
c177137
// Write implements the corresponding method of net.Conn
c177138
// Accept implements the corresponding method of net.Listener for // TestListener
c177139
// Close implements the corresponding method of net.Listener for // TestListener. Any blocked Accept operations will be unblocked and return // errors.
c177140
// errorf is analogous to fmt.Errorf, but generates hashTreeErrors instead of // errorStrings.
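The pattern described above (a package-local `errorf` that mirrors `fmt.Errorf` but yields a typed error) can be sketched as follows; `hashTreeError` here is a hypothetical stand-in for the package's actual error type.

```go
package main

import "fmt"

// hashTreeError is a hypothetical typed error; errorf mirrors fmt.Errorf
// but produces this type instead of the plain errorString fmt.Errorf returns,
// so callers can distinguish hashtree errors by type assertion.
type hashTreeError struct {
	msg string
}

func (e *hashTreeError) Error() string { return e.msg }

func errorf(format string, args ...interface{}) error {
	return &hashTreeError{msg: fmt.Sprintf(format, args...)}
}

func main() {
	err := errorf("node %q not found", "/foo")
	_, isHashTreeErr := err.(*hashTreeError)
	fmt.Println(err, isHashTreeErr)
}
```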
c177141
// InitWithKube is like InitServiceEnv, but also assumes that it's run inside // a kubernetes cluster and tries to connect to the kubernetes API server.
c177142
// GetEtcdClient returns the already connected etcd client without modification.
c177143
// GetKubeClient returns the already connected Kubernetes API client without // modification.
c177144
// NewHasher creates a hasher.
c177145
// HashJob computes and returns the hash of a job.
c177146
// HashPipeline computes and returns the hash of a pipeline.
c177147
// Status returns the statuses of workers referenced by pipelineRcName. // pipelineRcName is the name of the pipeline's RC and can be obtained with // ppsutil.PipelineRcName. You can also pass "" for pipelineRcName to get all // clients for all workers.
c177148
// Cancel cancels a set of datums running on workers. // pipelineRcName is the name of the pipeline's RC and can be obtained with // ppsutil.PipelineRcName.
c177149
// Conns returns a slice of connections to worker servers. // pipelineRcName is the name of the pipeline's RC and can be obtained with // ppsutil.PipelineRcName. You can also pass "" for pipelineRcName to get all // clients for all workers.
c177150
// Clients returns a slice of worker clients for a pipeline. // pipelineRcName is the name of the pipeline's RC and can be obtained with // ppsutil.PipelineRcName. You can also pass "" for pipelineRcName to get all // clients for all workers.
c177151
// NewClient returns a worker client for the worker at the IP address passed in.
c177152
// RunFixedArgs wraps a function in a function // that checks its exact argument count.
c177153
// RunBoundedArgs wraps a function in a function // that checks its argument count is within a range.
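The wrapping described above can be sketched in Go; `runBoundedArgs` is a hypothetical helper (the real RunFixedArgs/RunBoundedArgs build cobra run functions), with RunFixedArgs being just the min == max case.

```go
package main

import "fmt"

// runBoundedArgs wraps 'run' in a function that first checks that the
// argument count falls within [min, max] before invoking it.
func runBoundedArgs(min, max int, run func(args []string) error) func(args []string) error {
	return func(args []string) error {
		if len(args) < min || len(args) > max {
			return fmt.Errorf("expected %d to %d arguments, got %d", min, max, len(args))
		}
		return run(args)
	}
}

func main() {
	wrapped := runBoundedArgs(1, 2, func(args []string) error {
		fmt.Println("args:", args)
		return nil
	})
	fmt.Println(wrapped([]string{})) // too few arguments -> error
}
```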
c177154
// ErrorAndExit errors with the given format and args, and then exits.
c177155
// ParseCommit takes an argument of the form "repo[@branch-or-commit]" and // returns the corresponding *pfs.Commit.
c177156
// ParseBranch takes an argument of the form "repo[@branch]" and // returns the corresponding *pfs.Branch. This uses ParseCommit under the hood // because a branch name is usually interchangeable with a commit-id.
c177157
// ParseFile takes an argument of the form "repo[@branch-or-commit[:path]]", and // returns the corresponding *pfs.File.
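The "repo[@branch-or-commit[:path]]" grammar described above can be sketched as follows; `parseFileArg` is a hypothetical helper that just splits the argument into its components, whereas the real ParseFile returns a *pfs.File.

```go
package main

import (
	"fmt"
	"strings"
)

// parseFileArg splits an argument of the form "repo[@branch-or-commit[:path]]"
// into its repo, commit (or branch), and path components; missing parts come
// back as empty strings.
func parseFileArg(arg string) (repo, commit, path string) {
	repo = arg
	if i := strings.IndexByte(repo, '@'); i >= 0 {
		repo, commit = repo[:i], repo[i+1:]
		if j := strings.IndexByte(commit, ':'); j >= 0 {
			commit, path = commit[:j], commit[j+1:]
		}
	}
	return repo, commit, path
}

func main() {
	fmt.Println(parseFileArg("images@master:/photo.png"))
}
```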
c177158
// Set adds a string to r
c177159
// SetDocsUsage sets the usage string for a docs-style command. Docs commands // have no functionality except to output some docs and related commands, and // should not specify a 'Run' attribute.
c177160
// makeCronCommits makes commits to a single cron input's repo. It's // a helper function called by monitorPipeline.
c177161
// Writer implements the corresponding method in the Client interface
c177162
// Reader implements the corresponding method in the Client interface
c177163
// Delete implements the corresponding method in the Client interface
c177164
// Walk implements the corresponding method in the Client interface
c177165
// Exists implements the corresponding method in the Client interface
c177166
// GetBlock encodes a hash into a readable format in the form of a Block.
c177167
// Health implements the Health method for healthServer.
c177168
// split is like path.Split, but uses this library's defaults for canonical // paths
c177169
// ValidatePath checks if a file path is legal
c177170
// MatchDatum checks if a datum matches a filter. To match, each string in // the filter must match at least one of the datum's Path or Hash values. // Order of filter and data is irrelevant.
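The filter semantics described above can be sketched in Go; `matchDatum` is a hypothetical simplification in which `data` holds the datum's Path and Hash strings.

```go
package main

import "fmt"

// matchDatum reports whether every string in filter equals at least one
// entry in data (the datum's Path or Hash); the order of both slices is
// irrelevant, and an empty filter matches everything.
func matchDatum(filter, data []string) bool {
	for _, f := range filter {
		matched := false
		for _, d := range data {
			if f == d {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	data := []string{"/images/1.png", "1fd2a4..."}
	fmt.Println(matchDatum([]string{"/images/1.png"}, data)) // true
	fmt.Println(matchDatum([]string{"/other"}, data))        // false
}
```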
c177171
// NewCacheServer creates a new CacheServer.
c177172
// authorizePipelineOp checks if the user indicated by 'ctx' is authorized // to perform 'operation' on the pipeline in 'info'
c177173
// sudo is a helper function that copies 'pachClient', grants it PPS's superuser // token, and calls 'f' with the superuser client. This helps isolate PPS's use // of its superuser token so that it's not widely copied and is unlikely to // leak authority to parts of the code that aren't supposed to have it. // // Note t...
c177174
// setPipelineDefaults sets the default values for a pipeline info
c177175
// incrementGCGeneration increments the GC generation number in etcd
c177176
// NewDebugServer creates a new server that serves the debug api over GRPC
c177177
// Health health-checks pachd; it returns an error if pachd isn't healthy.
c177178
// In test mode, we use unique names for cache groups, since we might want // to run multiple block servers locally, which would conflict if groups // had the same name. We also do not report stats to prometheus
c177179
// watchGC watches for GC runs and invalidates the entire cache when GC happens.
c177180
// splitKey splits a key into the format we want, and also appends // the generation number
c177181
// NewWriter returns a new Writer; it will flush when // it gets termHeight many lines, including the header line. // The header line will be reprinted after termHeight many lines have been // written. NewWriter will panic if it's given a header that doesn't end in \n.
c177182
// Write writes a line to the tabwriter.
c177183
// PrintRepoHeader prints a repo header.
c177184
// PrintRepoInfo pretty-prints repo info.
c177185
// PrintDetailedRepoInfo pretty-prints detailed repo info.
c177186
// PrintBranch pretty-prints a Branch.
c177187
// PrintCommitInfo pretty-prints commit info.
c177188
// PrintDetailedCommitInfo pretty-prints detailed commit info.
c177189
// PrintFileInfo pretty-prints file info. // If recurse is false and directory size is 0, display "-" instead // If fast is true and file size is 0, display "-" instead
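The display rule described above can be sketched as follows; `prettySize` is a hypothetical helper showing why a zero size is rendered as "-" when it may simply not have been computed (fast mode for files, non-recursive listings for directories), rather than as a misleading "0".

```go
package main

import "fmt"

// prettySize renders a byte count for display; when the size is 0 and may
// be unknown rather than actually zero, it shows "-" instead.
func prettySize(sizeBytes int64, sizeMayBeUnknown bool) string {
	if sizeBytes == 0 && sizeMayBeUnknown {
		return "-"
	}
	return fmt.Sprintf("%dB", sizeBytes)
}

func main() {
	fmt.Println(prettySize(0, true))    // prints "-"
	fmt.Println(prettySize(1024, true)) // prints "1024B"
}
```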
c177190
// PrintDetailedFileInfo pretty-prints detailed file info.
c177191
// Add adds an ancestry reference to the given string.
c177192
// RetryNotify calls the notify function with the error and wait duration // for each failed attempt before sleeping.
c177193
// Get does a filtered write of id's hashtree to the passed in io.Writer.
c177194
// Delete deletes a hashtree from the cache.
c177195
// PrintJobInfo pretty-prints job info.
c177196
// PrintPipelineInfo pretty-prints pipeline info.
c177197
// PrintWorkerStatus pretty-prints a worker status.
c177198
// PrintDetailedJobInfo pretty-prints detailed job info.
c177199
// PrintDetailedPipelineInfo pretty-prints detailed pipeline info.