_id | text | title |
|---|---|---|
c13400 | // ImportMetric receives a metric from another veneur instance | |
c13401 | // ImportMetricGRPC receives a metric from another veneur instance over gRPC. In practice, this is only called when in the aggregation tier, so we don't handle LocalOnly scope. | |
c13402 | // Flush resets the worker's internal metrics and returns their contents. | |
c13403 | // NewEventWorker creates an EventWorker ready to collect events and service checks. | |
c13404 | // Work will start the EventWorker listening for events and service checks. This function will never return. | |
c13405 | // Flush returns the EventWorker's stored events and service checks and resets the stored contents. | |
c13406 | // NewSpanWorker creates a SpanWorker ready to collect events and service checks. | |
c13407 | // Flush invokes flush on each sink. | |
c13408 | // ReadProxyConfig unmarshals the proxy config file and slurps in its data. | |
c13409 | // ParseInterval handles parsing the flush interval as a time.Duration. | |
c13410 | // RefreshDestinations updates the server's list of valid destinations for flushing. This should be called periodically to ensure we have the latest data. | |
c13411 | // ProxyMetrics takes a slice of JSONMetrics and breaks them up into multiple HTTP requests by MetricKey using the hash ring. | |
c13412 | // ImportMetrics feeds a slice of JSON metrics to the server's workers. | |
c13413 | // iterate over a sorted set of jsonmetrics, returning them in contiguous nonempty chunks such that each chunk corresponds to a single worker. | |
c13414 | // submitter runs for the lifetime of the sink and performs batch-wise submission to the HEC sink. | |
c13415 | // setupHTTPRequest sets up and kicks off an HTTP request. It returns the elements of it that are necessary in sending a single batch to the HEC. | |
c13416 | // submitOneEvent takes one event and submits it to an HEC HTTP connection. It observes the configured splunk_hec_ingest_timeout - if the timeout is exceeded, it returns an error. If the timeout is 0, it waits forever to submit the event. | |
c13417 | // Flush takes the batched-up events and sends them to the HEC endpoint for ingestion. If set, it uses the send timeout configured for the span batch. | |
c13418 | // NewLightStepSpanSink creates a new instance of a LightStepSpanSink. | |
c13419 | // Ingest takes in a span and passes it along to the LS client after some sanity checks and improvements are made. | |
c13420 | // Flush doesn't need to do anything to the LS tracer, so we emit metrics instead. | |
c13421 | // NewConsul creates a new instance of a Consul Discoverer. | |
c13422 | // sendMetrics enqueues the metrics into the worker channels. | |
c13423 | // Ingest extracts metrics from an SSF span, and feeds them into the appropriate metric sinks. | |
c13424 | // NewSocket creates a socket which is intended for use by a single goroutine. | |
c13425 | // Flush the metrics from the LocalFilePlugin. | |
c13426 | // IsFramingError returns true if an error is a wire protocol framing error. This indicates that the stream can no longer be used for reading SSF data and should be closed. | |
c13427 | // Add appends a sample to the batch of samples. | |
c13428 | // Timestamp is a functional option for creating an SSFSample. It sets the timestamp field on the sample to the timestamp passed. | |
c13429 | // TimeUnit sets the unit on a sample to the given resolution's SI unit symbol. Valid resolutions are the time duration constants from Nanosecond through Hour. The non-SI units "minute" and "hour" are represented by "min" and "h" respectively. If a resolution is passed that does not correspond exactly to... | |
c13430 | // Gauge returns an SSFSample representing a gauge at a certain value. It's a convenience wrapper around constructing SSFSample objects. | |
c13431 | // Histogram returns an SSFSample representing a value on a histogram, like a timer or other range. It's a convenience wrapper around constructing SSFSample objects. | |
c13432 | // Set returns an SSFSample representing a value on a set, useful for counting the unique values that occur in a certain time bound. | |
c13433 | // Status returns an SSFSample capturing the reported state of a service. | |
c13434 | // IsAcceptableMetric returns true if a metric is meant to be ingested by a given sink. | |
c13435 | // populate a single t-digest, of a given compression, with a given number of samples drawn from the given distribution function, then write various statistics to the given CSVs | |
c13436 | // Return a gRPC connection for the input destination. The ok value indicates if the key was found in the map. | |
c13437 | // Add the destination to the map, and open a new connection to it. If the destination already exists, this is a no-op. | |
c13438 | // Delete a destination from the map and close the associated connection. This is a no-op if the destination doesn't exist. | |
c13439 | // Keys returns all of the destinations in the map. | |
c13440 | // Clear removes all keys from the map and closes each associated connection. | |
c13441 | // NewClient constructs a new SignalFx HTTP client for the given endpoint and API token. | |
c13442 | // NewSignalFxSink creates a new SignalFx sink for metrics. | |
c13443 | // Start begins the sink. For SignalFx this is a noop. | |
c13444 | // client returns a client that can be used to submit to a vary-by tag's value. If no client is specified for that tag value, the default client is returned. | |
c13445 | // newPointCollection creates an empty collection object and returns it. | |
c13446 | // FlushOtherSamples sends events to SignalFx. Event type samples will be serialized as SFX Events directly. All other metric types are ignored. | |
c13447 | // HandleTracePacket accepts an incoming packet as bytes and sends it to the appropriate worker. | |
c13448 | // ReadMetricSocket listens for available packets to handle. | |
c13449 | // Splits the read metric packet into multiple metrics and handles them. | |
c13450 | // ReadStatsdDatagramSocket reads statsd metric packets from a connection off a unix datagram socket. | |
c13451 | // ReadSSFPacketSocket reads SSF packets off a packet connection. | |
c13452 | // ReadTCPSocket listens on Server.TCPAddr for new connections, starting a goroutine for each. | |
c13453 | // HTTPServe starts the HTTP server and listens perpetually until it encounters an unrecoverable error. | |
c13454 | // registerPlugin registers a plugin for use on the veneur server. It is blocking and not threadsafe. | |
c13455 | // CalculateTickDelay takes the provided time, `Truncate`s it to a rounded-down multiple of `interval`, then adds `interval` back to find the "next" tick. | |
c13456 | // Set the list of tags to exclude on each sink. | |
c13457 | // ValidTrace returns true if an SSFSpan contains all data necessary to synthesize a span that can be used as part of a trace. | |
c13458 | // ValidateTrace is identical to ValidTrace, except instead of returning a boolean, it returns a non-nil error if the SSFSpan cannot be interpreted as a span, and nil otherwise. | |
c13459 | // WriteSSF writes an SSF span with a preceding v0 frame onto a stream and returns the number of bytes written, as well as an error. If the error matches IsFramingError, the stream must be considered poisoned and should not be re-used. | |
c13460 | // NewDatadogMetricSink creates a new Datadog sink for trace spans. | |
c13461 | // Flush sends metrics to Datadog. | |
c13462 | // NewDatadogSpanSink creates a new Datadog sink for trace spans. | |
c13463 | // Ingest takes the span and adds it to the ringbuffer. | |
c13464 | // Capacity indicates how many spans a client's channel should accommodate. This parameter can be used on both generic and networked backends. | |
c13465 | // BufferedSize indicates that a client should have a buffer size bytes large. See the note on the Buffered option about flushing the buffer. | |
c13466 | // FlushInterval sets up a buffered client to perform one synchronous flush per time interval in a new goroutine. The goroutine closes down when the Client's Close method is called. This uses a time.Ticker to trigger the flush, so will not trigger multiple times if flushing should be slower than the trig... | |
c13467 | // FlushChannel sets up a buffered client to perform one synchronous flush any time the given channel has a Time element ready. When the Client is closed, FlushWith invokes the passed stop function. This functional option is mostly useful for tests; code intended to be used in production should rely on F... | |
c13468 | // MaxBackoffTime sets the maximum time duration waited between reconnection attempts. If this option is not used, the backend uses DefaultMaxBackoff. | |
c13469 | // ParallelBackends sets the number of parallel network backend connections to send spans with. Each backend holds a connection to an SSF receiver open. | |
c13470 | // NewChannelClient constructs and returns a Client that can send directly into a span receiver channel. It provides an alternative interface to NewBackendClient for constructing internal and test-only clients. | |
c13471 | // SetDefaultClient overrides the default client used for recording traces, and gracefully closes the existing one. This is not safe to run concurrently with other goroutines. | |
c13472 | // NeutralizeClient sets up a client such that all Record or Flush operations result in ErrWouldBlock. It dashes all hope of a Client ever successfully recording or flushing spans, and is mostly useful in tests. | |
c13473 | // Record instructs the client to serialize and send a span. It does not wait for a delivery attempt; instead, the Client will send the result from serializing and submitting the span to the channel done, if it is non-nil. Record returns ErrNoClient if client is nil and ErrWouldBlock if the client is n... | |
c13474 | // Initializes a new merging t-digest using the given compression parameter. Lower compression values result in reduced memory consumption and less precision, especially at the median. Values from 20 to 1000 are recommended in Dunning's paper. The debug flag adds a list to each centroid, which stores all... | |
c13475 | // NewMergingFromData returns a MergingDigest with values initialized from MergingDigestData. This should be the way to generate a MergingDigest from a serialized protobuf. | |
c13476 | // Adds a new value to the t-digest, with a given weight that must be positive. Infinities and NaN cannot be added. | |
c13477 | // combine the mainCentroids and tempCentroids in-place into mainCentroids | |
c13478 | // given a quantile, estimate the index of the centroid that contains it using the given compression | |
c13479 | // Returns a value such that the fraction of values in td below that value is approximately equal to quantile. Returns NaN if the digest is empty. | |
c13480 | // Merge another digest into this one. Neither td nor other can be shared concurrently during the execution of this method. | |
c13481 | // This function provides direct access to the internal list of centroids in this t-digest. Having access to this list is very important for analyzing the t-digest's statistical properties. However, since it violates the encapsulation of the t-digest, it should be used sparingly. Mutating the returned slice ca... | |
c13482 | // tallyMetrics gives a slight overestimate of the number of metrics we'll be reporting, so that we can pre-allocate a slice of the correct length instead of constantly appending, for performance. | |
c13483 | // forwardGRPC forwards all input metrics to a downstream Veneur, over gRPC. | |
c13484 | // Evaluate an XPath and attempt to consume the result as a nodeset. | |
c13485 | // Get the XPath result as a nodeset. | |
c13486 | // Coerce the result into a string. | |
c13487 | // Coerce the result into a number. | |
c13488 | // Coerce the result into a boolean. | |
c13489 | // Add a variable resolver. | |
c13490 | // NewNode takes a C pointer from the libxml2 library and returns a Node instance of the appropriate type. | |
c13491 | // Add a node as a child of the current node. Passing in a nodeset will add all the nodes as children of the current node. | |
c13492 | // Insert a node immediately before this node in the document. Passing in a nodeset will add all the nodes, in order. | |
c13493 | // Insert a node immediately after this node in the document. Passing in a nodeset will add all the nodes, in order. | |
c13494 | // NodePtr returns a pointer to the underlying C struct. | |
c13495 | // Path returns an XPath expression that can be used to select this node in the document. | |
c13496 | // Return the attribute node, or nil if the attribute does not exist. | |
c13497 | // Attr returns the value of an attribute. If you need to check for the existence of an attribute, use Attribute. | |
c13498 | // Search for nodes that match an XPath. This is the simplest way to look for nodes. | |
c13499 | // Evaluate an XPath and coerce the result to a boolean according to the XPath rules. In the presence of an error, this function will return false even if the expression cannot actually be evaluated. In most cases you are better advised to call EvalXPath; this function is intended for packages that implemen... | |