docstring | function | __index_level_0__ |
|---|---|---|
Sets cookie(s) as provided by the query string and redirects to cookie list.
---
tags:
- Cookies
parameters:
- in: query
name: freeform
explode: true
allowEmptyValue: true
schema:
type: object
additionalProperties:
type: string
... | def set_cookies():
cookies = dict(request.args.items())
r = app.make_response(redirect(url_for("view_cookies")))
for key, value in cookies.items():
r.set_cookie(key=key, value=value, secure=secure_cookie())
return r | 114,862 |
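The two cookie handlers above rely on Flask's `request`, `make_response`, and `redirect`. The same query-string-to-`Set-Cookie` flow can be sketched with only the standard library; the helper name `cookies_from_query` is invented here for illustration:

```python
from http.cookies import SimpleCookie
from urllib.parse import parse_qsl

def cookies_from_query(query_string, secure=False):
    """Turn a query string into a list of Set-Cookie header values,
    mirroring the set_cookies flow above (sketch, not the real handler)."""
    jar = SimpleCookie()
    for key, value in parse_qsl(query_string, keep_blank_values=True):
        jar[key] = value
        if secure:
            jar[key]["secure"] = True
    # one Set-Cookie header value per morsel
    return [morsel.OutputString() for morsel in jar.values()]
```

A framework response object would then attach each returned string as its own `Set-Cookie` header before redirecting.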
Deletes cookie(s) as provided by the query string and redirects to cookie list.
---
tags:
- Cookies
parameters:
- in: query
name: freeform
explode: true
allowEmptyValue: true
schema:
type: object
additionalProperties:
type: string
... | def delete_cookies():
cookies = dict(request.args.items())
r = app.make_response(redirect(url_for("view_cookies")))
for key, value in cookies.items():
r.delete_cookie(key=key)
return r | 114,863 |
Prompts the user for authorization using HTTP Basic Auth.
---
tags:
- Auth
parameters:
- in: path
name: user
type: string
- in: path
name: passwd
type: string
produces:
- application/json
responses:
200:
description: Successful aut... | def basic_auth(user="user", passwd="passwd"):
if not check_basic_auth(user, passwd):
return status_code(401)
return jsonify(authenticated=True, user=user) | 114,864 |
Prompts the user for authorization using HTTP Basic Auth.
---
tags:
- Auth
parameters:
- in: path
name: user
type: string
- in: path
name: passwd
type: string
produces:
- application/json
responses:
200:
description: Successful aut... | def hidden_basic_auth(user="user", passwd="passwd"):
if not check_basic_auth(user, passwd):
return status_code(404)
return jsonify(authenticated=True, user=user) | 114,865 |
Prompts the user for authorization using bearer authentication.
---
tags:
- Auth
parameters:
- in: header
name: Authorization
schema:
type: string
produces:
- application/json
responses:
200:
description: Successful authentication.
401:
... | def bearer_auth():
authorization = request.headers.get("Authorization")
if not (authorization and authorization.startswith("Bearer ")):
response = app.make_response("")
response.headers["WWW-Authenticate"] = "Bearer"
response.status_code = 401
return response
slice_start... | 114,866 |
Returns a delayed response (max of 10 seconds).
---
tags:
- Dynamic data
parameters:
- in: path
name: delay
type: integer
produces:
- application/json
responses:
200:
description: A delayed response. | def delay_response(delay):
delay = min(float(delay), 10)
time.sleep(delay)
return jsonify(
get_dict("url", "args", "form", "data", "origin", "headers", "files")
) | 114,869 |
Returns a 304 if an If-Modified-Since or If-None-Match header is present. Returns the same as a GET otherwise.
---
tags:
- Response inspection
parameters:
- in: header
name: If-Modified-Since
- in: header
name: If-None-Match
produces:
- application/json
respon... | def cache():
is_conditional = request.headers.get("If-Modified-Since") or request.headers.get(
"If-None-Match"
)
if is_conditional is None:
response = view_get()
response.headers["Last-Modified"] = http_date()
response.headers["ETag"] = uuid.uuid4().hex
return r... | 114,871 |
Assumes the resource has the given etag and responds to If-None-Match and If-Match headers appropriately.
---
tags:
- Response inspection
parameters:
- in: header
name: If-None-Match
- in: header
name: If-Match
produces:
- application/json
responses:
200... | def etag(etag):
if_none_match = parse_multi_value_header(request.headers.get("If-None-Match"))
if_match = parse_multi_value_header(request.headers.get("If-Match"))
if if_none_match:
if etag in if_none_match or "*" in if_none_match:
response = status_code(304)
response.h... | 114,872 |
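The `etag` handler above calls `parse_multi_value_header`, which is not shown in any row. A plausible stand-in that splits a comma-separated validator list and strips weak markers and quotes might look like this (the exact behavior of the real helper is an assumption):

```python
import re

def parse_multi_value_header(header_str):
    """Parse a header like If-None-Match ('W/"abc", "def", *') into a
    list of bare validator values; weak markers (W/) and surrounding
    quotes are stripped."""
    if not header_str:
        return []
    values = []
    for part in header_str.split(","):
        # drop a weak-validator prefix, then surrounding quotes
        part = re.sub(r'^W/', '', part.strip()).strip('"')
        if part:
            values.append(part)
    return values
```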
Returns n random bytes generated with given seed
---
tags:
- Dynamic data
parameters:
- in: path
name: n
type: integer
produces:
- application/octet-stream
responses:
200:
description: Bytes. | def random_bytes(n):
n = min(n, 100 * 1024) # set 100KB limit
params = CaseInsensitiveDict(request.args.items())
if "seed" in params:
random.seed(int(params["seed"]))
response = make_response()
# Note: can't just use os.urandom here because it ignores the seed
response.data = b... | 114,873 |
Streams n random bytes generated with given seed, at given chunk size per packet.
---
tags:
- Dynamic data
parameters:
- in: path
name: n
type: integer
produces:
- application/octet-stream
responses:
200:
description: Bytes. | def stream_random_bytes(n):
n = min(n, 100 * 1024) # set 100KB limit
params = CaseInsensitiveDict(request.args.items())
if "seed" in params:
random.seed(int(params["seed"]))
if "chunk_size" in params:
chunk_size = max(1, int(params["chunk_size"]))
else:
chunk_size = 1... | 114,874 |
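The `stream_random_bytes` row is cut off after the chunk-size handling. A stdlib-only sketch of the whole flow (parameter names and the 100KB limit follow the truncated code; the generator shape is an assumption):

```python
import random

def stream_random_bytes(n, seed=None, chunk_size=10 * 1024):
    """Yield n seeded pseudo-random bytes in chunks of chunk_size,
    mirroring the truncated handler above (sketch)."""
    n = min(n, 100 * 1024)  # set 100KB limit
    rng = random.Random(seed)  # local RNG so global state is untouched
    remaining = n
    while remaining > 0:
        size = min(max(1, chunk_size), remaining)
        yield bytes(rng.randrange(0, 256) for _ in range(size))
        remaining -= size
```

Because the RNG is seeded once up front, the concatenated output is independent of the chunk size chosen.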
Streams n random bytes generated with given seed, at given chunk size per packet.
---
tags:
- Dynamic data
parameters:
- in: path
name: numbytes
type: integer
produces:
- application/octet-stream
responses:
200:
description: Bytes. | def range_request(numbytes):
if numbytes <= 0 or numbytes > (100 * 1024):
response = Response(
headers={"ETag": "range%d" % numbytes, "Accept-Ranges": "bytes"}
)
response.status_code = 404
response.data = "number of bytes must be in the range (0, 102400]"
re... | 114,875 |
Generate a page containing n links to other pages which do the same.
---
tags:
- Dynamic data
parameters:
- in: path
name: n
type: integer
- in: path
name: offset
type: integer
produces:
- text/html
responses:
200:
description: HTML links... | def link_page(n, offset):
n = min(max(1, n), 200) # limit to between 1 and 200 links
link = "<a href='{0}'>{1}</a> "
html = ["<html><head><title>Links</title></head><body>"]
for i in range(n):
if i == offset:
html.append("{0} ".format(i))
else:
html.appen... | 114,876 |
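The `link_page` row is truncated mid-loop. A completed sketch of the same page builder (the `/links/{n}/{i}` URL pattern is hypothetical; the clamping and "current index rendered as plain text" behavior follow the visible code):

```python
def link_page(n, offset):
    """Build an HTML page with n links, rendering the current offset
    as plain text instead of a link (sketch of the row above)."""
    n = min(max(1, n), 200)  # limit to between 1 and 200 links
    link = "<a href='/links/{0}/{1}'>{1}</a> "
    html = ["<html><head><title>Links</title></head><body>"]
    for i in range(n):
        if i == offset:
            html.append("{0} ".format(i))
        else:
            html.append(link.format(n, i))
    html.append("</body></html>")
    return "".join(html)
```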
Add a metric to the metric family.
Args:
labels: A list of label values
value: The value of the metric
created: Optional unix timestamp the child was created at. | def add_metric(self, labels, value, created=None, timestamp=None):
self.samples.append(Sample(self.name + '_total', dict(zip(self._labelnames, labels)), value, timestamp))
if created is not None:
self.samples.append(Sample(self.name + '_created', dict(zip(self._labelnames, labels)),... | 115,005 |
Add a metric to the metric family.
Args:
labels: A list of label values
count_value: The count value of the metric.
sum_value: The sum value of the metric. | def add_metric(self, labels, count_value, sum_value, timestamp=None):
self.samples.append(Sample(self.name + '_count', dict(zip(self._labelnames, labels)), count_value, timestamp))
self.samples.append(Sample(self.name + '_sum', dict(zip(self._labelnames, labels)), sum_value, timestamp)) | 115,007 |
Add a metric to the metric family.
Args:
labels: A list of label values
buckets: A list of lists.
Each inner list can be a pair of bucket name and value,
or a triple of bucket name, value, and exemplar.
The buckets must be sorted, and +Inf present.
... | def add_metric(self, labels, buckets, sum_value, timestamp=None):
for b in buckets:
bucket, value = b[:2]
exemplar = None
if len(b) == 3:
exemplar = b[2]
self.samples.append(Sample(
self.name + '_bucket',
di... | 115,008 |
Add a metric to the metric family.
Args:
labels: A list of label values
buckets: A list of pairs of bucket names and values.
The buckets must be sorted, and +Inf present.
gsum_value: The sum value of the metric. | def add_metric(self, labels, buckets, gsum_value, timestamp=None):
for bucket, value in buckets:
self.samples.append(Sample(
self.name + '_bucket',
dict(list(zip(self._labelnames, labels)) + [('le', bucket)]),
value, timestamp))
# +Inf... | 115,009 |
Add a metric to the metric family.
Args:
labels: A list of label values
value: A dict of labels | def add_metric(self, labels, value, timestamp=None):
self.samples.append(Sample(
self.name + '_info',
dict(dict(zip(self._labelnames, labels)), **value),
1,
timestamp,
)) | 115,010 |
Add a metric to the metric family.
Args:
labels: A list of label values
value: A dict of string state names to booleans | def add_metric(self, labels, value, timestamp=None):
labels = tuple(labels)
for state, enabled in sorted(value.items()):
v = (1 if enabled else 0)
self.samples.append(Sample(
self.name,
dict(zip(self._labelnames + (self.name,), labels + (s... | 115,011 |
Create a TREC dataset instance given a path and fields.
Arguments:
path: Path to the data file.
text_field: The field that will be used for text data.
label_field: The field that will be used for label data.
fine_grained: Whether to use the fine-grained (50-clas... | def __init__(self, path, text_field, label_field,
fine_grained=False, **kwargs):
fields = [('text', text_field), ('label', label_field)]
examples = []
def get_label_str(label):
return label.split(':')[0] if not fine_grained else label
label_field.pr... | 115,686 |
Create a TranslationDataset given paths and fields.
Arguments:
path: Common prefix of paths to the data files for both languages.
exts: A tuple containing the extension to path for each language.
fields: A tuple containing the fields that will be used for data
... | def __init__(self, path, exts, fields, **kwargs):
if not isinstance(fields[0], (tuple, list)):
fields = [('src', fields[0]), ('trg', fields[1])]
src_path, trg_path = tuple(os.path.expanduser(path + x) for x in exts)
examples = []
with io.open(src_path, mode='r', en... | 115,721 |
Process a list of examples to create a batch.
Postprocess the batch with user-provided Pipeline.
Args:
batch (list(object)): A list of object from a batch of examples.
Returns:
object: Processed object given the input and custom
postprocessing Pipeline. | def process(self, batch, *args, **kwargs):
if self.postprocessing is not None:
batch = self.postprocessing(batch)
return batch | 115,727 |
Process a list of examples to create a torch.Tensor.
Pad, numericalize, and postprocess a batch and create a tensor.
Args:
batch (list(object)): A list of object from a batch of examples.
Returns:
torch.autograd.Variable: Processed object given the input
and... | def process(self, batch, device=None):
padded = self.pad(batch)
tensor = self.numericalize(padded, device=device)
return tensor | 115,733 |
Segment one or more datasets with this subword field.
Arguments:
Positional arguments: Dataset objects or other indexable
mutable sequences to segment. If a Dataset object is provided,
all columns corresponding to this field are used; individual
colum... | def segment(self, *args):
sources = []
for arg in args:
if isinstance(arg, Dataset):
sources += [getattr(arg, name) for name, field in
arg.fields.items() if field is self]
else:
sources.append(arg)
for d... | 115,740 |
Preprocess a single example.
First, tokenization and the supplied preprocessing pipeline are applied. Since
this field is always sequential, the result is a list. Then, each element of
the list is preprocessed using ``self.nesting_field.preprocess`` and the resulting
list is returned.
... | def preprocess(self, xs):
return [self.nesting_field.preprocess(x)
for x in super(NestedField, self).preprocess(xs)] | 115,742 |
Create Iterator objects for multiple splits of a dataset.
Arguments:
datasets: Tuple of Dataset objects corresponding to the splits. The
first such object should be the train set.
batch_sizes: Tuple of batch sizes to use for the different splits,
or None ... | def splits(cls, datasets, batch_sizes=None, **kwargs):
if batch_sizes is None:
batch_sizes = [kwargs.pop('batch_size')] * len(datasets)
ret = []
for i in range(len(datasets)):
train = i == 0
ret.append(cls(
datasets[i], batch_size=batc... | 115,750 |
Create an IMDB dataset instance given a path and fields.
Arguments:
path: Path to the dataset's highest level directory
text_field: The field that will be used for text data.
label_field: The field that will be used for label data.
Remaining keyword arguments: Pa... | def __init__(self, path, text_field, label_field, **kwargs):
fields = [('text', text_field), ('label', label_field)]
examples = []
for label in ['pos', 'neg']:
for fname in glob.iglob(os.path.join(path, label, '*.txt')):
with io.open(fname, 'r', encoding="ut... | 115,771 |
Create a LanguageModelingDataset given a path and a field.
Arguments:
path: Path to the data file.
text_field: The field that will be used for text data.
newline_eos: Whether to add an <eos> token for every newline in the
data file. Default: True.
... | def __init__(self, path, text_field, newline_eos=True,
encoding='utf-8', **kwargs):
fields = [('text', text_field)]
text = []
with io.open(path, encoding=encoding) as f:
for line in f:
text += text_field.preprocess(line)
if ne... | 115,778 |
Create a pipeline.
Arguments:
convert_token: The function to apply to input sequence data.
If None, the identity function is used. Default: None | def __init__(self, convert_token=None):
if convert_token is None:
self.convert_token = Pipeline.identity
elif callable(convert_token):
self.convert_token = convert_token
else:
raise ValueError("Pipeline input convert_token {} is not None "
... | 115,781 |
Apply the current Pipeline(s) to an input.
Arguments:
x: The input to process with the Pipeline(s).
Positional arguments: Forwarded to the `call` function
of the Pipeline(s). | def __call__(self, x, *args):
for pipe in self.pipes:
x = pipe.call(x, *args)
return x | 115,782 |
Apply _only_ the convert_token function of the current pipeline
to the input. If the input is a list, a list with the results of
applying the `convert_token` function to all input elements is
returned.
Arguments:
x: The input to apply the convert_token function to.
... | def call(self, x, *args):
if isinstance(x, list):
return [self.convert_token(tok, *args) for tok in x]
return self.convert_token(x, *args) | 115,783 |
Add a Pipeline to be applied before this processing pipeline.
Arguments:
pipeline: The Pipeline or callable to apply before this
Pipeline. | def add_before(self, pipeline):
if not isinstance(pipeline, Pipeline):
pipeline = Pipeline(pipeline)
self.pipes = pipeline.pipes[:] + self.pipes[:]
return self | 115,784 |
Add a Pipeline to be applied after this processing pipeline.
Arguments:
pipeline: The Pipeline or callable to apply after this
Pipeline. | def add_after(self, pipeline):
if not isinstance(pipeline, Pipeline):
pipeline = Pipeline(pipeline)
self.pipes = self.pipes[:] + pipeline.pipes[:]
return self | 115,785 |
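The five Pipeline rows above (`__init__`, `__call__`, `call`, `add_before`, `add_after`) compose into one small class. A minimal stdlib-only re-implementation for illustration, not the torchtext original:

```python
class Pipeline(object):
    """Chainable token-processing pipeline (sketch of the rows above)."""

    def __init__(self, convert_token=None):
        if convert_token is None:
            self.convert_token = Pipeline.identity
        elif callable(convert_token):
            self.convert_token = convert_token
        else:
            raise ValueError(
                "Pipeline input convert_token {} is not None "
                "or callable".format(convert_token))
        self.pipes = [self]

    def __call__(self, x, *args):
        # apply every pipe in order
        for pipe in self.pipes:
            x = pipe.call(x, *args)
        return x

    def call(self, x, *args):
        # lists are converted element-wise
        if isinstance(x, list):
            return [self.convert_token(tok, *args) for tok in x]
        return self.convert_token(x, *args)

    def add_before(self, pipeline):
        if not isinstance(pipeline, Pipeline):
            pipeline = Pipeline(pipeline)
        self.pipes = pipeline.pipes[:] + self.pipes[:]
        return self

    def add_after(self, pipeline):
        if not isinstance(pipeline, Pipeline):
            pipeline = Pipeline(pipeline)
        self.pipes = self.pipes[:] + pipeline.pipes[:]
        return self

    @staticmethod
    def identity(x, *args):
        return x
```

For example, `Pipeline(str.lower).add_after(str.strip)` lowercases and then strips each input.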
Create a dataset from a list of Examples and Fields.
Arguments:
examples: List of Examples.
fields (List(tuple(str, Field))): The Fields to use in this tuple. The
string is a field name, and the Field is the associated field.
filter_pred (callable or None): U... | def __init__(self, examples, fields, filter_pred=None):
if filter_pred is not None:
make_list = isinstance(examples, list)
examples = filter(filter_pred, examples)
if make_list:
examples = list(examples)
self.examples = examples
self.f... | 115,789 |
Download and unzip an online archive (.zip, .gz, or .tgz).
Arguments:
root (str): Folder to download data to.
check (str or None): Folder whose existence indicates
that the dataset has already been downloaded, or
None to check the existence of root/{cls.n... | def download(cls, root, check=None):
path = os.path.join(root, cls.name)
check = path if check is None else check
if not os.path.isdir(check):
for url in cls.urls:
if isinstance(url, tuple):
url, filename = url
else:
... | 115,792 |
Remove unknown words from dataset examples with respect to given field.
Arguments:
field_names (list(str)): Only the parts of each example whose field names are in
field_names will have their unknown words deleted. | def filter_examples(self, field_names):
for i, example in enumerate(self.examples):
for field_name in field_names:
vocab = set(self.fields[field_name].vocab.stoi)
text = getattr(example, field_name)
example_part = [word for word in text if wor... | 115,793 |
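The `filter_examples` row is truncated inside its list comprehension. A self-contained sketch of the same idea, using plain dicts in place of torchtext `Example` objects and a set in place of `vocab.stoi` (both substitutions are for illustration):

```python
def filter_examples(examples, fields_vocab, field_names):
    """For each named field, drop tokens not present in that field's
    vocabulary (sketch of the truncated method above)."""
    for example in examples:
        for field_name in field_names:
            vocab = fields_vocab[field_name]  # a set of known words
            text = example[field_name]
            example[field_name] = [word for word in text if word in vocab]
    return examples
```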
CIFAR-10 dataset and TF model constructor.
Args:
batch_size: dataset batch size. | def __init__(self, batch_size=8, data_dir=None):
self._train_data, self._train_labels = None, None
self._test_data, self._test_labels = None, None
self._batch_size = batch_size
self.img_size = IMAGE_SIZE
self.num_channels = NUM_CHANNELS
self.num_classes = NUM_CLA... | 115,999 |
Load the data in memory.
Args:
dataset: string in ['train', 'test'] | def _load(self, dataset='train'):
data, labels = None, None
if dataset == 'train':
files = [os.path.join(self.cifar10_dir, 'data_batch_%d' % i) for i in range(1, 6)]
else:
files = [os.path.join(self.cifar10_dir, 'test_batch')]
for file in files:
... | 116,000 |
Build a simple convnet (BN before ReLU).
Args:
inputs: a tensor of size [batch_size, height, width, channels]
mode: string in ['train', 'test']
Returns:
the last op containing the predictions
Note:
Best score
Step: 7015 - Epoch: 18/20 ... | def model(self, inputs, mode='train'):
# Extract features
training = (mode == 'train')
with tf.variable_scope('conv1') as scope:
conv = tf.layers.conv2d(inputs=inputs, filters=16, kernel_size=[3, 3], padding='SAME')
bn = tf.layers.batch_normalization(inputs=conv,... | 116,001 |
Look up and set offsets for any partitions which are awaiting an
explicit reset.
Arguments:
partitions (set of TopicPartitions): the partitions to reset | def reset_offsets_if_needed(self, partitions):
for tp in partitions:
# TODO: If there are several offsets to reset, we could submit offset requests in parallel
if self._subscriptions.is_assigned(tp) and self._subscriptions.is_offset_reset_needed(tp):
self._reset_... | 116,018 |
Update the fetch positions for the provided partitions.
Arguments:
partitions (list of TopicPartitions): partitions to update
Raises:
NoOffsetForPartitionError: if no offset is stored for a given
partition and no reset policy is available | def update_fetch_positions(self, partitions):
# reset the fetch position to the committed position
for tp in partitions:
if not self._subscriptions.is_assigned(tp):
log.warning("partition %s is not assigned - skipping offset"
" update", tp... | 116,020 |
Reset offsets for the given partition using the offset reset strategy.
Arguments:
partition (TopicPartition): the partition that needs reset offset
Raises:
NoOffsetForPartitionError: if no offset reset strategy is defined | def _reset_offset(self, partition):
timestamp = self._subscriptions.assignment[partition].reset_strategy
if timestamp is OffsetResetStrategy.EARLIEST:
strategy = 'earliest'
elif timestamp is OffsetResetStrategy.LATEST:
strategy = 'latest'
else:
... | 116,025 |
Fetch offsets for each partition in timestamps dict. This may send
requests to multiple nodes, based on which node is the leader for each partition.
Arguments:
timestamps (dict): {TopicPartition: int} mapping of fetching
timestamps.
Returns:
Future: resolves to a mapping... | def _send_offset_requests(self, timestamps):
timestamps_by_node = collections.defaultdict(dict)
for partition, timestamp in six.iteritems(timestamps):
node_id = self._client.cluster.leader_for_partition(partition)
if node_id is None:
self._client.add_topi... | 116,032 |
Callback for the response of the list offset call above.
Arguments:
future (Future): the future to update based on response
response (OffsetResponse): response from the server
Raises:
AssertionError: if response does not match partition | def _handle_offset_response(self, future, response):
timestamp_offset_map = {}
for topic, part_data in response.topics:
for partition_info in part_data:
partition, error_code = partition_info[:2]
partition = TopicPartition(topic, partition)
... | 116,034 |
Pure-python Murmur2 implementation.
Based on java client, see org.apache.kafka.common.utils.Utils.murmur2
Args:
data (bytes): opaque bytes
Returns: MurmurHash2 of data | def murmur2(data):
# Python2 bytes is really a str, causing the bitwise operations below to fail
# so convert to bytearray.
if six.PY2:
data = bytearray(bytes(data))
length = len(data)
seed = 0x9747b28c
# 'm' and 'r' are mixing constants generated offline.
# They're not really ... | 116,045 |
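The `murmur2` body is cut off above. It can be completed by following the Java reference the docstring cites (`org.apache.kafka.common.utils.Utils.murmur2`); this sketch emulates 32-bit unsigned arithmetic by masking with `0xffffffff` and returns the unsigned result:

```python
def murmur2(data):
    """Pure-python MurmurHash2 sketch following the Java reference."""
    length = len(data)
    seed = 0x9747b28c
    # 'm' and 'r' are mixing constants generated offline.
    m = 0x5bd1e995
    r = 24
    h = (seed ^ length) & 0xffffffff
    # mix 4 bytes at a time into the hash
    for i in range(length // 4):
        i4 = i * 4
        k = (data[i4] | (data[i4 + 1] << 8) |
             (data[i4 + 2] << 16) | (data[i4 + 3] << 24))
        k = (k * m) & 0xffffffff
        k ^= k >> r          # non-negative ints make >> an unsigned shift
        k = (k * m) & 0xffffffff
        h = (h * m) & 0xffffffff
        h ^= k
    # handle the last few bytes of the input (fall-through as in Java)
    tail = length & ~3
    extra = length % 4
    if extra >= 3:
        h ^= data[tail + 2] << 16
    if extra >= 2:
        h ^= data[tail + 1] << 8
    if extra >= 1:
        h ^= data[tail]
        h = (h * m) & 0xffffffff
    # final avalanche
    h ^= h >> 13
    h = (h * m) & 0xffffffff
    h ^= h >> 15
    return h
```

Operating on `bytes` directly sidesteps the Python 2 `bytearray` conversion the original needs.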
Perform leader synchronization and send back the assignment
for the group via SyncGroupRequest
Arguments:
response (JoinResponse): broker response to parse
Returns:
Future: resolves to member assignment encoded-bytes | def _on_join_leader(self, response):
try:
group_assignment = self._perform_assignment(response.leader_id,
response.group_protocol,
response.members)
except Exception as e:... | 116,100 |
Create a metrics repository with a default config, given metric
reporters and the ability to expire eligible sensors
Arguments:
default_config (MetricConfig, optional): The default config
reporters (list of AbstractMetricsReporter, optional):
The metrics reporter... | def __init__(self, default_config=None, reporters=None,
enable_expiration=False):
self._lock = threading.RLock()
self._config = default_config or MetricConfig()
self._sensors = {}
self._metrics = {}
self._children_sensors = {}
self._reporters = r... | 116,138 |
Remove a sensor (if it exists), associated metrics and its children.
Arguments:
name (str): The name of the sensor to be removed | def remove_sensor(self, name):
sensor = self._sensors.get(name)
if sensor:
child_sensors = None
with sensor._lock:
with self._lock:
val = self._sensors.pop(name, None)
if val and val == sensor:
... | 116,141 |
Add a metric to monitor an object that implements measurable.
This metric won't be associated with any sensor.
This is a way to expose existing values as metrics.
Arguments:
metricName (MetricName): The name of the metric
measurable (AbstractMeasurable): The measurable t... | def add_metric(self, metric_name, measurable, config=None):
# NOTE there was a lock here, but i don't think it's needed
metric = KafkaMetric(metric_name, measurable, config or self.config)
self.register_metric(metric) | 116,142 |
Remove a metric if it exists and return it. Return None otherwise.
If a metric is removed, `metric_removal` will be invoked
for each reporter.
Arguments:
metric_name (MetricName): The name of the metric
Returns:
KafkaMetric: the removed `KafkaMetric` or None if ... | def remove_metric(self, metric_name):
with self._lock:
metric = self._metrics.pop(metric_name, None)
if metric:
for reporter in self._reporters:
reporter.metric_removal(metric)
return metric | 116,143 |
Construct a Snappy Message containing multiple Messages
The given payloads will be encoded, compressed, and sent as a single atomic
message to Kafka.
Arguments:
payloads: list(bytes), a list of payloads to be sent to Kafka
key: bytes, a key used for partition routing (optional) | def create_snappy_message(payloads, key=None):
message_set = KafkaProtocol._encode_message_set(
[create_message(payload, pl_key) for payload, pl_key in payloads])
snapped = snappy_encode(message_set)
codec = ATTRIBUTE_CODEC_MASK & CODEC_SNAPPY
return kafka.structs.Message(0, 0x00 | codec,... | 116,161 |
Encode a ProduceRequest struct
Arguments:
payloads: list of ProduceRequestPayload
acks: How "acky" you want the request to be
1: written to disk by the leader
0: immediate response
-1: waits for all replicas to be in sync
timeo... | def encode_produce_request(cls, payloads=(), acks=1, timeout=1000):
if acks not in (1, 0, -1):
raise ValueError('ProduceRequest acks (%s) must be 1, 0, -1' % acks)
topics = []
for topic, topic_payloads in group_by_topic_and_partition(payloads).items():
topic_msg... | 116,165 |
Decode ProduceResponse to ProduceResponsePayload
Arguments:
response: ProduceResponse
Return: list of ProduceResponsePayload | def decode_produce_response(cls, response):
return [
kafka.structs.ProduceResponsePayload(topic, partition, error, offset)
for topic, partitions in response.topics
for partition, error, offset in partitions
] | 116,166 |
Encodes a FetchRequest struct
Arguments:
payloads: list of FetchRequestPayload
max_wait_time (int, optional): ms to block waiting for min_bytes
data. Defaults to 100.
min_bytes (int, optional): minimum bytes required to return before
max_wait_... | def encode_fetch_request(cls, payloads=(), max_wait_time=100, min_bytes=4096):
return kafka.protocol.fetch.FetchRequest[0](
replica_id=-1,
max_wait_time=max_wait_time,
min_bytes=min_bytes,
topics=[(
topic,
[(
... | 116,167 |
Decode FetchResponse struct to FetchResponsePayloads
Arguments:
response: FetchResponse | def decode_fetch_response(cls, response):
return [
kafka.structs.FetchResponsePayload(
topic, partition, error, highwater_offset, [
offset_and_msg
for offset_and_msg in cls.decode_message_set(messages)])
for topic, partitio... | 116,168 |
Decode OffsetResponse into OffsetResponsePayloads
Arguments:
response: OffsetResponse
Returns: list of OffsetResponsePayloads | def decode_offset_response(cls, response):
return [
kafka.structs.OffsetResponsePayload(topic, partition, error, tuple(offsets))
for topic, partitions in response.topics
for partition, error, offsets in partitions
] | 116,171 |
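Several of the decoders above share one shape: flatten a nested (topic → partitions) response into a list of payload tuples with a double comprehension. A self-contained illustration using a `namedtuple` stand-in for `kafka.structs.OffsetResponsePayload`:

```python
from collections import namedtuple

# stand-in for kafka.structs.OffsetResponsePayload (illustrative only)
OffsetResponsePayload = namedtuple(
    "OffsetResponsePayload", ["topic", "partition", "error", "offsets"])

def decode_offset_response(response_topics):
    """Flatten [(topic, [(partition, error, offsets), ...]), ...]
    into a flat list of payloads, as in the decoder above."""
    return [
        OffsetResponsePayload(topic, partition, error, tuple(offsets))
        for topic, partitions in response_topics
        for partition, error, offsets in partitions
    ]
```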
Decode OffsetResponse_v2 into ListOffsetResponsePayloads
Arguments:
response: OffsetResponse_v2
Returns: list of ListOffsetResponsePayloads | def decode_list_offset_response(cls, response):
return [
kafka.structs.ListOffsetResponsePayload(topic, partition, error, timestamp, offset)
for topic, partitions in response.topics
for partition, error, timestamp, offset in partitions
] | 116,172 |
Encode a MetadataRequest
Arguments:
topics: list of strings | def encode_metadata_request(cls, topics=(), payloads=None):
if payloads is not None:
topics = payloads
return kafka.protocol.metadata.MetadataRequest[0](topics) | 116,173 |
Encode a ConsumerMetadataRequest
Arguments:
client_id: string
correlation_id: int
payloads: string (consumer group) | def encode_consumer_metadata_request(cls, client_id, correlation_id, payloads):
message = []
message.append(cls._encode_message_header(client_id, correlation_id,
KafkaProtocol.CONSUMER_METADATA_KEY))
message.append(struct.pack('>h%ds' % ... | 116,174 |
Decode bytes to a kafka.structs.ConsumerMetadataResponse
Arguments:
data: bytes to decode | def decode_consumer_metadata_response(cls, data):
((correlation_id, error, nodeId), cur) = relative_unpack('>ihi', data, 0)
(host, cur) = read_short_string(data, cur)
((port,), cur) = relative_unpack('>i', data, cur)
return kafka.structs.ConsumerMetadataResponse(error, nodeId, ... | 116,175 |
Encode an OffsetCommitRequest struct
Arguments:
group: string, the consumer group you are committing offsets for
payloads: list of OffsetCommitRequestPayload | def encode_offset_commit_request(cls, group, payloads):
return kafka.protocol.commit.OffsetCommitRequest[0](
consumer_group=group,
topics=[(
topic,
[(
partition,
payload.offset,
payload.m... | 116,176 |
Decode OffsetCommitResponse to an OffsetCommitResponsePayload
Arguments:
response: OffsetCommitResponse | def decode_offset_commit_response(cls, response):
return [
kafka.structs.OffsetCommitResponsePayload(topic, partition, error)
for topic, partitions in response.topics
for partition, error in partitions
] | 116,177 |
Encode an OffsetFetchRequest struct. The request is encoded using
version 0 if from_kafka is false, indicating a request for Zookeeper
offsets. It is encoded using version 1 otherwise, indicating a request
for Kafka offsets.
Arguments:
group: string, the consumer group you a... | def encode_offset_fetch_request(cls, group, payloads, from_kafka=False):
version = 1 if from_kafka else 0
return kafka.protocol.commit.OffsetFetchRequest[version](
consumer_group=group,
topics=[(
topic,
list(topic_payloads.keys()))
... | 116,178 |
Decode OffsetFetchResponse to OffsetFetchResponsePayloads
Arguments:
response: OffsetFetchResponse | def decode_offset_fetch_response(cls, response):
return [
kafka.structs.OffsetFetchResponsePayload(
topic, partition, offset, metadata, error
)
for topic, partitions in response.topics
for partition, offset, metadata, error in partitions
... | 116,179 |
Record a value at a known time.
Arguments:
value (double): The value we are recording
time_ms (int): A POSIX timestamp in milliseconds.
Default: The time when record() is evaluated (now)
Raises:
QuotaViolationException: if recording this value moves a... | def record(self, value=1.0, time_ms=None):
if time_ms is None:
time_ms = time.time() * 1000
self._last_record_time = time_ms
with self._lock: # XXX high volume, might be performance issue
# increment all the stats
for stat in self._stats:
... | 116,190 |
Register a compound statistic with this sensor which
yields multiple measurable quantities (like a histogram)
Arguments:
stat (AbstractCompoundStat): The stat to register
config (MetricConfig): The configuration for this stat.
If None then the stat will use the d... | def add_compound(self, compound_stat, config=None):
if not compound_stat:
raise ValueError('compound stat must be non-empty')
self._stats.append(compound_stat)
for named_measurable in compound_stat.stats():
metric = KafkaMetric(named_measurable.name, named_measur... | 116,192 |
Register a metric with this sensor
Arguments:
metric_name (MetricName): The name of the metric
stat (AbstractMeasurableStat): The statistic to keep
config (MetricConfig): A special configuration for this metric.
If None use the sensor default configuration. | def add(self, metric_name, stat, config=None):
with self._lock:
metric = KafkaMetric(metric_name, stat, config or self._config)
self._registry.register_metric(metric)
self._metrics.append(metric)
self._stats.append(stat) | 116,193 |
Fetch the current committed offsets for specified partitions
Arguments:
partitions (list of TopicPartition): partitions to fetch
Returns:
dict: {TopicPartition: OffsetAndMetadata} | def fetch_committed_offsets(self, partitions):
if not partitions:
return {}
while True:
self.ensure_coordinator_ready()
# contact coordinator to fetch committed offsets
future = self._send_offset_fetch_request(partitions)
self._clien... | 116,207 |
Commit specific offsets asynchronously.
Arguments:
offsets (dict {TopicPartition: OffsetAndMetadata}): what to commit
callback (callable, optional): called as callback(offsets, response)
response will be either an Exception or a OffsetCommitResponse
struc... | def commit_offsets_async(self, offsets, callback=None):
self._invoke_completed_offset_commit_callbacks()
if not self.coordinator_unknown():
future = self._do_commit_offsets_async(offsets, callback)
else:
# we don't know the current coordinator, so try to find it ... | 116,210 |
Commit specific offsets synchronously.
This method will retry until the commit completes successfully or an
unrecoverable error is encountered.
Arguments:
offsets (dict {TopicPartition: OffsetAndMetadata}): what to commit
Raises error on failure | def commit_offsets_sync(self, offsets):
assert self.config['api_version'] >= (0, 8, 1), 'Unsupported Broker API'
assert all(map(lambda k: isinstance(k, TopicPartition), offsets))
assert all(map(lambda v: isinstance(v, OffsetAndMetadata),
offsets.values()))
... | 116,212 |
Commit offsets for the specified list of topics and partitions.
This is a non-blocking call which returns a request future that can be
polled in the case of a synchronous commit or ignored in the
asynchronous case.
Arguments:
offsets (dict of {TopicPartition: OffsetAndMetad... | def _send_offset_commit_request(self, offsets):
assert self.config['api_version'] >= (0, 8, 1), 'Unsupported Broker API'
assert all(map(lambda k: isinstance(k, TopicPartition), offsets))
assert all(map(lambda v: isinstance(v, OffsetAndMetadata),
offsets.values()))... | 116,214 |
Fetch the committed offsets for a set of partitions.
This is a non-blocking call. The returned future can be polled to get
the actual offsets returned from the broker.
Arguments:
partitions (list of TopicPartition): the partitions to fetch
Returns:
Future: reso... | def _send_offset_fetch_request(self, partitions):
assert self.config['api_version'] >= (0, 8, 1), 'Unsupported Broker API'
assert all(map(lambda k: isinstance(k, TopicPartition), partitions))
if not partitions:
return Future().success({})
node_id = self.coordinator(... | 116,216 |
Change the topic subscription.
Arguments:
topics (list of str): topics for subscription
Raises:
IllegalStateError: if assign_from_user has been used already
TypeError: if a topic is None or a non-str
ValueError: if a topic is an empty string or
... | def change_subscription(self, topics):
if self._user_assignment:
raise IllegalStateError(self._SUBSCRIPTION_EXCEPTION_MESSAGE)
if isinstance(topics, six.string_types):
topics = [topics]
if self.subscription == set(topics):
log.warning("subscription ... | 116,225 |
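The validation steps in `change_subscription` can be sketched without the surrounding class; `state` here is a plain dict standing in for kafka-python's `SubscriptionState`:

```python
def change_subscription(state, topics):
    """Validate and apply a topic-subscription change (sketch)."""
    if state.get("user_assignment"):
        raise RuntimeError("assign_from_user() already used")
    if isinstance(topics, str):          # allow a bare topic name
        topics = [topics]
    for t in topics:
        if not isinstance(t, str):
            raise TypeError("topic must be a str: %r" % (t,))
        if not t:
            raise ValueError("topic must be a non-empty string")
    if state.get("subscription") == set(topics):
        return False                     # unchanged; caller may log a warning
    state["subscription"] = set(topics)
    return True

state = {"user_assignment": None, "subscription": set()}
assert change_subscription(state, "events") is True
assert change_subscription(state, ["events"]) is False   # no-op
```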
Add topics to the current group subscription.
This is used by the group leader to ensure that it receives metadata
updates for all topics that any member of the group is subscribed to.
Arguments:
topics (list of str): topics to add to the group subscription | def group_subscribe(self, topics):
if self._user_assignment:
raise IllegalStateError(self._SUBSCRIPTION_EXCEPTION_MESSAGE)
self._group_subscription.update(topics) | 116,226 |
Update the assignment to the specified partitions
This method is called by the coordinator to dynamically assign
partitions based on the consumer's topic subscription. This is different
from assign_from_user() which directly sets the assignment from a
user-supplied TopicPartition list.
... | def assign_from_subscribed(self, assignments):
if not self.partitions_auto_assigned():
raise IllegalStateError(self._SUBSCRIPTION_EXCEPTION_MESSAGE)
for tp in assignments:
if tp.topic not in self.subscription:
raise ValueError("Assigned partition %s for ... | 116,229 |
Mark partition for offset reset using specified or default strategy.
Arguments:
partition (TopicPartition): partition to mark
offset_reset_strategy (OffsetResetStrategy, optional) | def need_offset_reset(self, partition, offset_reset_strategy=None):
if offset_reset_strategy is None:
offset_reset_strategy = self._default_offset_reset_strategy
self.assignment[partition].await_reset(offset_reset_strategy) | 116,234 |
Close this producer.
Arguments:
timeout (float, optional): timeout in seconds to wait for completion. | def close(self, timeout=None):
# drop our atexit handler now to avoid leaks
self._unregister_cleanup()
if not hasattr(self, '_closed') or self._closed:
log.info('Kafka producer closed')
return
if timeout is None:
# threading.TIMEOUT_MAX is a... | 116,246 |
Wait for cluster metadata including partitions for the given topic to
be available.
Arguments:
topic (str): topic we want metadata for
max_wait (float): maximum time in secs for waiting on the metadata
Returns:
set: partition ids for the topic
Raise... | def _wait_on_metadata(self, topic, max_wait):
# add topic to metadata topic list if it is not there already.
self._sender.add_topic(topic)
begin = time.time()
elapsed = 0.0
metadata_event = None
while True:
partitions = self._metadata.partitions_for_t... | 116,252 |
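The metadata wait above is a poll-until-ready loop with a deadline. A generic sketch of that shape, without the real wakeup events or metadata refresh triggers (`check` is a hypothetical callable):

```python
import time

def wait_for(check, max_wait, poll_interval=0.01):
    """Poll `check()` until it returns a truthy value or `max_wait`
    seconds elapse; raises on expiry. A simplified stand-in for the
    producer's _wait_on_metadata loop."""
    begin = time.time()
    while True:
        result = check()
        if result:
            return result
        if time.time() - begin > max_wait:
            raise TimeoutError("no result after %.1fs" % max_wait)
        time.sleep(poll_interval)

calls = {"n": 0}
def partitions():
    calls["n"] += 1
    # pretend metadata arrives on the second poll
    return {0, 1, 2} if calls["n"] >= 2 else None

assert wait_for(partitions, max_wait=1.0, poll_interval=0) == {0, 1, 2}
```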
Update offsets using auto_offset_reset policy (smallest|largest)
Arguments:
partition (int): the partition for which offsets should be updated
Returns: Updated offset on success, None on failure | def reset_partition_offset(self, partition):
LATEST = -1
EARLIEST = -2
if self.auto_offset_reset == 'largest':
reqs = [OffsetRequestPayload(self.topic, partition, LATEST, 1)]
elif self.auto_offset_reset == 'smallest':
reqs = [OffsetRequestPayload(self.top... | 116,261 |
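The policy names map onto the sentinel offsets the broker understands; a tiny sketch of that mapping, using the same constants as the code above:

```python
LATEST = -1    # sentinel: fetch the next offset to be produced
EARLIEST = -2  # sentinel: fetch the oldest retained offset

def reset_target(auto_offset_reset):
    """Translate the auto_offset_reset policy name to a sentinel offset."""
    if auto_offset_reset == 'largest':
        return LATEST
    elif auto_offset_reset == 'smallest':
        return EARLIEST
    raise ValueError('Unknown auto_offset_reset: %s' % auto_offset_reset)

assert reset_target('largest') == -1
assert reset_target('smallest') == -2
```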
Encode an integer to a varint representation. See
https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints
for details on how these are produced.
Arguments:
num (int): Value to encode
Returns:
bytearray: Encoded representation of the integer, with length from 1 to 10
... | def encode_varint_1(num):
# Shift sign to the end of number
num = (num << 1) ^ (num >> 63)
# Max 10 bytes. We assert those are allocated
buf = bytearray(10)
for i in range(10):
# 7 lowest bits from the number and set 8th if we still have pending
# bits left to encode
bu... | 116,269 |
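The same zigzag-then-varint scheme can be sketched end to end in a few lines; this is a simplified stand-in for `encode_varint_1` (it grows the buffer instead of preallocating 10 bytes):

```python
def encode_varint(num):
    """Zigzag + varint encode a signed 64-bit integer (sketch)."""
    num = (num << 1) ^ (num >> 63)          # zigzag: sign moves to bit 0
    buf = bytearray()
    while True:
        bits = num & 0x7f                   # 7 lowest payload bits
        num >>= 7
        if num:
            buf.append(bits | 0x80)         # high bit set: more bytes follow
        else:
            buf.append(bits)
            return bytes(buf)

assert encode_varint(0) == b'\x00'
assert encode_varint(-1) == b'\x01'
assert encode_varint(1) == b'\x02'
assert encode_varint(300) == b'\xd8\x04'
```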
Decode an integer from a varint representation. See
https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints
for details on how these are produced.
Arguments:
buffer (bytes-like): any object acceptable by ``memoryview``
pos (int): optional position to read from
R... | def decode_varint_1(buffer, pos=0):
value = 0
shift = 0
memview = memoryview(buffer)
for i in range(pos, pos + 10):
try:
byte = _read_byte(memview, i)
except IndexError:
raise ValueError("End of byte stream")
if byte & 0x80 != 0:
value |= ... | 116,277 |
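The inverse direction follows the same structure: accumulate 7-bit groups until a byte arrives with the high bit clear, then undo the zigzag. A simplified stand-in for `decode_varint_1` (indexing bytes directly rather than through a memoryview):

```python
def decode_varint(buffer, pos=0):
    """Varint + zigzag decode; returns (value, new_position) (sketch)."""
    value = 0
    shift = 0
    for i in range(pos, pos + 10):          # a 64-bit varint is <= 10 bytes
        byte = buffer[i]
        value |= (byte & 0x7f) << shift
        if not byte & 0x80:                 # high bit clear: last byte
            # undo zigzag: even -> non-negative, odd -> negative
            return (value >> 1) ^ -(value & 1), i + 1
        shift += 7
    raise ValueError("varint longer than 10 bytes")

assert decode_varint(b'\x00') == (0, 1)
assert decode_varint(b'\x01') == (-1, 1)
assert decode_varint(b'\xd8\x04') == (300, 2)
```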
Update CRC-32C checksum with data.
Args:
crc: 32-bit checksum to update as long.
data: byte array, string or iterable over bytes.
Returns:
32-bit updated CRC-32C as long. | def crc_update(crc, data):
if type(data) != array.array or data.itemsize != 1:
buf = array.array("B", data)
else:
buf = data
crc = crc ^ _MASK
for b in buf:
table_index = (crc ^ b) & 0xff
crc = (CRC_TABLE[table_index] ^ (crc >> 8)) & _MASK
return crc ^ _MASK | 116,309 |
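The table-driven update loop above can be made self-contained by generating the CRC-32C (Castagnoli) lookup table from its reflected polynomial, `0x82F63B78`. A sketch, assuming that is what `CRC_TABLE` holds:

```python
def make_crc32c_table():
    """Build the 256-entry lookup table for reflected CRC-32C."""
    table = []
    for n in range(256):
        crc = n
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = make_crc32c_table()
_MASK = 0xFFFFFFFF

def crc32c(data, crc=0):
    """One-shot CRC-32C over bytes, following the update loop above."""
    crc ^= _MASK                              # invert on entry
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xff] ^ (crc >> 8)
    return crc ^ _MASK                        # invert on exit

# Standard check value for CRC-32C:
assert crc32c(b'123456789') == 0xE3069283
```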
Abort the batches that have been sitting in RecordAccumulator for
more than the configured request_timeout due to metadata being
unavailable.
Arguments:
request_timeout_ms (int): milliseconds to timeout
cluster (ClusterMetadata): current metadata for kafka cluster
... | def abort_expired_batches(self, request_timeout_ms, cluster):
expired_batches = []
to_remove = []
count = 0
for tp in list(self._batches.keys()):
assert tp in self._tp_locks, 'TopicPartition not in locks dict'
# We only check if the batch should be expir... | 116,318 |
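Stripped of the accumulator's locks and per-partition queues, the expiry pass is a split-by-age over batches. A simplified sketch where each batch is a hypothetical `(created_at, payload)` tuple:

```python
import time

def abort_expired(batches, request_timeout_s, now=None):
    """Split batches into (kept, expired) by age (sketch)."""
    now = time.time() if now is None else now
    kept, expired = [], []
    for created_at, payload in batches:
        if now - created_at > request_timeout_s:
            expired.append((created_at, payload))   # caller fails these
        else:
            kept.append((created_at, payload))
    return kept, expired

batches = [(100.0, "a"), (109.5, "b")]
kept, expired = abort_expired(batches, request_timeout_s=5, now=110.0)
assert expired == [(100.0, "a")]
assert kept == [(109.5, "b")]
```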
Close socket and fail all in-flight-requests.
Arguments:
error (Exception, optional): pending in-flight-requests
will be failed with this exception.
Default: kafka.errors.KafkaConnectionError. | def close(self, error=None):
if self.state is ConnectionStates.DISCONNECTED:
return
with self._lock:
if self.state is ConnectionStates.DISCONNECTED:
return
log.info('%s: Closing connection. %s', self, error or '')
self._update_reco... | 116,394 |
Complete or retry the given batch of records.
Arguments:
batch (RecordBatch): The record batch
error (Exception): The error (or None if none)
base_offset (int): The base offset assigned to the records if successful
timestamp_ms (int, optional): The timestamp retu... | def _complete_batch(self, batch, error, base_offset, timestamp_ms=None):
# Standardize no-error to None
if error is Errors.NoError:
error = None
if error is not None and self._can_retry(batch, error):
# retry
log.warning("Got error produce response o... | 116,421 |
Transfer the record batches into a list of produce requests on a
per-node basis.
Arguments:
collated: {node_id: [RecordBatch]}
Returns:
dict: {node_id: ProduceRequest} (version depends on api_version) | def _create_produce_requests(self, collated):
requests = {}
for node_id, batches in six.iteritems(collated):
requests[node_id] = self._produce_request(
node_id, self.config['acks'],
self.config['request_timeout_ms'], batches)
return requests | 116,423 |
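The `collated` input has the shape `{node_id: [RecordBatch]}`; building it is a group-by over partition leaders. A sketch of that grouping step, with batches reduced to hypothetical `(node_id, batch)` pairs:

```python
from collections import defaultdict

def collate_by_node(batches_with_leader):
    """Group record batches by the node leading their partition (sketch).
    Returns the {node_id: [batch, ...]} shape consumed above."""
    collated = defaultdict(list)
    for node_id, batch in batches_with_leader:
        collated[node_id].append(batch)
    return dict(collated)

collated = collate_by_node([(1, "b1"), (2, "b2"), (1, "b3")])
assert collated == {1: ["b1", "b3"], 2: ["b2"]}
```

One produce request per node then covers every batch headed to that node, which is what keeps the request count proportional to brokers rather than partitions.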
Get BrokerMetadata
Arguments:
broker_id (int): node_id for a broker to check
Returns:
BrokerMetadata or None if not found | def broker_metadata(self, broker_id):
return self._brokers.get(broker_id) or self._bootstrap_brokers.get(broker_id) | 116,439 |
Return set of all partitions for topic (whether available or not)
Arguments:
topic (str): topic to check for partitions
Returns:
set: {partition (int), ...} | def partitions_for_topic(self, topic):
if topic not in self._partitions:
return None
return set(self._partitions[topic].keys()) | 116,440 |
Return set of partitions with known leaders
Arguments:
topic (str): topic to check for partitions
Returns:
set: {partition (int), ...}
None if topic not found. | def available_partitions_for_topic(self, topic):
if topic not in self._partitions:
return None
return set([partition for partition, metadata
in six.iteritems(self._partitions[topic])
if metadata.leader != -1]) | 116,441 |
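The leader filter above reduces to a set comprehension over partition metadata; a sketch with the metadata flattened to a hypothetical `{partition: leader_node_id}` dict, where `-1` marks a leaderless partition:

```python
def available_partitions(partition_leaders):
    """Return the set of partitions whose leader is known (sketch)."""
    return {p for p, leader in partition_leaders.items() if leader != -1}

assert available_partitions({0: 3, 1: -1, 2: 5}) == {0, 2}
```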
Get set of known topics.
Arguments:
exclude_internal_topics (bool): Whether records from internal topics
(such as offsets) should be exposed to the consumer. If set to
True the only way to receive records from an internal topic is
subscribing to it. D... | def topics(self, exclude_internal_topics=True):
topics = set(self._partitions.keys())
if exclude_internal_topics:
return topics - self.internal_topics
else:
return topics | 116,445 |
Update cluster state given a MetadataResponse.
Arguments:
metadata (MetadataResponse): broker response to a metadata request
Returns: None | def update_metadata(self, metadata):
# In the common case where we ask for a single topic and get back an
# error, we should fail the future
if len(metadata.topics) == 1 and metadata.topics[0][0] != 0:
error_code, topic = metadata.topics[0][:2]
error = Errors.for... | 116,447 |
Update with metadata for a group coordinator
Arguments:
group (str): name of group from GroupCoordinatorRequest
response (GroupCoordinatorResponse): broker response
Returns:
bool: True if metadata is updated, False on error | def add_group_coordinator(self, group, response):
log.debug("Updating coordinator for %s: %s", group, response)
error_type = Errors.for_code(response.error_code)
if error_type is not Errors.NoError:
log.error("GroupCoordinatorResponse error: %s", error_type)
self... | 116,448 |
Do one round of polling. In addition to checking for new data, this does
any needed heart-beating, auto-commits, and offset updates.
Arguments:
timeout_ms (int): The maximum time in milliseconds to block.
Returns:
dict: Map of topic to list of records (may be empty). | def _poll_once(self, timeout_ms, max_records):
self._coordinator.poll()
# Fetch positions if we have partitions we're subscribed to that we
# don't know the offset for
if not self._subscription.has_all_fetch_positions():
self._update_fetch_positions(self._subscripti... | 116,459 |
Get the offset of the next record that will be fetched
Arguments:
partition (TopicPartition): Partition to check
Returns:
int: Offset | def position(self, partition):
if not isinstance(partition, TopicPartition):
raise TypeError('partition must be a TopicPartition namedtuple')
assert self._subscription.is_assigned(partition), 'Partition is not assigned'
offset = self._subscription.assignment[partition].posit... | 116,460 |
Suspend fetching from the requested partitions.
Future calls to :meth:`~kafka.KafkaConsumer.poll` will not return any
records from these partitions until they have been resumed using
:meth:`~kafka.KafkaConsumer.resume`.
Note: This method does not affect partition subscription. In parti... | def pause(self, *partitions):
if not all([isinstance(p, TopicPartition) for p in partitions]):
raise TypeError('partitions must be TopicPartition namedtuples')
for partition in partitions:
log.debug("Pausing partition %s", partition)
self._subscription.pause(... | 116,462 |
Seek to the oldest available offset for partitions.
Arguments:
*partitions: Optionally provide specific TopicPartitions, otherwise
default to all assigned partitions.
Raises:
AssertionError: If any partition is not currently assigned, or if
no pa... | def seek_to_beginning(self, *partitions):
if not all([isinstance(p, TopicPartition) for p in partitions]):
raise TypeError('partitions must be TopicPartition namedtuples')
if not partitions:
partitions = self._subscription.assigned_partitions()
assert partiti... | 116,464 |
Seek to the most recent available offset for partitions.
Arguments:
*partitions: Optionally provide specific TopicPartitions, otherwise
default to all assigned partitions.
Raises:
AssertionError: If any partition is not currently assigned, or if
... | def seek_to_end(self, *partitions):
if not all([isinstance(p, TopicPartition) for p in partitions]):
raise TypeError('partitions must be TopicPartition namedtuples')
if not partitions:
partitions = self._subscription.assigned_partitions()
assert partitions, '... | 116,465 |
Set the fetch position to the committed position (if there is one)
or reset it using the offset reset policy the user has configured.
Arguments:
partitions (List[TopicPartition]): The partitions that need
updating fetch positions.
Raises:
NoOffsetForPart... | def _update_fetch_positions(self, partitions):
# Lookup any positions for partitions which are awaiting reset (which may be the
# case if the user called :meth:`seek_to_beginning` or :meth:`seek_to_end`). We do
# this check first to avoid an unnecessary lookup of committed offsets (which... | 116,472 |
Check whether a node is connected and ok to send more requests.
Arguments:
node_id (int): the id of the node to check
metadata_priority (bool): Mark node as not-ready if a metadata
refresh is required. Default: True
Returns:
bool: True if we are read... | def ready(self, node_id, metadata_priority=True):
self.maybe_connect(node_id)
return self.is_ready(node_id, metadata_priority=metadata_priority) | 116,489 |
Close one or all broker connections.
Arguments:
node_id (int, optional): the id of the node to close | def close(self, node_id=None):
with self._lock:
if node_id is None:
self._close()
conns = list(self._conns.values())
self._conns.clear()
for conn in conns:
conn.close()
elif node_id in self._conn... | 116,492 |
Check whether the node connection has been disconnected or failed.
A disconnected node has either been closed or has failed. Connection
failures are usually transient and can be resumed in the next ready()
call, but there are cases where transient failures need to be caught
and re-acted... | def is_disconnected(self, node_id):
conn = self._conns.get(node_id)
if conn is None:
return False
return conn.disconnected() | 116,493 |