<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_group_offsets(self, group, topic=None):
"""Fetch group offsets for given topic and partition otherwise all topics and partitions otherwise. { 'topic': { 'partition': offset-value, } } """ |
group_offsets = {}
try:
all_topics = self.get_my_subscribed_topics(group)
except NoNodeError:
# No offset information of given consumer-group
_log.warning(
"No topics subscribed to consumer-group {group}.".format(
group=group,
),
)
return group_offsets
if topic:
if topic in all_topics:
topics = [topic]
else:
_log.error(
"Topic {topic} not found in topic list {topics} for consumer"
"-group {consumer_group}.".format(
topic=topic,
                topics=', '.join(all_topics),
consumer_group=group,
),
)
return group_offsets
else:
topics = all_topics
for topic in topics:
group_offsets[topic] = {}
try:
partitions = self.get_my_subscribed_partitions(group, topic)
except NoNodeError:
_log.warning(
"No partition offsets found for topic {topic}. "
"Continuing to next one...".format(topic=topic),
)
continue
# Fetch offsets for each partition
for partition in partitions:
path = "/consumers/{group_id}/offsets/{topic}/{partition}".format(
group_id=group,
topic=topic,
partition=partition,
)
try:
# Get current offset
offset_json, _ = self.get(path)
group_offsets[topic][partition] = load_json(offset_json)
except NoNodeError:
_log.error("Path {path} not found".format(path=path))
raise
return group_offsets |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fetch_partition_state(self, topic_id, partition_id):
"""Fetch partition-state for given topic-partition.""" |
state_path = "/brokers/topics/{topic_id}/partitions/{p_id}/state"
try:
partition_state = self.get(
state_path.format(topic_id=topic_id, p_id=partition_id),
)
return partition_state
except NoNodeError:
return {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fetch_partition_info(self, topic_id, partition_id):
"""Fetch partition info for given topic-partition.""" |
info_path = "/brokers/topics/{topic_id}/partitions/{p_id}"
try:
_, partition_info = self.get(
info_path.format(topic_id=topic_id, p_id=partition_id),
)
return partition_info
except NoNodeError:
return {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_my_subscribed_topics(self, groupid):
"""Get the list of topics that a consumer is subscribed to :param: groupid: The consumer group ID for the consumer :returns list of kafka topics :rtype: list """ |
path = "/consumers/{group_id}/offsets".format(group_id=groupid)
return self.get_children(path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_my_subscribed_partitions(self, groupid, topic):
"""Get the list of partitions of a topic that a consumer is subscribed to :param: groupid: The consumer group ID for the consumer :param: topic: The topic name :returns list of partitions :rtype: list """ |
path = "/consumers/{group_id}/offsets/{topic}".format(
group_id=groupid,
topic=topic,
)
return self.get_children(path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cluster_assignment(self):
"""Fetch the cluster layout in form of assignment from zookeeper""" |
plan = self.get_cluster_plan()
assignment = {}
for elem in plan['partitions']:
assignment[
(elem['topic'], elem['partition'])
] = elem['replicas']
return assignment |
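A minimal standalone sketch of the reshaping done above, using a hypothetical plan in the `{'version': 1, 'partitions': [...]}` format that `get_cluster_plan` returns:

```python
# Hypothetical cluster plan in the reassignment format.
plan = {
    'version': 1,
    'partitions': [
        {'topic': 'logs', 'partition': 0, 'replicas': [1, 2]},
        {'topic': 'logs', 'partition': 1, 'replicas': [2, 3]},
    ],
}

# Reshape into {(topic, partition): replicas}, as get_cluster_assignment does.
assignment = {
    (elem['topic'], elem['partition']): elem['replicas']
    for elem in plan['partitions']
}

print(assignment[('logs', 0)])  # [1, 2]
```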
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, path, value='', acl=None, ephemeral=False, sequence=False, makepath=False):
"""Creates a Zookeeper node.

:param path: The zookeeper node path
:param value: Zookeeper node value
:param acl: ACL list
:param ephemeral: Boolean indicating whether this node is tied to this session.
:param sequence: Boolean indicating whether path is suffixed with a unique index.
:param makepath: Whether the path should be created if it doesn't exist.
""" |
_log.debug("ZK: Creating node " + path)
return self.zk.create(path, value, acl, ephemeral, sequence, makepath) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, path, recursive=False):
"""Deletes a Zookeeper node. :param: path: The zookeeper node path :param: recursive: Recursively delete node and all its children. """ |
_log.debug("ZK: Deleting node " + path)
return self.zk.delete(path, recursive=recursive) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def execute_plan(self, plan, allow_rf_change=False):
"""Submit reassignment plan for execution.""" |
reassignment_path = '{admin}/{reassignment_node}'\
.format(admin=ADMIN_PATH, reassignment_node=REASSIGNMENT_NODE)
plan_json = dump_json(plan)
base_plan = self.get_cluster_plan()
if not validate_plan(plan, base_plan, allow_rf_change=allow_rf_change):
_log.error('Given plan is invalid. Aborting new reassignment plan ... {plan}'.format(plan=plan))
return False
# Send proposed-plan to zookeeper
try:
_log.info('Sending plan to Zookeeper...')
self.create(reassignment_path, plan_json, makepath=True)
_log.info(
'Re-assign partitions node in Zookeeper updated successfully '
'with {plan}'.format(plan=plan),
)
return True
except NodeExistsError:
_log.warning('Previous plan in progress. Exiting..')
_log.warning('Aborting new reassignment plan... {plan}'.format(plan=plan))
in_progress_plan = load_json(self.get(reassignment_path)[0])
in_progress_partitions = [
'{topic}-{p_id}'.format(
topic=p_data['topic'],
p_id=str(p_data['partition']),
)
for p_data in in_progress_plan['partitions']
]
_log.warning(
            '{count} partition(s) reassignment currently in progress:'
.format(count=len(in_progress_partitions)),
)
_log.warning(
'{partitions}. In Progress reassignment plan...'.format(
partitions=', '.join(in_progress_partitions),
),
)
return False
except Exception as e:
_log.error(
'Could not re-assign partitions {plan}. Error: {e}'
.format(plan=plan, e=e),
)
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cluster_plan(self):
"""Fetch cluster plan from zookeeper.""" |
_log.info('Fetching current cluster-topology from Zookeeper...')
cluster_layout = self.get_topics(fetch_partition_state=False)
# Re-format cluster-layout
partitions = [
{
'topic': topic_id,
'partition': int(p_id),
'replicas': partitions_data['replicas']
}
for topic_id, topic_info in six.iteritems(cluster_layout)
for p_id, partitions_data in six.iteritems(topic_info['partitions'])
]
return {
'version': 1,
'partitions': partitions
} |
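A standalone sketch of the re-formatting step above, with a hypothetical `cluster_layout` in the shape `get_topics` is assumed to return (topic -> `{'partitions': {partition-id: {'replicas': [...]}}}`), flattened into the `{'version': 1, 'partitions': [...]}` reassignment format:

```python
# Hypothetical layout as returned by get_topics(fetch_partition_state=False).
cluster_layout = {
    'logs': {
        'partitions': {
            '0': {'replicas': [1, 2]},
            '1': {'replicas': [2, 3]},
        },
    },
}

# Flatten the nested layout into a list of partition entries.
partitions = [
    {
        'topic': topic_id,
        'partition': int(p_id),
        'replicas': partitions_data['replicas'],
    }
    for topic_id, topic_info in cluster_layout.items()
    for p_id, partitions_data in topic_info['partitions'].items()
]

# Sort for a deterministic order in this sketch.
plan = {'version': 1, 'partitions': sorted(partitions, key=lambda p: p['partition'])}
```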
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_pending_plan(self):
"""Read the currently running plan on reassign_partitions node.""" |
reassignment_path = '{admin}/{reassignment_node}'\
.format(admin=ADMIN_PATH, reassignment_node=REASSIGNMENT_NODE)
try:
result = self.get(reassignment_path)
return load_json(result[0])
except NoNodeError:
return {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_command(self):
"""Checks the number of offline partitions""" |
offline = get_topic_partition_with_error(
self.cluster_config,
LEADER_NOT_AVAILABLE_ERROR,
)
errcode = status_code.OK if not offline else status_code.CRITICAL
out = _prepare_output(offline, self.args.verbose)
return errcode, out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_kafka_groups(cls, cluster_config):
'''Get the group_id of groups committed into Kafka.''' |
kafka_group_reader = KafkaGroupReader(cluster_config)
return list(kafka_group_reader.read_groups().keys()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def report_stdout(host, stdout):
"""Take a stdout and print it's lines to output if lines are present. :param host: the host where the process is running :type host: str :param stdout: the std out of that process :type stdout: paramiko.channel.Channel """ |
lines = stdout.readlines()
if lines:
print("STDOUT from {host}:".format(host=host))
for line in lines:
print(line.rstrip(), file=sys.stdout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def report_stderr(host, stderr):
"""Take a stderr and print it's lines to output if lines are present. :param host: the host where the process is running :type host: str :param stderr: the std error of that process :type stderr: paramiko.channel.Channel """ |
lines = stderr.readlines()
if lines:
print("STDERR from {host}:".format(host=host))
for line in lines:
print(line.rstrip(), file=sys.stderr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_offsets(cls, consumer_offsets_metadata, topics_dict, json_file, groupid):
"""Build offsets for given topic-partitions in the required format from
current offsets metadata and write them to the given json-file.

:param consumer_offsets_metadata: Fetched consumer offsets from kafka.
:param topics_dict: Dictionary of topic-partitions.
:param json_file: Filename to store consumer-offsets.
:param groupid: Current consumer-group.
""" |
# Build consumer-offset data in desired format
current_consumer_offsets = defaultdict(dict)
for topic, topic_offsets in six.iteritems(consumer_offsets_metadata):
for partition_offset in topic_offsets:
current_consumer_offsets[topic][partition_offset.partition] = \
partition_offset.current
consumer_offsets_data = {'groupid': groupid, 'offsets': current_consumer_offsets}
cls.write_offsets_to_file(json_file, consumer_offsets_data) |
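A standalone sketch of the offset-building step above; `PartitionOffset` is a hypothetical stand-in for the records in `consumer_offsets_metadata`, which are assumed to expose `.partition` and `.current`:

```python
from collections import defaultdict, namedtuple

# Hypothetical stand-in for the fetched offset records.
PartitionOffset = namedtuple('PartitionOffset', ['partition', 'current'])

consumer_offsets_metadata = {
    'logs': [PartitionOffset(0, 42), PartitionOffset(1, 7)],
}

# Build topic -> {partition: current-offset}, as save_offsets does.
current_consumer_offsets = defaultdict(dict)
for topic, topic_offsets in consumer_offsets_metadata.items():
    for po in topic_offsets:
        current_consumer_offsets[topic][po.partition] = po.current

offsets_data = {'groupid': 'my-group', 'offsets': current_consumer_offsets}
```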
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_offsets_to_file(cls, json_file_name, consumer_offsets_data):
"""Save built consumer-offsets data to given json file.""" |
# Save consumer-offsets to file
with open(json_file_name, "w") as json_file:
try:
json.dump(consumer_offsets_data, json_file)
except ValueError:
print("Error: Invalid json data {data}".format(data=consumer_offsets_data))
raise
print("Consumer offset data saved in json-file {file}".format(file=json_file_name)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decommission_brokers(self, broker_ids):
"""Decommission a list of brokers trying to keep the replication group the brokers belong to balanced. :param broker_ids: list of string representing valid broker ids in the cluster :raises: InvalidBrokerIdError when the id is invalid. """ |
groups = set()
for b_id in broker_ids:
try:
broker = self.cluster_topology.brokers[b_id]
except KeyError:
self.log.error("Invalid broker id %s.", b_id)
# Raise an error for now. As alternative we may ignore the
# invalid id and continue with the others.
raise InvalidBrokerIdError(
"Broker id {} does not exist in cluster".format(b_id),
)
broker.mark_decommissioned()
groups.add(broker.replication_group)
for group in groups:
self._decommission_brokers_in_group(group) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _decommission_brokers_in_group(self, group):
"""Decommission the marked brokers of a group.""" |
try:
group.rebalance_brokers()
except EmptyReplicationGroupError:
self.log.warning("No active brokers left in replication group %s", group)
for broker in group.brokers:
if broker.decommissioned and not broker.empty():
# In this case we need to reassign the remaining partitions
# to other replication groups
self.log.info(
"Broker %s can't be decommissioned within the same "
"replication group %s. Moving partitions to other "
"replication groups.",
broker,
broker.replication_group,
)
self._force_broker_decommission(broker)
# Broker should be empty now
if not broker.empty():
# Decommission may be impossible if there are not enough
                # brokers to redistribute the replicas.
self.log.error(
"Could not decommission broker %s. "
"Partitions %s cannot be reassigned.",
broker,
broker.partitions,
)
raise BrokerDecommissionError("Broker decommission failed.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rebalance_replication_groups(self):
"""Rebalance partitions over replication groups. First step involves rebalancing replica-count for each partition across replication-groups. Second step involves rebalancing partition-count across replication-groups of the cluster. """ |
# Balance replicas over replication-groups for each partition
if any(b.inactive for b in six.itervalues(self.cluster_topology.brokers)):
self.log.error(
"Impossible to rebalance replication groups because of inactive "
"brokers."
)
raise RebalanceError(
"Impossible to rebalance replication groups because of inactive "
"brokers"
)
# Balance replica-count over replication-groups
self.rebalance_replicas()
# Balance partition-count over replication-groups
self._rebalance_groups_partition_cnt() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rebalance_brokers(self):
"""Rebalance partition-count across brokers within each replication-group.""" |
for rg in six.itervalues(self.cluster_topology.rgs):
rg.rebalance_brokers() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def revoke_leadership(self, broker_ids):
"""Revoke leadership for given brokers. :param broker_ids: List of broker-ids whose leadership needs to be revoked. """ |
for b_id in broker_ids:
try:
broker = self.cluster_topology.brokers[b_id]
except KeyError:
self.log.error("Invalid broker id %s.", b_id)
raise InvalidBrokerIdError(
"Broker id {} does not exist in cluster".format(b_id),
)
broker.mark_revoked_leadership()
assert(len(self.cluster_topology.brokers) - len(broker_ids) > 0), "Not " \
"all brokers can be revoked for leadership"
opt_leader_cnt = len(self.cluster_topology.partitions) // (
len(self.cluster_topology.brokers) - len(broker_ids)
)
# Balanced brokers transfer leadership to their under-balanced followers
self.rebalancing_non_followers(opt_leader_cnt)
# If the broker-ids to be revoked from leadership are still leaders for any
# partitions, try to forcefully move their leadership to followers if possible
pending_brokers = [
b for b in six.itervalues(self.cluster_topology.brokers)
if b.revoked_leadership and b.count_preferred_replica() > 0
]
for b in pending_brokers:
self._force_revoke_leadership(b) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _force_revoke_leadership(self, broker):
"""Revoke the leadership of given broker for any remaining partitions. Algorithm: 1. Find the partitions (owned_partitions) with given broker as leader. 2. For each partition find the eligible followers. Brokers which are not to be revoked from leadership are eligible followers. 3. Select the follower who is leader for minimum partitions. 4. Assign the selected follower as leader. 5. Notify for any pending owned_partitions whose leader cannot be changed. This could be due to replica size 1 or eligible followers are None. """ |
owned_partitions = list(filter(
lambda p: broker is p.leader,
broker.partitions,
))
for partition in owned_partitions:
if len(partition.replicas) == 1:
            self.log.error(
                "Cannot revoke leadership of broker {b} for partition {p}: "
                "replica count is 1."
                .format(p=partition, b=broker),
            )
continue
eligible_followers = [
follower for follower in partition.followers
if not follower.revoked_leadership
]
if eligible_followers:
# Pick follower with least leader-count
best_fit_follower = min(
eligible_followers,
key=lambda follower: follower.count_preferred_replica(),
)
partition.swap_leader(best_fit_follower)
else:
            self.log.error(
                "Cannot revoke leadership of broker {b} for partition {p}: "
                "all followers are also marked for leadership revocation.".format(
                    p=partition,
                    b=broker,
                )
            )
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rebalance_leaders(self):
"""Re-order brokers in replicas such that, every broker is assigned as preferred leader evenly. """ |
opt_leader_cnt = len(self.cluster_topology.partitions) // len(self.cluster_topology.brokers)
# Balanced brokers transfer leadership to their under-balanced followers
self.rebalancing_non_followers(opt_leader_cnt) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rebalance_groups_partition_cnt(self):
"""Re-balance partition-count across replication-groups. Algorithm: The key constraint is not to create any replica-count imbalance while moving partitions across replication-groups. 1) Divide replication-groups into over and under loaded groups in terms of partition-count. 2) For each over-loaded replication-group, select eligible partitions which can be moved to under-replicated groups. Partitions with greater than optimum replica-count for the group have the ability to donate one of their replicas without creating replica-count imbalance. 3) Destination replication-group is selected based on minimum partition-count and ability to accept one of the eligible partition-replicas. 4) Source and destination brokers are selected based on :- * their ability to donate and accept extra partition-replica respectively. * maximum and minimum partition-counts respectively. 5) Move partition-replica from source to destination-broker. 6) Repeat steps 1) to 5) until groups are balanced or cannot be balanced further. """ |
# Segregate replication-groups based on partition-count
total_elements = sum(len(rg.partitions) for rg in six.itervalues(self.cluster_topology.rgs))
over_loaded_rgs, under_loaded_rgs = separate_groups(
list(self.cluster_topology.rgs.values()),
lambda rg: len(rg.partitions),
total_elements,
)
if over_loaded_rgs and under_loaded_rgs:
self.cluster_topology.log.info(
'Over-loaded replication-groups {over_loaded}, under-loaded '
'replication-groups {under_loaded} based on partition-count'
.format(
over_loaded=[rg.id for rg in over_loaded_rgs],
under_loaded=[rg.id for rg in under_loaded_rgs],
)
)
else:
self.cluster_topology.log.info('Replication-groups are balanced based on partition-count.')
return
# Get optimal partition-count per replication-group
opt_partition_cnt, _ = compute_optimum(
len(self.cluster_topology.rgs),
total_elements,
)
# Balance replication-groups
for over_loaded_rg in over_loaded_rgs:
for under_loaded_rg in under_loaded_rgs:
# Filter unique partition with replica-count > opt-replica-count
# in over-loaded-rgs and <= opt-replica-count in under-loaded-rgs
eligible_partitions = set(filter(
lambda partition:
over_loaded_rg.count_replica(partition) >
len(partition.replicas) // len(self.cluster_topology.rgs) and
under_loaded_rg.count_replica(partition) <=
len(partition.replicas) // len(self.cluster_topology.rgs),
over_loaded_rg.partitions,
))
# Move all possible partitions
for eligible_partition in eligible_partitions:
# The difference of partition-count b/w the over-loaded and under-loaded
# replication-groups should be greater than 1 for convergence
if len(over_loaded_rg.partitions) - len(under_loaded_rg.partitions) > 1:
over_loaded_rg.move_partition_replica(
under_loaded_rg,
eligible_partition,
)
else:
break
# Move to next replication-group if either of the groups got
# balanced, otherwise try with next eligible partition
if (len(under_loaded_rg.partitions) == opt_partition_cnt or
len(over_loaded_rg.partitions) == opt_partition_cnt):
break
if len(over_loaded_rg.partitions) == opt_partition_cnt:
# Move to next over-loaded replication-group if balanced
break |
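The `separate_groups` and `compute_optimum` helpers used above are defined elsewhere; a simplified stand-in for the segregation step, assuming the optimum is simply the floor of total partitions over group count:

```python
# Hypothetical replication-group -> partition-count mapping.
partition_counts = {'rg1': 6, 'rg2': 2, 'rg3': 1}

total_elements = sum(partition_counts.values())          # 9
opt_partition_cnt = total_elements // len(partition_counts)  # 3

# Segregate groups by partition-count relative to the optimum.
over_loaded = [
    rg for rg, cnt in sorted(partition_counts.items())
    if cnt > opt_partition_cnt
]
under_loaded = [
    rg for rg, cnt in sorted(partition_counts.items())
    if cnt < opt_partition_cnt
]

print(over_loaded, under_loaded)  # ['rg1'] ['rg2', 'rg3']
```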
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_replica(self, partition_name, count=1):
"""Increase the replication-factor for a partition. The replication-group to add to is determined as follows: 1. Find all replication-groups that have brokers not already replicating the partition. 2. Of these, find replication-groups that have fewer than the average number of replicas for this partition. 3. Choose the replication-group with the fewest overall partitions. :param partition_name: (topic_id, partition_id) of the partition to add replicas of. :param count: The number of replicas to add. :raises InvalidReplicationFactorError when the resulting replication factor is greater than the number of brokers in the cluster. """ |
try:
partition = self.cluster_topology.partitions[partition_name]
except KeyError:
raise InvalidPartitionError(
"Partition name {name} not found".format(name=partition_name),
)
if partition.replication_factor + count > len(self.cluster_topology.brokers):
raise InvalidReplicationFactorError(
"Cannot increase replication factor to {0}. There are only "
"{1} brokers."
.format(
partition.replication_factor + count,
len(self.cluster_topology.brokers),
)
)
non_full_rgs = [
rg
for rg in self.cluster_topology.rgs.values()
if rg.count_replica(partition) < len(rg.brokers)
]
for _ in range(count):
total_replicas = sum(
rg.count_replica(partition)
for rg in non_full_rgs
)
opt_replicas, _ = compute_optimum(
len(non_full_rgs),
total_replicas,
)
under_replicated_rgs = [
rg
for rg in non_full_rgs
if rg.count_replica(partition) < opt_replicas
]
candidate_rgs = under_replicated_rgs or non_full_rgs
rg = min(candidate_rgs, key=lambda rg: len(rg.partitions))
rg.add_replica(partition)
if rg.count_replica(partition) >= len(rg.brokers):
non_full_rgs.remove(rg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_replica(self, partition_name, osr_broker_ids, count=1):
"""Remove one replica of a partition from the cluster. The replication-group to remove from is determined as follows: 1. Find all replication-groups that contain at least one out-of-sync replica for this partition. 2. Of these, find replication-groups with more than the average number of replicas of this partition. 3. Choose the replication-group with the most overall partitions. 4. Repeat steps 1-3 with in-sync replicas After this operation, the preferred leader for this partition will be set to the broker that leads the fewest other partitions, even if the current preferred leader is not removed. This is done to keep the number of preferred replicas balanced across brokers in the cluster. :param partition_name: (topic_id, partition_id) of the partition to remove replicas of. :param osr_broker_ids: A list of the partition's out-of-sync broker ids. :param count: The number of replicas to remove. :raises: InvalidReplicationFactorError when count is greater than the replication factor of the partition. """ |
try:
partition = self.cluster_topology.partitions[partition_name]
except KeyError:
raise InvalidPartitionError(
"Partition name {name} not found".format(name=partition_name),
)
if partition.replication_factor <= count:
raise InvalidReplicationFactorError(
"Cannot remove {0} replicas. Replication factor is only {1}."
.format(count, partition.replication_factor)
)
osr = []
for broker_id in osr_broker_ids:
try:
osr.append(self.cluster_topology.brokers[broker_id])
except KeyError:
raise InvalidBrokerIdError(
"No broker found with id {bid}".format(bid=broker_id),
)
non_empty_rgs = [
rg
for rg in self.cluster_topology.rgs.values()
if rg.count_replica(partition) > 0
]
rgs_with_osr = [
rg
for rg in non_empty_rgs
if any(b in osr for b in rg.brokers)
]
for _ in range(count):
candidate_rgs = rgs_with_osr or non_empty_rgs
total_replicas = sum(
rg.count_replica(partition)
for rg in candidate_rgs
)
opt_replica_cnt, _ = compute_optimum(
len(candidate_rgs),
total_replicas,
)
over_replicated_rgs = [
rg
for rg in candidate_rgs
if rg.count_replica(partition) > opt_replica_cnt
]
candidate_rgs = over_replicated_rgs or candidate_rgs
rg = max(candidate_rgs, key=lambda rg: len(rg.partitions))
osr_in_rg = [b for b in rg.brokers if b in osr]
rg.remove_replica(partition, osr_in_rg)
osr = [b for b in osr if b in partition.replicas]
if rg in rgs_with_osr and len(osr_in_rg) == 1:
rgs_with_osr.remove(rg)
if rg.count_replica(partition) == 0:
non_empty_rgs.remove(rg)
new_leader = min(
partition.replicas,
key=lambda broker: broker.count_preferred_replica(),
)
partition.swap_leader(new_leader) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def preprocess_topics(source_groupid, source_topics, dest_groupid, topics_dest_group):
"""Pre-process the topics in source and destination group for duplicates.""" |
# Is the new consumer already subscribed to any of these topics?
common_topics = [topic for topic in topics_dest_group if topic in source_topics]
if common_topics:
print(
"Error: Consumer Group ID: {groupid} is already "
"subscribed to following topics: {topic}.\nPlease delete this "
"topics from new group before re-running the "
"command.".format(
groupid=dest_groupid,
topic=', '.join(common_topics),
),
file=sys.stderr,
)
sys.exit(1)
# Let's confirm what the user intends to do.
if topics_dest_group:
in_str = (
"New Consumer Group: {dest_groupid} already "
"exists.\nTopics subscribed to by the consumer groups are listed "
"below:\n{source_groupid}: {source_group_topics}\n"
"{dest_groupid}: {dest_group_topics}\nDo you intend to copy into"
"existing consumer destination-group? (y/n)".format(
source_groupid=source_groupid,
source_group_topics=source_topics,
dest_groupid=dest_groupid,
dest_group_topics=topics_dest_group,
)
)
prompt_user_input(in_str) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_offsets(zk, consumer_group, offsets):
"""Create path with offset value for each topic-partition of given consumer group. :param zk: Zookeeper client :param consumer_group: Consumer group id for given offsets :type consumer_group: int :param offsets: Offsets of all topic-partitions :type offsets: dict(topic, dict(partition, offset)) """ |
# Create new offsets
for topic, partition_offsets in six.iteritems(offsets):
for partition, offset in six.iteritems(partition_offsets):
new_path = "/consumers/{groupid}/offsets/{topic}/{partition}".format(
groupid=consumer_group,
topic=topic,
partition=partition,
)
try:
zk.create(new_path, value=offset, makepath=True)
except NodeExistsError:
print(
"Error: Path {path} already exists. Please re-run the "
"command.".format(path=new_path),
file=sys.stderr,
)
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_offsets(zk, consumer_group, topics):
"""Fetch offsets for given topics of given consumer group. :param zk: Zookeeper client :param consumer_group: Consumer group id for given offsets :type consumer_group: int :rtype: dict(topic, dict(partition, offset)) """ |
source_offsets = defaultdict(dict)
for topic, partitions in six.iteritems(topics):
for partition in partitions:
offset, _ = zk.get(
"/consumers/{groupid}/offsets/{topic}/{partition}".format(
groupid=consumer_group,
topic=topic,
partition=partition,
)
)
source_offsets[topic][partition] = offset
return source_offsets |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_offset_topic_partition_count(kafka_config):
"""Given a kafka cluster configuration, return the number of partitions in the offset topic. It will raise an UnknownTopic exception if the topic cannot be found.""" |
metadata = get_topic_partition_metadata(kafka_config.broker_list)
if CONSUMER_OFFSET_TOPIC not in metadata:
raise UnknownTopic("Consumer offset topic is missing.")
return len(metadata[CONSUMER_OFFSET_TOPIC]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_group_partition(group, partition_count):
"""Given a group name, return the partition number of the consumer offset topic containing the data associated to that group.""" |
def java_string_hashcode(s):
h = 0
for c in s:
h = (31 * h + ord(c)) & 0xFFFFFFFF
return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000
return abs(java_string_hashcode(group)) % partition_count |
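A quick standalone check of the hashing helper above: it mirrors Java's `String.hashCode()` (`h = 31*h + c`, truncated to 32-bit signed), which is how Kafka maps a group id onto a partition of the offsets topic. The partition count of 50 here is only illustrative (the default for `__consumer_offsets`).

```python
def java_string_hashcode(s):
    # Java's String.hashCode(): h = 31*h + c, with 32-bit signed overflow.
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000

# These match the values Java's "...".hashCode() produces.
print(java_string_hashcode("abc"))    # 96354
print(java_string_hashcode("hello"))  # 99162322

# Mapping a group id to an offsets-topic partition (50 partitions assumed):
print(abs(java_string_hashcode("my-group")) % 50)
```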
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def topic_offsets_for_timestamp(consumer, timestamp, topics):
"""Given an initialized KafkaConsumer, timestamp, and list of topics, looks up the offsets for the given topics by timestamp. The returned offset for each partition is the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition. Arguments: consumer (KafkaConsumer):
an initialized kafka-python consumer timestamp (int):
Unix epoch milliseconds. Unit should be milliseconds since beginning of the epoch (midnight Jan 1, 1970 (UTC)) topics (list):
List of topics whose offsets are to be fetched. :returns: ``{TopicPartition: OffsetAndTimestamp}``: mapping from partition to the timestamp and offset of the first message with timestamp greater than or equal to the target timestamp. Returns ``{TopicPartition: None}`` for specific topic-partiitons if: 1. Timestamps are not supported in messages 2. No offsets in the partition after the given timestamp 3. No data in the topic-partition :raises: ValueError: If the target timestamp is negative UnsupportedVersionError: If the broker does not support looking up the offsets by timestamp. KafkaTimeoutError: If fetch failed in request_timeout_ms """ |
tp_timestamps = {}
for topic in topics:
topic_partitions = consumer_partitions_for_topic(consumer, topic)
for tp in topic_partitions:
tp_timestamps[tp] = timestamp
return consumer.offsets_for_times(tp_timestamps) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def consumer_partitions_for_topic(consumer, topic):
"""Returns a list of all TopicPartitions for a given topic. Arguments: consumer: an initialized KafkaConsumer topic: a topic name to fetch TopicPartitions for :returns: list(TopicPartition):
A list of TopicPartitions that belong to the given topic """ |
topic_partitions = []
partitions = consumer.partitions_for_topic(topic)
if partitions is not None:
for partition in partitions:
topic_partitions.append(TopicPartition(topic, partition))
else:
logging.error(
"No partitions found for topic {}. Maybe it doesn't exist?".format(topic),
)
return topic_partitions |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def consumer_commit_for_times(consumer, partition_to_offset, atomic=False):
"""Commits offsets to Kafka using the given KafkaConsumer and offsets, a mapping of TopicPartition to Unix Epoch milliseconds timestamps. Arguments: consumer (KafkaConsumer):
an initialized kafka-python consumer. partition_to_offset (dict TopicPartition: OffsetAndTimestamp):
Map of TopicPartition to OffsetAndTimestamp. Return value of offsets_for_times. atomic (bool):
Flag to specify whether the commit should fail if offsets are not found for some TopicPartition: timestamp pairs. """ |
no_offsets = set()
for tp, offset in six.iteritems(partition_to_offset):
if offset is None:
logging.error(
"No offsets found for topic-partition {tp}. Either timestamps not supported"
" for the topic {tp}, or no offsets found after timestamp specified, or there is no"
" data in the topic-partition.".format(tp=tp),
)
no_offsets.add(tp)
if atomic and len(no_offsets) > 0:
logging.error(
"Commit aborted; offsets were not found for timestamps in"
" topics {}".format(",".join([str(tp) for tp in no_offsets])),
)
return
offsets_metadata = {
tp: OffsetAndMetadata(partition_to_offset[tp].offset, metadata=None)
for tp in six.iterkeys(partition_to_offset) if tp not in no_offsets
}
if len(offsets_metadata) != 0:
consumer.commit(offsets_metadata) |
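The filtering logic above, reduced to a standalone sketch: partitions whose timestamp lookup returned `None` are dropped, and atomic mode aborts the whole commit. The namedtuples mirror kafka-python's types but are local assumptions.

```python
from collections import namedtuple

TopicPartition = namedtuple("TopicPartition", ["topic", "partition"])
OffsetAndTimestamp = namedtuple("OffsetAndTimestamp", ["offset", "timestamp"])

def offsets_to_commit(partition_to_offset, atomic=False):
    # Partitions with no resolved offset cannot be committed.
    no_offsets = {tp for tp, off in partition_to_offset.items() if off is None}
    if atomic and no_offsets:
        return {}  # abort: at least one lookup failed
    return {
        tp: off.offset
        for tp, off in partition_to_offset.items()
        if tp not in no_offsets
    }

lookups = {
    TopicPartition("t", 0): OffsetAndTimestamp(42, 1500000000000),
    TopicPartition("t", 1): None,
}
print(offsets_to_commit(lookups))
print(offsets_to_commit(lookups, atomic=True))
```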
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cluster_config( cluster_type, cluster_name=None, kafka_topology_base_path=None, ):
"""Return the cluster configuration. Use the local cluster if cluster_name is not specified. :param cluster_type: the type of the cluster :type cluster_type: string :param cluster_name: the name of the cluster :type cluster_name: string :param kafka_topology_base_path: base path to look for <cluster_type>.yaml :type cluster_name: string :returns: the cluster :rtype: ClusterConfig """ |
if not kafka_topology_base_path:
config_dirs = get_conf_dirs()
else:
config_dirs = [kafka_topology_base_path]
topology = None
for config_dir in config_dirs:
try:
topology = TopologyConfiguration(
cluster_type,
config_dir,
)
except MissingConfigurationError:
pass
if not topology:
raise MissingConfigurationError(
"No available configuration for type {0}".format(cluster_type),
)
if cluster_name:
return topology.get_cluster_by_name(cluster_name)
else:
return topology.get_local_cluster() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def iter_configurations(kafka_topology_base_path=None):
"""Cluster topology iterator. Iterate over all the topologies available in config. """ |
if not kafka_topology_base_path:
config_dirs = get_conf_dirs()
else:
config_dirs = [kafka_topology_base_path]
types = set()
for config_dir in config_dirs:
new_types = [x for x in map(
lambda x: os.path.basename(x)[:-5],
glob.glob('{0}/*.yaml'.format(config_dir)),
) if x not in types]
for cluster_type in new_types:
try:
topology = TopologyConfiguration(
cluster_type,
config_dir,
)
except ConfigurationError:
continue
types.add(cluster_type)
yield topology |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_topology_config(self):
"""Load the topology configuration""" |
config_path = os.path.join(
self.kafka_topology_path,
'{id}.yaml'.format(id=self.cluster_type),
)
self.log.debug("Loading configuration from %s", config_path)
if os.path.isfile(config_path):
topology_config = load_yaml_config(config_path)
else:
raise MissingConfigurationError(
"Topology configuration {0} for cluster {1} "
"does not exist".format(
config_path,
self.cluster_type,
)
)
self.log.debug("Topology configuration %s", topology_config)
try:
self.clusters = topology_config['clusters']
except KeyError:
self.log.exception("Invalid topology file")
raise InvalidConfigurationError("Invalid topology file {0}".format(
config_path))
if 'local_config' in topology_config:
self.local_config = topology_config['local_config'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_to_broker_id(string):
"""Convert string to kafka broker_id.""" |
error_msg = 'Positive integer or -1 required, {string} given.'.format(string=string)
try:
value = int(string)
except ValueError:
raise argparse.ArgumentTypeError(error_msg)
if value <= 0 and value != -1:
raise argparse.ArgumentTypeError(error_msg)
return value |
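A small usage sketch wiring `convert_to_broker_id` into `argparse` as a `type=` converter; the `--broker-id` option name is an assumption for illustration.

```python
import argparse

def convert_to_broker_id(string):
    """Convert string to a Kafka broker id (positive integer or -1)."""
    error_msg = 'Positive integer or -1 required, {string} given.'.format(string=string)
    try:
        value = int(string)
    except ValueError:
        raise argparse.ArgumentTypeError(error_msg)
    if value <= 0 and value != -1:
        raise argparse.ArgumentTypeError(error_msg)
    return value

parser = argparse.ArgumentParser()
parser.add_argument('--broker-id', type=convert_to_broker_id)
print(parser.parse_args(['--broker-id', '7']).broker_id)  # -> 7
```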
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run():
"""Verify command-line arguments and run commands""" |
args = parse_args()
logging.basicConfig(level=logging.WARN)
# to prevent flooding for sensu-check.
logging.getLogger('kafka').setLevel(logging.CRITICAL)
if args.controller_only and args.first_broker_only:
terminate(
status_code.WARNING,
prepare_terminate_message(
"Only one of controller_only and first_broker_only should be used",
),
args.json,
)
if args.controller_only or args.first_broker_only:
if args.broker_id is None:
terminate(
status_code.WARNING,
prepare_terminate_message("broker_id is not specified"),
args.json,
)
elif args.broker_id == -1:
try:
args.broker_id = get_broker_id(args.data_path)
except Exception as e:
terminate(
status_code.WARNING,
prepare_terminate_message("{}".format(e)),
args.json,
)
try:
cluster_config = config.get_cluster_config(
args.cluster_type,
args.cluster_name,
args.discovery_base_path,
)
code, msg = args.command(cluster_config, args)
except ConfigurationError as e:
terminate(
status_code.CRITICAL,
prepare_terminate_message("ConfigurationError {0}".format(e)),
args.json,
)
terminate(code, msg, args.json) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exception_logger(exc_type, exc_value, exc_traceback):
"""Log unhandled exceptions""" |
if not issubclass(exc_type, KeyboardInterrupt): # do not log Ctrl-C
_log.critical(
"Uncaught exception:",
exc_info=(exc_type, exc_value, exc_traceback)
)
sys.__excepthook__(exc_type, exc_value, exc_traceback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_topics_with_wrong_rp(topics, zk, default_min_isr):
"""Returns topics with wrong replication factor.""" |
topics_with_wrong_rf = []
for topic_name, partitions in topics.items():
min_isr = get_min_isr(zk, topic_name) or default_min_isr
replication_factor = len(partitions[0].replicas)
if replication_factor >= min_isr + 1:
continue
topics_with_wrong_rf.append({
'replication_factor': replication_factor,
'min_isr': min_isr,
'topic': topic_name,
})
return topics_with_wrong_rf |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_command(self):
"""Replication factor command, checks replication factor settings and compare it with min.isr in the cluster.""" |
topics = get_topic_partition_metadata(self.cluster_config.broker_list)
topics_with_wrong_rf = _find_topics_with_wrong_rp(
topics,
self.zk,
self.args.default_min_isr,
)
errcode = status_code.OK if not topics_with_wrong_rf else status_code.CRITICAL
out = _prepare_output(topics_with_wrong_rf, self.args.verbose)
return errcode, out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decommission_brokers(self, broker_ids):
"""Decommissioning brokers is done by removing all partitions from the decommissioned brokers and adding them, one-by-one, back to the cluster. :param broker_ids: List of broker ids that should be decommissioned. """ |
decommission_brokers = []
for broker_id in broker_ids:
try:
broker = self.cluster_topology.brokers[broker_id]
broker.mark_decommissioned()
decommission_brokers.append(broker)
except KeyError:
raise InvalidBrokerIdError(
"No broker found with id {broker_id}".format(broker_id=broker_id)
)
partitions = defaultdict(int)
# Remove all partitions from decommissioned brokers.
for broker in decommission_brokers:
broker_partitions = list(broker.partitions)
for partition in broker_partitions:
broker.remove_partition(partition)
partitions[partition.name] += 1
active_brokers = self.cluster_topology.active_brokers
# Create state from the initial cluster topology.
self.state = _State(self.cluster_topology, brokers=active_brokers)
# Add partition replicas to active brokers one-by-one.
for partition_name in sorted(six.iterkeys(partitions)): # repeatability
partition = self.cluster_topology.partitions[partition_name]
replica_count = partitions[partition_name]
try:
self.add_replica(partition_name, replica_count)
except InvalidReplicationFactorError:
raise BrokerDecommissionError(
"Not enough active brokers in the cluster. "
"Partition {partition} has replication-factor {rf}, "
"but only {brokers} active brokers remain."
.format(
partition=partition_name,
rf=partition.replication_factor + replica_count,
brokers=len(active_brokers)
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_replica(self, partition_name, count=1):
"""Adding a replica is done by trying to add the replica to every broker in the cluster and choosing the resulting state with the highest fitness score. :param partition_name: (topic_id, partition_id) of the partition to add replicas of. :param count: The number of replicas to add. """ |
try:
partition = self.cluster_topology.partitions[partition_name]
except KeyError:
raise InvalidPartitionError(
"Partition name {name} not found.".format(name=partition_name),
)
active_brokers = self.cluster_topology.active_brokers
if partition.replication_factor + count > len(active_brokers):
raise InvalidReplicationFactorError(
"Cannot increase replication factor from {rf} to {new_rf}."
" There are only {brokers} active brokers."
.format(
rf=partition.replication_factor,
new_rf=partition.replication_factor + count,
brokers=len(active_brokers),
)
)
partition_index = self.state.partition_indices[partition]
for _ in range(count):
# Find eligible replication-groups.
non_full_rgs = [
rg for rg in six.itervalues(self.cluster_topology.rgs)
if rg.count_replica(partition) < len(rg.active_brokers)
]
# Since replicas can only be added to non-full rgs, only consider
# replicas on those rgs when determining which rgs are
# under-replicated.
replica_count = sum(
rg.count_replica(partition)
for rg in non_full_rgs
)
opt_replicas, _ = compute_optimum(
len(non_full_rgs),
replica_count,
)
under_replicated_rgs = [
rg for rg in non_full_rgs
if rg.count_replica(partition) < opt_replicas
] or non_full_rgs
# Add the replica to every eligible broker, as follower and leader
new_states = []
for rg in under_replicated_rgs:
for broker in rg.active_brokers:
if broker not in partition.replicas:
broker_index = self.state.brokers.index(broker)
new_state = self.state.add_replica(
partition_index,
broker_index,
)
new_state_leader = new_state.move_leadership(
partition_index,
broker_index,
)
new_states.extend([new_state, new_state_leader])
# Update cluster topology with highest scoring state.
self.state = sorted(new_states, key=self._score, reverse=True)[0]
self.cluster_topology.update_cluster_topology(self.state.pending_assignment)
# Update the internal state to match.
self.state.clear_pending_assignment() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_replica(self, partition_name, osr_broker_ids, count=1):
"""Removing a replica is done by trying to remove a replica from every broker and choosing the resulting state with the highest fitness score. Out-of-sync replicas will always be removed before in-sync replicas. :param partition_name: (topic_id, partition_id) of the partition to remove replicas of. :param osr_broker_ids: A list of the partition's out-of-sync broker ids. :param count: The number of replicas to remove. """ |
try:
partition = self.cluster_topology.partitions[partition_name]
except KeyError:
raise InvalidPartitionError(
"Partition name {name} not found.".format(name=partition_name),
)
if partition.replication_factor - count < 1:
raise InvalidReplicationFactorError(
"Cannot decrease replication factor from {rf} to {new_rf}."
"Replication factor must be at least 1."
.format(
rf=partition.replication_factor,
new_rf=partition.replication_factor - count,
)
)
osr = {
broker for broker in partition.replicas
if broker.id in osr_broker_ids
}
# Create state from current cluster topology.
state = _State(self.cluster_topology)
partition_index = state.partitions.index(partition)
for _ in range(count):
# Find eligible replication groups.
non_empty_rgs = [
rg for rg in six.itervalues(self.cluster_topology.rgs)
if rg.count_replica(partition) > 0
]
rgs_with_osr = [
rg for rg in non_empty_rgs
if any(b in osr for b in rg.brokers)
]
candidate_rgs = rgs_with_osr or non_empty_rgs
# Since replicas will only be removed from the candidate rgs, only
# count replicas on those rgs when determining which rgs are
# over-replicated.
replica_count = sum(
rg.count_replica(partition)
for rg in candidate_rgs
)
opt_replicas, _ = compute_optimum(
len(candidate_rgs),
replica_count,
)
over_replicated_rgs = [
rg for rg in candidate_rgs
if rg.count_replica(partition) > opt_replicas
] or candidate_rgs
candidate_rgs = over_replicated_rgs or candidate_rgs
# Remove the replica from every eligible broker.
new_states = []
for rg in candidate_rgs:
osr_brokers = {
broker for broker in rg.brokers
if broker in osr
}
candidate_brokers = osr_brokers or rg.brokers
for broker in candidate_brokers:
if broker in partition.replicas:
broker_index = state.brokers.index(broker)
new_states.append(
state.remove_replica(partition_index, broker_index)
)
# Update cluster topology with highest scoring state.
state = sorted(new_states, key=self._score, reverse=True)[0]
self.cluster_topology.update_cluster_topology(state.assignment)
osr = {b for b in osr if b in partition.replicas} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _prune(self, pop_candidates):
"""Choose a subset of the candidate states to continue on to the next generation. :param pop_candidates: The set of candidate states. """ |
return set(
sorted(pop_candidates, key=self._score, reverse=True)
[:self.args.max_pop]
) |
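The prune step in isolation: sort candidates by score descending and keep the top `max_pop`. Integer states with an identity score function are assumptions for illustration.

```python
def prune(pop_candidates, score, max_pop):
    # Keep only the max_pop highest-scoring candidate states.
    return set(sorted(pop_candidates, key=score, reverse=True)[:max_pop])

print(sorted(prune({3, 1, 4, 5, 9}, lambda s: s, 3)))  # -> [4, 5, 9]
```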
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _score(self, state, score_movement=True):
"""Score a state based on how balanced it is. A higher score represents a more balanced state. :param state: The state to score. """ |
score = 0
max_score = 0
if state.total_weight:
# Coefficient of variance is a value between 0 and the sqrt(n)
# where n is the length of the series (the number of brokers)
# so those parameters are scaled by (1 / sqrt(# of brokers)) to
# get a value between 0 and 1.
#
# Since smaller imbalance values are preferred use 1 - x so that
# higher scores correspond to more balanced states.
score += self.args.partition_weight_cv_score_weight * \
(1 - state.broker_weight_cv / sqrt(len(state.brokers)))
score += self.args.leader_weight_cv_score_weight * \
(1 - state.broker_leader_weight_cv / sqrt(len(state.brokers)))
score += self.args.topic_broker_imbalance_score_weight * \
(1 - state.weighted_topic_broker_imbalance)
score += self.args.broker_partition_count_score_weight * \
(1 - state.broker_partition_count_cv / sqrt(len(state.brokers)))
score += self.args.broker_leader_count_score_weight * \
(1 - state.broker_leader_count_cv / sqrt(len(state.brokers)))
max_score += self.args.partition_weight_cv_score_weight
max_score += self.args.leader_weight_cv_score_weight
max_score += self.args.topic_broker_imbalance_score_weight
max_score += self.args.broker_partition_count_score_weight
max_score += self.args.broker_leader_count_score_weight
if self.args.max_movement_size is not None and score_movement:
# Avoid potential divide-by-zero error
max_movement = max(self.args.max_movement_size, 1)
score += self.args.movement_size_score_weight * \
(1 - state.movement_size / max_movement)
max_score += self.args.movement_size_score_weight
if self.args.max_leader_changes is not None and score_movement:
# Avoid potential divide-by-zero error
max_leader = max(self.args.max_leader_changes, 1)
score += self.args.leader_change_score_weight * \
(1 - state.leader_movement_count / max_leader)
max_score += self.args.leader_change_score_weight
return score / max_score |
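A sketch of the normalization used above: the coefficient of variation of n non-negative values lies in [0, sqrt(n)], so dividing by sqrt(n) and subtracting from 1 yields a balance term in [0, 1], with 1.0 for perfectly even broker weights. The helper names are assumptions.

```python
from math import sqrt

def cv(values):
    # Coefficient of variation: stddev / mean, bounded by sqrt(n).
    mean = sum(values) / len(values)
    if mean == 0:
        return 0.0
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return sqrt(variance) / mean

def balance_score(broker_weights):
    # Scale CV into [0, 1] and invert so higher means more balanced.
    return 1 - cv(broker_weights) / sqrt(len(broker_weights))

print(balance_score([10, 10, 10]))  # perfectly even -> 1.0
```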
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move(self, partition, source, dest):
"""Return a new state that is the result of moving a single partition. :param partition: The partition index of the partition to move. :param source: The broker index of the broker to move the partition from. :param dest: The broker index of the broker to move the partition to. """ |
new_state = copy(self)
# Update the partition replica tuple
source_index = self.replicas[partition].index(source)
new_state.replicas = tuple_alter(
self.replicas,
(partition, lambda replicas: tuple_replace(
replicas,
(source_index, dest),
)),
)
new_state.pending_partitions = self.pending_partitions + (partition, )
# Update the broker weights
partition_weight = self.partition_weights[partition]
new_state.broker_weights = tuple_alter(
self.broker_weights,
(source, lambda broker_weight: broker_weight - partition_weight),
(dest, lambda broker_weight: broker_weight + partition_weight),
)
# Update the broker partition count
new_state.broker_partition_counts = tuple_alter(
self.broker_partition_counts,
(source, lambda partition_count: partition_count - 1),
(dest, lambda partition_count: partition_count + 1),
)
# Update the broker leader weights
if source_index == 0:
new_state.broker_leader_weights = tuple_alter(
self.broker_leader_weights,
(source, lambda lw: lw - partition_weight),
(dest, lambda lw: lw + partition_weight),
)
new_state.broker_leader_counts = tuple_alter(
self.broker_leader_counts,
(source, lambda leader_count: leader_count - 1),
(dest, lambda leader_count: leader_count + 1),
)
new_state.leader_movement_count += 1
# Update the topic broker counts
topic = self.partition_topic[partition]
new_state.topic_broker_count = tuple_alter(
self.topic_broker_count,
(topic, lambda broker_count: tuple_alter(
broker_count,
(source, lambda count: count - 1),
(dest, lambda count: count + 1),
)),
)
# Update the topic broker imbalance
new_state.topic_broker_imbalance = tuple_replace(
self.topic_broker_imbalance,
(topic, new_state._calculate_topic_imbalance(topic)),
)
new_state._weighted_topic_broker_imbalance = (
self._weighted_topic_broker_imbalance +
self.topic_weights[topic] * (
new_state.topic_broker_imbalance[topic] -
self.topic_broker_imbalance[topic]
)
)
# Update the replication group replica counts
source_rg = self.broker_rg[source]
dest_rg = self.broker_rg[dest]
if source_rg != dest_rg:
new_state.rg_replicas = tuple_alter(
self.rg_replicas,
(source_rg, lambda replica_counts: tuple_alter(
replica_counts,
(partition, lambda replica_count: replica_count - 1),
)),
(dest_rg, lambda replica_counts: tuple_alter(
replica_counts,
(partition, lambda replica_count: replica_count + 1),
)),
)
# Update the movement sizes
new_state.movement_size += self.partition_sizes[partition]
new_state.movement_count += 1
return new_state |
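`tuple_alter` and `tuple_replace` are used throughout `move()` to update immutable state tuples; the definitions below are plausible reconstructions for illustration, not necessarily the project's actual helpers.

```python
def tuple_replace(tup, *pairs):
    # Return a copy of tup with (index, value) replacements applied.
    lst = list(tup)
    for index, value in pairs:
        lst[index] = value
    return tuple(lst)

def tuple_alter(tup, *pairs):
    # Return a copy of tup with (index, fn) transformations applied.
    lst = list(tup)
    for index, fn in pairs:
        lst[index] = fn(lst[index])
    return tuple(lst)

# Move weight 5 from broker 0 to broker 2 without mutating the original.
weights = (10, 7, 3)
print(tuple_alter(weights, (0, lambda w: w - 5), (2, lambda w: w + 5)))
```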
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move_leadership(self, partition, new_leader):
"""Return a new state that is the result of changing the leadership of a single partition. :param partition: The partition index of the partition to change the leadership of. :param new_leader: The broker index of the new leader replica. """ |
new_state = copy(self)
# Update the partition replica tuple
source = new_state.replicas[partition][0]
new_leader_index = self.replicas[partition].index(new_leader)
new_state.replicas = tuple_alter(
self.replicas,
(partition, lambda replicas: tuple_replace(
replicas,
(0, replicas[new_leader_index]),
(new_leader_index, replicas[0]),
)),
)
new_state.pending_partitions = self.pending_partitions + (partition, )
# Update the leader count
new_state.broker_leader_counts = tuple_alter(
self.broker_leader_counts,
(source, lambda leader_count: leader_count - 1),
(new_leader, lambda leader_count: leader_count + 1),
)
# Update the broker leader weights
partition_weight = self.partition_weights[partition]
new_state.broker_leader_weights = tuple_alter(
self.broker_leader_weights,
(source, lambda leader_weight: leader_weight - partition_weight),
(new_leader, lambda leader_weight: leader_weight + partition_weight),
)
# Update the total leader movement size
new_state.leader_movement_count += 1
return new_state |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assignment(self):
"""Return the partition assignment that this state represents.""" |
return {
partition.name: [
self.brokers[bid].id for bid in self.replicas[pid]
]
for pid, partition in enumerate(self.partitions)
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pending_assignment(self):
"""Return the pending partition assignment that this state represents.""" |
return {
self.partitions[pid].name: [
self.brokers[bid].id for bid in self.replicas[pid]
]
for pid in set(self.pending_partitions)
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_command(self):
"""replica_unavailability command, checks number of replicas not available for communication over all brokers in the Kafka cluster.""" |
fetch_unavailable_brokers = True
result = get_topic_partition_with_error(
self.cluster_config,
REPLICA_NOT_AVAILABLE_ERROR,
fetch_unavailable_brokers=fetch_unavailable_brokers,
)
if fetch_unavailable_brokers:
replica_unavailability, unavailable_brokers = result
else:
replica_unavailability = result
errcode = status_code.OK if not replica_unavailability else status_code.CRITICAL
out = _prepare_output(replica_unavailability, unavailable_brokers, self.args.verbose)
return errcode, out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_min_isr(zk, topic):
"""Return the min-isr for topic, or None if not specified""" |
ISR_CONF_NAME = 'min.insync.replicas'
try:
config = zk.get_topic_config(topic)
except NoNodeError:
return None
if ISR_CONF_NAME in config['config']:
return int(config['config'][ISR_CONF_NAME])
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _process_metadata_response(topics, zk, default_min_isr):
"""Returns not in sync partitions.""" |
not_in_sync_partitions = []
for topic_name, partitions in topics.items():
min_isr = get_min_isr(zk, topic_name) or default_min_isr
if min_isr is None:
continue
for metadata in partitions.values():
cur_isr = len(metadata.isr)
if cur_isr < min_isr:
not_in_sync_partitions.append({
'isr': cur_isr,
'min_isr': min_isr,
'topic': metadata.topic,
'partition': metadata.partition,
})
return not_in_sync_partitions |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_partition(self, partition):
"""Remove partition from partition list.""" |
if partition in self._partitions:
# Remove partition from set
self._partitions.remove(partition)
# Remove broker from replica list of partition
partition.replicas.remove(self)
else:
raise ValueError(
'Partition: {topic_id}:{partition_id} not found in broker '
'{broker_id}'.format(
topic_id=partition.topic.id,
partition_id=partition.partition_id,
broker_id=self._id,
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_partition(self, partition):
"""Add partition to partition list.""" |
assert(partition not in self._partitions)
# Add partition to existing set
self._partitions.add(partition)
# Add broker to replica list
partition.add_replica(self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move_partition(self, partition, broker_destination):
"""Move partition to destination broker and adjust replicas.""" |
self.remove_partition(partition)
broker_destination.add_partition(partition) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count_partitions(self, topic):
"""Return count of partitions for given topic.""" |
return sum(1 for p in topic.partitions if p in self.partitions) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request_leadership(self, opt_count, skip_brokers, skip_partitions):
"""Under-balanced broker requests leadership from current leader, on the pretext that it recursively can maintain its leadership count as optimal. :key_terms: leader-balanced: Count of brokers as leader is at least opt-count Algorithm: ========= Step-1: Broker will request leadership from current-leader of partitions it belongs to. Step-2: Current-leaders will grant their leadership if one of these happens:- a) Either they remain leader-balanced. b) Or they will recursively request leadership from other partitions until they are become leader-balanced. If both of these conditions fail, they will revoke their leadership-grant Step-3: If current-broker becomes leader-balanced it will return otherwise it moves ahead with next partition. """ |
# Possible partitions which can grant leadership to broker
owned_partitions = list(filter(
lambda p: self is not p.leader and len(p.replicas) > 1,
self.partitions,
))
for partition in owned_partitions:
# Partition not available to grant leadership when:
# 1. Broker is already under leadership change or
# 2. Partition has already granted leadership before
if partition.leader in skip_brokers or partition in skip_partitions:
continue
# Current broker is granted leadership temporarily
prev_leader = partition.swap_leader(self)
# Partition shouldn't be used again
skip_partitions.append(partition)
# Continue if prev-leader remains balanced
# If leadership of prev_leader is to be revoked, it is considered balanced
if prev_leader.count_preferred_replica() >= opt_count or \
prev_leader.revoked_leadership:
# If current broker is leader-balanced return else
# request next-partition
if self.count_preferred_replica() >= opt_count:
return
else:
continue
else: # prev-leader (broker) became unbalanced
# Append skip-brokers list so that it is not unbalanced further
skip_brokers.append(prev_leader)
# Try recursively arrange leadership for prev-leader
prev_leader.request_leadership(opt_count, skip_brokers, skip_partitions)
# If prev-leader couldn't be leader-balanced
# revert its previous grant to current-broker
if prev_leader.count_preferred_replica() < opt_count:
# Partition can be used again for rebalancing
skip_partitions.remove(partition)
partition.swap_leader(prev_leader)
# Try requesting leadership from next partition
continue
else:
# If prev-leader successfully balanced
skip_partitions.append(partition)
# Removing from skip-broker list, since it can now again be
# used for granting leadership for some other partition
skip_brokers.remove(prev_leader)
if self.count_preferred_replica() >= opt_count:
# Return if current-broker is leader-balanced
return
else:
continue |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def donate_leadership(self, opt_count, skip_brokers, used_edges):
"""Over-loaded brokers tries to donate their leadership to one of their followers recursively until they become balanced. :key_terms: used_edges: Represent list of tuple/edges (partition, prev-leader, new-leader), which have already been used for donating leadership from prev-leader to new-leader in same partition before. skip_brokers: This is to avoid using same broker recursively for balancing to prevent loops. :Algorithm: * Over-loaded leader tries to donate its leadership to one of its followers * Follower will try to be balanced recursively if it becomes over-balanced * If it is successful, over-loaded leader moves to next partition if required, return otherwise. * If it is unsuccessful, it tries for next-follower or next-partition whatever or returns if none available. """ |
owned_partitions = list(filter(
lambda p: self is p.leader and len(p.replicas) > 1,
self.partitions,
))
for partition in owned_partitions:
# Skip using same partition with broker if already used before
potential_new_leaders = list(filter(
lambda f: f not in skip_brokers,
partition.followers,
))
for follower in potential_new_leaders:
# Don't swap the broker-pair if already swapped before
# in same partition
if (partition, self, follower) in used_edges:
continue
partition.swap_leader(follower)
used_edges.append((partition, follower, self))
# new-leader didn't unbalance
if follower.count_preferred_replica() <= opt_count + 1:
# over-broker balanced
# If over-broker is the one which needs to be revoked from leadership
# it's considered balanced only if its preferred replica count is 0
if (self.count_preferred_replica() <= opt_count + 1 and not self.revoked_leadership) or \
(self.count_preferred_replica() == 0 and self.revoked_leadership):
return
else:
# Try next-partition, not another follower
break
else: # new-leader (broker) became over-balanced
skip_brokers.append(follower)
follower.donate_leadership(opt_count, skip_brokers, used_edges)
# new-leader couldn't be balanced, revert
if follower.count_preferred_replica() > opt_count + 1:
used_edges.append((partition, follower, self))
partition.swap_leader(self)
# Try next leader or partition
continue
else:
# New-leader was successfully balanced
used_edges.append((partition, follower, self))
# New-leader can be reused
skip_brokers.remove(follower)
# If broker is the one which needs to be revoked from leadership
# it's considered balanced only if its preferred replica count is 0
if (self.count_preferred_replica() <= opt_count + 1 and not self.revoked_leadership) or \
(self.count_preferred_replica() == 0 and self.revoked_leadership):
# Now broker is balanced
return
else:
# Try next-partition, not another follower
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ssh_client(host):
"""Start an ssh client. :param host: the host :type host: str :returns: ssh client :rtype: Paramiko client """ |
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host)
return ssh |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_files_cmd(data_path, minutes, start_time, end_time):
"""Find the log files depending on their modification time. :param data_path: the path to the Kafka data directory :type data_path: str :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str :returns: the find command :rtype: str """ |
if minutes:
return FIND_MINUTES_COMMAND.format(
data_path=data_path,
minutes=minutes,
)
if start_time:
if end_time:
return FIND_RANGE_COMMAND.format(
data_path=data_path,
start_time=start_time,
end_time=end_time,
)
else:
return FIND_START_COMMAND.format(
data_path=data_path,
start_time=start_time,
) |
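The `FIND_*_COMMAND` templates used above are defined elsewhere in kafka-utils and are not shown in this record. A minimal sketch of the selection logic with hypothetical `find(1)` templates (the real constants may differ):

```python
# Hypothetical command templates -- the real FIND_*_COMMAND constants live
# elsewhere in kafka-utils and may use different find(1) options.
FIND_MINUTES_COMMAND = 'find "{data_path}" -type f -name "*.log" -mmin -{minutes}'
FIND_START_COMMAND = 'find "{data_path}" -type f -name "*.log" -newermt "{start_time}"'
FIND_RANGE_COMMAND = (
    'find "{data_path}" -type f -name "*.log" '
    '-newermt "{start_time}" -not -newermt "{end_time}"'
)


def find_files_cmd(data_path, minutes, start_time, end_time):
    # Mirror of the selection logic above, using the hypothetical templates.
    # Falls through (returns None) if neither minutes nor start_time is set;
    # validate_args is expected to prevent that.
    if minutes:
        return FIND_MINUTES_COMMAND.format(data_path=data_path, minutes=minutes)
    if start_time:
        if end_time:
            return FIND_RANGE_COMMAND.format(
                data_path=data_path, start_time=start_time, end_time=end_time)
        return FIND_START_COMMAND.format(data_path=data_path, start_time=start_time)


cmd = find_files_cmd("/var/kafka", 30, None, None)
# cmd == 'find "/var/kafka" -type f -name "*.log" -mmin -30'
```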
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_corrupted_files_cmd(java_home, files):
"""Check the file corruption of the specified files. :param java_home: the JAVA_HOME :type java_home: string :param files: list of files to be checked :type files: list of string """ |
files_str = ",".join(files)
check_command = CHECK_COMMAND.format(
ionice=IONICE,
java_home=java_home,
files=files_str,
)
# One line per message can generate several MB/s of data
# Use pre-filtering on the server side to reduce it
command = "{check_command} | {reduce_output}".format(
check_command=check_command,
reduce_output=REDUCE_OUTPUT,
)
return command |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_output_lines_from_command(host, command):
"""Execute a command on the specified host, returning a list of output lines. :param host: the host name :type host: str :param command: the command :type command: str """
with closing(ssh_client(host)) as ssh:
_, stdout, stderr = ssh.exec_command(command)
lines = stdout.read().splitlines()
report_stderr(host, stderr)
return lines |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_files(data_path, brokers, minutes, start_time, end_time):
"""Find all the Kafka log files on the broker that have been modified in the specified time range. start_time and end_time should be in the format specified by TIME_FORMAT_REGEX. :param data_path: the path to the log files on the broker :type data_path: str :param brokers: the brokers :type brokers: list of (broker_id, host) pairs :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str :returns: the files :rtype: list of (broker, host, list of file paths) tuples """
command = find_files_cmd(data_path, minutes, start_time, end_time)
pool = Pool(len(brokers))
result = pool.map(
partial(get_output_lines_from_command, command=command),
[host for broker, host in brokers])
return [(broker, host, files)
for (broker, host), files
in zip(brokers, result)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_output(host, output):
"""Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str """ |
current_file = None
for line in output.readlines():
file_name_search = FILE_PATH_REGEX.search(line)
if file_name_search:
current_file = file_name_search.group(1)
continue
if INVALID_MESSAGE_REGEX.match(line) or INVALID_BYTES_REGEX.match(line):
print_line(host, current_file, line, "ERROR")
elif VALID_MESSAGE_REGEX.match(line) or \
line.startswith('Starting offset:'):
continue
else:
print_line(host, current_file, line, "UNEXPECTED OUTPUT") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def print_line(host, path, line, line_type):
"""Print a dump tool line to stdout. :param host: the source host :type host: str :param path: the path to the file that is being analyzed :type path: str :param line: the line to be printed :type line: str :param line_type: a header for the line :type line_type: str """ |
print(
"{ltype} Host: {host}, File: {path}".format(
ltype=line_type,
host=host,
path=path,
)
)
print("{ltype} Output: {line}".format(ltype=line_type, line=line)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_files_on_host(java_home, host, files, batch_size):
"""Check the files on the host. Files are grouped together in groups of batch_size files. The dump class will be executed on each batch, sequentially. :param java_home: the JAVA_HOME of the broker :type java_home: str :param host: the host where the tool will be executed :type host: str :param files: the list of files to be analyzed :type files: list of str :param batch_size: the size of each batch :type batch_size: int """ |
with closing(ssh_client(host)) as ssh:
for i, batch in enumerate(chunks(files, batch_size)):
command = check_corrupted_files_cmd(java_home, batch)
_, stdout, stderr = ssh.exec_command(command)
report_stderr(host, stderr)
print(
" {host}: file {n_file} of {total}".format(
host=host,
n_file=(i * batch_size),
total=len(files),
)
)
parse_output(host, stdout) |
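`check_files_on_host` batches files with a `chunks` helper that is not shown in this record; a plausible sketch, assuming it yields successive fixed-size slices:

```python
def chunks(items, n):
    # Assumed helper: yield successive n-sized slices from a list.
    for i in range(0, len(items), n):
        yield items[i:i + n]


batches = list(chunks(['f1', 'f2', 'f3', 'f4', 'f5'], 2))
# batches == [['f1', 'f2'], ['f3', 'f4'], ['f5']]
```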
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_partition_leaders(cluster_config):
"""Return the current leaders of all partitions. Partitions are returned as a "topic-partition" string. :param cluster_config: the cluster :type cluster_config: kafka_utils.utils.config.ClusterConfig :returns: leaders for partitions :rtype: map of ("topic-partition", broker_id) pairs """ |
client = KafkaClient(cluster_config.broker_list)
result = {}
for topic, topic_data in six.iteritems(client.topic_partitions):
for partition, p_data in six.iteritems(topic_data):
topic_partition = topic + "-" + str(partition)
result[topic_partition] = p_data.leader
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_tp_from_file(file_path):
"""Return the name of the topic-partition given the path to the file. :param file_path: the path to the log file :type file_path: str :returns: the name of the topic-partition, ex. "topic_name-0" :rtype: str """ |
match = TP_FROM_FILE_REGEX.match(file_path)
if not match:
print("File path is not valid: " + file_path)
sys.exit(1)
return match.group(1) |
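`TP_FROM_FILE_REGEX` is defined elsewhere; assuming the usual Kafka layout `<data_path>/<topic>-<partition>/<segment>.log`, one plausible pattern (hypothetical, the real regex may differ):

```python
import re

# Hypothetical pattern extracting "topic-partition" from a segment path.
TP_FROM_FILE_REGEX = re.compile(r'.*/([^/]+-[0-9]+)/[^/]+\.log$')

match = TP_FROM_FILE_REGEX.match('/var/kafka/my_topic-3/00000000000000000000.log')
topic_partition = match.group(1)
# topic_partition == 'my_topic-3'
```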
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter_leader_files(cluster_config, broker_files):
"""Given a list of broker files, filters out all the files that are in the replicas. :param cluster_config: the cluster :type cluster_config: kafka_utils.utils.config.ClusterConfig :param broker_files: the broker files :returns: the filtered list """ |
print("Filtering leaders")
leader_of = get_partition_leaders(cluster_config)
result = []
for broker, host, files in broker_files:
filtered = []
for file_path in files:
tp = get_tp_from_file(file_path)
if tp not in leader_of or leader_of[tp] == broker:
filtered.append(file_path)
result.append((broker, host, filtered))
print(
"Broker: {broker}, leader of {l_count} over {f_count} files".format(
broker=broker,
l_count=len(filtered),
f_count=len(files),
)
)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_cluster(cluster_config, data_path, java_home, check_replicas, batch_size, minutes, start_time, end_time):
"""Check the integrity of the Kafka log files in a cluster. start_time and end_time should be in the format specified by TIME_FORMAT_REGEX. :param data_path: the path to the log folder on the broker :type data_path: str :param java_home: the JAVA_HOME of the broker :type java_home: str :param check_replicas: also checks the replica files :type check_replicas: bool :param batch_size: the size of the batch :type batch_size: int :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str """ |
brokers = get_broker_list(cluster_config)
broker_files = find_files(data_path, brokers, minutes, start_time, end_time)
if not check_replicas: # remove replicas
broker_files = filter_leader_files(cluster_config, broker_files)
processes = []
print("Starting {n} parallel processes".format(n=len(broker_files)))
try:
for broker, host, files in broker_files:
print(
" Broker: {host}, {n} files to check".format(
host=host,
n=len(files)),
)
p = Process(
name="dump_process_" + host,
target=check_files_on_host,
args=(java_home, host, files, batch_size),
)
p.start()
processes.append(p)
print("Processes running:")
for process in processes:
process.join()
except KeyboardInterrupt:
print("Terminating all processes")
for process in processes:
process.terminate()
process.join()
print("All processes terminated")
sys.exit(1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_args(args):
"""Basic option validation. Returns False if the options are not valid, True otherwise. :param args: the command line options :type args: map """
if not args.minutes and not args.start_time:
print("Error: missing --minutes or --start-time")
return False
if args.minutes and args.start_time:
print("Error: --minutes shouldn't be specified if --start-time is used")
return False
if args.end_time and not args.start_time:
print("Error: --end-time can't be used without --start-time")
return False
if args.minutes and args.minutes <= 0:
print("Error: --minutes must be > 0")
return False
if args.start_time and not TIME_FORMAT_REGEX.match(args.start_time):
print("Error: --start-time format is not valid")
print("Example format: '2015-11-26 11:00:00'")
return False
if args.end_time and not TIME_FORMAT_REGEX.match(args.end_time):
print("Error: --end-time format is not valid")
print("Example format: '2015-11-26 11:00:00'")
return False
if args.batch_size <= 0:
print("Error: --batch-size must be > 0")
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def separate_groups(groups, key, total):
"""Separate the groups into over-loaded and under-loaded groups. The revised over-loaded groups increase the choice space for future selection of the most suitable group based on search criteria. For example: Given the groups (a:4, b:4, c:3, d:2) where the number represents the number of elements in each group, _smart_separate_groups sets 'a' and 'c' as optimal, 'b' as over-loaded and 'd' as under-loaded. separate_groups then adds 'a' to the over-loaded groups, allowing selection between 'a' and 'b' when transferring an element to 'd'. :param groups: list of groups :param key: function to retrieve element count from group :param total: total number of elements to distribute :returns: sorted lists of over-loaded (descending) and under-loaded (ascending) groups """
optimum, extra = compute_optimum(len(groups), total)
over_loaded, under_loaded, optimal = _smart_separate_groups(groups, key, total)
# If every group is optimal return
if not extra:
return over_loaded, under_loaded
# Some groups in optimal may have a number of elements that is optimum + 1.
# In this case they should be considered over_loaded.
potential_under_loaded = [
group for group in optimal
if key(group) == optimum
]
potential_over_loaded = [
group for group in optimal
if key(group) > optimum
]
revised_under_loaded = under_loaded + potential_under_loaded
revised_over_loaded = over_loaded + potential_over_loaded
return (
sorted(revised_over_loaded, key=key, reverse=True),
sorted(revised_under_loaded, key=key),
) |
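`compute_optimum` and `_smart_separate_groups` are not shown in this record; the arithmetic behind them is presumably a `divmod`. A sketch of the classification for the docstring's example:

```python
counts = {'a': 4, 'b': 4, 'c': 3, 'd': 2}
total = sum(counts.values())                  # 13
optimum, extra = divmod(total, len(counts))   # optimum == 3, extra == 1

# With `extra` leftovers, groups holding optimum + 1 elements would still be
# acceptable, but separate_groups keeps them in the over-loaded pool so they
# remain candidates for donating an element to an under-loaded group.
over_loaded = sorted(
    (g for g, c in counts.items() if c > optimum),
    key=counts.get, reverse=True,
)
under_loaded = sorted(
    (g for g, c in counts.items() if c < optimum),
    key=counts.get,
)
# over_loaded == ['a', 'b'], under_loaded == ['d']; 'c' is exactly optimal.
```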
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def active_brokers(self):
"""Return set of brokers that are not inactive or decommissioned.""" |
return {
broker
for broker in self._brokers
if not broker.inactive and not broker.decommissioned
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_broker(self, broker):
"""Add broker to current broker-list.""" |
if broker not in self._brokers:
self._brokers.add(broker)
else:
self.log.warning(
'Broker {broker_id} already present in '
'replication-group {rg_id}'.format(
broker_id=broker.id,
rg_id=self._id,
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count_replica(self, partition):
"""Return count of replicas of given partition.""" |
return sum(1 for b in partition.replicas if b in self.brokers) |
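The same counting idiom, shown on plain identifiers instead of broker objects:

```python
# Brokers belonging to this replication group (stand-ins for Broker objects).
group_brokers = {'broker-1', 'broker-2'}
# Replica placement of one partition across the whole cluster.
partition_replicas = ['broker-1', 'broker-3', 'broker-4']

# Count how many of the partition's replicas live inside this group.
replica_count = sum(1 for b in partition_replicas if b in group_brokers)
# replica_count == 1
```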
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def acquire_partition(self, partition, source_broker):
"""Move a partition from a broker to any of the eligible brokers of the replication group. :param partition: Partition to move :param source_broker: Broker the partition currently belongs to """ |
broker_dest = self._elect_dest_broker(partition)
if not broker_dest:
raise NotEligibleGroupError(
"No eligible brokers to accept partition {p}".format(p=partition),
)
source_broker.move_partition(partition, broker_dest) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _select_broker_pair(self, rg_destination, victim_partition):
"""Select best-fit source and destination brokers based on partition count and presence of partition over the broker. * Get overloaded and underloaded brokers Best-fit Selection Criteria: Source broker: Select broker containing the victim-partition with maximum partitions. Destination broker: NOT containing the victim-partition with minimum partitions. If no such broker found, return first broker. This helps in ensuring:- * Topic-partitions are distributed across brokers. * Partition-count is balanced across replication-groups. """ |
broker_source = self._elect_source_broker(victim_partition)
broker_destination = rg_destination._elect_dest_broker(victim_partition)
return broker_source, broker_destination |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _elect_source_broker(self, victim_partition, broker_subset=None):
"""Select first over loaded broker having victim_partition. Note: The broker with maximum siblings of victim-partitions (same topic) is selected to reduce topic-partition imbalance. """ |
broker_subset = broker_subset or self._brokers
over_loaded_brokers = sorted(
[
broker
for broker in broker_subset
if victim_partition in broker.partitions and not broker.inactive
],
key=lambda b: len(b.partitions),
reverse=True,
)
if not over_loaded_brokers:
return None
broker_topic_partition_cnt = [
(broker, broker.count_partitions(victim_partition.topic))
for broker in over_loaded_brokers
]
max_count_pair = max(
broker_topic_partition_cnt,
key=lambda ele: ele[1],
)
return max_count_pair[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _elect_dest_broker(self, victim_partition):
"""Select first under loaded brokers preferring not having partition of same topic as victim partition. """ |
under_loaded_brokers = sorted(
[
broker
for broker in self._brokers
if (victim_partition not in broker.partitions and
not broker.inactive and
not broker.decommissioned)
],
key=lambda b: len(b.partitions)
)
if not under_loaded_brokers:
return None
broker_topic_partition_cnt = [
(broker, broker.count_partitions(victim_partition.topic))
for broker in under_loaded_brokers
if victim_partition not in broker.partitions
]
min_count_pair = min(
broker_topic_partition_cnt,
key=lambda ele: ele[1],
)
return min_count_pair[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rebalance_brokers(self):
"""Rebalance partition-count across brokers.""" |
total_partitions = sum(len(b.partitions) for b in self.brokers)
blacklist = set(b for b in self.brokers if b.decommissioned)
active_brokers = self.get_active_brokers() - blacklist
if not active_brokers:
raise EmptyReplicationGroupError("No active brokers in {rg}".format(rg=self._id))
# Separate brokers based on partition count
over_loaded_brokers, under_loaded_brokers = separate_groups(
active_brokers,
lambda b: len(b.partitions),
total_partitions,
)
# Decommissioned brokers are considered overloaded until they have
# no more partitions assigned.
over_loaded_brokers += [b for b in blacklist if not b.empty()]
if not over_loaded_brokers and not under_loaded_brokers:
self.log.info(
'Brokers of replication-group: %s already balanced for '
'partition-count.',
self._id,
)
return
sibling_distance = self.generate_sibling_distance()
while under_loaded_brokers and over_loaded_brokers:
# Get best-fit source-broker, destination-broker and partition
broker_source, broker_destination, victim_partition = \
self._get_target_brokers(
over_loaded_brokers,
under_loaded_brokers,
sibling_distance,
)
# No valid source or target brokers found
if broker_source and broker_destination:
# Move partition
self.log.debug(
'Moving partition {p_name} from broker {broker_source} to '
'broker {broker_destination}'
.format(
p_name=victim_partition.name,
broker_source=broker_source.id,
broker_destination=broker_destination.id,
),
)
broker_source.move_partition(victim_partition, broker_destination)
sibling_distance = self.update_sibling_distance(
sibling_distance,
broker_destination,
victim_partition.topic,
)
else:
# Brokers are balanced or could not be balanced further
break
# Re-evaluate under and over-loaded brokers
over_loaded_brokers, under_loaded_brokers = separate_groups(
active_brokers,
lambda b: len(b.partitions),
total_partitions,
)
# As before add brokers to decommission.
over_loaded_brokers += [b for b in blacklist if not b.empty()] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_target_brokers(self, over_loaded_brokers, under_loaded_brokers, sibling_distance):
"""Pick best-suitable source-broker, destination-broker and partition to balance partition-count over brokers in given replication-group. """ |
# Sort given brokers to ensure determinism
over_loaded_brokers = sorted(
over_loaded_brokers,
key=lambda b: len(b.partitions),
reverse=True,
)
under_loaded_brokers = sorted(
under_loaded_brokers,
key=lambda b: len(b.partitions),
)
# pick pair of brokers from source and destination brokers with
# minimum same-partition-count
# Set result in format: (source, dest, preferred-partition)
target = (None, None, None)
min_distance = sys.maxsize
best_partition = None
for source in over_loaded_brokers:
for dest in under_loaded_brokers:
# A decommissioned broker can have less partitions than
# destination. We consider it a valid source because we want to
# move all the partitions out from it.
if (len(source.partitions) - len(dest.partitions) > 1 or
source.decommissioned):
best_partition = source.get_preferred_partition(
dest,
sibling_distance[dest][source],
)
# If no eligible partition continue with next broker.
if best_partition is None:
continue
distance = sibling_distance[dest][source][best_partition.topic]
if distance < min_distance:
min_distance = distance
target = (source, dest, best_partition)
else:
# If relatively-unbalanced then all brokers in destination
# will be thereafter, return from here.
break
return target |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_sibling_distance(self):
"""Generate a dict containing the distance computed as difference in in number of partitions of each topic from under_loaded_brokers to over_loaded_brokers. Negative distance means that the destination broker has got less partitions of a certain topic than the source broker. returns: dict {dest: {source: {topic: distance}}} """ |
sibling_distance = defaultdict(lambda: defaultdict(dict))
topics = {p.topic for p in self.partitions}
for source in self.brokers:
for dest in self.brokers:
if source != dest:
for topic in topics:
sibling_distance[dest][source][topic] = \
dest.count_partitions(topic) - \
source.count_partitions(topic)
return sibling_distance |
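The distance computation can be sketched on plain dicts of per-topic partition counts (hypothetical broker names):

```python
from collections import defaultdict

# Per-topic partition counts for two hypothetical brokers.
partition_counts = {
    'dest': {'t1': 1, 't2': 0},
    'source': {'t1': 3, 't2': 2},
}

sibling_distance = defaultdict(lambda: defaultdict(dict))
for topic in ('t1', 't2'):
    # Same formula as above: dest count minus source count per topic.
    sibling_distance['dest']['source'][topic] = (
        partition_counts['dest'][topic] - partition_counts['source'][topic]
    )
# {'t1': -2, 't2': -2}: negative means dest holds fewer partitions of the topic.
```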
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_sibling_distance(self, sibling_distance, dest, topic):
"""Update the sibling distance for topic and destination broker.""" |
for source in six.iterkeys(sibling_distance[dest]):
sibling_distance[dest][source][topic] = \
dest.count_partitions(topic) - \
source.count_partitions(topic)
return sibling_distance |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move_partition_replica(self, under_loaded_rg, eligible_partition):
"""Move partition to under-loaded replication-group if possible.""" |
# Evaluate possible source and destination-broker
source_broker, dest_broker = self._get_eligible_broker_pair(
under_loaded_rg,
eligible_partition,
)
if source_broker and dest_broker:
self.log.debug(
'Moving partition {p_name} from broker {source_broker} to '
'replication-group:broker {rg_dest}:{dest_broker}'.format(
p_name=eligible_partition.name,
source_broker=source_broker.id,
dest_broker=dest_broker.id,
rg_dest=under_loaded_rg.id,
),
)
# Move partition if eligible brokers found
source_broker.move_partition(eligible_partition, dest_broker) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_eligible_broker_pair(self, under_loaded_rg, eligible_partition):
"""Evaluate and return source and destination broker-pair from over-loaded and under-loaded replication-group if possible, return None otherwise. Return source broker with maximum partitions and destination broker with minimum partitions based on following conditions:- 1) At-least one broker in under-loaded group which does not have victim-partition. This is because a broker cannot have duplicate replica. 2) At-least one broker in over-loaded group which has victim-partition """ |
under_brokers = list(filter(
lambda b: eligible_partition not in b.partitions,
under_loaded_rg.brokers,
))
over_brokers = list(filter(
lambda b: eligible_partition in b.partitions,
self.brokers,
))
# Get source and destination broker
source_broker, dest_broker = None, None
if over_brokers:
source_broker = max(
over_brokers,
key=lambda broker: len(broker.partitions),
)
if under_brokers:
dest_broker = min(
under_brokers,
key=lambda broker: len(broker.partitions),
)
return (source_broker, dest_broker) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def merge_result(res):
""" Merge all items in `res` into a list. This command is used when sending a command to multiple nodes and the result from each node should be merged into a single list. """
if not isinstance(res, dict):
raise ValueError('Value should be of dict type')
result = set([])
for _, v in res.items():
for value in v:
result.add(value)
return list(result) |
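A self-contained demo, using a condensed but equivalent version of the function:

```python
def merge_result(res):
    # Condensed but equivalent version of the function above: union all the
    # per-node result lists into one deduplicated list.
    if not isinstance(res, dict):
        raise ValueError('Value should be of dict type')
    result = set()
    for v in res.values():
        result.update(v)
    return list(result)


merged = merge_result({'node1': ['a', 'b'], 'node2': ['b', 'c']})
# sorted(merged) == ['a', 'b', 'c'] -- duplicates across nodes collapse.
```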
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def first_key(res):
""" Returns the first result for the given command. If more than 1 result is returned then a `RedisClusterException` is raised. """
if not isinstance(res, dict):
raise ValueError('Value should be of dict type')
if len(res.keys()) != 1:
raise RedisClusterException("More than 1 result from command")
return list(res.values())[0] |
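A self-contained demo with a stand-in for `RedisClusterException`:

```python
class RedisClusterException(Exception):
    """Stand-in for the real aredis exception class."""


def first_key(res):
    # Equivalent to the function above.
    if not isinstance(res, dict):
        raise ValueError('Value should be of dict type')
    if len(res) != 1:
        raise RedisClusterException("More than 1 result from command")
    return list(res.values())[0]


value = first_key({'node1': 42})
# value == 42

# Multiple per-node results violate the contract and raise.
raised = False
try:
    first_key({'node-a': 1, 'node-b': 2})
except RedisClusterException:
    raised = True
```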
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clusterdown_wrapper(func):
""" Wrapper for CLUSTERDOWN error handling. If the cluster reports it is down it is assumed that: - connection_pool was disconnected - connection_pool was reset - refresh_table_asap set to True It will try 3 times to rerun the command and raises ClusterDownError if it continues to fail. """
@wraps(func)
async def inner(*args, **kwargs):
for _ in range(0, 3):
try:
return await func(*args, **kwargs)
except ClusterDownError:
# Try again with the new cluster setup. All other errors
# should be raised.
pass
# If it fails 3 times then raise exception back to caller
raise ClusterDownError("CLUSTERDOWN error. Unable to rebuild the cluster")
return inner |
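The retry behaviour can be exercised with a stand-in exception and a coroutine that fails twice before succeeding (a sketch; the real `ClusterDownError` lives in aredis):

```python
import asyncio
from functools import wraps


class ClusterDownError(Exception):
    """Stand-in for the real aredis exception."""


def clusterdown_wrapper(func):
    # Same retry shape as above: three attempts, then give up.
    @wraps(func)
    async def inner(*args, **kwargs):
        for _ in range(3):
            try:
                return await func(*args, **kwargs)
            except ClusterDownError:
                pass
        raise ClusterDownError("CLUSTERDOWN error. Unable to rebuild the cluster")
    return inner


attempts = []


@clusterdown_wrapper
async def flaky():
    # Fails on the first two attempts, succeeds on the third.
    attempts.append(1)
    if len(attempts) < 3:
        raise ClusterDownError("down")
    return "ok"


result = asyncio.run(flaky())
# result == "ok" after two swallowed ClusterDownError attempts
```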
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_debug_object(response):
"Parse the results of Redis's DEBUG OBJECT command into a Python dict"
# The 'type' of the object is the first item in the response, but isn't
# prefixed with a name
response = nativestr(response)
response = 'type:' + response
response = dict([kv.split(':') for kv in response.split()])
# parse some expected int values from the string response
# note: this cmd isn't spec'd so these may not appear in all redis versions
int_fields = ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle')
for field in int_fields:
if field in response:
response[field] = int(response[field])
return response |
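A demo of the same parsing steps on a sample `DEBUG OBJECT` payload, with a minimal stand-in for `nativestr`:

```python
def nativestr(x):
    # Stand-in for redis's nativestr: decode bytes to str when needed.
    return x.decode('utf-8') if isinstance(x, bytes) else x


raw = b'Value at:0x7f refcount:1 encoding:embstr serializedlength:6 lru:12 lru_seconds_idle:30'

# Same steps as parse_debug_object: prefix the unnamed type, split into
# key:value pairs, then coerce the known integer fields.
response = 'type:' + nativestr(raw)
parsed = dict(kv.split(':') for kv in response.split())
for field in ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle'):
    if field in parsed:
        parsed[field] = int(parsed[field])
# parsed['type'] == 'Value', parsed['serializedlength'] == 6
```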
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_info(response):
"Parse the result of Redis's INFO command into a Python dict"
info = {}
response = nativestr(response)
def get_value(value):
if ',' not in value or '=' not in value:
try:
if '.' in value:
return float(value)
else:
return int(value)
except ValueError:
return value
else:
sub_dict = {}
for item in value.split(','):
k, v = item.rsplit('=', 1)
sub_dict[k] = get_value(v)
return sub_dict
for line in response.splitlines():
if line and not line.startswith('#'):
if line.find(':') != -1:
key, value = line.split(':', 1)
info[key] = get_value(value)
else:
# if the line isn't splittable, append it to the "__raw__" key
info.setdefault('__raw__', []).append(line)
return info |
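The same parsing rules applied to a small sample of `INFO` output, with the nested `get_value` helper rewritten as a standalone function:

```python
def parse_info_value(value):
    # Mirrors the nested get_value helper above: plain scalars become
    # int/float when possible, "k=v,k=v" values become nested dicts.
    if ',' not in value or '=' not in value:
        try:
            return float(value) if '.' in value else int(value)
        except ValueError:
            return value
    sub = {}
    for item in value.split(','):
        k, v = item.rsplit('=', 1)
        sub[k] = parse_info_value(v)
    return sub


raw = "# Keyspace\r\nredis_version:6.2.6\r\nconnected_clients:1\r\ndb0:keys=2,expires=0,avg_ttl=0\r\n"
info = {}
for line in raw.splitlines():
    if line and not line.startswith('#') and ':' in line:
        key, value = line.split(':', 1)
        info[key] = parse_info_value(value)
# info['db0'] == {'keys': 2, 'expires': 0, 'avg_ttl': 0}
```
Note that `6.2.6` stays a string: it contains a dot but fails `float()`, so the `ValueError` branch returns it unchanged.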
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def slowlog_get(self, num=None):
""" Get the entries from the slowlog. If ``num`` is specified, get the most recent ``num`` items. """ |
args = ['SLOWLOG GET']
if num is not None:
args.append(num)
return await self.execute_command(*args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cache(self, name, cache_class=Cache, identity_generator_class=IdentityGenerator, compressor_class=Compressor, serializer_class=Serializer, *args, **kwargs):
""" Return a cache object using default identity generator, serializer and compressor. ``name`` is used to identify the series of your cache ``cache_class`` Cache is for normal use and HerdCache is used in case of Thundering Herd Problem ``identity_generator_class`` is the class used to generate the real unique key in cache, can be overwritten to meet your special needs. It should provide `generate` API ``compressor_class`` is the class used to compress cache in redis, can be overwritten with API `compress` and `decompress` retained. ``serializer_class`` is the class used to serialize content before compress, can be overwritten with API `serialize` and `deserialize` retained. """ |
return cache_class(self, app=name,
identity_generator_class=identity_generator_class,
compressor_class=compressor_class,
serializer_class=serializer_class,
*args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def hincrby(self, name, key, amount=1):
"Increment the value of ``key`` in hash ``name`` by ``amount``"
return await self.execute_command('HINCRBY', name, key, amount) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def hincrbyfloat(self, name, key, amount=1.0):
""" Increment the value of ``key`` in hash ``name`` by floating ``amount`` """ |
return await self.execute_command('HINCRBYFLOAT', name, key, amount) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def hset(self, name, key, value):
""" Set ``key`` to ``value`` within hash ``name`` Returns 1 if HSET created a new field, otherwise 0 """ |
return await self.execute_command('HSET', name, key, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def hsetnx(self, name, key, value):
""" Set ``key`` to ``value`` within hash ``name`` if ``key`` does not exist. Returns 1 if HSETNX created a field, otherwise 0. """ |
return await self.execute_command('HSETNX', name, key, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def hmset(self, name, mapping):
""" Set key to value within hash ``name`` for each corresponding key and value from the ``mapping`` dict. """ |
if not mapping:
raise DataError("'hmset' with 'mapping' of length 0")
items = []
for pair in iteritems(mapping):
items.extend(pair)
return await self.execute_command('HMSET', name, *items) |
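The flattening step interleaves keys and values so they can be passed as positional command arguments:

```python
mapping = {'field1': 'a', 'field2': 'b'}

# Same flattening as hmset: each (key, value) pair is appended in order,
# producing the argument list for e.g. HMSET name field1 a field2 b.
items = []
for pair in mapping.items():
    items.extend(pair)
# items == ['field1', 'a', 'field2', 'b']
```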