| body_hash (string, 64 chars) | body (string, 23-109k chars) | docstring (string, 1-57k chars) | path (string, 4-198 chars) | name (string, 1-115 chars) | repository_name (string, 7-111 chars) | repository_stars (float64, 0-191k) | lang (string, 1 class) | body_without_docstring (string, 14-108k chars) | unified (string, 45-133k chars) |
|---|---|---|---|---|---|---|---|---|---|
5e2a75d3e9b1caae057acba9d808c83beca09416d138e005c9d038f51006ddd4 | def might_contains_all(self, collection):
'\n The might_contains_all method enables you to check whether the Bloom Filter may contain each element from the collection.\n\n :param collection: a collection with elements to be checked.\n :return: True if the Bloom Filter may contain each element (remember that the result can be a false positive).\n False if the Bloom Filter cannot contain each element.\n '
for item in collection:
if (not self.might_contains(item)):
return False
return True | The might_contains_all method enables you to check whether the Bloom Filter may contain each element from the collection.
:param collection: a collection with elements to be checked.
:return: True if the Bloom Filter may contain each element (remember that the result can be a false positive).
False if the Bloom Filter cannot contain each element. | bloom_filter.py | might_contains_all | DahDev/BloomFilter-Python | 0 | python | def might_contains_all(self, collection):
'\n The might_contains_all method enables you to check whether the Bloom Filter may contain each element from the collection.\n\n :param collection: a collection with elements to be checked.\n :return: True if the Bloom Filter may contain each element (remember that the result can be a false positive).\n False if the Bloom Filter cannot contain each element.\n '
for item in collection:
if (not self.might_contains(item)):
return False
return True | def might_contains_all(self, collection):
'\n The might_contains_all method enables you to check whether the Bloom Filter may contain each element from the collection.\n\n :param collection: a collection with elements to be checked.\n :return: True if the Bloom Filter may contain each element (remember that the result can be a false positive).\n False if the Bloom Filter cannot contain each element.\n '
for item in collection:
if (not self.might_contains(item)):
return False
return True<|docstring|>The might_contains_all method enables you to check whether the Bloom Filter may contain each element from the collection.
:param collection: a collection with elements to be checked.
:return: True if the Bloom Filter may contain each element (remember that the result can be a false positive).
False if the Bloom Filter cannot contain each element.<|endoftext|> |
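The row above only shows the short-circuiting loop over `might_contains`; a minimal self-contained Bloom-filter sketch (class and helper names are hypothetical, not the repository's actual implementation) shows the same pattern end to end:

```python
import hashlib

class TinyBloom:
    """Minimal Bloom filter sketch; not the repository's implementation."""

    def __init__(self, size=4096, number_of_hash=3):
        self.size = size
        self.number_of_hash = number_of_hash
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive k bit positions by salting the hash input with the hash index.
        for i in range(self.number_of_hash):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contains(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

    def might_contains_all(self, collection):
        # Short-circuits on the first definite miss, like the method above.
        return all(self.might_contains(item) for item in collection)
```

Note that `might_contains_all([])` is vacuously True, matching the loop in the dataset row.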
e3ab5d0c620020b6cfb2775f8d3a01a447f11e5c7f62c5f9dddb628f9ee1b983 | def get_expected_probability_of_false_positives(self):
'\n The get_expected_probability_of_false_positives method enables you to get expected probability of false positives.\n\n :return: expected probability of false positives.\n '
return self.get_probability_of_false_positives(self.expected_number_of_elements) | The get_expected_probability_of_false_positives method enables you to get expected probability of false positives.
:return: expected probability of false positives. | bloom_filter.py | get_expected_probability_of_false_positives | DahDev/BloomFilter-Python | 0 | python | def get_expected_probability_of_false_positives(self):
'\n The get_expected_probability_of_false_positives method enables you to get expected probability of false positives.\n\n :return: expected probability of false positives.\n '
return self.get_probability_of_false_positives(self.expected_number_of_elements) | def get_expected_probability_of_false_positives(self):
'\n The get_expected_probability_of_false_positives method enables you to get expected probability of false positives.\n\n :return: expected probability of false positives.\n '
return self.get_probability_of_false_positives(self.expected_number_of_elements)<|docstring|>The get_expected_probability_of_false_positives method enables you to get expected probability of false positives.
:return: expected probability of false positives.<|endoftext|> |
fc939373dbeeb86897684ee649b9b5827018fecf11b29902807619d75ced591d | def get_current_probability_of_false_positives(self):
'\n The get_current_probability_of_false_positives method enables you to get actual probability of false positives.\n\n :return: actual probability of false positives.\n '
return self.get_probability_of_false_positives(self.number_of_elements) | The get_current_probability_of_false_positives method enables you to get actual probability of false positives.
:return: actual probability of false positives. | bloom_filter.py | get_current_probability_of_false_positives | DahDev/BloomFilter-Python | 0 | python | def get_current_probability_of_false_positives(self):
'\n The get_current_probability_of_false_positives method enables you to get actual probability of false positives.\n\n :return: actual probability of false positives.\n '
return self.get_probability_of_false_positives(self.number_of_elements) | def get_current_probability_of_false_positives(self):
'\n The get_current_probability_of_false_positives method enables you to get actual probability of false positives.\n\n :return: actual probability of false positives.\n '
return self.get_probability_of_false_positives(self.number_of_elements)<|docstring|>The get_current_probability_of_false_positives method enables you to get actual probability of false positives.
:return: actual probability of false positives.<|endoftext|> |
6c58d017ce2314830f258a9f67727010a51d66a0e861cb44f3e5216c06904256 | def get_probability_of_false_positives(self, number_of_elements):
'\n The get_probability_of_false_positives method enables you to get probability of false positives based on parameter.\n\n :param number_of_elements: a number of elements in Bloom Filter.\n :return: probability of false positives based on parameter.\n '
return pow((1 - math.exp((((- self.number_of_hash) * number_of_elements) / self.size))), self.number_of_hash) | The get_probability_of_false_positives method enables you to get probability of false positives based on parameter.
:param number_of_elements: a number of elements in Bloom Filter.
:return: probability of false positives based on parameter. | bloom_filter.py | get_probability_of_false_positives | DahDev/BloomFilter-Python | 0 | python | def get_probability_of_false_positives(self, number_of_elements):
'\n The get_probability_of_false_positives method enables you to get probability of false positives based on parameter.\n\n :param number_of_elements: a number of elements in Bloom Filter.\n :return: probability of false positives based on parameter.\n '
return pow((1 - math.exp((((- self.number_of_hash) * number_of_elements) / self.size))), self.number_of_hash) | def get_probability_of_false_positives(self, number_of_elements):
'\n The get_probability_of_false_positives method enables you to get probability of false positives based on parameter.\n\n :param number_of_elements: a number of elements in Bloom Filter.\n :return: probability of false positives based on parameter.\n '
return pow((1 - math.exp((((- self.number_of_hash) * number_of_elements) / self.size))), self.number_of_hash)<|docstring|>The get_probability_of_false_positives method enables you to get probability of false positives based on parameter.
:param number_of_elements: a number of elements in Bloom Filter.
:return: probability of false positives based on parameter.<|endoftext|> |
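The expression in `get_probability_of_false_positives` is the standard Bloom-filter false-positive estimate p = (1 - e^(-k*n/m))^k, with k = `number_of_hash`, n = number of inserted elements, and m = `size`. A standalone sketch of the same formula:

```python
import math

def false_positive_probability(size, number_of_hash, number_of_elements):
    """p = (1 - e^(-k*n/m))^k, the classic Bloom-filter estimate."""
    k, n, m = number_of_hash, number_of_elements, size
    return (1 - math.exp(-k * n / m)) ** k
```

For example, m = 1000 bits, k = 3 hashes, and n = 100 inserted elements give p of roughly 0.017, and the estimate grows monotonically with n.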
7400451106f072d6ba603d468d6bb26f0dd22e8c99812b085df2835c13d6d4cd | def clear(self):
'\n The clear method enables you to delete all elements from Bloom Filter.\n '
self.number_of_elements = 0
self.bit_set.setall(0) | The clear method enables you to delete all elements from Bloom Filter. | bloom_filter.py | clear | DahDev/BloomFilter-Python | 0 | python | def clear(self):
'\n \n '
self.number_of_elements = 0
self.bit_set.setall(0) | def clear(self):
'\n \n '
self.number_of_elements = 0
self.bit_set.setall(0)<|docstring|>The clear method enables you to delete all elements from Bloom Filter.<|endoftext|> |
61afff7d5ab655ebd00bf4cb3db381bca68bdfff38374438242bed1a6210018c | def is_empty(self):
'\n The is_empty method enables you to check if Bloom Filter is empty.\n\n :return: True, if Bloom Filter is empty.\n False, if Bloom Filter is not empty.\n '
return (self.number_of_elements == 0) | The is_empty method enables you to check if Bloom Filter is empty.
:return: True, if Bloom Filter is empty.
False, if Bloom Filter is not empty. | bloom_filter.py | is_empty | DahDev/BloomFilter-Python | 0 | python | def is_empty(self):
'\n The is_empty method enables you to check if Bloom Filter is empty.\n\n :return: True, if Bloom Filter is empty.\n False, if Bloom Filter is not empty.\n '
return (self.number_of_elements == 0) | def is_empty(self):
'\n The is_empty method enables you to check if Bloom Filter is empty.\n\n :return: True, if Bloom Filter is empty.\n False, if Bloom Filter is not empty.\n '
return (self.number_of_elements == 0)<|docstring|>The is_empty method enables you to check if Bloom Filter is empty.
:return: True, if Bloom Filter is empty.
False, if Bloom Filter is not empty.<|endoftext|> |
f4e6c9c720800ad521e01057f3dbaa7b635941d7102c9c4b981aaf0dc70c21cc | def get_bits_per_element(self):
'\n The get_bits_per_element method enables you to get the actual number of bits per element.\n\n :return: the actual number of bits per element.\n :raise ValueError: when the actual number of inserted elements is 0.\n '
if (self.number_of_elements <= 0):
raise ValueError('Bloom Filter is empty!')
return (self.size / self.number_of_elements) | The get_bits_per_element method enables you to get the actual number of bits per element.
:return: the actual number of bits per element.
:raise ValueError: when the actual number of inserted elements is 0. | bloom_filter.py | get_bits_per_element | DahDev/BloomFilter-Python | 0 | python | def get_bits_per_element(self):
'\n The get_bits_per_element method enables you to get the actual number of bits per element.\n\n :return: the actual number of bits per element.\n :raise ValueError: when the actual number of inserted elements is 0.\n '
if (self.number_of_elements <= 0):
raise ValueError('Bloom Filter is empty!')
return (self.size / self.number_of_elements) | def get_bits_per_element(self):
'\n The get_bits_per_element method enables you to get the actual number of bits per element.\n\n :return: the actual number of bits per element.\n :raise ValueError: when the actual number of inserted elements is 0.\n '
if (self.number_of_elements <= 0):
raise ValueError('Bloom Filter is empty!')
return (self.size / self.number_of_elements)<|docstring|>The get_bits_per_element method enables you to get the actual number of bits per element.
:return: the actual number of bits per element.
:raise ValueError: when the actual number of inserted elements is 0.<|endoftext|> |
70f078ff2a466735e405309f5f5e1436f947a9e0a7ed7eb97f8ec8aacaa3b9bc | def get_value_from_generated_hash(self, data, hash_func):
'\n The get_value_from_generated_hash method enables you to get value from created hash.\n :param data: data to hash.\n :param hash_func: hash function.\n :return: value from hash.\n '
h = hash_func(data)
return int(h.hexdigest(), 16) | The get_value_from_generated_hash method enables you to get value from created hash.
:param data: data to hash.
:param hash_func: hash function.
:return: value from hash. | bloom_filter.py | get_value_from_generated_hash | DahDev/BloomFilter-Python | 0 | python | def get_value_from_generated_hash(self, data, hash_func):
'\n The get_value_from_generated_hash method enables you to get value from created hash.\n :param data: data to hash.\n :param hash_func: hash function.\n :return: value from hash.\n '
h = hash_func(data)
return int(h.hexdigest(), 16) | def get_value_from_generated_hash(self, data, hash_func):
'\n The get_value_from_generated_hash method enables you to get value from created hash.\n :param data: data to hash.\n :param hash_func: hash function.\n :return: value from hash.\n '
h = hash_func(data)
return int(h.hexdigest(), 16)<|docstring|>The get_value_from_generated_hash method enables you to get value from created hash.
:param data: data to hash.
:param hash_func: hash function.
:return: value from hash.<|endoftext|> |
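`get_value_from_generated_hash` converts a digest's hex string into one large integer; a Bloom filter would then typically reduce that integer modulo the bit-array size to pick a bit position. A sketch with `hashlib` (the modulo step is an assumption about how the value is consumed, since the row above does not show it):

```python
import hashlib

def value_from_hash(data: bytes, hash_func=hashlib.sha256) -> int:
    """Interpret the full hex digest as a single integer."""
    return int(hash_func(data).hexdigest(), 16)

def bit_index(data: bytes, size: int) -> int:
    # Assumed follow-up step (not shown above): fold the integer into [0, size).
    return value_from_hash(data) % size
```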
8a5e41570662b4dce802443b2adec3d091035eb5137be69718b5f09f97a27f18 | def __repr__(self):
'Just the binding, without the id key.'
return ('<NexusPortBinding(%s,%s,%s,%s)>' % (self.port_id, self.vlan_id, self.switch_ip, self.instance_id)) | Just the binding, without the id key. | neutron/plugins/ml2/drivers/cisco/nexus/nexus_models_v2.py | __repr__ | leenheer/neutron | 10 | python | def __repr__(self):
return ('<NexusPortBinding(%s,%s,%s,%s)>' % (self.port_id, self.vlan_id, self.switch_ip, self.instance_id)) | def __repr__(self):
return ('<NexusPortBinding(%s,%s,%s,%s)>' % (self.port_id, self.vlan_id, self.switch_ip, self.instance_id))<|docstring|>Just the binding, without the id key.<|endoftext|> |
76711a3e7c14f7bd87da566489ee3857322434a4fd4a4c13353f84edf76dcff6 | def __eq__(self, other):
'Compare only the binding, without the id key.'
return ((self.port_id == other.port_id) and (self.vlan_id == other.vlan_id) and (self.switch_ip == other.switch_ip) and (self.instance_id == other.instance_id)) | Compare only the binding, without the id key. | neutron/plugins/ml2/drivers/cisco/nexus/nexus_models_v2.py | __eq__ | leenheer/neutron | 10 | python | def __eq__(self, other):
return ((self.port_id == other.port_id) and (self.vlan_id == other.vlan_id) and (self.switch_ip == other.switch_ip) and (self.instance_id == other.instance_id)) | def __eq__(self, other):
return ((self.port_id == other.port_id) and (self.vlan_id == other.vlan_id) and (self.switch_ip == other.switch_ip) and (self.instance_id == other.instance_id))<|docstring|>Compare only the binding, without the id key.<|endoftext|> |
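The hand-written `__repr__`/`__eq__` pair above deliberately excludes the database `id` key from both comparison and display. In modern Python the same pattern falls out of a dataclass with the `id` field marked `compare=False, repr=False` (a sketch, not the actual Neutron model):

```python
from dataclasses import dataclass, field

@dataclass
class PortBinding:
    """Equality and repr ignore the database id, mirroring the
    hand-written __eq__/__repr__ pair (sketch; not the Neutron model)."""
    port_id: str
    vlan_id: int
    switch_ip: str
    instance_id: str
    id: int = field(default=0, compare=False, repr=False)
```

Two bindings that differ only in `id` compare equal and render identically.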
f1a3fcda082c7569158022d73db707063500c5ab114460d1ca660e0418bfcacb | def save(self, *args, **kwargs):
'Automatically create the root index template node'
super(Index, self).save(*args, **kwargs)
IndexTemplateNode.objects.get_or_create(parent=None, index=self) | Automatically create the root index template node | mayan/apps/document_indexing/models.py | save | camerondphillips/MAYAN | 0 | python | def save(self, *args, **kwargs):
super(Index, self).save(*args, **kwargs)
IndexTemplateNode.objects.get_or_create(parent=None, index=self) | def save(self, *args, **kwargs):
super(Index, self).save(*args, **kwargs)
IndexTemplateNode.objects.get_or_create(parent=None, index=self)<|docstring|>Automatically create the root index template node<|endoftext|> |
01515429f2a610c264191db43d035660f26a17bb89f538f682311be6600efe74 | @classmethod
def from_dict(cls, json_object):
'Constructs a `TransformerConfig` from a Python dictionary of parameters.'
_params = {}
for key in json_object:
_params[key] = json_object[key]
return cls(**_params) | Constructs a `TransformerConfig` from a Python dictionary of parameters. | research/nlp/atae_lstm/src/config.py | from_dict | mindspore-ai/models | 77 | python | @classmethod
def from_dict(cls, json_object):
_params = {}
for key in json_object:
_params[key] = json_object[key]
return cls(**_params) | @classmethod
def from_dict(cls, json_object):
_params = {}
for key in json_object:
_params[key] = json_object[key]
return cls(**_params)<|docstring|>Constructs a `TransformerConfig` from a Python dictionary of parameters.<|endoftext|> |
95d0be5850c506a8d7eb40c36ed75e457f14ef4312338825838d300313834049 | @classmethod
def from_json_file(cls, json_file):
'Constructs a `TransformerConfig` from a json file of parameters.'
with open(json_file, 'r') as reader:
return cls.from_dict(json.load(reader)) | Constructs a `TransformerConfig` from a json file of parameters. | research/nlp/atae_lstm/src/config.py | from_json_file | mindspore-ai/models | 77 | python | @classmethod
def from_json_file(cls, json_file):
with open(json_file, 'r') as reader:
return cls.from_dict(json.load(reader)) | @classmethod
def from_json_file(cls, json_file):
with open(json_file, 'r') as reader:
return cls.from_dict(json.load(reader))<|docstring|>Constructs a `TransformerConfig` from a json file of parameters.<|endoftext|> |
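`from_json_file` simply layers `json.load` over `from_dict`, which in turn unpacks the dictionary into the constructor. A minimal config class with the same pair of constructors (the field names here are hypothetical, not TransformerConfig's actual parameters):

```python
import json

class TinyConfig:
    """Sketch of the TransformerConfig construction pattern."""

    def __init__(self, hidden_size=256, num_layers=2):
        self.hidden_size = hidden_size
        self.num_layers = num_layers

    @classmethod
    def from_dict(cls, json_object):
        # Copy the keys into kwargs, as in the dataset row above.
        return cls(**{key: json_object[key] for key in json_object})

    @classmethod
    def from_json_file(cls, json_file):
        with open(json_file, 'r') as reader:
            return cls.from_dict(json.load(reader))
```

Keys absent from the dict or file fall back to the constructor defaults.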
37a2fabbf34fa196634b773ab45b7ac4538bf22fac31b1fd44b21c3fd84c33ea | def extractUnreliabletranslationsWordpressCom(item):
"\n\tParser for 'unreliabletranslations.wordpress.com'\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ((item['tags'] == ['5:50']) and (chp != 5)):
return buildReleaseMessageWithType(item, 'One Night Lovely Wife $5.50: Overwhelming Black Belly Husband', vol, chp, frag=frag, postfix=postfix, tl_type='translated')
tagmap = [('wlod', 'White Lotus Overturned Daily', 'translated'), ('mchtm!', 'My Chief Husband, Too Mensao!', 'translated'), ('vod', "I Became the Villain's Own Daughter", 'translated'), ('PRC', 'PRC', 'translated'), ('Loiterous', 'Loiterous', 'oel')]
for (tagname, name, tl_type) in tagmap:
if (tagname in item['tags']):
return buildReleaseMessageWithType(item, name, vol, chp, frag=frag, postfix=postfix, tl_type=tl_type)
return False | Parser for 'unreliabletranslations.wordpress.com' | WebMirror/management/rss_parser_funcs/feed_parse_extractUnreliabletranslationsWordpressCom.py | extractUnreliabletranslationsWordpressCom | fake-name/ReadableWebProxy | 193 | python | def extractUnreliabletranslationsWordpressCom(item):
"\n\t\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ((item['tags'] == ['5:50']) and (chp != 5)):
return buildReleaseMessageWithType(item, 'One Night Lovely Wife $5.50: Overwhelming Black Belly Husband', vol, chp, frag=frag, postfix=postfix, tl_type='translated')
tagmap = [('wlod', 'White Lotus Overturned Daily', 'translated'), ('mchtm!', 'My Chief Husband, Too Mensao!', 'translated'), ('vod', "I Became the Villain's Own Daughter", 'translated'), ('PRC', 'PRC', 'translated'), ('Loiterous', 'Loiterous', 'oel')]
for (tagname, name, tl_type) in tagmap:
if (tagname in item['tags']):
return buildReleaseMessageWithType(item, name, vol, chp, frag=frag, postfix=postfix, tl_type=tl_type)
return False | def extractUnreliabletranslationsWordpressCom(item):
"\n\t\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ((item['tags'] == ['5:50']) and (chp != 5)):
return buildReleaseMessageWithType(item, 'One Night Lovely Wife $5.50: Overwhelming Black Belly Husband', vol, chp, frag=frag, postfix=postfix, tl_type='translated')
tagmap = [('wlod', 'White Lotus Overturned Daily', 'translated'), ('mchtm!', 'My Chief Husband, Too Mensao!', 'translated'), ('vod', "I Became the Villain's Own Daughter", 'translated'), ('PRC', 'PRC', 'translated'), ('Loiterous', 'Loiterous', 'oel')]
for (tagname, name, tl_type) in tagmap:
if (tagname in item['tags']):
return buildReleaseMessageWithType(item, name, vol, chp, frag=frag, postfix=postfix, tl_type=tl_type)
return False<|docstring|>Parser for 'unreliabletranslations.wordpress.com'<|endoftext|> |
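The parser's `tagmap` is a dispatch table from a release tag to a (series name, translation type) pair, scanned in order until a tag matches. The lookup loop can be exercised on its own (a sketch with a truncated table, not the full feed parser):

```python
def match_tagmap(tags, tagmap):
    """Return (name, tl_type) for the first tagmap entry whose tag
    appears in the item's tags, or None when nothing matches."""
    for tagname, name, tl_type in tagmap:
        if tagname in tags:
            return name, tl_type
    return None

# Two entries copied from the row above, for illustration.
TAGMAP = [('wlod', 'White Lotus Overturned Daily', 'translated'),
          ('Loiterous', 'Loiterous', 'oel')]
```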
97f7d23deeeb0fcabaa390c3f2601ed60b8fa032a9c70b7cd2549e3029c300bb | @blueprint.app_template_global()
def add_page_arg(args, page) -> dict[(str, Any)]:
"Add the 'page' value.\n\n Used for pagination.\n "
if (args is None):
args = {}
args['page'] = page
return args | Add the 'page' value.
Used for pagination. | byceps/blueprints/common/core/views.py | add_page_arg | GSH-LAN/byceps | 0 | python | @blueprint.app_template_global()
def add_page_arg(args, page) -> dict[(str, Any)]:
"Add the 'page' value.\n\n Used for pagination.\n "
if (args is None):
args = {}
args['page'] = page
return args | @blueprint.app_template_global()
def add_page_arg(args, page) -> dict[(str, Any)]:
"Add the 'page' value.\n\n Used for pagination.\n "
if (args is None):
args = {}
args['page'] = page
return args<|docstring|>Add the 'page' value.
Used for pagination.<|endoftext|> |
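`add_page_arg` normalizes a possibly missing args dict before setting the `page` key; its behavior can be restated and run outside Flask as:

```python
def add_page_arg(args, page):
    """Return args with 'page' set, creating the dict when args is None.
    Note: a dict passed in is mutated in place, as in the original helper."""
    if args is None:
        args = {}
    args['page'] = page
    return args
```

Because the helper mutates its input, callers that reuse one args dict across several pagination links should pass a copy per link.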
3c41e707c480d369ad026830373450a0a4d7ad76ccd17eaa90d662b623acfb85 | def betterThan(self, point2compare, altLinInEq=False, bestFeasiblePoint=None):
'\n usage: result = involvedPoint.better(pointToCompare)\n\n returns True if the involvedPoint is better than pointToCompare\n and False otherwise\n (if NOT better, mb same fval and same residuals or residuals less than desired contol)\n '
if self.p.isUC:
return (self.f() < point2compare.f())
if altLinInEq:
(mr, point2compareResidual) = (self.mr_alt(bestFeasiblePoint=bestFeasiblePoint), point2compare.mr_alt(bestFeasiblePoint=bestFeasiblePoint))
else:
(mr, point2compareResidual) = (self.mr(), point2compare.mr())
(self_nNaNs, point2compare_nNaNs) = (self.nNaNs(), point2compare.nNaNs())
if (point2compare_nNaNs > self_nNaNs):
return True
elif (point2compare_nNaNs < self_nNaNs):
return False
if (self_nNaNs == 0):
if ((mr > self.p.contol) and (mr > point2compareResidual)):
return False
elif ((point2compareResidual > self.p.contol) and (point2compareResidual > mr)):
return True
else:
if ((mr == 0) and (point2compareResidual == 0)):
if (self.p.solver.__name__ not in ('interalg', 'de')):
self.p.err('you should provide at least one active constraint in each point from R^n where some constraints are undefined')
return (mr < point2compareResidual)
point2compareF_is_NaN = isnan(point2compare.f())
selfF_is_NaN = isnan(self.f())
if isPyPy:
if (type(point2compareF_is_NaN) == ndarray):
point2compareF_is_NaN = asscalar(point2compareF_is_NaN)
if (type(selfF_is_NaN) == ndarray):
selfF_is_NaN = asscalar(selfF_is_NaN)
if (not point2compareF_is_NaN):
if (not selfF_is_NaN):
return (self.f() < point2compare.f())
else:
return False
elif selfF_is_NaN:
return (mr < point2compareResidual)
else:
return True | usage: result = involvedPoint.better(pointToCompare)
returns True if the involvedPoint is better than pointToCompare
and False otherwise
(if NOT better, mb same fval and same residuals or residuals less than desired contol) | OpenOpt/openopt/kernel/Point.py | betterThan | PythonCharmers/OOSuite | 5 | python | def betterThan(self, point2compare, altLinInEq=False, bestFeasiblePoint=None):
'\n usage: result = involvedPoint.better(pointToCompare)\n\n returns True if the involvedPoint is better than pointToCompare\n and False otherwise\n (if NOT better, mb same fval and same residuals or residuals less than desired contol)\n '
if self.p.isUC:
return (self.f() < point2compare.f())
if altLinInEq:
(mr, point2compareResidual) = (self.mr_alt(bestFeasiblePoint=bestFeasiblePoint), point2compare.mr_alt(bestFeasiblePoint=bestFeasiblePoint))
else:
(mr, point2compareResidual) = (self.mr(), point2compare.mr())
(self_nNaNs, point2compare_nNaNs) = (self.nNaNs(), point2compare.nNaNs())
if (point2compare_nNaNs > self_nNaNs):
return True
elif (point2compare_nNaNs < self_nNaNs):
return False
if (self_nNaNs == 0):
if ((mr > self.p.contol) and (mr > point2compareResidual)):
return False
elif ((point2compareResidual > self.p.contol) and (point2compareResidual > mr)):
return True
else:
if ((mr == 0) and (point2compareResidual == 0)):
if (self.p.solver.__name__ not in ('interalg', 'de')):
self.p.err('you should provide at least one active constraint in each point from R^n where some constraints are undefined')
return (mr < point2compareResidual)
point2compareF_is_NaN = isnan(point2compare.f())
selfF_is_NaN = isnan(self.f())
if isPyPy:
if (type(point2compareF_is_NaN) == ndarray):
point2compareF_is_NaN = asscalar(point2compareF_is_NaN)
if (type(selfF_is_NaN) == ndarray):
selfF_is_NaN = asscalar(selfF_is_NaN)
if (not point2compareF_is_NaN):
if (not selfF_is_NaN):
return (self.f() < point2compare.f())
else:
return False
elif selfF_is_NaN:
return (mr < point2compareResidual)
else:
return True | def betterThan(self, point2compare, altLinInEq=False, bestFeasiblePoint=None):
'\n usage: result = involvedPoint.better(pointToCompare)\n\n returns True if the involvedPoint is better than pointToCompare\n and False otherwise\n (if NOT better, mb same fval and same residuals or residuals less than desired contol)\n '
if self.p.isUC:
return (self.f() < point2compare.f())
if altLinInEq:
(mr, point2compareResidual) = (self.mr_alt(bestFeasiblePoint=bestFeasiblePoint), point2compare.mr_alt(bestFeasiblePoint=bestFeasiblePoint))
else:
(mr, point2compareResidual) = (self.mr(), point2compare.mr())
(self_nNaNs, point2compare_nNaNs) = (self.nNaNs(), point2compare.nNaNs())
if (point2compare_nNaNs > self_nNaNs):
return True
elif (point2compare_nNaNs < self_nNaNs):
return False
if (self_nNaNs == 0):
if ((mr > self.p.contol) and (mr > point2compareResidual)):
return False
elif ((point2compareResidual > self.p.contol) and (point2compareResidual > mr)):
return True
else:
if ((mr == 0) and (point2compareResidual == 0)):
if (self.p.solver.__name__ not in ('interalg', 'de')):
self.p.err('you should provide at least one active constraint in each point from R^n where some constraints are undefined')
return (mr < point2compareResidual)
point2compareF_is_NaN = isnan(point2compare.f())
selfF_is_NaN = isnan(self.f())
if isPyPy:
if (type(point2compareF_is_NaN) == ndarray):
point2compareF_is_NaN = asscalar(point2compareF_is_NaN)
if (type(selfF_is_NaN) == ndarray):
selfF_is_NaN = asscalar(selfF_is_NaN)
if (not point2compareF_is_NaN):
if (not selfF_is_NaN):
return (self.f() < point2compare.f())
else:
return False
elif selfF_is_NaN:
return (mr < point2compareResidual)
else:
return True<|docstring|>usage: result = involvedPoint.better(pointToCompare)
returns True if the involvedPoint is better than pointToCompare
and False otherwise
(if NOT better, mb same fval and same residuals or residuals less than desired contol)<|endoftext|> |
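`betterThan` orders points lexicographically: fewer NaN constraint values first, then maximal residual once it exceeds `contol`, then objective value. A simplified, self-contained version of that ordering (it deliberately omits the `altLinInEq` and NaN-objective branches of the original):

```python
def better_than(p, q, contol):
    """p and q are (n_nans, max_residual, f) triples; True if p is better."""
    n_p, r_p, f_p = p
    n_q, r_q, f_q = q
    if n_p != n_q:                  # fewer undefined constraint values wins
        return n_p < n_q
    if r_p > contol and r_p > r_q:  # p is clearly less feasible
        return False
    if r_q > contol and r_q > r_p:  # q is clearly less feasible
        return True
    return f_p < f_q                # otherwise compare objective values
```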
33640876126169817242c6db65b9774d82b60368cb0bb0c28069251268d7081c | def eggroll_compute_XY(X, Y):
'\n compute X * Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x * y)))
val = R.collect()
table = dict(val)
R.destroy()
return table | compute X * Y
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable | federatedml/ftl/eggroll_computation/util.py | eggroll_compute_XY | bentanust/FedRec | 32 | python | def eggroll_compute_XY(X, Y):
'\n compute X * Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x * y)))
val = R.collect()
table = dict(val)
R.destroy()
return table | def eggroll_compute_XY(X, Y):
'\n compute X * Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x * y)))
val = R.collect()
table = dict(val)
R.destroy()
return table<|docstring|>compute X * Y
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable<|endoftext|> |
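The eggroll helpers in these rows share one shape: join two DTables key by key, apply an elementwise function, then collect into a dict. With plain dicts standing in for DTables (an assumption; the real DTable is distributed and its values are arrays), `eggroll_compute_XY` reduces to:

```python
def compute_xy(X: dict, Y: dict) -> dict:
    """Elementwise product over matching keys; join semantics of DTable.join,
    sketched with plain dicts and scalar values."""
    return {k: X[k] * Y[k] for k in X.keys() & Y.keys()}
```

The sum, difference, and per-key-sum variants above differ only in the lambda passed to the join.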
be9d6e0d726df44b2c88672a81472f9fc2655d0a3e9e0af3af4d6ac0f949dc02 | def eggroll_compute_X_plus_Y(X, Y):
'\n compute X + Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x + y)))
val = R.collect()
table = dict(val)
R.destroy()
return table | compute X + Y
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable | federatedml/ftl/eggroll_computation/util.py | eggroll_compute_X_plus_Y | bentanust/FedRec | 32 | python | def eggroll_compute_X_plus_Y(X, Y):
'\n compute X + Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x + y)))
val = R.collect()
table = dict(val)
R.destroy()
return table | def eggroll_compute_X_plus_Y(X, Y):
'\n compute X + Y\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: (x + y)))
val = R.collect()
table = dict(val)
R.destroy()
return table<|docstring|>compute X + Y
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable<|endoftext|> |
fecd114533a295137f608ddb94819bd0b85d3c8fbe8ac9ec706cc62a1a6c211b | def eggroll_compute_hSum_XY(X, Y):
'\n compute np.sum(X * Y, axis=1)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: np.sum((x * y))))
val = R.collect()
table = dict(val)
R.destroy()
return table | compute np.sum(X * Y, axis=1)
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable | federatedml/ftl/eggroll_computation/util.py | eggroll_compute_hSum_XY | bentanust/FedRec | 32 | python | def eggroll_compute_hSum_XY(X, Y):
'\n compute np.sum(X * Y, axis=1)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: np.sum((x * y))))
val = R.collect()
table = dict(val)
R.destroy()
return table | def eggroll_compute_hSum_XY(X, Y):
'\n compute np.sum(X * Y, axis=1)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim)\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: np.sum((x * y))))
val = R.collect()
table = dict(val)
R.destroy()
return table<|docstring|>compute np.sum(X * Y, axis=1)
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim)
:return: a DTable<|endoftext|> |
ff8137f03ca1580e95cb588781d8679971462b1d89f5ab3ca1b844d5b99b72b1 | def eggroll_compute_vAvg_XY(X, Y, sample_dim):
'\n compute np.mean(X * Y, axis=0)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim) or (1, sample_dim)\n :param feature_dim:\n :param sample_dim:\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: ((y * x) / sample_dim)))
result = R.reduce((lambda agg_val, v: (agg_val + v)))
R.destroy()
return result | compute np.mean(X * Y, axis=0)
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim) or (1, sample_dim)
:param feature_dim:
:param sample_dim:
:return: a DTable | federatedml/ftl/eggroll_computation/util.py | eggroll_compute_vAvg_XY | bentanust/FedRec | 32 | python | def eggroll_compute_vAvg_XY(X, Y, sample_dim):
'\n compute np.mean(X * Y, axis=0)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim) or (1, sample_dim)\n :param feature_dim:\n :param sample_dim:\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: ((y * x) / sample_dim)))
result = R.reduce((lambda agg_val, v: (agg_val + v)))
R.destroy()
return result | def eggroll_compute_vAvg_XY(X, Y, sample_dim):
'\n compute np.mean(X * Y, axis=0)\n :param X: DTable, with shape (feature_dim, sample_dim)\n :param Y: DTable, with shape (feature_dim, sample_dim) or (1, sample_dim)\n :param feature_dim:\n :param sample_dim:\n :return: a DTable\n '
R = X.join(Y, (lambda x, y: ((y * x) / sample_dim)))
result = R.reduce((lambda agg_val, v: (agg_val + v)))
R.destroy()
return result<|docstring|>compute np.mean(X * Y, axis=0)
:param X: DTable, with shape (feature_dim, sample_dim)
:param Y: DTable, with shape (feature_dim, sample_dim) or (1, sample_dim)
:param feature_dim:
:param sample_dim:
:return: a DTable<|endoftext|> |
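`eggroll_compute_vAvg_XY` composes a join (multiply and divide by `sample_dim`) with a reduce (sum over keys), i.e. a mean of X*Y across the keyed axis. The same composition in plain Python, again with dicts standing in for DTables:

```python
from functools import reduce

def compute_v_avg_xy(X: dict, Y: dict, sample_dim: int):
    """Join (multiply, divide by sample_dim) then reduce (sum over keys)."""
    joined = {k: X[k] * Y[k] / sample_dim for k in X.keys() & Y.keys()}
    return reduce(lambda acc, v: acc + v, joined.values())
```

Dividing each product before the reduction, as the original does, yields the same result as summing first and dividing once.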
e907ae2dcff783e852811dd07085f7a99a418582446f1fbf30f5d10fd9081307 | def eggroll_encrypt(public_key, X):
'\n encrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: encrypt_matrix(public_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val | encrypt X
:param X: DTable
:return: a dictionary | federatedml/ftl/eggroll_computation/util.py | eggroll_encrypt | bentanust/FedRec | 32 | python | def eggroll_encrypt(public_key, X):
'\n encrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: encrypt_matrix(public_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val | def eggroll_encrypt(public_key, X):
'\n encrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: encrypt_matrix(public_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val<|docstring|>encrypt X
:param X: DTable
:return: a dictionary<|endoftext|> |
87f5334d5258305321f97715e4773a23f7c07d192e7d81a0af8375359c218b97 | def eggroll_decrypt(private_key, X):
'\n decrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: decrypt_matrix(private_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val | decrypt X
:param X: DTable
:return: a dictionary | federatedml/ftl/eggroll_computation/util.py | eggroll_decrypt | bentanust/FedRec | 32 | python | def eggroll_decrypt(private_key, X):
'\n decrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: decrypt_matrix(private_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val | def eggroll_decrypt(private_key, X):
'\n decrypt X\n :param X: DTable\n :return: a dictionary\n '
X2 = X.mapValues((lambda x: decrypt_matrix(private_key, x)))
val = X2.collect()
val = dict(val)
X2.destroy()
return val<|docstring|>decrypt X
:param X: DTable
:return: a dictionary<|endoftext|> |
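The encrypt/decrypt records above share one pattern: map an element-wise matrix transform over every value of a keyed table, then collect into a dict. A minimal round-trip sketch, with a toy additive shift standing in for the Paillier `encrypt_matrix`/`decrypt_matrix` calls (all names here are illustrative, and the toy cipher is not real encryption):

```python
def toy_encrypt_matrix(key, rows):
    # stand-in for encrypt_matrix: NOT Paillier, just an element-wise shift
    return [[v + key for v in row] for row in rows]

def toy_decrypt_matrix(key, rows):
    return [[v - key for v in row] for row in rows]

def table_encrypt(key, table):
    # mirrors X.mapValues(...) followed by dict(X2.collect())
    return {k: toy_encrypt_matrix(key, v) for k, v in table.items()}

def table_decrypt(key, table):
    return {k: toy_decrypt_matrix(key, v) for k, v in table.items()}
```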
018e34b502899c3931926aaea2cb00c179e356c4da23a422b086cd34991b5fd6 | @mpl.rc_context(fname=CONFIG_FILE)
def plot_activity_hours(images: pd.DataFrame, names: Union[(list, str, pd.Series)], kind: str='kde', polar: bool=False, hist_kws: dict=None, kde_kws: dict=None, polar_kws: dict=None) -> Union[(plt.Axes, plt.PolarAxes)]:
"\n Plots the activity hours of one or multiple taxa by grouping all\n observations into a 24-hour range.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n names : list, str or Series\n List of names to plot activity hours for.\n kind : str\n Type of plot. Values can be:\n\n - 'hist' for histogram.\n - 'kde' for kernel density estimate plot.\n polar : bool\n Whether to use a polar (i.e. circular projection) for the plot.\n If polar is True, kind must be one of 'area' or 'hist'. Otherwise\n it must be one of 'hist' or 'kde'.\n hist_kws : dict\n Keyword arguments passed to the seaborn.histplot() function. Only\n has effect if kind is 'hist' and polar is False.\n kde_kws : dict\n Keyword arguments passed to the seaborn.kde() function. Only\n has effect if kind is 'kde'.\n polar_kws : dict\n Keyword arguments passed to a local function when polar is True,\n regardless of kind. Possible arguments are:\n\n - 'density': True or False. Whether to compute density or\n counts. Default is False.\n - 'fill': True or False. Whether to fill the area under the\n line (when kind is 'area') or the rectangles (when kind is\n 'hist'). Default is True.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if isinstance(names, str):
names = [names]
if (hist_kws is None):
hist_kws = {}
if (kde_kws is None):
kde_kws = {}
if (polar_kws is None):
polar_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
inconsistent_names = (set(names) - set(taxa))
if len(inconsistent_names):
raise ValueError(f'{list(inconsistent_names)} were not found in images.')
images = images.copy()
images['taxon'] = taxa
images = images.loc[(images['taxon'].isin(names), :)].reset_index(drop=True)
images[_labels.images.date] = pd.to_datetime(images[_labels.images.date])
images['hour'] = (images[_labels.images.date].dt.hour + (images[_labels.images.date].dt.minute / 60))
images = images.loc[images.index.repeat(images[_labels.images.objects])].reset_index(drop=True)
if polar:
if (kind in ('area', 'hist')):
ax = _plot_polar(images, 'hour', hue='taxon', kind=kind, **polar_kws)
elif (kind == 'kde'):
raise ValueError("kind cannot be 'kde' when polar=True.")
else:
raise ValueError("kind must be one of ['area', 'hist']")
ax.set_theta_direction((- 1))
ax.set_theta_zero_location('N')
x_labels = [f'{h:02}:00' for h in np.arange(0, 24, 2)]
plt.thetagrids(np.arange(0, 360, (360 // 12)), x_labels)
else:
images = images[['taxon', 'hour']]
if (kind == 'area'):
raise ValueError("kind cannot be 'area' when polar=False.")
elif (kind == 'hist'):
ax = sns.histplot(data=images, x='hour', hue='taxon', binwidth=1, binrange=(0, 24), discrete=False, **hist_kws)
elif (kind == 'kde'):
ax = sns.kdeplot(data=images, x='hour', hue='taxon', **kde_kws)
else:
raise ValueError("kind must be one of ['hist', 'kde']")
x_ticks = np.arange(0, 26, 2)
x_labels = [f'{h:02}:00' for h in x_ticks]
ax.set_xlim((- 2), 26)
ax.set_xticks(x_ticks, labels=x_labels)
return ax | Plots the activity hours of one or multiple taxa by grouping all
observations into a 24-hour range.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
names : list, str or Series
List of names to plot activity hours for.
kind : str
Type of plot. Values can be:
- 'hist' for histogram.
- 'kde' for kernel density estimate plot.
polar : bool
Whether to use a polar (i.e. circular projection) for the plot.
If polar is True, kind must be one of 'area' or 'hist'. Otherwise
it must be one of 'hist' or 'kde'.
hist_kws : dict
Keyword arguments passed to the seaborn.histplot() function. Only
has effect if kind is 'hist' and polar is False.
kde_kws : dict
Keyword arguments passed to the seaborn.kde() function. Only
has effect if kind is 'kde'.
polar_kws : dict
Keyword arguments passed to a local function when polar is True,
regardless of kind. Possible arguments are:
- 'density': True or False. Whether to compute density or
counts. Default is False.
- 'fill': True or False. Whether to fill the area under the
line (when kind is 'area') or the rectangles (when kind is
'hist'). Default is True.
Returns
-------
Axes
Plot axes. | wiutils/plotting.py | plot_activity_hours | PEM-Humboldt/wiutils | 0 | python | @mpl.rc_context(fname=CONFIG_FILE)
def plot_activity_hours(images: pd.DataFrame, names: Union[(list, str, pd.Series)], kind: str='kde', polar: bool=False, hist_kws: dict=None, kde_kws: dict=None, polar_kws: dict=None) -> Union[(plt.Axes, plt.PolarAxes)]:
"\n Plots the activity hours of one or multiple taxa by grouping all\n observations into a 24-hour range.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n names : list, str or Series\n List of names to plot activity hours for.\n kind : str\n Type of plot. Values can be:\n\n - 'hist' for histogram.\n - 'kde' for kernel density estimate plot.\n polar : bool\n Whether to use a polar (i.e. circular projection) for the plot.\n If polar is True, kind must be one of 'area' or 'hist'. Otherwise\n it must be one of 'hist' or 'kde'.\n hist_kws : dict\n Keyword arguments passed to the seaborn.histplot() function. Only\n has effect if kind is 'hist' and polar is False.\n kde_kws : dict\n Keyword arguments passed to the seaborn.kde() function. Only\n has effect if kind is 'kde'.\n polar_kws : dict\n Keyword arguments passed to a local function when polar is True,\n regardless of kind. Possible arguments are:\n\n - 'density': True or False. Whether to compute density or\n counts. Default is False.\n - 'fill': True or False. Whether to fill the area under the\n line (when kind is 'area') or the rectangles (when kind is\n 'hist'). Default is True.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if isinstance(names, str):
names = [names]
if (hist_kws is None):
hist_kws = {}
if (kde_kws is None):
kde_kws = {}
if (polar_kws is None):
polar_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
inconsistent_names = (set(names) - set(taxa))
if len(inconsistent_names):
raise ValueError(f'{list(inconsistent_names)} were not found in images.')
images = images.copy()
images['taxon'] = taxa
images = images.loc[(images['taxon'].isin(names), :)].reset_index(drop=True)
images[_labels.images.date] = pd.to_datetime(images[_labels.images.date])
images['hour'] = (images[_labels.images.date].dt.hour + (images[_labels.images.date].dt.minute / 60))
images = images.loc[images.index.repeat(images[_labels.images.objects])].reset_index(drop=True)
if polar:
if (kind in ('area', 'hist')):
ax = _plot_polar(images, 'hour', hue='taxon', kind=kind, **polar_kws)
elif (kind == 'kde'):
raise ValueError("kind cannot be 'kde' when polar=True.")
else:
raise ValueError("kind must be one of ['area', 'hist']")
ax.set_theta_direction((- 1))
ax.set_theta_zero_location('N')
x_labels = [f'{h:02}:00' for h in np.arange(0, 24, 2)]
plt.thetagrids(np.arange(0, 360, (360 // 12)), x_labels)
else:
images = images[['taxon', 'hour']]
if (kind == 'area'):
raise ValueError("kind cannot be 'area' when polar=False.")
elif (kind == 'hist'):
ax = sns.histplot(data=images, x='hour', hue='taxon', binwidth=1, binrange=(0, 24), discrete=False, **hist_kws)
elif (kind == 'kde'):
ax = sns.kdeplot(data=images, x='hour', hue='taxon', **kde_kws)
else:
raise ValueError("kind must be one of ['hist', 'kde']")
x_ticks = np.arange(0, 26, 2)
x_labels = [f'{h:02}:00' for h in x_ticks]
ax.set_xlim((- 2), 26)
ax.set_xticks(x_ticks, labels=x_labels)
return ax | @mpl.rc_context(fname=CONFIG_FILE)
def plot_activity_hours(images: pd.DataFrame, names: Union[(list, str, pd.Series)], kind: str='kde', polar: bool=False, hist_kws: dict=None, kde_kws: dict=None, polar_kws: dict=None) -> Union[(plt.Axes, plt.PolarAxes)]:
"\n Plots the activity hours of one or multiple taxa by grouping all\n observations into a 24-hour range.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n names : list, str or Series\n List of names to plot activity hours for.\n kind : str\n Type of plot. Values can be:\n\n - 'hist' for histogram.\n - 'kde' for kernel density estimate plot.\n polar : bool\n Whether to use a polar (i.e. circular projection) for the plot.\n If polar is True, kind must be one of 'area' or 'hist'. Otherwise\n it must be one of 'hist' or 'kde'.\n hist_kws : dict\n Keyword arguments passed to the seaborn.histplot() function. Only\n has effect if kind is 'hist' and polar is False.\n kde_kws : dict\n Keyword arguments passed to the seaborn.kde() function. Only\n has effect if kind is 'kde'.\n polar_kws : dict\n Keyword arguments passed to a local function when polar is True,\n regardless of kind. Possible arguments are:\n\n - 'density': True or False. Whether to compute density or\n counts. Default is False.\n - 'fill': True or False. Whether to fill the area under the\n line (when kind is 'area') or the rectangles (when kind is\n 'hist'). Default is True.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if isinstance(names, str):
names = [names]
if (hist_kws is None):
hist_kws = {}
if (kde_kws is None):
kde_kws = {}
if (polar_kws is None):
polar_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
inconsistent_names = (set(names) - set(taxa))
if len(inconsistent_names):
raise ValueError(f'{list(inconsistent_names)} were not found in images.')
images = images.copy()
images['taxon'] = taxa
images = images.loc[(images['taxon'].isin(names), :)].reset_index(drop=True)
images[_labels.images.date] = pd.to_datetime(images[_labels.images.date])
images['hour'] = (images[_labels.images.date].dt.hour + (images[_labels.images.date].dt.minute / 60))
images = images.loc[images.index.repeat(images[_labels.images.objects])].reset_index(drop=True)
if polar:
if (kind in ('area', 'hist')):
ax = _plot_polar(images, 'hour', hue='taxon', kind=kind, **polar_kws)
elif (kind == 'kde'):
raise ValueError("kind cannot be 'kde' when polar=True.")
else:
raise ValueError("kind must be one of ['area', 'hist']")
ax.set_theta_direction((- 1))
ax.set_theta_zero_location('N')
x_labels = [f'{h:02}:00' for h in np.arange(0, 24, 2)]
plt.thetagrids(np.arange(0, 360, (360 // 12)), x_labels)
else:
images = images[['taxon', 'hour']]
if (kind == 'area'):
raise ValueError("kind cannot be 'area' when polar=False.")
elif (kind == 'hist'):
ax = sns.histplot(data=images, x='hour', hue='taxon', binwidth=1, binrange=(0, 24), discrete=False, **hist_kws)
elif (kind == 'kde'):
ax = sns.kdeplot(data=images, x='hour', hue='taxon', **kde_kws)
else:
raise ValueError("kind must be one of ['hist', 'kde']")
x_ticks = np.arange(0, 26, 2)
x_labels = [f'{h:02}:00' for h in x_ticks]
ax.set_xlim((- 2), 26)
ax.set_xticks(x_ticks, labels=x_labels)
return ax<|docstring|>Plots the activity hours of one or multiple taxa by grouping all
observations into a 24-hour range.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
names : list, str or Series
List of names to plot activity hours for.
kind : str
Type of plot. Values can be:
- 'hist' for histogram.
- 'kde' for kernel density estimate plot.
polar : bool
Whether to use a polar (i.e. circular projection) for the plot.
If polar is True, kind must be one of 'area' or 'hist'. Otherwise
it must be one of 'hist' or 'kde'.
hist_kws : dict
Keyword arguments passed to the seaborn.histplot() function. Only
has effect if kind is 'hist' and polar is False.
kde_kws : dict
Keyword arguments passed to the seaborn.kde() function. Only
has effect if kind is 'kde'.
polar_kws : dict
Keyword arguments passed to a local function when polar is True,
regardless of kind. Possible arguments are:
- 'density': True or False. Whether to compute density or
counts. Default is False.
- 'fill': True or False. Whether to fill the area under the
line (when kind is 'area') or the rectangles (when kind is
'hist'). Default is True.
Returns
-------
Axes
Plot axes.<|endoftext|> |
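Two small pieces of the record above are easy to isolate without pandas or seaborn: the fractional-hour column (`dt.hour + dt.minute / 60`) and the `HH:00` tick labels built from `np.arange(0, 24, 2)`. A stdlib-only sketch (function names are mine):

```python
from datetime import datetime

def fractional_hour(ts: datetime) -> float:
    # mirrors images['hour'] = dt.hour + dt.minute / 60
    return ts.hour + ts.minute / 60

def hour_tick_labels(step: int = 2):
    # mirrors the x_labels built with np.arange(0, 24, 2)
    return [f"{h:02}:00" for h in range(0, 24, step)]
```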
1d9a1965db004d676a40d713986a8eb2035a96b00a0454556135209d83659b05 | @mpl.rc_context(fname=CONFIG_FILE)
def plot_date_ranges(images: pd.DataFrame=None, deployments: pd.DataFrame=None, source: str='both', **kwargs) -> plt.Axes:
"\n Plots deployment date ranges.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n source : bool\n Source to plot date ranges from: Values can be:\n\n - 'images' to plot date ranges from images (i.e. first image\n to last image taken).\n - 'deployments' to plot date ranges from deployments\n information (i.e. start date and end date).\n - 'both' to plot both sources in two different subplots.\n\n kwargs\n Keyword arguments passed to the sns.relplot() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
df = get_date_ranges(images, deployments, source, compute_delta=False, pivot=False)
df = pd.melt(df, id_vars=[_labels.deployments.deployment_id, 'source'], value_vars=[_labels.deployments.start, _labels.deployments.end])
df = df.rename(columns={'value': 'date'})
df = df.sort_values('date').reset_index(drop=True)
g = sns.relplot(data=df, x='date', y=_labels.deployments.deployment_id, row='source', kind='line', units=_labels.deployments.deployment_id, estimator=None, facet_kws=dict(despine=False), **kwargs)
return g.axes | Plots deployment date ranges.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
deployments : DataFrame
DataFrame with the project's deployments.
source : bool
Source to plot date ranges from: Values can be:
- 'images' to plot date ranges from images (i.e. first image
to last image taken).
- 'deployments' to plot date ranges from deployments
information (i.e. start date and end date).
- 'both' to plot both sources in two different subplots.
kwargs
Keyword arguments passed to the sns.relplot() function.
Returns
-------
Axes
Plot axes. | wiutils/plotting.py | plot_date_ranges | PEM-Humboldt/wiutils | 0 | python | @mpl.rc_context(fname=CONFIG_FILE)
def plot_date_ranges(images: pd.DataFrame=None, deployments: pd.DataFrame=None, source: str='both', **kwargs) -> plt.Axes:
"\n Plots deployment date ranges.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n source : bool\n Source to plot date ranges from: Values can be:\n\n - 'images' to plot date ranges from images (i.e. first image\n to last image taken).\n - 'deployments' to plot date ranges from deployments\n information (i.e. start date and end date).\n - 'both' to plot both sources in two different subplots.\n\n kwargs\n Keyword arguments passed to the sns.relplot() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
df = get_date_ranges(images, deployments, source, compute_delta=False, pivot=False)
df = pd.melt(df, id_vars=[_labels.deployments.deployment_id, 'source'], value_vars=[_labels.deployments.start, _labels.deployments.end])
df = df.rename(columns={'value': 'date'})
df = df.sort_values('date').reset_index(drop=True)
g = sns.relplot(data=df, x='date', y=_labels.deployments.deployment_id, row='source', kind='line', units=_labels.deployments.deployment_id, estimator=None, facet_kws=dict(despine=False), **kwargs)
return g.axes | @mpl.rc_context(fname=CONFIG_FILE)
def plot_date_ranges(images: pd.DataFrame=None, deployments: pd.DataFrame=None, source: str='both', **kwargs) -> plt.Axes:
"\n Plots deployment date ranges.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n source : bool\n Source to plot date ranges from: Values can be:\n\n - 'images' to plot date ranges from images (i.e. first image\n to last image taken).\n - 'deployments' to plot date ranges from deployments\n information (i.e. start date and end date).\n - 'both' to plot both sources in two different subplots.\n\n kwargs\n Keyword arguments passed to the sns.relplot() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
df = get_date_ranges(images, deployments, source, compute_delta=False, pivot=False)
df = pd.melt(df, id_vars=[_labels.deployments.deployment_id, 'source'], value_vars=[_labels.deployments.start, _labels.deployments.end])
df = df.rename(columns={'value': 'date'})
df = df.sort_values('date').reset_index(drop=True)
g = sns.relplot(data=df, x='date', y=_labels.deployments.deployment_id, row='source', kind='line', units=_labels.deployments.deployment_id, estimator=None, facet_kws=dict(despine=False), **kwargs)
return g.axes<|docstring|>Plots deployment date ranges.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
deployments : DataFrame
DataFrame with the project's deployments.
source : bool
Source to plot date ranges from: Values can be:
- 'images' to plot date ranges from images (i.e. first image
to last image taken).
- 'deployments' to plot date ranges from deployments
information (i.e. start date and end date).
- 'both' to plot both sources in two different subplots.
kwargs
Keyword arguments passed to the sns.relplot() function.
Returns
-------
Axes
Plot axes.<|endoftext|> |
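The core reshape in the record above is `pd.melt`: each deployment row with start and end columns becomes two long-format rows, then everything is sorted by date. A dict-based sketch of that reshape (simplified: the `source` id column is omitted, and the column names are assumptions):

```python
def melt_date_ranges(records):
    """Wide-to-long reshape mirroring the pd.melt call in the record above."""
    long_rows = []
    for rec in records:
        for var in ("start_date", "end_date"):
            long_rows.append({"deployment_id": rec["deployment_id"],
                              "variable": var,
                              "date": rec[var]})
    # mirrors df.sort_values('date'); ISO date strings sort chronologically
    return sorted(long_rows, key=lambda r: r["date"])
```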
84a3f45f807c8aed516c9eab682a98eb69351969a03b88e56460e286c7018817 | @mpl.rc_context(fname=CONFIG_FILE)
def plot_detection_history(images: pd.DataFrame, deployments: pd.DataFrame, name: str, mask: bool=False, compute_detection_history_kws: dict=None, heatmap_kws: dict=None) -> plt.Axes:
"\n Plots detection history matrix for a given species.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n name : str\n Scientific name of the species to plot the detection history for.\n mask : bool\n Whether to mask cells where cameras were not functioning. If True,\n those cells won't be displayed. Otherwise, they will be displayed\n as zero.\n compute_detection_history_kws : dict\n Keyword arguments for the wiutils.compute_detection_history()\n function.\n heatmap_kws : dict\n Keyword arguments for the seaborn.heatmap() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if (compute_detection_history_kws is None):
compute_detection_history_kws = {}
if (heatmap_kws is None):
heatmap_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
if (name not in taxa.unique()):
raise ValueError(f'{name} was not found in images.')
result = compute_detection_history(images, deployments, pivot=True, **compute_detection_history_kws)
result = result[(result['taxon'] == name)]
result = result.drop(columns='taxon')
result = result.set_index(_labels.images.deployment_id)
if (not mask):
result = result.fillna(0)
ax = sns.heatmap(data=result, **heatmap_kws)
return ax | Plots detection history matrix for a given species.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
deployments : DataFrame
DataFrame with the project's deployments.
name : str
Scientific name of the species to plot the detection history for.
mask : bool
Whether to mask cells where cameras were not functioning. If True,
those cells won't be displayed. Otherwise, they will be displayed
as zero.
compute_detection_history_kws : dict
Keyword arguments for the wiutils.compute_detection_history()
function.
heatmap_kws : dict
Keyword arguments for the seaborn.heatmap() function.
Returns
-------
Axes
Plot axes. | wiutils/plotting.py | plot_detection_history | PEM-Humboldt/wiutils | 0 | python | @mpl.rc_context(fname=CONFIG_FILE)
def plot_detection_history(images: pd.DataFrame, deployments: pd.DataFrame, name: str, mask: bool=False, compute_detection_history_kws: dict=None, heatmap_kws: dict=None) -> plt.Axes:
"\n Plots detection history matrix for a given species.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n name : str\n Scientific name of the species to plot the detection history for.\n mask : bool\n Whether to mask cells where cameras were not functioning. If True,\n those cells won't be displayed. Otherwise, they will be displayed\n as zero.\n compute_detection_history_kws : dict\n Keyword arguments for the wiutils.compute_detection_history()\n function.\n heatmap_kws : dict\n Keyword arguments for the seaborn.heatmap() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if (compute_detection_history_kws is None):
compute_detection_history_kws = {}
if (heatmap_kws is None):
heatmap_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
if (name not in taxa.unique()):
raise ValueError(f'{name} was not found in images.')
result = compute_detection_history(images, deployments, pivot=True, **compute_detection_history_kws)
result = result[(result['taxon'] == name)]
result = result.drop(columns='taxon')
result = result.set_index(_labels.images.deployment_id)
if (not mask):
result = result.fillna(0)
ax = sns.heatmap(data=result, **heatmap_kws)
return ax | @mpl.rc_context(fname=CONFIG_FILE)
def plot_detection_history(images: pd.DataFrame, deployments: pd.DataFrame, name: str, mask: bool=False, compute_detection_history_kws: dict=None, heatmap_kws: dict=None) -> plt.Axes:
"\n Plots detection history matrix for a given species.\n\n Parameters\n ----------\n images : DataFrame\n DataFrame with the project's images.\n deployments : DataFrame\n DataFrame with the project's deployments.\n name : str\n Scientific name of the species to plot the detection history for.\n mask : bool\n Whether to mask cells where cameras were not functioning. If True,\n those cells won't be displayed. Otherwise, they will be displayed\n as zero.\n compute_detection_history_kws : dict\n Keyword arguments for the wiutils.compute_detection_history()\n function.\n heatmap_kws : dict\n Keyword arguments for the seaborn.heatmap() function.\n\n Returns\n -------\n Axes\n Plot axes.\n\n "
if (compute_detection_history_kws is None):
compute_detection_history_kws = {}
if (heatmap_kws is None):
heatmap_kws = {}
taxa = get_lowest_taxon(images, return_rank=False)
if (name not in taxa.unique()):
raise ValueError(f'{name} was not found in images.')
result = compute_detection_history(images, deployments, pivot=True, **compute_detection_history_kws)
result = result[(result['taxon'] == name)]
result = result.drop(columns='taxon')
result = result.set_index(_labels.images.deployment_id)
if (not mask):
result = result.fillna(0)
ax = sns.heatmap(data=result, **heatmap_kws)
return ax<|docstring|>Plots detection history matrix for a given species.
Parameters
----------
images : DataFrame
DataFrame with the project's images.
deployments : DataFrame
DataFrame with the project's deployments.
name : str
Scientific name of the species to plot the detection history for.
mask : bool
Whether to mask cells where cameras were not functioning. If True,
those cells won't be displayed. Otherwise, they will be displayed
as zero.
compute_detection_history_kws : dict
Keyword arguments for the wiutils.compute_detection_history()
function.
heatmap_kws : dict
Keyword arguments for the seaborn.heatmap() function.
Returns
-------
Axes
Plot axes.<|endoftext|> |
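The `mask=False` branch of the record above replaces missing cells with zero (`result.fillna(0)`), where a missing cell marks a window in which the camera was not functioning. With a dict of lists standing in for the pivoted DataFrame, that step looks like this (names are illustrative):

```python
def fill_missing(matrix, fill=0):
    # mirrors result.fillna(0) in the mask=False branch: None marks a
    # window where the camera was not functioning
    return {dep: [fill if v is None else v for v in row]
            for dep, row in matrix.items()}
```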
36c2c6fa14d7d3a4408cd4d218015568f98ff8c3550f2382c250428ea46cc574 | def assign_poolinfos_to_allocates_in_primfunc(primfunc, pool_infos):
'Helper to assign poolinfos to allocate nodes in a tir.PrimFunc'
def set_poolinfos(stmt):
if isinstance(stmt, tvm.tir.Allocate):
return tvm.tir.Allocate(buffer_var=stmt.buffer_var, dtype=stmt.dtype, extents=stmt.extents, condition=stmt.condition, body=stmt.body, annotations={tvm.tir.usmp.utils.CANDIDATE_MEMORY_POOL_ATTR: pool_infos})
return primfunc.with_body(stmt_functor.ir_transform(primfunc.body, None, set_poolinfos)) | Helper to assign poolinfos to allocate nodes in a tir.PrimFunc | tests/python/unittest/test_tir_usmp_transform_convert_pool_allocations_to_offsets.py | assign_poolinfos_to_allocates_in_primfunc | billishyahao/tvm | 4,640 | python | def assign_poolinfos_to_allocates_in_primfunc(primfunc, pool_infos):
def set_poolinfos(stmt):
if isinstance(stmt, tvm.tir.Allocate):
return tvm.tir.Allocate(buffer_var=stmt.buffer_var, dtype=stmt.dtype, extents=stmt.extents, condition=stmt.condition, body=stmt.body, annotations={tvm.tir.usmp.utils.CANDIDATE_MEMORY_POOL_ATTR: pool_infos})
return primfunc.with_body(stmt_functor.ir_transform(primfunc.body, None, set_poolinfos)) | def assign_poolinfos_to_allocates_in_primfunc(primfunc, pool_infos):
def set_poolinfos(stmt):
if isinstance(stmt, tvm.tir.Allocate):
return tvm.tir.Allocate(buffer_var=stmt.buffer_var, dtype=stmt.dtype, extents=stmt.extents, condition=stmt.condition, body=stmt.body, annotations={tvm.tir.usmp.utils.CANDIDATE_MEMORY_POOL_ATTR: pool_infos})
return primfunc.with_body(stmt_functor.ir_transform(primfunc.body, None, set_poolinfos))<|docstring|>Helper to assign poolinfos to allocate nodes in a tir.PrimFunc<|endoftext|> |
8d33a285012f4217785e7feddb866074b8e07569366f9b612c6e844c43991d9d | def assign_poolinfos_to_allocates_in_irmodule(mod, pool_infos):
'Helper to assign poolinfos to allocate nodes in a IRModule'
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = assign_poolinfos_to_allocates_in_primfunc(basefunc, pool_infos)
return ret | Helper to assign poolinfos to allocate nodes in a IRModule | tests/python/unittest/test_tir_usmp_transform_convert_pool_allocations_to_offsets.py | assign_poolinfos_to_allocates_in_irmodule | billishyahao/tvm | 4,640 | python | def assign_poolinfos_to_allocates_in_irmodule(mod, pool_infos):
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = assign_poolinfos_to_allocates_in_primfunc(basefunc, pool_infos)
return ret | def assign_poolinfos_to_allocates_in_irmodule(mod, pool_infos):
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = assign_poolinfos_to_allocates_in_primfunc(basefunc, pool_infos)
return ret<|docstring|>Helper to assign poolinfos to allocate nodes in a IRModule<|endoftext|> |
c3bc5517dd57df2ff65215e7991260452dfd20af99a5aa30ff9d0a4f88a816cd | def _assign_targets_to_primfuncs_irmodule(mod, target):
'Helper to assign target for PrimFunc in a IRModule'
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = basefunc.with_attr('target', target)
return ret | Helper to assign target for PrimFunc in a IRModule | tests/python/unittest/test_tir_usmp_transform_convert_pool_allocations_to_offsets.py | _assign_targets_to_primfuncs_irmodule | billishyahao/tvm | 4,640 | python | def _assign_targets_to_primfuncs_irmodule(mod, target):
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = basefunc.with_attr('target', target)
return ret | def _assign_targets_to_primfuncs_irmodule(mod, target):
ret = tvm.IRModule()
for (global_var, basefunc) in mod.functions.items():
if isinstance(basefunc, tvm.tir.PrimFunc):
ret[global_var] = basefunc.with_attr('target', target)
return ret<|docstring|>Helper to assign target for PrimFunc in a IRModule<|endoftext|> |
4c48dbfccb346dd0a64ce52b5e1ba49c0eb8642e16f452a9028dfc7e6cb0b151 | def bezier(p0, p1, p2, p3, t):
'\n @param p0\n @param p1\n @param p2\n @param p3\n @param t\n '
cx = (3.0 * (p1[0] - p0[0]))
bx = ((3.0 * (p2[0] - p1[0])) - cx)
ax = (((p3[0] - p0[0]) - cx) - bx)
cy = (3.0 * (p1[1] - p0[1]))
by = ((3.0 * (p2[1] - p1[1])) - cy)
ay = (((p3[1] - p0[1]) - cy) - by)
tSquared = (t * t)
tCubed = (tSquared * t)
x = ((((ax * tCubed) + (bx * tSquared)) + (cx * t)) + p0[0])
y = ((((ay * tCubed) + (by * tSquared)) + (cy * t)) + p0[1])
result = (x, y)
return result | @param p0
@param p1
@param p2
@param p3
@param t | fitbeat_project/fitbeats/functions.py | bezier | huddlej/fitbeats | 0 | python | def bezier(p0, p1, p2, p3, t):
'\n @param p0\n @param p1\n @param p2\n @param p3\n @param t\n '
cx = (3.0 * (p1[0] - p0[0]))
bx = ((3.0 * (p2[0] - p1[0])) - cx)
ax = (((p3[0] - p0[0]) - cx) - bx)
cy = (3.0 * (p1[1] - p0[1]))
by = ((3.0 * (p2[1] - p1[1])) - cy)
ay = (((p3[1] - p0[1]) - cy) - by)
tSquared = (t * t)
tCubed = (tSquared * t)
x = ((((ax * tCubed) + (bx * tSquared)) + (cx * t)) + p0[0])
y = ((((ay * tCubed) + (by * tSquared)) + (cy * t)) + p0[1])
result = (x, y)
return result | def bezier(p0, p1, p2, p3, t):
'\n @param p0\n @param p1\n @param p2\n @param p3\n @param t\n '
cx = (3.0 * (p1[0] - p0[0]))
bx = ((3.0 * (p2[0] - p1[0])) - cx)
ax = (((p3[0] - p0[0]) - cx) - bx)
cy = (3.0 * (p1[1] - p0[1]))
by = ((3.0 * (p2[1] - p1[1])) - cy)
ay = (((p3[1] - p0[1]) - cy) - by)
tSquared = (t * t)
tCubed = (tSquared * t)
x = ((((ax * tCubed) + (bx * tSquared)) + (cx * t)) + p0[0])
y = ((((ay * tCubed) + (by * tSquared)) + (cy * t)) + p0[1])
result = (x, y)
return result<|docstring|>@param p0
@param p1
@param p2
@param p3
@param t<|endoftext|> |
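The record above is already pure Python, so a self-contained restatement is easy to check. The coefficients are identical; only the evaluation is rearranged into Horner form (the same polynomial). At `t=0` the curve returns `p0` and at `t=1` it returns `p3`, since `ax + bx + cx` telescopes to `p3[0] - p0[0]`.

```python
def bezier(p0, p1, p2, p3, t):
    # power-basis coefficients, identical to the record above
    cx = 3.0 * (p1[0] - p0[0])
    bx = 3.0 * (p2[0] - p1[0]) - cx
    ax = p3[0] - p0[0] - cx - bx
    cy = 3.0 * (p1[1] - p0[1])
    by = 3.0 * (p2[1] - p1[1]) - cy
    ay = p3[1] - p0[1] - cy - by
    # same cubic, evaluated in Horner form
    x = ((ax * t + bx) * t + cx) * t + p0[0]
    y = ((ay * t + by) * t + cy) * t + p0[1]
    return (x, y)
```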
f63474939695082812acfea0c5e4e78a9442c9377ba80e30a1a8e4c32920fde7 | def calculate_cos(x, y):
'\n Calculate the cosine simlarity between two vectors\n \n @param list x\n @param list y\n @return float cos the cosine similarity between the two vectors\n '
x = normalize(x)
y = normalize(y)
cos = 0
for key in xrange(len(x)):
x_i = x[key]
y_i = y[key]
cos += (x_i * y_i)
return cos | Calculate the cosine simlarity between two vectors
@param list x
@param list y
@return float cos the cosine similarity between the two vectors | fitbeat_project/fitbeats/functions.py | calculate_cos | huddlej/fitbeats | 0 | python | def calculate_cos(x, y):
'\n Calculate the cosine simlarity between two vectors\n \n @param list x\n @param list y\n @return float cos the cosine similarity between the two vectors\n '
x = normalize(x)
y = normalize(y)
cos = 0
for key in xrange(len(x)):
x_i = x[key]
y_i = y[key]
cos += (x_i * y_i)
return cos | def calculate_cos(x, y):
'\n Calculate the cosine simlarity between two vectors\n \n @param list x\n @param list y\n @return float cos the cosine similarity between the two vectors\n '
x = normalize(x)
y = normalize(y)
cos = 0
for key in xrange(len(x)):
x_i = x[key]
y_i = y[key]
cos += (x_i * y_i)
    return cos<|docstring|>Calculate the cosine similarity between two vectors
@param list x
@param list y
@return float cos the cosine similarity between the two vectors<|endoftext|> |
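The `calculate_cos` record depends on a `normalize` helper defined elsewhere in its module and uses Python 2's `xrange`. A self-contained Python 3 sketch of the same computation (the `normalize` implementation here is an assumption, written from the usual unit-length definition):

```python
from math import sqrt

def normalize(v):
    # Unit-length copy of v; zero vectors are returned unchanged to avoid
    # division by zero. This helper is assumed, not taken from the repo.
    norm = sqrt(sum(x * x for x in v))
    return list(v) if norm == 0 else [x / norm for x in v]

def calculate_cos(x, y):
    # Dot product of the normalized vectors (the record's loop, rewritten
    # with zip instead of Python 2's xrange).
    return sum(a * b for a, b in zip(normalize(x), normalize(y)))

orthogonal = calculate_cos([1, 0], [0, 1])   # perpendicular vectors
identical = calculate_cos([3, 0, 0], [3, 0, 0])  # same direction
```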
0b0983abeb8c1e0f29cce24b83e5cf2e54dbb9d1d25ebad2b5a6f22b140f2ef6 | def send_message(self, message, tx_id, case_id):
'\n Sends a message to rabbit mq and returns a true or false depending on if it was successful\n :param message: The message to send to the rabbit mq queue\n :param tx_id: Transaction ID used to trace a transaction through the whole system.\n :param case_id: ID used to identify a single instance of a survey collection for a respondent\n :return: a boolean value indicating if it was successful\n '
message_as_string = str(message)
logger.info('sending message', category='rabbitmq')
logger.info('message payload', message=message_as_string, category='rabbitmq')
connection = None
try:
connection = self._connect()
channel = connection.channel()
channel.queue_declare(queue=self.queue, durable=True)
properties = BasicProperties(headers={}, delivery_mode=2)
properties.headers['tx_id'] = tx_id
properties.headers['case_id'] = case_id
channel.basic_publish(exchange='', routing_key=self.queue, body=message_as_string, mandatory=True, properties=properties)
logger.info('sent message', category='rabbitmq')
except (AMQPError, NackError, UnroutableError) as e:
logger.error('unable to send message', exc_info=e, category='rabbitmq')
return False
finally:
if connection:
self._disconnect(connection)
    return True | Sends a message to RabbitMQ and returns True or False depending on whether it was successful
:param message: The message to send to the rabbit mq queue
:param tx_id: Transaction ID used to trace a transaction through the whole system.
:param case_id: ID used to identify a single instance of a survey collection for a respondent
:return: a boolean value indicating if it was successful | app/submitter/submitter.py | send_message | ONSdigital/eq-questionnaire-runner | 3 | python | def send_message(self, message, tx_id, case_id):
'\n Sends a message to rabbit mq and returns a true or false depending on if it was successful\n :param message: The message to send to the rabbit mq queue\n :param tx_id: Transaction ID used to trace a transaction through the whole system.\n :param case_id: ID used to identify a single instance of a survey collection for a respondent\n :return: a boolean value indicating if it was successful\n '
message_as_string = str(message)
logger.info('sending message', category='rabbitmq')
logger.info('message payload', message=message_as_string, category='rabbitmq')
connection = None
try:
connection = self._connect()
channel = connection.channel()
channel.queue_declare(queue=self.queue, durable=True)
properties = BasicProperties(headers={}, delivery_mode=2)
properties.headers['tx_id'] = tx_id
properties.headers['case_id'] = case_id
        channel.basic_publish(exchange='', routing_key=self.queue, body=message_as_string, mandatory=True, properties=properties)
logger.info('sent message', category='rabbitmq')
except (AMQPError, NackError, UnroutableError) as e:
logger.error('unable to send message', exc_info=e, category='rabbitmq')
return False
finally:
if connection:
self._disconnect(connection)
return True | def send_message(self, message, tx_id, case_id):
'\n Sends a message to rabbit mq and returns a true or false depending on if it was successful\n :param message: The message to send to the rabbit mq queue\n :param tx_id: Transaction ID used to trace a transaction through the whole system.\n :param case_id: ID used to identify a single instance of a survey collection for a respondent\n :return: a boolean value indicating if it was successful\n '
message_as_string = str(message)
logger.info('sending message', category='rabbitmq')
logger.info('message payload', message=message_as_string, category='rabbitmq')
connection = None
try:
connection = self._connect()
channel = connection.channel()
channel.queue_declare(queue=self.queue, durable=True)
properties = BasicProperties(headers={}, delivery_mode=2)
properties.headers['tx_id'] = tx_id
properties.headers['case_id'] = case_id
        channel.basic_publish(exchange='', routing_key=self.queue, body=message_as_string, mandatory=True, properties=properties)
logger.info('sent message', category='rabbitmq')
except (AMQPError, NackError, UnroutableError) as e:
logger.error('unable to send message', exc_info=e, category='rabbitmq')
return False
finally:
if connection:
self._disconnect(connection)
    return True<|docstring|>Sends a message to RabbitMQ and returns True or False depending on whether it was successful
:param message: The message to send to the rabbit mq queue
:param tx_id: Transaction ID used to trace a transaction through the whole system.
:param case_id: ID used to identify a single instance of a survey collection for a respondent
:return: a boolean value indicating if it was successful<|endoftext|> |
92718bf7764007f3c5a27eb015885ffdbd4a1e4c5ff62048657bb71aab104387 | def run(self):
'Execute subprocess job.'
cmd = ' '.join([os.fspath(self.executable), ''.join(map(str, self.args)), os.fspath(self.input)])
with open(self.output, 'w') as outf:
p = subprocess.Popen(shlex.split(cmd), stdout=outf, close_fds=True)
(out, error) = p.communicate()
p.kill()
if error:
raise JobRunningError(error)
return out | Execute subprocess job. | src/haddock/libs/libsubprocess.py | run | Seaxingzhou/haddock3 | 1 | python | def run(self):
    cmd = ' '.join([os.fspath(self.executable), ''.join(map(str, self.args)), os.fspath(self.input)])
with open(self.output, 'w') as outf:
p = subprocess.Popen(shlex.split(cmd), stdout=outf, close_fds=True)
(out, error) = p.communicate()
p.kill()
if error:
raise JobRunningError(error)
return out | def run(self):
    cmd = ' '.join([os.fspath(self.executable), ''.join(map(str, self.args)), os.fspath(self.input)])
with open(self.output, 'w') as outf:
p = subprocess.Popen(shlex.split(cmd), stdout=outf, close_fds=True)
(out, error) = p.communicate()
p.kill()
if error:
raise JobRunningError(error)
return out<|docstring|>Execute subprocess job.<|endoftext|> |
2c3c05c4531066d4ccae91127e0b6485cd40577b33a3dca35cd5161d9227833d | def __init__(self, input_file, output_file, cns_folder, modpath, config_path, cns_exec=None, toppar=None):
'\n CNS subprocess.\n\n To execute the job, call the `.run()` method.\n\n Parameters\n ----------\n input_file : str or pathlib.Path\n The path to the .inp CNS file.\n\n output_file : str or pathlib.Path\n The path to the .out CNS file, where the standard output\n will be saved.\n\n cns_folder : str of pathlib.Path\n The path where the CNS scripts needed for the module reside.\n For example, `modules/rigidibody/cns`.\n\n mod_path : str of pathlib.Path\n Path where the results of the haddock3 module executing this\n CNS job will be saved.\n\n config_path : str of pathlib.Path\n Path of the haddock3 configuration file. Will be used to\n manage paths in relative manner.\n\n cns_exec : str of pathlib.Path, optional\n The path to the CNS exec. If not provided defaults to the\n global configuration in HADDOCK3.\n\n toppar : str of pathlib.Path, optional\n Path to the folder containing CNS topology parameters.\n If `None` is given defaults to `cns/toppar` inside HADDOCK3\n source code.\n '
self.input_file = input_file
self.output_file = output_file
self.cns_folder = cns_folder
self.modpath = modpath
self.config_path = Path(config_path).parent
self.toppar = (toppar or global_toppar)
self.cns_exec = cns_exec | CNS subprocess.
To execute the job, call the `.run()` method.
Parameters
----------
input_file : str or pathlib.Path
The path to the .inp CNS file.
output_file : str or pathlib.Path
The path to the .out CNS file, where the standard output
will be saved.
    cns_folder : str or pathlib.Path
The path where the CNS scripts needed for the module reside.
For example, `modules/rigidibody/cns`.
    mod_path : str or pathlib.Path
Path where the results of the haddock3 module executing this
CNS job will be saved.
    config_path : str or pathlib.Path
Path of the haddock3 configuration file. Will be used to
manage paths in relative manner.
    cns_exec : str or pathlib.Path, optional
The path to the CNS exec. If not provided defaults to the
global configuration in HADDOCK3.
    toppar : str or pathlib.Path, optional
Path to the folder containing CNS topology parameters.
If `None` is given defaults to `cns/toppar` inside HADDOCK3
source code. | src/haddock/libs/libsubprocess.py | __init__ | Seaxingzhou/haddock3 | 1 | python | def __init__(self, input_file, output_file, cns_folder, modpath, config_path, cns_exec=None, toppar=None):
'\n CNS subprocess.\n\n To execute the job, call the `.run()` method.\n\n Parameters\n ----------\n input_file : str or pathlib.Path\n The path to the .inp CNS file.\n\n output_file : str or pathlib.Path\n The path to the .out CNS file, where the standard output\n will be saved.\n\n cns_folder : str of pathlib.Path\n The path where the CNS scripts needed for the module reside.\n For example, `modules/rigidibody/cns`.\n\n mod_path : str of pathlib.Path\n Path where the results of the haddock3 module executing this\n CNS job will be saved.\n\n config_path : str of pathlib.Path\n Path of the haddock3 configuration file. Will be used to\n manage paths in relative manner.\n\n cns_exec : str of pathlib.Path, optional\n The path to the CNS exec. If not provided defaults to the\n global configuration in HADDOCK3.\n\n toppar : str of pathlib.Path, optional\n Path to the folder containing CNS topology parameters.\n If `None` is given defaults to `cns/toppar` inside HADDOCK3\n source code.\n '
self.input_file = input_file
self.output_file = output_file
self.cns_folder = cns_folder
self.modpath = modpath
self.config_path = Path(config_path).parent
self.toppar = (toppar or global_toppar)
self.cns_exec = cns_exec | def __init__(self, input_file, output_file, cns_folder, modpath, config_path, cns_exec=None, toppar=None):
'\n CNS subprocess.\n\n To execute the job, call the `.run()` method.\n\n Parameters\n ----------\n input_file : str or pathlib.Path\n The path to the .inp CNS file.\n\n output_file : str or pathlib.Path\n The path to the .out CNS file, where the standard output\n will be saved.\n\n cns_folder : str of pathlib.Path\n The path where the CNS scripts needed for the module reside.\n For example, `modules/rigidibody/cns`.\n\n mod_path : str of pathlib.Path\n Path where the results of the haddock3 module executing this\n CNS job will be saved.\n\n config_path : str of pathlib.Path\n Path of the haddock3 configuration file. Will be used to\n manage paths in relative manner.\n\n cns_exec : str of pathlib.Path, optional\n The path to the CNS exec. If not provided defaults to the\n global configuration in HADDOCK3.\n\n toppar : str of pathlib.Path, optional\n Path to the folder containing CNS topology parameters.\n If `None` is given defaults to `cns/toppar` inside HADDOCK3\n source code.\n '
self.input_file = input_file
self.output_file = output_file
self.cns_folder = cns_folder
self.modpath = modpath
self.config_path = Path(config_path).parent
self.toppar = (toppar or global_toppar)
self.cns_exec = cns_exec<|docstring|>CNS subprocess.
To execute the job, call the `.run()` method.
Parameters
----------
input_file : str or pathlib.Path
The path to the .inp CNS file.
output_file : str or pathlib.Path
The path to the .out CNS file, where the standard output
will be saved.
cns_folder : str of pathlib.Path
The path where the CNS scripts needed for the module reside.
For example, `modules/rigidibody/cns`.
mod_path : str of pathlib.Path
Path where the results of the haddock3 module executing this
CNS job will be saved.
config_path : str of pathlib.Path
Path of the haddock3 configuration file. Will be used to
manage paths in relative manner.
cns_exec : str of pathlib.Path, optional
The path to the CNS exec. If not provided defaults to the
global configuration in HADDOCK3.
toppar : str of pathlib.Path, optional
Path to the folder containing CNS topology parameters.
If `None` is given defaults to `cns/toppar` inside HADDOCK3
source code.<|endoftext|> |
d795d7d30955e0373d26899c18fc7c6b5ce60db1570d8ea81603af1a77ab6f9b | @property
def cns_exec(self):
'CNS executable path.'
return self._cns_exec | CNS executable path. | src/haddock/libs/libsubprocess.py | cns_exec | Seaxingzhou/haddock3 | 1 | python | @property
def cns_exec(self):
return self._cns_exec | @property
def cns_exec(self):
return self._cns_exec<|docstring|>CNS executable path.<|endoftext|> |
36c93226e932f10a2ccaf9e3f56f80b394ecbc3cf9c93cb0d00781edd8f659a4 | def run(self):
'Run this CNS job script.'
with open(self.input_file) as inp, open(self.output_file, 'w+') as outf:
env = {'MODDIR': str(self.modpath), 'MODULE': str(self.cns_folder), 'RUN': str(self.config_path), 'TOPPAR': str(self.toppar)}
p = subprocess.Popen(self.cns_exec, stdin=inp, stdout=outf, stderr=subprocess.PIPE, close_fds=True, env=env)
(out, error) = p.communicate()
p.kill()
if error:
raise CNSRunningError(error)
return out | Run this CNS job script. | src/haddock/libs/libsubprocess.py | run | Seaxingzhou/haddock3 | 1 | python | def run(self):
with open(self.input_file) as inp, open(self.output_file, 'w+') as outf:
env = {'MODDIR': str(self.modpath), 'MODULE': str(self.cns_folder), 'RUN': str(self.config_path), 'TOPPAR': str(self.toppar)}
p = subprocess.Popen(self.cns_exec, stdin=inp, stdout=outf, stderr=subprocess.PIPE, close_fds=True, env=env)
(out, error) = p.communicate()
p.kill()
if error:
raise CNSRunningError(error)
return out | def run(self):
with open(self.input_file) as inp, open(self.output_file, 'w+') as outf:
env = {'MODDIR': str(self.modpath), 'MODULE': str(self.cns_folder), 'RUN': str(self.config_path), 'TOPPAR': str(self.toppar)}
p = subprocess.Popen(self.cns_exec, stdin=inp, stdout=outf, stderr=subprocess.PIPE, close_fds=True, env=env)
(out, error) = p.communicate()
p.kill()
if error:
raise CNSRunningError(error)
return out<|docstring|>Run this CNS job script.<|endoftext|> |
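The `CNSJob.run` record above launches a child process with a hand-built environment. A minimal sketch of that pattern (the variable names and the child command are illustrative, not from the haddock3 source); note that passing `env=` replaces the inherited environment entirely, which is why the record builds the full dict itself:

```python
import os
import subprocess
import sys

# Build a custom environment for the child; PATH is carried over so the
# sketch stays portable, MODDIR is an assumed example variable.
env = {"MODDIR": "/tmp/run1", "PATH": os.environ.get("PATH", "")}

proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.environ['MODDIR'])"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    env=env,
)
out, err = proc.communicate()
moddir_seen = out.decode().strip()
```

One caveat worth noting about the record itself: treating any stderr output as fatal (as its `if error: raise` does) can misfire with tools that write progress messages to stderr.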
743351f7e801a6dab2c5f4a5b7b83df404f28c2504e019bd00da496bf1a07640 | def glInitConvolutionEXT():
'Return boolean indicating whether this extension is available'
from OpenGL import extensions
return extensions.hasGLExtension(EXTENSION_NAME) | Return boolean indicating whether this extension is available | PyOpenGL-3.0.2/OpenGL/raw/GL/EXT/convolution.py | glInitConvolutionEXT | frederica07/Dragon_Programming_Process | 0 | python | def glInitConvolutionEXT():
from OpenGL import extensions
return extensions.hasGLExtension(EXTENSION_NAME) | def glInitConvolutionEXT():
from OpenGL import extensions
return extensions.hasGLExtension(EXTENSION_NAME)<|docstring|>Return boolean indicating whether this extension is available<|endoftext|> |
4f68c0ee35bf3fecce023c5b5fe4b40e69d7c32ecb8979b98d8f4c91b543a705 | def is_in_range(self, range, origin_in_degrees, destination_in_degrees):
'\n Function to encapsulate figuring out if two GPS coordinates are within a\n given range\n\n Args:\n range: The maximum range between two GPS coordinates\n origin_in_degrees: Origin GPS coordinates\n destination_in_degrees: Destination GPS coordinates\n\n Returns:\n bool: Whether the GPS coordinates are in a given range\n '
return (self.calculate_distance(origin_in_degrees, destination_in_degrees) <= range) | Function to encapsulate figuring out if two GPS coordinates are within a
given range
Args:
range: The maximum range between two GPS coordinates
origin_in_degrees: Origin GPS coordinates
destination_in_degrees: Destination GPS coordinates
Returns:
bool: Whether the GPS coordinates are in a given range | gps_distance_calculator/calculator.py | is_in_range | terepaii/GPS-Distance-Calculator | 0 | python | def is_in_range(self, range, origin_in_degrees, destination_in_degrees):
'\n Function to encapsulate figuring out if two GPS coordinates are within a\n given range\n\n Args:\n range: The maximum range between two GPS coordinates\n origin_in_degrees: Origin GPS coordinates\n destination_in_degrees: Destination GPS coordinates\n\n Returns:\n bool: Whether the GPS coordinates are in a given range\n '
return (self.calculate_distance(origin_in_degrees, destination_in_degrees) <= range) | def is_in_range(self, range, origin_in_degrees, destination_in_degrees):
'\n Function to encapsulate figuring out if two GPS coordinates are within a\n given range\n\n Args:\n range: The maximum range between two GPS coordinates\n origin_in_degrees: Origin GPS coordinates\n destination_in_degrees: Destination GPS coordinates\n\n Returns:\n bool: Whether the GPS coordinates are in a given range\n '
return (self.calculate_distance(origin_in_degrees, destination_in_degrees) <= range)<|docstring|>Function to encapsulate figuring out if two GPS coordinates are within a
given range
Args:
range: The maximum range between two GPS coordinates
origin_in_degrees: Origin GPS coordinates
destination_in_degrees: Destination GPS coordinates
Returns:
bool: Whether the GPS coordinates are in a given range<|endoftext|> |
a3f1aaa46c7d727afd137b5f43ce4f65cb19fd9c8891727bbe8748a5caad5ae2 | def calculate_distance(self, origin_in_degrees, destination_in_degrees):
'\n Given some GPS coordinates, calculate the distance\n between 2 points. Uses the Haversine formula.\n\n As dataset grows, it might be more beneficial to use\n a quicker estimate formula such as the equirectangular\n distance approximation\n\n Args:\n origin (dict): origin GPS coordinates\n destination (dict): Destination GPS coordinates\n\n Returns:\n int: Distance(in km) from origin to destination\n '
origin_in_radians = self.convert_gps_coordinates_to_radians(origin_in_degrees)
destination_in_radians = self.convert_gps_coordinates_to_radians(destination_in_degrees)
origin_lat = origin_in_radians['latitude']
origin_lon = origin_in_radians['longitude']
dest_lat = destination_in_radians['latitude']
dest_lon = destination_in_radians['longitude']
_logger.debug(f'Computing distance between lat: {origin_lat}, lon: {origin_lon} and lat: {dest_lat}, lon: {dest_lon}')
delta_lon = (dest_lon - origin_lon)
delta_lat = (dest_lat - origin_lat)
a = ((sin((delta_lat / 2)) ** 2) + ((cos(origin_lat) * cos(dest_lat)) * (sin((delta_lon / 2)) ** 2)))
c = (2 * asin(min(1, sqrt(a))))
distance = (EARTH_RADIUS_IN_KM * c)
return round(distance, 5) | Given some GPS coordinates, calculate the distance
between 2 points. Uses the Haversine formula.
As dataset grows, it might be more beneficial to use
a quicker estimate formula such as the equirectangular
distance approximation
Args:
origin (dict): origin GPS coordinates
destination (dict): Destination GPS coordinates
Returns:
        float: Distance(in km) from origin to destination | gps_distance_calculator/calculator.py | calculate_distance | terepaii/GPS-Distance-Calculator | 0 | python | def calculate_distance(self, origin_in_degrees, destination_in_degrees):
'\n Given some GPS coordinates, calculate the distance\n between 2 points. Uses the Haversine formula.\n\n As dataset grows, it might be more beneficial to use\n a quicker estimate formula such as the equirectangular\n distance approximation\n\n Args:\n origin (dict): origin GPS coordinates\n destination (dict): Destination GPS coordinates\n\n Returns:\n int: Distance(in km) from origin to destination\n '
origin_in_radians = self.convert_gps_coordinates_to_radians(origin_in_degrees)
destination_in_radians = self.convert_gps_coordinates_to_radians(destination_in_degrees)
origin_lat = origin_in_radians['latitude']
origin_lon = origin_in_radians['longitude']
dest_lat = destination_in_radians['latitude']
dest_lon = destination_in_radians['longitude']
_logger.debug(f'Computing distance between lat: {origin_lat}, lon: {origin_lon} and lat: {dest_lat}, lon: {dest_lon}')
delta_lon = (dest_lon - origin_lon)
delta_lat = (dest_lat - origin_lat)
a = ((sin((delta_lat / 2)) ** 2) + ((cos(origin_lat) * cos(dest_lat)) * (sin((delta_lon / 2)) ** 2)))
c = (2 * asin(min(1, sqrt(a))))
distance = (EARTH_RADIUS_IN_KM * c)
return round(distance, 5) | def calculate_distance(self, origin_in_degrees, destination_in_degrees):
'\n Given some GPS coordinates, calculate the distance\n between 2 points. Uses the Haversine formula.\n\n As dataset grows, it might be more beneficial to use\n a quicker estimate formula such as the equirectangular\n distance approximation\n\n Args:\n origin (dict): origin GPS coordinates\n destination (dict): Destination GPS coordinates\n\n Returns:\n int: Distance(in km) from origin to destination\n '
origin_in_radians = self.convert_gps_coordinates_to_radians(origin_in_degrees)
destination_in_radians = self.convert_gps_coordinates_to_radians(destination_in_degrees)
origin_lat = origin_in_radians['latitude']
origin_lon = origin_in_radians['longitude']
dest_lat = destination_in_radians['latitude']
dest_lon = destination_in_radians['longitude']
_logger.debug(f'Computing distance between lat: {origin_lat}, lon: {origin_lon} and lat: {dest_lat}, lon: {dest_lon}')
delta_lon = (dest_lon - origin_lon)
delta_lat = (dest_lat - origin_lat)
a = ((sin((delta_lat / 2)) ** 2) + ((cos(origin_lat) * cos(dest_lat)) * (sin((delta_lon / 2)) ** 2)))
c = (2 * asin(min(1, sqrt(a))))
distance = (EARTH_RADIUS_IN_KM * c)
return round(distance, 5)<|docstring|>Given some GPS coordinates, calculate the distance
between 2 points. Uses the Haversine formula.
As dataset grows, it might be more beneficial to use
a quicker estimate formula such as the equirectangular
distance approximation
Args:
origin (dict): origin GPS coordinates
destination (dict): Destination GPS coordinates
Returns:
        float: Distance(in km) from origin to destination<|endoftext|>
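The `calculate_distance` record converts both coordinate pairs to radians and applies the Haversine formula. A self-contained sketch of the same pipeline using tuples instead of the record's dicts (the radius constant is the commonly used mean Earth radius and is an assumption, the repo's `EARTH_RADIUS_IN_KM` may differ slightly):

```python
from math import asin, cos, pi, sin, sqrt

EARTH_RADIUS_IN_KM = 6371.0  # assumed mean Earth radius

def haversine_km(origin, destination):
    # origin/destination are (latitude, longitude) pairs in degrees,
    # mirroring the record's convert-then-Haversine steps.
    lat1, lon1 = (v * pi / 180.0 for v in origin)
    lat2, lon2 = (v * pi / 180.0 for v in destination)
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return EARTH_RADIUS_IN_KM * 2 * asin(min(1.0, sqrt(a)))

# One degree of longitude along the equator is roughly 111.2 km.
one_degree = haversine_km((0.0, 0.0), (0.0, 1.0))
same_point = haversine_km((51.5, -0.13), (51.5, -0.13))
```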
e3bd27d5aa02058edbdbbcb9a517e0b3f725c66922f07f1b7e50df7c140eeac7 | def convert_gps_coordinates_to_radians(self, gps_coordinates):
'\n Convert a set of gps coordinates from degrees to radians\n\n Args:\n gps_coordinates (dict): A dictionary containing gps coordinates\n\n Returns:\n Dict: A dictionary of GPS coordinates converted to radians\n '
latitude = self.degrees_to_radians(gps_coordinates['latitude'])
longitude = self.degrees_to_radians(gps_coordinates['longitude'])
return {'latitude': latitude, 'longitude': longitude} | Convert a set of gps coordinates from degrees to radians
Args:
gps_coordinates (dict): A dictionary containing gps coordinates
Returns:
Dict: A dictionary of GPS coordinates converted to radians | gps_distance_calculator/calculator.py | convert_gps_coordinates_to_radians | terepaii/GPS-Distance-Calculator | 0 | python | def convert_gps_coordinates_to_radians(self, gps_coordinates):
'\n Convert a set of gps coordinates from degrees to radians\n\n Args:\n gps_coordinates (dict): A dictionary containing gps coordinates\n\n Returns:\n Dict: A dictionary of GPS coordinates converted to radians\n '
latitude = self.degrees_to_radians(gps_coordinates['latitude'])
longitude = self.degrees_to_radians(gps_coordinates['longitude'])
return {'latitude': latitude, 'longitude': longitude} | def convert_gps_coordinates_to_radians(self, gps_coordinates):
'\n Convert a set of gps coordinates from degrees to radians\n\n Args:\n gps_coordinates (dict): A dictionary containing gps coordinates\n\n Returns:\n Dict: A dictionary of GPS coordinates converted to radians\n '
latitude = self.degrees_to_radians(gps_coordinates['latitude'])
longitude = self.degrees_to_radians(gps_coordinates['longitude'])
return {'latitude': latitude, 'longitude': longitude}<|docstring|>Convert a set of gps coordinates from degrees to radians
Args:
gps_coordinates (dict): A dictionary containing gps coordinates
Returns:
Dict: A dictionary of GPS coordinates converted to radians<|endoftext|> |
edbc515ead7dd5dff58f76b19edc5db12e78d91c6277ef79735150ecaf07df35 | def degrees_to_radians(self, degrees):
'\n Function to convert degrees to radians. Round to 7 decimal places\n\n Args:\n degrees: Degrees\n '
_logger.debug(f'Converting {degrees} degrees to radians')
    return round((float(degrees) * (pi / 180)), 5) | Function to convert degrees to radians. Round to 5 decimal places
Args:
degrees: Degrees | gps_distance_calculator/calculator.py | degrees_to_radians | terepaii/GPS-Distance-Calculator | 0 | python | def degrees_to_radians(self, degrees):
'\n Function to convert degrees to radians. Round to 7 decimal places\n\n Args:\n degrees: Degrees\n '
_logger.debug(f'Converting {degrees} degrees to radians')
return round((float(degrees) * (pi / 180)), 5) | def degrees_to_radians(self, degrees):
'\n Function to convert degrees to radians. Round to 7 decimal places\n\n Args:\n degrees: Degrees\n '
_logger.debug(f'Converting {degrees} degrees to radians')
    return round((float(degrees) * (pi / 180)), 5)<|docstring|>Function to convert degrees to radians. Round to 5 decimal places
Args:
degrees: Degrees<|endoftext|> |
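A standalone version of the `degrees_to_radians` record (dropping `self` and the logger). Worth noting: the code rounds to 5 decimal places even though the original docstring claimed 7, so this sketch follows the code:

```python
from math import pi

def degrees_to_radians(degrees):
    # Mirrors the record: scale by pi/180 and round to 5 decimal places.
    return round(float(degrees) * (pi / 180), 5)

half_turn = degrees_to_radians(180)     # pi, rounded
quarter_turn = degrees_to_radians(90)   # pi/2, rounded
```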
67b4880f1459304157efa1efc4007cd43419fdff25f7c650b2012c4dcc1185e1 | def forward_kinematics(self, joints_readings, up_to_joint=7):
"This function solve forward kinematics by multiplying frame transformation up until a specified\n joint. Reference Lecture 9 slide 13.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.\n Defaults to 7.\n Returns:\n np.ndarray The output is a numpy 4*4 matrix describing the transformation from the 'iiwa_link_0' frame to\n the selected joint frame.\n "
assert isinstance(joints_readings, list), ('joint readings of type ' + str(type(joints_readings)))
assert isinstance(up_to_joint, int)
T = np.identity(4)
T[(2, 3)] = 0.1575
for i in range(0, up_to_joint):
T = T.dot(self.T_rotationZ(joints_readings[i]))
T = T.dot(self.T_translation(self.translation_vec[(i, :)]))
T = T.dot(self.T_rotationX(self.X_alpha[i]))
T = T.dot(self.T_rotationY(self.Y_alpha[i]))
assert isinstance(T, np.ndarray), "Output wasn't of type ndarray"
assert (T.shape == (4, 4)), 'Output had wrong dimensions'
    return T | This function solves forward kinematics by multiplying frame transformations up until a specified
joint. Reference Lecture 9 slide 13.
Args:
joints_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.
Defaults to 7.
Returns:
np.ndarray The output is a numpy 4*4 matrix describing the transformation from the 'iiwa_link_0' frame to
the selected joint frame. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | forward_kinematics | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def forward_kinematics(self, joints_readings, up_to_joint=7):
"This function solve forward kinematics by multiplying frame transformation up until a specified\n joint. Reference Lecture 9 slide 13.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.\n Defaults to 7.\n Returns:\n np.ndarray The output is a numpy 4*4 matrix describing the transformation from the 'iiwa_link_0' frame to\n the selected joint frame.\n "
assert isinstance(joints_readings, list), ('joint readings of type ' + str(type(joints_readings)))
assert isinstance(up_to_joint, int)
T = np.identity(4)
T[(2, 3)] = 0.1575
for i in range(0, up_to_joint):
T = T.dot(self.T_rotationZ(joints_readings[i]))
T = T.dot(self.T_translation(self.translation_vec[(i, :)]))
T = T.dot(self.T_rotationX(self.X_alpha[i]))
T = T.dot(self.T_rotationY(self.Y_alpha[i]))
assert isinstance(T, np.ndarray), "Output wasn't of type ndarray"
assert (T.shape == (4, 4)), 'Output had wrong dimensions'
return T | def forward_kinematics(self, joints_readings, up_to_joint=7):
"This function solve forward kinematics by multiplying frame transformation up until a specified\n joint. Reference Lecture 9 slide 13.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.\n Defaults to 7.\n Returns:\n np.ndarray The output is a numpy 4*4 matrix describing the transformation from the 'iiwa_link_0' frame to\n the selected joint frame.\n "
assert isinstance(joints_readings, list), ('joint readings of type ' + str(type(joints_readings)))
assert isinstance(up_to_joint, int)
T = np.identity(4)
T[(2, 3)] = 0.1575
for i in range(0, up_to_joint):
T = T.dot(self.T_rotationZ(joints_readings[i]))
T = T.dot(self.T_translation(self.translation_vec[(i, :)]))
T = T.dot(self.T_rotationX(self.X_alpha[i]))
T = T.dot(self.T_rotationY(self.Y_alpha[i]))
assert isinstance(T, np.ndarray), "Output wasn't of type ndarray"
assert (T.shape == (4, 4)), 'Output had wrong dimensions'
    return T<|docstring|>This function solves forward kinematics by multiplying frame transformations up until a specified
joint. Reference Lecture 9 slide 13.
Args:
joints_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.
Defaults to 7.
Returns:
np.ndarray The output is a numpy 4*4 matrix describing the transformation from the 'iiwa_link_0' frame to
the selected joint frame.<|endoftext|> |
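The `forward_kinematics` record chains homogeneous transforms (`T_rotationZ`, `T_translation`, ...) that are defined elsewhere in the class. A hedged sketch of two of those helpers, re-implemented here from their standard textbook definitions (the iiwa-specific DH-style constants are omitted):

```python
import numpy as np

def T_rotationZ(theta):
    # 4x4 homogeneous rotation about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    T = np.identity(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    return T

def T_translation(v):
    # 4x4 homogeneous translation by vector v.
    T = np.identity(4)
    T[:3, 3] = v
    return T

# Rotating pi/2 about Z and then stepping one unit along the (rotated)
# x-axis places the frame origin at (0, 1, 0): composed translations are
# expressed in the frame produced by the transforms to their left.
T = T_rotationZ(np.pi / 2).dot(T_translation([1.0, 0.0, 0.0]))
origin = T[:3, 3]
```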
f657e5cc451437012ae45bb20dad204e640d20ec8b9c4a061e5eb458ce9e626e | def get_jacobian_centre_of_mass(self, joint_readings, up_to_joint=7):
'Given the joint values of the robot, compute the Jacobian matrix at the centre of mass of the link.\n Reference - Lecture 9 slide 14.\n\n Args:\n joint_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute the Jacobian.\n Defaults to 7.\n\n Returns:\n jacobian (numpy.ndarray): The output is a numpy 6*7 matrix describing the Jacobian matrix defining at the\n centre of mass of a link.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
jacobian = np.zeros((6, 7))
T0Gi = self.forward_kinematics_centre_of_mass(joint_readings, up_to_joint)
p = T0Gi[(0:3, 3)]
T = []
for i in range(up_to_joint):
T.append(self.forward_kinematics(joint_readings, i))
Tr = T[i]
z_prev = Tr[(0:3, 2)]
p_prev = Tr[(0:3, 3)]
jacobian[(0:3, i)] = np.cross(z_prev, (p - p_prev))
jacobian[(3:6, i)] = z_prev
assert (jacobian.shape == (6, 7))
return jacobian | Given the joint values of the robot, compute the Jacobian matrix at the centre of mass of the link.
Reference - Lecture 9 slide 14.
Args:
joint_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute the Jacobian.
Defaults to 7.
Returns:
        jacobian (numpy.ndarray): The output is a numpy 6*7 matrix describing the Jacobian matrix defined at the
centre of mass of a link. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | get_jacobian_centre_of_mass | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def get_jacobian_centre_of_mass(self, joint_readings, up_to_joint=7):
'Given the joint values of the robot, compute the Jacobian matrix at the centre of mass of the link.\n Reference - Lecture 9 slide 14.\n\n Args:\n joint_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute the Jacobian.\n Defaults to 7.\n\n Returns:\n jacobian (numpy.ndarray): The output is a numpy 6*7 matrix describing the Jacobian matrix defining at the\n centre of mass of a link.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
jacobian = np.zeros((6, 7))
T0Gi = self.forward_kinematics_centre_of_mass(joint_readings, up_to_joint)
p = T0Gi[(0:3, 3)]
T = []
for i in range(up_to_joint):
T.append(self.forward_kinematics(joint_readings, i))
Tr = T[i]
z_prev = Tr[(0:3, 2)]
p_prev = Tr[(0:3, 3)]
jacobian[(0:3, i)] = np.cross(z_prev, (p - p_prev))
jacobian[(3:6, i)] = z_prev
assert (jacobian.shape == (6, 7))
return jacobian | def get_jacobian_centre_of_mass(self, joint_readings, up_to_joint=7):
'Given the joint values of the robot, compute the Jacobian matrix at the centre of mass of the link.\n Reference - Lecture 9 slide 14.\n\n Args:\n joint_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute the Jacobian.\n Defaults to 7.\n\n Returns:\n jacobian (numpy.ndarray): The output is a numpy 6*7 matrix describing the Jacobian matrix defining at the\n centre of mass of a link.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
jacobian = np.zeros((6, 7))
T0Gi = self.forward_kinematics_centre_of_mass(joint_readings, up_to_joint)
p = T0Gi[0:3, 3]
T = []
for i in range(up_to_joint):
T.append(self.forward_kinematics(joint_readings, i))
Tr = T[i]
z_prev = Tr[0:3, 2]
p_prev = Tr[0:3, 3]
jacobian[0:3, i] = np.cross(z_prev, (p - p_prev))
jacobian[3:6, i] = z_prev
assert (jacobian.shape == (6, 7))
return jacobian<|docstring|>Given the joint values of the robot, compute the Jacobian matrix at the centre of mass of the link.
Reference - Lecture 9 slide 14.
Args:
joint_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute the Jacobian.
Defaults to 7.
Returns:
jacobian (numpy.ndarray): The output is a numpy 6*7 matrix describing the Jacobian matrix defining at the
centre of mass of a link.<|endoftext|> |
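The column-by-column construction used in the record above (linear part `z_{i-1} × (p − p_{i-1})`, angular part `z_{i-1}`) can be checked in isolation. The sketch below is a hypothetical single-joint example, not code from the repository; `jacobian_column` is an illustrative name.

```python
import numpy as np

def jacobian_column(z_prev, p_prev, p):
    """One geometric-Jacobian column for a revolute joint: the linear part
    is z_{i-1} x (p - p_{i-1}); the angular part is z_{i-1} itself."""
    col = np.zeros(6)
    col[0:3] = np.cross(z_prev, p - p_prev)
    col[3:6] = z_prev
    return col

# Revolute joint about the base z-axis, point of interest 1 m along +x:
col = jacobian_column(np.array([0.0, 0.0, 1.0]),
                      np.zeros(3),
                      np.array([1.0, 0.0, 0.0]))
# Unit joint velocity moves the point along +y and spins it about z.
```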
c1e5b07c162735bc4bc80e462cf5759fa0a44c5490d9f8ba4219ae4451e8f526 | def forward_kinematics_centre_of_mass(self, joints_readings, up_to_joint=7):
'This function computes the forward kinematics up to the centre of mass for the given joint frame.\n Reference - Lecture 9 slide 14.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematicks.\n Defaults to 5.\n Returns:\n np.ndarray: A 4x4 homogeneous transformation matrix describing the pose of frame_{up_to_joint} for the\n centre of mass w.r.t the base of the robot.\n '
T = np.identity(4)
T[(2, 3)] = 0.1575
T = self.forward_kinematics(joints_readings, (up_to_joint - 1))
T = T.dot(self.T_rotationZ(joints_readings[(up_to_joint - 1)]))
T = T.dot(self.T_translation(self.link_cm[up_to_joint - 1, :]))
return T | This function computes the forward kinematics up to the centre of mass for the given joint frame.
Reference - Lecture 9 slide 14.
Args:
joints_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.
Defaults to 5.
Returns:
np.ndarray: A 4x4 homogeneous transformation matrix describing the pose of frame_{up_to_joint} for the
centre of mass w.r.t the base of the robot. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | forward_kinematics_centre_of_mass | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def forward_kinematics_centre_of_mass(self, joints_readings, up_to_joint=7):
'This function computes the forward kinematics up to the centre of mass for the given joint frame.\n Reference - Lecture 9 slide 14.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematicks.\n Defaults to 5.\n Returns:\n np.ndarray: A 4x4 homogeneous transformation matrix describing the pose of frame_{up_to_joint} for the\n centre of mass w.r.t the base of the robot.\n '
T = np.identity(4)
T[(2, 3)] = 0.1575
T = self.forward_kinematics(joints_readings, (up_to_joint - 1))
T = T.dot(self.T_rotationZ(joints_readings[(up_to_joint - 1)]))
T = T.dot(self.T_translation(self.link_cm[up_to_joint - 1, :]))
return T | def forward_kinematics_centre_of_mass(self, joints_readings, up_to_joint=7):
'This function computes the forward kinematics up to the centre of mass for the given joint frame.\n Reference - Lecture 9 slide 14.\n Args:\n joints_readings (list): the state of the robot joints.\n up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematicks.\n Defaults to 5.\n Returns:\n np.ndarray: A 4x4 homogeneous transformation matrix describing the pose of frame_{up_to_joint} for the\n centre of mass w.r.t the base of the robot.\n '
T = np.identity(4)
T[(2, 3)] = 0.1575
T = self.forward_kinematics(joints_readings, (up_to_joint - 1))
T = T.dot(self.T_rotationZ(joints_readings[(up_to_joint - 1)]))
T = T.dot(self.T_translation(self.link_cm[up_to_joint - 1, :]))
return T<|docstring|>This function computes the forward kinematics up to the centre of mass for the given joint frame.
Reference - Lecture 9 slide 14.
Args:
joints_readings (list): the state of the robot joints.
up_to_joint (int, optional): Specify up to what frame you want to compute forward kinematics.
Defaults to 5.
Returns:
np.ndarray: A 4x4 homogeneous transformation matrix describing the pose of frame_{up_to_joint} for the
centre of mass w.r.t the base of the robot.<|endoftext|> |
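The record above composes a joint rotation and a translation out to the centre of mass as homogeneous transforms. A minimal stand-alone sketch of that composition (assuming 4×4 NumPy homogeneous matrices; `rot_z` and `translation` are illustrative stand-ins for `T_rotationZ`/`T_translation`):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.identity(4)
    T[0:2, 0:2] = [[c, -s], [s, c]]
    return T

def translation(p):
    """Homogeneous translation by vector p."""
    T = np.identity(4)
    T[0:3, 3] = p
    return T

# Rotate 90 degrees about z, then step out to a COM offset of (1, 0, 0):
T = rot_z(np.pi / 2).dot(translation([1.0, 0.0, 0.0]))
# The COM lands on the +y axis of the base frame.
```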
9b5b5a90d0c0fe90cd27c12d7eb2c6d0a4667de9caa9574b3446a9519cccfa3b | def get_B(self, joint_readings):
'Given the joint positions of the robot, compute inertia matrix B.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n B (numpy.ndarray): The output is a numpy 7*7 matrix describing the inertia matrix B.\n '
B = np.zeros((7, 7))
for i in range(1, 8):
R0G = self.forward_kinematics_centre_of_mass(joint_readings, i)[0:3, 0:3]
Ioli = np.zeros((3, 3))
for j in range(3):
Ioli[(j, j)] = self.Ixyz[((i - 1), j)]
Ioli = np.matmul(np.matmul(R0G, Ioli), R0G.T)
mli = self.mass[(i - 1)]
Jcm = self.get_jacobian_centre_of_mass(joint_readings, i)
Jpli = Jcm[0:3, :]
Joli = Jcm[3:6, :]
B += ((mli * np.matmul(Jpli.T, Jpli)) + np.matmul(np.matmul(Joli.T, Ioli), Joli))
return B | Given the joint positions of the robot, compute inertia matrix B.
Args:
joint_readings (list): The positions of the robot joints.
Returns:
B (numpy.ndarray): The output is a numpy 7*7 matrix describing the inertia matrix B. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | get_B | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def get_B(self, joint_readings):
'Given the joint positions of the robot, compute inertia matrix B.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n B (numpy.ndarray): The output is a numpy 7*7 matrix describing the inertia matrix B.\n '
B = np.zeros((7, 7))
for i in range(1, 8):
R0G = self.forward_kinematics_centre_of_mass(joint_readings, i)[0:3, 0:3]
Ioli = np.zeros((3, 3))
for j in range(3):
Ioli[(j, j)] = self.Ixyz[((i - 1), j)]
Ioli = np.matmul(np.matmul(R0G, Ioli), R0G.T)
mli = self.mass[(i - 1)]
Jcm = self.get_jacobian_centre_of_mass(joint_readings, i)
Jpli = Jcm[0:3, :]
Joli = Jcm[3:6, :]
B += ((mli * np.matmul(Jpli.T, Jpli)) + np.matmul(np.matmul(Joli.T, Ioli), Joli))
return B | def get_B(self, joint_readings):
'Given the joint positions of the robot, compute inertia matrix B.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n B (numpy.ndarray): The output is a numpy 7*7 matrix describing the inertia matrix B.\n '
B = np.zeros((7, 7))
for i in range(1, 8):
R0G = self.forward_kinematics_centre_of_mass(joint_readings, i)[0:3, 0:3]
Ioli = np.zeros((3, 3))
for j in range(3):
Ioli[(j, j)] = self.Ixyz[((i - 1), j)]
Ioli = np.matmul(np.matmul(R0G, Ioli), R0G.T)
mli = self.mass[(i - 1)]
Jcm = self.get_jacobian_centre_of_mass(joint_readings, i)
Jpli = Jcm[0:3, :]
Joli = Jcm[3:6, :]
B += ((mli * np.matmul(Jpli.T, Jpli)) + np.matmul(np.matmul(Joli.T, Ioli), Joli))
return B<|docstring|>Given the joint positions of the robot, compute inertia matrix B.
Args:
joint_readings (list): The positions of the robot joints.
Returns:
B (numpy.ndarray): The output is a numpy 7*7 matrix describing the inertia matrix B.<|endoftext|> |
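The accumulation in the record above is the standard kinetic-energy sum B(q) = Σᵢ (mᵢ Jpᵢᵀ Jpᵢ + Joᵢᵀ Iᵢ Joᵢ). A toy one-joint instance (hypothetical numbers, not the iiwa14 parameters) where the result can be checked against the point-mass value m·l²:

```python
import numpy as np

# Toy single-joint "robot": point mass m on a massless rod of length l,
# rotating about the base z-axis.
m, l = 2.0, 0.5
J_p = np.array([[0.0], [l], [0.0]])    # linear Jacobian of the COM (3x1)
J_o = np.array([[0.0], [0.0], [1.0]])  # angular Jacobian (3x1)
I_link = np.zeros((3, 3))              # point mass: no rotational inertia

B = m * J_p.T @ J_p + J_o.T @ I_link @ J_o
# For a point mass this reduces to the scalar B = m * l**2
```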
f265046f858de0cd45f22940f96a8bf21a5c65cfe1a3a8ed2420c0ced456ecf5 | def get_C_times_qdot(self, joint_readings, joint_velocities):
'Given the joint positions and velocities of the robot, compute Coriolis terms C.\n Args:\n joint_readings (list): The positions of the robot joints.\n joint_velocities (list): The velocities of the robot joints.\n\n Returns:\n C (numpy.ndarray): The output is a numpy 7*1 matrix describing the Coriolis terms C times joint velocities.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
assert isinstance(joint_velocities, list)
assert (len(joint_velocities) == 7)
B = self.get_B(joint_readings)
C = np.zeros((7, 7))
delta = 0.001
for i in range(7):
for j in range(7):
for k in range(7):
q_k_copy = np.copy(joint_readings)
q_k_copy[k] += delta
bij = self.get_B(q_k_copy.tolist())
q_i_copy = np.copy(joint_readings)
q_i_copy[i] += delta
bjk = self.get_B(q_i_copy.tolist())
dbij_dq = ((bij[(i, j)] - B[(i, j)]) / delta)
dbjk_dq = ((bjk[(j, k)] - B[(j, k)]) / delta)
C[(i, j)] += ((dbij_dq - (0.5 * dbjk_dq)) * joint_velocities[k])
C = np.matmul(C, np.array(joint_velocities))
assert isinstance(C, np.ndarray)
assert (C.shape == (7,))
return C | Given the joint positions and velocities of the robot, compute Coriolis terms C.
Args:
joint_readings (list): The positions of the robot joints.
joint_velocities (list): The velocities of the robot joints.
Returns:
C (numpy.ndarray): The output is a numpy 7*1 matrix describing the Coriolis terms C times joint velocities. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | get_C_times_qdot | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def get_C_times_qdot(self, joint_readings, joint_velocities):
'Given the joint positions and velocities of the robot, compute Coriolis terms C.\n Args:\n joint_readings (list): The positions of the robot joints.\n joint_velocities (list): The velocities of the robot joints.\n\n Returns:\n C (numpy.ndarray): The output is a numpy 7*1 matrix describing the Coriolis terms C times joint velocities.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
assert isinstance(joint_velocities, list)
assert (len(joint_velocities) == 7)
B = self.get_B(joint_readings)
C = np.zeros((7, 7))
delta = 0.001
for i in range(7):
for j in range(7):
for k in range(7):
q_k_copy = np.copy(joint_readings)
q_k_copy[k] += delta
bij = self.get_B(q_k_copy.tolist())
q_i_copy = np.copy(joint_readings)
q_i_copy[i] += delta
bjk = self.get_B(q_i_copy.tolist())
dbij_dq = ((bij[(i, j)] - B[(i, j)]) / delta)
dbjk_dq = ((bjk[(j, k)] - B[(j, k)]) / delta)
C[(i, j)] += ((dbij_dq - (0.5 * dbjk_dq)) * joint_velocities[k])
C = np.matmul(C, np.array(joint_velocities))
assert isinstance(C, np.ndarray)
assert (C.shape == (7,))
return C | def get_C_times_qdot(self, joint_readings, joint_velocities):
'Given the joint positions and velocities of the robot, compute Coriolis terms C.\n Args:\n joint_readings (list): The positions of the robot joints.\n joint_velocities (list): The velocities of the robot joints.\n\n Returns:\n C (numpy.ndarray): The output is a numpy 7*1 matrix describing the Coriolis terms C times joint velocities.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
assert isinstance(joint_velocities, list)
assert (len(joint_velocities) == 7)
B = self.get_B(joint_readings)
C = np.zeros((7, 7))
delta = 0.001
for i in range(7):
for j in range(7):
for k in range(7):
q_k_copy = np.copy(joint_readings)
q_k_copy[k] += delta
bij = self.get_B(q_k_copy.tolist())
q_i_copy = np.copy(joint_readings)
q_i_copy[i] += delta
bjk = self.get_B(q_i_copy.tolist())
dbij_dq = ((bij[(i, j)] - B[(i, j)]) / delta)
dbjk_dq = ((bjk[(j, k)] - B[(j, k)]) / delta)
C[(i, j)] += ((dbij_dq - (0.5 * dbjk_dq)) * joint_velocities[k])
C = np.matmul(C, np.array(joint_velocities))
assert isinstance(C, np.ndarray)
assert (C.shape == (7,))
return C<|docstring|>Given the joint positions and velocities of the robot, compute Coriolis terms C.
Args:
joint_readings (list): The positions of the robot joints.
joint_velocities (list): The velocities of the robot joints.
Returns:
C (numpy.ndarray): The output is a numpy 7*1 matrix describing the Coriolis terms C times joint velocities.<|endoftext|> |
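The Coriolis record above approximates the partial derivatives of B(q) by forward differences with a small step delta. The same finite-difference idea can be checked on a scalar stand-in whose derivative is known analytically (illustrative sketch; `numeric_derivative` is a hypothetical helper, not repository code):

```python
import numpy as np

def numeric_derivative(f, q, k, delta=1e-3):
    """Forward-difference estimate of df/dq_k, as used to approximate
    the partials of the inertia matrix B(q) in the Coriolis terms."""
    q_shift = np.copy(q)
    q_shift[k] += delta
    return (f(q_shift) - f(q)) / delta

# Scalar stand-in for one entry of B(q):
f = lambda q: np.cos(q[0])
d = numeric_derivative(f, np.array([0.3]), 0)
# Should be close to the analytic derivative -sin(0.3), with O(delta) error.
```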
6a106823f77c7b4c526644277de291e6c7f4c22b8b08ff3b1bcb099c7773183e | def get_G(self, joint_readings):
'Given the joint positions of the robot, compute the gravity matrix g.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n G (numpy.ndarray): The output is a numpy 7*1 numpy array describing the gravity matrix g.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
G = np.zeros((7,))
g = [0, 0, self.g]
g = np.array(g)
delta = 0.001
for i in range(7):
G_i = 0
for j in range(7):
mli = self.mass[j]
J_l = self.get_jacobian_centre_of_mass(joint_readings, (j + 1))[0:3, i]
J_l = J_l.reshape(3, 1)
G_i += (mli * np.matmul(np.array(g), J_l))
G[i] = G_i
assert isinstance(G, np.ndarray)
assert (G.shape == (7,))
return G | Given the joint positions of the robot, compute the gravity matrix g.
Args:
joint_readings (list): The positions of the robot joints.
Returns:
G (numpy.ndarray): The output is a numpy 7*1 numpy array describing the gravity matrix g. | cw3/cw3q2/src/cw3q2/iiwa14DynStudent.py | get_G | joerowelll/COMP0127_Robotic_Systems_Engineering-Courseworks | 0 | python | def get_G(self, joint_readings):
'Given the joint positions of the robot, compute the gravity matrix g.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n G (numpy.ndarray): The output is a numpy 7*1 numpy array describing the gravity matrix g.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
G = np.zeros((7,))
g = [0, 0, self.g]
g = np.array(g)
delta = 0.001
for i in range(7):
G_i = 0
for j in range(7):
mli = self.mass[j]
J_l = self.get_jacobian_centre_of_mass(joint_readings, (j + 1))[0:3, i]
J_l = J_l.reshape(3, 1)
G_i += (mli * np.matmul(np.array(g), J_l))
G[i] = G_i
assert isinstance(G, np.ndarray)
assert (G.shape == (7,))
return G | def get_G(self, joint_readings):
'Given the joint positions of the robot, compute the gravity matrix g.\n Args:\n joint_readings (list): The positions of the robot joints.\n\n Returns:\n G (numpy.ndarray): The output is a numpy 7*1 numpy array describing the gravity matrix g.\n '
assert isinstance(joint_readings, list)
assert (len(joint_readings) == 7)
G = np.zeros((7,))
g = [0, 0, self.g]
g = np.array(g)
delta = 0.001
for i in range(7):
G_i = 0
for j in range(7):
mli = self.mass[j]
J_l = self.get_jacobian_centre_of_mass(joint_readings, (j + 1))[0:3, i]
J_l = J_l.reshape(3, 1)
G_i += (mli * np.matmul(np.array(g), J_l))
G[i] = G_i
assert isinstance(G, np.ndarray)
assert (G.shape == (7,))
return G<|docstring|>Given the joint positions of the robot, compute the gravity matrix g.
Args:
joint_readings (list): The positions of the robot joints.
Returns:
G (numpy.ndarray): The output is a numpy 7*1 numpy array describing the gravity matrix g.<|endoftext|> |
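The gravity record above contracts the gravity acceleration vector with the linear COM Jacobians, g(q)ᵢ = Σⱼ mⱼ g₀ᵀ Jpⱼ(:, i). A hypothetical planar-pendulum check, where the answer is the textbook m·g·l·cos(q) (sign convention assumed: gravity along −y, torque positive restoring):

```python
import numpy as np

# Planar pendulum: mass m at length l, joint angle q from the +x axis.
m, l, g_mag, q = 1.5, 0.4, 9.81, 0.6
J_p = l * np.array([-np.sin(q), np.cos(q), 0.0])  # d(COM position)/dq
g0 = np.array([0.0, -g_mag, 0.0])                  # gravity acceleration
G = -m * g0.dot(J_p)                               # gravity joint torque
# Analytic value: m * g * l * cos(q)
```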
4251cecd383ecfd982b260bdf556dcef4a3063d2be1b914979071cc004759873 | def develop(seed: FiniteSequence, **kwargs) -> Sequence:
"Grow a sequence from a given 'seed' (motive).\n The process does not operate in realtime, and may well\n raise DeadEndReached if it hits a dead-end\n The process is controlled by the following kwargs:\n\n min_beats - controls the length of the sequence. It is\n recommended to keep this figure smaller to start with.\n\n mutators - list of transformers. One will be applied to\n the seed at each stage. These are choosen\n using a weighted random choice. Optionally, you can specify weights\n at the start: eg mutators = [(t1, Decimal(1)), (t2, Decimal('1.5'))...]\n\n constraints - list of constraints. The entire sequence must pass all\n constraints in order to be deemed valid.\n\n adjust_weights: If True, adjust the weighting of the mutators at\n each stage, so that mutators that result in passing results are\n weighted up, and vice versa.\n "
opts = {'mutators': [], 'constraints': [], 'adjust_weights': True, 'min_beats': 1}
opts.update(kwargs)
try:
weights = [y for (x, y) in opts['mutators']]
mutators = [x for (x, y) in opts['mutators']]
except TypeError:
weights = [Decimal(1) for i in range(len(opts['mutators']))]
mutators = opts['mutators']
result = FiniteSequence(seed.events)
transformed = result.events
while (sum([evt.duration for evt in result.events]) < opts['min_beats']):
is_searching = True
while is_searching:
if (set(weights) == {0}):
raise DeadEndReached('The solver ran into a dead-end. Please try again, or adjust the parameters.')
_weights = (float(w) for w in weights)
mutator = random.choices(mutators, _weights)[0]
transformed = list(mutator(Sequence(transformed)))
candidate = FiniteSequence((result.events + transformed))
is_searching = False
for constraint in opts['constraints']:
if (not constraint(candidate)):
is_searching = True
break
if is_searching:
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] > 0)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] - Decimal('0.1'))
continue
result = candidate
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] < 1)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] + Decimal('0.1'))
return result | Grow a sequence from a given 'seed' (motive).
The process does not operate in realtime, and may well
raise DeadEndReached if it hits a dead-end
The process is controlled by the following kwargs:
min_beats - controls the length of the sequence. It is
recommended to keep this figure smaller to start with.
mutators - list of transformers. One will be applied to
the seed at each stage. These are chosen
using a weighted random choice. Optionally, you can specify weights
at the start: eg mutators = [(t1, Decimal(1)), (t2, Decimal('1.5'))...]
constraints - list of constraints. The entire sequence must pass all
constraints in order to be deemed valid.
adjust_weights: If True, adjust the weighting of the mutators at
each stage, so that mutators that result in passing results are
weighted up, and vice versa. | src/composerstoolkit/composers/solvers.py | develop | nickpeck/composerstoolkitv2 | 0 | python | def develop(seed: FiniteSequence, **kwargs) -> Sequence:
"Grow a sequence from a given 'seed' (motive).\n The process does not operate in realtime, and may well\n raise DeadEndReached if it hits a dead-end\n The process is controlled by the following kwargs:\n\n min_beats - controls the length of the sequence. It is\n recommended to keep this figure smaller to start with.\n\n mutators - list of transformers. One will be applied to\n the seed at each stage. These are choosen\n using a weighted random choice. Optionally, you can specify weights\n at the start: eg mutators = [(t1, Decimal(1)), (t2, Decimal('1.5'))...]\n\n constraints - list of constraints. The entire sequence must pass all\n constraints in order to be deemed valid.\n\n adjust_weights: If True, adjust the weighting of the mutators at\n each stage, so that mutators that result in passing results are\n weighted up, and vice versa.\n "
opts = {'mutators': [], 'constraints': [], 'adjust_weights': True, 'min_beats': 1}
opts.update(kwargs)
try:
weights = [y for (x, y) in opts['mutators']]
mutators = [x for (x, y) in opts['mutators']]
except TypeError:
weights = [Decimal(1) for i in range(len(opts['mutators']))]
mutators = opts['mutators']
result = FiniteSequence(seed.events)
transformed = result.events
while (sum([evt.duration for evt in result.events]) < opts['min_beats']):
is_searching = True
while is_searching:
if (set(weights) == {0}):
raise DeadEndReached('The solver ran into a dead-end. Please try again, or adjust the parameters.')
_weights = (float(w) for w in weights)
mutator = random.choices(mutators, _weights)[0]
transformed = list(mutator(Sequence(transformed)))
candidate = FiniteSequence((result.events + transformed))
is_searching = False
for constraint in opts['constraints']:
if (not constraint(candidate)):
is_searching = True
break
if is_searching:
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] > 0)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] - Decimal('0.1'))
continue
result = candidate
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] < 1)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] + Decimal('0.1'))
return result | def develop(seed: FiniteSequence, **kwargs) -> Sequence:
"Grow a sequence from a given 'seed' (motive).\n The process does not operate in realtime, and may well\n raise DeadEndReached if it hits a dead-end\n The process is controlled by the following kwargs:\n\n min_beats - controls the length of the sequence. It is\n recommended to keep this figure smaller to start with.\n\n mutators - list of transformers. One will be applied to\n the seed at each stage. These are choosen\n using a weighted random choice. Optionally, you can specify weights\n at the start: eg mutators = [(t1, Decimal(1)), (t2, Decimal('1.5'))...]\n\n constraints - list of constraints. The entire sequence must pass all\n constraints in order to be deemed valid.\n\n adjust_weights: If True, adjust the weighting of the mutators at\n each stage, so that mutators that result in passing results are\n weighted up, and vice versa.\n "
opts = {'mutators': [], 'constraints': [], 'adjust_weights': True, 'min_beats': 1}
opts.update(kwargs)
try:
weights = [y for (x, y) in opts['mutators']]
mutators = [x for (x, y) in opts['mutators']]
except TypeError:
weights = [Decimal(1) for i in range(len(opts['mutators']))]
mutators = opts['mutators']
result = FiniteSequence(seed.events)
transformed = result.events
while (sum([evt.duration for evt in result.events]) < opts['min_beats']):
is_searching = True
while is_searching:
if (set(weights) == {0}):
raise DeadEndReached('The solver ran into a dead-end. Please try again, or adjust the parameters.')
_weights = (float(w) for w in weights)
mutator = random.choices(mutators, _weights)[0]
transformed = list(mutator(Sequence(transformed)))
candidate = FiniteSequence((result.events + transformed))
is_searching = False
for constraint in opts['constraints']:
if (not constraint(candidate)):
is_searching = True
break
if is_searching:
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] > 0)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] - Decimal('0.1'))
continue
result = candidate
if (opts['adjust_weights'] and (weights[mutators.index(mutator)] < 1)):
weights[mutators.index(mutator)] = (weights[mutators.index(mutator)] + Decimal('0.1'))
return result<|docstring|>Grow a sequence from a given 'seed' (motive).
The process does not operate in realtime, and may well
raise DeadEndReached if it hits a dead-end
The process is controlled by the following kwargs:
min_beats - controls the length of the sequence. It is
recommended to keep this figure smaller to start with.
mutators - list of transformers. One will be applied to
the seed at each stage. These are chosen
using a weighted random choice. Optionally, you can specify weights
at the start: eg mutators = [(t1, Decimal(1)), (t2, Decimal('1.5'))...]
constraints - list of constraints. The entire sequence must pass all
constraints in order to be deemed valid.
adjust_weights: If True, adjust the weighting of the mutators at
each stage, so that mutators that result in passing results are
weighted up, and vice versa.<|endoftext|> |
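The `develop` record above picks a mutator by weighted random choice and signals a dead end when every weight has been driven to zero. That selection mechanism in miniature (illustrative strings, not the real transformer objects):

```python
import random
from decimal import Decimal

mutators = ["transpose", "invert", "retrograde"]
weights = [Decimal(1), Decimal(1), Decimal(0)]  # a zero weight disables a mutator

# Mirror the solver's dead-end check: all weights exhausted means no moves left.
if set(weights) == {Decimal(0)}:
    raise RuntimeError("dead end reached")

# random.choices needs numeric weights; Decimals are converted to float first.
picked = random.choices(mutators, [float(w) for w in weights])[0]
# "retrograde" can never be picked while its weight is zero.
```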
59de3d708c4c2f3f6b9b2d2c74e80bd09bfc59e3b67d7e002e6c3e69b98c2b6d | def backtracking_markov_solver(starting_event: Event, table: Dict[(int, Dict[(int, int)])], **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
if (n_events == 1):
return FiniteSequence(seq)
for constraint in constraints:
if (not constraint(seq)):
print(constraint)
raise InputViolatesConstraints('Unable to solve!')
dead_paths = []
choices = list(range(12))
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
while (tick < (n_events - 1)):
try:
pitch = random.choices(choices, weights)[0]
original_8va = math.floor((previous_note / 12))
_pitch = ((original_8va * 12) + pitch)
if (abs((_pitch - previous_note)) > 12):
_pitch = (_pitch - 12)
note = Event([_pitch], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
choices = list(range(12))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = {True}
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(12))
previous_note = _pitch
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
else:
i = choices.index(pitch)
del weights[i]
del choices[i]
return seq | Compose a melodic sequence based upon the
domain and constraints given.
starting_event: Event dictate the starting pitch.
All subsequent events will be of similar duration.
constraints - list of constraint functions
(see composerstoolkit.composers.constraints)
heuristics - list of heuristics (weight maps)
that can be used to provide a rough shape to the line
(see composerstoolkit.composers.heuristics)
n_events - the number of notes of the desired target
sequence. (Default 1) | src/composerstoolkit/composers/solvers.py | backtracking_markov_solver | nickpeck/composerstoolkitv2 | 0 | python | def backtracking_markov_solver(starting_event: Event, table: Dict[(int, Dict[(int, int)])], **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
if (n_events == 1):
return FiniteSequence(seq)
for constraint in constraints:
if (not constraint(seq)):
print(constraint)
raise InputViolatesConstraints('Unable to solve!')
dead_paths = []
choices = list(range(12))
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
while (tick < (n_events - 1)):
try:
pitch = random.choices(choices, weights)[0]
original_8va = math.floor((previous_note / 12))
_pitch = ((original_8va * 12) + pitch)
if (abs((_pitch - previous_note)) > 12):
_pitch = (_pitch - 12)
note = Event([_pitch], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
choices = list(range(12))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = {True}
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(12))
previous_note = _pitch
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
else:
i = choices.index(pitch)
del weights[i]
del choices[i]
return seq | def backtracking_markov_solver(starting_event: Event, table: Dict[(int, Dict[(int, int)])], **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
if (n_events == 1):
return FiniteSequence(seq)
for constraint in constraints:
if (not constraint(seq)):
print(constraint)
raise InputViolatesConstraints('Unable to solve!')
dead_paths = []
choices = list(range(12))
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
while (tick < (n_events - 1)):
try:
pitch = random.choices(choices, weights)[0]
original_8va = math.floor((previous_note / 12))
_pitch = ((original_8va * 12) + pitch)
if (abs((_pitch - previous_note)) > 12):
_pitch = (_pitch - 12)
note = Event([_pitch], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
previous_note = seq.events[(- 1)].pitches[(- 1)]
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
choices = list(range(12))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = {True}
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(12))
previous_note = _pitch
previous_note_pc = (previous_note % 12)
weights = list(table[previous_note_pc].values())
else:
i = choices.index(pitch)
del weights[i]
del choices[i]
return seq<|docstring|>Compose a melodic sequence based upon the
domain and constraints given.
starting_event: Event dictate the starting pitch.
All subsequent events will be of similar duration.
constraints - list of constraint functions
(see composerstoolkit.composers.constraints)
heuristics - list of heuristics (weight maps)
that can be used to provide a rough shape to the line
(see composerstoolkit.composers.heuristics)
n_events - the number of notes of the desired target
sequence. (Default 1)<|endoftext|> |
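The Markov solver above samples a pitch class and then realises it in a register near the previous note, folding down an octave if the leap would exceed an octave. That register-placement step isolated (a sketch; `nearest_realisation` is an illustrative name):

```python
import math

def nearest_realisation(previous_note, pitch_class):
    """Place a pitch class (0-11) in the octave of the previous MIDI note,
    dropping an octave when the leap would exceed 12 semitones."""
    octave = math.floor(previous_note / 12)
    pitch = octave * 12 + pitch_class
    if abs(pitch - previous_note) > 12:
        pitch -= 12
    return pitch

# Middle C (MIDI 60) followed by pitch class 7 (G) stays in the same octave:
print(nearest_realisation(60, 7))  # 67
```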
51b8f3f7a5c4b39194611cc035299f31e5505d0a39ad47a3ba4b0aa4db29b74b | def backtracking_solver(starting_event: Event, **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
use_weights = (len(heuristics) > 0)
if (n_events == 1):
return FiniteSequence(seq)
results = set()
for constraint in constraints:
results.update([constraint(seq)])
if (results != {True}):
raise InputViolatesConstraints('Unable to solve!')
choices = list(range(NOTE_MIN, NOTE_MAX))
dead_paths = []
while (tick < (n_events - 1)):
if use_weights:
weights = [1.0 for i in range(len(choices))]
for heuristic in heuristics:
weights = heuristic(tick, choices, weights)
try:
if use_weights:
note = Event([random.choices(choices, weights)[0]], starting_event.duration)
else:
note = Event([random.choice(choices)], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = set()
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
else:
choices.remove(note.pitches[(- 1)])
return seq | Compose a melodic sequence based upon the
domain and constraints given.
starting_event: Event that dictates the starting pitch.
All subsequent events will be of similar duration.
constraints - list of constraint functions
(see composerstoolkit.composers.constraints)
heuristics - list of heuristics (weight maps)
that can be used to provide a rough shape to the line
(see composerstoolkit.composers.heuristics)
n_events - the number of notes of the desired target
sequence. (Default 1) | src/composerstoolkit/composers/solvers.py | backtracking_solver | nickpeck/composerstoolkitv2 | 0 | python | def backtracking_solver(starting_event: Event, **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
use_weights = (len(heuristics) > 0)
if (n_events == 1):
return FiniteSequence(seq)
results = set()
for constraint in constraints:
results.update([constraint(seq)])
if (results != {True}):
raise InputViolatesConstraints('Unable to solve!')
choices = list(range(NOTE_MIN, NOTE_MAX))
dead_paths = []
while (tick < (n_events - 1)):
if use_weights:
weights = [1.0 for i in range(len(choices))]
for heuristic in heuristics:
weights = heuristic(tick, choices, weights)
try:
if use_weights:
note = Event([random.choices(choices, weights)[0]], starting_event.duration)
else:
note = Event([random.choice(choices)], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = set()
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
else:
choices.remove(note.pitches[(- 1)])
return seq | def backtracking_solver(starting_event: Event, **kwargs) -> FiniteSequence:
'Compose a melodic sequence based upon the\n domain and constraints given.\n\n starting_event: Event dictate the starting pitch.\n All subsequent events will be of similar duration.\n\n constraints - list of constraint functions\n (see composerstoolkit.composers.constraints)\n\n heuristics - list of heuristics (weight maps)\n that can be used to provide a rough shape to the line\n (see composerstoolkit.composers.heuristics)\n\n n_events - the number of notes of the desired target\n sequence. (Default 1)\n '
opts = {'constraints': [], 'heuristics': [], 'n_events': 1}
opts.update(kwargs)
constraints = opts['constraints']
heuristics = opts['heuristics']
n_events = opts['n_events']
tick = 0
seq = FiniteSequence([starting_event])
use_weights = (len(heuristics) > 0)
if (n_events == 1):
return FiniteSequence(seq)
results = set()
for constraint in constraints:
results.update([constraint(seq)])
if (results != {True}):
raise InputViolatesConstraints('Unable to solve!')
choices = list(range(NOTE_MIN, NOTE_MAX))
dead_paths = []
while (tick < (n_events - 1)):
if use_weights:
weights = [1.0 for i in range(len(choices))]
for heuristic in heuristics:
weights = heuristic(tick, choices, weights)
try:
if use_weights:
note = Event([random.choices(choices, weights)[0]], starting_event.duration)
else:
note = Event([random.choice(choices)], starting_event.duration)
except IndexError:
dead_paths.append(seq[:])
seq = seq[:(- 1)]
tick = (tick - 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
if (tick == 0):
raise AllRoutesExhausted('Unable to solve!')
continue
context = FiniteSequence(seq.events[:])
context.events.append(note)
results = set()
for constraint in constraints:
results.update([constraint(context)])
candidate = seq[:]
candidate.events.append(note)
if ((results == {True}) and (candidate not in dead_paths)):
seq.events.append(note)
tick = (tick + 1)
choices = list(range(NOTE_MIN, NOTE_MAX))
else:
choices.remove(note.pitches[(- 1)])
return seq<|docstring|>Compose a melodic sequence based upon the
domain and constraints given.
starting_event: Event dictate the starting pitch.
All subsequent events will be of similar duration.
constraints - list of constraint functions
(see composerstoolkit.composers.constraints)
heuristics - list of heuristics (weight maps)
that can be used to provide a rough shape to the line
(see composerstoolkit.composers.heuristics)
n_events - the number of notes of the desired target
sequence. (Default 1)<|endoftext|> |
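`backtracking_solver` above is a depth-first search: extend the sequence one event at a time, test every constraint, and on a dead end drop the last event and prune that choice. A compact, domain-agnostic sketch of the same control flow, written recursively instead of with the record's iterative tick/dead-path bookkeeping (names are illustrative):

```python
def backtrack(partial, domain, constraints, n):
    """Depth-first search: extend `partial` to length `n`, pruning any
    candidate extension that violates a constraint."""
    if len(partial) == n:
        return partial
    for value in domain:
        candidate = partial + [value]
        if all(c(candidate) for c in constraints):
            result = backtrack(candidate, domain, constraints, n)
            if result is not None:
                return result
    return None  # all routes exhausted (AllRoutesExhausted in the record)

# Illustrative constraint: the melodic line must strictly ascend.
def ascending(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

solution = backtrack([60], range(55, 72), [ascending], 4)
```

Here the first solution found is `[60, 61, 62, 63]`, since the domain is scanned in order; the record instead samples randomly (optionally heuristic-weighted), so its solutions vary run to run.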
764f1325b07d8223392fab164a22f7c5c27a4b4b9508a66da6dff7e2c694f2a2 | def __init__(self, vocab=None, sos=True, eos=True):
'\n vocab:\n - when a list or tuple: a list of vocabulary words\n - when a str and the path exists: a newline-delimited vocabulary file with one word per line\n sos: whether to prepend [CLS] during conversion\n eos: whether to append [SEP] during conversion\n '
super(TextTransformer, self).__init__(self, dynamic=True)
self.sos = sos
self.eos = eos
self.fitted = False
self.word_index = {PAD: 0, UNK: 1, SOS: 2, EOS: 3}
if isinstance(vocab, (list, tuple)):
words = []
wordi = {}
for (i, w) in enumerate(vocab):
if (w not in wordi):
wordi[w] = True
words.append(w)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True
elif (isinstance(vocab, str) and os.path.exists(vocab)):
words = []
wordi = {}
with open(vocab, 'r', encoding='utf-8') as fp:
for line in fp:
if line.endswith('\n'):
line = line[:(- 1)]
if (len(line) and (line not in wordi)):
wordi[line] = True
words.append(line)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True | vocab:
- when a list or tuple: a list of vocabulary words
- when a str and the path exists: a newline-delimited vocabulary file with one word per line
sos: whether to prepend [CLS] during conversion
eos: whether to append [SEP] during conversion | tf_text_model/text_transformer.py | __init__ | deepdialog/tf-text-model | 0 | python | def __init__(self, vocab=None, sos=True, eos=True):
'\n vocab:\n - when a list or tuple: a list of vocabulary words\n - when a str and the path exists: a newline-delimited vocabulary file with one word per line\n sos: whether to prepend [CLS] during conversion\n eos: whether to append [SEP] during conversion\n '
super(TextTransformer, self).__init__(self, dynamic=True)
self.sos = sos
self.eos = eos
self.fitted = False
self.word_index = {PAD: 0, UNK: 1, SOS: 2, EOS: 3}
if isinstance(vocab, (list, tuple)):
words = []
wordi = {}
for (i, w) in enumerate(vocab):
if (w not in wordi):
wordi[w] = True
words.append(w)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True
elif (isinstance(vocab, str) and os.path.exists(vocab)):
words = []
wordi = {}
with open(vocab, 'r', encoding='utf-8') as fp:
for line in fp:
if line.endswith('\n'):
line = line[:(- 1)]
if (len(line) and (line not in wordi)):
wordi[line] = True
words.append(line)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True | def __init__(self, vocab=None, sos=True, eos=True):
'\n vocab:\n - when a list or tuple: a list of vocabulary words\n - when a str and the path exists: a newline-delimited vocabulary file with one word per line\n sos: whether to prepend [CLS] during conversion\n eos: whether to append [SEP] during conversion\n '
super(TextTransformer, self).__init__(self, dynamic=True)
self.sos = sos
self.eos = eos
self.fitted = False
self.word_index = {PAD: 0, UNK: 1, SOS: 2, EOS: 3}
if isinstance(vocab, (list, tuple)):
words = []
wordi = {}
for (i, w) in enumerate(vocab):
if (w not in wordi):
wordi[w] = True
words.append(w)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True
elif (isinstance(vocab, str) and os.path.exists(vocab)):
words = []
wordi = {}
with open(vocab, 'r', encoding='utf-8') as fp:
for line in fp:
if line.endswith('\n'):
line = line[:(- 1)]
if (len(line) and (line not in wordi)):
wordi[line] = True
words.append(line)
words = ([x for x in [PAD, UNK, SOS, EOS] if (x not in wordi)] + words)
self.word_index = {w: i for (i, w) in enumerate(words)}
self.index_word = {v: k for (k, v) in self.word_index.items()}
self.vocab_size = len(self.word_index)
self.fitted = True<|docstring|>vocab:
- when a list or tuple: a list of vocabulary words
- when a str and the path exists: a newline-delimited vocabulary file with one word per line
sos: whether to prepend [CLS] during conversion
eos: whether to append [SEP] during conversion<|endoftext|>
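The `__init__` above deduplicates the supplied vocabulary while preserving order, then prepends whichever of the PAD/UNK/SOS/EOS special tokens are not already present, so the specials get the lowest indices. A standalone sketch of that index-building step (the concrete token strings are assumptions — the record only imports the PAD/UNK/SOS/EOS constants):

```python
# Assumed token strings; the record imports these constants elsewhere.
PAD, UNK, SOS, EOS = "[PAD]", "[UNK]", "[CLS]", "[SEP]"

def build_word_index(vocab):
    """Deduplicate `vocab` preserving order, then prepend any special
    tokens that are not already present."""
    seen = {}
    words = []
    for w in vocab:
        if w not in seen:
            seen[w] = True
            words.append(w)
    specials = [t for t in (PAD, UNK, SOS, EOS) if t not in seen]
    words = specials + words
    return {w: i for i, w in enumerate(words)}

index = build_word_index(["hello", "world", "hello"])
# [PAD]=0, [UNK]=1, [CLS]=2, [SEP]=3, hello=4, world=5
```

Because the specials are prepended only when absent, a vocabulary file that already lists them keeps its own ordering for those tokens.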
69e657ffef60e5b526b94572a18a7a9de039a3d358052a9fe871bb23b3b4c359 | def nodejs_binary_macro(name, data=[], args=[], visibility=None, tags=[], testonly=0, **kwargs):
'This macro exists only to wrap the nodejs_binary as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_binary`, so most\n users loading `nodejs_binary` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n testonly: applied to nodejs_binary and wrapper binary\n **kwargs: passed to the nodejs_binary\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_binary(name=('%s_bin' % name), data=all_data, testonly=testonly, visibility=['//visibility:private'], **kwargs)
native.sh_binary(name=name, args=args, tags=tags, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)], testonly=testonly, visibility=visibility) | This macro exists only to wrap the nodejs_binary as an .exe for Windows.
This is exposed in the public API at `//:defs.bzl` as `nodejs_binary`, so most
users loading `nodejs_binary` are actually executing this macro.
Args:
name: name of the label
data: runtime dependencies
args: applied to the wrapper binary
visibility: applied to the wrapper binary
tags: applied to the wrapper binary
testonly: applied to nodejs_binary and wrapper binary
**kwargs: passed to the nodejs_binary | internal/node/node.bzl | nodejs_binary_macro | Jumblemuddle/rules_nodejs | 2 | python | def nodejs_binary_macro(name, data=[], args=[], visibility=None, tags=[], testonly=0, **kwargs):
'This macro exists only to wrap the nodejs_binary as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_binary`, so most\n users loading `nodejs_binary` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n testonly: applied to nodejs_binary and wrapper binary\n **kwargs: passed to the nodejs_binary\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_binary(name=('%s_bin' % name), data=all_data, testonly=testonly, visibility=['//visibility:private'], **kwargs)
native.sh_binary(name=name, args=args, tags=tags, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)], testonly=testonly, visibility=visibility) | def nodejs_binary_macro(name, data=[], args=[], visibility=None, tags=[], testonly=0, **kwargs):
'This macro exists only to wrap the nodejs_binary as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_binary`, so most\n users loading `nodejs_binary` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n testonly: applied to nodejs_binary and wrapper binary\n **kwargs: passed to the nodejs_binary\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_binary(name=('%s_bin' % name), data=all_data, testonly=testonly, visibility=['//visibility:private'], **kwargs)
native.sh_binary(name=name, args=args, tags=tags, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)], testonly=testonly, visibility=visibility)<|docstring|>This macro exists only to wrap the nodejs_binary as an .exe for Windows.
This is exposed in the public API at `//:defs.bzl` as `nodejs_binary`, so most
users loading `nodejs_binary` are actually executing this macro.
Args:
name: name of the label
data: runtime dependencies
args: applied to the wrapper binary
visibility: applied to the wrapper binary
tags: applied to the wrapper binary
testonly: applied to nodejs_binary and wrapper binary
**kwargs: passed to the nodejs_binary<|endoftext|> |
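Both macros in this file follow the same wrap-and-forward shape: inject an implicit runtime dependency, rename the inner target with a `_bin` suffix, and pass everything else through `**kwargs`. A Python analogue of that pattern (Starlark shares this calling convention; `inner` here is a stand-in for `nodejs_binary`, not the real rule):

```python
def inner(name, data, testonly=0, entry_point=None):
    """Stand-in for the wrapped rule; just records its arguments."""
    return {"name": name, "data": data, "testonly": testonly,
            "entry_point": entry_point}

def wrapper(name, data=None, testonly=0, **kwargs):
    """Add an implicit runtime dep, rename the inner target, and
    forward all remaining keyword arguments unchanged."""
    all_data = (data or []) + ["@bazel_tools//tools/bash/runfiles"]
    return inner("%s_bin" % name, all_data, testonly=testonly, **kwargs)

target = wrapper("app", data=["lib.js"], entry_point="main.js")
```

Note the macro's literal `data=[]` default is safe in Starlark, where list values are frozen; in Python a `None` sentinel (as above) avoids the shared-mutable-default pitfall.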
7717820be1693cfe77b97ef906203d2e4a2ec87dcf3ea378ddc08c697ebcab2f | def nodejs_test_macro(name, data=[], args=[], visibility=None, tags=[], **kwargs):
'This macro exists only to wrap the nodejs_test as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_test`, so most\n users loading `nodejs_test` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n **kwargs: passed to the nodejs_test\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_test(name=('%s_bin' % name), data=all_data, testonly=1, tags=['manual'], **kwargs)
native.sh_test(name=name, args=args, tags=tags, visibility=visibility, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)]) | This macro exists only to wrap the nodejs_test as an .exe for Windows.
This is exposed in the public API at `//:defs.bzl` as `nodejs_test`, so most
users loading `nodejs_test` are actually executing this macro.
Args:
name: name of the label
data: runtime dependencies
args: applied to the wrapper binary
visibility: applied to the wrapper binary
tags: applied to the wrapper binary
**kwargs: passed to the nodejs_test | internal/node/node.bzl | nodejs_test_macro | Jumblemuddle/rules_nodejs | 2 | python | def nodejs_test_macro(name, data=[], args=[], visibility=None, tags=[], **kwargs):
'This macro exists only to wrap the nodejs_test as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_test`, so most\n users loading `nodejs_test` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n **kwargs: passed to the nodejs_test\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_test(name=('%s_bin' % name), data=all_data, testonly=1, tags=['manual'], **kwargs)
native.sh_test(name=name, args=args, tags=tags, visibility=visibility, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)]) | def nodejs_test_macro(name, data=[], args=[], visibility=None, tags=[], **kwargs):
'This macro exists only to wrap the nodejs_test as an .exe for Windows.\n\n This is exposed in the public API at `//:defs.bzl` as `nodejs_test`, so most\n users loading `nodejs_test` are actually executing this macro.\n\n Args:\n name: name of the label\n data: runtime dependencies\n args: applied to the wrapper binary\n visibility: applied to the wrapper binary\n tags: applied to the wrapper binary\n **kwargs: passed to the nodejs_test\n '
all_data = (data + ['@bazel_tools//tools/bash/runfiles'])
nodejs_test(name=('%s_bin' % name), data=all_data, testonly=1, tags=['manual'], **kwargs)
native.sh_test(name=name, args=args, tags=tags, visibility=visibility, srcs=[(':%s_bin.sh' % name)], data=[(':%s_bin' % name)])<|docstring|>This macro exists only to wrap the nodejs_test as an .exe for Windows.
This is exposed in the public API at `//:defs.bzl` as `nodejs_test`, so most
users loading `nodejs_test` are actually executing this macro.
Args:
name: name of the label
data: runtime dependencies
args: applied to the wrapper binary
visibility: applied to the wrapper binary
tags: applied to the wrapper binary
**kwargs: passed to the nodejs_test<|endoftext|> |
dbcb1298d839fb4acae1ff5f8384c61673e92a30212a2bd6efb0bdd8fba21ba4 | async def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
'Set up platform.'
import broadlink
host = (config.get(CONF_HOST), config.get(CONF_PORT))
mac = get_broadlink_mac(config.get(CONF_MAC))
link = broadlink.rm(host, mac, None)
try:
(await hass.async_add_job(link.auth))
except socket.timeout:
_LOGGER.warning('Timeout trying to authenticate to broadlink')
raise PlatformNotReady
async_add_devices([BroadlinkRM(link, config)]) | Set up platform. | media_player.py | async_setup_platform | elupus/hass_broadlink | 0 | python | async def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
import broadlink
host = (config.get(CONF_HOST), config.get(CONF_PORT))
mac = get_broadlink_mac(config.get(CONF_MAC))
link = broadlink.rm(host, mac, None)
try:
(await hass.async_add_job(link.auth))
except socket.timeout:
_LOGGER.warning('Timeout trying to authenticate to broadlink')
raise PlatformNotReady
async_add_devices([BroadlinkRM(link, config)]) | async def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
import broadlink
host = (config.get(CONF_HOST), config.get(CONF_PORT))
mac = get_broadlink_mac(config.get(CONF_MAC))
link = broadlink.rm(host, mac, None)
try:
(await hass.async_add_job(link.auth))
except socket.timeout:
_LOGGER.warning('Timeout trying to authenticate to broadlink')
raise PlatformNotReady
async_add_devices([BroadlinkRM(link, config)])<|docstring|>Set up platform.<|endoftext|> |
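`async_setup_platform` offloads the blocking `link.auth` call onto an executor thread via `hass.async_add_job` and converts a `socket.timeout` into `PlatformNotReady`, which tells Home Assistant to retry setup later. A minimal asyncio sketch of that pattern (the exception class is a stand-in; the real one lives in `homeassistant.exceptions`):

```python
import asyncio
import socket

class PlatformNotReady(Exception):
    """Stand-in for homeassistant.exceptions.PlatformNotReady."""

def blocking_auth():
    # Placeholder for the blocking broadlink link.auth() call.
    return True

async def setup(auth=blocking_auth):
    loop = asyncio.get_running_loop()
    try:
        # Run the blocking call on the default executor thread pool.
        return await loop.run_in_executor(None, auth)
    except socket.timeout:
        raise PlatformNotReady from None

result = asyncio.run(setup())
```

Raising `PlatformNotReady` (rather than swallowing the timeout) is what lets the platform be retried once the device becomes reachable.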
ed66c220141064441a5fe43122de648d4bc40efb11b92ce6933e528308dfbf73 | def get_supported_by_config(config):
'Calculate support flags based on available configuration entries.'
support = 0
for mapping in SUPPORT_MAPPING:
if (mapping[0] in config):
support = (support | mapping[1])
if config.get(CONF_SOURCES):
support = (support | SUPPORT_SELECT_SOURCE)
if config.get(CONF_SOUND_MODES):
support = (support | SUPPORT_SELECT_SOUND_MODE)
if config.get(CONF_DIGITS):
support = (support | SUPPORT_PLAY_MEDIA)
if config.get(CONF_VOLUME_SET):
support = (support | SUPPORT_VOLUME_SET)
return support | Calculate support flags based on available configuration entries. | media_player.py | get_supported_by_config | elupus/hass_broadlink | 0 | python | def get_supported_by_config(config):
support = 0
for mapping in SUPPORT_MAPPING:
if (mapping[0] in config):
support = (support | mapping[1])
if config.get(CONF_SOURCES):
support = (support | SUPPORT_SELECT_SOURCE)
if config.get(CONF_SOUND_MODES):
support = (support | SUPPORT_SELECT_SOUND_MODE)
if config.get(CONF_DIGITS):
support = (support | SUPPORT_PLAY_MEDIA)
if config.get(CONF_VOLUME_SET):
support = (support | SUPPORT_VOLUME_SET)
return support | def get_supported_by_config(config):
support = 0
for mapping in SUPPORT_MAPPING:
if (mapping[0] in config):
support = (support | mapping[1])
if config.get(CONF_SOURCES):
support = (support | SUPPORT_SELECT_SOURCE)
if config.get(CONF_SOUND_MODES):
support = (support | SUPPORT_SELECT_SOUND_MODE)
if config.get(CONF_DIGITS):
support = (support | SUPPORT_PLAY_MEDIA)
if config.get(CONF_VOLUME_SET):
support = (support | SUPPORT_VOLUME_SET)
return support<|docstring|>Calculate support flags based on available configuration entries.<|endoftext|> |
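`get_supported_by_config` builds a feature bitmask by OR-ing one flag per configuration key that is present. A self-contained sketch with illustrative flag values (the real `SUPPORT_*` constants come from Home Assistant and may differ):

```python
# Illustrative bit flags, one bit per feature.
SUPPORT_TURN_ON = 1 << 0
SUPPORT_TURN_OFF = 1 << 1
SUPPORT_VOLUME_MUTE = 1 << 2

SUPPORT_MAPPING = [
    ("command_on", SUPPORT_TURN_ON),
    ("command_off", SUPPORT_TURN_OFF),
    ("volume_mute", SUPPORT_VOLUME_MUTE),
]

def get_supported(config):
    support = 0
    for key, flag in SUPPORT_MAPPING:
        if key in config:
            support |= flag  # set the bit for each configured feature
    return support

flags = get_supported({"command_on": "...", "volume_mute": "..."})
```

Consumers then test individual capabilities with a bitwise AND, e.g. `flags & SUPPORT_TURN_ON`.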
c93807d21720cbd2e029c8a14b54feae2a9cb10a204b8e85cf4620d910d70a66 | def get_broadlink_mac(mac: str):
'Convert a mac address string with : in it to just a flat string.'
return binascii.unhexlify(mac.encode().replace(b':', b'')) | Convert a mac address string with : in it to just a flat string. | media_player.py | get_broadlink_mac | elupus/hass_broadlink | 0 | python | def get_broadlink_mac(mac: str):
return binascii.unhexlify(mac.encode().replace(b':', b'')) | def get_broadlink_mac(mac: str):
return binascii.unhexlify(mac.encode().replace(b':', b''))<|docstring|>Convert a mac address string with : in it to just a flat string.<|endoftext|>
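`get_broadlink_mac` strips the colons from the MAC string and unhexlifies the remainder, yielding the raw 6-byte hardware address the broadlink library expects:

```python
import binascii

def get_broadlink_mac(mac: str) -> bytes:
    # "aa:bb:cc:dd:ee:ff" -> b"aabbccddeeff" -> b"\xaa\xbb\xcc\xdd\xee\xff"
    return binascii.unhexlify(mac.encode().replace(b":", b""))

raw = get_broadlink_mac("aa:bb:cc:dd:ee:ff")
```

`unhexlify` raises `binascii.Error` on odd-length or non-hex input, so a malformed MAC in the config fails loudly at setup time.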
02d9b92c09fc66939caafddd2be0e80a7cf76ea3302a974157e17391949436d5 | def __init__(self, link, config):
'Initialize device.'
super().__init__()
self._support = get_supported_by_config(config)
self._config = config
self._link = link
self._state = STATE_OFF
self._source = None
self._sound_mode = None
self._muted = None
self._volume_level = None
self._lock = asyncio.Lock()
self._volume_timestamp = (datetime.now() + timedelta(seconds=(- 100)))
self._volume_calls = 0
self._volume_step = None
self._volume_levels = None
self._volume_restore = None
if (CONF_VOLUME_SET in config):
volume_set = config[CONF_VOLUME_SET]
scale = (volume_set[CONF_VOLUME_MAX] - volume_set[CONF_VOLUME_MIN])
offset = volume_set[CONF_VOLUME_MIN]
self._volume_step = (volume_set[CONF_VOLUME_STEP] / scale)
self._volume_levels = {((level - offset) / scale): code for (level, code) in volume_set[CONF_VOLUME_LEVELS].items()}
_LOGGER.debug('Converted step %f, volumes: %s', self._volume_step, self._volume_levels)
if (CONF_VOLUME_RESTORE in volume_set):
self._volume_restore = ((volume_set[CONF_VOLUME_RESTORE] - offset) / scale) | Initialize device. | media_player.py | __init__ | elupus/hass_broadlink | 0 | python | def __init__(self, link, config):
super().__init__()
self._support = get_supported_by_config(config)
self._config = config
self._link = link
self._state = STATE_OFF
self._source = None
self._sound_mode = None
self._muted = None
self._volume_level = None
self._lock = asyncio.Lock()
self._volume_timestamp = (datetime.now() + timedelta(seconds=(- 100)))
self._volume_calls = 0
self._volume_step = None
self._volume_levels = None
self._volume_restore = None
if (CONF_VOLUME_SET in config):
volume_set = config[CONF_VOLUME_SET]
scale = (volume_set[CONF_VOLUME_MAX] - volume_set[CONF_VOLUME_MIN])
offset = volume_set[CONF_VOLUME_MIN]
self._volume_step = (volume_set[CONF_VOLUME_STEP] / scale)
self._volume_levels = {((level - offset) / scale): code for (level, code) in volume_set[CONF_VOLUME_LEVELS].items()}
_LOGGER.debug('Converted step %f, volumes: %s', self._volume_step, self._volume_levels)
if (CONF_VOLUME_RESTORE in volume_set):
self._volume_restore = ((volume_set[CONF_VOLUME_RESTORE] - offset) / scale) | def __init__(self, link, config):
super().__init__()
self._support = get_supported_by_config(config)
self._config = config
self._link = link
self._state = STATE_OFF
self._source = None
self._sound_mode = None
self._muted = None
self._volume_level = None
self._lock = asyncio.Lock()
self._volume_timestamp = (datetime.now() + timedelta(seconds=(- 100)))
self._volume_calls = 0
self._volume_step = None
self._volume_levels = None
self._volume_restore = None
if (CONF_VOLUME_SET in config):
volume_set = config[CONF_VOLUME_SET]
scale = (volume_set[CONF_VOLUME_MAX] - volume_set[CONF_VOLUME_MIN])
offset = volume_set[CONF_VOLUME_MIN]
self._volume_step = (volume_set[CONF_VOLUME_STEP] / scale)
self._volume_levels = {((level - offset) / scale): code for (level, code) in volume_set[CONF_VOLUME_LEVELS].items()}
_LOGGER.debug('Converted step %f, volumes: %s', self._volume_step, self._volume_levels)
if (CONF_VOLUME_RESTORE in volume_set):
self._volume_restore = ((volume_set[CONF_VOLUME_RESTORE] - offset) / scale)<|docstring|>Initialize device.<|endoftext|> |
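The volume handling in `__init__` maps a device-native range `[CONF_VOLUME_MIN, CONF_VOLUME_MAX]` onto the 0..1 scale media players use, via `scale = max - min` and `offset = min`. A standalone sketch of that normalization (the -80..0 dB example range is illustrative, not from the config schema):

```python
def normalize_volumes(volume_min, volume_max, step, levels):
    """Map device volume units onto the 0..1 range used by the player."""
    scale = volume_max - volume_min
    offset = volume_min
    norm_step = step / scale
    # Each named level's device value becomes a 0..1 fraction.
    norm_levels = {(level - offset) / scale: code
                   for level, code in levels.items()}
    return norm_step, norm_levels

# e.g. an amplifier whose volume runs from -80 dB to 0 dB in 2 dB steps
step, levels = normalize_volumes(-80, 0, 2, {-80: "MIN", -40: "HALF", 0: "MAX"})
```

The inverse mapping (`fraction * scale + offset`) recovers the device value, which is what `CONF_VOLUME_RESTORE` relies on.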
b3beab0634c5853cfb697b3879ecbf8e8249c61aea2ddb94317139d90d26eb8a | async def send(self, command):
'Send b64 encoded command to device.'
if (command is None):
raise Exception('No command defined!')
packet = b64decode(command[CONF_CODE])
(await self.hass.async_add_job(self._link.send_data, packet))
if command[CONF_DELAY]:
(await asyncio.sleep(command[CONF_DELAY])) | Send b64 encoded command to device. | media_player.py | send | elupus/hass_broadlink | 0 | python | async def send(self, command):
if (command is None):
raise Exception('No command defined!')
packet = b64decode(command[CONF_CODE])
(await self.hass.async_add_job(self._link.send_data, packet))
if command[CONF_DELAY]:
(await asyncio.sleep(command[CONF_DELAY])) | async def send(self, command):
if (command is None):
raise Exception('No command defined!')
packet = b64decode(command[CONF_CODE])
(await self.hass.async_add_job(self._link.send_data, packet))
if command[CONF_DELAY]:
(await asyncio.sleep(command[CONF_DELAY]))<|docstring|>Send b64 encoded command to device.<|endoftext|> |
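`send` base64-decodes the stored command code before handing the raw packet to the device, then optionally sleeps for the configured delay. A sketch of just the decode step (the payload bytes are made up, not a real Broadlink packet, and the dict keys mirror the CONF_CODE/CONF_DELAY config entries):

```python
from base64 import b64decode, b64encode

# A fake IR payload, then the b64 string form it would take in YAML config.
packet = bytes([0x26, 0x00, 0x04, 0x00])
code = b64encode(packet).decode()

def decode_command(command):
    """Recover the raw packet bytes from a stored command entry."""
    return b64decode(command["code"])

raw = decode_command({"code": code, "delay": 0.0})
```

Storing codes as base64 keeps binary IR/RF captures representable in plain-text YAML; the decode is a lossless round trip.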
c139875c7374a849c481dbbb4b721ff429a14d0c2ee013476aac3776a803ee50 | @property
def name(self):
'Return the name of the controlled device.'
return self._config.get(CONF_NAME) | Return the name of the controlled device. | media_player.py | name | elupus/hass_broadlink | 0 | python | @property
def name(self):
return self._config.get(CONF_NAME) | @property
def name(self):
return self._config.get(CONF_NAME)<|docstring|>Return the name of the controlled device.<|endoftext|> |
e42efc876f1e9ba1fff8e262b88bbfd4ac0401dbf9d3af60a09896093ccf5c48 | @property
def state(self):
'Return the state of the device.'
return self._state | Return the state of the device. | media_player.py | state | elupus/hass_broadlink | 0 | python | @property
def state(self):
return self._state | @property
def state(self):
return self._state<|docstring|>Return the state of the device.<|endoftext|> |
13d8372e686430de03804a7b98750f895bfda47e68c3a022b45164adf07534e0 | @property
def supported_features(self):
'Flag media player features that are supported.'
return self._support | Flag media player features that are supported. | media_player.py | supported_features | elupus/hass_broadlink | 0 | python | @property
def supported_features(self):
return self._support | @property
def supported_features(self):
return self._support<|docstring|>Flag media player features that are supported.<|endoftext|> |
f22c291cbdd32d507bd2395dcc44cd13c0e487be97b1493e4efa63e1bdc9377d | async def async_turn_on(self):
'Turn on media player.'
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_ON)))
self._state = STATE_ON
if self._volume_restore:
(await self.async_set_volume_level(self._volume_restore)) | Turn on media player. | media_player.py | async_turn_on | elupus/hass_broadlink | 0 | python | async def async_turn_on(self):
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_ON)))
self._state = STATE_ON
if self._volume_restore:
(await self.async_set_volume_level(self._volume_restore)) | async def async_turn_on(self):
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_ON)))
self._state = STATE_ON
if self._volume_restore:
(await self.async_set_volume_level(self._volume_restore))<|docstring|>Turn on media player.<|endoftext|> |
75cce3fadb98e1cc37505d8bb7db4ee6da018f9ac0a0fcb5e85acba3122a8884 | async def async_turn_off(self):
'Turn off media player.'
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_OFF)))
self._state = STATE_OFF | Turn off media player. | media_player.py | async_turn_off | elupus/hass_broadlink | 0 | python | async def async_turn_off(self):
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_OFF)))
self._state = STATE_OFF | async def async_turn_off(self):
async with self._lock:
(await self.send(self._config.get(CONF_COMMAND_OFF)))
self._state = STATE_OFF<|docstring|>Turn off media player.<|endoftext|> |
9af4bef44cc9e77ecfd3c56e25fe279fac946f7cfb8387ea268df8483a856be3 | async def async_volume_up(self):
'Volume up media player.'
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_UP)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level += self._volume_step | Volume up media player. | media_player.py | async_volume_up | elupus/hass_broadlink | 0 | python | async def async_volume_up(self):
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_UP)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level += self._volume_step | async def async_volume_up(self):
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_UP)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level += self._volume_step<|docstring|>Volume up media player.<|endoftext|> |
cb8a03306233c0762b11558f2473e3bdc2ca7bc10dc27e2b617965353873eb50 | async def async_volume_down(self):
'Volume down media player.'
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_DOWN)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level -= self._volume_step | Volume down media player. | media_player.py | async_volume_down | elupus/hass_broadlink | 0 | python | async def async_volume_down(self):
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_DOWN)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level -= self._volume_step | async def async_volume_down(self):
async with self._lock:
(await self.send_volume(self._config.get(CONF_VOLUME_DOWN)))
if ((CONF_VOLUME_STEP in self._config) and (self._volume_level is not None)):
self._volume_level -= self._volume_step<|docstring|>Volume down media player.<|endoftext|> |
dad5b1c1011e541aa217418fdf7575f1538d32c82fc64233bea58cd89aa32a0d | async def async_mute_volume(self, mute):
'Send mute command.'
async with self._lock:
if (mute and (CONF_VOLUME_MUTE_ON in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_ON)))
self._muted = True
elif ((not mute) and (CONF_VOLUME_MUTE_OFF in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_OFF)))
self._muted = False
else:
(await self.send(self._config.get(CONF_VOLUME_MUTE))) | Send mute command. | media_player.py | async_mute_volume | elupus/hass_broadlink | 0 | python | async def async_mute_volume(self, mute):
async with self._lock:
if (mute and (CONF_VOLUME_MUTE_ON in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_ON)))
self._muted = True
elif ((not mute) and (CONF_VOLUME_MUTE_OFF in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_OFF)))
self._muted = False
else:
(await self.send(self._config.get(CONF_VOLUME_MUTE))) | async def async_mute_volume(self, mute):
async with self._lock:
if (mute and (CONF_VOLUME_MUTE_ON in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_ON)))
self._muted = True
elif ((not mute) and (CONF_VOLUME_MUTE_OFF in self._config)):
(await self.send(self._config.get(CONF_VOLUME_MUTE_OFF)))
self._muted = False
else:
(await self.send(self._config.get(CONF_VOLUME_MUTE)))<|docstring|>Send mute command.<|endoftext|> |
d24cf9f818f312956f8c8e3c801b689d7e3acf00344adb7b6969f4fa3614d4ba | async def async_media_next_track(self):
'Send next track command.'
async with self._lock:
(await self.send(self._config.get(CONF_NEXT_TRACK))) | Send next track command. | media_player.py | async_media_next_track | elupus/hass_broadlink | 0 | python | async def async_media_next_track(self):
async with self._lock:
(await self.send(self._config.get(CONF_NEXT_TRACK))) | async def async_media_next_track(self):
async with self._lock:
(await self.send(self._config.get(CONF_NEXT_TRACK)))<|docstring|>Send next track command.<|endoftext|> |
75a7f3ca24f1bdeb79d37c2b2bfdd3804c6efc9067739aee5574945538c67108 | async def async_media_previous_track(self):
'Send the previous track command.'
async with self._lock:
(await self.send(self._config.get(CONF_PREVIOUS_TRACK))) | Send the previous track command. | media_player.py | async_media_previous_track | elupus/hass_broadlink | 0 | python | async def async_media_previous_track(self):
async with self._lock:
(await self.send(self._config.get(CONF_PREVIOUS_TRACK))) | async def async_media_previous_track(self):
async with self._lock:
(await self.send(self._config.get(CONF_PREVIOUS_TRACK)))<|docstring|>Send the previous track command.<|endoftext|> |
2127dd2abd4b7ac6a0c99053ce7337c53f93ff2efda505a1c9fc898cad8aea66 | async def async_select_source(self, source):
'Select a specific source.'
async with self._lock:
(await self.send(self._config.get(CONF_SOURCES)[source]))
self._source = source
self._sound_mode = None | Select a specific source. | media_player.py | async_select_source | elupus/hass_broadlink | 0 | python | async def async_select_source(self, source):
async with self._lock:
(await self.send(self._config.get(CONF_SOURCES)[source]))
self._source = source
self._sound_mode = None | async def async_select_source(self, source):
async with self._lock:
(await self.send(self._config.get(CONF_SOURCES)[source]))
self._source = source
self._sound_mode = None<|docstring|>Select a specific source.<|endoftext|> |
ed7ac15dbcf070508e30c3fa4e8fc96b0b2e225a5df6fc197a873da4ab1c2ecf | async def async_select_sound_mode(self, sound_mode):
    'Select a specific sound mode.'
    async with self._lock:
        (await self.send(self._config.get(CONF_SOUND_MODES)[sound_mode]))
        self._sound_mode = sound_mode | Select a specific sound mode. | media_player.py | async_select_sound_mode | elupus/hass_broadlink | 0 | python | async def async_select_sound_mode(self, sound_mode):
    async with self._lock:
        (await self.send(self._config.get(CONF_SOUND_MODES)[sound_mode]))
        self._sound_mode = sound_mode | async def async_select_sound_mode(self, sound_mode):
    async with self._lock:
        (await self.send(self._config.get(CONF_SOUND_MODES)[sound_mode]))
        self._sound_mode = sound_mode<|docstring|>Select a specific sound mode.<|endoftext|>
1a69e07812bcebaed83c59c842ac34560cb5c08dd84bff4a5904e5c210a6b494 | async def async_play_media(self, media_type, media_id, **kwargs):
'Switch to a specific channel.'
if (media_type != MEDIA_TYPE_CHANNEL):
_LOGGER.error('Unsupported media type %s', media_type)
return
cv.positive_int(media_id)
async with self._lock:
for digit in media_id:
(await self.send(self._config.get(CONF_DIGITS).get(digit))) | Switch to a specific channel. | media_player.py | async_play_media | elupus/hass_broadlink | 0 | python | async def async_play_media(self, media_type, media_id, **kwargs):
if (media_type != MEDIA_TYPE_CHANNEL):
_LOGGER.error('Unsupported media type %s', media_type)
return
cv.positive_int(media_id)
async with self._lock:
for digit in media_id:
(await self.send(self._config.get(CONF_DIGITS).get(digit))) | async def async_play_media(self, media_type, media_id, **kwargs):
if (media_type != MEDIA_TYPE_CHANNEL):
_LOGGER.error('Unsupported media type %s', media_type)
return
cv.positive_int(media_id)
async with self._lock:
for digit in media_id:
(await self.send(self._config.get(CONF_DIGITS).get(digit)))<|docstring|>Switch to a specific channel.<|endoftext|> |
21d900946eaf4ec8940e5315e6b8299dad5aa33097ad0a079a7d313377c2c69c | async def async_set_volume_level(self, volume):
'Set volume level, range 0..1.'
if (CONF_VOLUME_SET not in self._config):
raise NotImplementedError()
config = self._config[CONF_VOLUME_SET]
self._volume_calls += 1
volume_calls = self._volume_calls
async with self._lock:
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change early')
def items():
if self._volume_level:
(yield (self._volume_level, None))
(yield from self._volume_levels.items())
(base_level, base_code) = min(items(), key=(lambda kv: abs((volume - kv[0]))))
steps = int(round(((volume - base_level) / self._volume_step)))
if (steps > 0):
code = self._config.get(CONF_VOLUME_UP)
else:
code = self._config.get(CONF_VOLUME_DOWN)
target = (base_level + (self._volume_step * steps))
_LOGGER.debug('Volume base %f(%f) target %f(%f) steps %f', base_level, convert_volume_to_device(config, base_level), target, convert_volume_to_device(config, target), steps)
self._volume_level = target
if base_code:
(await self.send(base_code))
self._volume_timestamp = datetime.now()
for step in range(abs(steps)):
(await self.send_volume(code))
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change')
self._volume_level = (base_level + (self._volume_step * copysign((step + 1), steps)))
break
_LOGGER.debug('Volume level %f(%f)', self._volume_level, convert_volume_to_device(config, self._volume_level)) | Set volume level, range 0..1. | media_player.py | async_set_volume_level | elupus/hass_broadlink | 0 | python | async def async_set_volume_level(self, volume):
if (CONF_VOLUME_SET not in self._config):
raise NotImplementedError()
config = self._config[CONF_VOLUME_SET]
self._volume_calls += 1
volume_calls = self._volume_calls
async with self._lock:
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change early')
def items():
if self._volume_level:
(yield (self._volume_level, None))
(yield from self._volume_levels.items())
(base_level, base_code) = min(items(), key=(lambda kv: abs((volume - kv[0]))))
steps = int(round(((volume - base_level) / self._volume_step)))
if (steps > 0):
code = self._config.get(CONF_VOLUME_UP)
else:
code = self._config.get(CONF_VOLUME_DOWN)
target = (base_level + (self._volume_step * steps))
_LOGGER.debug('Volume base %f(%f) target %f(%f) steps %f', base_level, convert_volume_to_device(config, base_level), target, convert_volume_to_device(config, target), steps)
self._volume_level = target
if base_code:
(await self.send(base_code))
self._volume_timestamp = datetime.now()
for step in range(abs(steps)):
(await self.send_volume(code))
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change')
self._volume_level = (base_level + (self._volume_step * copysign((step + 1), steps)))
break
_LOGGER.debug('Volume level %f(%f)', self._volume_level, convert_volume_to_device(config, self._volume_level)) | async def async_set_volume_level(self, volume):
if (CONF_VOLUME_SET not in self._config):
raise NotImplementedError()
config = self._config[CONF_VOLUME_SET]
self._volume_calls += 1
volume_calls = self._volume_calls
async with self._lock:
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change early')
def items():
if self._volume_level:
(yield (self._volume_level, None))
(yield from self._volume_levels.items())
(base_level, base_code) = min(items(), key=(lambda kv: abs((volume - kv[0]))))
steps = int(round(((volume - base_level) / self._volume_step)))
if (steps > 0):
code = self._config.get(CONF_VOLUME_UP)
else:
code = self._config.get(CONF_VOLUME_DOWN)
target = (base_level + (self._volume_step * steps))
_LOGGER.debug('Volume base %f(%f) target %f(%f) steps %f', base_level, convert_volume_to_device(config, base_level), target, convert_volume_to_device(config, target), steps)
self._volume_level = target
if base_code:
(await self.send(base_code))
self._volume_timestamp = datetime.now()
for step in range(abs(steps)):
(await self.send_volume(code))
if (self._volume_calls != volume_calls):
_LOGGER.debug('Aborted volume change')
self._volume_level = (base_level + (self._volume_step * copysign((step + 1), steps)))
break
_LOGGER.debug('Volume level %f(%f)', self._volume_level, convert_volume_to_device(config, self._volume_level))<|docstring|>Set volume level, range 0..1.<|endoftext|> |
a230e9d37929382ac7c4d37d01e5841e8e5641258dad013cd8b749e93f5ac13d | @property
def media_content_type(self):
'Return content type currently active.'
return MEDIA_TYPE_CHANNEL | Return content type currently active. | media_player.py | media_content_type | elupus/hass_broadlink | 0 | python | @property
def media_content_type(self):
return MEDIA_TYPE_CHANNEL | @property
def media_content_type(self):
return MEDIA_TYPE_CHANNEL<|docstring|>Return content type currently active.<|endoftext|> |
73802913736db41b8ee5af3301696c7e7e568f1c9b236660f69b57e94b3c22c2 | @property
def source(self):
'Return the current input source.'
return self._source | Return the current input source. | media_player.py | source | elupus/hass_broadlink | 0 | python | @property
def source(self):
return self._source | @property
def source(self):
return self._source<|docstring|>Return the current input source.<|endoftext|> |
37ccfb26ea141436ffaade14b8373d43acac7c4bcb7c60307d9833bf76348149 | @property
def source_list(self):
'List of available input sources.'
return list(self._config.get(CONF_SOURCES).keys()) | List of available input sources. | media_player.py | source_list | elupus/hass_broadlink | 0 | python | @property
def source_list(self):
return list(self._config.get(CONF_SOURCES).keys()) | @property
def source_list(self):
return list(self._config.get(CONF_SOURCES).keys())<|docstring|>List of available input sources.<|endoftext|> |
72b3fd855ee04d1e34b2e973d1a8c166010065d48765c620671bb4f985362813 | @property
def sound_mode(self):
'Name of the current sound mode.'
return self._sound_mode | Name of the current sound mode. | media_player.py | sound_mode | elupus/hass_broadlink | 0 | python | @property
def sound_mode(self):
return self._sound_mode | @property
def sound_mode(self):
return self._sound_mode<|docstring|>Name of the current sound mode.<|endoftext|> |
60cd8e1cf51e50065f88b67119a23984b99b074dd5a547fab4bf31869bfc220a | @property
def sound_mode_list(self):
'List of available sound modes.'
return list(self._config.get(CONF_SOUND_MODES).keys()) | List of available sound modes. | media_player.py | sound_mode_list | elupus/hass_broadlink | 0 | python | @property
def sound_mode_list(self):
return list(self._config.get(CONF_SOUND_MODES).keys()) | @property
def sound_mode_list(self):
return list(self._config.get(CONF_SOUND_MODES).keys())<|docstring|>List of available sound modes.<|endoftext|> |
cd2875b5eefff83c0e36834a340cf81a3e9a9fd0e28e1d29d10b2b36d3ba2cd3 | @property
def is_volume_muted(self):
'Boolean if volume is currently muted.'
return self._muted | Boolean if volume is currently muted. | media_player.py | is_volume_muted | elupus/hass_broadlink | 0 | python | @property
def is_volume_muted(self):
return self._muted | @property
def is_volume_muted(self):
return self._muted<|docstring|>Boolean if volume is currently muted.<|endoftext|> |
8de5b137b1b0ed8dcf592165e20993b960a54c967501a802c882b79ec2c96994 | @property
def media_title(self):
'Title of current playing media.'
return self._source | Title of current playing media. | media_player.py | media_title | elupus/hass_broadlink | 0 | python | @property
def media_title(self):
return self._source | @property
def media_title(self):
return self._source<|docstring|>Title of current playing media.<|endoftext|> |
33dd8fb90d485960332a9e72f1de42ec8bc55d9637144cb5904e9ec5a3220dc0 | def _init_layers(self):
'Initialize layers of the head.'
self.relu = nn.ReLU(inplace=True)
self.p_cls_convs = nn.ModuleList()
self.n_cls_convs = nn.ModuleList()
self.w_cls_convs = nn.ModuleList()
self.p_reg_convs = nn.ModuleList()
self.n_reg_convs = nn.ModuleList()
self.w_reg_convs = nn.ModuleList()
for i in range(self.stacked_convs):
chn = (self.in_channels if (i == 0) else self.feat_channels)
self.n_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.n_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.w_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.w_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_cls_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_reg_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.retina_cls = nn.Conv2d(self.feat_channels, (self.num_anchors * self.cls_out_channels), 3, padding=1)
self.retina_reg = nn.Conv2d(self.feat_channels, (self.num_anchors * 4), 3, padding=1) | Initialize layers of the head. | mmdet/models/dense_heads/wan_base_head.py | _init_layers | jms0923/new-header | 0 | python | def _init_layers(self):
self.relu = nn.ReLU(inplace=True)
self.p_cls_convs = nn.ModuleList()
self.n_cls_convs = nn.ModuleList()
self.w_cls_convs = nn.ModuleList()
self.p_reg_convs = nn.ModuleList()
self.n_reg_convs = nn.ModuleList()
self.w_reg_convs = nn.ModuleList()
for i in range(self.stacked_convs):
chn = (self.in_channels if (i == 0) else self.feat_channels)
self.n_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.n_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.w_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.w_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_cls_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_reg_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.retina_cls = nn.Conv2d(self.feat_channels, (self.num_anchors * self.cls_out_channels), 3, padding=1)
self.retina_reg = nn.Conv2d(self.feat_channels, (self.num_anchors * 4), 3, padding=1) | def _init_layers(self):
self.relu = nn.ReLU(inplace=True)
self.p_cls_convs = nn.ModuleList()
self.n_cls_convs = nn.ModuleList()
self.w_cls_convs = nn.ModuleList()
self.p_reg_convs = nn.ModuleList()
self.n_reg_convs = nn.ModuleList()
self.w_reg_convs = nn.ModuleList()
for i in range(self.stacked_convs):
chn = (self.in_channels if (i == 0) else self.feat_channels)
self.n_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.n_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg))
self.w_cls_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.w_reg_convs.append(ConvModule(chn, self.feat_channels, 3, stride=1, dilation=3, padding=3, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_cls_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.p_reg_convs.append(ConvModule(chn, self.feat_channels, 1, stride=1, padding=0, conv_cfg=self.conv_cfg, act_cfg=dict(type='ReLU', inplace=True), norm_cfg=dict(type='BN', requires_grad=True)))
self.retina_cls = nn.Conv2d(self.feat_channels, (self.num_anchors * self.cls_out_channels), 3, padding=1)
self.retina_reg = nn.Conv2d(self.feat_channels, (self.num_anchors * 4), 3, padding=1)<|docstring|>Initialize layers of the head.<|endoftext|> |
3302c7204e973bff3039e0ae21527602523aa53129e4dc95ac5412e9879ca0e3 | def init_weights(self):
'Initialize weights of the head.'
for m in self.n_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.p_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.w_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.n_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.p_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.w_reg_convs:
normal_init(m.conv, std=0.01)
bias_cls = bias_init_with_prob(0.01)
normal_init(self.retina_cls, std=0.01, bias=bias_cls)
normal_init(self.retina_reg, std=0.01) | Initialize weights of the head. | mmdet/models/dense_heads/wan_base_head.py | init_weights | jms0923/new-header | 0 | python | def init_weights(self):
for m in self.n_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.p_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.w_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.n_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.p_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.w_reg_convs:
normal_init(m.conv, std=0.01)
bias_cls = bias_init_with_prob(0.01)
normal_init(self.retina_cls, std=0.01, bias=bias_cls)
normal_init(self.retina_reg, std=0.01) | def init_weights(self):
for m in self.n_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.p_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.w_cls_convs:
normal_init(m.conv, std=0.01)
for m in self.n_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.p_reg_convs:
normal_init(m.conv, std=0.01)
for m in self.w_reg_convs:
normal_init(m.conv, std=0.01)
bias_cls = bias_init_with_prob(0.01)
normal_init(self.retina_cls, std=0.01, bias=bias_cls)
normal_init(self.retina_reg, std=0.01)<|docstring|>Initialize weights of the head.<|endoftext|> |
554afb8e85f81fc4a63fcde5b43bcd35e66c2907019a61366bf807cd77ef66b0 | def forward(self, feats):
'Forward features from the upstream network.\n\n Args:\n feats (tuple[Tensor]): Features from the upstream network, each is\n a 4D-tensor.\n\n Returns:\n tuple: A tuple of classification scores and bbox prediction.\n\n - cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.\n - bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.\n '
return multi_apply(self.forward_single, feats) | Forward features from the upstream network.
Args:
feats (tuple[Tensor]): Features from the upstream network, each is
a 4D-tensor.
Returns:
tuple: A tuple of classification scores and bbox prediction.
- cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4. | mmdet/models/dense_heads/wan_base_head.py | forward | jms0923/new-header | 0 | python | def forward(self, feats):
'Forward features from the upstream network.\n\n Args:\n feats (tuple[Tensor]): Features from the upstream network, each is\n a 4D-tensor.\n\n Returns:\n tuple: A tuple of classification scores and bbox prediction.\n\n - cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.\n - bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.\n '
return multi_apply(self.forward_single, feats) | def forward(self, feats):
'Forward features from the upstream network.\n\n Args:\n feats (tuple[Tensor]): Features from the upstream network, each is\n a 4D-tensor.\n\n Returns:\n tuple: A tuple of classification scores and bbox prediction.\n\n - cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.\n - bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.\n '
return multi_apply(self.forward_single, feats)<|docstring|>Forward features from the upstream network.
Args:
feats (tuple[Tensor]): Features from the upstream network, each is
a 4D-tensor.
Returns:
tuple: A tuple of classification scores and bbox prediction.
- cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.<|endoftext|> |
fc602d2d716f1bce2569e56c94a50bc1bb993a39ce310bc0b7eaefba9367a77c | def forward_single(self, x):
'Forward feature of a single scale level.\n\n Args:\n x (Tensor): Features of a single scale level.\n\n Returns:\n tuple:\n cls_score (Tensor): Cls scores for a single scale level\n the channels number is num_anchors * num_classes.\n bbox_pred (Tensor): Box energies / deltas for a single scale\n level, the channels number is num_anchors * 4.\n '
cls_feat = x
reg_feat = x
for (p_cls_conv, n_cls_conv, w_cls_conv) in zip(self.p_cls_convs, self.n_cls_convs, self.w_cls_convs):
identity = cls_feat
p_cls_feat = cls_feat
n_cls_feat = cls_feat
w_cls_feat = cls_feat
p_cls_feat = p_cls_conv(p_cls_feat)
n_cls_feat = n_cls_conv(n_cls_feat)
w_cls_feat = w_cls_conv(w_cls_feat)
cls_feat = ((p_cls_feat + n_cls_feat) + w_cls_feat)
cls_feat += identity
cls_feat = self.relu(cls_feat)
for (p_reg_conv, n_reg_conv, w_reg_conv) in zip(self.p_reg_convs, self.n_reg_convs, self.w_reg_convs):
identity = reg_feat
p_reg_feat = reg_feat
n_reg_feat = reg_feat
w_reg_feat = reg_feat
p_reg_feat = p_reg_conv(p_reg_feat)
n_reg_feat = n_reg_conv(n_reg_feat)
w_reg_feat = w_reg_conv(w_reg_feat)
reg_feat = ((p_reg_feat + n_reg_feat) + w_reg_feat)
reg_feat += identity
reg_feat = self.relu(reg_feat)
cls_score = self.retina_cls(cls_feat)
bbox_pred = self.retina_reg(reg_feat)
return (cls_score, bbox_pred) | Forward feature of a single scale level.
Args:
x (Tensor): Features of a single scale level.
Returns:
tuple:
cls_score (Tensor): Cls scores for a single scale level
the channels number is num_anchors * num_classes.
bbox_pred (Tensor): Box energies / deltas for a single scale
level, the channels number is num_anchors * 4. | mmdet/models/dense_heads/wan_base_head.py | forward_single | jms0923/new-header | 0 | python | def forward_single(self, x):
'Forward feature of a single scale level.\n\n Args:\n x (Tensor): Features of a single scale level.\n\n Returns:\n tuple:\n cls_score (Tensor): Cls scores for a single scale level\n the channels number is num_anchors * num_classes.\n bbox_pred (Tensor): Box energies / deltas for a single scale\n level, the channels number is num_anchors * 4.\n '
cls_feat = x
reg_feat = x
for (p_cls_conv, n_cls_conv, w_cls_conv) in zip(self.p_cls_convs, self.n_cls_convs, self.w_cls_convs):
identity = cls_feat
p_cls_feat = cls_feat
n_cls_feat = cls_feat
w_cls_feat = cls_feat
p_cls_feat = p_cls_conv(p_cls_feat)
n_cls_feat = n_cls_conv(n_cls_feat)
w_cls_feat = w_cls_conv(w_cls_feat)
cls_feat = ((p_cls_feat + n_cls_feat) + w_cls_feat)
cls_feat += identity
cls_feat = self.relu(cls_feat)
for (p_reg_conv, n_reg_conv, w_reg_conv) in zip(self.p_reg_convs, self.n_reg_convs, self.w_reg_convs):
identity = reg_feat
p_reg_feat = reg_feat
n_reg_feat = reg_feat
w_reg_feat = reg_feat
p_reg_feat = p_reg_conv(p_reg_feat)
n_reg_feat = n_reg_conv(n_reg_feat)
w_reg_feat = w_reg_conv(w_reg_feat)
reg_feat = ((p_reg_feat + n_reg_feat) + w_reg_feat)
reg_feat += identity
reg_feat = self.relu(reg_feat)
cls_score = self.retina_cls(cls_feat)
bbox_pred = self.retina_reg(reg_feat)
return (cls_score, bbox_pred) | def forward_single(self, x):
'Forward feature of a single scale level.\n\n Args:\n x (Tensor): Features of a single scale level.\n\n Returns:\n tuple:\n cls_score (Tensor): Cls scores for a single scale level\n the channels number is num_anchors * num_classes.\n bbox_pred (Tensor): Box energies / deltas for a single scale\n level, the channels number is num_anchors * 4.\n '
cls_feat = x
reg_feat = x
for (p_cls_conv, n_cls_conv, w_cls_conv) in zip(self.p_cls_convs, self.n_cls_convs, self.w_cls_convs):
identity = cls_feat
p_cls_feat = cls_feat
n_cls_feat = cls_feat
w_cls_feat = cls_feat
p_cls_feat = p_cls_conv(p_cls_feat)
n_cls_feat = n_cls_conv(n_cls_feat)
w_cls_feat = w_cls_conv(w_cls_feat)
cls_feat = ((p_cls_feat + n_cls_feat) + w_cls_feat)
cls_feat += identity
cls_feat = self.relu(cls_feat)
for (p_reg_conv, n_reg_conv, w_reg_conv) in zip(self.p_reg_convs, self.n_reg_convs, self.w_reg_convs):
identity = reg_feat
p_reg_feat = reg_feat
n_reg_feat = reg_feat
w_reg_feat = reg_feat
p_reg_feat = p_reg_conv(p_reg_feat)
n_reg_feat = n_reg_conv(n_reg_feat)
w_reg_feat = w_reg_conv(w_reg_feat)
reg_feat = ((p_reg_feat + n_reg_feat) + w_reg_feat)
reg_feat += identity
reg_feat = self.relu(reg_feat)
cls_score = self.retina_cls(cls_feat)
bbox_pred = self.retina_reg(reg_feat)
return (cls_score, bbox_pred)<|docstring|>Forward feature of a single scale level.
Args:
x (Tensor): Features of a single scale level.
Returns:
tuple:
cls_score (Tensor): Cls scores for a single scale level
the channels number is num_anchors * num_classes.
bbox_pred (Tensor): Box energies / deltas for a single scale
level, the channels number is num_anchors * 4.<|endoftext|> |
dd0c2a39fe509c9d72b7a38f9a3fd47f5d0c0b857279780f6147e4059568bc04 | def _build_btree(self, prediction):
'Generate a prediction distribution over all tokens, for each batch sample.'
embedding = self._dataset.get_embedding()
token_vectors = embedding.get_tokens_values()
token_vectors_shape = token_vectors.shape
token_vectors_3d_pl = self._dual.add(self.embedding, shape=token_vectors_shape, default_value=0.0).add_pl()
self._dual.set_values(self.embedding, token_vectors)
token_vectors_4d = tf.expand_dims(token_vectors_3d_pl, 0)
m0 = (1.0 - token_vectors_4d)
m1 = token_vectors_4d
m2 = (tf.maximum(token_vectors_4d, 1.0) - 1.0)
prediction = tf.clip_by_value(prediction, 0.0, 1.0)
prediction_vectors_3d = tf.reduce_sum(prediction, axis=3)
prediction_vectors_4d = tf.expand_dims(prediction_vectors_3d, axis=1)
p0 = (1.0 - prediction_vectors_4d)
p1 = prediction_vectors_4d
tree_paths_probs = ((m0 * p0) + (m1 * p1))
tree_paths_probs = tf.maximum(tree_paths_probs, m2)
tree_predictions = tf.reduce_prod(tree_paths_probs, axis=3)
sum_predictions = tf.reduce_sum(tree_predictions, axis=2)
sum_distributions = tf.reduce_sum(sum_predictions, axis=1, keepdims=True)
prediction_distributions = tf.divide(sum_predictions, sum_distributions)
return prediction_distributions | Generate a prediction distribution over all tokens, for each batch sample. | rsm/components/token_embedding_decoder.py | _build_btree | Cerenaut/rsm | 0 | python | def _build_btree(self, prediction):
embedding = self._dataset.get_embedding()
token_vectors = embedding.get_tokens_values()
token_vectors_shape = token_vectors.shape
token_vectors_3d_pl = self._dual.add(self.embedding, shape=token_vectors_shape, default_value=0.0).add_pl()
self._dual.set_values(self.embedding, token_vectors)
token_vectors_4d = tf.expand_dims(token_vectors_3d_pl, 0)
m0 = (1.0 - token_vectors_4d)
m1 = token_vectors_4d
m2 = (tf.maximum(token_vectors_4d, 1.0) - 1.0)
prediction = tf.clip_by_value(prediction, 0.0, 1.0)
prediction_vectors_3d = tf.reduce_sum(prediction, axis=3)
prediction_vectors_4d = tf.expand_dims(prediction_vectors_3d, axis=1)
p0 = (1.0 - prediction_vectors_4d)
p1 = prediction_vectors_4d
tree_paths_probs = ((m0 * p0) + (m1 * p1))
tree_paths_probs = tf.maximum(tree_paths_probs, m2)
tree_predictions = tf.reduce_prod(tree_paths_probs, axis=3)
sum_predictions = tf.reduce_sum(tree_predictions, axis=2)
sum_distributions = tf.reduce_sum(sum_predictions, axis=1, keepdims=True)
prediction_distributions = tf.divide(sum_predictions, sum_distributions)
return prediction_distributions | def _build_btree(self, prediction):
embedding = self._dataset.get_embedding()
token_vectors = embedding.get_tokens_values()
token_vectors_shape = token_vectors.shape
token_vectors_3d_pl = self._dual.add(self.embedding, shape=token_vectors_shape, default_value=0.0).add_pl()
self._dual.set_values(self.embedding, token_vectors)
token_vectors_4d = tf.expand_dims(token_vectors_3d_pl, 0)
m0 = (1.0 - token_vectors_4d)
m1 = token_vectors_4d
m2 = (tf.maximum(token_vectors_4d, 1.0) - 1.0)
prediction = tf.clip_by_value(prediction, 0.0, 1.0)
prediction_vectors_3d = tf.reduce_sum(prediction, axis=3)
prediction_vectors_4d = tf.expand_dims(prediction_vectors_3d, axis=1)
p0 = (1.0 - prediction_vectors_4d)
p1 = prediction_vectors_4d
tree_paths_probs = ((m0 * p0) + (m1 * p1))
tree_paths_probs = tf.maximum(tree_paths_probs, m2)
tree_predictions = tf.reduce_prod(tree_paths_probs, axis=3)
sum_predictions = tf.reduce_sum(tree_predictions, axis=2)
sum_distributions = tf.reduce_sum(sum_predictions, axis=1, keepdims=True)
prediction_distributions = tf.divide(sum_predictions, sum_distributions)
return prediction_distributions<|docstring|>Generate a prediction distribution over all tokens, for each batch sample.<|endoftext|> |
bbe506b58827f88998d09b85a97f0235b2af46c1691f18f727c7f43adbd6a9c1 | def recursive_global_max_soln(self, root: TreeNode) -> int:
'\n Recursive solution by updating a global maximum variable\n '
self.res = (- float('inf'))
def helper(node: TreeNode) -> int:
if (node is None):
return 0
left_res = helper(node.left)
right_res = helper(node.right)
self.res = max(self.res, ((left_res + right_res) + node.val))
return max((node.val + max(left_res, right_res)), 0)
helper(root)
return self.res | Recursive solution by updating a global maximum variable | Python/124.binary-tree-maximum-path-sum.py | recursive_global_max_soln | Dxyk/LeetCode | 0 | python | def recursive_global_max_soln(self, root: TreeNode) -> int:
'\n \n '
self.res = (- float('inf'))
def helper(node: TreeNode) -> int:
if (node is None):
return 0
left_res = helper(node.left)
right_res = helper(node.right)
self.res = max(self.res, ((left_res + right_res) + node.val))
return max((node.val + max(left_res, right_res)), 0)
helper(root)
return self.res | def recursive_global_max_soln(self, root: TreeNode) -> int:
'\n \n '
self.res = (- float('inf'))
def helper(node: TreeNode) -> int:
if (node is None):
return 0
left_res = helper(node.left)
right_res = helper(node.right)
self.res = max(self.res, ((left_res + right_res) + node.val))
return max((node.val + max(left_res, right_res)), 0)
helper(root)
return self.res<|docstring|>Recursive solution by updating a global maximum variable<|endoftext|> |
path: Python/124.binary-tree-maximum-path-sum.py | name: recursive_soln | repository_name: Dxyk/LeetCode | repository_stars: 0 | lang: python

def recursive_soln(self, root: TreeNode) -> int:
    """Recursive solution that returns two values per call.

    The helper returns a pair:
    - the max sum of a path that starts at the node and goes downward
      (a single chain, like a linked list), and
    - the max path sum found anywhere in the node's subtree (the path
      may bend through a node, forming a tree-shaped inverted V).
    """

    def helper(node: TreeNode) -> Tuple[int, int]:
        if node is None:
            return -float('inf'), -float('inf')
        left_root, left_sub = helper(node.left)
        right_root, right_sub = helper(node.right)
        all_subs = [left_root, left_sub, right_root, right_sub,
                    node.val + left_root + right_root]
        return node.val + max(left_root, right_root, 0), max(all_subs)

    return max(helper(root))
path: parser.py | name: p_program | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_program(p):
    'program : block DOT'
    p[0] = s.Program('<program>', [p[1]])
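The `p_*` functions in this repository follow the PLY (Python Lex-Yacc) convention, in which the grammar production for each rule lives in the function's docstring rather than in the function name. A minimal stdlib-only illustration of that convention (no `ply` import; the tuple built here is a stand-in for the repository's `s.Program` node class):

```python
def p_program(p):
    'program : block DOT'          # the production PLY's yacc will read
    p[0] = ('program', p[1])       # stand-in for s.Program('<program>', [p[1]])

# yacc.yacc() builds its parse tables by inspecting each rule's __doc__:
print(p_program.__doc__)  # prints: program : block DOT
```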
path: parser.py | name: p_block | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_block(p):
    'block : constDecl varDecl procDecl statement'
    p[0] = s.Block('<block>', [p[1], p[2], p[3], p[4]])
path: parser.py | name: p_constDecl | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_constDecl(p):
    '''constDecl : CONST constAssignmentList SEMI
                 | empty'''
    if len(p) == 4:
        p[0] = s.ConstDecl('<constDecl1>', [p[2]])
    else:
        p[0] = s.ConstDecl('<constDecl2>', [], 'empty')
path: parser.py | name: p_constAssignmentList | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_constAssignmentList(p):
    '''constAssignmentList : ID ASSIGN NUMBER
                           | constAssignmentList COMMA ID ASSIGN NUMBER'''
    if len(p) == 4:
        p[0] = s.ConstAssignmentList('<constAssignmentList1>', [], [p[1], p[3]])
    else:
        p[0] = s.ConstAssignmentList('<constAssignmentList2>', [p[1]], [p[3], p[5]])
path: parser.py | name: p_varDecl | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_varDecl(p):
    '''varDecl : VAR identList SEMI
               | empty'''
    if len(p) == 4:
        p[0] = s.VarDecl('<varDecl1>', [p[2]])
    else:
        p[0] = s.VarDecl('<varDecl2>', [], 'empty')
path: parser.py | name: p_identList | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_identList(p):
    '''identList : ID
                 | identList COMMA ID'''
    if len(p) == 2:
        p[0] = s.IdentList('<identList1>', [], p[1])
    else:
        p[0] = s.IdentList('<identList2>', [p[1]], p[3])
path: parser.py | name: p_procDecl | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_procDecl(p):
    '''procDecl : procDecl PROCEDURE ID SEMI block SEMI
                | empty'''
    if len(p) == 7:
        p[0] = s.ProcDecl('<procDecl1>', [p[1], p[5]], p[3])
    else:
        p[0] = s.ProcDecl('<procDecl2>', [], 'empty')
path: parser.py | name: p_statement | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_statement(p):
    '''statement : ID UPDATE expression
                 | CALL ID
                 | BEGIN statementList END
                 | IF condition THEN statement
                 | WHILE condition DO statement
                 | empty'''
    if len(p) == 4:
        if p[2] == ':=':
            # ID UPDATE expression
            p[0] = s.Statement('<statement1>', [p[3]], p[1])
        else:
            # BEGIN statementList END
            p[0] = s.Node('<statement3>', [p[2]])
    elif len(p) == 5:
        if p[1] == 'IF':
            # IF condition THEN statement
            p[0] = s.Node('<statement4>', [p[2], p[4]])
        else:
            # WHILE condition DO statement
            p[0] = s.Node('<statement5>', [p[2], p[4]])
    elif len(p) == 3:
        # CALL ID
        p[0] = s.Node('<statement2>', [], p[2])
    else:
        # empty
        p[0] = s.Node('<statement>', [], 'empty')
path: parser.py | name: p_statementList | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_statementList(p):
    '''statementList : statement
                     | statementList SEMI statement'''
    if len(p) == 2:
        p[0] = s.StatementList('<statementList1>', [p[1]])
    else:
        p[0] = s.StatementList('<statementList2>', [p[1], p[3]])
path: parser.py | name: p_condition | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_condition(p):
    '''condition : ODD expression
                 | expression relation expression'''
    if len(p) == 3:
        p[0] = s.Condition('<condition1>', [p[2]])
    else:
        p[0] = s.Condition('<condition2>', [p[1], p[2], p[3]])
path: parser.py | name: p_relation | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_relation(p):
    '''relation : ASSIGN
                | NE
                | LT
                | GT
                | LTE
                | GTE'''
    # The same node is built for every relational operator.
    p[0] = s.Relation('<relation>', [], p[1])
path: parser.py | name: p_expression | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_expression(p):
    '''expression : term
                  | addingOperator term
                  | expression addingOperator term'''
    if len(p) == 2:
        p[0] = s.Expression('<expression1>', [p[1]])
    elif len(p) == 3:
        p[0] = s.Expression('<expression2>', [p[1], p[2]])
    else:
        p[0] = s.Expression('<expression3>', [p[1], p[2], p[3]])
path: parser.py | name: p_addingOperator | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_addingOperator(p):
    '''addingOperator : PLUS
                      | MINUS'''
    # The same node is built for both operators.
    p[0] = s.AddingOperator('<adding_op>', [], p[1])
path: parser.py | name: p_term | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_term(p):
    '''term : factor
            | term multiplyingOperator factor'''
    if len(p) == 2:
        p[0] = s.Term('<term1>', [p[1]])
    else:
        p[0] = s.Term('<term2>', [p[1], p[2], p[3]])
path: parser.py | name: p_multiplyingOperator | repository_name: FaruNL/compiler-2 | repository_stars: 0 | lang: python

def p_multiplyingOperator(p):
    '''multiplyingOperator : TIMES
                           | DIVIDE'''
    # The same node is built for both operators.
    p[0] = s.MultiplyingOperator('<multiply_op>', [], p[1])
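Taken together, `p_expression`, `p_term`, `p_addingOperator`, and `p_multiplyingOperator` encode a standard left-recursive arithmetic grammar. The `factor` rule does not appear in this chunk, so the sketch below assumes the usual PL/0 form (`factor : NUMBER | LPAREN expression RPAREN`, numbers only) and evaluates the same grammar with a small hand-rolled recursive-descent parser, to show the precedence and left-associativity the rules produce:

```python
import re

def tokenize(src: str) -> list:
    # NUMBER tokens plus the operator/paren terminals used by the grammar.
    return re.findall(r'\d+|[-+*/()]', src)

def evaluate(src: str) -> int:
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor() -> int:
        # Assumed rule (not in this chunk): factor : NUMBER | LPAREN expression RPAREN
        if peek() == '(':
            eat()
            value = expression()
            assert eat() == ')', 'unbalanced parenthesis'
            return value
        return int(eat())

    def term() -> int:
        # term : factor | term multiplyingOperator factor   (left-associative)
        value = factor()
        while peek() in ('*', '/'):
            value = value * factor() if eat() == '*' else value // factor()
        return value

    def expression() -> int:
        # expression : term | addingOperator term
        #            | expression addingOperator term       (left-associative)
        sign = -1 if peek() == '-' else 1
        if peek() in ('+', '-'):
            eat()
        value = sign * term()
        while peek() in ('+', '-'):
            value = value + term() if eat() == '+' else value - term()
        return value

    return expression()

print(evaluate('2 + 3 * 4'))    # 14: term binds tighter than expression
print(evaluate('(2 + 3) * 4'))  # 20
```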