| repo | path | url | code | code_tokens | docstring | docstring_tokens | language | partition |
|---|---|---|---|---|---|---|---|---|
bitesofcode/projexui | projexui/widgets/xorbgridedit/xorbgridedit.py | https://github.com/bitesofcode/projexui/blob/f18a73bec84df90b034ca69b9deea118dbedfc4d/projexui/widgets/xorbgridedit/xorbgridedit.py#L186-L194 | def refresh(self):
"""
Commits changes stored in the interface to the database.
"""
table = self.tableType()
if table:
table.markTableCacheExpired()
self.uiRecordTREE.searchRecords(self.uiSearchTXT.text()) | [
"def",
"refresh",
"(",
"self",
")",
":",
"table",
"=",
"self",
".",
"tableType",
"(",
")",
"if",
"table",
":",
"table",
".",
"markTableCacheExpired",
"(",
")",
"self",
".",
"uiRecordTREE",
".",
"searchRecords",
"(",
"self",
".",
"uiSearchTXT",
".",
"text... | Commits changes stored in the interface to the database. | [
"Commits",
"changes",
"stored",
"in",
"the",
"interface",
"to",
"the",
"database",
"."
] | python | train |
mitsei/dlkit | dlkit/aws_adapter/repository/sessions.py | https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/aws_adapter/repository/sessions.py#L1272-L1291 | def delete_asset_content(self, asset_content_id=None):
"""Deletes content from an ``Asset``.
arg: asset_content_id (osid.id.Id): the ``Id`` of the
``AssetContent``
raise: NotFound - ``asset_content_id`` is not found
raise: NullArgument - ``asset_content_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
"""
asset_content = self._get_asset_content(asset_content_id)
if asset_content.has_url() and 'amazonaws.com' in asset_content.get_url():
# print "Still have to implement removing files from aws"
key = asset_content.get_url().split('amazonaws.com')[1]
remove_file(self._config_map, key)
self._provider_session.delete_asset_content(asset_content_id)
else:
self._provider_session.delete_asset_content(asset_content_id) | [
"def",
"delete_asset_content",
"(",
"self",
",",
"asset_content_id",
"=",
"None",
")",
":",
"asset_content",
"=",
"self",
".",
"_get_asset_content",
"(",
"asset_content_id",
")",
"if",
"asset_content",
".",
"has_url",
"(",
")",
"and",
"'amazonaws.com'",
"in",
"a... | Deletes content from an ``Asset``.
arg: asset_content_id (osid.id.Id): the ``Id`` of the
``AssetContent``
raise: NotFound - ``asset_content_id`` is not found
raise: NullArgument - ``asset_content_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.* | [
"Deletes",
"content",
"from",
"an",
"Asset",
"."
] | python | train |
portfors-lab/sparkle | sparkle/gui/dialogs/specgram_dlg.py | https://github.com/portfors-lab/sparkle/blob/5fad1cf2bec58ec6b15d91da20f6236a74826110/sparkle/gui/dialogs/specgram_dlg.py#L23-L34 | def values(self):
"""Gets the parameter values
:returns: dict of inputs:
| *'nfft'*: int -- length, in samples, of FFT chunks
| *'window'*: str -- name of window to apply to FFT chunks
| *'overlap'*: float -- percent overlap of windows
"""
self.vals['nfft'] = self.ui.nfftSpnbx.value()
self.vals['window'] = str(self.ui.windowCmbx.currentText()).lower()
self.vals['overlap'] = self.ui.overlapSpnbx.value()
return self.vals | [
"def",
"values",
"(",
"self",
")",
":",
"self",
".",
"vals",
"[",
"'nfft'",
"]",
"=",
"self",
".",
"ui",
".",
"nfftSpnbx",
".",
"value",
"(",
")",
"self",
".",
"vals",
"[",
"'window'",
"]",
"=",
"str",
"(",
"self",
".",
"ui",
".",
"windowCmbx",
... | Gets the parameter values
:returns: dict of inputs:
| *'nfft'*: int -- length, in samples, of FFT chunks
| *'window'*: str -- name of window to apply to FFT chunks
| *'overlap'*: float -- percent overlap of windows | [
"Gets",
"the",
"parameter",
"values"
] | python | train |
multiformats/py-multicodec | multicodec/multicodec.py | https://github.com/multiformats/py-multicodec/blob/23213b8b40b21e17e2e1844224498cbd8e359bfa/multicodec/multicodec.py#L50-L60 | def remove_prefix(bytes_):
"""
Removes prefix from a prefixed data
:param bytes bytes_: multicodec prefixed data bytes
:return: prefix removed data bytes
:rtype: bytes
"""
prefix_int = extract_prefix(bytes_)
prefix = varint.encode(prefix_int)
return bytes_[len(prefix):] | [
"def",
"remove_prefix",
"(",
"bytes_",
")",
":",
"prefix_int",
"=",
"extract_prefix",
"(",
"bytes_",
")",
"prefix",
"=",
"varint",
".",
"encode",
"(",
"prefix_int",
")",
"return",
"bytes_",
"[",
"len",
"(",
"prefix",
")",
":",
"]"
] | Removes prefix from a prefixed data
:param bytes bytes_: multicodec prefixed data bytes
:return: prefix removed data bytes
:rtype: bytes | [
"Removes",
"prefix",
"from",
"a",
"prefixed",
"data"
] | python | valid |
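The `remove_prefix` row above leans on the third-party `varint` package. The sketch below re-implements the unsigned-varint round trip in plain Python so the prefix handling can be run standalone; the encoder/decoder and the example code `0x55` are stand-ins for illustration, not the library's actual internals.

```python
# Minimal sketch of multicodec-style prefix handling. `varint_encode` /
# `extract_prefix` are hand-rolled stand-ins for the `varint` package.

def varint_encode(n):
    """Encode a non-negative int as an unsigned LEB128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def extract_prefix(data):
    """Decode the leading varint from a prefixed payload."""
    shift = result = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7
    raise ValueError("truncated varint")

def remove_prefix(data):
    """Strip the multicodec prefix, as in the dataset row above."""
    prefix = varint_encode(extract_prefix(data))
    return data[len(prefix):]

prefixed = varint_encode(0x55) + b"payload"
```

Multi-byte prefixes work the same way, since the stripped length is recomputed by re-encoding the decoded prefix.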
CZ-NIC/yangson | yangson/datamodel.py | https://github.com/CZ-NIC/yangson/blob/a4b9464041fa8b28f6020a420ababf18fddf5d4a/yangson/datamodel.py#L100-L110 | def from_raw(self, robj: RawObject) -> RootNode:
"""Create an instance node from a raw data tree.
Args:
robj: Dictionary representing a raw data tree.
Returns:
Root instance node.
"""
cooked = self.schema.from_raw(robj)
return RootNode(cooked, self.schema, cooked.timestamp) | [
"def",
"from_raw",
"(",
"self",
",",
"robj",
":",
"RawObject",
")",
"->",
"RootNode",
":",
"cooked",
"=",
"self",
".",
"schema",
".",
"from_raw",
"(",
"robj",
")",
"return",
"RootNode",
"(",
"cooked",
",",
"self",
".",
"schema",
",",
"cooked",
".",
"... | Create an instance node from a raw data tree.
Args:
robj: Dictionary representing a raw data tree.
Returns:
Root instance node. | [
"Create",
"an",
"instance",
"node",
"from",
"a",
"raw",
"data",
"tree",
"."
] | python | train |
speechinformaticslab/vfclust | vfclust/vfclust.py | https://github.com/speechinformaticslab/vfclust/blob/7ca733dea4782c828024765726cce65de095d33c/vfclust/vfclust.py#L213-L229 | def lemmatize(self):
"""Lemmatize all Units in self.unit_list.
Modifies:
- self.unit_list: converts the .text property into its lemmatized form.
This method lemmatizes all inflected variants of permissible words to
those words' respective canonical forms. This is done to ensure that
each instance of a permissible word will correspond to a term vector with
which semantic relatedness to other words' term vectors can be computed.
(Term vectors were derived from a corpus in which inflected words were
similarly lemmatized, meaning that , e.g., 'dogs' will not have a term
vector to use for semantic relatedness computation.)
"""
for unit in self.unit_list:
if lemmatizer.lemmatize(unit.text) in self.lemmas:
unit.text = lemmatizer.lemmatize(unit.text) | [
"def",
"lemmatize",
"(",
"self",
")",
":",
"for",
"unit",
"in",
"self",
".",
"unit_list",
":",
"if",
"lemmatizer",
".",
"lemmatize",
"(",
"unit",
".",
"text",
")",
"in",
"self",
".",
"lemmas",
":",
"unit",
".",
"text",
"=",
"lemmatizer",
".",
"lemmat... | Lemmatize all Units in self.unit_list.
Modifies:
- self.unit_list: converts the .text property into its lemmatized form.
This method lemmatizes all inflected variants of permissible words to
those words' respective canonical forms. This is done to ensure that
each instance of a permissible word will correspond to a term vector with
which semantic relatedness to other words' term vectors can be computed.
(Term vectors were derived from a corpus in which inflected words were
similarly lemmatized, meaning that , e.g., 'dogs' will not have a term
vector to use for semantic relatedness computation.) | [
"Lemmatize",
"all",
"Units",
"in",
"self",
".",
"unit_list",
"."
] | python | train |
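The `lemmatize` row above depends on an external lemmatizer and a permissible-lemma set. Below is a self-contained sketch of the same gated replacement, with both dependencies stubbed by hand-made dicts; the word/lemma pairs are invented for illustration.

```python
# Sketch of the gated lemmatization in the vfclust row above. The real
# code uses an NLTK-style lemmatizer and a permissible-word list; both
# are stubbed here with plain containers.

class Unit:
    def __init__(self, text):
        self.text = text

STUB_LEMMAS = {"dogs": "dog", "ran": "run", "geese": "goose"}

def lemmatize_units(units, allowed_lemmas):
    """Replace each unit's text with its lemma, but only when that
    lemma is in the permissible set (mirrors the dataset row)."""
    for unit in units:
        lemma = STUB_LEMMAS.get(unit.text, unit.text)
        if lemma in allowed_lemmas:
            unit.text = lemma
    return units

units = lemmatize_units([Unit("dogs"), Unit("ran"), Unit("geese")],
                        allowed_lemmas={"dog", "run"})
```

Note that a word whose lemma is not in the allowed set ("geese" here) is left untouched, exactly as in the original guard.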
has2k1/mizani | mizani/utils.py | https://github.com/has2k1/mizani/blob/312d0550ee0136fd1b0384829b33f3b2065f47c8/mizani/utils.py#L257-L277 | def same_log10_order_of_magnitude(x, delta=0.1):
"""
Return true if range is approximately in same order of magnitude
For example these sequences are in the same order of magnitude:
- [1, 8, 5] # [1, 10)
    - [35, 20, 80] # [10, 100)
- [232, 730] # [100, 1000)
Parameters
----------
x : array-like
Values in base 10. Must be size 2 and
        ``x[0] <= x[1]``.
delta : float
Fuzz factor for approximation. It is multiplicative.
"""
dmin = np.log10(np.min(x)*(1-delta))
dmax = np.log10(np.max(x)*(1+delta))
return np.floor(dmin) == np.floor(dmax) | [
"def",
"same_log10_order_of_magnitude",
"(",
"x",
",",
"delta",
"=",
"0.1",
")",
":",
"dmin",
"=",
"np",
".",
"log10",
"(",
"np",
".",
"min",
"(",
"x",
")",
"*",
"(",
"1",
"-",
"delta",
")",
")",
"dmax",
"=",
"np",
".",
"log10",
"(",
"np",
".",... | Return true if range is approximately in same order of magnitude
For example these sequences are in the same order of magnitude:
- [1, 8, 5] # [1, 10)
    - [35, 20, 80] # [10, 100)
- [232, 730] # [100, 1000)
Parameters
----------
x : array-like
Values in base 10. Must be size 2 and
    ``x[0] <= x[1]``.
delta : float
Fuzz factor for approximation. It is multiplicative. | [
"Return",
"true",
"if",
"range",
"is",
"approximately",
"in",
"same",
"order",
"of",
"magnitude"
] | python | valid |
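The mizani helper above is easy to exercise. This stdlib restatement (the original uses numpy, but `min`/`max`/`log10` behave identically on a list of positive floats) shows how the multiplicative `delta` fuzz can flip the answer near a decade boundary:

```python
import math

def same_log10_order_of_magnitude(x, delta=0.1):
    """Pure-stdlib restatement of the mizani helper above:
    fuzz the range endpoints, then compare their log10 decades."""
    dmin = math.log10(min(x) * (1 - delta))
    dmax = math.log10(max(x) * (1 + delta))
    return math.floor(dmin) == math.floor(dmax)
```

One subtlety: with the default `delta=0.1`, the fuzz pushes `min([1, 8, 5])` down to 0.9, so that docstring example actually spans two decades after fuzzing and returns False.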
Tanganelli/CoAPthon3 | coapthon/reverse_proxy/coap.py | https://github.com/Tanganelli/CoAPthon3/blob/985763bfe2eb9e00f49ec100c5b8877c2ed7d531/coapthon/reverse_proxy/coap.py#L165-L180 | def discover_remote_results(self, response, name):
"""
Create a new remote server resource for each valid discover response.
:param response: the response to the discovery request
:param name: the server name
"""
host, port = response.source
if response.code == defines.Codes.CONTENT.number:
resource = Resource('server', self, visible=True, observable=False, allow_children=True)
self.add_resource(name, resource)
self._mapping[name] = (host, port)
self.parse_core_link_format(response.payload, name, (host, port))
else:
logger.error("Server: " + response.source + " isn't valid.") | [
"def",
"discover_remote_results",
"(",
"self",
",",
"response",
",",
"name",
")",
":",
"host",
",",
"port",
"=",
"response",
".",
"source",
"if",
"response",
".",
"code",
"==",
"defines",
".",
"Codes",
".",
"CONTENT",
".",
"number",
":",
"resource",
"=",... | Create a new remote server resource for each valid discover response.
:param response: the response to the discovery request
:param name: the server name | [
"Create",
"a",
"new",
"remote",
"server",
"resource",
"for",
"each",
"valid",
"discover",
"response",
"."
] | python | train |
barryp/py-amqplib | extras/generate_skeleton_0_8.py | https://github.com/barryp/py-amqplib/blob/2b3a47de34b4712c111d0a55d7ff109dffc2a7b2/extras/generate_skeleton_0_8.py#L83-L104 | def _reindent(s, indent, reformat=True):
"""
Remove the existing indentation from each line of a chunk of
text, s, and then prefix each line with a new indent string.
Also removes trailing whitespace from each line, and leading and
trailing blank lines.
"""
s = textwrap.dedent(s)
s = s.split('\n')
s = [x.rstrip() for x in s]
while s and (not s[0]):
s = s[1:]
while s and (not s[-1]):
s = s[:-1]
if reformat:
s = '\n'.join(s)
s = textwrap.wrap(s, initial_indent=indent, subsequent_indent=indent)
else:
s = [indent + x for x in s]
return '\n'.join(s) + '\n' | [
"def",
"_reindent",
"(",
"s",
",",
"indent",
",",
"reformat",
"=",
"True",
")",
":",
"s",
"=",
"textwrap",
".",
"dedent",
"(",
"s",
")",
"s",
"=",
"s",
".",
"split",
"(",
"'\\n'",
")",
"s",
"=",
"[",
"x",
".",
"rstrip",
"(",
")",
"for",
"x",
... | Remove the existing indentation from each line of a chunk of
text, s, and then prefix each line with a new indent string.
Also removes trailing whitespace from each line, and leading and
trailing blank lines. | [
"Remove",
"the",
"existing",
"indentation",
"from",
"each",
"line",
"of",
"a",
"chunk",
"of",
"text",
"s",
"and",
"then",
"prefix",
"each",
"line",
"with",
"a",
"new",
"indent",
"string",
"."
] | python | train |
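`_reindent` above is pure stdlib and easy to run. Here is a lightly renamed copy with a quick demonstration of both the re-wrapping and line-by-line modes:

```python
import textwrap

def reindent(s, indent, reformat=True):
    """Stdlib restatement of `_reindent` from the py-amqplib row:
    dedent, trim blank edges, then re-prefix (or re-wrap) each line."""
    s = textwrap.dedent(s)
    lines = [line.rstrip() for line in s.split("\n")]
    # Drop leading and trailing blank lines.
    while lines and not lines[0]:
        lines = lines[1:]
    while lines and not lines[-1]:
        lines = lines[:-1]
    if reformat:
        text = "\n".join(lines)
        lines = textwrap.wrap(text, initial_indent=indent,
                              subsequent_indent=indent)
    else:
        lines = [indent + line for line in lines]
    return "\n".join(lines) + "\n"
```

With `reformat=False` the original line breaks survive; with `reformat=True` the text is reflowed to `textwrap`'s default width under the new indent.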
heuer/cablemap | cablemap.core/cablemap/core/c14n.py | https://github.com/heuer/cablemap/blob/42066c8fc2972d237a2c35578e14525aaf705f38/cablemap.core/cablemap/core/c14n.py#L174-L181 | def canonicalize_origin(origin):
"""\
"""
origin = origin.replace(u'USMISSION', u'') \
.replace(u'AMEMBASSY', u'') \
.replace(u'EMBASSY', u'').strip()
return _STATION_C14N.get(origin, origin) | [
"def",
"canonicalize_origin",
"(",
"origin",
")",
":",
"origin",
"=",
"origin",
".",
"replace",
"(",
"u'USMISSION'",
",",
"u''",
")",
".",
"replace",
"(",
"u'AMEMBASSY'",
",",
"u''",
")",
".",
"replace",
"(",
"u'EMBASSY'",
",",
"u''",
")",
".",
"strip",
... | \ | [
"\\"
] | python | train |
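The cablemap canonicalizer above reduces to three substring removals plus a table lookup. Below is a runnable sketch with a one-entry stand-in for the module's real `_STATION_C14N` table; the `PARIS → Paris` mapping is invented for illustration.

```python
# Sketch of `canonicalize_origin` from the cablemap row above.
_STATION_C14N = {"PARIS": "Paris"}  # hypothetical one-entry mapping

def canonicalize_origin(origin):
    # Strip the embassy/mission prefixes, then normalize via the table.
    origin = (origin.replace("USMISSION", "")
                    .replace("AMEMBASSY", "")
                    .replace("EMBASSY", "")
                    .strip())
    return _STATION_C14N.get(origin, origin)
```

Stations absent from the table pass through unchanged after prefix stripping.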
tropo/tropo-webapi-python | build/lib/tropo.py | https://github.com/tropo/tropo-webapi-python/blob/f87772644a6b45066a4c5218f0c1f6467b64ab3c/build/lib/tropo.py#L756-L768 | def message (self, say_obj, to, **options):
"""
A shortcut method to create a session, say something, and hang up, all in one step. This is particularly useful for sending out a quick SMS or IM.
Argument: "say_obj" is a Say object
Argument: "to" is a String
Argument: **options is a set of optional keyword arguments.
See https://www.tropo.com/docs/webapi/message
"""
if isinstance(say_obj, basestring):
say = Say(say_obj).obj
else:
say = say_obj
self._steps.append(Message(say, to, **options).obj) | [
"def",
"message",
"(",
"self",
",",
"say_obj",
",",
"to",
",",
"*",
"*",
"options",
")",
":",
"if",
"isinstance",
"(",
"say_obj",
",",
"basestring",
")",
":",
"say",
"=",
"Say",
"(",
"say_obj",
")",
".",
"obj",
"else",
":",
"say",
"=",
"say_obj",
... | A shortcut method to create a session, say something, and hang up, all in one step. This is particularly useful for sending out a quick SMS or IM.
Argument: "say_obj" is a Say object
Argument: "to" is a String
Argument: **options is a set of optional keyword arguments.
See https://www.tropo.com/docs/webapi/message | [
"A",
"shortcut",
"method",
"to",
"create",
"a",
"session",
"say",
"something",
"and",
"hang",
"up",
"all",
"in",
"one",
"step",
".",
"This",
"is",
"particularly",
"useful",
"for",
"sending",
"out",
"a",
"quick",
"SMS",
"or",
"IM",
".",
"Argument",
":",
... | python | train |
senaite/senaite.core | bika/lims/browser/attachment.py | https://github.com/senaite/senaite.core/blob/7602ce2ea2f9e81eb34e20ce17b98a3e70713f85/bika/lims/browser/attachment.py#L451-L458 | def is_analysis_attachment_allowed(self, analysis):
"""Checks if the analysis
"""
if analysis.getAttachmentOption() not in ["p", "r"]:
return False
if api.get_workflow_status_of(analysis) in ["retracted"]:
return False
return True | [
"def",
"is_analysis_attachment_allowed",
"(",
"self",
",",
"analysis",
")",
":",
"if",
"analysis",
".",
"getAttachmentOption",
"(",
")",
"not",
"in",
"[",
"\"p\"",
",",
"\"r\"",
"]",
":",
"return",
"False",
"if",
"api",
".",
"get_workflow_status_of",
"(",
"a... | Checks if the analysis | [
"Checks",
"if",
"the",
"analysis"
] | python | train |
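The senaite guard above can be exercised without Plone by stubbing the analysis object and the workflow lookup. Reading `'p'`/`'r'` as the attachment options that permit/require attachments is my inference, not confirmed by the row.

```python
# Stubbed restatement of `is_analysis_attachment_allowed`. The real
# code calls through `bika.lims.api`; both collaborators are faked.

class StubAnalysis:
    def __init__(self, attachment_option, state):
        self._opt = attachment_option
        self._state = state

    def getAttachmentOption(self):
        return self._opt

def get_workflow_status_of(analysis):
    # Stand-in for api.get_workflow_status_of
    return analysis._state

def is_analysis_attachment_allowed(analysis):
    # Only "p"/"r" options allow attachments (inferred meaning:
    # permitted/required); retracted analyses never do.
    if analysis.getAttachmentOption() not in ("p", "r"):
        return False
    if get_workflow_status_of(analysis) in ("retracted",):
        return False
    return True
```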
hamperbot/hamper | hamper/plugins/karma_adv.py | https://github.com/hamperbot/hamper/blob/6f841ec4dcc319fdd7bb3ca1f990e3b7a458771b/hamper/plugins/karma_adv.py#L101-L131 | def modify_karma(self, words):
"""
Given a regex object, look through the groups and modify karma
as necessary
"""
# 'user': karma
k = defaultdict(int)
if words:
# For loop through all of the group members
for word_tuple in words:
word = word_tuple[0]
ending = word[-1]
                # This will either end with a - or +; if it's a -, subtract 1
                # karma, if it ends with a +, add 1 karma
change = -1 if ending == '-' else 1
# Now strip the ++ or -- from the end
if '-' in ending:
word = word.rstrip('-')
elif '+' in ending:
word = word.rstrip('+')
# Check if surrounded by parens, if so, remove them
if word.startswith('(') and word.endswith(')'):
word = word[1:-1]
# Finally strip whitespace
word = word.strip()
# Add the user to the dict
if word:
k[word] += change
return k | [
"def",
"modify_karma",
"(",
"self",
",",
"words",
")",
":",
"# 'user': karma",
"k",
"=",
"defaultdict",
"(",
"int",
")",
"if",
"words",
":",
"# For loop through all of the group members",
"for",
"word_tuple",
"in",
"words",
":",
"word",
"=",
"word_tuple",
"[",
... | Given a regex object, look through the groups and modify karma
as necessary | [
"Given",
"a",
"regex",
"object",
"look",
"through",
"the",
"groups",
"and",
"modify",
"karma",
"as",
"necessary"
] | python | train |
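The hamper karma tally above expects `words` to be regex match tuples whose first element ends in `++` or `--`. The pattern below is my own approximation of what the plugin might match; the plugin's real regex is not shown in the row.

```python
# Sketch of the karma bookkeeping from the hamper row, paired with an
# assumed regex (single-word or parenthesized-phrase targets).
import re
from collections import defaultdict

KARMA_RE = re.compile(r"((?:\([^)]+\)|\S+)(?:\+\+|--))")  # assumed pattern

def modify_karma(words):
    k = defaultdict(int)
    for word_tuple in words:
        word = word_tuple[0]
        # Trailing -- subtracts, trailing ++ adds.
        change = -1 if word[-1] == "-" else 1
        word = word.rstrip("+-")
        # Parenthesized phrases lose their parens.
        if word.startswith("(") and word.endswith(")"):
            word = word[1:-1]
        word = word.strip()
        if word:
            k[word] += change
    return k

matches = KARMA_RE.findall("python++ (legacy perl)-- python++")
k = modify_karma([(m,) for m in matches])
```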
jjangsangy/py-translate | translate/languages.py | https://github.com/jjangsangy/py-translate/blob/fe6279b2ee353f42ce73333ffae104e646311956/translate/languages.py#L12-L32 | def translation_table(language, filepath='supported_translations.json'):
'''
Opens up file located under the etc directory containing language
codes and prints them out.
:param file: Path to location of json file
:type file: str
:return: language codes
:rtype: dict
'''
fullpath = abspath(join(dirname(__file__), 'etc', filepath))
if not isfile(fullpath):
raise IOError('File does not exist at {0}'.format(fullpath))
with open(fullpath, 'rt') as fp:
raw_data = json.load(fp).get(language, None)
assert(raw_data is not None)
return dict((code['language'], code['name']) for code in raw_data) | [
"def",
"translation_table",
"(",
"language",
",",
"filepath",
"=",
"'supported_translations.json'",
")",
":",
"fullpath",
"=",
"abspath",
"(",
"join",
"(",
"dirname",
"(",
"__file__",
")",
",",
"'etc'",
",",
"filepath",
")",
")",
"if",
"not",
"isfile",
"(",
... | Opens up file located under the etc directory containing language
codes and prints them out.
:param file: Path to location of json file
:type file: str
:return: language codes
:rtype: dict | [
"Opens",
"up",
"file",
"located",
"under",
"the",
"etc",
"directory",
"containing",
"language",
"codes",
"and",
"prints",
"them",
"out",
"."
] | python | test |
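The py-translate row boils down to one JSON load plus a dict comprehension keyed on each record's `language` code. Here is a sketch against a throwaway file; the two language records are invented.

```python
# Sketch of `translation_table` from the py-translate row, with the
# packaged etc/ path replaced by an explicit file argument.
import json
import os
import tempfile

def translation_table(language, fullpath):
    with open(fullpath, "rt") as fp:
        raw_data = json.load(fp).get(language)
    assert raw_data is not None
    # Fold [{"language": code, "name": name}, ...] into {code: name}.
    return {code["language"]: code["name"] for code in raw_data}

payload = {"en": [{"language": "fr", "name": "French"},
                  {"language": "de", "name": "German"}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fp:
    json.dump(payload, fp)
table = translation_table("en", fp.name)
os.unlink(fp.name)
```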
mseclab/PyJFuzz | pyjfuzz/core/pjf_decoretors.py | https://github.com/mseclab/PyJFuzz/blob/f777067076f62c9ab74ffea6e90fd54402b7a1b4/pyjfuzz/core/pjf_decoretors.py#L34-L41 | def mutate_object_decorate(self, func):
"""
Mutate a generic object based on type
"""
def mutate():
obj = func()
return self.Mutators.get_mutator(obj, type(obj))
return mutate | [
"def",
"mutate_object_decorate",
"(",
"self",
",",
"func",
")",
":",
"def",
"mutate",
"(",
")",
":",
"obj",
"=",
"func",
"(",
")",
"return",
"self",
".",
"Mutators",
".",
"get_mutator",
"(",
"obj",
",",
"type",
"(",
"obj",
")",
")",
"return",
"mutate... | Mutate a generic object based on type | [
"Mutate",
"a",
"generic",
"object",
"based",
"on",
"type"
] | python | test |
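The pyjfuzz decorator above is a small call-through-then-dispatch pattern. Here it is runnable, with invented stand-in mutators in place of the fuzzer's real `Mutators` registry:

```python
# Sketch of `mutate_object_decorate`: wrap a value-producing function
# so its result is routed to a mutator chosen by type. The MUTATORS
# table is a made-up stand-in.

MUTATORS = {
    int: lambda v: v + 1,
    str: lambda v: v.upper(),
}

def get_mutator(obj, obj_type):
    # Unknown types fall through unmutated.
    return MUTATORS.get(obj_type, lambda v: v)(obj)

def mutate_object_decorate(func):
    """Decorator shape from the dataset row: call through, then
    dispatch the returned object to a type-specific mutator."""
    def mutate():
        obj = func()
        return get_mutator(obj, type(obj))
    return mutate

@mutate_object_decorate
def produce_int():
    return 41

@mutate_object_decorate
def produce_str():
    return "fuzz"
```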
indico/indico-plugins | livesync/indico_livesync/models/queue.py | https://github.com/indico/indico-plugins/blob/fe50085cc63be9b8161b09539e662e7b04e4b38e/livesync/indico_livesync/models/queue.py#L213-L224 | def object(self):
"""Return the changed object."""
if self.type == EntryType.category:
return self.category
elif self.type == EntryType.event:
return self.event
elif self.type == EntryType.session:
return self.session
elif self.type == EntryType.contribution:
return self.contribution
elif self.type == EntryType.subcontribution:
return self.subcontribution | [
"def",
"object",
"(",
"self",
")",
":",
"if",
"self",
".",
"type",
"==",
"EntryType",
".",
"category",
":",
"return",
"self",
".",
"category",
"elif",
"self",
".",
"type",
"==",
"EntryType",
".",
"event",
":",
"return",
"self",
".",
"event",
"elif",
... | Return the changed object. | [
"Return",
"the",
"changed",
"object",
"."
] | python | train |
gwastro/pycbc | pycbc/tmpltbank/partitioned_bank.py | https://github.com/gwastro/pycbc/blob/7a64cdd104d263f1b6ea0b01e6841837d05a4cb3/pycbc/tmpltbank/partitioned_bank.py#L432-L495 | def add_point_by_chi_coords(self, chi_coords, mass1, mass2, spin1z, spin2z,
point_fupper=None, mus=None):
"""
Add a point to the partitioned template bank. The point_fupper and mus
kwargs must be provided for all templates if the vary fupper capability
is desired. This requires that the chi_coords, as well as mus and
point_fupper if needed, to be precalculated. If you just have the
masses and don't want to worry about translations see
add_point_by_masses, which will do translations and then call this.
Parameters
-----------
chi_coords : numpy.array
The position of the point in the chi coordinates.
mass1 : float
The heavier mass of the point to add.
mass2 : float
The lighter mass of the point to add.
spin1z: float
The [aligned] spin on the heavier body.
spin2z: float
            The [aligned] spin on the lighter body.
        point_fupper : float
            The upper frequency cutoff to use for this point. This value must
be one of the ones already calculated in the metric.
mus : numpy.array
A 2D array where idx 0 holds the upper frequency cutoff and idx 1
holds the coordinates in the [not covaried] mu parameter space for
each value of the upper frequency cutoff.
"""
chi1_bin, chi2_bin = self.find_point_bin(chi_coords)
self.bank[chi1_bin][chi2_bin].append(copy.deepcopy(chi_coords))
curr_bank = self.massbank[chi1_bin][chi2_bin]
if curr_bank['mass1s'].size:
curr_bank['mass1s'] = numpy.append(curr_bank['mass1s'],
numpy.array([mass1]))
curr_bank['mass2s'] = numpy.append(curr_bank['mass2s'],
numpy.array([mass2]))
curr_bank['spin1s'] = numpy.append(curr_bank['spin1s'],
numpy.array([spin1z]))
curr_bank['spin2s'] = numpy.append(curr_bank['spin2s'],
numpy.array([spin2z]))
if point_fupper is not None:
curr_bank['freqcuts'] = numpy.append(curr_bank['freqcuts'],
numpy.array([point_fupper]))
# Mus needs to append onto axis 0. See below for contents of
# the mus variable
if mus is not None:
curr_bank['mus'] = numpy.append(curr_bank['mus'],
numpy.array([mus[:,:]]), axis=0)
else:
curr_bank['mass1s'] = numpy.array([mass1])
curr_bank['mass2s'] = numpy.array([mass2])
curr_bank['spin1s'] = numpy.array([spin1z])
curr_bank['spin2s'] = numpy.array([spin2z])
if point_fupper is not None:
curr_bank['freqcuts'] = numpy.array([point_fupper])
# curr_bank['mus'] is a 3D array
# NOTE: mu relates to the non-covaried Cartesian coordinate system
# Axis 0: Template index
# Axis 1: Frequency cutoff index
# Axis 2: Mu coordinate index
if mus is not None:
curr_bank['mus'] = numpy.array([mus[:,:]]) | [
"def",
"add_point_by_chi_coords",
"(",
"self",
",",
"chi_coords",
",",
"mass1",
",",
"mass2",
",",
"spin1z",
",",
"spin2z",
",",
"point_fupper",
"=",
"None",
",",
"mus",
"=",
"None",
")",
":",
"chi1_bin",
",",
"chi2_bin",
"=",
"self",
".",
"find_point_bin"... | Add a point to the partitioned template bank. The point_fupper and mus
kwargs must be provided for all templates if the vary fupper capability
is desired. This requires that the chi_coords, as well as mus and
point_fupper if needed, to be precalculated. If you just have the
masses and don't want to worry about translations see
add_point_by_masses, which will do translations and then call this.
Parameters
-----------
chi_coords : numpy.array
The position of the point in the chi coordinates.
mass1 : float
The heavier mass of the point to add.
mass2 : float
The lighter mass of the point to add.
spin1z: float
The [aligned] spin on the heavier body.
spin2z: float
        The [aligned] spin on the lighter body.
    point_fupper : float
        The upper frequency cutoff to use for this point. This value must
be one of the ones already calculated in the metric.
mus : numpy.array
A 2D array where idx 0 holds the upper frequency cutoff and idx 1
holds the coordinates in the [not covaried] mu parameter space for
each value of the upper frequency cutoff. | [
"Add",
"a",
"point",
"to",
"the",
"partitioned",
"template",
"bank",
".",
"The",
"point_fupper",
"and",
"mus",
"kwargs",
"must",
"be",
"provided",
"for",
"all",
"templates",
"if",
"the",
"vary",
"fupper",
"capability",
"is",
"desired",
".",
"This",
"requires... | python | train |
gwastro/pycbc | pycbc/distributions/arbitrary.py | https://github.com/gwastro/pycbc/blob/7a64cdd104d263f1b6ea0b01e6841837d05a4cb3/pycbc/distributions/arbitrary.py#L250-L286 | def get_arrays_from_file(params_file, params=None):
"""Reads the values of one or more parameters from an hdf file and
returns as a dictionary.
Parameters
----------
params_file : str
The hdf file that contains the values of the parameters.
params : {None, list}
If provided, will just retrieve the given parameter names.
Returns
-------
dict
A dictionary of the parameters mapping `param_name -> array`.
"""
try:
f = h5py.File(params_file, 'r')
except:
raise ValueError('File not found.')
if params is not None:
if not isinstance(params, list):
params = [params]
for p in params:
if p not in f.keys():
raise ValueError('Parameter {} is not in {}'
.format(p, params_file))
else:
params = [str(k) for k in f.keys()]
params_values = {p:f[p][:] for p in params}
try:
bandwidth = f.attrs["bandwidth"]
except KeyError:
bandwidth = "scott"
f.close()
return params_values, bandwidth | [
"def",
"get_arrays_from_file",
"(",
"params_file",
",",
"params",
"=",
"None",
")",
":",
"try",
":",
"f",
"=",
"h5py",
".",
"File",
"(",
"params_file",
",",
"'r'",
")",
"except",
":",
"raise",
"ValueError",
"(",
"'File not found.'",
")",
"if",
"params",
... | Reads the values of one or more parameters from an hdf file and
returns as a dictionary.
Parameters
----------
params_file : str
The hdf file that contains the values of the parameters.
params : {None, list}
If provided, will just retrieve the given parameter names.
Returns
-------
dict
A dictionary of the parameters mapping `param_name -> array`. | [
"Reads",
"the",
"values",
"of",
"one",
"or",
"more",
"parameters",
"from",
"an",
"hdf",
"file",
"and",
"returns",
"as",
"a",
"dictionary",
"."
] | python | train |
glue-viz/glue-vispy-viewers | glue_vispy_viewers/extern/vispy/gloo/framebuffer.py | https://github.com/glue-viz/glue-vispy-viewers/blob/54a4351d98c1f90dfb1a557d1b447c1f57470eea/glue_vispy_viewers/extern/vispy/gloo/framebuffer.py#L123-L131 | def activate(self):
""" Activate/use this frame buffer.
"""
# Send command
self._glir.command('FRAMEBUFFER', self._id, True)
# Associate canvas now
canvas = get_current_canvas()
if canvas is not None:
canvas.context.glir.associate(self.glir) | [
"def",
"activate",
"(",
"self",
")",
":",
"# Send command",
"self",
".",
"_glir",
".",
"command",
"(",
"'FRAMEBUFFER'",
",",
"self",
".",
"_id",
",",
"True",
")",
"# Associate canvas now",
"canvas",
"=",
"get_current_canvas",
"(",
")",
"if",
"canvas",
"is",
... | Activate/use this frame buffer. | [
"Activate",
"/",
"use",
"this",
"frame",
"buffer",
"."
] | python | train |
3DLIRIOUS/MeshLabXML | meshlabxml/select.py | https://github.com/3DLIRIOUS/MeshLabXML/blob/177cce21e92baca500f56a932d66bd9a33257af8/meshlabxml/select.py#L6-L36 | def all(script, face=True, vert=True):
""" Select all the faces of the current mesh
Args:
script: the FilterScript object or script filename to write
the filter to.
        face (bool): If True the filter will select all the faces.
        vert (bool): If True the filter will select all the vertices.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Select All">\n',
' <Param name="allFaces" ',
'value="{}" '.format(str(face).lower()),
        'description="Select all Faces" ',
'type="RichBool" ',
'/>\n',
' <Param name="allVerts" ',
'value="{}" '.format(str(vert).lower()),
'description="Select all Vertices" ',
'type="RichBool" ',
'/>\n',
' </filter>\n'])
util.write_filter(script, filter_xml)
return None | [
"def",
"all",
"(",
"script",
",",
"face",
"=",
"True",
",",
"vert",
"=",
"True",
")",
":",
"filter_xml",
"=",
"''",
".",
"join",
"(",
"[",
"' <filter name=\"Select All\">\\n'",
",",
"' <Param name=\"allFaces\" '",
",",
"'value=\"{}\" '",
".",
"format",
"("... | Select all the faces of the current mesh
Args:
script: the FilterScript object or script filename to write
the filter to.
    face (bool): If True the filter will select all the faces.
    vert (bool): If True the filter will select all the vertices.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA | [
"Select",
"all",
"the",
"faces",
"of",
"the",
"current",
"mesh"
] | python | test |
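The meshlabxml filter above is plain string assembly. This sketch reproduces just the string-building step and checks the `str(flag).lower()` boolean formatting, skipping the `util.write_filter` I/O:

```python
# Sketch of the MLX <filter> element built by the meshlabxml row.

def select_all_filter(face=True, vert=True):
    return "".join([
        '  <filter name="Select All">\n',
        '    <Param name="allFaces" value="{}" '.format(str(face).lower()),
        'description="Select all Faces" type="RichBool" />\n',
        '    <Param name="allVerts" value="{}" '.format(str(vert).lower()),
        'description="Select all Vertices" type="RichBool" />\n',
        '  </filter>\n'])

xml = select_all_filter(face=True, vert=False)
```

MeshLab expects lowercase `true`/`false` in RichBool params, which is why the Python booleans are lowered before formatting.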
googleads/googleads-python-lib | googleads/adwords.py | https://github.com/googleads/googleads-python-lib/blob/aa3b1b474b0f9789ca55ca46f4b2b57aeae38874/googleads/adwords.py#L2050-L2060 | def ContainsAll(self, *values):
"""Sets the type of the WHERE clause as "contains all".
Args:
*values: The values to be used in the WHERE condition.
Returns:
The query builder that this WHERE builder links to.
"""
self._awql = self._CreateMultipleValuesCondition(values, 'CONTAINS_ALL')
return self._query_builder | [
"def",
"ContainsAll",
"(",
"self",
",",
"*",
"values",
")",
":",
"self",
".",
"_awql",
"=",
"self",
".",
"_CreateMultipleValuesCondition",
"(",
"values",
",",
"'CONTAINS_ALL'",
")",
"return",
"self",
".",
"_query_builder"
] | Sets the type of the WHERE clause as "contains all".
Args:
*values: The values to be used in the WHERE condition.
Returns:
The query builder that this WHERE builder links to. | [
"Sets",
"the",
"type",
"of",
"the",
"WHERE",
"clause",
"as",
"contains",
"all",
"."
] | python | train |
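The AWQL builder above hands the values off to a private helper. A plausible rendering of the resulting `CONTAINS_ALL` condition is sketched below; the bracketed, comma-separated value list is an assumption, and the real client library's quoting may differ.

```python
# Hypothetical rendering of a multi-value AWQL WHERE condition, in the
# spirit of the googleads row above.

def contains_all(field, values):
    rendered = ", ".join(str(v) for v in values)
    return "{} CONTAINS_ALL [{}]".format(field, rendered)

clause = contains_all("Labels", [1234, 5678])
```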
inveniosoftware-contrib/invenio-workflows | invenio_workflows/api.py | https://github.com/inveniosoftware-contrib/invenio-workflows/blob/9c09fd29509a3db975ac2aba337e6760d8cfd3c2/invenio_workflows/api.py#L98-L136 | def save(self, status=None, callback_pos=None, id_workflow=None):
"""Save object to persistent storage."""
if self.model is None:
raise WorkflowsMissingModel()
with db.session.begin_nested():
workflow_object_before_save.send(self)
self.model.modified = datetime.now()
if status is not None:
self.model.status = status
if id_workflow is not None:
workflow = Workflow.query.filter_by(uuid=id_workflow).one()
self.model.workflow = workflow
# Special handling of JSON fields to mark update
if self.model.callback_pos is None:
self.model.callback_pos = list()
elif callback_pos is not None:
self.model.callback_pos = callback_pos
flag_modified(self.model, 'callback_pos')
if self.model.data is None:
self.model.data = dict()
flag_modified(self.model, 'data')
if self.model.extra_data is None:
self.model.extra_data = dict()
flag_modified(self.model, 'extra_data')
db.session.merge(self.model)
if self.id is not None:
self.log.debug("Saved object: {id} at {callback_pos}".format(
id=self.model.id or "new",
callback_pos=self.model.callback_pos
))
workflow_object_after_save.send(self) | [
"def",
"save",
"(",
"self",
",",
"status",
"=",
"None",
",",
"callback_pos",
"=",
"None",
",",
"id_workflow",
"=",
"None",
")",
":",
"if",
"self",
".",
"model",
"is",
"None",
":",
"raise",
"WorkflowsMissingModel",
"(",
")",
"with",
"db",
".",
"session"... | Save object to persistent storage. | [
"Save",
"object",
"to",
"persistent",
"storage",
"."
] | python | train |
saltstack/salt | salt/runners/smartos_vmadm.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/runners/smartos_vmadm.py#L190-L247 | def list_vms(search=None, verbose=False):
'''
List all vms
search : string
filter vms, see the execution module
verbose : boolean
print additional information about the vm
CLI Example:
.. code-block:: bash
salt-run vmadm.list
salt-run vmadm.list search='type=KVM'
salt-run vmadm.list verbose=True
'''
ret = OrderedDict() if verbose else []
client = salt.client.get_local_client(__opts__['conf_file'])
try:
vmadm_args = {}
vmadm_args['order'] = 'uuid,alias,hostname,state,type,cpu_cap,vcpus,ram'
if search:
vmadm_args['search'] = search
for cn in client.cmd_iter('G@virtual:physical and G@os:smartos',
'vmadm.list', kwarg=vmadm_args,
tgt_type='compound'):
if not cn:
continue
node = next(six.iterkeys(cn))
if not isinstance(cn[node], dict) or \
'ret' not in cn[node] or \
not isinstance(cn[node]['ret'], dict):
continue
for vm in cn[node]['ret']:
vmcfg = cn[node]['ret'][vm]
if verbose:
ret[vm] = OrderedDict()
ret[vm]['hostname'] = vmcfg['hostname']
ret[vm]['alias'] = vmcfg['alias']
ret[vm]['computenode'] = node
ret[vm]['state'] = vmcfg['state']
ret[vm]['resources'] = OrderedDict()
ret[vm]['resources']['memory'] = vmcfg['ram']
if vmcfg['type'] == 'KVM':
ret[vm]['resources']['cpu'] = "{0:.2f}".format(int(vmcfg['vcpus']))
else:
if vmcfg['cpu_cap'] != '':
ret[vm]['resources']['cpu'] = "{0:.2f}".format(int(vmcfg['cpu_cap'])/100)
else:
ret.append(vm)
except SaltClientError as client_error:
return "{0}".format(client_error)
if not verbose:
ret = sorted(ret)
return ret | [
"def",
"list_vms",
"(",
"search",
"=",
"None",
",",
"verbose",
"=",
"False",
")",
":",
"ret",
"=",
"OrderedDict",
"(",
")",
"if",
"verbose",
"else",
"[",
"]",
"client",
"=",
"salt",
".",
"client",
".",
"get_local_client",
"(",
"__opts__",
"[",
"'conf_f... | List all vms
search : string
filter vms, see the execution module
verbose : boolean
print additional information about the vm
CLI Example:
.. code-block:: bash
salt-run vmadm.list
salt-run vmadm.list search='type=KVM'
salt-run vmadm.list verbose=True | [
"List",
"all",
"vms"
] | python | train |
singularityhub/sregistry-cli | sregistry/main/google_drive/query.py | https://github.com/singularityhub/sregistry-cli/blob/abc96140a1d15b5e96d83432e1e0e1f4f8f36331/sregistry/main/google_drive/query.py#L30-L63 | def list_containers(self):
'''return a list of containers. Since Google Drive definitely has other
kinds of files, we look for containers in a special sregistry folder,
(meaning the parent folder is sregistry) and with properties of type
as container.
'''
# Get or create the base
folder = self._get_or_create_folder(self._base)
next_page = None
containers = []
# Parse the base for all containers, possibly over multiple pages
while True:
query = "mimeType='application/octet-stream'" # ensures container
query += " and properties has { key='type' and value='container' }"
query += " and '%s' in parents" %folder['id'] # ensures in parent folder
response = self._service.files().list(q=query,
spaces='drive',
fields='nextPageToken, files(id, name, properties)',
pageToken=next_page).execute()
containers += response.get('files', [])
# If there is a next page, keep going!
next_page = response.get('nextPageToken')
if not next_page:
break
if len(containers) == 0:
bot.info("No containers found, based on properties type:container")
sys.exit(1)
return containers | [
"def",
"list_containers",
"(",
"self",
")",
":",
"# Get or create the base",
"folder",
"=",
"self",
".",
"_get_or_create_folder",
"(",
"self",
".",
"_base",
")",
"next_page",
"=",
"None",
"containers",
"=",
"[",
"]",
"# Parse the base for all containers, possibly over... | return a list of containers. Since Google Drive definitely has other
kinds of files, we look for containers in a special sregistry folder,
(meaning the parent folder is sregistry) and with properties of type
as container. | [
"return",
"a",
"list",
"of",
"containers",
".",
"Since",
"Google",
"Drive",
"definitely",
"has",
"other",
"kinds",
"of",
"files",
"we",
"look",
"for",
"containers",
"in",
"a",
"special",
"sregistry",
"folder",
"(",
"meaning",
"the",
"parent",
"folder",
"is",... | python | test |
rigetti/grove | grove/alpha/jordan_gradient/jordan_gradient.py | https://github.com/rigetti/grove/blob/dc6bf6ec63e8c435fe52b1e00f707d5ce4cdb9b3/grove/alpha/jordan_gradient/jordan_gradient.py#L10-L25 | def gradient_program(f_h: float, precision: int) -> Program:
"""
Gradient estimation via Jordan's algorithm (10.1103/PhysRevLett.95.050501).
:param f_h: Oracle output at perturbation h.
:param precision: Bit precision of gradient.
:return: Quil program to estimate gradient of f.
"""
# encode oracle values into phase
phase_factor = np.exp(1.0j * 2 * np.pi * abs(f_h))
U = np.array([[phase_factor, 0],
[0, phase_factor]])
p_gradient = phase_estimation(U, precision)
return p_gradient | [
"def",
"gradient_program",
"(",
"f_h",
":",
"float",
",",
"precision",
":",
"int",
")",
"->",
"Program",
":",
"# encode oracle values into phase",
"phase_factor",
"=",
"np",
".",
"exp",
"(",
"1.0j",
"*",
"2",
"*",
"np",
".",
"pi",
"*",
"abs",
"(",
"f_h",... | Gradient estimation via Jordan's algorithm (10.1103/PhysRevLett.95.050501).
:param f_h: Oracle output at perturbation h.
:param precision: Bit precision of gradient.
:return: Quil program to estimate gradient of f. | [
"Gradient",
"estimation",
"via",
"Jordan",
"s",
"algorithm",
"(",
"10",
".",
"1103",
"/",
"PhysRevLett",
".",
"95",
".",
"050501",
")",
"."
] | python | train |
jtpaasch/simplygithub | simplygithub/branches.py | https://github.com/jtpaasch/simplygithub/blob/b77506275ec276ce90879bf1ea9299a79448b903/simplygithub/branches.py#L72-L95 | def create_branch(profile, name, branch_off):
"""Create a branch.
Args:
profile
A profile generated from ``simplygithub.authentication.profile``.
Such profiles tell this module (i) the ``repo`` to connect to,
and (ii) the ``token`` to connect with.
name
The name of the new branch.
branch_off
The name of a branch to create the new branch off of.
Returns:
A dict with data about the new branch.
"""
branch_off_sha = get_branch_sha(profile, branch_off)
ref = "heads/" + name
data = refs.create_ref(profile, ref, branch_off_sha)
return data | [
"def",
"create_branch",
"(",
"profile",
",",
"name",
",",
"branch_off",
")",
":",
"branch_off_sha",
"=",
"get_branch_sha",
"(",
"profile",
",",
"branch_off",
")",
"ref",
"=",
"\"heads/\"",
"+",
"name",
"data",
"=",
"refs",
".",
"create_ref",
"(",
"profile",
... | Create a branch.
Args:
profile
A profile generated from ``simplygithub.authentication.profile``.
Such profiles tell this module (i) the ``repo`` to connect to,
and (ii) the ``token`` to connect with.
name
The name of the new branch.
branch_off
The name of a branch to create the new branch off of.
Returns:
A dict with data about the new branch. | [
"Create",
"a",
"branch",
"."
] | python | train |
pypa/pipenv | pipenv/vendor/pipreqs/pipreqs.py | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/vendor/pipreqs/pipreqs.py#L307-L330 | def clean(file_, imports):
"""Remove modules that aren't imported in project from file."""
modules_not_imported = compare_modules(file_, imports)
re_remove = re.compile("|".join(modules_not_imported))
to_write = []
try:
f = open_func(file_, "r+")
except OSError:
logging.error("Failed on file: {}".format(file_))
raise
else:
for i in f.readlines():
if re_remove.match(i) is None:
to_write.append(i)
f.seek(0)
f.truncate()
for i in to_write:
f.write(i)
finally:
f.close()
logging.info("Successfully cleaned up requirements in " + file_) | [
"def",
"clean",
"(",
"file_",
",",
"imports",
")",
":",
"modules_not_imported",
"=",
"compare_modules",
"(",
"file_",
",",
"imports",
")",
"re_remove",
"=",
"re",
".",
"compile",
"(",
"\"|\"",
".",
"join",
"(",
"modules_not_imported",
")",
")",
"to_write",
... | Remove modules that aren't imported in project from file. | [
"Remove",
"modules",
"that",
"aren",
"t",
"imported",
"in",
"project",
"from",
"file",
"."
] | python | train |
saltstack/salt | salt/utils/gitfs.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/utils/gitfs.py#L2312-L2325 | def clear_cache(self):
'''
Completely clear cache
'''
errors = []
for rdir in (self.cache_root, self.file_list_cachedir):
if os.path.exists(rdir):
try:
shutil.rmtree(rdir)
except OSError as exc:
errors.append(
'Unable to delete {0}: {1}'.format(rdir, exc)
)
return errors | [
"def",
"clear_cache",
"(",
"self",
")",
":",
"errors",
"=",
"[",
"]",
"for",
"rdir",
"in",
"(",
"self",
".",
"cache_root",
",",
"self",
".",
"file_list_cachedir",
")",
":",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"rdir",
")",
":",
"try",
":",
... | Completely clear cache | [
"Completely",
"clear",
"cache"
] | python | train |
facebook/pyre-check | sapp/sapp/analysis_output.py | https://github.com/facebook/pyre-check/blob/4a9604d943d28ef20238505a51acfb1f666328d7/sapp/sapp/analysis_output.py#L121-L127 | def file_names(self) -> Iterable[str]:
"""Generates all file names that are used to generate file_handles.
"""
if self.is_sharded():
yield from ShardedFile(self.filename_spec).get_filenames()
elif self.filename_spec:
yield self.filename_spec | [
"def",
"file_names",
"(",
"self",
")",
"->",
"Iterable",
"[",
"str",
"]",
":",
"if",
"self",
".",
"is_sharded",
"(",
")",
":",
"yield",
"from",
"ShardedFile",
"(",
"self",
".",
"filename_spec",
")",
".",
"get_filenames",
"(",
")",
"elif",
"self",
".",
... | Generates all file names that are used to generate file_handles. | [
"Generates",
"all",
"file",
"names",
"that",
"are",
"used",
"to",
"generate",
"file_handles",
"."
] | python | train |
macbre/data-flow-graph | sources/elasticsearch/logs2dataflow.py | https://github.com/macbre/data-flow-graph/blob/16164c3860f3defe3354c19b8536ed01b3bfdb61/sources/elasticsearch/logs2dataflow.py#L45-L52 | def format_timestamp(ts):
"""
Format the UTC timestamp for Elasticsearch
eg. 2014-07-09T08:37:18.000Z
@see https://docs.python.org/2/library/time.html#time.strftime
"""
tz_info = tz.tzutc()
return datetime.fromtimestamp(ts, tz=tz_info).strftime("%Y-%m-%dT%H:%M:%S.000Z") | [
"def",
"format_timestamp",
"(",
"ts",
")",
":",
"tz_info",
"=",
"tz",
".",
"tzutc",
"(",
")",
"return",
"datetime",
".",
"fromtimestamp",
"(",
"ts",
",",
"tz",
"=",
"tz_info",
")",
".",
"strftime",
"(",
"\"%Y-%m-%dT%H:%M:%S.000Z\"",
")"
] | Format the UTC timestamp for Elasticsearch
eg. 2014-07-09T08:37:18.000Z
@see https://docs.python.org/2/library/time.html#time.strftime | [
"Format",
"the",
"UTC",
"timestamp",
"for",
"Elasticsearch",
"eg",
".",
"2014",
"-",
"07",
"-",
"09T08",
":",
"37",
":",
"18",
".",
"000Z"
] | python | train |
gitpython-developers/GitPython | git/objects/tree.py | https://github.com/gitpython-developers/GitPython/blob/1f66e25c25cde2423917ee18c4704fff83b837d1/git/objects/tree.py#L214-L244 | def join(self, file):
"""Find the named object in this tree's contents
:return: ``git.Blob`` or ``git.Tree`` or ``git.Submodule``
:raise KeyError: if given file or tree does not exist in tree"""
msg = "Blob or Tree named %r not found"
if '/' in file:
tree = self
item = self
tokens = file.split('/')
for i, token in enumerate(tokens):
item = tree[token]
if item.type == 'tree':
tree = item
else:
# safety assertion - blobs are at the end of the path
if i != len(tokens) - 1:
raise KeyError(msg % file)
return item
# END handle item type
# END for each token of split path
if item == self:
raise KeyError(msg % file)
return item
else:
for info in self._cache:
if info[2] == file: # [2] == name
return self._map_id_to_type[info[1] >> 12](self.repo, info[0], info[1],
join_path(self.path, info[2]))
# END for each obj
raise KeyError(msg % file) | [
"def",
"join",
"(",
"self",
",",
"file",
")",
":",
"msg",
"=",
"\"Blob or Tree named %r not found\"",
"if",
"'/'",
"in",
"file",
":",
"tree",
"=",
"self",
"item",
"=",
"self",
"tokens",
"=",
"file",
".",
"split",
"(",
"'/'",
")",
"for",
"i",
",",
"to... | Find the named object in this tree's contents
:return: ``git.Blob`` or ``git.Tree`` or ``git.Submodule``
:raise KeyError: if given file or tree does not exist in tree | [
"Find",
"the",
"named",
"object",
"in",
"this",
"tree",
"s",
"contents",
":",
"return",
":",
"git",
".",
"Blob",
"or",
"git",
".",
"Tree",
"or",
"git",
".",
"Submodule"
] | python | train |
pingali/dgit | dgitcore/helper.py | https://github.com/pingali/dgit/blob/ecde01f40b98f0719dbcfb54452270ed2f86686d/dgitcore/helper.py#L232-L244 | def log_repo_action(func):
"""
Log all repo actions to .dgit/log.json
"""
def _inner(*args, **kwargs):
result = func(*args, **kwargs)
log_action(func, result, *args, **kwargs)
return result
_inner.__name__ = func.__name__
_inner.__doc__ = func.__doc__
return _inner | [
"def",
"log_repo_action",
"(",
"func",
")",
":",
"def",
"_inner",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"result",
"=",
"func",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"log_action",
"(",
"func",
",",
"result",
",",
"*",
"arg... | Log all repo actions to .dgit/log.json | [
"Log",
"all",
"repo",
"actions",
"to",
".",
"dgit",
"/",
"log",
".",
"json"
] | python | valid |
rfosterslo/wagtailplus | wagtailplus/wagtaillinks/views/links.py | https://github.com/rfosterslo/wagtailplus/blob/22cac857175d8a6f77e470751831c14a92ccd768/wagtailplus/wagtaillinks/views/links.py#L57-L78 | def post(self, request, *args, **kwargs):
"""
Returns POST response.
:param request: the request instance.
:rtype: django.http.HttpResponse.
"""
form = None
link_type = int(request.POST.get('link_type', 0))
if link_type == Link.LINK_TYPE_EMAIL:
form = EmailLinkForm(**self.get_form_kwargs())
elif link_type == Link.LINK_TYPE_EXTERNAL:
form = ExternalLinkForm(**self.get_form_kwargs())
if form:
if form.is_valid():
return self.form_valid(form)
else:
return self.form_invalid(form)
else:
raise Http404() | [
"def",
"post",
"(",
"self",
",",
"request",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"form",
"=",
"None",
"link_type",
"=",
"int",
"(",
"request",
".",
"POST",
".",
"get",
"(",
"'link_type'",
",",
"0",
")",
")",
"if",
"link_type",
"==... | Returns POST response.
:param request: the request instance.
:rtype: django.http.HttpResponse. | [
"Returns",
"POST",
"response",
"."
] | python | train |
vertexproject/synapse | synapse/common.py | https://github.com/vertexproject/synapse/blob/22e67c5a8f6d7caddbcf34b39ab1bd2d6c4a6e0b/synapse/common.py#L255-L270 | def listdir(*paths, glob=None):
'''
List the (optionally glob filtered) full paths from a dir.
Args:
*paths ([str,...]): A list of path elements
glob (str): An optional fnmatch glob str
'''
path = genpath(*paths)
names = os.listdir(path)
if glob is not None:
names = fnmatch.filter(names, glob)
retn = [os.path.join(path, name) for name in names]
return retn | [
"def",
"listdir",
"(",
"*",
"paths",
",",
"glob",
"=",
"None",
")",
":",
"path",
"=",
"genpath",
"(",
"*",
"paths",
")",
"names",
"=",
"os",
".",
"listdir",
"(",
"path",
")",
"if",
"glob",
"is",
"not",
"None",
":",
"names",
"=",
"fnmatch",
".",
... | List the (optionally glob filtered) full paths from a dir.
Args:
*paths ([str,...]): A list of path elements
glob (str): An optional fnmatch glob str | [
"List",
"the",
"(",
"optionally",
"glob",
"filtered",
")",
"full",
"paths",
"from",
"a",
"dir",
"."
] | python | train |
airspeed-velocity/asv | asv/feed.py | https://github.com/airspeed-velocity/asv/blob/d23bb8b74e8adacbfa3cf5724bda55fb39d56ba6/asv/feed.py#L201-L217 | def _get_id(owner, date, content):
"""
Generate a unique Atom id for the given content
"""
h = hashlib.sha256()
# Hash still contains the original project url, keep as is
h.update("github.com/spacetelescope/asv".encode('utf-8'))
for x in content:
if x is None:
h.update(",".encode('utf-8'))
else:
h.update(x.encode('utf-8'))
h.update(",".encode('utf-8'))
if date is None:
date = datetime.datetime(1970, 1, 1)
return "tag:{0},{1}:/{2}".format(owner, date.strftime('%Y-%m-%d'), h.hexdigest()) | [
"def",
"_get_id",
"(",
"owner",
",",
"date",
",",
"content",
")",
":",
"h",
"=",
"hashlib",
".",
"sha256",
"(",
")",
"# Hash still contains the original project url, keep as is",
"h",
".",
"update",
"(",
"\"github.com/spacetelescope/asv\"",
".",
"encode",
"(",
'u... | Generate a unique Atom id for the given content | [
"Generate",
"a",
"unique",
"Atom",
"id",
"for",
"the",
"given",
"content"
] | python | train |
PrefPy/prefpy | prefpy/mov.py | https://github.com/PrefPy/prefpy/blob/f395ba3782f05684fa5de0cece387a6da9391d02/prefpy/mov.py#L306-L346 | def AppMoVMaximin(profile):
"""
Returns an integer that is equal to the margin of victory of the election profile, that is,
the smallest number k such that changing k votes can change the winners.
:ivar Profile profile: A Profile object that represents an election profile.
"""
# Currently, we expect the profile to contain complete ordering over candidates.
elecType = profile.getElecType()
if elecType != "soc" and elecType != "toc":
print("ERROR: unsupported profile type")
exit()
# Initialization
n = profile.numVoters
m = profile.numCands
# Compute the original winner d
wmgMap = profile.getWmg()
# Initialize each maximin score as infinity.
maximinscores = {}
for cand in wmgMap.keys():
maximinscores[cand] = float("inf")
# For each pair of candidates, calculate the number of times each beats the other.
for cand1, cand2 in itertools.combinations(wmgMap.keys(), 2):
if cand2 in wmgMap[cand1].keys():
maximinscores[cand1] = min(maximinscores[cand1], wmgMap[cand1][cand2])
maximinscores[cand2] = min(maximinscores[cand2], wmgMap[cand2][cand1])
d = max(maximinscores.items(), key=lambda x: x[1])[0]
#Compute c* = argmax_c maximinscores(c)
scores_without_d = maximinscores.copy()
del scores_without_d[d]
c_star = max(scores_without_d.items(), key=lambda x: x[1])[0]
return (maximinscores[d] - maximinscores[c_star])/2 | [
"def",
"AppMoVMaximin",
"(",
"profile",
")",
":",
"# Currently, we expect the profile to contain complete ordering over candidates.",
"elecType",
"=",
"profile",
".",
"getElecType",
"(",
")",
"if",
"elecType",
"!=",
"\"soc\"",
"and",
"elecType",
"!=",
"\"toc\"",
":",
"p... | Returns an integer that is equal to the margin of victory of the election profile, that is,
the smallest number k such that changing k votes can change the winners.
:ivar Profile profile: A Profile object that represents an election profile. | [
"Returns",
"an",
"integer",
"that",
"is",
"equal",
"to",
"the",
"margin",
"of",
"victory",
"of",
"the",
"election",
"profile",
"that",
"is",
"the",
"smallest",
"number",
"k",
"such",
"that",
"changing",
"k",
"votes",
"can",
"change",
"the",
"winners",
"."
... | python | train |
rueckstiess/mtools | mtools/util/logevent.py | https://github.com/rueckstiess/mtools/blob/a6a22910c3569c0c8a3908660ca218a4557e4249/mtools/util/logevent.py#L542-L552 | def nreturned(self):
"""
Extract counters if available (lazy).
Looks for nreturned, nReturned, or nMatched counter.
"""
if not self._counters_calculated:
self._counters_calculated = True
self._extract_counters()
return self._nreturned | [
"def",
"nreturned",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"_counters_calculated",
":",
"self",
".",
"_counters_calculated",
"=",
"True",
"self",
".",
"_extract_counters",
"(",
")",
"return",
"self",
".",
"_nreturned"
] | Extract counters if available (lazy).
Looks for nreturned, nReturned, or nMatched counter. | [
"Extract",
"counters",
"if",
"available",
"(",
"lazy",
")",
"."
] | python | train |
openstack/proliantutils | proliantutils/ilo/ris.py | https://github.com/openstack/proliantutils/blob/86ef3b47b4eca97c221577e3570b0240d6a25f22/proliantutils/ilo/ris.py#L1154-L1196 | def reset_bios_to_default(self):
"""Resets the BIOS settings to default values.
:raises: IloError, on an error from iLO.
:raises: IloCommandNotSupportedError, if the command is not supported
on the server.
"""
# Check if the BIOS resource exists.
headers_bios, bios_uri, bios_settings = self._check_bios_resource()
# Get the BaseConfig resource.
try:
base_config_uri = bios_settings['links']['BaseConfigs']['href']
except KeyError:
msg = ("BaseConfigs resource not found. Couldn't apply the BIOS "
"Settings.")
raise exception.IloCommandNotSupportedError(msg)
# Check if BIOS resource supports patch, else get the settings
if not self._operation_allowed(headers_bios, 'PATCH'):
headers, bios_uri, _ = self._get_bios_settings_resource(
bios_settings)
self._validate_if_patch_supported(headers, bios_uri)
status, headers, config = self._rest_get(base_config_uri)
if status != 200:
msg = self._get_extended_error(config)
raise exception.IloError(msg)
new_bios_settings = {}
for cfg in config['BaseConfigs']:
default_settings = cfg.get('default', None)
if default_settings is not None:
new_bios_settings = default_settings
break
else:
msg = ("Default Settings not found in 'BaseConfigs' resource.")
raise exception.IloCommandNotSupportedError(msg)
request_headers = self._get_bios_hash_password(self.bios_password)
status, headers, response = self._rest_patch(bios_uri, request_headers,
new_bios_settings)
if status >= 300:
msg = self._get_extended_error(response)
raise exception.IloError(msg) | [
"def",
"reset_bios_to_default",
"(",
"self",
")",
":",
"# Check if the BIOS resource exists.",
"headers_bios",
",",
"bios_uri",
",",
"bios_settings",
"=",
"self",
".",
"_check_bios_resource",
"(",
")",
"# Get the BaseConfig resource.",
"try",
":",
"base_config_uri",
"=... | Resets the BIOS settings to default values.
:raises: IloError, on an error from iLO.
:raises: IloCommandNotSupportedError, if the command is not supported
on the server. | [
"Resets",
"the",
"BIOS",
"settings",
"to",
"default",
"values",
"."
] | python | train |
brocade/pynos | pynos/versions/ver_6/ver_6_0_1/yang/brocade_port_profile.py | https://github.com/brocade/pynos/blob/bd8a34e98f322de3fc06750827d8bbc3a0c00380/pynos/versions/ver_6/ver_6_0_1/yang/brocade_port_profile.py#L529-L546 | def port_profile_qos_profile_qos_flowcontrol_pfc_pfc_tx(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
port_profile = ET.SubElement(config, "port-profile", xmlns="urn:brocade.com:mgmt:brocade-port-profile")
name_key = ET.SubElement(port_profile, "name")
name_key.text = kwargs.pop('name')
qos_profile = ET.SubElement(port_profile, "qos-profile")
qos = ET.SubElement(qos_profile, "qos")
flowcontrol = ET.SubElement(qos, "flowcontrol")
pfc = ET.SubElement(flowcontrol, "pfc")
pfc_cos_key = ET.SubElement(pfc, "pfc-cos")
pfc_cos_key.text = kwargs.pop('pfc_cos')
pfc_tx = ET.SubElement(pfc, "pfc-tx")
pfc_tx.text = kwargs.pop('pfc_tx')
callback = kwargs.pop('callback', self._callback)
return callback(config) | [
"def",
"port_profile_qos_profile_qos_flowcontrol_pfc_pfc_tx",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"config",
"=",
"ET",
".",
"Element",
"(",
"\"config\"",
")",
"port_profile",
"=",
"ET",
".",
"SubElement",
"(",
"config",
",",
"\"port-profile\"",
",",
... | Auto Generated Code | [
"Auto",
"Generated",
"Code"
] | python | train |
CalebBell/thermo | thermo/chemical.py | https://github.com/CalebBell/thermo/blob/3857ed023a3e64fd3039a32d53576c24990ef1c3/thermo/chemical.py#L2505-L2520 | def Vm(self):
r'''Molar volume of the chemical at its current phase and
temperature and pressure, in units of [m^3/mol].
Utilizes the object oriented interfaces
:obj:`thermo.volume.VolumeSolid`,
:obj:`thermo.volume.VolumeLiquid`,
and :obj:`thermo.volume.VolumeGas` to perform the
actual calculation of each property.
Examples
--------
>>> Chemical('ethylbenzene', T=550, P=3E6).Vm
0.00017758024401627633
'''
return phase_select_property(phase=self.phase, s=self.Vms, l=self.Vml, g=self.Vmg) | [
"def",
"Vm",
"(",
"self",
")",
":",
"return",
"phase_select_property",
"(",
"phase",
"=",
"self",
".",
"phase",
",",
"s",
"=",
"self",
".",
"Vms",
",",
"l",
"=",
"self",
".",
"Vml",
",",
"g",
"=",
"self",
".",
"Vmg",
")"
] | r'''Molar volume of the chemical at its current phase and
temperature and pressure, in units of [m^3/mol].
Utilizes the object oriented interfaces
:obj:`thermo.volume.VolumeSolid`,
:obj:`thermo.volume.VolumeLiquid`,
and :obj:`thermo.volume.VolumeGas` to perform the
actual calculation of each property.
Examples
--------
>>> Chemical('ethylbenzene', T=550, P=3E6).Vm
0.00017758024401627633 | [
"r",
"Molar",
"volume",
"of",
"the",
"chemical",
"at",
"its",
"current",
"phase",
"and",
"temperature",
"and",
"pressure",
"in",
"units",
"of",
"[",
"m^3",
"/",
"mol",
"]",
"."
] | python | valid |
FlaskGuys/Flask-Imagine-AzureAdapter | flask_imagine_azure_adapter/__init__.py | https://github.com/FlaskGuys/Flask-Imagine-AzureAdapter/blob/1ca83fb040602ba1be983a7d1cfd052323a86f1a/flask_imagine_azure_adapter/__init__.py#L111-L127 | def remove_cached_item(self, path):
"""
Remove cached resource item
:param path: str
:return: bool
"""
item_path = '%s/%s' % (
self.cache_folder,
path.strip('/')
)
self.blob_service.delete_blob(self.container_name, item_path)
while self.blob_service.exists(self.container_name, item_path):
time.sleep(0.5)
return True | [
"def",
"remove_cached_item",
"(",
"self",
",",
"path",
")",
":",
"item_path",
"=",
"'%s/%s'",
"%",
"(",
"self",
".",
"cache_folder",
",",
"path",
".",
"strip",
"(",
"'/'",
")",
")",
"self",
".",
"blob_service",
".",
"delete_blob",
"(",
"self",
".",
"co... | Remove cached resource item
:param path: str
:return: bool | [
"Remove",
"cached",
"resource",
"item",
":",
"param",
"path",
":",
"str",
":",
"return",
":",
"PIL",
".",
"Image"
] | python | train |
src-d/jgit-spark-connector | python/sourced/engine/engine.py | https://github.com/src-d/jgit-spark-connector/blob/79d05a0bcf0da435685d6118828a8884e2fe4b94/python/sourced/engine/engine.py#L359-L369 | def master_ref(self):
"""
Filters the current DataFrame to only contain those rows whose reference is master.
>>> master_df = refs_df.master_ref
:rtype: ReferencesDataFrame
"""
return ReferencesDataFrame(self._engine_dataframe.getMaster(),
self._session, self._implicits)
return self.ref('refs/heads/master') | [
"def",
"master_ref",
"(",
"self",
")",
":",
"return",
"ReferencesDataFrame",
"(",
"self",
".",
"_engine_dataframe",
".",
"getMaster",
"(",
")",
",",
"self",
".",
"_session",
",",
"self",
".",
"_implicits",
")",
"return",
"self",
".",
"ref",
"(",
"'refs/hea... | Filters the current DataFrame to only contain those rows whose reference is master.
>>> master_df = refs_df.master_ref
:rtype: ReferencesDataFrame | [
"Filters",
"the",
"current",
"DataFrame",
"to",
"only",
"contain",
"those",
"rows",
"whose",
"reference",
"is",
"master",
"."
] | python | train |
Azure/azure-sdk-for-python | azure-servicebus/azure/servicebus/control_client/models.py | https://github.com/Azure/azure-sdk-for-python/blob/d7306fde32f60a293a7567678692bdad31e4b667/azure-servicebus/azure/servicebus/control_client/models.py#L202-L217 | def unlock(self):
''' Unlocks itself if it finds a queue name, or a topic name and subscription
name. '''
if self._queue_name:
self.service_bus_service.unlock_queue_message(
self._queue_name,
self.broker_properties['SequenceNumber'],
self.broker_properties['LockToken'])
elif self._topic_name and self._subscription_name:
self.service_bus_service.unlock_subscription_message(
self._topic_name,
self._subscription_name,
self.broker_properties['SequenceNumber'],
self.broker_properties['LockToken'])
else:
raise AzureServiceBusPeekLockError(_ERROR_MESSAGE_NOT_PEEK_LOCKED_ON_UNLOCK) | [
"def",
"unlock",
"(",
"self",
")",
":",
"if",
"self",
".",
"_queue_name",
":",
"self",
".",
"service_bus_service",
".",
"unlock_queue_message",
"(",
"self",
".",
"_queue_name",
",",
"self",
".",
"broker_properties",
"[",
"'SequenceNumber'",
"]",
",",
"self",
... | Unlocks itself if it finds a queue name, or a topic name and subscription
name. | [
"Unlocks",
"itself",
"if",
"find",
"queue",
"name",
"or",
"topic",
"name",
"and",
"subscription",
"name",
"."
] | python | test |
klorenz/python-argdeco | argdeco/command_decorator.py | https://github.com/klorenz/python-argdeco/blob/8d01acef8c19d6883873689d017b14857876412d/argdeco/command_decorator.py#L202-L213 | def update(self, command=None, **kwargs):
"""update data, which is usually passed in ArgumentParser initialization
e.g. command.update(prog="foo")
"""
if command is None:
argparser = self.argparser
else:
argparser = self[command]
for k,v in kwargs.items():
setattr(argparser, k, v) | [
"def",
"update",
"(",
"self",
",",
"command",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"command",
"is",
"None",
":",
"argparser",
"=",
"self",
".",
"argparser",
"else",
":",
"argparser",
"=",
"self",
"[",
"command",
"]",
"for",
"k",
",... | update data, which is usually passed in ArgumentParser initialization
e.g. command.update(prog="foo") | [
"update",
"data",
"which",
"is",
"usually",
"passed",
"in",
"ArgumentParser",
"initialization"
] | python | train |
pylast/pylast | src/pylast/__init__.py | https://github.com/pylast/pylast/blob/a52f66d316797fc819b5f1d186d77f18ba97b4ff/src/pylast/__init__.py#L1610-L1621 | def get_mbid(self):
"""Returns the MusicBrainz ID of the album or track."""
doc = self._request(self.ws_prefix + ".getInfo", cacheable=True)
try:
lfm = doc.getElementsByTagName("lfm")[0]
opus = next(self._get_children_by_tag_name(lfm, self.ws_prefix))
mbid = next(self._get_children_by_tag_name(opus, "mbid"))
return mbid.firstChild.nodeValue
except StopIteration:
return None | [
"def",
"get_mbid",
"(",
"self",
")",
":",
"doc",
"=",
"self",
".",
"_request",
"(",
"self",
".",
"ws_prefix",
"+",
"\".getInfo\"",
",",
"cacheable",
"=",
"True",
")",
"try",
":",
"lfm",
"=",
"doc",
".",
"getElementsByTagName",
"(",
"\"lfm\"",
")",
"[",... | Returns the MusicBrainz ID of the album or track. | [
"Returns",
"the",
"MusicBrainz",
"ID",
"of",
"the",
"album",
"or",
"track",
"."
] | python | train |
ionelmc/python-cogen | cogen/core/proactors/base.py | https://github.com/ionelmc/python-cogen/blob/83b0edb88425eba6e5bfda9f1dcd34642517e2a8/cogen/core/proactors/base.py#L186-L194 | def add_token(self, act, coro, performer):
"""
Adds a completion token `act` in the proactor with associated `coro`
coroutine and perform callable.
"""
assert act not in self.tokens
act.coro = coro
self.tokens[act] = performer
self.register_fd(act, performer) | [
"def",
"add_token",
"(",
"self",
",",
"act",
",",
"coro",
",",
"performer",
")",
":",
"assert",
"act",
"not",
"in",
"self",
".",
"tokens",
"act",
".",
"coro",
"=",
"coro",
"self",
".",
"tokens",
"[",
"act",
"]",
"=",
"performer",
"self",
".",
"regi... | Adds a completion token `act` in the proactor with associated `coro`
coroutine and perform callable. | [
"Adds",
"a",
"completion",
"token",
"act",
"in",
"the",
"proactor",
"with",
"associated",
"coro",
"coroutine",
"and",
"perform",
"callable",
"."
] | python | train |
apple/turicreate | deps/src/libxml2-2.9.1/python/libxml2.py | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/libxml2-2.9.1/python/libxml2.py#L6567-L6575 | def CurrentNode(self):
"""Hacking interface allowing to get the xmlNodePtr
corresponding to the current node being accessed by the
xmlTextReader. This is dangerous because the underlying
node may be destroyed on the next Reads. """
ret = libxml2mod.xmlTextReaderCurrentNode(self._o)
if ret is None:raise treeError('xmlTextReaderCurrentNode() failed')
__tmp = xmlNode(_obj=ret)
return __tmp | [
"def",
"CurrentNode",
"(",
"self",
")",
":",
"ret",
"=",
"libxml2mod",
".",
"xmlTextReaderCurrentNode",
"(",
"self",
".",
"_o",
")",
"if",
"ret",
"is",
"None",
":",
"raise",
"treeError",
"(",
"'xmlTextReaderCurrentNode() failed'",
")",
"__tmp",
"=",
"xmlNode",... | Hacking interface allowing to get the xmlNodePtr
corresponding to the current node being accessed by the
xmlTextReader. This is dangerous because the underlying
node may be destroyed on the next Reads. | [
"Hacking",
"interface",
"allowing",
"to",
"get",
"the",
"xmlNodePtr",
"corresponding",
"to",
"the",
"current",
"node",
"being",
"accessed",
"by",
"the",
"xmlTextReader",
".",
"This",
"is",
"dangerous",
"because",
"the",
"underlying",
"node",
"may",
"be",
"destroy... | python | train |
datastax/python-driver | cassandra/metadata.py | https://github.com/datastax/python-driver/blob/30a80d0b798b1f45f8cb77163b1fa791f3e3ca29/cassandra/metadata.py#L1140-L1166 | def export_as_string(self):
"""
Returns a string of CQL queries that can be used to recreate this table
along with all indexes on it. The returned string is formatted to
be human readable.
"""
if self._exc_info:
import traceback
ret = "/*\nWarning: Table %s.%s is incomplete because of an error processing metadata.\n" % \
(self.keyspace_name, self.name)
for line in traceback.format_exception(*self._exc_info):
ret += line
ret += "\nApproximate structure, for reference:\n(this should not be used to reproduce this schema)\n\n%s\n*/" % self._all_as_cql()
elif not self.is_cql_compatible:
# If we can't produce this table with CQL, comment inline
ret = "/*\nWarning: Table %s.%s omitted because it has constructs not compatible with CQL (was created via legacy API).\n" % \
(self.keyspace_name, self.name)
ret += "\nApproximate structure, for reference:\n(this should not be used to reproduce this schema)\n\n%s\n*/" % self._all_as_cql()
elif self.virtual:
ret = ('/*\nWarning: Table {ks}.{tab} is a virtual table and cannot be recreated with CQL.\n'
'Structure, for reference:\n'
'{cql}\n*/').format(ks=self.keyspace_name, tab=self.name, cql=self._all_as_cql())
else:
ret = self._all_as_cql()
return ret | [
"def",
"export_as_string",
"(",
"self",
")",
":",
"if",
"self",
".",
"_exc_info",
":",
"import",
"traceback",
"ret",
"=",
"\"/*\\nWarning: Table %s.%s is incomplete because of an error processing metadata.\\n\"",
"%",
"(",
"self",
".",
"keyspace_name",
",",
"self",
".",... | Returns a string of CQL queries that can be used to recreate this table
along with all indexes on it. The returned string is formatted to
be human readable. | [
"Returns",
"a",
"string",
"of",
"CQL",
"queries",
"that",
"can",
"be",
"used",
"to",
"recreate",
"this",
"table",
"along",
"with",
"all",
"indexes",
"on",
"it",
".",
"The",
"returned",
"string",
"is",
"formatted",
"to",
"be",
"human",
"readable",
"."
] | python | train |
benedictpaten/sonLib | tree.py | https://github.com/benedictpaten/sonLib/blob/1decb75bb439b70721ec776f685ce98e25217d26/tree.py#L162-L172 | def transformByDistance(wV, subModel, alphabetSize=4):
"""
transform wV by given substitution matrix
"""
nc = [0.0]*alphabetSize
for i in xrange(0, alphabetSize):
j = wV[i]
k = subModel[i]
for l in xrange(0, alphabetSize):
nc[l] += j * k[l]
return nc | [
"def",
"transformByDistance",
"(",
"wV",
",",
"subModel",
",",
"alphabetSize",
"=",
"4",
")",
":",
"nc",
"=",
"[",
"0.0",
"]",
"*",
"alphabetSize",
"for",
"i",
"in",
"xrange",
"(",
"0",
",",
"alphabetSize",
")",
":",
"j",
"=",
"wV",
"[",
"i",
"]",
... | transform wV by given substitution matrix | [
"transform",
"wV",
"by",
"given",
"substitution",
"matrix"
] | python | train |
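The `transformByDistance` loop above is a plain vector–matrix product, `nc[l] = Σ_i wV[i] · subModel[i][l]`. A minimal Python 3 sketch of the same computation (the original relies on Python 2's `xrange`; names here are illustrative):

```python
def transform_by_distance(wv, sub_model):
    """Multiply a character weight vector wv by a substitution matrix sub_model."""
    n = len(wv)
    nc = [0.0] * n
    for i in range(n):
        row = sub_model[i]
        for l in range(n):
            nc[l] += wv[i] * row[l]
    return nc

# With the identity substitution matrix the weight vector is unchanged.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(transform_by_distance([0.1, 0.2, 0.3, 0.4], identity))  # [0.1, 0.2, 0.3, 0.4]
```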
RudolfCardinal/pythonlib | cardinal_pythonlib/rnc_text.py | https://github.com/RudolfCardinal/pythonlib/blob/0b84cb35f38bd7d8723958dae51b480a829b7227/cardinal_pythonlib/rnc_text.py#L503-L512 | def dictlist_convert_to_string(dict_list: Iterable[Dict], key: str) -> None:
"""
Process an iterable of dictionaries. For each dictionary ``d``, convert
(in place) ``d[key]`` to a string form, ``str(d[key])``. If the result is a
blank string, convert it to ``None``.
"""
for d in dict_list:
d[key] = str(d[key])
if d[key] == "":
d[key] = None | [
"def",
"dictlist_convert_to_string",
"(",
"dict_list",
":",
"Iterable",
"[",
"Dict",
"]",
",",
"key",
":",
"str",
")",
"->",
"None",
":",
"for",
"d",
"in",
"dict_list",
":",
"d",
"[",
"key",
"]",
"=",
"str",
"(",
"d",
"[",
"key",
"]",
")",
"if",
... | Process an iterable of dictionaries. For each dictionary ``d``, convert
(in place) ``d[key]`` to a string form, ``str(d[key])``. If the result is a
blank string, convert it to ``None``. | [
"Process",
"an",
"iterable",
"of",
"dictionaries",
".",
"For",
"each",
"dictionary",
"d",
"convert",
"(",
"in",
"place",
")",
"d",
"[",
"key",
"]",
"to",
"a",
"string",
"form",
"str",
"(",
"d",
"[",
"key",
"]",
")",
".",
"If",
"the",
"result",
"is"... | python | train |
tensorflow/tensor2tensor | tensor2tensor/layers/discretization.py | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/discretization.py#L1399-L1426 | def isemhash_bottleneck(x,
bottleneck_bits,
bottleneck_noise,
discretize_warmup_steps,
mode,
isemhash_noise_dev=0.5,
isemhash_mix_prob=0.5):
"""Improved semantic hashing bottleneck."""
with tf.variable_scope("isemhash_bottleneck"):
x = tf.layers.dense(x, bottleneck_bits, name="dense")
y = common_layers.saturating_sigmoid(x)
if isemhash_noise_dev > 0 and mode == tf.estimator.ModeKeys.TRAIN:
noise = tf.truncated_normal(
common_layers.shape_list(x), mean=0.0, stddev=isemhash_noise_dev)
y = common_layers.saturating_sigmoid(x + noise)
d = tf.to_float(tf.less(0.5, y)) + y - tf.stop_gradient(y)
d = 2.0 * d - 1.0 # Move from [0, 1] to [-1, 1].
if mode == tf.estimator.ModeKeys.TRAIN: # Flip some bits.
noise = tf.random_uniform(common_layers.shape_list(x))
noise = 2.0 * tf.to_float(tf.less(bottleneck_noise, noise)) - 1.0
d *= noise
d = common_layers.mix(
d,
2.0 * y - 1.0,
discretize_warmup_steps,
mode == tf.estimator.ModeKeys.TRAIN,
max_prob=isemhash_mix_prob)
return d, 0.0 | [
"def",
"isemhash_bottleneck",
"(",
"x",
",",
"bottleneck_bits",
",",
"bottleneck_noise",
",",
"discretize_warmup_steps",
",",
"mode",
",",
"isemhash_noise_dev",
"=",
"0.5",
",",
"isemhash_mix_prob",
"=",
"0.5",
")",
":",
"with",
"tf",
".",
"variable_scope",
"(",
... | Improved semantic hashing bottleneck. | [
"Improved",
"semantic",
"hashing",
"bottleneck",
"."
] | python | train |
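The line `d = tf.to_float(tf.less(0.5, y)) + y - tf.stop_gradient(y)` above is a straight-through estimator: in the forward pass `tf.stop_gradient(y)` equals `y`, so `d` collapses to the hard 0/1 threshold, while gradients flow only through the `+ y` term. A pure-Python check of the forward values (inputs chosen as dyadic fractions so the float arithmetic is exact):

```python
y = [0.25, 0.75]
bits = [1.0 if v > 0.5 else 0.0 for v in y]
# Forward pass: stop_gradient(y) contributes y itself, so b + y - y == b.
d = [b + v - v for b, v in zip(bits, y)]
d = [2.0 * x - 1.0 for x in d]  # move from [0, 1] to [-1, 1]
print(d)  # [-1.0, 1.0]
```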
SFDO-Tooling/CumulusCI | cumulusci/core/keychain/BaseProjectKeychain.py | https://github.com/SFDO-Tooling/CumulusCI/blob/e19047921ca771a297e045f22f0bb201651bb6f7/cumulusci/core/keychain/BaseProjectKeychain.py#L151-L156 | def set_default_org(self, name):
""" set the default org for tasks by name key """
org = self.get_org(name)
self.unset_default_org()
org.config["default"] = True
self.set_org(org) | [
"def",
"set_default_org",
"(",
"self",
",",
"name",
")",
":",
"org",
"=",
"self",
".",
"get_org",
"(",
"name",
")",
"self",
".",
"unset_default_org",
"(",
")",
"org",
".",
"config",
"[",
"\"default\"",
"]",
"=",
"True",
"self",
".",
"set_org",
"(",
"... | set the default org for tasks by name key | [
"set",
"the",
"default",
"org",
"for",
"tasks",
"by",
"name",
"key"
] | python | train |
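`set_default_org` above follows a clear-then-set pattern: remove any existing default, then flag the named org. A dictionary-based sketch of that pattern (the data shapes here are hypothetical, not the CumulusCI keychain API):

```python
def set_default(orgs, name):
    # Clear any previous default, then mark `name` as the default org.
    for cfg in orgs.values():
        cfg["default"] = False
    orgs[name]["default"] = True

orgs = {"dev": {"default": True}, "qa": {"default": False}}
set_default(orgs, "qa")
print(orgs)  # {'dev': {'default': False}, 'qa': {'default': True}}
```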
timmahrt/ProMo | promo/morph_utils/morph_sequence.py | https://github.com/timmahrt/ProMo/blob/99d9f5cc01ff328a62973c5a5da910cc905ae4d5/promo/morph_utils/morph_sequence.py#L98-L130 | def _getSmallestDifference(inputList, targetVal):
'''
Returns the value in inputList that is closest to targetVal
Iteratively splits the dataset in two, so it should be pretty fast
'''
targetList = inputList[:]
retVal = None
while True:
# If we're down to one value, stop iterating
if len(targetList) == 1:
retVal = targetList[0]
break
halfPoint = int(len(targetList) / 2.0) - 1
a = targetList[halfPoint]
b = targetList[halfPoint + 1]
leftDiff = abs(targetVal - a)
rightDiff = abs(targetVal - b)
# If the distance is 0, stop iterating, the targetVal is present
# in the inputList
if leftDiff == 0 or rightDiff == 0:
retVal = targetVal
break
# Look at left half or right half
if leftDiff < rightDiff:
targetList = targetList[:halfPoint + 1]
else:
targetList = targetList[halfPoint + 1:]
return retVal | [
"def",
"_getSmallestDifference",
"(",
"inputList",
",",
"targetVal",
")",
":",
"targetList",
"=",
"inputList",
"[",
":",
"]",
"retVal",
"=",
"None",
"while",
"True",
":",
"# If we're down to one value, stop iterating",
"if",
"len",
"(",
"targetList",
")",
"==",
... | Returns the value in inputList that is closest to targetVal
Iteratively splits the dataset in two, so it should be pretty fast | [
"Returns",
"the",
"value",
"in",
"inputList",
"that",
"is",
"closest",
"to",
"targetVal",
"Iteratively",
"splits",
"the",
"dataset",
"in",
"two",
"so",
"it",
"should",
"be",
"pretty",
"fast"
] | python | train |
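The closest-value search in `_getSmallestDifference` above can also be written with the standard library's `bisect`, which likewise halves the search range on each step. A sketch assuming a sorted, non-empty input list:

```python
import bisect

def closest(sorted_vals, target):
    # Return the element of sorted_vals nearest to target.
    pos = bisect.bisect_left(sorted_vals, target)
    if pos == 0:
        return sorted_vals[0]
    if pos == len(sorted_vals):
        return sorted_vals[-1]
    before, after = sorted_vals[pos - 1], sorted_vals[pos]
    return before if target - before <= after - target else after

print(closest([1, 4, 9, 16], 11))  # 9
```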
kubernetes-client/python | kubernetes/client/apis/certificates_v1beta1_api.py | https://github.com/kubernetes-client/python/blob/5e512ff564c244c50cab780d821542ed56aa965a/kubernetes/client/apis/certificates_v1beta1_api.py#L1157-L1180 | def replace_certificate_signing_request_approval(self, name, body, **kwargs):
"""
replace approval of the specified CertificateSigningRequest
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_certificate_signing_request_approval(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name of the CertificateSigningRequest (required)
:param V1beta1CertificateSigningRequest body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1beta1CertificateSigningRequest
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.replace_certificate_signing_request_approval_with_http_info(name, body, **kwargs)
else:
(data) = self.replace_certificate_signing_request_approval_with_http_info(name, body, **kwargs)
return data | [
"def",
"replace_certificate_signing_request_approval",
"(",
"self",
",",
"name",
",",
"body",
",",
"*",
"*",
"kwargs",
")",
":",
"kwargs",
"[",
"'_return_http_data_only'",
"]",
"=",
"True",
"if",
"kwargs",
".",
"get",
"(",
"'async_req'",
")",
":",
"return",
... | replace approval of the specified CertificateSigningRequest
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_certificate_signing_request_approval(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name of the CertificateSigningRequest (required)
:param V1beta1CertificateSigningRequest body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1beta1CertificateSigningRequest
If the method is called asynchronously,
returns the request thread. | [
"replace",
"approval",
"of",
"the",
"specified",
"CertificateSigningRequest",
"This",
"method",
"makes",
"a",
"synchronous",
"HTTP",
"request",
"by",
"default",
".",
"To",
"make",
"an",
"asynchronous",
"HTTP",
"request",
"please",
"pass",
"async_req",
"=",
"True",... | python | train |
apache/airflow | airflow/contrib/hooks/mongo_hook.py | https://github.com/apache/airflow/blob/b69c686ad8a0c89b9136bb4b31767257eb7b2597/airflow/contrib/hooks/mongo_hook.py#L93-L102 | def get_collection(self, mongo_collection, mongo_db=None):
"""
Fetches a mongo collection object for querying.
Uses connection schema as DB unless specified.
"""
mongo_db = mongo_db if mongo_db is not None else self.connection.schema
mongo_conn = self.get_conn()
return mongo_conn.get_database(mongo_db).get_collection(mongo_collection) | [
"def",
"get_collection",
"(",
"self",
",",
"mongo_collection",
",",
"mongo_db",
"=",
"None",
")",
":",
"mongo_db",
"=",
"mongo_db",
"if",
"mongo_db",
"is",
"not",
"None",
"else",
"self",
".",
"connection",
".",
"schema",
"mongo_conn",
"=",
"self",
".",
"ge... | Fetches a mongo collection object for querying.
Uses connection schema as DB unless specified. | [
"Fetches",
"a",
"mongo",
"collection",
"object",
"for",
"querying",
"."
] | python | test |
gwastro/pycbc | pycbc/filter/matchedfilter.py | https://github.com/gwastro/pycbc/blob/7a64cdd104d263f1b6ea0b01e6841837d05a4cb3/pycbc/filter/matchedfilter.py#L1255-L1280 | def smear(idx, factor):
"""
This function will take as input an array of indexes and return every
unique index within the specified factor of the inputs.
E.g.: smear([5,7,100],2) = [3,4,5,6,7,8,9,98,99,100,101,102]
Parameters
-----------
idx : numpy.array of ints
The indexes to be smeared.
factor : idx
The factor by which to smear out the input array.
Returns
--------
new_idx : numpy.array of ints
The smeared array of indexes.
"""
s = [idx]
for i in range(factor+1):
a = i - factor/2
s += [idx + a]
return numpy.unique(numpy.concatenate(s)) | [
"def",
"smear",
"(",
"idx",
",",
"factor",
")",
":",
"s",
"=",
"[",
"idx",
"]",
"for",
"i",
"in",
"range",
"(",
"factor",
"+",
"1",
")",
":",
"a",
"=",
"i",
"-",
"factor",
"/",
"2",
"s",
"+=",
"[",
"idx",
"+",
"a",
"]",
"return",
"numpy",
... | This function will take as input an array of indexes and return every
unique index within the specified factor of the inputs.
E.g.: smear([5,7,100],2) = [3,4,5,6,7,8,9,98,99,100,101,102]
Parameters
-----------
idx : numpy.array of ints
The indexes to be smeared.
factor : idx
The factor by which to smear out the input array.
Returns
--------
new_idx : numpy.array of ints
The smeared array of indexes. | [
"This",
"function",
"will",
"take",
"as",
"input",
"an",
"array",
"of",
"indexes",
"and",
"return",
"every",
"unique",
"index",
"within",
"the",
"specified",
"factor",
"of",
"the",
"inputs",
"."
] | python | train |
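Note that the worked example in the `smear` docstring expands each input index by ±`factor`. A minimal pure-Python sketch reproducing the behaviour the docstring describes (not the NumPy implementation above):

```python
def smear_within(idx, factor):
    # Every unique index within `factor` of any input index, sorted.
    out = set()
    for i in idx:
        out.update(range(i - factor, i + factor + 1))
    return sorted(out)

print(smear_within([5, 7, 100], 2))
# [3, 4, 5, 6, 7, 8, 9, 98, 99, 100, 101, 102]
```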
StackStorm/pybind | pybind/slxos/v17s_1_02/routing_system/router/router_bgp/router_bgp_attributes/__init__.py | https://github.com/StackStorm/pybind/blob/44c467e71b2b425be63867aba6e6fa28b2cfe7fb/pybind/slxos/v17s_1_02/routing_system/router/router_bgp/router_bgp_attributes/__init__.py#L349-L370 | def _set_cluster_id(self, v, load=False):
"""
Setter method for cluster_id, mapped from YANG variable /routing_system/router/router_bgp/router_bgp_attributes/cluster_id (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_cluster_id is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_cluster_id() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=cluster_id.cluster_id, is_container='container', presence=False, yang_name="cluster-id", rest_name="cluster-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Route-Reflector Cluster-ID'}}, namespace='urn:brocade.com:mgmt:brocade-bgp', defining_module='brocade-bgp', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """cluster_id must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=cluster_id.cluster_id, is_container='container', presence=False, yang_name="cluster-id", rest_name="cluster-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Configure Route-Reflector Cluster-ID'}}, namespace='urn:brocade.com:mgmt:brocade-bgp', defining_module='brocade-bgp', yang_type='container', is_config=True)""",
})
self.__cluster_id = t
if hasattr(self, '_set'):
self._set() | [
"def",
"_set_cluster_id",
"(",
"self",
",",
"v",
",",
"load",
"=",
"False",
")",
":",
"if",
"hasattr",
"(",
"v",
",",
"\"_utype\"",
")",
":",
"v",
"=",
"v",
".",
"_utype",
"(",
"v",
")",
"try",
":",
"t",
"=",
"YANGDynClass",
"(",
"v",
",",
"bas... | Setter method for cluster_id, mapped from YANG variable /routing_system/router/router_bgp/router_bgp_attributes/cluster_id (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_cluster_id is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_cluster_id() directly. | [
"Setter",
"method",
"for",
"cluster_id",
"mapped",
"from",
"YANG",
"variable",
"/",
"routing_system",
"/",
"router",
"/",
"router_bgp",
"/",
"router_bgp_attributes",
"/",
"cluster_id",
"(",
"container",
")",
"If",
"this",
"variable",
"is",
"read",
"-",
"only",
... | python | train |
Azure/azure-uamqp-python | uamqp/authentication/cbs_auth_async.py | https://github.com/Azure/azure-uamqp-python/blob/b67e4fcaf2e8a337636947523570239c10a58ae2/uamqp/authentication/cbs_auth_async.py#L21-L54 | async def create_authenticator_async(self, connection, debug=False, loop=None, **kwargs):
"""Create the async AMQP session and the CBS channel with which
to negotiate the token.
:param connection: The underlying AMQP connection on which
to create the session.
:type connection: ~uamqp.async_ops.connection_async.ConnectionAsync
:param debug: Whether to emit network trace logging events for the
CBS session. Default is `False`. Logging events are set at INFO level.
:type debug: bool
:param loop: A user specified event loop.
        :type loop: ~asyncio.AbstractEventLoop
:rtype: uamqp.c_uamqp.CBSTokenAuth
"""
self.loop = loop or asyncio.get_event_loop()
self._connection = connection
self._session = SessionAsync(connection, loop=self.loop, **kwargs)
try:
self._cbs_auth = c_uamqp.CBSTokenAuth(
self.audience,
self.token_type,
self.token,
int(self.expires_at),
self._session._session, # pylint: disable=protected-access
self.timeout,
self._connection.container_id)
self._cbs_auth.set_trace(debug)
except ValueError:
await self._session.destroy_async()
raise errors.AMQPConnectionError(
"Unable to open authentication session on connection {}.\n"
"Please confirm target hostname exists: {}".format(
connection.container_id, connection.hostname)) from None
return self._cbs_auth | [
"async",
"def",
"create_authenticator_async",
"(",
"self",
",",
"connection",
",",
"debug",
"=",
"False",
",",
"loop",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"self",
".",
"loop",
"=",
"loop",
"or",
"asyncio",
".",
"get_event_loop",
"(",
")",
"... | Create the async AMQP session and the CBS channel with which
to negotiate the token.
:param connection: The underlying AMQP connection on which
to create the session.
:type connection: ~uamqp.async_ops.connection_async.ConnectionAsync
:param debug: Whether to emit network trace logging events for the
CBS session. Default is `False`. Logging events are set at INFO level.
:type debug: bool
:param loop: A user specified event loop.
        :type loop: ~asyncio.AbstractEventLoop
:rtype: uamqp.c_uamqp.CBSTokenAuth | [
"Create",
"the",
"async",
"AMQP",
"session",
"and",
"the",
"CBS",
"channel",
"with",
"which",
"to",
"negotiate",
"the",
"token",
"."
] | python | train |
yfpeng/bioc | bioc/utils.py | https://github.com/yfpeng/bioc/blob/47ddaa010960d9ba673aefe068e7bbaf39f0fff4/bioc/utils.py#L11-L18 | def pad_char(text: str, width: int, char: str = '\n') -> str:
"""Pads a text until length width."""
dis = width - len(text)
if dis < 0:
raise ValueError
if dis > 0:
text += char * dis
return text | [
"def",
"pad_char",
"(",
"text",
":",
"str",
",",
"width",
":",
"int",
",",
"char",
":",
"str",
"=",
"'\\n'",
")",
"->",
"str",
":",
"dis",
"=",
"width",
"-",
"len",
"(",
"text",
")",
"if",
"dis",
"<",
"0",
":",
"raise",
"ValueError",
"if",
"dis... | Pads a text until length width. | [
"Pads",
"a",
"text",
"until",
"length",
"width",
"."
] | python | train |
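`pad_char` above is fully self-contained; a standalone copy behaves like this:

```python
def pad_char(text, width, char="\n"):
    # Pad text with char until it reaches length width; reject over-long text.
    dis = width - len(text)
    if dis < 0:
        raise ValueError
    return text + char * dis

print(repr(pad_char("ab", 5, "-")))  # 'ab---'
```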
baliame/http-hmac-python | httphmac/v1.py | https://github.com/baliame/http-hmac-python/blob/9884c0cbfdb712f9f37080a8efbfdce82850785f/httphmac/v1.py#L84-L99 | def check(self, request, secret):
"""Verifies whether or not the request bears an authorization appropriate and valid for this version of the signature.
This verifies every element of the signature, including headers other than Authorization.
Keyword arguments:
request -- A request object which can be consumed by this API.
secret -- The base64-encoded secret key for the HMAC authorization.
"""
if request.get_header("Authorization") == "":
return False
ah = self.parse_auth_headers(request.get_header("Authorization"))
if "id" not in ah:
return False
if "signature" not in ah:
return False
return ah["signature"] == self.sign(request, ah, secret) | [
"def",
"check",
"(",
"self",
",",
"request",
",",
"secret",
")",
":",
"if",
"request",
".",
"get_header",
"(",
"\"Authorization\"",
")",
"==",
"\"\"",
":",
"return",
"False",
"ah",
"=",
"self",
".",
"parse_auth_headers",
"(",
"request",
".",
"get_header",
... | Verifies whether or not the request bears an authorization appropriate and valid for this version of the signature.
This verifies every element of the signature, including headers other than Authorization.
Keyword arguments:
request -- A request object which can be consumed by this API.
secret -- The base64-encoded secret key for the HMAC authorization. | [
"Verifies",
"whether",
"or",
"not",
"the",
"request",
"bears",
"an",
"authorization",
"appropriate",
"and",
"valid",
"for",
"this",
"version",
"of",
"the",
"signature",
".",
"This",
"verifies",
"every",
"element",
"of",
"the",
"signature",
"including",
"headers"... | python | train |
dnanexus/dx-toolkit | src/python/dxpy/api.py | https://github.com/dnanexus/dx-toolkit/blob/74befb53ad90fcf902d8983ae6d74580f402d619/src/python/dxpy/api.py#L1148-L1154 | def record_rename(object_id, input_params={}, always_retry=True, **kwargs):
"""
Invokes the /record-xxxx/rename API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Name#API-method%3A-%2Fclass-xxxx%2Frename
"""
return DXHTTPRequest('/%s/rename' % object_id, input_params, always_retry=always_retry, **kwargs) | [
"def",
"record_rename",
"(",
"object_id",
",",
"input_params",
"=",
"{",
"}",
",",
"always_retry",
"=",
"True",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"DXHTTPRequest",
"(",
"'/%s/rename'",
"%",
"object_id",
",",
"input_params",
",",
"always_retry",
"="... | Invokes the /record-xxxx/rename API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Name#API-method%3A-%2Fclass-xxxx%2Frename | [
"Invokes",
"the",
"/",
"record",
"-",
"xxxx",
"/",
"rename",
"API",
"method",
"."
] | python | train |
rbuffat/pyepw | pyepw/epw.py | https://github.com/rbuffat/pyepw/blob/373d4d3c8386c8d35789f086ac5f6018c2711745/pyepw/epw.py#L1350-L1370 | def ws004c(self, value=None):
"""Corresponds to IDD Field `ws004c`
Args:
value (float): value for IDD Field `ws004c`
Unit: m/s
if `value` is None it will not be checked against the
specification and is assumed to be a missing value
Raises:
ValueError: if `value` is not a valid value
"""
if value is not None:
try:
value = float(value)
except ValueError:
raise ValueError('value {} need to be of type float '
'for field `ws004c`'.format(value))
self._ws004c = value | [
"def",
"ws004c",
"(",
"self",
",",
"value",
"=",
"None",
")",
":",
"if",
"value",
"is",
"not",
"None",
":",
"try",
":",
"value",
"=",
"float",
"(",
"value",
")",
"except",
"ValueError",
":",
"raise",
"ValueError",
"(",
"'value {} need to be of type float '... | Corresponds to IDD Field `ws004c`
Args:
value (float): value for IDD Field `ws004c`
Unit: m/s
if `value` is None it will not be checked against the
specification and is assumed to be a missing value
Raises:
ValueError: if `value` is not a valid value | [
"Corresponds",
"to",
"IDD",
"Field",
"ws004c"
] | python | train |
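The validation pattern in `ws004c` above (pass `None` through as a missing value, otherwise coerce to `float` or raise) can be factored into a small helper; a sketch with an illustrative field name:

```python
def to_float_or_none(value, field):
    # None means "missing value" and is passed through unchecked.
    if value is None:
        return None
    try:
        return float(value)
    except ValueError:
        raise ValueError(
            "value {} needs to be of type float for field `{}`".format(value, field))

print(to_float_or_none("3.5", "ws004c"))  # 3.5
print(to_float_or_none(None, "ws004c"))   # None
```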
jreese/aiosqlite | aiosqlite/core.py | https://github.com/jreese/aiosqlite/blob/3f548b568b8db9a57022b6e2c9627f5cdefb983f/aiosqlite/core.py#L213-L219 | async def execute_insert(
self, sql: str, parameters: Iterable[Any] = None
) -> Optional[sqlite3.Row]:
"""Helper to insert and get the last_insert_rowid."""
if parameters is None:
parameters = []
return await self._execute(self._execute_insert, sql, parameters) | [
"async",
"def",
"execute_insert",
"(",
"self",
",",
"sql",
":",
"str",
",",
"parameters",
":",
"Iterable",
"[",
"Any",
"]",
"=",
"None",
")",
"->",
"Optional",
"[",
"sqlite3",
".",
"Row",
"]",
":",
"if",
"parameters",
"is",
"None",
":",
"parameters",
... | Helper to insert and get the last_insert_rowid. | [
"Helper",
"to",
"insert",
"and",
"get",
"the",
"last_insert_rowid",
"."
] | python | train |
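`execute_insert` above wraps the synchronous sqlite3 pattern of inserting a row and then reading back the last inserted rowid; that underlying pattern looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
cur = conn.execute("INSERT INTO items (name) VALUES (?)", ("first",))
print(cur.lastrowid)  # 1
# The same value via SQL, as a last_insert_rowid() query:
row = conn.execute("SELECT last_insert_rowid()").fetchone()
print(row[0])  # 1
```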
trevisanj/a99 | a99/conversion.py | https://github.com/trevisanj/a99/blob/193e6e3c9b3e4f4a0ba7eb3eece846fe7045c539/a99/conversion.py#L169-L181 | def valid_fits_key(key):
"""
Makes valid key for a FITS header
"The keyword names may be up to 8 characters long and can only contain
uppercase letters A to Z, the digits 0 to 9, the hyphen, and the underscore
character." (http://fits.gsfc.nasa.gov/fits_primer.html)
"""
ret = re.sub("[^A-Z0-9\-_]", "", key.upper())[:8]
if len(ret) == 0:
raise RuntimeError("key '{0!s}' has no valid characters to be a key in a FITS header".format(key))
return ret | [
"def",
"valid_fits_key",
"(",
"key",
")",
":",
"ret",
"=",
"re",
".",
"sub",
"(",
"\"[^A-Z0-9\\-_]\"",
",",
"\"\"",
",",
"key",
".",
"upper",
"(",
")",
")",
"[",
":",
"8",
"]",
"if",
"len",
"(",
"ret",
")",
"==",
"0",
":",
"raise",
"RuntimeError"... | Makes valid key for a FITS header
"The keyword names may be up to 8 characters long and can only contain
uppercase letters A to Z, the digits 0 to 9, the hyphen, and the underscore
character." (http://fits.gsfc.nasa.gov/fits_primer.html) | [
"Makes",
"valid",
"key",
"for",
"a",
"FITS",
"header",
"The",
"keyword",
"names",
"may",
"be",
"up",
"to",
"8",
"characters",
"long",
"and",
"can",
"only",
"contain",
"uppercase",
"letters",
"A",
"to",
"Z",
"the",
"digits",
"0",
"to",
"9",
"the",
"hyph... | python | train |
Azure/azure-sdk-for-python | azure-servicemanagement-legacy/azure/servicemanagement/websitemanagementservice.py | https://github.com/Azure/azure-sdk-for-python/blob/d7306fde32f60a293a7567678692bdad31e4b667/azure-servicemanagement-legacy/azure/servicemanagement/websitemanagementservice.py#L155-L179 | def delete_site(self, webspace_name, website_name,
delete_empty_server_farm=False, delete_metrics=False):
'''
Delete a website.
webspace_name:
The name of the webspace.
website_name:
The name of the website.
delete_empty_server_farm:
If the site being deleted is the last web site in a server farm,
you can delete the server farm by setting this to True.
delete_metrics:
To also delete the metrics for the site that you are deleting, you
can set this to True.
'''
path = self._get_sites_details_path(webspace_name, website_name)
query = ''
if delete_empty_server_farm:
query += '&deleteEmptyServerFarm=true'
if delete_metrics:
query += '&deleteMetrics=true'
if query:
path = path + '?' + query.lstrip('&')
return self._perform_delete(path) | [
"def",
"delete_site",
"(",
"self",
",",
"webspace_name",
",",
"website_name",
",",
"delete_empty_server_farm",
"=",
"False",
",",
"delete_metrics",
"=",
"False",
")",
":",
"path",
"=",
"self",
".",
"_get_sites_details_path",
"(",
"webspace_name",
",",
"website_nam... | Delete a website.
webspace_name:
The name of the webspace.
website_name:
The name of the website.
delete_empty_server_farm:
If the site being deleted is the last web site in a server farm,
you can delete the server farm by setting this to True.
delete_metrics:
To also delete the metrics for the site that you are deleting, you
can set this to True. | [
"Delete",
"a",
"website",
"."
] | python | test |
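The query-string assembly at the end of `delete_site` above can be isolated and exercised on its own (helper name is illustrative):

```python
def build_delete_path(path, delete_empty_server_farm=False, delete_metrics=False):
    # Accumulate '&'-joined flags, then strip the leading '&' when attaching.
    query = ''
    if delete_empty_server_farm:
        query += '&deleteEmptyServerFarm=true'
    if delete_metrics:
        query += '&deleteMetrics=true'
    if query:
        path = path + '?' + query.lstrip('&')
    return path

print(build_delete_path('/sites/x', True, True))
# /sites/x?deleteEmptyServerFarm=true&deleteMetrics=true
```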
apache/airflow | airflow/models/dagrun.py | https://github.com/apache/airflow/blob/b69c686ad8a0c89b9136bb4b31767257eb7b2597/airflow/models/dagrun.py#L221-L229 | def get_previous_dagrun(self, session=None):
"""The previous DagRun, if there is one"""
return session.query(DagRun).filter(
DagRun.dag_id == self.dag_id,
DagRun.execution_date < self.execution_date
).order_by(
DagRun.execution_date.desc()
).first() | [
"def",
"get_previous_dagrun",
"(",
"self",
",",
"session",
"=",
"None",
")",
":",
"return",
"session",
".",
"query",
"(",
"DagRun",
")",
".",
"filter",
"(",
"DagRun",
".",
"dag_id",
"==",
"self",
".",
"dag_id",
",",
"DagRun",
".",
"execution_date",
"<",
... | The previous DagRun, if there is one | [
"The",
"previous",
"DagRun",
"if",
"there",
"is",
"one"
] | python | test |
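The ORM query in `get_previous_dagrun` above corresponds to a "latest earlier row" SQL pattern; an sqlite sketch of the equivalent query (table layout simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dag_run (dag_id TEXT, execution_date TEXT)")
conn.executemany(
    "INSERT INTO dag_run VALUES (?, ?)",
    [("etl", "2024-01-01"), ("etl", "2024-01-02"), ("etl", "2024-01-03")],
)
# Previous run: same dag_id, strictly earlier date, newest first, one row.
row = conn.execute(
    "SELECT execution_date FROM dag_run "
    "WHERE dag_id = ? AND execution_date < ? "
    "ORDER BY execution_date DESC LIMIT 1",
    ("etl", "2024-01-03"),
).fetchone()
print(row[0])  # 2024-01-02
```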
bwhite/hadoopy | hadoopy/thirdparty/pyinstaller/PyInstaller/depend/dylib.py | https://github.com/bwhite/hadoopy/blob/ff39b4e6d4e6efaf1f571cf0f2c0e0d7ab28c2d6/hadoopy/thirdparty/pyinstaller/PyInstaller/depend/dylib.py#L175-L222 | def mac_set_relative_dylib_deps(libname):
"""
On Mac OS X set relative paths to dynamic library dependencies of `libname`.
    Relative paths allow avoiding the environment variable DYLD_LIBRARY_PATH.
    There are some known issues with DYLD_LIBRARY_PATH; relative paths are a
    more flexible mechanism.
    The current location of dependent libraries is derived from the location
of the executable (paths start with '@executable_path').
@executable_path or @loader_path fail in some situations
(@loader_path - qt4 plugins, @executable_path -
Python built-in hashlib module).
"""
from PyInstaller.lib.macholib import util
from PyInstaller.lib.macholib.MachO import MachO
# Ignore bootloader otherwise PyInstaller fails with exception like
# 'ValueError: total_size > low_offset (288 > 0)'
if os.path.basename(libname) in _BOOTLOADER_FNAMES:
return
def match_func(pth):
"""For system libraries is still used absolute path. It is unchanged."""
# Match non system dynamic libraries.
if not util.in_system_path(pth):
            # Use a relative path to dependent dynamic libraries based on
            # the location of the executable.
return os.path.join('@executable_path', os.path.basename(pth))
# Rewrite mach headers with @executable_path.
dll = MachO(libname)
dll.rewriteLoadCommands(match_func)
# Write changes into file.
# Write code is based on macholib example.
try:
f = open(dll.filename, 'rb+')
for header in dll.headers:
f.seek(0)
dll.write(f)
f.seek(0, 2)
f.flush()
f.close()
except Exception:
pass | [
"def",
"mac_set_relative_dylib_deps",
"(",
"libname",
")",
":",
"from",
"PyInstaller",
".",
"lib",
".",
"macholib",
"import",
"util",
"from",
"PyInstaller",
".",
"lib",
".",
"macholib",
".",
"MachO",
"import",
"MachO",
"# Ignore bootloader otherwise PyInstaller fails ... | On Mac OS X set relative paths to dynamic library dependencies of `libname`.
    Relative paths allow avoiding the environment variable DYLD_LIBRARY_PATH.
    There are some known issues with DYLD_LIBRARY_PATH; relative paths are a
    more flexible mechanism.
    The current location of dependent libraries is derived from the location
of the executable (paths start with '@executable_path').
@executable_path or @loader_path fail in some situations
(@loader_path - qt4 plugins, @executable_path -
Python built-in hashlib module). | [
"On",
"Mac",
"OS",
"X",
"set",
"relative",
"paths",
"to",
"dynamic",
"library",
"dependencies",
"of",
"libname",
"."
] | python | train |
timothyb0912/pylogit | pylogit/estimation.py | https://github.com/timothyb0912/pylogit/blob/f83b0fd6debaa7358d87c3828428f6d4ead71357/pylogit/estimation.py#L331-L351 | def convenience_calc_log_likelihood(self, params):
"""
Calculates the log-likelihood for this model and dataset.
"""
shapes, intercepts, betas = self.convenience_split_params(params)
args = [betas,
self.design,
self.alt_id_vector,
self.rows_to_obs,
self.rows_to_alts,
self.choice_vector,
self.utility_transform]
kwargs = {"intercept_params": intercepts,
"shape_params": shapes,
"ridge": self.ridge,
"weights": self.weights}
log_likelihood = cc.calc_log_likelihood(*args, **kwargs)
return log_likelihood | [
"def",
"convenience_calc_log_likelihood",
"(",
"self",
",",
"params",
")",
":",
"shapes",
",",
"intercepts",
",",
"betas",
"=",
"self",
".",
"convenience_split_params",
"(",
"params",
")",
"args",
"=",
"[",
"betas",
",",
"self",
".",
"design",
",",
"self",
... | Calculates the log-likelihood for this model and dataset. | [
"Calculates",
"the",
"log",
"-",
"likelihood",
"for",
"this",
"model",
"and",
"dataset",
"."
] | python | train |
tensorpack/tensorpack | examples/Saliency/saliency-maps.py | https://github.com/tensorpack/tensorpack/blob/d7a13cb74c9066bc791d7aafc3b744b60ee79a9f/examples/Saliency/saliency-maps.py#L40-L52 | def saliency_map(output, input, name="saliency_map"):
"""
Produce a saliency map as described in the paper:
`Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
<https://arxiv.org/abs/1312.6034>`_.
The saliency map is the gradient of the max element in output w.r.t input.
Returns:
tf.Tensor: the saliency map. Has the same shape as input.
"""
max_outp = tf.reduce_max(output, 1)
saliency_op = tf.gradients(max_outp, input)[:][0]
return tf.identity(saliency_op, name=name) | [
"def",
"saliency_map",
"(",
"output",
",",
"input",
",",
"name",
"=",
"\"saliency_map\"",
")",
":",
"max_outp",
"=",
"tf",
".",
"reduce_max",
"(",
"output",
",",
"1",
")",
"saliency_op",
"=",
"tf",
".",
"gradients",
"(",
"max_outp",
",",
"input",
")",
... | Produce a saliency map as described in the paper:
`Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
<https://arxiv.org/abs/1312.6034>`_.
The saliency map is the gradient of the max element in output w.r.t input.
Returns:
tf.Tensor: the saliency map. Has the same shape as input. | [
"Produce",
"a",
"saliency",
"map",
"as",
"described",
"in",
"the",
"paper",
":",
"Deep",
"Inside",
"Convolutional",
"Networks",
":",
"Visualising",
"Image",
"Classification",
"Models",
"and",
"Saliency",
"Maps",
"<https",
":",
"//",
"arxiv",
".",
"org",
"/",
... | python | train |
petebachant/PXL | pxl/io.py | https://github.com/petebachant/PXL/blob/d7d06cb74422e1ac0154741351fbecea080cfcc0/pxl/io.py#L70-L92 | def savehdf(filename, datadict, groupname="data", mode="a", metadata=None,
as_dataframe=False, append=False):
"""Save a dictionary of arrays to file--similar to how `scipy.io.savemat`
works. If `datadict` is a DataFrame, it will be converted automatically.
"""
if as_dataframe:
df = _pd.DataFrame(datadict)
df.to_hdf(filename, groupname)
else:
if isinstance(datadict, _pd.DataFrame):
datadict = datadict.to_dict("list")
with _h5py.File(filename, mode) as f:
for key, value in datadict.items():
if append:
try:
f[groupname + "/" + key] = np.append(f[groupname + "/" + key], value)
except KeyError:
f[groupname + "/" + key] = value
else:
f[groupname + "/" + key] = value
if metadata:
for key, value in metadata.items():
f[groupname].attrs[key] = value | [
"def",
"savehdf",
"(",
"filename",
",",
"datadict",
",",
"groupname",
"=",
"\"data\"",
",",
"mode",
"=",
"\"a\"",
",",
"metadata",
"=",
"None",
",",
"as_dataframe",
"=",
"False",
",",
"append",
"=",
"False",
")",
":",
"if",
"as_dataframe",
":",
"df",
"... | Save a dictionary of arrays to file--similar to how `scipy.io.savemat`
works. If `datadict` is a DataFrame, it will be converted automatically. | [
"Save",
"a",
"dictionary",
"of",
"arrays",
"to",
"file",
"--",
"similar",
"to",
"how",
"scipy",
".",
"io",
".",
"savemat",
"works",
".",
"If",
"datadict",
"is",
"a",
"DataFrame",
"it",
"will",
"be",
"converted",
"automatically",
"."
] | python | train |
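The `append` branch in `savehdf` above uses a try/except `KeyError` append-or-create pattern against HDF5 keys. The same control flow, sketched with a plain dict standing in for the `h5py` file handle so it runs without HDF5 installed:

```python
def append_or_create(store, key, values):
    """Append to store[key] if it exists, otherwise create it --
    the same try/except KeyError flow as savehdf's append branch."""
    try:
        store[key] = store[key] + list(values)
    except KeyError:
        store[key] = list(values)

store = {}
append_or_create(store, "data/x", [1, 2])
append_or_create(store, "data/x", [3])
append_or_create(store, "data/y", [9])
```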
klavinslab/coral | coral/analysis/_structure/nupack.py | https://github.com/klavinslab/coral/blob/17f59591211562a59a051f474cd6cecba4829df9/coral/analysis/_structure/nupack.py#L548-L572 | def count(self, strand, pseudo=False):
'''Enumerates the total number of secondary structures over the
structural ensemble Ω(π). Runs the \'count\' command.
:param strand: Strand on which to run count. Strands must be either
coral.DNA or coral.RNA).
:type strand: list
:param pseudo: Enable pseudoknots.
:type pseudo: bool
:returns: The count of the number of structures in the structural
ensemble.
:rtype: int
'''
# Set up command flags
if pseudo:
cmd_args = ['-pseudo']
else:
cmd_args = []
# Set up the input file and run the command
stdout = self._run('count', cmd_args, [str(strand)]).split('\n')
# Return the count
return int(float(stdout[-2])) | [
"def",
"count",
"(",
"self",
",",
"strand",
",",
"pseudo",
"=",
"False",
")",
":",
"# Set up command flags",
"if",
"pseudo",
":",
"cmd_args",
"=",
"[",
"'-pseudo'",
"]",
"else",
":",
"cmd_args",
"=",
"[",
"]",
"# Set up the input file and run the command",
"st... | Enumerates the total number of secondary structures over the
structural ensemble Ω(π). Runs the \'count\' command.
:param strand: Strand on which to run count. Strands must be either
coral.DNA or coral.RNA).
:type strand: list
:param pseudo: Enable pseudoknots.
:type pseudo: bool
:returns: The count of the number of structures in the structural
ensemble.
:rtype: int | [
"Enumerates",
"the",
"total",
"number",
"of",
"secondary",
"structures",
"over",
"the",
"structural",
"ensemble",
"Ω",
"(",
"π",
")",
".",
"Runs",
"the",
"\\",
"count",
"\\",
"command",
"."
] | python | train |
ronhanson/python-tbx | tbx/process.py | https://github.com/ronhanson/python-tbx/blob/87f72ae0cadecafbcd144f1e930181fba77f6b83/tbx/process.py#L38-L51 | def synchronized(lock):
"""
Synchronization decorator; provide thread-safe locking on a function
http://code.activestate.com/recipes/465057/
"""
def wrap(f):
def synchronize(*args, **kw):
lock.acquire()
try:
return f(*args, **kw)
finally:
lock.release()
return synchronize
return wrap | [
"def",
"synchronized",
"(",
"lock",
")",
":",
"def",
"wrap",
"(",
"f",
")",
":",
"def",
"synchronize",
"(",
"*",
"args",
",",
"*",
"*",
"kw",
")",
":",
"lock",
".",
"acquire",
"(",
")",
"try",
":",
"return",
"f",
"(",
"*",
"args",
",",
"*",
"... | Synchronization decorator; provide thread-safe locking on a function
http://code.activestate.com/recipes/465057/ | [
"Synchronization",
"decorator",
";",
"provide",
"thread",
"-",
"safe",
"locking",
"on",
"a",
"function",
"http",
":",
"//",
"code",
".",
"activestate",
".",
"com",
"/",
"recipes",
"/",
"465057",
"/"
] | python | train |
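The `synchronized` decorator above is the classic lock-wrapping recipe. A usage sketch, rewritten with a `with` block (equivalent to the original's try/finally acquire/release) and with `functools.wraps` added as an embellishment not present in the original:

```python
import functools
import threading

def synchronized(lock):
    """Decorator: run the wrapped function while holding `lock`."""
    def wrap(f):
        @functools.wraps(f)
        def synchronize(*args, **kw):
            with lock:  # acquire/release with exception safety
                return f(*args, **kw)
        return synchronize
    return wrap

counter_lock = threading.Lock()
counter = {"n": 0}

@synchronized(counter_lock)
def bump():
    counter["n"] += 1

threads = [threading.Thread(target=bump) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```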
saltstack/salt | salt/modules/nspawn.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/nspawn.py#L333-L351 | def pid(name):
'''
Returns the PID of a container
name
Container name
CLI Example:
.. code-block:: bash
salt myminion nspawn.pid arch1
'''
try:
return int(info(name).get('PID'))
except (TypeError, ValueError) as exc:
raise CommandExecutionError(
'Unable to get PID for container \'{0}\': {1}'.format(name, exc)
) | [
"def",
"pid",
"(",
"name",
")",
":",
"try",
":",
"return",
"int",
"(",
"info",
"(",
"name",
")",
".",
"get",
"(",
"'PID'",
")",
")",
"except",
"(",
"TypeError",
",",
"ValueError",
")",
"as",
"exc",
":",
"raise",
"CommandExecutionError",
"(",
"'Unable... | Returns the PID of a container
name
Container name
CLI Example:
.. code-block:: bash
salt myminion nspawn.pid arch1 | [
"Returns",
"the",
"PID",
"of",
"a",
"container"
] | python | train |
pyviz/holoviews | holoviews/ipython/preprocessors.py | https://github.com/pyviz/holoviews/blob/ae0dd2f3de448b0ca5e9065aabd6ef8d84c7e655/holoviews/ipython/preprocessors.py#L75-L83 | def strip_magics(source):
"""
Given the source of a cell, filter out all cell and line magics.
"""
filtered=[]
for line in source.splitlines():
if not line.startswith('%') or line.startswith('%%'):
filtered.append(line)
return '\n'.join(filtered) | [
"def",
"strip_magics",
"(",
"source",
")",
":",
"filtered",
"=",
"[",
"]",
"for",
"line",
"in",
"source",
".",
"splitlines",
"(",
")",
":",
"if",
"not",
"line",
".",
"startswith",
"(",
"'%'",
")",
"or",
"line",
".",
"startswith",
"(",
"'%%'",
")",
... | Given the source of a cell, filter out all cell and line magics. | [
"Given",
"the",
"source",
"of",
"a",
"cell",
"filter",
"out",
"all",
"cell",
"and",
"line",
"magics",
"."
] | python | train |
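Note what the condition above actually encodes: a line magic such as `%matplotlib inline` is dropped, but a cell magic starting with `%%` is re-admitted by the second clause. A quick check of that behavior:

```python
def strip_magics(source):
    """Drop lines starting with a single '%' (line magics); keep
    ordinary code and '%%' cell-magic lines."""
    filtered = []
    for line in source.splitlines():
        if not line.startswith('%') or line.startswith('%%'):
            filtered.append(line)
    return '\n'.join(filtered)

cell = "%matplotlib inline\n%%capture\nx = 1\nprint(x)"
result = strip_magics(cell)
```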
apache/incubator-mxnet | example/ssd/detect/detector.py | https://github.com/apache/incubator-mxnet/blob/1af29e9c060a4c7d60eeaacba32afdb9a7775ba7/example/ssd/detect/detector.py#L120-L142 | def im_detect(self, im_list, root_dir=None, extension=None, show_timer=False):
"""
wrapper for detecting multiple images
Parameters:
----------
im_list : list of str
image path or list of image paths
root_dir : str
directory of input images, optional if image path already
has full directory information
extension : str
image extension, eg. ".jpg", optional
Returns:
----------
list of detection results in format [det0, det1...], det is in
format np.array([id, score, xmin, ymin, xmax, ymax]...)
"""
test_db = TestDB(im_list, root_dir=root_dir, extension=extension)
test_iter = DetIter(test_db, 1, self.data_shape, self.mean_pixels,
is_train=False)
return self.detect_iter(test_iter, show_timer) | [
"def",
"im_detect",
"(",
"self",
",",
"im_list",
",",
"root_dir",
"=",
"None",
",",
"extension",
"=",
"None",
",",
"show_timer",
"=",
"False",
")",
":",
"test_db",
"=",
"TestDB",
"(",
"im_list",
",",
"root_dir",
"=",
"root_dir",
",",
"extension",
"=",
... | wrapper for detecting multiple images
Parameters:
----------
im_list : list of str
image path or list of image paths
root_dir : str
directory of input images, optional if image path already
has full directory information
extension : str
image extension, eg. ".jpg", optional
Returns:
----------
list of detection results in format [det0, det1...], det is in
format np.array([id, score, xmin, ymin, xmax, ymax]...) | [
"wrapper",
"for",
"detecting",
"multiple",
"images"
] | python | train |
zeroSteiner/smoke-zephyr | smoke_zephyr/argparse_types.py | https://github.com/zeroSteiner/smoke-zephyr/blob/a6d2498aeacc72ee52e7806f783a4d83d537ffb2/smoke_zephyr/argparse_types.py#L78-L84 | def bin_b64_type(arg):
"""An argparse type representing binary data encoded in base64."""
try:
arg = base64.standard_b64decode(arg)
except (binascii.Error, TypeError):
raise argparse.ArgumentTypeError("{0} is invalid base64 data".format(repr(arg)))
return arg | [
"def",
"bin_b64_type",
"(",
"arg",
")",
":",
"try",
":",
"arg",
"=",
"base64",
".",
"standard_b64decode",
"(",
"arg",
")",
"except",
"(",
"binascii",
".",
"Error",
",",
"TypeError",
")",
":",
"raise",
"argparse",
".",
"ArgumentTypeError",
"(",
"\"{0} is in... | An argparse type representing binary data encoded in base64. | [
"An",
"argparse",
"type",
"representing",
"binary",
"data",
"encoded",
"in",
"base64",
"."
] | python | train |
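The base64 `argparse` type above can be wired into a parser as follows (the `--payload` argument name is illustrative):

```python
import argparse
import base64
import binascii

def bin_b64_type(arg):
    """argparse type: decode a base64 argument to bytes, raising
    ArgumentTypeError so argparse prints a clean usage error."""
    try:
        return base64.standard_b64decode(arg)
    except (binascii.Error, TypeError):
        raise argparse.ArgumentTypeError(
            "{0} is invalid base64 data".format(repr(arg)))

parser = argparse.ArgumentParser()
parser.add_argument("--payload", type=bin_b64_type)
encoded = base64.standard_b64encode(b"hello").decode()
args = parser.parse_args(["--payload", encoded])
```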
haikuginger/beekeeper | beekeeper/variables.py | https://github.com/haikuginger/beekeeper/blob/b647d3add0b407ec5dc3a2a39c4f6dac31243b18/beekeeper/variables.py#L17-L29 | def merge(var1, var2):
"""
Take two copies of a variable and reconcile them. var1 is assumed
to be the higher-level variable, and so will be overridden by var2
where such becomes necessary.
"""
out = {}
out['value'] = var2.get('value', var1.get('value', None))
out['mimetype'] = var2.get('mimetype', var1.get('mimetype', None))
out['types'] = var2.get('types') + [x for x in var1.get('types') if x not in var2.get('types')]
out['optional'] = var2.get('optional', var1.get('optional', False))
out['filename'] = var2.get('filename', var2.get('filename', None))
return Variable(var1.default_type, **out) | [
"def",
"merge",
"(",
"var1",
",",
"var2",
")",
":",
"out",
"=",
"{",
"}",
"out",
"[",
"'value'",
"]",
"=",
"var2",
".",
"get",
"(",
"'value'",
",",
"var1",
".",
"get",
"(",
"'value'",
",",
"None",
")",
")",
"out",
"[",
"'mimetype'",
"]",
"=",
... | Take two copies of a variable and reconcile them. var1 is assumed
to be the higher-level variable, and so will be overridden by var2
where such becomes necessary. | [
"Take",
"two",
"copies",
"of",
"a",
"variable",
"and",
"reconcile",
"them",
".",
"var1",
"is",
"assumed",
"to",
"be",
"the",
"higher",
"-",
"level",
"variable",
"and",
"so",
"will",
"be",
"overridden",
"by",
"var2",
"where",
"such",
"becomes",
"necessary",... | python | train |
Alignak-monitoring/alignak | alignak/objects/schedulingitem.py | https://github.com/Alignak-monitoring/alignak/blob/f3c145207e83159b799d3714e4241399c7740a64/alignak/objects/schedulingitem.py#L3233-L3241 | def unset_impact_state(self):
"""Unset impact, only if impact state change is set in configuration
:return: None
"""
cls = self.__class__
if cls.enable_problem_impacts_states_change and not self.state_changed_since_impact:
self.state = self.state_before_impact
self.state_id = self.state_id_before_impact | [
"def",
"unset_impact_state",
"(",
"self",
")",
":",
"cls",
"=",
"self",
".",
"__class__",
"if",
"cls",
".",
"enable_problem_impacts_states_change",
"and",
"not",
"self",
".",
"state_changed_since_impact",
":",
"self",
".",
"state",
"=",
"self",
".",
"state_befor... | Unset impact, only if impact state change is set in configuration
:return: None | [
"Unset",
"impact",
"only",
"if",
"impact",
"state",
"change",
"is",
"set",
"in",
"configuration"
] | python | train |
DataDog/integrations-core | tokumx/datadog_checks/tokumx/vendor/pymongo/database.py | https://github.com/DataDog/integrations-core/blob/ebd41c873cf9f97a8c51bf9459bc6a7536af8acd/tokumx/datadog_checks/tokumx/vendor/pymongo/database.py#L544-L569 | def collection_names(self, include_system_collections=True):
"""Get a list of all the collection names in this database.
:Parameters:
- `include_system_collections` (optional): if ``False`` list
will not include system collections (e.g ``system.indexes``)
"""
with self.__client._socket_for_reads(
ReadPreference.PRIMARY) as (sock_info, slave_okay):
wire_version = sock_info.max_wire_version
results = self._list_collections(sock_info, slave_okay)
# Iterating the cursor to completion may require a socket for getmore.
# Ensure we do that outside the "with" block so we don't require more
# than one socket at a time.
names = [result["name"] for result in results]
if wire_version <= 2:
# MongoDB 2.4 and older return index namespaces and collection
# namespaces prefixed with the database name.
names = [n[len(self.__name) + 1:] for n in names
if n.startswith(self.__name + ".") and "$" not in n]
if not include_system_collections:
names = [name for name in names if not name.startswith("system.")]
return names | [
"def",
"collection_names",
"(",
"self",
",",
"include_system_collections",
"=",
"True",
")",
":",
"with",
"self",
".",
"__client",
".",
"_socket_for_reads",
"(",
"ReadPreference",
".",
"PRIMARY",
")",
"as",
"(",
"sock_info",
",",
"slave_okay",
")",
":",
"wire_... | Get a list of all the collection names in this database.
:Parameters:
- `include_system_collections` (optional): if ``False`` list
will not include system collections (e.g ``system.indexes``) | [
"Get",
"a",
"list",
"of",
"all",
"the",
"collection",
"names",
"in",
"this",
"database",
"."
] | python | train |
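For wire versions <= 2, the code above strips the database prefix from each namespace and drops `$`-containing index namespaces. That filtering step in isolation:

```python
def strip_db_prefix(names, db_name):
    """Keep namespaces belonging to db_name, drop $-containing index
    namespaces, and strip the 'db.' prefix, as in the
    wire_version <= 2 branch above."""
    prefix = db_name + "."
    return [n[len(prefix):] for n in names
            if n.startswith(prefix) and "$" not in n]

names = ["test.users", "test.system.indexes", "test.users.$_id_", "other.stuff"]
collections = strip_db_prefix(names, "test")
```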
crm416/semantic | semantic/dates.py | https://github.com/crm416/semantic/blob/46deb8fefb3ea58aad2fedc8d0d62f3ee254b8fe/semantic/dates.py#L476-L495 | def extractDates(inp, tz=None, now=None):
"""Extract semantic date information from an input string.
This is a convenience method which would only be used if
you'd rather not initialize a DateService object.
Args:
inp (str): The input string to be parsed.
tz: An optional Pytz timezone. All datetime objects returned will
be relative to the supplied timezone, or timezone-less if none
is supplied.
now: The time to which all returned datetime objects should be
relative. For example, if the text is "In 5 hours", the
datetime returned will be now + datetime.timedelta(hours=5).
Uses datetime.datetime.now() if none is supplied.
Returns:
A list of datetime objects extracted from input.
"""
service = DateService(tz=tz, now=now)
return service.extractDates(inp) | [
"def",
"extractDates",
"(",
"inp",
",",
"tz",
"=",
"None",
",",
"now",
"=",
"None",
")",
":",
"service",
"=",
"DateService",
"(",
"tz",
"=",
"tz",
",",
"now",
"=",
"now",
")",
"return",
"service",
".",
"extractDates",
"(",
"inp",
")"
] | Extract semantic date information from an input string.
This is a convenience method which would only be used if
you'd rather not initialize a DateService object.
Args:
inp (str): The input string to be parsed.
tz: An optional Pytz timezone. All datetime objects returned will
be relative to the supplied timezone, or timezone-less if none
is supplied.
now: The time to which all returned datetime objects should be
relative. For example, if the text is "In 5 hours", the
datetime returned will be now + datetime.timedelta(hours=5).
Uses datetime.datetime.now() if none is supplied.
Returns:
A list of datetime objects extracted from input. | [
"Extract",
"semantic",
"date",
"information",
"from",
"an",
"input",
"string",
".",
"This",
"is",
"a",
"convenience",
"method",
"which",
"would",
"only",
"be",
"used",
"if",
"you",
"d",
"rather",
"not",
"initialize",
"a",
"DateService",
"object",
"."
] | python | train |
Scifabric/pybossa-client | pbclient/__init__.py | https://github.com/Scifabric/pybossa-client/blob/998d7cb0207ff5030dc800f0c2577c5692316c2c/pbclient/__init__.py#L218-L233 | def find_project(**kwargs):
"""Return a list with matching project arguments.
:param kwargs: PYBOSSA Project members
:rtype: list
:returns: A list of projects that match the kwargs
"""
try:
res = _pybossa_req('get', 'project', params=kwargs)
if type(res).__name__ == 'list':
return [Project(project) for project in res]
else:
return res
except: # pragma: no cover
raise | [
"def",
"find_project",
"(",
"*",
"*",
"kwargs",
")",
":",
"try",
":",
"res",
"=",
"_pybossa_req",
"(",
"'get'",
",",
"'project'",
",",
"params",
"=",
"kwargs",
")",
"if",
"type",
"(",
"res",
")",
".",
"__name__",
"==",
"'list'",
":",
"return",
"[",
... | Return a list with matching project arguments.
:param kwargs: PYBOSSA Project members
:rtype: list
:returns: A list of projects that match the kwargs | [
"Return",
"a",
"list",
"with",
"matching",
"project",
"arguments",
"."
] | python | valid |
etingof/pysnmp | pysnmp/smi/mibs/SNMPv2-SMI.py | https://github.com/etingof/pysnmp/blob/cde062dd42f67dfd2d7686286a322d40e9c3a4b7/pysnmp/smi/mibs/SNMPv2-SMI.py#L993-L1055 | def writeUndo(self, varBind, **context):
"""Finalize Managed Object Instance modification.
Implements the third (unsuccessful) step of the multi-step workflow
of the SNMP SET command processing (:RFC:`1905#section-4.2.5`).
The goal of the third phase is to roll the Managed Object Instance
being modified back into its previous state. The system transitions
into the *undo* state whenever any of the simultaneously modified
Managed Objects Instances fail on the *commit* state transitioning.
The role of this object in the MIB tree is non-terminal. It does not
access the actual Managed Object Instance, but just traverses one level
down the MIB tree and hands off the query to the underlying objects.
Parameters
----------
varBind: :py:class:`~pysnmp.smi.rfc1902.ObjectType` object representing
new Managed Object Instance value to set
Other Parameters
----------------
\*\*context:
Query parameters:
* `cbFun` (callable) - user-supplied callable that is invoked to
pass the new value of the Managed Object Instance or an error.
Notes
-----
The callback functions (e.g. `cbFun`) have the same signature as this
method where `varBind` contains the new Managed Object Instance value.
In case of an error, the `error` key in the `context` dict will contain
an exception object.
"""
name, val = varBind
(debug.logger & debug.FLAG_INS and
debug.logger('%s: writeUndo(%s, %r)' % (self, name, val)))
cbFun = context['cbFun']
instances = context['instances'].setdefault(self.name, {self.ST_CREATE: {}, self.ST_DESTROY: {}})
idx = context['idx']
if idx in instances[self.ST_CREATE]:
self.createUndo(varBind, **context)
return
if idx in instances[self.ST_DESTROY]:
self.destroyUndo(varBind, **context)
return
try:
node = self.getBranch(name, **context)
except (error.NoSuchInstanceError, error.NoSuchObjectError) as exc:
cbFun(varBind, **dict(context, error=exc))
else:
node.writeUndo(varBind, **context) | [
"def",
"writeUndo",
"(",
"self",
",",
"varBind",
",",
"*",
"*",
"context",
")",
":",
"name",
",",
"val",
"=",
"varBind",
"(",
"debug",
".",
"logger",
"&",
"debug",
".",
"FLAG_INS",
"and",
"debug",
".",
"logger",
"(",
"'%s: writeUndo(%s, %r)'",
"%",
"("... | Finalize Managed Object Instance modification.
Implements the third (unsuccessful) step of the multi-step workflow
of the SNMP SET command processing (:RFC:`1905#section-4.2.5`).
The goal of the third phase is to roll the Managed Object Instance
being modified back into its previous state. The system transitions
into the *undo* state whenever any of the simultaneously modified
Managed Objects Instances fail on the *commit* state transitioning.
The role of this object in the MIB tree is non-terminal. It does not
access the actual Managed Object Instance, but just traverses one level
down the MIB tree and hands off the query to the underlying objects.
Parameters
----------
varBind: :py:class:`~pysnmp.smi.rfc1902.ObjectType` object representing
new Managed Object Instance value to set
Other Parameters
----------------
\*\*context:
Query parameters:
* `cbFun` (callable) - user-supplied callable that is invoked to
pass the new value of the Managed Object Instance or an error.
Notes
-----
The callback functions (e.g. `cbFun`) have the same signature as this
method where `varBind` contains the new Managed Object Instance value.
In case of an error, the `error` key in the `context` dict will contain
an exception object. | [
"Finalize",
"Managed",
"Object",
"Instance",
"modification",
"."
] | python | train |
Azure/blobxfer | blobxfer/models/crypto.py | https://github.com/Azure/blobxfer/blob/3eccbe7530cc6a20ab2d30f9e034b6f021817f34/blobxfer/models/crypto.py#L395-L405 | def initialize_hmac(self):
# type: (EncryptionMetadata) -> hmac.HMAC
"""Initialize an hmac from a signing key if it exists
:param EncryptionMetadata self: this
:rtype: hmac.HMAC or None
:return: hmac
"""
if self._signkey is not None:
return hmac.new(self._signkey, digestmod=hashlib.sha256)
else:
return None | [
"def",
"initialize_hmac",
"(",
"self",
")",
":",
"# type: (EncryptionMetadata) -> hmac.HMAC",
"if",
"self",
".",
"_signkey",
"is",
"not",
"None",
":",
"return",
"hmac",
".",
"new",
"(",
"self",
".",
"_signkey",
",",
"digestmod",
"=",
"hashlib",
".",
"sha256",
... | Initialize an hmac from a signing key if it exists
:param EncryptionMetadata self: this
:rtype: hmac.HMAC or None
:return: hmac | [
"Initialize",
"an",
"hmac",
"from",
"a",
"signing",
"key",
"if",
"it",
"exists",
":",
"param",
"EncryptionMetadata",
"self",
":",
"this",
":",
"rtype",
":",
"hmac",
".",
"HMAC",
"or",
"None",
":",
"return",
":",
"hmac"
] | python | train |
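The guarded HMAC construction above uses only the standard library; a self-contained sketch that also shows incremental updates:

```python
import hashlib
import hmac

def make_hmac(signkey):
    """Return a SHA-256 HMAC for signkey, or None when no signing key
    is configured -- the same guard as initialize_hmac above."""
    if signkey is not None:
        return hmac.new(signkey, digestmod=hashlib.sha256)
    return None

mac = make_hmac(b"secret-signing-key")
mac.update(b"chunk one")   # feed data incrementally, e.g. per block
mac.update(b"chunk two")
digest = mac.hexdigest()

# Incremental updates are equivalent to hashing the concatenation.
expected = hmac.new(b"secret-signing-key", b"chunk onechunk two",
                    hashlib.sha256).hexdigest()
```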
pantsbuild/pants | src/python/pants/util/desktop.py | https://github.com/pantsbuild/pants/blob/b72e650da0df685824ffdcc71988b8c282d0962d/src/python/pants/util/desktop.py#L40-L51 | def ui_open(*files):
"""Attempts to open the given files using the preferred desktop viewer or editor.
:raises :class:`OpenError`: if there is a problem opening any of the files.
"""
if files:
osname = get_os_name()
opener = _OPENER_BY_OS.get(osname)
if opener:
opener(files)
else:
raise OpenError('Open currently not supported for ' + osname) | [
"def",
"ui_open",
"(",
"*",
"files",
")",
":",
"if",
"files",
":",
"osname",
"=",
"get_os_name",
"(",
")",
"opener",
"=",
"_OPENER_BY_OS",
".",
"get",
"(",
"osname",
")",
"if",
"opener",
":",
"opener",
"(",
"files",
")",
"else",
":",
"raise",
"OpenEr... | Attempts to open the given files using the preferred desktop viewer or editor.
:raises :class:`OpenError`: if there is a problem opening any of the files. | [
"Attempts",
"to",
"open",
"the",
"given",
"files",
"using",
"the",
"preferred",
"desktop",
"viewer",
"or",
"editor",
"."
] | python | train |
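The opener lookup above is a dispatch-table pattern: map a platform name to a handler and fail loudly when none is registered. A generic sketch with stub handlers (the command names here are illustrative, not pants' actual openers):

```python
class OpenError(Exception):
    pass

def _open_linux(files):
    return ("xdg-open", list(files))  # stub: real code would spawn this

def _open_darwin(files):
    return ("open", list(files))

_OPENER_BY_OS = {"linux": _open_linux, "darwin": _open_darwin}

def ui_open(osname, *files):
    """Dispatch to the opener registered for osname, mirroring the
    structure of pants' ui_open (handlers here are stubs)."""
    if not files:
        return None
    opener = _OPENER_BY_OS.get(osname)
    if opener is None:
        raise OpenError('Open currently not supported for ' + osname)
    return opener(files)

cmd = ui_open("linux", "report.html")
```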
python-openxml/python-docx | docx/oxml/shared.py | https://github.com/python-openxml/python-docx/blob/6756f6cd145511d3eb6d1d188beea391b1ddfd53/docx/oxml/shared.py#L24-L29 | def new(cls, nsptagname, val):
"""
Return a new ``CT_DecimalNumber`` element having tagname *nsptagname*
and ``val`` attribute set to *val*.
"""
return OxmlElement(nsptagname, attrs={qn('w:val'): str(val)}) | [
"def",
"new",
"(",
"cls",
",",
"nsptagname",
",",
"val",
")",
":",
"return",
"OxmlElement",
"(",
"nsptagname",
",",
"attrs",
"=",
"{",
"qn",
"(",
"'w:val'",
")",
":",
"str",
"(",
"val",
")",
"}",
")"
] | Return a new ``CT_DecimalNumber`` element having tagname *nsptagname*
and ``val`` attribute set to *val*. | [
"Return",
"a",
"new",
"CT_DecimalNumber",
"element",
"having",
"tagname",
"*",
"nsptagname",
"*",
"and",
"val",
"attribute",
"set",
"to",
"*",
"val",
"*",
"."
] | python | train |
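The `w:val` attribute above is a namespace-qualified name produced by python-docx's `qn()` helper. The same idea with the standard library, using a simplified `qn` stand-in (the WordprocessingML namespace URI is an assumption taken from the OOXML spec):

```python
import xml.etree.ElementTree as ET

# WordprocessingML main namespace (assumed from the OOXML spec).
W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

def qn(tag):
    """Expand 'w:local' into Clark notation '{namespace}local',
    a simplified stand-in for python-docx's qn()."""
    prefix, local = tag.split(":")
    assert prefix == "w", "only the w: prefix is handled in this sketch"
    return "{%s}%s" % (W_NS, local)

def new_decimal_number(nsptagname, val):
    # Mirror of CT_DecimalNumber.new: qualified tag, stringified value.
    return ET.Element(qn(nsptagname), {qn("w:val"): str(val)})

elem = new_decimal_number("w:ilvl", 3)
```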
census-instrumentation/opencensus-python | opencensus/metrics/export/gauge.py | https://github.com/census-instrumentation/opencensus-python/blob/992b223f7e34c5dcb65922b7d5c827e7a1351e7d/opencensus/metrics/export/gauge.py#L265-L277 | def remove_time_series(self, label_values):
"""Remove the time series for specific label values.
:type label_values: list(:class:`LabelValue`)
:param label_values: Label values of the time series to remove.
"""
if label_values is None:
raise ValueError
if any(lv is None for lv in label_values):
raise ValueError
if len(label_values) != self._len_label_keys:
raise ValueError
self._remove_time_series(label_values) | [
"def",
"remove_time_series",
"(",
"self",
",",
"label_values",
")",
":",
"if",
"label_values",
"is",
"None",
":",
"raise",
"ValueError",
"if",
"any",
"(",
"lv",
"is",
"None",
"for",
"lv",
"in",
"label_values",
")",
":",
"raise",
"ValueError",
"if",
"len",
... | Remove the time series for specific label values.
:type label_values: list(:class:`LabelValue`)
:param label_values: Label values of the time series to remove. | [
"Remove",
"the",
"time",
"series",
"for",
"specific",
"label",
"values",
"."
] | python | train |
awesmubarak/markdown_strings | markdown_strings/__init__.py | https://github.com/awesmubarak/markdown_strings/blob/569e225e7a8f23469efe8df244d3d3fd0e8c3b3e/markdown_strings/__init__.py#L141-L155 | def image(alt_text, link_url, title=""):
"""Return an inline image.
Keyword arguments:
title -- Specify the title of the image, as seen when hovering over it.
>>> image("This is an image", "https://tinyurl.com/bright-green-tree")
'![This is an image](https://tinyurl.com/bright-green-tree)'
>>> image("This is an image", "https://tinyurl.com/bright-green-tree", "tree")
'![This is an image](https://tinyurl.com/bright-green-tree "tree")'
"""
image_string = "![" + esc_format(alt_text) + "](" + link_url
if title:
image_string += ' "' + esc_format(title) + '"'
image_string += ")"
return image_string | [
"def",
"image",
"(",
"alt_text",
",",
"link_url",
",",
"title",
"=",
"\"\"",
")",
":",
"image_string",
"=",
"\"\"",
"if",
"title",
":",
"image_string",
"+=",
"' \"'"... | Return an inline image.
Keyword arguments:
title -- Specify the title of the image, as seen when hovering over it.
>>> image("This is an image", "https://tinyurl.com/bright-green-tree")
'![This is an image](https://tinyurl.com/bright-green-tree)'
>>> image("This is an image", "https://tinyurl.com/bright-green-tree", "tree")
'![This is an image](https://tinyurl.com/bright-green-tree "tree")' | [
"Return",
"an",
"inline",
"image",
"."
] | python | train |
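A standalone version of this builder, with a pass-through `esc_format` standing in for markdown_strings' real escaping helper (which is not shown in this record):

```python
def esc_format(text):
    """Pass-through stand-in for markdown_strings' escaping helper;
    the real one escapes markdown control characters."""
    return str(text)

def image(alt_text, link_url, title=""):
    # Build ![alt](url) or ![alt](url "title").
    image_string = "![" + esc_format(alt_text) + "](" + link_url
    if title:
        image_string += ' "' + esc_format(title) + '"'
    image_string += ")"
    return image_string

plain = image("a tree", "https://example.com/tree.png")
titled = image("a tree", "https://example.com/tree.png", "tree")
```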
sci-bots/dmf-device-ui | dmf_device_ui/canvas.py | https://github.com/sci-bots/dmf-device-ui/blob/05b480683c9fa43f91ce5a58de2fa90cdf363fc8/dmf_device_ui/canvas.py#L786-L810 | def render_registration(self):
'''
Render pinned points on video frame as red rectangle.
'''
surface = self.get_surface()
if self.canvas is None or self.df_canvas_corners.shape[0] == 0:
return surface
corners = self.df_canvas_corners.copy()
corners['w'] = 1
transform = self.canvas.shapes_to_canvas_transform
canvas_corners = corners.values.dot(transform.T.values).T
points_x = canvas_corners[0]
points_y = canvas_corners[1]
cairo_context = cairo.Context(surface)
cairo_context.move_to(points_x[0], points_y[0])
for x, y in zip(points_x[1:], points_y[1:]):
cairo_context.line_to(x, y)
cairo_context.line_to(points_x[0], points_y[0])
cairo_context.set_source_rgb(1, 0, 0)
cairo_context.stroke()
return surface | [
"def",
"render_registration",
"(",
"self",
")",
":",
"surface",
"=",
"self",
".",
"get_surface",
"(",
")",
"if",
"self",
".",
"canvas",
"is",
"None",
"or",
"self",
".",
"df_canvas_corners",
".",
"shape",
"[",
"0",
"]",
"==",
"0",
":",
"return",
"surfac... | Render pinned points on video frame as red rectangle. | [
"Render",
"pinned",
"points",
"on",
"video",
"frame",
"as",
"red",
"rectangle",
"."
] | python | train |
Esri/ArcREST | src/arcrest/common/geometry.py | https://github.com/Esri/ArcREST/blob/ab240fde2b0200f61d4a5f6df033516e53f2f416/src/arcrest/common/geometry.py#L48-L53 | def asDictionary(self):
"""returns the wkid id for use in json calls"""
if self._wkid == None and self._wkt is not None:
return {"wkt": self._wkt}
else:
return {"wkid": self._wkid} | [
"def",
"asDictionary",
"(",
"self",
")",
":",
"if",
"self",
".",
"_wkid",
"==",
"None",
"and",
"self",
".",
"_wkt",
"is",
"not",
"None",
":",
"return",
"{",
"\"wkt\"",
":",
"self",
".",
"_wkt",
"}",
"else",
":",
"return",
"{",
"\"wkid\"",
":",
"sel... | returns the wkid id for use in json calls | [
"returns",
"the",
"wkid",
"id",
"for",
"use",
"in",
"json",
"calls"
] | python | train |
vertexproject/synapse | synapse/lib/reflect.py | https://github.com/vertexproject/synapse/blob/22e67c5a8f6d7caddbcf34b39ab1bd2d6c4a6e0b/synapse/lib/reflect.py#L11-L23 | def getClsNames(item):
'''
Return a list of "fully qualified" class names for an instance.
Example:
for name in getClsNames(foo):
print(name)
'''
mro = inspect.getmro(item.__class__)
mro = [c for c in mro if c not in clsskip]
return ['%s.%s' % (c.__module__, c.__name__) for c in mro] | [
"def",
"getClsNames",
"(",
"item",
")",
":",
"mro",
"=",
"inspect",
".",
"getmro",
"(",
"item",
".",
"__class__",
")",
"mro",
"=",
"[",
"c",
"for",
"c",
"in",
"mro",
"if",
"c",
"not",
"in",
"clsskip",
"]",
"return",
"[",
"'%s.%s'",
"%",
"(",
"c",... | Return a list of "fully qualified" class names for an instance.
Example:
for name in getClsNames(foo):
print(name) | [
"Return",
"a",
"list",
"of",
"fully",
"qualified",
"class",
"names",
"for",
"an",
"instance",
"."
] | python | train |
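The MRO walk above yields module-qualified class names. A self-contained version, with a local skip set standing in for synapse's module-level `clsskip`:

```python
import inspect

clsskip = {object}  # local stand-in for synapse's skip set

def get_cls_names(item):
    """Return 'module.ClassName' for each class in the instance's
    MRO, skipping entries in clsskip."""
    mro = inspect.getmro(item.__class__)
    mro = [c for c in mro if c not in clsskip]
    return ['%s.%s' % (c.__module__, c.__name__) for c in mro]

class Base:
    pass

class Child(Base):
    pass

names = get_cls_names(Child())
```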
horazont/aioxmpp | aioxmpp/service.py | https://github.com/horazont/aioxmpp/blob/22a68e5e1d23f2a4dee470092adbd4672f9ef061/aioxmpp/service.py#L1449-L1463 | def is_inbound_message_filter(cb):
"""
Return true if `cb` has been decorated with :func:`inbound_message_filter`.
"""
try:
handlers = get_magic_attr(cb)
except AttributeError:
return False
hs = HandlerSpec(
(_apply_inbound_message_filter, ())
)
return hs in handlers | [
"def",
"is_inbound_message_filter",
"(",
"cb",
")",
":",
"try",
":",
"handlers",
"=",
"get_magic_attr",
"(",
"cb",
")",
"except",
"AttributeError",
":",
"return",
"False",
"hs",
"=",
"HandlerSpec",
"(",
"(",
"_apply_inbound_message_filter",
",",
"(",
")",
")",... | Return true if `cb` has been decorated with :func:`inbound_message_filter`. | [
"Return",
"true",
"if",
"cb",
"has",
"been",
"decorated",
"with",
":",
"func",
":",
"inbound_message_filter",
"."
] | python | train |
twilio/twilio-python | twilio/rest/preview/deployed_devices/fleet/key.py | https://github.com/twilio/twilio-python/blob/c867895f55dcc29f522e6e8b8868d0d18483132f/twilio/rest/preview/deployed_devices/fleet/key.py#L342-L356 | def _proxy(self):
"""
Generate an instance context for the instance, the context is capable of
performing various actions. All instance actions are proxied to the context
:returns: KeyContext for this KeyInstance
:rtype: twilio.rest.preview.deployed_devices.fleet.key.KeyContext
"""
if self._context is None:
self._context = KeyContext(
self._version,
fleet_sid=self._solution['fleet_sid'],
sid=self._solution['sid'],
)
return self._context | [
"def",
"_proxy",
"(",
"self",
")",
":",
"if",
"self",
".",
"_context",
"is",
"None",
":",
"self",
".",
"_context",
"=",
"KeyContext",
"(",
"self",
".",
"_version",
",",
"fleet_sid",
"=",
"self",
".",
"_solution",
"[",
"'fleet_sid'",
"]",
",",
"sid",
... | Generate an instance context for the instance, the context is capable of
performing various actions. All instance actions are proxied to the context
:returns: KeyContext for this KeyInstance
:rtype: twilio.rest.preview.deployed_devices.fleet.key.KeyContext | [
"Generate",
"an",
"instance",
"context",
"for",
"the",
"instance",
"the",
"context",
"is",
"capable",
"of",
"performing",
"various",
"actions",
".",
"All",
"instance",
"actions",
"are",
"proxied",
"to",
"the",
"context"
] | python | train |
numenta/nupic | src/nupic/algorithms/utils.py | https://github.com/numenta/nupic/blob/5922fafffdccc8812e72b3324965ad2f7d4bbdad/src/nupic/algorithms/utils.py#L26-L71 | def importAndRunFunction(
path,
moduleName,
funcName,
**keywords
):
"""
Run a named function specified by a filesystem path, module name
and function name.
Returns the value returned by the imported function.
Use this when access is needed to code that has
not been added to a package accessible from the ordinary Python
path. Encapsulates the multiple lines usually needed to
safely manipulate and restore the Python path.
Parameters
----------
path: filesystem path
Path to the directory where the desired module is stored.
This will be used to temporarily augment the Python path.
moduleName: basestring
Name of the module, without trailing extension, where the desired
function is stored. This module should be in the directory specified
with path.
funcName: basestring
Name of the function to import and call.
keywords:
Keyword arguments to be passed to the imported function.
"""
import sys
originalPath = sys.path
try:
augmentedPath = [path] + sys.path
sys.path = augmentedPath
func = getattr(__import__(moduleName, fromlist=[funcName]), funcName)
sys.path = originalPath
except:
# Restore the original path in case of an exception.
sys.path = originalPath
raise
return func(**keywords) | [
"def",
"importAndRunFunction",
"(",
"path",
",",
"moduleName",
",",
"funcName",
",",
"*",
"*",
"keywords",
")",
":",
"import",
"sys",
"originalPath",
"=",
"sys",
".",
"path",
"try",
":",
"augmentedPath",
"=",
"[",
"path",
"]",
"+",
"sys",
".",
"path",
... | Run a named function specified by a filesystem path, module name
and function name.
Returns the value returned by the imported function.
Use this when access is needed to code that has
not been added to a package accessible from the ordinary Python
path. Encapsulates the multiple lines usually needed to
safely manipulate and restore the Python path.
Parameters
----------
path: filesystem path
Path to the directory where the desired module is stored.
This will be used to temporarily augment the Python path.
moduleName: basestring
Name of the module, without trailing extension, where the desired
function is stored. This module should be in the directory specified
with path.
funcName: basestring
Name of the function to import and call.
keywords:
Keyword arguments to be passed to the imported function. | [
"Run",
"a",
"named",
"function",
"specified",
"by",
"a",
"filesystem",
"path",
"module",
"name",
"and",
"function",
"name",
"."
] | python | valid |
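The core of `importAndRunFunction` above is the save/augment/restore dance around `sys.path`. A minimal sketch, using `try/finally` as an equivalent spelling of the row's `try/except` + re-raise; `run_with_augmented_path` and its callable are hypothetical stand-ins for the imported function.

```python
import sys


def run_with_augmented_path(path, func):
    """Temporarily prepend `path` to sys.path while calling `func`.

    Mirrors the save/augment/restore pattern of importAndRunFunction:
    the original path is restored whether or not `func` raises.
    """
    original_path = sys.path
    try:
        sys.path = [path] + sys.path  # temporary augmentation
        return func()
    finally:
        sys.path = original_path  # always restored, even on error


before = list(sys.path)
result = run_with_augmented_path("/tmp/example_modules", lambda: len(sys.path))
```

After the call, `sys.path` is back to its original contents, while the callable observed the augmented path.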
openstack/networking-cisco | networking_cisco/apps/saf/server/dfa_server.py | https://github.com/openstack/networking-cisco/blob/aa58a30aec25b86f9aa5952b0863045975debfa9/networking_cisco/apps/saf/server/dfa_server.py#L697-L701 | def _get_segmentation_id(self, netid, segid, source):
"""Allocate segmentation id. """
return self.seg_drvr.allocate_segmentation_id(netid, seg_id=segid,
source=source) | [
"def",
"_get_segmentation_id",
"(",
"self",
",",
"netid",
",",
"segid",
",",
"source",
")",
":",
"return",
"self",
".",
"seg_drvr",
".",
"allocate_segmentation_id",
"(",
"netid",
",",
"seg_id",
"=",
"segid",
",",
"source",
"=",
"source",
")"
] | Allocate segmentation id. | [
"Allocate",
"segmentation",
"id",
"."
] | python | train |
azraq27/neural | neural/alignment.py | https://github.com/azraq27/neural/blob/fe91bfeecbf73ad99708cf5dca66cb61fcd529f5/neural/alignment.py#L120-L135 | def convert_coord(coord_from,matrix_file,base_to_aligned=True):
'''Takes an XYZ array (in DICOM coordinates) and uses the matrix file produced by 3dAllineate to transform it. By default, the 3dAllineate
matrix transforms from base to aligned space; to get the inverse transform set ``base_to_aligned`` to ``False``'''
with open(matrix_file) as f:
try:
values = [float(y) for y in ' '.join([x for x in f.readlines() if x.strip()[0]!='#']).strip().split()]
except:
nl.notify('Error reading values from matrix file %s' % matrix_file, level=nl.level.error)
return False
if len(values)!=12:
nl.notify('Error: found %d values in matrix file %s (expecting 12)' % (len(values),matrix_file), level=nl.level.error)
return False
matrix = np.vstack((np.array(values).reshape((3,-1)),[0,0,0,1]))
if not base_to_aligned:
matrix = np.linalg.inv(matrix)
return np.dot(matrix,list(coord_from) + [1])[:3] | [
"def",
"convert_coord",
"(",
"coord_from",
",",
"matrix_file",
",",
"base_to_aligned",
"=",
"True",
")",
":",
"with",
"open",
"(",
"matrix_file",
")",
"as",
"f",
":",
"try",
":",
"values",
"=",
"[",
"float",
"(",
"y",
")",
"for",
"y",
"in",
"' '",
".... | Takes an XYZ array (in DICOM coordinates) and uses the matrix file produced by 3dAllineate to transform it. By default, the 3dAllineate
matrix transforms from base to aligned space; to get the inverse transform set ``base_to_aligned`` to ``False`` | [
"Takes",
"an",
"XYZ",
"array",
"(",
"in",
"DICOM",
"coordinates",
")",
"and",
"uses",
"the",
"matrix",
"file",
"produced",
"by",
"3dAllineate",
"to",
"transform",
"it",
".",
"By",
"default",
"the",
"3dAllineate",
"matrix",
"transforms",
"from",
"base",
"to",... | python | train |
symphonyoss/python-symphony | symphony/Pod/streams.py | https://github.com/symphonyoss/python-symphony/blob/b939f35fbda461183ec0c01790c754f89a295be0/symphony/Pod/streams.py#L87-L93 | def promote_owner(self, stream_id, user_id):
''' promote user to owner in stream '''
req_hook = 'pod/v1/room/' + stream_id + '/membership/promoteOwner'
req_args = '{ "id": %s }' % user_id
status_code, response = self.__rest__.POST_query(req_hook, req_args)
self.logger.debug('%s: %s' % (status_code, response))
return status_code, response | [
"def",
"promote_owner",
"(",
"self",
",",
"stream_id",
",",
"user_id",
")",
":",
"req_hook",
"=",
"'pod/v1/room/'",
"+",
"stream_id",
"+",
"'/membership/promoteOwner'",
"req_args",
"=",
"'{ \"id\": %s }'",
"%",
"user_id",
"status_code",
",",
"response",
"=",
"self... | promote user to owner in stream | [
"promote",
"user",
"to",
"owner",
"in",
"stream"
] | python | train |
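The row above builds its request body by string interpolation (`'{ "id": %s }' % user_id`), which only yields valid JSON for numeric ids. A hedged alternative sketch — `build_promote_payload` is a hypothetical helper, not part of the symphony client — uses `json.dumps` to produce the same payload safely.

```python
import json


def build_promote_payload(user_id):
    """Serialize the promoteOwner request body.

    json.dumps handles quoting and escaping, so the payload stays valid
    JSON even for string ids, unlike manual %-interpolation.
    """
    return json.dumps({"id": user_id})


payload = build_promote_payload(42)
```

Round-tripping through `json.loads` confirms the body carries the intended structure.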
mbarkhau/tinypng | tinypng/api.py | https://github.com/mbarkhau/tinypng/blob/58e33cd5b46b26aab530a184b70856f7e936d79a/tinypng/api.py#L116-L122 | def shrink_data(in_data, api_key=None):
"""Shrink binary data of a png
returns (api_info, shrunk_data)
"""
info = get_shrink_data_info(in_data, api_key)
return info, get_shrunk_data(info) | [
"def",
"shrink_data",
"(",
"in_data",
",",
"api_key",
"=",
"None",
")",
":",
"info",
"=",
"get_shrink_data_info",
"(",
"in_data",
",",
"api_key",
")",
"return",
"info",
",",
"get_shrunk_data",
"(",
"info",
")"
] | Shrink binary data of a png
returns (api_info, shrunk_data) | [
"Shrink",
"binary",
"data",
"of",
"a",
"png"
] | python | train |
DataDog/integrations-core | datadog_checks_dev/datadog_checks/dev/tooling/commands/dep.py | https://github.com/DataDog/integrations-core/blob/ebd41c873cf9f97a8c51bf9459bc6a7536af8acd/datadog_checks_dev/datadog_checks/dev/tooling/commands/dep.py#L150-L168 | def freeze():
"""Combine all dependencies for the Agent's static environment."""
echo_waiting('Verifying collected packages...')
catalog, errors = make_catalog()
if errors:
for error in errors:
echo_failure(error)
abort()
static_file = get_agent_requirements()
echo_info('Static file: {}'.format(static_file))
pre_packages = list(read_packages(static_file))
catalog.write_packages(static_file)
post_packages = list(read_packages(static_file))
display_package_changes(pre_packages, post_packages) | [
"def",
"freeze",
"(",
")",
":",
"echo_waiting",
"(",
"'Verifying collected packages...'",
")",
"catalog",
",",
"errors",
"=",
"make_catalog",
"(",
")",
"if",
"errors",
":",
"for",
"error",
"in",
"errors",
":",
"echo_failure",
"(",
"error",
")",
"abort",
"(",... | Combine all dependencies for the Agent's static environment. | [
"Combine",
"all",
"dependencies",
"for",
"the",
"Agent",
"s",
"static",
"environment",
"."
] | python | train |
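`freeze()` above snapshots the requirements file before and after writing, then reports what changed. A minimal sketch of that bookkeeping under stated assumptions — `package_changes` is a hypothetical stand-in for `display_package_changes`, which in the real tool also tracks version bumps.

```python
def package_changes(pre, post):
    """Return (added, removed) pinned packages between two freeze passes.

    Sketches the before/after comparison freeze() performs around
    catalog.write_packages.
    """
    pre_set, post_set = set(pre), set(post)
    return sorted(post_set - pre_set), sorted(pre_set - post_set)


added, removed = package_changes(
    ["requests==2.0", "six==1.10"],
    ["requests==2.0", "urllib3==1.24"],
)
```

Set difference keeps unchanged pins out of the report, leaving only the packages that appeared or disappeared.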
quantopian/pyfolio | pyfolio/capacity.py | https://github.com/quantopian/pyfolio/blob/712716ab0cdebbec9fabb25eea3bf40e4354749d/pyfolio/capacity.py#L10-L42 | def daily_txns_with_bar_data(transactions, market_data):
"""
Sums the absolute value of shares traded in each name on each day.
Adds columns containing the closing price and total daily volume for
each day-ticker combination.
Parameters
----------
transactions : pd.DataFrame
Prices and amounts of executed trades. One row per trade.
- See full explanation in tears.create_full_tear_sheet
market_data : pd.Panel
Contains "volume" and "price" DataFrames for the tickers
in the passed positions DataFrames
Returns
-------
txn_daily : pd.DataFrame
Daily totals for transacted shares in each traded name.
price and volume columns for close price and daily volume for
the corresponding ticker, respectively.
"""
transactions.index.name = 'date'
txn_daily = pd.DataFrame(transactions.assign(
amount=abs(transactions.amount)).groupby(
['symbol', pd.TimeGrouper('D')]).sum()['amount'])
txn_daily['price'] = market_data['price'].unstack()
txn_daily['volume'] = market_data['volume'].unstack()
txn_daily = txn_daily.reset_index().set_index('date')
return txn_daily | [
"def",
"daily_txns_with_bar_data",
"(",
"transactions",
",",
"market_data",
")",
":",
"transactions",
".",
"index",
".",
"name",
"=",
"'date'",
"txn_daily",
"=",
"pd",
".",
"DataFrame",
"(",
"transactions",
".",
"assign",
"(",
"amount",
"=",
"abs",
"(",
"tra... | Sums the absolute value of shares traded in each name on each day.
Adds columns containing the closing price and total daily volume for
each day-ticker combination.
Parameters
----------
transactions : pd.DataFrame
Prices and amounts of executed trades. One row per trade.
- See full explanation in tears.create_full_tear_sheet
market_data : pd.Panel
Contains "volume" and "price" DataFrames for the tickers
in the passed positions DataFrames
Returns
-------
txn_daily : pd.DataFrame
Daily totals for transacted shares in each traded name.
price and volume columns for close price and daily volume for
the corresponding ticker, respectively. | [
"Sums",
"the",
"absolute",
"value",
"of",
"shares",
"traded",
"in",
"each",
"name",
"on",
"each",
"day",
".",
"Adds",
"columns",
"containing",
"the",
"closing",
"price",
"and",
"total",
"daily",
"volume",
"for",
"each",
"day",
"-",
"ticker",
"combination",
... | python | valid |
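The grouping step in `daily_txns_with_bar_data` — absolute shares summed per (symbol, day) — can be sketched on a tiny hypothetical trade log. Note the row's `pd.TimeGrouper` is the deprecated spelling; `pd.Grouper(freq="D")` below is the modern equivalent.

```python
import pandas as pd

# Hypothetical trade log: two fills in MSFT and one in AAPL, same day.
transactions = pd.DataFrame(
    {"symbol": ["MSFT", "MSFT", "AAPL"], "amount": [100, -40, 25]},
    index=pd.to_datetime(["2024-01-02", "2024-01-02", "2024-01-02"]),
)
transactions.index.name = "date"

# Sum of absolute shares per (symbol, day), as in daily_txns_with_bar_data.
txn_daily = (
    transactions.assign(amount=transactions.amount.abs())
    .groupby(["symbol", pd.Grouper(freq="D")])["amount"]
    .sum()
)
```

Taking the absolute value first means buys and sells accumulate as total shares traded rather than netting out.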
CalebBell/fluids | fluids/control_valve.py | https://github.com/CalebBell/fluids/blob/57f556752e039f1d3e5a822f408c184783db2828/fluids/control_valve.py#L1183-L1473 | def control_valve_noise_g_2011(m, P1, P2, T1, rho, gamma, MW, Kv,
d, Di, t_pipe, Fd, FL, FLP=None, FP=None,
rho_pipe=7800.0, c_pipe=5000.0,
P_air=101325.0, rho_air=1.2, c_air=343.0,
An=-3.8, Stp=0.2, T2=None, beta=0.93):
r'''Calculates the sound made by a gas flowing through a control valve
according to the standard IEC 60534-8-3 (2011) [1]_.
Parameters
----------
m : float
Mass flow rate of gas through the control valve, [kg/s]
P1 : float
Inlet pressure of the gas before valves and reducers [Pa]
P2 : float
Outlet pressure of the gas after valves and reducers [Pa]
T1 : float
Inlet gas temperature, [K]
rho : float
Density of the gas at the inlet [kg/m^3]
gamma : float
Specific heat capacity ratio [-]
MW : float
Molecular weight of the gas [g/mol]
Kv : float
Metric Kv valve flow coefficient (flow rate of water at a pressure drop
of 1 bar) [m^3/hr]
d : float
Diameter of the valve [m]
Di : float
Internal diameter of the pipe before and after the valve [m]
t_pipe : float
Wall thickness of the pipe after the valve, [m]
Fd : float
Valve style modifier (0.1 to 1; varies tremendously depending on the
type of valve and position; do not use the default at all!) [-]
FL : float
Liquid pressure recovery factor of a control valve without attached
fittings (normally 0.8-0.9 at full open and decreasing as opened
further to below 0.5; use default very cautiously!) [-]
FLP : float, optional
Combined liquid pressure recovery factor with piping geometry factor,
for a control valve with attached fittings [-]
FP : float, optional
Piping geometry factor [-]
rho_pipe : float, optional
Density of the pipe wall material at flowing conditions, [kg/m^3]
c_pipe : float, optional
Speed of sound of the pipe wall material at flowing conditions, [m/s]
P_air : float, optional
Pressure of the air surrounding the valve and pipe wall, [Pa]
rho_air : float, optional
Density of the air surrounding the valve and pipe wall, [kg/m^3]
c_air : float, optional
Speed of sound of the air surrounding the valve and pipe wall, [m/s]
An : float, optional
Valve correction factor for acoustic efficiency
Stp : float, optional
Strouhal number at the peak `fp`; between 0.1 and 0.3 typically, [-]
T2 : float, optional
Outlet gas temperature; assumed `T1` if not provided (a PH flash
should be used to obtain this if possible), [K]
beta : float, optional
Valve outlet / expander inlet contraction coefficient, [-]
Returns
-------
LpAe1m : float
A-weighted sound pressure level 1 m from the pipe wall, 1 m distance
downstream of the valve (at reference sound pressure level 2E-5), [dBA]
Notes
-----
For formulas see [1]_. This takes on the order of 100 us to compute.
For values of `An`, see [1]_.
This model was checked against six examples in [1]_; they match to all
given decimals.
Several additional formulas are given for multihole trim valves,
control valves with two or more fixed area stages, and multipath,
multistage trim valves.
Examples
--------
>>> control_valve_noise_g_2011(m=2.22, P1=1E6, P2=7.2E5, T1=450, rho=5.3,
... gamma=1.22, MW=19.8, Kv=77.85, d=0.1, Di=0.2031, FL=None, FLP=0.792,
... FP=0.98, Fd=0.296, t_pipe=0.008, rho_pipe=8000.0, c_pipe=5000.0,
... rho_air=1.293, c_air=343.0, An=-3.8, Stp=0.2)
91.67702674629604
References
----------
.. [1] IEC 60534-8-3 : Industrial-Process Control Valves - Part 8-3: Noise
Considerations - Control Valve Aerodynamic Noise Prediction Method.
'''
k = gamma # alias
C = Kv_to_Cv(Kv)
N14 = 4.6E-3
N16 = 4.89E4
fs = 1.0 # structural loss factor reference frequency, Hz
P_air_std = 101325.0
if T2 is None:
T2 = T1
x = (P1 - P2)/P1
# FLP/FP when fittings attached
FL_term = FLP/FP if FP is not None else FL
P_vc = P1*(1.0 - x/FL_term**2)
x_vcc = 1.0 - (2.0/(k + 1.0))**(k/(k - 1.0)) # mostly matches
xc = FL_term**2*x_vcc
alpha = (1.0 - x_vcc)/(1.0 - xc)
xB = 1.0 - 1.0/alpha*(1.0/k)**((k/(k - 1.0)))
xCE = 1.0 - 1.0/(22.0*alpha)
# Regime determination check - should be ordered or won't work
assert xc < x_vcc
assert x_vcc < xB
assert xB < xCE
regime = None
if x <= xc:
regime = 1
elif xc < x <= x_vcc:
regime = 2
elif x_vcc < x <= xB:
regime = 3
elif xB < x <= xCE:
regime = 4
else:
regime = 5
# print('regime', regime)
Dj = N14*Fd*(C*(FL_term))**0.5
Mj5 = (2.0/(k - 1.0)*( 22.0**((k-1.0)/k) - 1.0 ))**0.5
if regime == 1:
Mvc = ( (2.0/(k-1.0)) *((1.0 - x/FL_term**2)**((1.0 - k)/k) - 1.0) )**0.5 # Not match
elif regime in (2, 3, 4):
Mj = ( (2.0/(k-1.0))*((1.0/(alpha*(1.0-x)))**((k - 1.0)/k) - 1.0) )**0.5 # Not match
Mj = min(Mj, Mj5)
elif regime == 5:
pass
if regime == 1:
Tvc = T1*(1.0 - x/(FL_term)**2)**((k - 1.0)/k)
cvc = (k*P1/rho*(1 - x/(FL_term)**2)**((k-1.0)/k))**0.5
Wm = 0.5*m*(Mvc*cvc)**2
else:
Tvcc = 2.0*T1/(k + 1.0)
cvcc = (2.0*k*P1/(k+1.0)/rho)**0.5
Wm = 0.5*m*cvcc*cvcc
# print('Wm', Wm)
if regime == 1:
fp = Stp*Mvc*cvc/Dj
elif regime in (2, 3):
fp = Stp*Mj*cvcc/Dj
elif regime == 4:
fp = 1.4*Stp*cvcc/Dj/(Mj*Mj - 1.0)**0.5
elif regime == 5:
fp = 1.4*Stp*cvcc/Dj/(Mj5*Mj5 - 1.0)**0.5
# print('fp', fp)
if regime == 1:
eta = 10.0**An*FL_term**2*(Mvc)**3
elif regime == 2:
eta = 10.0**An*x/x_vcc*Mj**(6.6*FL_term*FL_term)
elif regime == 3:
eta = 10.0**An*Mj**(6.6*FL_term*FL_term)
elif regime == 4:
eta = 0.5*10.0**An*Mj*Mj*(2.0**0.5)**(6.6*FL_term*FL_term)
elif regime == 5:
eta = 0.5*10.0**An*Mj5*Mj5*(2.0**0.5)**(6.6*FL_term*FL_term)
# print('eta', eta)
Wa = eta*Wm
rho2 = rho*(P2/P1)
# Speed of sound
c2 = (k*R*T2/(MW/1000.))**0.5
Mo = 4.0*m/(pi*d*d*rho2*c2)
M2 = 4.0*m/(pi*Di*Di*rho2*c2)
# print('M2', M2)
Lg = 16.0*log10(1.0/(1.0 - min(M2, 0.3))) # dB
if M2 > 0.3:
Up = 4.0*m/(pi*rho2*Di*Di)
UR = Up*Di*Di/(beta*d*d)
WmR = 0.5*m*UR*UR*( (1.0 - d*d/(Di*Di))**2 + 0.2)
fpR = Stp*UR/d
MR = UR/c2
# Value listed in appendix here is wrong, "based on another
# earlier standard. Calculation thereon is wrong". Assumed
# correct, matches spreadsheet to three decimals.
eta_R = 10**An*MR**3
WaR = eta_R*WmR
L_piR = 10.0*log10((3.2E9)*WaR*rho2*c2/(Di*Di)) + Lg
# print('Up', Up)
# print('UR', UR)
# print('WmR', WmR)
# print('fpR', fpR)
# print('MR', MR)
# print('eta_R', eta_R, eta_R/8.8E-4)
# print('WaR', WaR)
# print('L_piR', L_piR)
L_pi = 10.0*log10((3.2E9)*Wa*rho2*c2/(Di*Di)) + Lg
# print('L_pi', L_pi)
fr = c_pipe/(pi*Di)
fo = 0.25*fr*(c2/c_air)
fg = 3**0.5*c_air**2/(pi*t_pipe*c_pipe)
if d > 0.15:
dTL = 0.0
elif 0.05 <= d <= 0.15:
dTL = -16660.0*d**3 + 6370.0*d**2 - 813.0*d + 35.8
else:
dTL = 9.0
# print(dTL, 'dTL')
P_air_ratio = P_air/P_air_std
LpAe1m_sum = 0.0
LPis = []
LPIRs = []
L_pe1m_fis = []
for fi, A_weight in zip(fis_l_2015, A_weights_l_2015):
# This gets adjusted when Ma > 0.3
fi_turb_ratio = fi/fp
t1 = 1.0 + (0.5*fi_turb_ratio)**2.5
t2 = 1.0 + (0.5/fi_turb_ratio)**1.7
# Formula forgot to use log10, but log10 is needed for the numbers
Lpif = L_pi - 8.0 - 10.0*log10(t1*t2)
# print(Lpif, 'Lpif')
LPis.append(Lpif)
if M2 > 0.3:
fiR_turb_ratio = fi/fpR
t1 = 1.0 + (0.5*fiR_turb_ratio)**2.5
t2 = 1.0 + (0.5/fiR_turb_ratio)**1.7
# Again, log10 is missing
LpiRf = L_piR - 8.0 - 10.0*log10(t1*t2)
LPIRs.append(LpiRf)
LpiSf = 10.0*log10( 10**(0.1*Lpif) + 10.0**(0.1*LpiRf) )
if fi < fo:
Gx = (fo/fr)**(2.0/3.0)*(fi/fo)**4.0
if fo < fg:
Gy = (fo/fg)
else:
Gy = 1.0
else:
if fi < fr:
Gx = (fi/fr)**0.5
else:
Gx = 1.0
if fi < fg:
Gy = fi/fg
else:
Gy = 1.0
eta_s = (0.01/fi)**0.5
# print('eta_s', eta_s)
# up to eta_s is good
den = (rho2*c2 + 2.0*pi*t_pipe*fi*rho_pipe*eta_s)/(415.0*Gy) + 1.0
TL_fi = 10.0*log10(8.25E-7*(c2/(t_pipe*fi))**2*Gx/den*P_air_ratio) - dTL
# Formula forgot to use log10, but log10 is needed for the numbers
if M2 > 0.3:
term = LpiSf
else:
term = Lpif
L_pe1m_fi = term + TL_fi - 10.0*log10((Di + 2.0*t_pipe + 2.0)/(Di + 2.0*t_pipe))
L_pe1m_fis.append(L_pe1m_fi)
# print(L_pe1m_fi)
LpAe1m_sum += 10.0**(0.1*(L_pe1m_fi + A_weight))
LpAe1m = 10.0*log10(LpAe1m_sum)
return LpAe1m | [
"def",
"control_valve_noise_g_2011",
"(",
"m",
",",
"P1",
",",
"P2",
",",
"T1",
",",
"rho",
",",
"gamma",
",",
"MW",
",",
"Kv",
",",
"d",
",",
"Di",
",",
"t_pipe",
",",
"Fd",
",",
"FL",
",",
"FLP",
"=",
"None",
",",
"FP",
"=",
"None",
",",
"r... | r'''Calculates the sound made by a gas flowing through a control valve
according to the standard IEC 60534-8-3 (2011) [1]_.
Parameters
----------
m : float
Mass flow rate of gas through the control valve, [kg/s]
P1 : float
Inlet pressure of the gas before valves and reducers [Pa]
P2 : float
Outlet pressure of the gas after valves and reducers [Pa]
T1 : float
Inlet gas temperature, [K]
rho : float
Density of the gas at the inlet [kg/m^3]
gamma : float
Specific heat capacity ratio [-]
MW : float
Molecular weight of the gas [g/mol]
Kv : float
Metric Kv valve flow coefficient (flow rate of water at a pressure drop
of 1 bar) [m^3/hr]
d : float
Diameter of the valve [m]
Di : float
Internal diameter of the pipe before and after the valve [m]
t_pipe : float
Wall thickness of the pipe after the valve, [m]
Fd : float
Valve style modifier (0.1 to 1; varies tremendously depending on the
type of valve and position; do not use the default at all!) [-]
FL : float
Liquid pressure recovery factor of a control valve without attached
fittings (normally 0.8-0.9 at full open and decreasing as opened
further to below 0.5; use default very cautiously!) [-]
FLP : float, optional
Combined liquid pressure recovery factor with piping geometry factor,
for a control valve with attached fittings [-]
FP : float, optional
Piping geometry factor [-]
rho_pipe : float, optional
Density of the pipe wall material at flowing conditions, [kg/m^3]
c_pipe : float, optional
Speed of sound of the pipe wall material at flowing conditions, [m/s]
P_air : float, optional
Pressure of the air surrounding the valve and pipe wall, [Pa]
rho_air : float, optional
Density of the air surrounding the valve and pipe wall, [kg/m^3]
c_air : float, optional
Speed of sound of the air surrounding the valve and pipe wall, [m/s]
An : float, optional
Valve correction factor for acoustic efficiency
Stp : float, optional
Strouhal number at the peak `fp`; between 0.1 and 0.3 typically, [-]
T2 : float, optional
Outlet gas temperature; assumed `T1` if not provided (a PH flash
should be used to obtain this if possible), [K]
beta : float, optional
Valve outlet / expander inlet contraction coefficient, [-]
Returns
-------
LpAe1m : float
A-weighted sound pressure level 1 m from the pipe wall, 1 m distance
downstream of the valve (at reference sound pressure level 2E-5), [dBA] | [
Notes
-----
For formulas see [1]_. This takes on the order of 100 us to compute.
For values of `An`, see [1]_.
This model was checked against six examples in [1]_; they match to all
given decimals.
Several additional formulas are given for multihole trim valves,
control valves with two or more fixed area stages, and multipath,
multistage trim valves.
Examples
--------
>>> control_valve_noise_g_2011(m=2.22, P1=1E6, P2=7.2E5, T1=450, rho=5.3,
... gamma=1.22, MW=19.8, Kv=77.85, d=0.1, Di=0.2031, FL=None, FLP=0.792,
... FP=0.98, Fd=0.296, t_pipe=0.008, rho_pipe=8000.0, c_pipe=5000.0,
... rho_air=1.293, c_air=343.0, An=-3.8, Stp=0.2)
91.67702674629604
References
----------
.. [1] IEC 60534-8-3 : Industrial-Process Control Valves - Part 8-3: Noise
Considerations - Control Valve Aerodynamic Noise Prediction Method. | [
"r",
"Calculates",
"the",
"sound",
"made",
"by",
"a",
"gas",
"flowing",
"through",
"a",
"control",
"valve",
"according",
"to",
"the",
"standard",
"IEC",
"60534",
"-",
"8",
"-",
"3",
"(",
"2011",
")",
"[",
"1",
"]",
"_",
"."
] | python | train |
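The final loop of `control_valve_noise_g_2011` combines per-band A-weighted levels by energy summation, `10*log10(sum(10**(L/10)))`. A self-contained sketch of just that summation:

```python
import math


def sum_band_levels(levels_db):
    """Combine per-band sound pressure levels into one overall level.

    This is the energy summation the standard's final loop performs
    over the A-weighted one-third-octave bands.
    """
    return 10.0 * math.log10(sum(10.0 ** (0.1 * L) for L in levels_db))


# Two equal 90 dB bands combine to roughly 93 dB (+3 dB), not 180 dB.
combined = sum_band_levels([90.0, 90.0])
```

Decibels are logarithmic, so doubling the acoustic energy adds about 3 dB rather than doubling the level.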