| nwo (string, 5-86 chars) | sha (string, 40 chars) | path (string, 4-189 chars) | language (1 class) | identifier (string, 1-94 chars) | parameters (string, 2-4.03k chars) | argument_list (1 class) | return_statement (string, 0-11.5k chars) | docstring (string, 1-33.2k chars) | docstring_summary (string, 0-5.15k chars) | docstring_tokens (list) | function (string, 34-151k chars) | function_tokens (list) | url (string, 90-278 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | build/mac/tweak_info_plist.py | python | _ConvertPlist | (source_plist, output_plist, fmt) | return subprocess.call(
    ['plutil', '-convert', fmt, '-o', output_plist, source_plist]) | Convert |source_plist| to |fmt| and save as |output_plist|. | Convert |source_plist| to |fmt| and save as |output_plist|. | [
"Convert",
"|source_plist|",
"to",
"|fmt|",
"and",
"save",
"as",
"|output_plist|",
"."
] | def _ConvertPlist(source_plist, output_plist, fmt):
  """Convert |source_plist| to |fmt| and save as |output_plist|."""
  return subprocess.call(
      ['plutil', '-convert', fmt, '-o', output_plist, source_plist]) | [
"def",
"_ConvertPlist",
"(",
"source_plist",
",",
"output_plist",
",",
"fmt",
")",
":",
"return",
"subprocess",
".",
"call",
"(",
"[",
"'plutil'",
",",
"'-convert'",
",",
"fmt",
",",
"'-o'",
",",
"output_plist",
",",
"source_plist",
"]",
")"
] | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/build/mac/tweak_info_plist.py#L34-L37 | |
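The `_ConvertPlist` row shells out to macOS's `plutil`. Where `plutil` is unavailable, the standard-library `plistlib` performs the same XML/binary property-list conversion; the sketch below is a portable stand-in (the name `convert_plist` and the demo dictionary are illustrative, not from the original file):

```python
import os
import plistlib
import tempfile

def convert_plist(source_plist, output_plist, fmt):
    """Convert source_plist to fmt ('xml1' or 'binary1') using plistlib."""
    formats = {"xml1": plistlib.FMT_XML, "binary1": plistlib.FMT_BINARY}
    with open(source_plist, "rb") as fp:
        data = plistlib.load(fp)  # input format is auto-detected
    with open(output_plist, "wb") as fp:
        plistlib.dump(data, fp, fmt=formats[fmt])
    return 0  # mirror subprocess.call's success exit status

# Round-trip demo: write an XML plist, convert it to binary, read it back.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "Info.plist")
dst = os.path.join(workdir, "Info.binary.plist")
with open(src, "wb") as fp:
    plistlib.dump({"CFBundleName": "Demo"}, fp, fmt=plistlib.FMT_XML)
status = convert_plist(src, dst, "binary1")
with open(dst, "rb") as fp:
    round_tripped = plistlib.load(fp)
```

Unlike the subprocess version, this raises a normal Python exception on a malformed plist instead of returning a nonzero exit status.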
domino-team/openwrt-cc | 8b181297c34d14d3ca521cc9f31430d561dbc688 | package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/xcodeproj_file.py | python | XCHierarchicalElement.Hashables | (self) | return hashables | Custom hashables for XCHierarchicalElements.
XCHierarchicalElements are special. Generally, their hashes shouldn't
change if the paths don't change. The normal XCObject implementation of
Hashables adds a hashable for each object, which means that if
the hierarchical structure changes (possibly due to changes caused when
TakeOverOnlyChild runs and encounters slight changes in the hierarchy),
the hashes will change. For example, if a project file initially contains
a/b/f1 and a/b becomes collapsed into a/b, f1 will have a single parent
a/b. If someone later adds a/f2 to the project file, a/b can no longer be
collapsed, and f1 winds up with parent b and grandparent a. That would
be sufficient to change f1's hash.
To counteract this problem, hashables for all XCHierarchicalElements except
for the main group (which has neither a name nor a path) are taken to be
just the set of path components. Because hashables are inherited from
parents, this provides assurance that a/b/f1 has the same set of hashables
whether its parent is b or a/b.
The main group is a special case. As it is permitted to have no name or
path, it is permitted to use the standard XCObject hash mechanism. This
is not considered a problem because there can be only one main group. | Custom hashables for XCHierarchicalElements. | [
"Custom",
"hashables",
"for",
"XCHierarchicalElements",
"."
] | def Hashables(self):
  """Custom hashables for XCHierarchicalElements.

  XCHierarchicalElements are special. Generally, their hashes shouldn't
  change if the paths don't change. The normal XCObject implementation of
  Hashables adds a hashable for each object, which means that if
  the hierarchical structure changes (possibly due to changes caused when
  TakeOverOnlyChild runs and encounters slight changes in the hierarchy),
  the hashes will change. For example, if a project file initially contains
  a/b/f1 and a/b becomes collapsed into a/b, f1 will have a single parent
  a/b. If someone later adds a/f2 to the project file, a/b can no longer be
  collapsed, and f1 winds up with parent b and grandparent a. That would
  be sufficient to change f1's hash.

  To counteract this problem, hashables for all XCHierarchicalElements except
  for the main group (which has neither a name nor a path) are taken to be
  just the set of path components. Because hashables are inherited from
  parents, this provides assurance that a/b/f1 has the same set of hashables
  whether its parent is b or a/b.

  The main group is a special case. As it is permitted to have no name or
  path, it is permitted to use the standard XCObject hash mechanism. This
  is not considered a problem because there can be only one main group.
  """

  if self == self.PBXProjectAncestor()._properties['mainGroup']:
    # super
    return XCObject.Hashables(self)

  hashables = []

  # Put the name in first, ensuring that if TakeOverOnlyChild collapses
  # children into a top-level group like "Source", the name always goes
  # into the list of hashables without interfering with path components.
  if 'name' in self._properties:
    # Make it less likely for people to manipulate hashes by following the
    # pattern of always pushing an object type value onto the list first.
    hashables.append(self.__class__.__name__ + '.name')
    hashables.append(self._properties['name'])

  # NOTE: This still has the problem that if an absolute path is encountered,
  # including paths with a sourceTree, they'll still inherit their parents'
  # hashables, even though the paths aren't relative to their parents. This
  # is not expected to be much of a problem in practice.
  path = self.PathFromSourceTreeAndPath()
  if path != None:
    components = path.split(posixpath.sep)
    for component in components:
      hashables.append(self.__class__.__name__ + '.path')
      hashables.append(component)

  hashables.extend(self._hashables)

  return hashables | [
"def",
"Hashables",
"(",
"self",
")",
":",
"if",
"self",
"==",
"self",
".",
"PBXProjectAncestor",
"(",
")",
".",
"_properties",
"[",
"'mainGroup'",
"]",
":",
"# super",
"return",
"XCObject",
".",
"Hashables",
"(",
"self",
")",
"hashables",
"=",
"[",
"]",... | https://github.com/domino-team/openwrt-cc/blob/8b181297c34d14d3ca521cc9f31430d561dbc688/package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/xcodeproj_file.py#L951-L1004 | |
mingchen/protobuf-ios | 0958df34558cd54cb7b6e6ca5c8855bf3d475046 | compiler/python/mox.py | python | MockMethod.InAnyOrder | (self, group_name="default") | return self._CheckAndCreateNewGroup(group_name, UnorderedGroup) | Move this method into a group of unordered calls.
A group of unordered calls must be defined together, and must be executed
in full before the next expected method can be called. There can be
multiple groups that are expected serially, if they are given
different group names. The same group name can be reused if there is a
standard method call, or a group with a different name, spliced between
usages.
Args:
group_name: the name of the unordered group.
Returns:
self | Move this method into a group of unordered calls. | [
"Move",
"this",
"method",
"into",
"a",
"group",
"of",
"unordered",
"calls",
"."
] | def InAnyOrder(self, group_name="default"):
  """Move this method into a group of unordered calls.

  A group of unordered calls must be defined together, and must be executed
  in full before the next expected method can be called. There can be
  multiple groups that are expected serially, if they are given
  different group names. The same group name can be reused if there is a
  standard method call, or a group with a different name, spliced between
  usages.

  Args:
    group_name: the name of the unordered group.

  Returns:
    self
  """
  return self._CheckAndCreateNewGroup(group_name, UnorderedGroup) | [
"def",
"InAnyOrder",
"(",
"self",
",",
"group_name",
"=",
"\"default\"",
")",
":",
"return",
"self",
".",
"_CheckAndCreateNewGroup",
"(",
"group_name",
",",
"UnorderedGroup",
")"
] | https://github.com/mingchen/protobuf-ios/blob/0958df34558cd54cb7b6e6ca5c8855bf3d475046/compiler/python/mox.py#L686-L702 | |
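The unordered-group contract described in the docstring (calls defined together, satisfied in any order, drained in full) can be modeled with a small pending list; this is a toy sketch, not mox's actual `UnorderedGroup` implementation:

```python
class ToyUnorderedGroup:
    """Expected calls that may arrive in any order but must all arrive."""
    def __init__(self, *expected):
        self.pending = list(expected)

    def record(self, call):
        # Reject calls that were never expected (or already consumed).
        if call not in self.pending:
            raise AssertionError("unexpected call: %r" % (call,))
        self.pending.remove(call)

    def satisfied(self):
        return not self.pending

group = ToyUnorderedGroup("open", "seek", "read")
group.record("read")  # arrival order differs from definition order
group.record("open")
group.record("seek")
```

Only once `satisfied()` is true would a real mock framework allow the next serially-expected call to proceed.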
MythTV/mythtv | d282a209cb8be85d036f85a62a8ec971b67d45f4 | mythtv/contrib/imports/mirobridge/mirobridge/mirobridge_interpreter_3_0_0.py | python | MiroInterpreter.do_quit | (self, line) | quit -- Quits Miro cli. | quit -- Quits Miro cli. | [
"quit",
"--",
"Quits",
"Miro",
"cli",
"."
] | def do_quit(self, line):
    """quit -- Quits Miro cli."""
    self.quit_flag = True | [
"def",
"do_quit",
"(",
"self",
",",
"line",
")",
":",
"self",
".",
"quit_flag",
"=",
"True"
] | https://github.com/MythTV/mythtv/blob/d282a209cb8be85d036f85a62a8ec971b67d45f4/mythtv/contrib/imports/mirobridge/mirobridge/mirobridge_interpreter_3_0_0.py#L148-L150 | ||
Xilinx/Vitis-AI | fc74d404563d9951b57245443c73bef389f3657f | tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/eager/context.py | python | Context.disable_run_metadata | (self) | Disables tracing of op execution via RunMetadata. | Disables tracing of op execution via RunMetadata. | [
"Disables",
"tracing",
"of",
"op",
"execution",
"via",
"RunMetadata",
"."
] | def disable_run_metadata(self):
    """Disables tracing of op execution via RunMetadata."""
    if not self._context_handle:
      return
    pywrap_tensorflow.TFE_ContextDisableRunMetadata(self._context_handle) | [
"def",
"disable_run_metadata",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"_context_handle",
":",
"return",
"pywrap_tensorflow",
".",
"TFE_ContextDisableRunMetadata",
"(",
"self",
".",
"_context_handle",
")"
] | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/eager/context.py#L1432-L1436 | ||
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib2to3/refactor.py | python | RefactoringTool.refactor_doctest | (self, block, lineno, indent, filename) | return block | Refactors one doctest.
A doctest is given as a block of lines, the first of which starts
with ">>>" (possibly indented), while the remaining lines start
with "..." (identically indented). | Refactors one doctest. | [
"Refactors",
"one",
"doctest",
"."
] | def refactor_doctest(self, block, lineno, indent, filename):
    """Refactors one doctest.

    A doctest is given as a block of lines, the first of which starts
    with ">>>" (possibly indented), while the remaining lines start
    with "..." (identically indented).
    """
    try:
        tree = self.parse_block(block, lineno, indent)
    except Exception as err:
        if self.logger.isEnabledFor(logging.DEBUG):
            for line in block:
                self.log_debug("Source: %s", line.rstrip(u"\n"))
        self.log_error("Can't parse docstring in %s line %s: %s: %s",
                       filename, lineno, err.__class__.__name__, err)
        return block
    if self.refactor_tree(tree, filename):
        new = unicode(tree).splitlines(True)
        # Undo the adjustment of the line numbers in wrap_toks() below.
        clipped, new = new[:lineno-1], new[lineno-1:]
        assert clipped == [u"\n"] * (lineno-1), clipped
        if not new[-1].endswith(u"\n"):
            new[-1] += u"\n"
        block = [indent + self.PS1 + new.pop(0)]
        if new:
            block += [indent + self.PS2 + line for line in new]
    return block | [
"def",
"refactor_doctest",
"(",
"self",
",",
"block",
",",
"lineno",
",",
"indent",
",",
"filename",
")",
":",
"try",
":",
"tree",
"=",
"self",
".",
"parse_block",
"(",
"block",
",",
"lineno",
",",
"indent",
")",
"except",
"Exception",
"as",
"err",
":"... | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib2to3/refactor.py#L595-L622 | |
CRYTEK/CRYENGINE | 232227c59a220cbbd311576f0fbeba7bb53b2a8c | Code/Tools/waf-1.7.13/waflib/Tools/c_config.py | python | define_cond | (self, key, val) | Conditionally define a name::
def configure(conf):
conf.define_cond('A', True)
# equivalent to:
# if val: conf.define('A', 1)
# else: conf.undefine('A')
:param key: define name
:type key: string
:param val: value
:type val: int or string | Conditionally define a name:: | [
"Conditionally",
"define",
"a",
"name",
"::"
] | def define_cond(self, key, val):
    """
    Conditionally define a name::

        def configure(conf):
            conf.define_cond('A', True)
            # equivalent to:
            # if val: conf.define('A', 1)
            # else: conf.undefine('A')

    :param key: define name
    :type key: string
    :param val: value
    :type val: int or string
    """
    assert key and isinstance(key, str)
    if val:
        self.define(key, 1)
    else:
        self.undefine(key) | [
"def",
"define_cond",
"(",
"self",
",",
"key",
",",
"val",
")",
":",
"assert",
"key",
"and",
"isinstance",
"(",
"key",
",",
"str",
")",
"if",
"val",
":",
"self",
".",
"define",
"(",
"key",
",",
"1",
")",
"else",
":",
"self",
".",
"undefine",
"(",... | https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Code/Tools/waf-1.7.13/waflib/Tools/c_config.py#L852-L872 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scikit-learn/py3/sklearn/feature_extraction/text.py | python | TfidfTransformer.fit | (self, X, y=None) | return self | Learn the idf vector (global term weights)
Parameters
----------
X : sparse matrix, [n_samples, n_features]
a matrix of term/token counts | Learn the idf vector (global term weights) | [
"Learn",
"the",
"idf",
"vector",
"(",
"global",
"term",
"weights",
")"
] | def fit(self, X, y=None):
    """Learn the idf vector (global term weights)

    Parameters
    ----------
    X : sparse matrix, [n_samples, n_features]
        a matrix of term/token counts
    """
    X = check_array(X, accept_sparse=('csr', 'csc'))
    if not sp.issparse(X):
        X = sp.csr_matrix(X)
    dtype = X.dtype if X.dtype in FLOAT_DTYPES else np.float64

    if self.use_idf:
        n_samples, n_features = X.shape
        df = _document_frequency(X)
        df = df.astype(dtype, **_astype_copy_false(df))

        # perform idf smoothing if required
        df += int(self.smooth_idf)
        n_samples += int(self.smooth_idf)

        # log+1 instead of log makes sure terms with zero idf don't get
        # suppressed entirely.
        idf = np.log(n_samples / df) + 1
        self._idf_diag = sp.diags(idf, offsets=0,
                                  shape=(n_features, n_features),
                                  format='csr',
                                  dtype=dtype)

    return self | [
"def",
"fit",
"(",
"self",
",",
"X",
",",
"y",
"=",
"None",
")",
":",
"X",
"=",
"check_array",
"(",
"X",
",",
"accept_sparse",
"=",
"(",
"'csr'",
",",
"'csc'",
")",
")",
"if",
"not",
"sp",
".",
"issparse",
"(",
"X",
")",
":",
"X",
"=",
"sp",
... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scikit-learn/py3/sklearn/feature_extraction/text.py#L1442-L1472 | |
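The smoothing in `fit` adds 1 to both the document count and each document frequency before taking the log, and the trailing `+ 1` keeps zero-idf terms from being suppressed entirely. The formula on its own, in plain Python (no scipy):

```python
import math

def smoothed_idf(doc_freqs, n_samples, smooth_idf=True):
    """idf = log((n + s) / (df + s)) + 1, where s is 1 when smoothing is on."""
    s = int(smooth_idf)
    return [math.log((n_samples + s) / (df + s)) + 1 for df in doc_freqs]

# Two terms over 3 documents: one appears in 1 doc, one in all 3.
idf = smoothed_idf([1, 3], n_samples=3)
```

A term present in every document gets idf exactly 1.0 rather than 0, so it still contributes to tf-idf scores.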
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/Jinja2/py3/jinja2/utils.py | python | url_quote | (obj: t.Any, charset: str = "utf-8", for_qs: bool = False) | return rv | Quote a string for use in a URL using the given charset.
:param obj: String or bytes to quote. Other types are converted to
string then encoded to bytes using the given charset.
:param charset: Encode text to bytes using this charset.
:param for_qs: Quote "/" and use "+" for spaces. | Quote a string for use in a URL using the given charset. | [
"Quote",
"a",
"string",
"for",
"use",
"in",
"a",
"URL",
"using",
"the",
"given",
"charset",
"."
] | def url_quote(obj: t.Any, charset: str = "utf-8", for_qs: bool = False) -> str:
    """Quote a string for use in a URL using the given charset.

    :param obj: String or bytes to quote. Other types are converted to
        string then encoded to bytes using the given charset.
    :param charset: Encode text to bytes using this charset.
    :param for_qs: Quote "/" and use "+" for spaces.
    """
    if not isinstance(obj, bytes):
        if not isinstance(obj, str):
            obj = str(obj)
        obj = obj.encode(charset)

    safe = b"" if for_qs else b"/"
    rv = quote_from_bytes(obj, safe)

    if for_qs:
        rv = rv.replace("%20", "+")

    return rv | [
"def",
"url_quote",
"(",
"obj",
":",
"t",
".",
"Any",
",",
"charset",
":",
"str",
"=",
"\"utf-8\"",
",",
"for_qs",
":",
"bool",
"=",
"False",
")",
"->",
"str",
":",
"if",
"not",
"isinstance",
"(",
"obj",
",",
"bytes",
")",
":",
"if",
"not",
"isin... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/Jinja2/py3/jinja2/utils.py#L463-L483 | |
gem5/gem5 | 141cc37c2d4b93959d4c249b8f7e6a8b2ef75338 | ext/ply/example/calcdebug/calc.py | python | p_expression_group | (p) | | expression : '(' expression ')' | expression : '(' expression ')' | [
"expression",
":",
"(",
"expression",
")"
] | def p_expression_group(p):
    "expression : '(' expression ')'"
    p[0] = p[2] | [
"def",
"p_expression_group",
"(",
"p",
")",
":",
"p",
"[",
"0",
"]",
"=",
"p",
"[",
"2",
"]"
] | https://github.com/gem5/gem5/blob/141cc37c2d4b93959d4c249b8f7e6a8b2ef75338/ext/ply/example/calcdebug/calc.py#L76-L78 | ||
CRYTEK/CRYENGINE | 232227c59a220cbbd311576f0fbeba7bb53b2a8c | Editor/Python/windows/Lib/site-packages/pip/_vendor/html5lib/html5parser.py | python | HTMLParser.documentEncoding | (self) | return self.tokenizer.stream.charEncoding[0] | The name of the character encoding
that was used to decode the input stream,
or :obj:`None` if that is not determined yet. | The name of the character encoding
that was used to decode the input stream,
or :obj:`None` if that is not determined yet. | [
"The",
"name",
"of",
"the",
"character",
"encoding",
"that",
"was",
"used",
"to",
"decode",
"the",
"input",
"stream",
"or",
":",
"obj",
":",
"None",
"if",
"that",
"is",
"not",
"determined",
"yet",
"."
] | def documentEncoding(self):
    """The name of the character encoding
    that was used to decode the input stream,
    or :obj:`None` if that is not determined yet.
    """
    if not hasattr(self, 'tokenizer'):
        return None
    return self.tokenizer.stream.charEncoding[0] | [
"def",
"documentEncoding",
"(",
"self",
")",
":",
"if",
"not",
"hasattr",
"(",
"self",
",",
"'tokenizer'",
")",
":",
"return",
"None",
"return",
"self",
".",
"tokenizer",
".",
"stream",
".",
"charEncoding",
"[",
"0",
"]"
] | https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Editor/Python/windows/Lib/site-packages/pip/_vendor/html5lib/html5parser.py#L134-L142 | |
GoSSIP-SJTU/TripleDoggy | 03648d6b19c812504b14e8b98c8c7b3f443f4e54 | utils/llvm-build/llvmbuild/main.py | python | LLVMProjectInfo.write_cmake_exports_fragment | (self, output_path, enabled_optional_components) | write_cmake_exports_fragment(output_path) -> None
Generate a CMake fragment which includes LLVMBuild library
dependencies expressed similarly to how CMake would write
them via install(EXPORT). | write_cmake_exports_fragment(output_path) -> None | [
"write_cmake_exports_fragment",
"(",
"output_path",
")",
"-",
">",
"None"
] | def write_cmake_exports_fragment(self, output_path, enabled_optional_components):
    """
    write_cmake_exports_fragment(output_path) -> None

    Generate a CMake fragment which includes LLVMBuild library
    dependencies expressed similarly to how CMake would write
    them via install(EXPORT).
    """

    dependencies = list(self.get_fragment_dependencies())

    # Write out the CMake exports fragment.
    make_install_dir(os.path.dirname(output_path))
    f = open(output_path, 'w')

    f.write("""\
# Explicit library dependency information.
#
# The following property assignments tell CMake about link
# dependencies of libraries imported from LLVM.
""")

    self.foreach_cmake_library(
        lambda ci:
            f.write("""\
set_property(TARGET %s PROPERTY IMPORTED_LINK_INTERFACE_LIBRARIES %s)\n""" % (
                ci.get_prefixed_library_name(), " ".join(sorted(
                    dep.get_prefixed_library_name()
                    for dep in self.get_required_libraries_for_component(ci)))))
        ,
        enabled_optional_components,
        skip_disabled = True,
        skip_not_installed = True  # Do not export internal libraries like gtest
        )

    f.close() | [
"def",
"write_cmake_exports_fragment",
"(",
"self",
",",
"output_path",
",",
"enabled_optional_components",
")",
":",
"dependencies",
"=",
"list",
"(",
"self",
".",
"get_fragment_dependencies",
"(",
")",
")",
"# Write out the CMake exports fragment.",
"make_install_dir",
... | https://github.com/GoSSIP-SJTU/TripleDoggy/blob/03648d6b19c812504b14e8b98c8c7b3f443f4e54/utils/llvm-build/llvmbuild/main.py#L606-L640 | ||
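The core of the fragment writer is one `set_property(...)` line per library, listing its sorted dependencies. A simplified, file-object-based sketch (the dependency map here is invented for illustration):

```python
import io

def write_exports_fragment(out, deps_by_lib):
    """Emit one CMake set_property line per library, dependencies sorted."""
    out.write("# Explicit library dependency information.\n")
    for lib in sorted(deps_by_lib):
        deps = " ".join(sorted(deps_by_lib[lib]))
        out.write(
            "set_property(TARGET %s PROPERTY "
            "IMPORTED_LINK_INTERFACE_LIBRARIES %s)\n" % (lib, deps))

buf = io.StringIO()
write_exports_fragment(buf, {"LLVMCore": ["LLVMSupport", "LLVMBinaryFormat"]})
fragment = buf.getvalue()
```

Sorting both the libraries and each dependency list keeps the generated fragment deterministic across runs, which matters for reproducible builds.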
macchina-io/macchina.io | ef24ba0e18379c3dd48fb84e6dbf991101cb8db0 | platform/JS/V8/tools/gyp/pylib/gyp/generator/ninja.py | python | Target.Linkable | (self) | return self.type in ('static_library', 'shared_library') | Return true if this is a target that can be linked against. | Return true if this is a target that can be linked against. | [
"Return",
"true",
"if",
"this",
"is",
"a",
"target",
"that",
"can",
"be",
"linked",
"against",
"."
] | def Linkable(self):
    """Return true if this is a target that can be linked against."""
    return self.type in ('static_library', 'shared_library') | [
"def",
"Linkable",
"(",
"self",
")",
":",
"return",
"self",
".",
"type",
"in",
"(",
"'static_library'",
",",
"'shared_library'",
")"
] | https://github.com/macchina-io/macchina.io/blob/ef24ba0e18379c3dd48fb84e6dbf991101cb8db0/platform/JS/V8/tools/gyp/pylib/gyp/generator/ninja.py#L155-L157 | |
ApolloAuto/apollo-platform | 86d9dc6743b496ead18d597748ebabd34a513289 | ros/third_party/lib_x86_64/python2.7/dist-packages/numpy/lib/scimath.py | python | _fix_int_lt_zero | (x) | return x | Convert `x` to double if it has real, negative components.
Otherwise, output is just the array version of the input (via asarray).
Parameters
----------
x : array_like
Returns
-------
array
Examples
--------
>>> np.lib.scimath._fix_int_lt_zero([1,2])
array([1, 2])
>>> np.lib.scimath._fix_int_lt_zero([-1,2])
array([-1., 2.]) | Convert `x` to double if it has real, negative components. | [
"Convert",
"x",
"to",
"double",
"if",
"it",
"has",
"real",
"negative",
"components",
"."
] | def _fix_int_lt_zero(x):
    """Convert `x` to double if it has real, negative components.

    Otherwise, output is just the array version of the input (via asarray).

    Parameters
    ----------
    x : array_like

    Returns
    -------
    array

    Examples
    --------
    >>> np.lib.scimath._fix_int_lt_zero([1,2])
    array([1, 2])

    >>> np.lib.scimath._fix_int_lt_zero([-1,2])
    array([-1., 2.])
    """
    x = asarray(x)
    if any(isreal(x) & (x < 0)):
        x = x * 1.0
    return x | [
"def",
"_fix_int_lt_zero",
"(",
"x",
")",
":",
"x",
"=",
"asarray",
"(",
"x",
")",
"if",
"any",
"(",
"isreal",
"(",
"x",
")",
"&",
"(",
"x",
"<",
"0",
")",
")",
":",
"x",
"=",
"x",
"*",
"1.0",
"return",
"x"
] | https://github.com/ApolloAuto/apollo-platform/blob/86d9dc6743b496ead18d597748ebabd34a513289/ros/third_party/lib_x86_64/python2.7/dist-packages/numpy/lib/scimath.py#L118-L142 | |
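The helper's rule (promote the whole array to float as soon as any real element is negative) carries over to plain lists; a numpy-free sketch of the same idea, assuming purely real inputs:

```python
def fix_int_lt_zero(values):
    """Promote every element to float when any element is negative."""
    if any(v < 0 for v in values):
        return [v * 1.0 for v in values]
    return list(values)

unchanged = fix_int_lt_zero([1, 2])
promoted = fix_int_lt_zero([-1, 2])
```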
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python/src/Lib/ctypes/macholib/dylib.py | python | dylib_info | (filename) | return is_dylib.groupdict() | A dylib name can take one of the following four forms:
Location/Name.SomeVersion_Suffix.dylib
Location/Name.SomeVersion.dylib
Location/Name_Suffix.dylib
Location/Name.dylib
returns None if not found or a mapping equivalent to:
dict(
location='Location',
name='Name.SomeVersion_Suffix.dylib',
shortname='Name',
version='SomeVersion',
suffix='Suffix',
)
Note that SomeVersion and Suffix are optional and may be None
if not present. | A dylib name can take one of the following four forms:
Location/Name.SomeVersion_Suffix.dylib
Location/Name.SomeVersion.dylib
Location/Name_Suffix.dylib
Location/Name.dylib | [
"A",
"dylib",
"name",
"can",
"take",
"one",
"of",
"the",
"following",
"four",
"forms",
":",
"Location",
"/",
"Name",
".",
"SomeVersion_Suffix",
".",
"dylib",
"Location",
"/",
"Name",
".",
"SomeVersion",
".",
"dylib",
"Location",
"/",
"Name_Suffix",
".",
"d... | def dylib_info(filename):
"""
A dylib name can take one of the following four forms:
Location/Name.SomeVersion_Suffix.dylib
Location/Name.SomeVersion.dylib
Location/Name_Suffix.dylib
Location/Name.dylib
returns None if not found or a mapping equivalent to:
dict(
location='Location',
name='Name.SomeVersion_Suffix.dylib',
shortname='Name',
version='SomeVersion',
suffix='Suffix',
)
Note that SomeVersion and Suffix are optional and may be None
if not present.
"""
is_dylib = DYLIB_RE.match(filename)
if not is_dylib:
return None
return is_dylib.groupdict() | [
"def",
"dylib_info",
"(",
"filename",
")",
":",
"is_dylib",
"=",
"DYLIB_RE",
".",
"match",
"(",
"filename",
")",
"if",
"not",
"is_dylib",
":",
"return",
"None",
"return",
"is_dylib",
".",
"groupdict",
"(",
")"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/ctypes/macholib/dylib.py#L19-L42 | |
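The four name forms in the docstring collapse into one verbose regex. The pattern below approximates the module's `DYLIB_RE` (reconstructed from the docstring, so treat it as an approximation rather than the shipped constant):

```python
import re

# Location/Name(.Version)?(_Suffix)?.dylib, with every piece except Name optional
DYLIB_RE = re.compile(r"""(?x)
(?P<location>^.*)(?:^|/)
(?P<name>
    (?P<shortname>\w+?)
    (?:\.(?P<version>[^._]+))?
    (?:_(?P<suffix>[^._]+))?
    \.dylib$
)
""")

def dylib_info(filename):
    m = DYLIB_RE.match(filename)
    return m.groupdict() if m else None

info = dylib_info("lib/Foo.A_debug.dylib")
```

Excluding `.` and `_` from the version and suffix character classes is what lets the regex split `Foo.A_debug.dylib` unambiguously into shortname, version, and suffix.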
infinidb/infinidb | 6c9f5dfdabc41ad80e81ba9e1a4eb0d7271a5d23 | writeengine/bulk/qa-bulkload.py | python | find_paths | () | return (bulk_dir, data_dir) | Find DBRoot and BulkRoot. | Find DBRoot and BulkRoot. | [
"Find",
"DBRoot",
"and",
"BulkRoot",
"."
] | def find_paths():
    """Find DBRoot and BulkRoot."""
    try:
        config_file = os.environ['CALPONT_CONFIG_FILE']
    except KeyError:
        try:
            logger.info("Environment variable CALPONT_CONFIG_FILE not set, looking for system Calpont.xml")
            config_file = '/usr/local/Calpont/etc/Calpont.xml'
            os.lstat(config_file)
        except:
            logger.error('No config file available')
            sys.exit('No config file available')

    try:
        xmldoc = xml.dom.minidom.parse(config_file)
        bulk_node = xmldoc.getElementsByTagName('BulkRoot')[0]
        db_node = xmldoc.getElementsByTagName('DBRoot')[0]
        bulk_dir = bulk_node.childNodes[0].nodeValue
        data_dir = db_node.childNodes[0].nodeValue
    except Exception, e:
        logger.error('Error parsing config file')
        logger.error(e)
        sys.exit('Error parsing config file')

    return (bulk_dir, data_dir) | [
"def",
"find_paths",
"(",
")",
":",
"try",
":",
"config_file",
"=",
"os",
".",
"environ",
"[",
"'CALPONT_CONFIG_FILE'",
"]",
"except",
"KeyError",
":",
"try",
":",
"logger",
".",
"info",
"(",
"\"Environment variable CALPONT_CONFIG_FILE not set, looking for system Calp... | https://github.com/infinidb/infinidb/blob/6c9f5dfdabc41ad80e81ba9e1a4eb0d7271a5d23/writeengine/bulk/qa-bulkload.py#L48-L73 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/msw/_core.py | python | Rect2D.MoveRightBottomTo | (*args, **kwargs) | return _core_.Rect2D_MoveRightBottomTo(*args, **kwargs) | MoveRightBottomTo(self, Point2D pt) | MoveRightBottomTo(self, Point2D pt) | [
"MoveRightBottomTo",
"(",
"self",
"Point2D",
"pt",
")"
] | def MoveRightBottomTo(*args, **kwargs):
    """MoveRightBottomTo(self, Point2D pt)"""
    return _core_.Rect2D_MoveRightBottomTo(*args, **kwargs) | [
"def",
"MoveRightBottomTo",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"Rect2D_MoveRightBottomTo",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_core.py#L1947-L1949 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/_core.py | python | MouseState.GetX | (*args, **kwargs) | return _core_.MouseState_GetX(*args, **kwargs) | GetX(self) -> int | GetX(self) -> int | [
"GetX",
"(",
"self",
")",
"-",
">",
"int"
] | def GetX(*args, **kwargs):
    """GetX(self) -> int"""
    return _core_.MouseState_GetX(*args, **kwargs) | [
"def",
"GetX",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"MouseState_GetX",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_core.py#L4438-L4440 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/_core.py | python | Rect.GetBottomLeft | (*args, **kwargs) | return _core_.Rect_GetBottomLeft(*args, **kwargs) | GetBottomLeft(self) -> Point | GetBottomLeft(self) -> Point | [
"GetBottomLeft",
"(",
"self",
")",
"-",
">",
"Point"
] | def GetBottomLeft(*args, **kwargs):
    """GetBottomLeft(self) -> Point"""
    return _core_.Rect_GetBottomLeft(*args, **kwargs) | [
"def",
"GetBottomLeft",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"Rect_GetBottomLeft",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_core.py#L1345-L1347 | |
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/nntplib.py | python | NNTP.statcmd | (self, line) | return self.statparse(resp) | Internal: process a STAT, NEXT or LAST command. | Internal: process a STAT, NEXT or LAST command. | [
"Internal",
":",
"process",
"a",
"STAT",
"NEXT",
"or",
"LAST",
"command",
"."
] | def statcmd(self, line):
    """Internal: process a STAT, NEXT or LAST command."""
    resp = self.shortcmd(line)
    return self.statparse(resp) | [
"def",
"statcmd",
"(",
"self",
",",
"line",
")",
":",
"resp",
"=",
"self",
".",
"shortcmd",
"(",
"line",
")",
"return",
"self",
".",
"statparse",
"(",
"resp",
")"
] | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/nntplib.py#L382-L385 | |
Studio3T/robomongo | 2411cd032e2e69b968dadda13ac91ca4ef3483b0 | src/third-party/qscintilla-2.8.4/sources/Python/configure.py | python | _generate_pro | (target_config, opts, module_config) | return pro_name | Generate the .pro file for the module and return its name.
target_config is the target configuration. opts are the command line
options. module_config is the module configuration. | Generate the .pro file for the module and return its name.
target_config is the target configuration. opts are the command line
options. module_config is the module configuration. | [
"Generate",
"the",
".",
"pro",
"file",
"for",
"the",
"module",
"and",
"return",
"its",
"name",
".",
"target_config",
"is",
"the",
"target",
"configuration",
".",
"opts",
"are",
"the",
"command",
"line",
"options",
".",
"module_config",
"is",
"the",
"module",... | def _generate_pro(target_config, opts, module_config):
""" Generate the .pro file for the module and return its name.
target_config is the target configuration. opts are the command line
options. module_config is the module configuration.
"""
inform("Generating the .pro file for the %s module..." % module_config.name)
# Without the 'no_check_exist' magic the target.files must exist when qmake
# is run otherwise the install and uninstall targets are not generated.
qmake_config = module_config.get_qmake_configuration(target_config)
pro_name = module_config.name + '.pro'
pro = open(pro_name, 'w')
pro.write('TEMPLATE = lib\n')
qt = qmake_config.get('QT')
if qt:
pro.write('QT += %s\n' % qt)
pro.write('CONFIG += %s\n' % ('debug' if opts.debug else 'release'))
pro.write('CONFIG += %s\n' % ('staticlib' if opts.static else 'plugin'))
config = qmake_config.get('CONFIG')
if config:
pro.write('CONFIG += %s\n' % config)
# Work around QTBUG-39300.
pro.write('CONFIG -= android_install\n')
qt5_qmake_config = _get_qt_qmake_config(qmake_config, 'Qt5')
qt4_qmake_config = _get_qt_qmake_config(qmake_config, 'Qt4')
if qt5_qmake_config or qt4_qmake_config:
pro.write('''
greaterThan(QT_MAJOR_VERSION, 4) {
''')
if qt5_qmake_config:
_write_qt_qmake_config(qt5_qmake_config, pro)
if qt4_qmake_config:
pro.write('} else {\n')
_write_qt_qmake_config(qt4_qmake_config, pro)
pro.write('}\n')
mname = module_config.name
if not opts.static:
pro.write('''
win32 {
PY_MODULE = %s.pyd
target.files = %s.pyd
LIBS += -L%s
} else {
PY_MODULE = %s.so
target.files = %s.so
}
target.CONFIG = no_check_exist
''' % (mname, mname, quote(target_config.py_pylib_dir), mname, mname))
pro.write('''
target.path = %s
INSTALLS += target
''' % quote(target_config.module_dir))
if module_config.qscintilla_api_file and target_config.api_dir != '':
pro.write('''
api.path = %s/api/python
api.files = %s.api
INSTALLS += api
''' % (target_config.api_dir, module_config.qscintilla_api_file))
sip_installs = module_config.get_sip_installs(target_config)
if sip_installs is not None:
path, files = sip_installs
pro.write('''
sip.path = %s
sip.files =''' % quote(path))
for f in files:
pro.write(' \\\n %s' % f)
pro.write('''
INSTALLS += sip
''')
pro.write('\n')
# These optimisations could apply to other platforms.
if module_config.no_exceptions:
if target_config.py_platform.startswith('linux') or target_config.py_platform == 'darwin':
pro.write('QMAKE_CXXFLAGS += -fno-exceptions\n')
if target_config.py_platform.startswith('linux') and not opts.static:
if target_config.py_version >= 0x030000:
entry_point = 'PyInit_%s' % mname
else:
entry_point = 'init%s' % mname
exp = open('%s.exp' % mname, 'wt')
exp.write('{ global: %s; local: *; };' % entry_point)
exp.close()
pro.write('QMAKE_LFLAGS += -Wl,--version-script=%s.exp\n' % mname)
if target_config.prot_is_public:
pro.write('DEFINES += SIP_PROTECTED_IS_PUBLIC protected=public\n')
defines = qmake_config.get('DEFINES')
if defines:
pro.write('DEFINES += %s\n' % defines)
includepath = qmake_config.get('INCLUDEPATH')
if includepath:
pro.write('INCLUDEPATH += %s\n' % includepath)
# Make sure the SIP include directory is searched before the Python include
# directory if they are different.
pro.write('INCLUDEPATH += %s\n' % quote(target_config.sip_inc_dir))
if target_config.py_inc_dir != target_config.sip_inc_dir:
pro.write('INCLUDEPATH += %s\n' % quote(target_config.py_inc_dir))
libs = qmake_config.get('LIBS')
if libs:
pro.write('LIBS += %s\n' % libs)
if not opts.static:
pro.write('''
win32 {
QMAKE_POST_LINK = $(COPY_FILE) $(DESTDIR_TARGET) $$PY_MODULE
} else {
QMAKE_POST_LINK = $(COPY_FILE) $(TARGET) $$PY_MODULE
}
macx {
QMAKE_LFLAGS += "-undefined dynamic_lookup"
greaterThan(QT_MAJOR_VERSION, 4) {
QMAKE_LFLAGS += "-install_name $$absolute_path($$PY_MODULE, $$target.path)"
}
''')
dylib = module_config.get_mac_wrapped_library_file(target_config)
if dylib:
pro.write('''
QMAKE_POST_LINK = $$QMAKE_POST_LINK$$escape_expand(\\\\n\\\\t)$$quote(install_name_tool -change %s %s $$PY_MODULE)
''' % (os.path.basename(dylib), dylib))
pro.write('}\n')
pro.write('\n')
pro.write('TARGET = %s\n' % mname)
pro.write('HEADERS = sipAPI%s.h\n' % mname)
pro.write('SOURCES =')
for s in glob.glob('*.cpp'):
pro.write(' \\\n %s' % s)
pro.write('\n')
pro.close()
return pro_name | [
"def",
"_generate_pro",
"(",
"target_config",
",",
"opts",
",",
"module_config",
")",
":",
"inform",
"(",
"\"Generating the .pro file for the %s module...\"",
"%",
"module_config",
".",
"name",
")",
"# Without the 'no_check_exist' magic the target.files must exist when qmake",
... | https://github.com/Studio3T/robomongo/blob/2411cd032e2e69b968dadda13ac91ca4ef3483b0/src/third-party/qscintilla-2.8.4/sources/Python/configure.py#L1300-L1468 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numba/__init__.py | python | _ensure_pynumpy | () | Make sure Python and Numpy have supported versions. | Make sure Python and Numpy have supported versions. | [
"Make",
"sure",
"Python",
"and",
"Numpy",
"have",
"supported",
"versions",
"."
] | def _ensure_pynumpy():
"""
Make sure Python and Numpy have supported versions.
"""
import warnings
from . import numpy_support
pyver = sys.version_info[:2]
if pyver < (2, 7) or ((3,) <= pyver < (3, 4)):
raise ImportError("Numba needs Python 2.7 or greater, or 3.4 or greater")
np_version = numpy_support.version[:2]
if np_version < (1, 7):
raise ImportError("Numba needs Numpy 1.7 or greater") | [
"def",
"_ensure_pynumpy",
"(",
")",
":",
"import",
"warnings",
"from",
".",
"import",
"numpy_support",
"pyver",
"=",
"sys",
".",
"version_info",
"[",
":",
"2",
"]",
"if",
"pyver",
"<",
"(",
"2",
",",
"7",
")",
"or",
"(",
"(",
"3",
",",
")",
"<=",
... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numba/__init__.py#L112-L125 | ||
tomahawk-player/tomahawk-resolvers | 7f827bbe410ccfdb0446f7d6a91acc2199c9cc8d | archive/spotify/breakpad/third_party/protobuf/protobuf/python/google/protobuf/internal/cpp_message.py | python | _AddDescriptors | (message_descriptor, dictionary) | Sets up a new protocol message class dictionary.
Args:
message_descriptor: A Descriptor instance describing this message type.
dictionary: Class dictionary to which we'll add a '__slots__' entry. | Sets up a new protocol message class dictionary. | [
"Sets",
"up",
"a",
"new",
"protocol",
"message",
"class",
"dictionary",
"."
] | def _AddDescriptors(message_descriptor, dictionary):
"""Sets up a new protocol message class dictionary.
Args:
message_descriptor: A Descriptor instance describing this message type.
dictionary: Class dictionary to which we'll add a '__slots__' entry.
"""
dictionary['__descriptors'] = {}
for field in message_descriptor.fields:
dictionary['__descriptors'][field.name] = GetFieldDescriptor(
field.full_name)
dictionary['__slots__'] = list(dictionary['__descriptors'].iterkeys()) + [
'_cmsg', '_owner', '_composite_fields', 'Extensions'] | [
"def",
"_AddDescriptors",
"(",
"message_descriptor",
",",
"dictionary",
")",
":",
"dictionary",
"[",
"'__descriptors'",
"]",
"=",
"{",
"}",
"for",
"field",
"in",
"message_descriptor",
".",
"fields",
":",
"dictionary",
"[",
"'__descriptors'",
"]",
"[",
"field",
... | https://github.com/tomahawk-player/tomahawk-resolvers/blob/7f827bbe410ccfdb0446f7d6a91acc2199c9cc8d/archive/spotify/breakpad/third_party/protobuf/protobuf/python/google/protobuf/internal/cpp_message.py#L377-L390 | ||
openvinotoolkit/openvino | dedcbeafa8b84cccdc55ca64b8da516682b381c7 | .ci/openvino-onnx/watchdog/src/watchdog.py | python | Watchdog._get_last_status | (pr) | return last_status | Get last commit status posted from Jenkins.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Either last PR status posted from Jenkins or None
:rtype: github.CommitStatus.CommitStatus | Get last commit status posted from Jenkins. | [
"Get",
"last",
"commit",
"status",
"posted",
"from",
"Jenkins",
"."
] | def _get_last_status(pr):
"""Get last commit status posted from Jenkins.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Either last PR status posted from Jenkins or None
:rtype: github.CommitStatus.CommitStatus
"""
# Find last commit in PR
last_commit = pr.get_commits().reversed[0]
# Get statuses and filter them to contain only those related to Jenkins CI
# and check if CI in Jenkins started
statuses = last_commit.get_statuses()
jenk_statuses = [stat for stat in statuses if
_GITHUB_CI_CHECK_NAME in stat.context]
try:
last_status = jenk_statuses[0]
except IndexError:
last_status = None
return last_status | [
"def",
"_get_last_status",
"(",
"pr",
")",
":",
"# Find last commit in PR",
"last_commit",
"=",
"pr",
".",
"get_commits",
"(",
")",
".",
"reversed",
"[",
"0",
"]",
"# Get statuses and filter them to contain only those related to Jenkins CI",
"# and check if CI in Jenkins star... | https://github.com/openvinotoolkit/openvino/blob/dedcbeafa8b84cccdc55ca64b8da516682b381c7/.ci/openvino-onnx/watchdog/src/watchdog.py#L185-L205 | |
apple/swift-lldb | d74be846ef3e62de946df343e8c234bde93a8912 | scripts/Python/static-binding/lldb.py | python | SBDebugger.GetAsync | (self) | return _lldb.SBDebugger_GetAsync(self) | GetAsync(SBDebugger self) -> bool | GetAsync(SBDebugger self) -> bool | [
"GetAsync",
"(",
"SBDebugger",
"self",
")",
"-",
">",
"bool"
] | def GetAsync(self):
"""GetAsync(SBDebugger self) -> bool"""
return _lldb.SBDebugger_GetAsync(self) | [
"def",
"GetAsync",
"(",
"self",
")",
":",
"return",
"_lldb",
".",
"SBDebugger_GetAsync",
"(",
"self",
")"
] | https://github.com/apple/swift-lldb/blob/d74be846ef3e62de946df343e8c234bde93a8912/scripts/Python/static-binding/lldb.py#L3867-L3869 | |
SmingHub/Sming | cde389ed030905694983121a32f9028976b57194 | Sming/Components/Storage/Tools/hwconfig/partition.py | python | Table.__init__ | (self, devices) | Create table of partitions against list of registered devices. | Create table of partitions against list of registered devices. | [
"Create",
"table",
"of",
"partitions",
"against",
"list",
"of",
"registered",
"devices",
"."
] | def __init__(self, devices):
"""Create table of partitions against list of registered devices."""
super().__init__(self)
self.devices = devices | [
"def",
"__init__",
"(",
"self",
",",
"devices",
")",
":",
"super",
"(",
")",
".",
"__init__",
"(",
"self",
")",
"self",
".",
"devices",
"=",
"devices"
] | https://github.com/SmingHub/Sming/blob/cde389ed030905694983121a32f9028976b57194/Sming/Components/Storage/Tools/hwconfig/partition.py#L134-L137 | ||
krishauser/Klampt | 972cc83ea5befac3f653c1ba20f80155768ad519 | Python/klampt/robotsim.py | python | TriangleMesh.translate | (self, t: Point) | return _robotsim.TriangleMesh_translate(self, t) | r"""
Translates all the vertices by v=v+t.
Args:
t (:obj:`list of 3 floats`) | r"""
Translates all the vertices by v=v+t. | [
"r",
"Translates",
"all",
"the",
"vertices",
"by",
"v",
"=",
"v",
"+",
"t",
"."
] | def translate(self, t: Point) ->None:
r"""
Translates all the vertices by v=v+t.
Args:
t (:obj:`list of 3 floats`)
"""
return _robotsim.TriangleMesh_translate(self, t) | [
"def",
"translate",
"(",
"self",
",",
"t",
":",
"Point",
")",
"->",
"None",
":",
"return",
"_robotsim",
".",
"TriangleMesh_translate",
"(",
"self",
",",
"t",
")"
] | https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/klampt/robotsim.py#L936-L943 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/_controls.py | python | ListCtrl.InsertColumn | (*args, **kwargs) | return _controls_.ListCtrl_InsertColumn(*args, **kwargs) | InsertColumn(self, long col, String heading, int format=LIST_FORMAT_LEFT,
int width=-1) -> long | InsertColumn(self, long col, String heading, int format=LIST_FORMAT_LEFT,
int width=-1) -> long | [
"InsertColumn",
"(",
"self",
"long",
"col",
"String",
"heading",
"int",
"format",
"=",
"LIST_FORMAT_LEFT",
"int",
"width",
"=",
"-",
"1",
")",
"-",
">",
"long"
] | def InsertColumn(*args, **kwargs):
"""
InsertColumn(self, long col, String heading, int format=LIST_FORMAT_LEFT,
int width=-1) -> long
"""
return _controls_.ListCtrl_InsertColumn(*args, **kwargs) | [
"def",
"InsertColumn",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_controls_",
".",
"ListCtrl_InsertColumn",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_controls.py#L4721-L4726 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/scipy/special/basic.py | python | sph_jn | (n, z) | return jn[:(n+1)], jnp[:(n+1)] | Compute spherical Bessel function jn(z) and derivative.
This function computes the value and first derivative of jn(z) for all
orders up to and including n.
Parameters
----------
n : int
Maximum order of jn to compute
z : complex
Argument at which to evaluate
Returns
-------
jn : ndarray
Value of j0(z), ..., jn(z)
jnp : ndarray
First derivative j0'(z), ..., jn'(z)
See also
--------
spherical_jn
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996, chapter 8.
http://jin.ece.illinois.edu/specfunc.html | Compute spherical Bessel function jn(z) and derivative. | [
"Compute",
"spherical",
"Bessel",
"function",
"jn",
"(",
"z",
")",
"and",
"derivative",
"."
] | def sph_jn(n, z):
"""Compute spherical Bessel function jn(z) and derivative.
This function computes the value and first derivative of jn(z) for all
orders up to and including n.
Parameters
----------
n : int
Maximum order of jn to compute
z : complex
Argument at which to evaluate
Returns
-------
jn : ndarray
Value of j0(z), ..., jn(z)
jnp : ndarray
First derivative j0'(z), ..., jn'(z)
See also
--------
spherical_jn
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996, chapter 8.
http://jin.ece.illinois.edu/specfunc.html
"""
if not (isscalar(n) and isscalar(z)):
raise ValueError("arguments must be scalars.")
if (n != floor(n)) or (n < 0):
raise ValueError("n must be a non-negative integer.")
if (n < 1):
n1 = 1
else:
n1 = n
if iscomplex(z):
nm, jn, jnp, yn, ynp = specfun.csphjy(n1, z)
else:
nm, jn, jnp = specfun.sphj(n1, z)
return jn[:(n+1)], jnp[:(n+1)] | [
"def",
"sph_jn",
"(",
"n",
",",
"z",
")",
":",
"if",
"not",
"(",
"isscalar",
"(",
"n",
")",
"and",
"isscalar",
"(",
"z",
")",
")",
":",
"raise",
"ValueError",
"(",
"\"arguments must be scalars.\"",
")",
"if",
"(",
"n",
"!=",
"floor",
"(",
"n",
")",... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/scipy/special/basic.py#L690-L733 | |
msitt/blpapi-python | bebcf43668c9e5f5467b1f685f9baebbfc45bc87 | src/blpapi/internals.py | python | CorrelationId.__str__ | (self) | x.__str__() <==> str(x) | x.__str__() <==> str(x) | [
"x",
".",
"__str__",
"()",
"<",
"==",
">",
"str",
"(",
"x",
")"
] | def __str__(self):
"""x.__str__() <==> str(x)"""
valueType = self.type()
valueTypeName = CorrelationId.__TYPE_NAMES[valueType]
if valueType == CorrelationId.UNSET_TYPE:
return valueTypeName
else:
return "({0}: {1!r}, ClassId: {2})".format(
valueTypeName, self.value(), self.classId()) | [
"def",
"__str__",
"(",
"self",
")",
":",
"valueType",
"=",
"self",
".",
"type",
"(",
")",
"valueTypeName",
"=",
"CorrelationId",
".",
"__TYPE_NAMES",
"[",
"valueType",
"]",
"if",
"valueType",
"==",
"CorrelationId",
".",
"UNSET_TYPE",
":",
"return",
"valueTyp... | https://github.com/msitt/blpapi-python/blob/bebcf43668c9e5f5467b1f685f9baebbfc45bc87/src/blpapi/internals.py#L357-L366 | ||
mongodb/mongo | d8ff665343ad29cf286ee2cf4a1960d29371937b | src/third_party/scons-3.1.2/scons-local-3.1.2/SCons/Tool/tex.py | python | InternalLaTeXAuxAction | (XXXLaTeXAction, target = None, source= None, env=None) | return result | A builder for LaTeX files that checks the output in the aux file
and decides how many times to use LaTeXAction, and BibTeXAction. | A builder for LaTeX files that checks the output in the aux file
and decides how many times to use LaTeXAction, and BibTeXAction. | [
"A",
"builder",
"for",
"LaTeX",
"files",
"that",
"checks",
"the",
"output",
"in",
"the",
"aux",
"file",
"and",
"decides",
"how",
"many",
"times",
"to",
"use",
"LaTeXAction",
"and",
"BibTeXAction",
"."
] | def InternalLaTeXAuxAction(XXXLaTeXAction, target = None, source= None, env=None):
"""A builder for LaTeX files that checks the output in the aux file
and decides how many times to use LaTeXAction, and BibTeXAction."""
global must_rerun_latex
# This routine is called with two actions. In this file for DVI builds
# with LaTeXAction and from the pdflatex.py with PDFLaTeXAction
# set this up now for the case where the user requests a different extension
# for the target filename
if (XXXLaTeXAction == LaTeXAction):
callerSuffix = ".dvi"
else:
callerSuffix = env['PDFSUFFIX']
basename = SCons.Util.splitext(str(source[0]))[0]
basedir = os.path.split(str(source[0]))[0]
basefile = os.path.split(str(basename))[1]
abspath = os.path.abspath(basedir)
targetext = os.path.splitext(str(target[0]))[1]
targetdir = os.path.split(str(target[0]))[0]
saved_env = {}
for var in SCons.Scanner.LaTeX.LaTeX.env_variables:
saved_env[var] = modify_env_var(env, var, abspath)
# Create base file names with the target directory since the auxiliary files
# will be made there. That's because the *COM variables have the cd
# command in the prolog. We check
# for the existence of files before opening them--even ones like the
# aux file that TeX always creates--to make it possible to write tests
# with stubs that don't necessarily generate all of the same files.
targetbase = os.path.join(targetdir, basefile)
# if there is a \makeindex there will be a .idx and thus
# we have to run makeindex at least once to keep the build
# happy even if there is no index.
# Same for glossaries, nomenclature, and acronyms
src_content = source[0].get_text_contents()
run_makeindex = makeindex_re.search(src_content) and not os.path.isfile(targetbase + '.idx')
run_nomenclature = makenomenclature_re.search(src_content) and not os.path.isfile(targetbase + '.nlo')
run_glossary = makeglossary_re.search(src_content) and not os.path.isfile(targetbase + '.glo')
run_glossaries = makeglossaries_re.search(src_content) and not os.path.isfile(targetbase + '.glo')
run_acronyms = makeacronyms_re.search(src_content) and not os.path.isfile(targetbase + '.acn')
saved_hashes = {}
suffix_nodes = {}
for suffix in all_suffixes+sum(newglossary_suffix, []):
theNode = env.fs.File(targetbase + suffix)
suffix_nodes[suffix] = theNode
saved_hashes[suffix] = theNode.get_csig()
if Verbose:
print("hashes: ",saved_hashes)
must_rerun_latex = True
# .aux files already processed by BibTex
already_bibtexed = []
#
# routine to update MD5 hash and compare
#
def check_MD5(filenode, suffix):
global must_rerun_latex
# two calls to clear old csig
filenode.clear_memoized_values()
filenode.ninfo = filenode.new_ninfo()
new_md5 = filenode.get_csig()
if saved_hashes[suffix] == new_md5:
if Verbose:
print("file %s not changed" % (targetbase+suffix))
return False # unchanged
saved_hashes[suffix] = new_md5
must_rerun_latex = True
if Verbose:
print("file %s changed, rerunning Latex, new hash = " % (targetbase+suffix), new_md5)
return True # changed
# generate the file name that latex will generate
resultfilename = targetbase + callerSuffix
count = 0
while (must_rerun_latex and count < int(env.subst('$LATEXRETRIES'))) :
result = XXXLaTeXAction(target, source, env)
if result != 0:
return result
count = count + 1
must_rerun_latex = False
# Decide if various things need to be run, or run again.
# Read the log file to find warnings/errors
logfilename = targetbase + '.log'
logContent = ''
if os.path.isfile(logfilename):
with open(logfilename, "rb") as f:
logContent = f.read().decode(errors='replace')
# Read the fls file to find all .aux files
flsfilename = targetbase + '.fls'
flsContent = ''
auxfiles = []
if os.path.isfile(flsfilename):
with open(flsfilename, "r") as f:
flsContent = f.read()
auxfiles = openout_aux_re.findall(flsContent)
# remove duplicates
dups = {}
for x in auxfiles:
dups[x] = 1
auxfiles = list(dups.keys())
bcffiles = []
if os.path.isfile(flsfilename):
with open(flsfilename, "r") as f:
flsContent = f.read()
bcffiles = openout_bcf_re.findall(flsContent)
# remove duplicates
dups = {}
for x in bcffiles:
dups[x] = 1
bcffiles = list(dups.keys())
if Verbose:
print("auxfiles ",auxfiles)
print("bcffiles ",bcffiles)
# Now decide if bibtex will need to be run.
# The information that bibtex reads from the .aux file is
# pass-independent. If we find (below) that the .bbl file is unchanged,
# then the last latex saw a correct bibliography.
# Therefore only do this once
# Go through all .aux files and remember the files already done.
for auxfilename in auxfiles:
if auxfilename not in already_bibtexed:
already_bibtexed.append(auxfilename)
target_aux = os.path.join(targetdir, auxfilename)
if os.path.isfile(target_aux):
with open(target_aux, "r") as f:
content = f.read()
if content.find("bibdata") != -1:
if Verbose:
print("Need to run bibtex on ",auxfilename)
bibfile = env.fs.File(SCons.Util.splitext(target_aux)[0])
result = BibTeXAction(bibfile, bibfile, env)
if result != 0:
check_file_error_message(env['BIBTEX'], 'blg')
must_rerun_latex = True
# Now decide if biber will need to be run.
# When the backend for biblatex is biber (by choice or default) the
# citation information is put in the .bcf file.
# The information that biber reads from the .bcf file is
# pass-independent. If we find (below) that the .bbl file is unchanged,
# then the last latex saw a correct bibliography.
# Therefore only do this once
# Go through all .bcf files and remember the files already done.
for bcffilename in bcffiles:
if bcffilename not in already_bibtexed:
already_bibtexed.append(bcffilename)
target_bcf = os.path.join(targetdir, bcffilename)
if os.path.isfile(target_bcf):
with open(target_bcf, "r") as f:
content = f.read()
if content.find("bibdata") != -1:
if Verbose:
print("Need to run biber on ",bcffilename)
bibfile = env.fs.File(SCons.Util.splitext(target_bcf)[0])
result = BiberAction(bibfile, bibfile, env)
if result != 0:
check_file_error_message(env['BIBER'], 'blg')
must_rerun_latex = True
# Now decide if latex will need to be run again due to index.
if check_MD5(suffix_nodes['.idx'],'.idx') or (count == 1 and run_makeindex):
# We must run makeindex
if Verbose:
print("Need to run makeindex")
idxfile = suffix_nodes['.idx']
result = MakeIndexAction(idxfile, idxfile, env)
if result != 0:
check_file_error_message(env['MAKEINDEX'], 'ilg')
return result
# TO-DO: need to add a way for the user to extend this list for whatever
# auxiliary files they create in other (or their own) packages
# Harder is case is where an action needs to be called -- that should be rare (I hope?)
for index in check_suffixes:
check_MD5(suffix_nodes[index],index)
# Now decide if latex will need to be run again due to nomenclature.
if check_MD5(suffix_nodes['.nlo'],'.nlo') or (count == 1 and run_nomenclature):
# We must run makeindex
if Verbose:
print("Need to run makeindex for nomenclature")
nclfile = suffix_nodes['.nlo']
result = MakeNclAction(nclfile, nclfile, env)
if result != 0:
check_file_error_message('%s (nomenclature)' % env['MAKENCL'],
'nlg')
#return result
# Now decide if latex will need to be run again due to glossary.
if check_MD5(suffix_nodes['.glo'],'.glo') or (count == 1 and run_glossaries) or (count == 1 and run_glossary):
# We must run makeindex
if Verbose:
print("Need to run makeindex for glossary")
glofile = suffix_nodes['.glo']
result = MakeGlossaryAction(glofile, glofile, env)
if result != 0:
check_file_error_message('%s (glossary)' % env['MAKEGLOSSARY'],
'glg')
#return result
# Now decide if latex will need to be run again due to acronyms.
if check_MD5(suffix_nodes['.acn'],'.acn') or (count == 1 and run_acronyms):
# We must run makeindex
if Verbose:
print("Need to run makeindex for acronyms")
acrfile = suffix_nodes['.acn']
result = MakeAcronymsAction(acrfile, acrfile, env)
if result != 0:
check_file_error_message('%s (acronyms)' % env['MAKEACRONYMS'],
'alg')
return result
# Now decide if latex will need to be run again due to newglossary command.
for ig in range(len(newglossary_suffix)):
if check_MD5(suffix_nodes[newglossary_suffix[ig][2]],newglossary_suffix[ig][2]) or (count == 1):
# We must run makeindex
if Verbose:
print("Need to run makeindex for newglossary")
newglfile = suffix_nodes[newglossary_suffix[ig][2]]
MakeNewGlossaryAction = SCons.Action.Action("$MAKENEWGLOSSARYCOM ${SOURCE.filebase}%s -s ${SOURCE.filebase}.ist -t ${SOURCE.filebase}%s -o ${SOURCE.filebase}%s" % (newglossary_suffix[ig][2],newglossary_suffix[ig][0],newglossary_suffix[ig][1]), "$MAKENEWGLOSSARYCOMSTR")
result = MakeNewGlossaryAction(newglfile, newglfile, env)
if result != 0:
check_file_error_message('%s (newglossary)' % env['MAKENEWGLOSSARY'],
newglossary_suffix[ig][0])
return result
# Now decide if latex needs to be run yet again to resolve warnings.
if warning_rerun_re.search(logContent):
must_rerun_latex = True
if Verbose:
print("rerun Latex due to latex or package rerun warning")
if rerun_citations_re.search(logContent):
must_rerun_latex = True
if Verbose:
print("rerun Latex due to 'Rerun to get citations correct' warning")
if undefined_references_re.search(logContent):
must_rerun_latex = True
if Verbose:
print("rerun Latex due to undefined references or citations")
if (count >= int(env.subst('$LATEXRETRIES')) and must_rerun_latex):
print("reached max number of retries on Latex ,",int(env.subst('$LATEXRETRIES')))
# end of while loop
# rename Latex's output to what the target name is
if not (str(target[0]) == resultfilename and os.path.isfile(resultfilename)):
if os.path.isfile(resultfilename):
print("move %s to %s" % (resultfilename, str(target[0]), ))
shutil.move(resultfilename,str(target[0]))
# Original comment (when TEXPICTS was not restored):
# The TEXPICTS enviroment variable is needed by a dvi -> pdf step
# later on Mac OSX so leave it
#
# It is also used when searching for pictures (implicit dependencies).
# Why not set the variable again in the respective builder instead
# of leaving local modifications in the environment? What if multiple
# latex builds in different directories need different TEXPICTS?
for var in SCons.Scanner.LaTeX.LaTeX.env_variables:
if var == 'TEXPICTS':
continue
if saved_env[var] is _null:
try:
del env['ENV'][var]
except KeyError:
pass # was never set
else:
env['ENV'][var] = saved_env[var]
return result | [
"def",
"InternalLaTeXAuxAction",
"(",
"XXXLaTeXAction",
",",
"target",
"=",
"None",
",",
"source",
"=",
"None",
",",
"env",
"=",
"None",
")",
":",
"global",
"must_rerun_latex",
"# This routine is called with two actions. In this file for DVI builds",
"# with LaTeXAction and... | https://github.com/mongodb/mongo/blob/d8ff665343ad29cf286ee2cf4a1960d29371937b/src/third_party/scons-3.1.2/scons-local-3.1.2/SCons/Tool/tex.py#L197-L493 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/pathlib2/pathlib2/__init__.py | python | PurePath.parent | (self) | return self._from_parsed_parts(drv, root, parts[:-1]) | The logical parent of the path. | The logical parent of the path. | [
"The",
"logical",
"parent",
"of",
"the",
"path",
"."
] | def parent(self):
"""The logical parent of the path."""
drv = self._drv
root = self._root
parts = self._parts
if len(parts) == 1 and (drv or root):
return self
return self._from_parsed_parts(drv, root, parts[:-1]) | [
"def",
"parent",
"(",
"self",
")",
":",
"drv",
"=",
"self",
".",
"_drv",
"root",
"=",
"self",
".",
"_root",
"parts",
"=",
"self",
".",
"_parts",
"if",
"len",
"(",
"parts",
")",
"==",
"1",
"and",
"(",
"drv",
"or",
"root",
")",
":",
"return",
"se... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pathlib2/pathlib2/__init__.py#L1178-L1185 | |
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/gsutil/third_party/boto/boto/ec2/cloudwatch/alarm.py | python | MetricAlarms.__init__ | (self, connection=None) | Parses a list of MetricAlarms. | Parses a list of MetricAlarms. | [
"Parses",
"a",
"list",
"of",
"MetricAlarms",
"."
] | def __init__(self, connection=None):
"""
Parses a list of MetricAlarms.
"""
list.__init__(self)
self.connection = connection | [
"def",
"__init__",
"(",
"self",
",",
"connection",
"=",
"None",
")",
":",
"list",
".",
"__init__",
"(",
"self",
")",
"self",
".",
"connection",
"=",
"connection"
] | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/gsutil/third_party/boto/boto/ec2/cloudwatch/alarm.py#L31-L36 | ||
OSGeo/gdal | 3748fc4ba4fba727492774b2b908a2130c864a83 | swig/python/osgeo/gdal.py | python | PushErrorHandler | (*args) | return _gdal.PushErrorHandler(*args) | r"""PushErrorHandler(CPLErrorHandler pfnErrorHandler=0) -> CPLErr | r"""PushErrorHandler(CPLErrorHandler pfnErrorHandler=0) -> CPLErr | [
"r",
"PushErrorHandler",
"(",
"CPLErrorHandler",
"pfnErrorHandler",
"=",
"0",
")",
"-",
">",
"CPLErr"
] | def PushErrorHandler(*args):
r"""PushErrorHandler(CPLErrorHandler pfnErrorHandler=0) -> CPLErr"""
return _gdal.PushErrorHandler(*args) | [
"def",
"PushErrorHandler",
"(",
"*",
"args",
")",
":",
"return",
"_gdal",
".",
"PushErrorHandler",
"(",
"*",
"args",
")"
] | https://github.com/OSGeo/gdal/blob/3748fc4ba4fba727492774b2b908a2130c864a83/swig/python/osgeo/gdal.py#L1514-L1516 | |
facebookincubator/BOLT | 88c70afe9d388ad430cc150cc158641701397f70 | clang/tools/scan-build-py/lib/libscanbuild/arguments.py | python | parse_args_for_analyze_build | () | return args | Parse and validate command-line arguments for analyze-build. | Parse and validate command-line arguments for analyze-build. | [
"Parse",
"and",
"validate",
"command",
"-",
"line",
"arguments",
"for",
"analyze",
"-",
"build",
"."
] | def parse_args_for_analyze_build():
""" Parse and validate command-line arguments for analyze-build. """
from_build_command = False
parser = create_analyze_parser(from_build_command)
args = parser.parse_args()
reconfigure_logging(args.verbose)
logging.debug('Raw arguments %s', sys.argv)
normalize_args_for_analyze(args, from_build_command)
validate_args_for_analyze(parser, args, from_build_command)
logging.debug('Parsed arguments: %s', args)
return args | [
"def",
"parse_args_for_analyze_build",
"(",
")",
":",
"from_build_command",
"=",
"False",
"parser",
"=",
"create_analyze_parser",
"(",
"from_build_command",
")",
"args",
"=",
"parser",
".",
"parse_args",
"(",
")",
"reconfigure_logging",
"(",
"args",
".",
"verbose",
... | https://github.com/facebookincubator/BOLT/blob/88c70afe9d388ad430cc150cc158641701397f70/clang/tools/scan-build-py/lib/libscanbuild/arguments.py#L45-L58 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/_gdi.py | python | GraphicsContext.CreateLinearGradientBrush | (*args) | return _gdi_.GraphicsContext_CreateLinearGradientBrush(*args) | CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, Colour c1,
Colour c2) -> GraphicsBrush
CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, GraphicsGradientStops stops) -> GraphicsBrush
Creates a native brush, having a linear gradient, starting at (x1,y1)
to (x2,y2) with the given boundary colors or the specified stops. | CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, Colour c1,
Colour c2) -> GraphicsBrush
CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, GraphicsGradientStops stops) -> GraphicsBrush | [
"CreateLinearGradientBrush",
"(",
"self",
"Double",
"x1",
"Double",
"y1",
"Double",
"x2",
"Double",
"y2",
"Colour",
"c1",
"Colour",
"c2",
")",
"-",
">",
"GraphicsBrush",
"CreateLinearGradientBrush",
"(",
"self",
"Double",
"x1",
"Double",
"y1",
"Double",
"x2",
... | def CreateLinearGradientBrush(*args):
"""
CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, Colour c1,
Colour c2) -> GraphicsBrush
CreateLinearGradientBrush(self, Double x1, Double y1, Double x2, Double y2, GraphicsGradientStops stops) -> GraphicsBrush
Creates a native brush, having a linear gradient, starting at (x1,y1)
to (x2,y2) with the given boundary colors or the specified stops.
"""
return _gdi_.GraphicsContext_CreateLinearGradientBrush(*args) | [
"def",
"CreateLinearGradientBrush",
"(",
"*",
"args",
")",
":",
"return",
"_gdi_",
".",
"GraphicsContext_CreateLinearGradientBrush",
"(",
"*",
"args",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_gdi.py#L6096-L6105 | |
intel/caffe | 3f494b442ee3f9d17a07b09ecbd5fa2bbda00836 | examples/faster-rcnn/lib/datasets/pascal_voc.py | python | pascal_voc.image_path_at | (self, i) | return self.image_path_from_index(self._image_index[i]) | Return the absolute path to image i in the image sequence. | Return the absolute path to image i in the image sequence. | [
"Return",
"the",
"absolute",
"path",
"to",
"image",
"i",
"in",
"the",
"image",
"sequence",
"."
] | def image_path_at(self, i):
"""
Return the absolute path to image i in the image sequence.
"""
return self.image_path_from_index(self._image_index[i]) | [
"def",
"image_path_at",
"(",
"self",
",",
"i",
")",
":",
"return",
"self",
".",
"image_path_from_index",
"(",
"self",
".",
"_image_index",
"[",
"i",
"]",
")"
] | https://github.com/intel/caffe/blob/3f494b442ee3f9d17a07b09ecbd5fa2bbda00836/examples/faster-rcnn/lib/datasets/pascal_voc.py#L57-L61 | |
google/lmctfy | 94729318edb06f7d149f67581a07a4c70ed29250 | gmock/scripts/generator/cpp/ast.py | python | Node.IsDefinition | (self) | return False | Returns bool if this node is a definition. | Returns bool if this node is a definition. | [
"Returns",
"bool",
"if",
"this",
"node",
"is",
"a",
"definition",
"."
] | def IsDefinition(self):
"""Returns bool if this node is a definition."""
return False | [
"def",
"IsDefinition",
"(",
"self",
")",
":",
"return",
"False"
] | https://github.com/google/lmctfy/blob/94729318edb06f7d149f67581a07a4c70ed29250/gmock/scripts/generator/cpp/ast.py#L119-L121 | |
aimerykong/Low-Rank-Bilinear-Pooling | 487eb2c857fd9c95357a5166b0c15ad0fe135b28 | caffe-20160312/scripts/cpp_lint.py | python | _VerboseLevel | () | | return _cpplint_state.verbose_level | Returns the module's verbosity setting. | Returns the module's verbosity setting. | ["Returns", "the", "module", "s", "verbosity", "setting", "."] | def _VerboseLevel():
    """Returns the module's verbosity setting."""
    return _cpplint_state.verbose_level | ["def", "_VerboseLevel", "(", ")", ":", "return", "_cpplint_state", ".", "verbose_level"] | https://github.com/aimerykong/Low-Rank-Bilinear-Pooling/blob/487eb2c857fd9c95357a5166b0c15ad0fe135b28/caffe-20160312/scripts/cpp_lint.py#L777-L779 |
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Source/ThirdParty/CEF3/cef_source/tools/cef_parser.py | python | obj_function.get_capi_parts | (self, defined_structs = [], prefix = None) | | return { 'retval' : retval, 'name' : name, 'args' : args } | Return the parts of the C API function definition. | Return the parts of the C API function definition. | ["Return", "the", "parts", "of", "the", "C", "API", "function", "definition", "."] | def get_capi_parts(self, defined_structs = [], prefix = None):
    """ Return the parts of the C API function definition. """
    retval = ''
    dict = self.retval.get_type().get_capi(defined_structs)
    if dict['format'] == 'single':
        retval = dict['value']
    name = self.get_capi_name(prefix)
    args = []
    if isinstance(self, obj_function_virtual):
        # virtual functions get themselves as the first argument
        str = 'struct _'+self.parent.get_capi_name()+'* self'
        if isinstance(self, obj_function_virtual) and self.is_const():
            # const virtual functions get const self pointers
            str = 'const '+str
        args.append(str)
    if len(self.arguments) > 0:
        for cls in self.arguments:
            type = cls.get_type()
            dict = type.get_capi(defined_structs)
            if dict['format'] == 'single':
                args.append(dict['value'])
            elif dict['format'] == 'multi-arg':
                # add an additional argument for the size of the array
                type_name = type.get_name()
                if type.is_const():
                    # for const arrays pass the size argument by value
                    args.append('size_t '+type_name+'Count')
                else:
                    # for non-const arrays pass the size argument by address
                    args.append('size_t* '+type_name+'Count')
                args.append(dict['value'])
    return { 'retval' : retval, 'name' : name, 'args' : args } | ["def", "get_capi_parts", "(", "self", ",", "defined_structs", "=", "[", "]", ",", "prefix", "=", "None", ")", ":", "retval", "=", "''", "dict", "=", "self", ".", "retval", ".", "get_type", "(", ")", ".", "get_capi", "(", "defined_structs", ")", "if", ... | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Source/ThirdParty/CEF3/cef_source/tools/cef_parser.py#L1150-L1185 |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/msw/html.py | python | HtmlTagHandler.GetParser | (*args, **kwargs) | | return _html.HtmlTagHandler_GetParser(*args, **kwargs) | GetParser(self) -> HtmlParser | GetParser(self) -> HtmlParser | ["GetParser", "(", "self", ")", "-", ">", "HtmlParser"] | def GetParser(*args, **kwargs):
    """GetParser(self) -> HtmlParser"""
    return _html.HtmlTagHandler_GetParser(*args, **kwargs) | ["def", "GetParser", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_html", ".", "HtmlTagHandler_GetParser", "(", "*", "args", ",", "*", "*", "kwargs", ")"] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/html.py#L409-L411 |
BitMEX/api-connectors | 37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812 | auto-generated/python/swagger_client/models/instrument.py | python | Instrument.impact_ask_price | (self) | | return self._impact_ask_price | Gets the impact_ask_price of this Instrument. # noqa: E501
:return: The impact_ask_price of this Instrument. # noqa: E501
:rtype: float | Gets the impact_ask_price of this Instrument. # noqa: E501 | ["Gets", "the", "impact_ask_price", "of", "this", "Instrument", ".", "#", "noqa", ":", "E501"] | def impact_ask_price(self):
    """Gets the impact_ask_price of this Instrument.  # noqa: E501
    :return: The impact_ask_price of this Instrument.  # noqa: E501
    :rtype: float
    """
    return self._impact_ask_price | ["def", "impact_ask_price", "(", "self", ")", ":", "return", "self", ".", "_impact_ask_price"] | https://github.com/BitMEX/api-connectors/blob/37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812/auto-generated/python/swagger_client/models/instrument.py#L2410-L2417 |
mysql/mysql-workbench | 2f35f9034f015cbcd22139a60e1baa2e3e8e795c | plugins/migration/dbcopy/db_copy_progress.py | python | ProgressMainView.final_message | (self) | | return "Finished performing tasks." | Subclass and override to change the text message to be shown when tasks finish successfully. | Subclass and override to change the text message to be shown when tasks finish successfully. | ["Subclass", "and", "override", "to", "change", "the", "text", "message", "to", "be", "shown", "when", "tasks", "finish", "successfully", "."] | def final_message(self):
    """Subclass and override to change the text message to be shown when tasks finish successfully."""
    return "Finished performing tasks." | ["def", "final_message", "(", "self", ")", ":", "return", "\"Finished performing tasks.\""] | https://github.com/mysql/mysql-workbench/blob/2f35f9034f015cbcd22139a60e1baa2e3e8e795c/plugins/migration/dbcopy/db_copy_progress.py#L512-L514 |
baidu-research/tensorflow-allreduce | 66d5b855e90b0949e9fa5cca5599fd729a70e874 | tensorflow/python/training/optimizer.py | python | Optimizer._prepare | (self) | | | Create all needed tensors before applying gradients.
This is called with the name_scope using the "name" that
users have chosen for the application of gradients. | Create all needed tensors before applying gradients. | ["Create", "all", "needed", "tensors", "before", "applying", "gradients", "."] | def _prepare(self):
    """Create all needed tensors before applying gradients.
    This is called with the name_scope using the "name" that
    users have chosen for the application of gradients.
    """
    pass | ["def", "_prepare", "(", "self", ")", ":", "pass"] | https://github.com/baidu-research/tensorflow-allreduce/blob/66d5b855e90b0949e9fa5cca5599fd729a70e874/tensorflow/python/training/optimizer.py#L542-L548 |
goldeneye-source/ges-code | 2630cd8ef3d015af53c72ec2e19fc1f7e7fe8d9d | thirdparty/protobuf-2.3.0/python/google/protobuf/text_format.py | python | _Tokenizer._ParseError | (self, message) | | return ParseError('%d:%d : %s' % (
    self._line + 1, self._column + 1, message)) | Creates and *returns* a ParseError for the current token. | Creates and *returns* a ParseError for the current token. | ["Creates", "and", "*", "returns", "*", "a", "ParseError", "for", "the", "current", "token", "."] | def _ParseError(self, message):
    """Creates and *returns* a ParseError for the current token."""
    return ParseError('%d:%d : %s' % (
        self._line + 1, self._column + 1, message)) | ["def", "_ParseError", "(", "self", ",", "message", ")", ":", "return", "ParseError", "(", "'%d:%d : %s'", "%", "(", "self", ".", "_line", "+", "1", ",", "self", ".", "_column", "+", "1", ",", "message", ")", ")"] | https://github.com/goldeneye-source/ges-code/blob/2630cd8ef3d015af53c72ec2e19fc1f7e7fe8d9d/thirdparty/protobuf-2.3.0/python/google/protobuf/text_format.py#L609-L612 |
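The `_ParseError` row above converts a zero-based cursor to one-based line:column coordinates for the error message. A minimal illustrative sketch of that convention — the class and method names here are hypothetical, not the protobuf implementation:

```python
class MiniTokenizer:
    """Illustrative sketch: tracks a zero-based cursor and renders
    errors one-based, like the _ParseError helper above."""

    def __init__(self, line, column):
        self._line = line      # zero-based internally
        self._column = column  # zero-based internally

    def parse_error(self, message):
        # Readers expect 1-based coordinates, hence the "+ 1" on each axis
        return ValueError('%d:%d : %s' % (self._line + 1,
                                          self._column + 1, message))

err = MiniTokenizer(line=0, column=4).parse_error('Expected ":".')
print(err)  # 1:5 : Expected ":".
```

Keeping the cursor zero-based internally and shifting only at the formatting boundary avoids off-by-one errors spreading through the tokenizer.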
lightvector/KataGo | 20d34784703c5b4000643d3ccc43bb37d418f3b5 | python/sgfmill/sgf_moves.py | python | indicate_first_player | (sgf_game) | | | Add a PL property to the root node if appropriate.
Looks at the first child of the root to see who the first player is, and
sets PL it isn't the expected player (ie, black normally, but white if
there is a handicap), or if there are non-handicap setup stones. | Add a PL property to the root node if appropriate. | ["Add", "a", "PL", "property", "to", "the", "root", "node", "if", "appropriate", "."] | def indicate_first_player(sgf_game):
    """Add a PL property to the root node if appropriate.
    Looks at the first child of the root to see who the first player is, and
    sets PL it isn't the expected player (ie, black normally, but white if
    there is a handicap), or if there are non-handicap setup stones.
    """
    root = sgf_game.get_root()
    first_player, move = root[0].get_move()
    if first_player is None:
        return
    has_handicap = root.has_property("HA")
    if root.has_property("AW"):
        specify_pl = True
    elif root.has_property("AB") and not has_handicap:
        specify_pl = True
    elif not has_handicap and first_player == 'w':
        specify_pl = True
    elif has_handicap and first_player == 'b':
        specify_pl = True
    else:
        specify_pl = False
    if specify_pl:
        root.set('PL', first_player) | ["def", "indicate_first_player", "(", "sgf_game", ")", ":", "root", "=", "sgf_game", ".", "get_root", "(", ")", "first_player", ",", "move", "=", "root", "[", "0", "]", ".", "get_move", "(", ")", "if", "first_player", "is", "None", ":", "return", "has_han... | https://github.com/lightvector/KataGo/blob/20d34784703c5b4000643d3ccc43bb37d418f3b5/python/sgfmill/sgf_moves.py#L78-L102 |
RoboJackets/robocup-software | bce13ce53ddb2ecb9696266d980722c34617dc15 | rj_gameplay/stp/utils/world_state_converter.py | python | build_game_info | (
    play_state_msg: msg.PlayState, match_state_msg: msg.MatchState
) | | return game_info | :return: GameInfo class from rc.py | :return: GameInfo class from rc.py | [":", "return", ":", "GameInfo", "class", "from", "rc", ".", "py"] | def build_game_info(
    play_state_msg: msg.PlayState, match_state_msg: msg.MatchState
) -> rc.GameInfo:
    """
    :return: GameInfo class from rc.py
    """
    period = rc.GamePeriod(match_state_msg.period)
    state = rc.GameState(play_state_msg.state)
    restart = rc.GameRestart(play_state_msg.restart)
    our_restart = play_state_msg.our_restart
    game_info = rc.GameInfo(
        period,
        state,
        restart,
        our_restart,
        np.array(
            [
                play_state_msg.placement_point.x,
                play_state_msg.placement_point.y,
            ]
        ),
    )
    return game_info | ["def", "build_game_info", "(", "play_state_msg", ":", "msg", ".", "PlayState", ",", "match_state_msg", ":", "msg", ".", "MatchState", ")", "->", "rc", ".", "GameInfo", ":", "period", "=", "rc", ".", "GamePeriod", "(", "match_state_msg", ".", "period", ")", ... | https://github.com/RoboJackets/robocup-software/blob/bce13ce53ddb2ecb9696266d980722c34617dc15/rj_gameplay/stp/utils/world_state_converter.py#L156-L184 |
macchina-io/macchina.io | ef24ba0e18379c3dd48fb84e6dbf991101cb8db0 | platform/JS/V8/tools/gyp/pylib/gyp/MSVSSettings.py | python | _Same | (tool, name, setting_type) | | | Defines a setting that has the same name in MSVS and MSBuild.
Args:
  tool: a dictionary that gives the names of the tool for MSVS and MSBuild.
  name: the name of the setting.
  setting_type: the type of this setting. | Defines a setting that has the same name in MSVS and MSBuild. | ["Defines", "a", "setting", "that", "has", "the", "same", "name", "in", "MSVS", "and", "MSBuild", "."] | def _Same(tool, name, setting_type):
    """Defines a setting that has the same name in MSVS and MSBuild.
    Args:
      tool: a dictionary that gives the names of the tool for MSVS and MSBuild.
      name: the name of the setting.
      setting_type: the type of this setting.
    """
    _Renamed(tool, name, name, setting_type) | ["def", "_Same", "(", "tool", ",", "name", ",", "setting_type", ")", ":", "_Renamed", "(", "tool", ",", "name", ",", "name", ",", "setting_type", ")"] | https://github.com/macchina-io/macchina.io/blob/ef24ba0e18379c3dd48fb84e6dbf991101cb8db0/platform/JS/V8/tools/gyp/pylib/gyp/MSVSSettings.py#L233-L241 |
bareos/bareos | 56a10bb368b0a81e977bb51304033fe49d59efb0 | core/src/plugins/filed/python/vmware/BareosFdPluginVMware.py | python | BareosVADPWrapper.get_vm_details_by_uuid | (self) | | | Get details of VM given by plugin options uuid
and save result in self.vm
Returns True on success, False otherwise | Get details of VM given by plugin options uuid
and save result in self.vm
Returns True on success, False otherwise | ["Get", "details", "of", "VM", "given", "by", "plugin", "options", "uuid", "and", "save", "result", "in", "self", ".", "vm", "Returns", "True", "on", "success", "False", "otherwise"] | def get_vm_details_by_uuid(self):
    """
    Get details of VM given by plugin options uuid
    and save result in self.vm
    Returns True on success, False otherwise
    """
    search_index = self.si.content.searchIndex
    self.vm = search_index.FindByUuid(None, self.options["uuid"], True, True)
    if self.vm is None:
        return False
    else:
        return True | ["def", "get_vm_details_by_uuid", "(", "self", ")", ":", "search_index", "=", "self", ".", "si", ".", "content", ".", "searchIndex", "self", ".", "vm", "=", "search_index", ".", "FindByUuid", "(", "None", ",", "self", ".", "options", "[", "\"uuid\"", "]", ... | https://github.com/bareos/bareos/blob/56a10bb368b0a81e977bb51304033fe49d59efb0/core/src/plugins/filed/python/vmware/BareosFdPluginVMware.py#L1074-L1085 |
SequoiaDB/SequoiaDB | 2894ed7e5bd6fe57330afc900cf76d0ff0df9f64 | tools/server/php_linux/libxml2/lib/python2.4/site-packages/libxml2.py | python | parserCtxt.popInput | (self) | | return ret | xmlPopInput: the current input pointed by ctxt->input came
to an end pop it and return the next char. | xmlPopInput: the current input pointed by ctxt->input came
to an end pop it and return the next char. | ["xmlPopInput", ":", "the", "current", "input", "pointed", "by", "ctxt", "-", ">", "input", "came", "to", "an", "end", "pop", "it", "and", "return", "the", "next", "char", "."] | def popInput(self):
    """xmlPopInput: the current input pointed by ctxt->input came
    to an end pop it and return the next char. """
    ret = libxml2mod.xmlPopInput(self._o)
    return ret | ["def", "popInput", "(", "self", ")", ":", "ret", "=", "libxml2mod", ".", "xmlPopInput", "(", "self", ".", "_o", ")", "return", "ret"] | https://github.com/SequoiaDB/SequoiaDB/blob/2894ed7e5bd6fe57330afc900cf76d0ff0df9f64/tools/server/php_linux/libxml2/lib/python2.4/site-packages/libxml2.py#L5466-L5470 |
cmu-db/bustub | fe1b9e984bd2967997b52df872c873d80f71cf7d | build_support/cpplint.py | python | CheckSectionSpacing | (filename, clean_lines, class_info, linenum, error) | | | Checks for additional blank line issues related to sections.
Currently the only thing checked here is blank line before protected/private.
Args:
  filename: The name of the current file.
  clean_lines: A CleansedLines instance containing the file.
  class_info: A _ClassInfo objects.
  linenum: The number of the line to check.
  error: The function to call with any errors found. | Checks for additional blank line issues related to sections. | ["Checks", "for", "additional", "blank", "line", "issues", "related", "to", "sections", "."] | def CheckSectionSpacing(filename, clean_lines, class_info, linenum, error):
    """Checks for additional blank line issues related to sections.
    Currently the only thing checked here is blank line before protected/private.
    Args:
      filename: The name of the current file.
      clean_lines: A CleansedLines instance containing the file.
      class_info: A _ClassInfo objects.
      linenum: The number of the line to check.
      error: The function to call with any errors found.
    """
    # Skip checks if the class is small, where small means 25 lines or less.
    # 25 lines seems like a good cutoff since that's the usual height of
    # terminals, and any class that can't fit in one screen can't really
    # be considered "small".
    #
    # Also skip checks if we are on the first line. This accounts for
    # classes that look like
    #   class Foo { public: ... };
    #
    # If we didn't find the end of the class, last_line would be zero,
    # and the check will be skipped by the first condition.
    if (class_info.last_line - class_info.starting_linenum <= 24 or
            linenum <= class_info.starting_linenum):
        return
    matched = Match(r'\s*(public|protected|private):', clean_lines.lines[linenum])
    if matched:
        # Issue warning if the line before public/protected/private was
        # not a blank line, but don't do this if the previous line contains
        # "class" or "struct". This can happen two ways:
        #  - We are at the beginning of the class.
        #  - We are forward-declaring an inner class that is semantically
        #    private, but needed to be public for implementation reasons.
        # Also ignores cases where the previous line ends with a backslash as can be
        # common when defining classes in C macros.
        prev_line = clean_lines.lines[linenum - 1]
        if (not IsBlankLine(prev_line) and
                not Search(r'\b(class|struct)\b', prev_line) and
                not Search(r'\\$', prev_line)):
            # Try a bit harder to find the beginning of the class. This is to
            # account for multi-line base-specifier lists, e.g.:
            #   class Derived
            #       : public Base {
            end_class_head = class_info.starting_linenum
            for i in range(class_info.starting_linenum, linenum):
                if Search(r'\{\s*$', clean_lines.lines[i]):
                    end_class_head = i
                    break
            if end_class_head < linenum - 1:
                error(filename, linenum, 'whitespace/blank_line', 3,
                      '"%s:" should be preceded by a blank line' % matched.group(1)) | ["def", "CheckSectionSpacing", "(", "filename", ",", "clean_lines", ",", "class_info", ",", "linenum", ",", "error", ")", ":", "# Skip checks if the class is small, where small means 25 lines or less.", "# 25 lines seems like a good cutoff since that's the usual height of", "# termina... | https://github.com/cmu-db/bustub/blob/fe1b9e984bd2967997b52df872c873d80f71cf7d/build_support/cpplint.py#L3893-L3945 |
mantidproject/mantid | 03deeb89254ec4289edb8771e0188c2090a02f32 | qt/python/mantidqt/mantidqt/widgets/sliceviewer/view.py | python | SliceViewerDataView.on_region_selection_toggle | (self, state) | | | Switch state of the region selection | Switch state of the region selection | ["Switch", "state", "of", "the", "region", "selection"] | def on_region_selection_toggle(self, state):
    """Switch state of the region selection"""
    self.presenter.region_selection(state)
    self._region_selection_on = state
    # If state is off and track cursor is on, make sure line plots are re-connected to move cursor
    if not state and self.track_cursor_checked():
        if self._line_plots:
            self._line_plots.connect() | ["def", "on_region_selection_toggle", "(", "self", ",", "state", ")", ":", "self", ".", "presenter", ".", "region_selection", "(", "state", ")", "self", ".", "_region_selection_on", "=", "state", "# If state is off and track cursor is on, make sure line plots are re-connecte... | https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/qt/python/mantidqt/mantidqt/widgets/sliceviewer/view.py#L412-L419 |
adobe/chromium | cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7 | tools/site_compare/scrapers/chrome/chromebase.py | python | Time | (urls, size, timeout, kwargs) | | return ret | Measure how long it takes to load each of a series of URLs
Args:
  urls: list of URLs to time
  size: size of browser window to use
  timeout: amount of time to wait for page to load
  kwargs: miscellaneous keyword args
Returns:
  A list of tuples (url, time). "time" can be "crashed" or "timeout" | Measure how long it takes to load each of a series of URLs | ["Measure", "how", "long", "it", "takes", "to", "load", "each", "of", "a", "series", "of", "URLs"] | def Time(urls, size, timeout, kwargs):
    """Measure how long it takes to load each of a series of URLs
    Args:
      urls: list of URLs to time
      size: size of browser window to use
      timeout: amount of time to wait for page to load
      kwargs: miscellaneous keyword args
    Returns:
      A list of tuples (url, time). "time" can be "crashed" or "timeout"
    """
    if "path" in kwargs and kwargs["path"]: path = kwargs["path"]
    else: path = DEFAULT_PATH
    proc = None
    # Visit each URL we're given
    if type(urls) in types.StringTypes: urls = [urls]
    ret = []
    for url in urls:
        try:
            # Invoke the browser if necessary
            if not proc:
                (wnd, proc, address_bar, render_pane) = InvokeBrowser(path)
                # Resize and reposition the frame
                windowing.MoveAndSizeWindow(wnd, (0,0), size, render_pane)
            # Double-click in the address bar, type the name, and press Enter
            mouse.ClickInWindow(address_bar)
            keyboard.TypeString(url, 0.1)
            keyboard.TypeString("\n")
            # Wait for the page to finish loading
            load_time = windowing.WaitForThrobber(wnd, (20, 16, 36, 32), timeout)
            timedout = load_time < 0
            if timedout:
                load_time = "timeout"
            # Send an alt-F4 to make the browser close; if this times out,
            # we've probably got a crash
            windowing.SetForegroundWindow(wnd)
            keyboard.TypeString(r"{\4}", use_modifiers=True)
            if not windowing.WaitForProcessExit(proc, timeout):
                windowing.EndProcess(proc)
                load_time = "crashed"
            proc = None
        except pywintypes.error:
            proc = None
            load_time = "crashed"
        ret.append( (url, load_time) )
    if proc:
        windowing.SetForegroundWindow(wnd)
        keyboard.TypeString(r"{\4}", use_modifiers=True)
        if not windowing.WaitForProcessExit(proc, timeout):
            windowing.EndProcess(proc)
    return ret | ["def", "Time", "(", "urls", ",", "size", ",", "timeout", ",", "kwargs", ")", ":", "if", "\"path\"", "in", "kwargs", "and", "kwargs", "[", "\"path\"", "]", ":", "path", "=", "kwargs", "[", "\"path\"", "]", "else", ":", "path", "=", "DEFAULT_PATH", "pr... | https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/tools/site_compare/scrapers/chrome/chromebase.py#L118-L181 |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scikit-learn/py2/sklearn/learning_curve.py | python | learning_curve | (estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0,
error_score='raise') | return train_sizes_abs, out[0], out[1] | Learning curve.
.. deprecated:: 0.18
This module will be removed in 0.20.
Use :func:`sklearn.model_selection.learning_curve` instead.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
Note that for classification the number of samples usually have to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass,
:class:`sklearn.model_selection.StratifiedKFold` is used. In all
other cases, :class:`sklearn.model_selection.KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
error_score : 'raise' (default) or numeric
Value to assign to the score if an error occurs in estimator fitting.
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
Numbers of training examples that has been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<sphx_glr_auto_examples_model_selection_plot_learning_curve.py>` | Learning curve. | ["Learning", "curve", "."] | def learning_curve(estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0,
error_score='raise'):
"""Learning curve.
.. deprecated:: 0.18
This module will be removed in 0.20.
Use :func:`sklearn.model_selection.learning_curve` instead.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
Note that for classification the number of samples usually have to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass,
:class:`sklearn.model_selection.StratifiedKFold` is used. In all
other cases, :class:`sklearn.model_selection.KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
error_score : 'raise' (default) or numeric
Value to assign to the score if an error occurs in estimator fitting.
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
Numbers of training examples that has been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<sphx_glr_auto_examples_model_selection_plot_learning_curve.py>`
"""
    if exploit_incremental_learning and not hasattr(estimator, "partial_fit"):
        raise ValueError("An estimator must support the partial_fit interface "
                         "to exploit incremental learning")
    X, y = indexable(X, y)
    # Make a list since we will be iterating multiple times over the folds
    cv = list(check_cv(cv, X, y, classifier=is_classifier(estimator)))
    scorer = check_scoring(estimator, scoring=scoring)
    # HACK as long as boolean indices are allowed in cv generators
    if cv[0][0].dtype == bool:
        new_cv = []
        for i in range(len(cv)):
            new_cv.append((np.nonzero(cv[i][0])[0], np.nonzero(cv[i][1])[0]))
        cv = new_cv
    n_max_training_samples = len(cv[0][0])
    # Because the lengths of folds can be significantly different, it is
    # not guaranteed that we use all of the available training data when we
    # use the first 'n_max_training_samples' samples.
    train_sizes_abs = _translate_train_sizes(train_sizes,
                                             n_max_training_samples)
    n_unique_ticks = train_sizes_abs.shape[0]
    if verbose > 0:
        print("[learning_curve] Training set sizes: " + str(train_sizes_abs))
    parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
                        verbose=verbose)
    if exploit_incremental_learning:
        classes = np.unique(y) if is_classifier(estimator) else None
        out = parallel(delayed(_incremental_fit_estimator)(
            clone(estimator), X, y, classes, train, test, train_sizes_abs,
            scorer, verbose) for train, test in cv)
    else:
        out = parallel(delayed(_fit_and_score)(
            clone(estimator), X, y, scorer, train[:n_train_samples], test,
            verbose, parameters=None, fit_params=None, return_train_score=True,
            error_score=error_score)
            for train, test in cv for n_train_samples in train_sizes_abs)
    out = np.array(out)[:, :2]
    n_cv_folds = out.shape[0] // n_unique_ticks
    out = out.reshape(n_cv_folds, n_unique_ticks, 2)
    out = np.asarray(out).transpose((2, 1, 0))
    return train_sizes_abs, out[0], out[1] | [
"def",
"learning_curve",
"(",
"estimator",
",",
"X",
",",
"y",
",",
"train_sizes",
"=",
"np",
".",
"linspace",
"(",
"0.1",
",",
"1.0",
",",
"5",
")",
",",
"cv",
"=",
"None",
",",
"scoring",
"=",
"None",
",",
"exploit_incremental_learning",
"=",
"False"... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scikit-learn/py2/sklearn/learning_curve.py#L29-L179 | |
ycm-core/ycmd | fc0fb7e5e15176cc5a2a30c80956335988c6b59a | ycmd/completers/typescript/typescript_completer.py | python | TypeScriptCompleter._Reload | ( self, request_data ) | Synchronize TSServer's view of the file to
the contents of the unsaved buffer. | Synchronize TSServer's view of the file to
the contents of the unsaved buffer. | [
"Synchronize",
"TSServer",
"s",
"view",
"of",
"the",
"file",
"to",
"the",
"contents",
"of",
"the",
"unsaved",
"buffer",
"."
] | def _Reload( self, request_data ):
"""
Synchronize TSServer's view of the file to
the contents of the unsaved buffer.
"""
filename = request_data[ 'filepath' ]
contents = request_data[ 'file_data' ][ filename ][ 'contents' ]
tmpfile = NamedTemporaryFile( delete = False )
tmpfile.write( utils.ToBytes( contents ) )
tmpfile.close()
self._SendRequest( 'reload', {
'file': filename,
'tmpfile': tmpfile.name
} )
utils.RemoveIfExists( tmpfile.name ) | [
"def", "_Reload", "(", "self", ",", "request_data", ")", ":", "filename", "=", "request_data", "[", "'filepath'", "]", "contents", "=", "request_data", "[", "'file_data'", "]", "[", "filename", "]", "[", "'contents'", "]", "tmpfile", "=", "NamedTemporaryFile",... | https://github.com/ycm-core/ycmd/blob/fc0fb7e5e15176cc5a2a30c80956335988c6b59a/ycmd/completers/typescript/typescript_completer.py#L348-L363 | |
krishauser/Klampt | 972cc83ea5befac3f653c1ba20f80155768ad519 | Python/klampt/robotsim.py | python | WorldModel.numRobotLinks | (self, robot: int) | return _robotsim.WorldModel_numRobotLinks(self, robot) | r"""
Returns the number of links on the given robot.
Args:
robot (int) | r"""
Returns the number of links on the given robot. | [
"r", "Returns", "the", "number", "of", "links", "on", "the", "given", "robot", "." ] | def numRobotLinks(self, robot: int) ->int:
r"""
Returns the number of links on the given robot.
Args:
robot (int)
"""
return _robotsim.WorldModel_numRobotLinks(self, robot) | [
"def", "numRobotLinks", "(", "self", ",", "robot", ":", "int", ")", "->", "int", ":", "return", "_robotsim", ".", "WorldModel_numRobotLinks", "(", "self", ",", "robot", ")" ] | https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/klampt/robotsim.py#L5704-L5711 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/numpy/py3/numpy/core/multiarray.py | python | empty_like | (prototype, dtype=None, order=None, subok=None, shape=None) | return (prototype,) | empty_like(prototype, dtype=None, order='K', subok=True, shape=None)
Return a new array with the same shape and type as a given array.
Parameters
----------
prototype : array_like
The shape and data-type of `prototype` define these same attributes
of the returned array.
dtype : data-type, optional
Overrides the data type of the result.
.. versionadded:: 1.6.0
order : {'C', 'F', 'A', or 'K'}, optional
Overrides the memory layout of the result. 'C' means C-order,
'F' means F-order, 'A' means 'F' if `prototype` is Fortran
contiguous, 'C' otherwise. 'K' means match the layout of `prototype`
as closely as possible.
.. versionadded:: 1.6.0
subok : bool, optional.
If True, then the newly created array will use the sub-class
type of `prototype`, otherwise it will be a base-class array. Defaults
to True.
shape : int or sequence of ints, optional.
Overrides the shape of the result. If order='K' and the number of
dimensions is unchanged, will try to keep order, otherwise,
order='C' is implied.
.. versionadded:: 1.17.0
Returns
-------
out : ndarray
Array of uninitialized (arbitrary) data with the same
shape and type as `prototype`.
See Also
--------
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
Notes
-----
This function does *not* initialize the returned array; to do that use
`zeros_like` or `ones_like` instead. It may be marginally faster than
the functions that do set the array values.
Examples
--------
>>> a = ([1,2,3], [4,5,6]) # a is array-like
>>> np.empty_like(a)
array([[-1073741821, -1073741821, 3], # uninitialized
[ 0, 0, -1073741821]])
>>> a = np.array([[1., 2., 3.],[4.,5.,6.]])
>>> np.empty_like(a)
array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000], # uninitialized
[ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]]) | empty_like(prototype, dtype=None, order='K', subok=True, shape=None) | [
"empty_like", "(", "prototype", "dtype", "=", "None", "order", "=", "K", "subok", "=", "True", "shape", "=", "None", ")" ] | def empty_like(prototype, dtype=None, order=None, subok=None, shape=None):
"""
empty_like(prototype, dtype=None, order='K', subok=True, shape=None)
Return a new array with the same shape and type as a given array.
Parameters
----------
prototype : array_like
The shape and data-type of `prototype` define these same attributes
of the returned array.
dtype : data-type, optional
Overrides the data type of the result.
.. versionadded:: 1.6.0
order : {'C', 'F', 'A', or 'K'}, optional
Overrides the memory layout of the result. 'C' means C-order,
'F' means F-order, 'A' means 'F' if `prototype` is Fortran
contiguous, 'C' otherwise. 'K' means match the layout of `prototype`
as closely as possible.
.. versionadded:: 1.6.0
subok : bool, optional.
If True, then the newly created array will use the sub-class
type of `prototype`, otherwise it will be a base-class array. Defaults
to True.
shape : int or sequence of ints, optional.
Overrides the shape of the result. If order='K' and the number of
dimensions is unchanged, will try to keep order, otherwise,
order='C' is implied.
.. versionadded:: 1.17.0
Returns
-------
out : ndarray
Array of uninitialized (arbitrary) data with the same
shape and type as `prototype`.
See Also
--------
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
Notes
-----
This function does *not* initialize the returned array; to do that use
`zeros_like` or `ones_like` instead. It may be marginally faster than
the functions that do set the array values.
Examples
--------
>>> a = ([1,2,3], [4,5,6]) # a is array-like
>>> np.empty_like(a)
array([[-1073741821, -1073741821, 3], # uninitialized
[ 0, 0, -1073741821]])
>>> a = np.array([[1., 2., 3.],[4.,5.,6.]])
>>> np.empty_like(a)
array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000], # uninitialized
[ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]])
"""
return (prototype,) | [
"def", "empty_like", "(", "prototype", ",", "dtype", "=", "None", ",", "order", "=", "None", ",", "subok", "=", "None", ",", "shape", "=", "None", ")", ":", "return", "(", "prototype", ",", ")" ] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/numpy/py3/numpy/core/multiarray.py#L81-L145 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/_pyio.py | python | TextIOWrapper._read_chunk | (self) | return not eof | Read and decode the next chunk of data from the BufferedReader. | Read and decode the next chunk of data from the BufferedReader. | [
"Read", "and", "decode", "the", "next", "chunk", "of", "data", "from", "the", "BufferedReader", "." ] | def _read_chunk(self):
"""
Read and decode the next chunk of data from the BufferedReader.
"""
# The return value is True unless EOF was reached. The decoded
# string is placed in self._decoded_chars (replacing its previous
# value). The entire input chunk is sent to the decoder, though
# some of it may remain buffered in the decoder, yet to be
# converted.
if self._decoder is None:
raise ValueError("no decoder")
if self._telling:
# To prepare for tell(), we need to snapshot a point in the
# file where the decoder's input buffer is empty.
dec_buffer, dec_flags = self._decoder.getstate()
# Given this, we know there was a valid snapshot point
# len(dec_buffer) bytes ago with decoder state (b'', dec_flags).
# Read a chunk, decode it, and put the result in self._decoded_chars.
if self._has_read1:
input_chunk = self.buffer.read1(self._CHUNK_SIZE)
else:
input_chunk = self.buffer.read(self._CHUNK_SIZE)
eof = not input_chunk
decoded_chars = self._decoder.decode(input_chunk, eof)
self._set_decoded_chars(decoded_chars)
if decoded_chars:
self._b2cratio = len(input_chunk) / len(self._decoded_chars)
else:
self._b2cratio = 0.0
if self._telling:
# At the snapshot point, len(dec_buffer) bytes before the read,
# the next input to be decoded is dec_buffer + input_chunk.
self._snapshot = (dec_flags, dec_buffer + input_chunk)
return not eof | [
"def", "_read_chunk", "(", "self", ")", ":", "# The return value is True unless EOF was reached. The decoded", "# string is placed in self._decoded_chars (replacing its previous", "# value). The entire input chunk is sent to the decoder, though", "# some of it may remain buffered in the decoder,... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/_pyio.py#L2211-L2251 | |
ceph/ceph | 959663007321a369c83218414a29bd9dbc8bda3a | src/pybind/mgr/orchestrator/_interface.py | python | Orchestrator.remove_host | (self, host: str, force: bool, offline: bool) | Remove a host from the orchestrator inventory.
:param host: hostname | Remove a host from the orchestrator inventory. | [
"Remove", "a", "host", "from", "the", "orchestrator", "inventory", "." ] | def remove_host(self, host: str, force: bool, offline: bool) -> OrchResult[str]:
"""
Remove a host from the orchestrator inventory.
:param host: hostname
"""
raise NotImplementedError() | [
"def", "remove_host", "(", "self", ",", "host", ":", "str", ",", "force", ":", "bool", ",", "offline", ":", "bool", ")", "->", "OrchResult", "[", "str", "]", ":", "raise", "NotImplementedError", "(", ")" ] | https://github.com/ceph/ceph/blob/959663007321a369c83218414a29bd9dbc8bda3a/src/pybind/mgr/orchestrator/_interface.py#L350-L356 | |
hughperkins/tf-coriander | 970d3df6c11400ad68405f22b0c42a52374e94ca | tensorflow/python/debug/cli/debugger_cli_common.py | python | TabCompletionRegistry.deregister_context | (self, context_words) | Deregister a list of context words.
Args:
context_words: A list of context words to deregister, as a list of str.
Raises:
KeyError: if there are word(s) in context_words that do not correspond
to any registered contexts. | Deregister a list of context words. | [
"Deregister", "a", "list", "of", "context", "words", "." ] | def deregister_context(self, context_words):
"""Deregister a list of context words.
Args:
context_words: A list of context words to deregister, as a list of str.
Raises:
KeyError: if there are word(s) in context_words that do not correspond
to any registered contexts.
"""
for context_word in context_words:
if context_word not in self._comp_dict:
raise KeyError("Cannot deregister unregistered context word \"%s\"" %
context_word)
for context_word in context_words:
del self._comp_dict[context_word] | [
"def", "deregister_context", "(", "self", ",", "context_words", ")", ":", "for", "context_word", "in", "context_words", ":", "if", "context_word", "not", "in", "self", ".", "_comp_dict", ":", "raise", "KeyError", "(", "\"Cannot deregister unregistered context word \\\... | https://github.com/hughperkins/tf-coriander/blob/970d3df6c11400ad68405f22b0c42a52374e94ca/tensorflow/python/debug/cli/debugger_cli_common.py#L585-L602 | |
mantidproject/mantid | 03deeb89254ec4289edb8771e0188c2090a02f32 | qt/python/mantidqtinterfaces/mantidqtinterfaces/Muon/GUI/Common/contexts/fitting_contexts/general_fitting_context.py | python | GeneralFittingContext.simultaneous_fit_functions_for_undo | (self, fit_functions: list) | Sets the previous simultaneous fit functions that can be used when undoing. | Sets the previous simultaneous fit functions that can be used when undoing. | [
"Sets", "the", "previous", "simultaneous", "fit", "functions", "that", "can", "be", "used", "when", "undoing", "." ] | def simultaneous_fit_functions_for_undo(self, fit_functions: list) -> None:
"""Sets the previous simultaneous fit functions that can be used when undoing."""
self._simultaneous_fit_functions_for_undo = fit_functions | [
"def", "simultaneous_fit_functions_for_undo", "(", "self", ",", "fit_functions", ":", "list", ")", "->", "None", ":", "self", ".", "_simultaneous_fit_functions_for_undo", "=", "fit_functions" ] | https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/qt/python/mantidqtinterfaces/mantidqtinterfaces/Muon/GUI/Common/contexts/fitting_contexts/general_fitting_context.py#L109-L111 | |
facebook/ThreatExchange | 31914a51820c73c8a0daffe62ccca29a6e3d359e | hasher-matcher-actioner/hmalib/common/models/bank.py | python | BanksTable.remove_bank_member_signals_to_process | (self, bank_member_id: str) | For a bank_member, remove all signals from the
BankMemberSignalCursorIndex on this table.
All systems that want to "do" something with bank_member_signals use
this index. eg. building indexes, syncing signals to another
hash_exchange. | For a bank_member, remove all signals from the
BankMemberSignalCursorIndex on this table. | [
"For", "a", "bank_member", "remove", "all", "signals", "from", "the", "BankMemberSignalCursorIndex", "on", "this", "table", "." ] | def remove_bank_member_signals_to_process(self, bank_member_id: str):
"""
For a bank_member, remove all signals from the
BankMemberSignalCursorIndex on this table.
All systems that want to "do" something with bank_member_signals use
this index. eg. building indexes, syncing signals to another
hash_exchange.
"""
for signal in self.get_signals_for_bank_member(bank_member_id=bank_member_id):
self._table.update_item(
Key={
"PK": BankMemberSignal.get_pk(bank_member_id=bank_member_id),
"SK": BankMemberSignal.get_sk(signal.signal_id),
},
UpdateExpression=f"SET UpdatedAt = :updated_at REMOVE #gsi_pk, #gsi_sk",
ExpressionAttributeNames={
"#gsi_pk": BankMemberSignal.BANK_MEMBER_SIGNAL_CURSOR_INDEX_SIGNAL_TYPE,
"#gsi_sk": BankMemberSignal.BANK_MEMBER_SIGNAL_CURSOR_INDEX_CHRONO_KEY,
},
ExpressionAttributeValues={":updated_at": datetime.now().isoformat()},
) | [
"def", "remove_bank_member_signals_to_process", "(", "self", ",", "bank_member_id", ":", "str", ")", ":", "for", "signal", "in", "self", ".", "get_signals_for_bank_member", "(", "bank_member_id", "=", "bank_member_id", ")", ":", "self", ".", "_table", ".", "update... | https://github.com/facebook/ThreatExchange/blob/31914a51820c73c8a0daffe62ccca29a6e3d359e/hasher-matcher-actioner/hmalib/common/models/bank.py#L654-L675 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python3/src/Lib/sched.py | python | scheduler.enterabs | (self, time, priority, action, argument=(), kwargs=_sentinel) | return event | Enter a new event in the queue at an absolute time.
Returns an ID for the event which can be used to remove it,
if necessary. | Enter a new event in the queue at an absolute time. | [
"Enter", "a", "new", "event", "in", "the", "queue", "at", "an", "absolute", "time", "." ] | def enterabs(self, time, priority, action, argument=(), kwargs=_sentinel):
"""Enter a new event in the queue at an absolute time.
Returns an ID for the event which can be used to remove it,
if necessary.
"""
if kwargs is _sentinel:
kwargs = {}
event = Event(time, priority, action, argument, kwargs)
with self._lock:
heapq.heappush(self._queue, event)
return event | [
"def", "enterabs", "(", "self", ",", "time", ",", "priority", ",", "action", ",", "argument", "=", "(", ")", ",", "kwargs", "=", "_sentinel", ")", ":", "if", "kwargs", "is", "_sentinel", ":", "kwargs", "=", "{", "}", "event", "=", "Event", "(", "ti... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python3/src/Lib/sched.py#L65-L77 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/pydoc.py | python | _get_revised_path | (given_path, argv0) | return revised_path | Ensures current directory is on returned path, and argv0 directory is not
Exception: argv0 dir is left alone if it's also pydoc's directory.
Returns a new path entry list, or None if no adjustment is needed. | Ensures current directory is on returned path, and argv0 directory is not | [
"Ensures", "current", "directory", "is", "on", "returned", "path", "and", "argv0", "directory", "is", "not" ] | def _get_revised_path(given_path, argv0):
"""Ensures current directory is on returned path, and argv0 directory is not
Exception: argv0 dir is left alone if it's also pydoc's directory.
Returns a new path entry list, or None if no adjustment is needed.
"""
# Scripts may get the current directory in their path by default if they're
# run with the -m switch, or directly from the current directory.
# The interactive prompt also allows imports from the current directory.
# Accordingly, if the current directory is already present, don't make
# any changes to the given_path
if '' in given_path or os.curdir in given_path or os.getcwd() in given_path:
return None
# Otherwise, add the current directory to the given path, and remove the
# script directory (as long as the latter isn't also pydoc's directory.
stdlib_dir = os.path.dirname(__file__)
script_dir = os.path.dirname(argv0)
revised_path = given_path.copy()
if script_dir in given_path and not os.path.samefile(script_dir, stdlib_dir):
revised_path.remove(script_dir)
revised_path.insert(0, os.getcwd())
return revised_path | [
"def", "_get_revised_path", "(", "given_path", ",", "argv0", ")", ":", "# Scripts may get the current directory in their path by default if they're", "# run with the -m switch, or directly from the current directory.", "# The interactive prompt also allows imports from the current directory.", ... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/pydoc.py#L2622-L2646 | |
RobotLocomotion/drake | 0e18a34604c45ed65bc9018a54f7610f91cdad5b | tools/workspace/drake_visualizer/_drake_visualizer_builtin_scripts/show_hydroelastic_contact.py | python | TwoToneMap.hsvToRgb | (self, hsv) | return r, g, b | Convert hue, saturation and lightness to r, g, b values.
Hue in [0, 360], s in [0, 1], l in [0, 1]. | Convert hue, saturation and lightness to r, g, b values.
Hue in [0, 360], s in [0, 1], l in [0, 1]. | [
"Convert", "hue", "saturation", "and", "lightness", "to", "r", "g", "b", "values", ".", "Hue", "in", "[", "0", "360", "]", "s", "in", "[", "0", "1", "]", "l", "in", "[", "0", "1", "]", "." ] | def hsvToRgb(self, hsv):
'''Convert hue, saturation and lightness to r, g, b values.
Hue in [0, 360], s in [0, 1], l in [0, 1].'''
h, s, v = hsv
r = g = b = 0.0
c = s * v
h /= 60.0
x = c * (1 - abs((h % 2) - 1))
if (h >= 0 and h < 1):
r = c
g = x
elif (h >= 1 and h < 2):
r = x
g = c
elif (h >= 2 and h < 3):
g = c
b = x
elif (h >= 3 and h < 4):
g = x
b = c
elif (h >= 4 and h < 5):
r = x
b = c
else:
r = c
b = x
m = v - c
r += m
g += m
b += m
return r, g, b | [
"def", "hsvToRgb", "(", "self", ",", "hsv", ")", ":", "h", ",", "s", ",", "v", "=", "hsv", "r", "=", "g", "=", "b", "=", "0.0", "c", "=", "s", "*", "v", "h", "/=", "60.0", "x", "=", "c", "*", "(", "1", "-", "abs", "(", "(", "h", "%", ... | https://github.com/RobotLocomotion/drake/blob/0e18a34604c45ed65bc9018a54f7610f91cdad5b/tools/workspace/drake_visualizer/_drake_visualizer_builtin_scripts/show_hydroelastic_contact.py#L435-L466 | |
baidu-research/tensorflow-allreduce | 66d5b855e90b0949e9fa5cca5599fd729a70e874 | tensorflow/python/feature_column/feature_column.py | python | _IndicatorColumn._transform_feature | (self, inputs) | return math_ops.reduce_sum(one_hot_id_tensor, axis=[-2]) | Returns dense `Tensor` representing feature.
Args:
inputs: A `_LazyBuilder` object to access inputs.
Returns:
Transformed feature `Tensor`.
Raises:
ValueError: if input rank is not known at graph building time. | Returns dense `Tensor` representing feature. | [
"Returns", "dense", "Tensor", "representing", "feature", "." ] | def _transform_feature(self, inputs):
"""Returns dense `Tensor` representing feature.
Args:
inputs: A `_LazyBuilder` object to access inputs.
Returns:
Transformed feature `Tensor`.
Raises:
ValueError: if input rank is not known at graph building time.
"""
id_weight_pair = self.categorical_column._get_sparse_tensors(inputs) # pylint: disable=protected-access
id_tensor = id_weight_pair.id_tensor
weight_tensor = id_weight_pair.weight_tensor
# If the underlying column is weighted, return the input as a dense tensor.
if weight_tensor is not None:
weighted_column = sparse_ops.sparse_merge(
sp_ids=id_tensor,
sp_values=weight_tensor,
vocab_size=self._variable_shape[-1])
return sparse_ops.sparse_tensor_to_dense(weighted_column)
dense_id_tensor = sparse_ops.sparse_tensor_to_dense(
id_tensor, default_value=-1)
# One hot must be float for tf.concat reasons since all other inputs to
# input_layer are float32.
one_hot_id_tensor = array_ops.one_hot(
dense_id_tensor,
depth=self._variable_shape[-1],
on_value=1.0,
off_value=0.0)
# Reduce to get a multi-hot per example.
return math_ops.reduce_sum(one_hot_id_tensor, axis=[-2]) | [
"def", "_transform_feature", "(", "self", ",", "inputs", ")", ":", "id_weight_pair", "=", "self", ".", "categorical_column", ".", "_get_sparse_tensors", "(", "inputs", ")", "# pylint: disable=protected-access", "id_tensor", "=", "id_weight_pair", ".", "id_tensor", "we... | https://github.com/baidu-research/tensorflow-allreduce/blob/66d5b855e90b0949e9fa5cca5599fd729a70e874/tensorflow/python/feature_column/feature_column.py#L2455-L2491 | |
pmq20/node-packer | 12c46c6e44fbc14d9ee645ebd17d5296b324f7e0 | current/tools/gyp/pylib/gyp/msvs_emulation.py | python | MsvsSettings.IsEmbedManifest | (self, config) | return embed == 'true' | Returns whether manifest should be linked into binary. | Returns whether manifest should be linked into binary. | [
"Returns", "whether", "manifest", "should", "be", "linked", "into", "binary", "." ] | def IsEmbedManifest(self, config):
"""Returns whether manifest should be linked into binary."""
config = self._TargetConfig(config)
embed = self._Setting(('VCManifestTool', 'EmbedManifest'), config,
default='true')
return embed == 'true' | [
"def", "IsEmbedManifest", "(", "self", ",", "config", ")", ":", "config", "=", "self", ".", "_TargetConfig", "(", "config", ")", "embed", "=", "self", ".", "_Setting", "(", "(", "'VCManifestTool'", ",", "'EmbedManifest'", ")", ",", "config", ",", "default"... | https://github.com/pmq20/node-packer/blob/12c46c6e44fbc14d9ee645ebd17d5296b324f7e0/current/tools/gyp/pylib/gyp/msvs_emulation.py#L787-L792 | |
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/decimal.py | python | Context.subtract | (self, a, b) | Return the difference between the two operands.
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.07'))
Decimal('0.23')
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.30'))
Decimal('0.00')
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('2.07'))
Decimal('-0.77')
>>> ExtendedContext.subtract(8, 5)
Decimal('3')
>>> ExtendedContext.subtract(Decimal(8), 5)
Decimal('3')
>>> ExtendedContext.subtract(8, Decimal(5))
Decimal('3') | Return the difference between the two operands. | [
"Return", "the", "difference", "between", "the", "two", "operands", "." ] | def subtract(self, a, b):
"""Return the difference between the two operands.
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.07'))
Decimal('0.23')
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.30'))
Decimal('0.00')
>>> ExtendedContext.subtract(Decimal('1.3'), Decimal('2.07'))
Decimal('-0.77')
>>> ExtendedContext.subtract(8, 5)
Decimal('3')
>>> ExtendedContext.subtract(Decimal(8), 5)
Decimal('3')
>>> ExtendedContext.subtract(8, Decimal(5))
Decimal('3')
"""
a = _convert_other(a, raiseit=True)
r = a.__sub__(b, context=self)
if r is NotImplemented:
raise TypeError("Unable to convert %s to Decimal" % b)
else:
return r | [
"def", "subtract", "(", "self", ",", "a", ",", "b", ")", ":", "a", "=", "_convert_other", "(", "a", ",", "raiseit", "=", "True", ")", "r", "=", "a", ".", "__sub__", "(", "b", ",", "context", "=", "self", ")", "if", "r", "is", "NotImplemented", ... | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/decimal.py#L5317-L5338 | |
trilinos/Trilinos | 6168be6dd51e35e1cd681e9c4b24433e709df140 | packages/seacas/scripts/exodus2.in.py | python | exodus.put_side_set_dist_fact | (self, id, sideSetDistFact) | exo.put_side_set_dist_fact(side_set_id, ss_dist_facts)
-> store the list of distribution factors for nodes in a side set
input value(s):
<int> node_set_id node set *ID* (not *INDEX*)
<list<float>> ns_dist_facts a list of distribution factors,
e.g. nodal 'weights'
NOTE:
The number of nodes (and distribution factors) in a side set is
the sum of all face nodes. A single node can be counted more
than once, i.e. once for each face it belongs to in the side set. | exo.put_side_set_dist_fact(side_set_id, ss_dist_facts) | [
"exo", ".", "put_side_set_dist_fact", "(", "side_set_id", "ss_dist_facts", ")" ] | def put_side_set_dist_fact(self, id, sideSetDistFact):
"""
exo.put_side_set_dist_fact(side_set_id, ss_dist_facts)
-> store the list of distribution factors for nodes in a side set
input value(s):
<int> node_set_id node set *ID* (not *INDEX*)
<list<float>> ns_dist_facts a list of distribution factors,
e.g. nodal 'weights'
NOTE:
The number of nodes (and distribution factors) in a side set is
the sum of all face nodes. A single node can be counted more
than once, i.e. once for each face it belongs to in the side set.
"""
self.__ex_put_side_set_dist_fact(id, sideSetDistFact) | [
"def", "put_side_set_dist_fact", "(", "self", ",", "id", ",", "sideSetDistFact", ")", ":", "self", ".", "__ex_put_side_set_dist_fact", "(", "id", ",", "sideSetDistFact", ")" ] | https://github.com/trilinos/Trilinos/blob/6168be6dd51e35e1cd681e9c4b24433e709df140/packages/seacas/scripts/exodus2.in.py#L2767-L2783 | |
gnuradio/gnuradio | 09c3c4fa4bfb1a02caac74cb5334dfe065391e3b | gr-utils/modtool/core/base.py | python | ModTool._setup_scm | (self, mode='active') | Initialize source control management. | Initialize source control management. | [
"Initialize", "source", "control", "management", "." ] | def _setup_scm(self, mode='active'):
""" Initialize source control management. """
self.options = SimpleNamespace(scm_mode=self._scm)
if mode == 'active':
self.scm = SCMRepoFactory(
self.options, '.').make_active_scm_manager()
else:
self.scm = SCMRepoFactory(
self.options, '.').make_empty_scm_manager()
if self.scm is None:
logger.error("Error: Can't set up SCM.")
exit(1) | [
"def", "_setup_scm", "(", "self", ",", "mode", "=", "'active'", ")", ":", "self", ".", "options", "=", "SimpleNamespace", "(", "scm_mode", "=", "self", ".", "_scm", ")", "if", "mode", "==", "'active'", ":", "self", ".", "scm", "=", "SCMRepoFactory", "(... | https://github.com/gnuradio/gnuradio/blob/09c3c4fa4bfb1a02caac74cb5334dfe065391e3b/gr-utils/modtool/core/base.py#L164-L175 | |
PaddlePaddle/Paddle | 1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c | python/paddle/fluid/dygraph/dygraph_to_static/partial_program.py | python | PartialProgramLayer._remove_op_call_stack | (self, main_program) | return main_program | Remove op's python call stack with redundant low-level error messages related to
transforamtions to avoid confusing users. | Remove op's python call stack with redundant low-level error messages related to
transforamtions to avoid confusing users. | [
"Remove", "op", "s", "python", "call", "stack", "with", "redundant", "low", "-", "level", "error", "messages", "related", "to", "transforamtions", "to", "avoid", "confusing", "users", "." ] | def _remove_op_call_stack(self, main_program):
"""
Remove op's python call stack with redundant low-level error messages related to
transforamtions to avoid confusing users.
"""
assert isinstance(main_program, framework.Program)
for block in main_program.blocks:
for op in block.ops:
if op.has_attr("op_callstack"):
op._remove_attr("op_callstack")
return main_program | [
"def", "_remove_op_call_stack", "(", "self", ",", "main_program", ")", ":", "assert", "isinstance", "(", "main_program", ",", "framework", ".", "Program", ")", "for", "block", "in", "main_program", ".", "blocks", ":", "for", "op", "in", "block", ".", "ops", ... | https://github.com/PaddlePaddle/Paddle/blob/1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c/python/paddle/fluid/dygraph/dygraph_to_static/partial_program.py#L502-L513 | |
BitMEX/api-connectors | 37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812 | auto-generated/python/swagger_client/models/order_book_l2.py | python | OrderBookL2.__eq__ | (self, other) | return self.__dict__ == other.__dict__ | Returns true if both objects are equal | Returns true if both objects are equal | [
"Returns", "true", "if", "both", "objects", "are", "equal" ] | def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, OrderBookL2):
return False
return self.__dict__ == other.__dict__ | [
"def", "__eq__", "(", "self", ",", "other", ")", ":", "if", "not", "isinstance", "(", "other", ",", "OrderBookL2", ")", ":", "return", "False", "return", "self", ".", "__dict__", "==", "other", ".", "__dict__" ] | https://github.com/BitMEX/api-connectors/blob/37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812/auto-generated/python/swagger_client/models/order_book_l2.py#L213-L218 | |
psi4/psi4 | be533f7f426b6ccc263904e55122899b16663395 | psi4/driver/qcdb/libmintsbasisset.py | python | BasisSet.molecule | (self) | return self.molecule | Molecule this basis is for.
* @return Shared pointer to the molecule for this basis set. | Molecule this basis is for.
* @return Shared pointer to the molecule for this basis set. | [
"Molecule", "this", "basis", "is", "for", ".", "*", "@return", "Shared", "pointer", "to", "the", "molecule", "for", "this", "basis", "set", "." ] | def molecule(self):
"""Molecule this basis is for.
* @return Shared pointer to the molecule for this basis set.
"""
return self.molecule | [
"def", "molecule", "(", "self", ")", ":", "return", "self", ".", "molecule" ] | https://github.com/psi4/psi4/blob/be533f7f426b6ccc263904e55122899b16663395/psi4/driver/qcdb/libmintsbasisset.py#L1044-L1049 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/ipython/py2/IPython/lib/display.py | python | Audio._data_and_metadata | (self) | shortcut for returning metadata with url information, if defined | shortcut for returning metadata with url information, if defined | [
"shortcut", "for", "returning", "metadata", "with", "url", "information", "if", "defined" ] | def _data_and_metadata(self):
"""shortcut for returning metadata with url information, if defined"""
md = {}
if self.url:
md['url'] = self.url
if md:
return self.data, md
else:
return self.data | [
"def", "_data_and_metadata", "(", "self", ")", ":", "md", "=", "{", "}", "if", "self", ".", "url", ":", "md", "[", "'url'", "]", "=", "self", ".", "url", "if", "md", ":", "return", "self", ".", "data", ",", "md", "else", ":", "return", "self", ... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/ipython/py2/IPython/lib/display.py#L162-L170 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/richtext.py | python | RichTextCtrl.GetURLCursor | (*args, **kwargs) | return _richtext.RichTextCtrl_GetURLCursor(*args, **kwargs) | GetURLCursor(self) -> Cursor
Get URL cursor | GetURLCursor(self) -> Cursor | [
"GetURLCursor", "(", "self", ")", "-", ">", "Cursor" ] | def GetURLCursor(*args, **kwargs):
"""
GetURLCursor(self) -> Cursor
Get URL cursor
"""
return _richtext.RichTextCtrl_GetURLCursor(*args, **kwargs) | [
"def", "GetURLCursor", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_richtext", ".", "RichTextCtrl_GetURLCursor", "(", "*", "args", ",", "*", "*", "kwargs", ")" ] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/richtext.py#L3013-L3019 | |
apple/swift-lldb | d74be846ef3e62de946df343e8c234bde93a8912 | scripts/Python/static-binding/lldb.py | python | SBExpressionOptions.GetPlaygroundTransformEnabled | (self) | return _lldb.SBExpressionOptions_GetPlaygroundTransformEnabled(self) | GetPlaygroundTransformEnabled(SBExpressionOptions self) -> bool | GetPlaygroundTransformEnabled(SBExpressionOptions self) -> bool | [
"GetPlaygroundTransformEnabled",
"(",
"SBExpressionOptions",
"self",
")",
"-",
">",
"bool"
] | def GetPlaygroundTransformEnabled(self):
    """GetPlaygroundTransformEnabled(SBExpressionOptions self) -> bool"""
    return _lldb.SBExpressionOptions_GetPlaygroundTransformEnabled(self) | [
"def",
"GetPlaygroundTransformEnabled",
"(",
"self",
")",
":",
"return",
"_lldb",
".",
"SBExpressionOptions_GetPlaygroundTransformEnabled",
"(",
"self",
")"
] | https://github.com/apple/swift-lldb/blob/d74be846ef3e62de946df343e8c234bde93a8912/scripts/Python/static-binding/lldb.py#L5047-L5049 | |
miyosuda/TensorFlowAndroidDemo | 35903e0221aa5f109ea2dbef27f20b52e317f42d | jni-build/jni/include/external/bazel_tools/tools/android/incremental_install.py | python | Adb.InstallMultiple | (self, apk, pkg=None) | Invoke 'adb install-multiple'. | Invoke 'adb install-multiple'. | [
"Invoke",
"adb",
"install",
"-",
"multiple",
"."
] | def InstallMultiple(self, apk, pkg=None):
    """Invoke 'adb install-multiple'."""
    pkg_args = ["-p", pkg] if pkg else []
    ret, stdout, stderr, args = self._Exec(
        ["install-multiple", "-r"] + pkg_args + [apk])
    if "FAILED" in stdout or "FAILED" in stderr:
        raise AdbError(args, ret, stdout, stderr) | [
"def",
"InstallMultiple",
"(",
"self",
",",
"apk",
",",
"pkg",
"=",
"None",
")",
":",
"pkg_args",
"=",
"[",
"\"-p\"",
",",
"pkg",
"]",
"if",
"pkg",
"else",
"[",
"]",
"ret",
",",
"stdout",
",",
"stderr",
",",
"args",
"=",
"self",
".",
"_Exec",
"("... | https://github.com/miyosuda/TensorFlowAndroidDemo/blob/35903e0221aa5f109ea2dbef27f20b52e317f42d/jni-build/jni/include/external/bazel_tools/tools/android/incremental_install.py#L222-L229 | ||
NVIDIA/TensorRT | 42805f078052daad1a98bc5965974fcffaad0960 | tools/Polygraphy/polygraphy/mod/exporter.py | python | deprecate | (remove_in, use_instead, module_name=None, name=None) | return deprecate_impl | Decorator that marks a function or class as deprecated.
When the function or class is used, a warning will be issued.
Args:
remove_in (str):
The version in which the decorated type will be removed.
use_instead (str):
The function or class to use instead.
module_name (str):
The name of the containing module. This will be used to
generate more informative warnings.
Defaults to None.
name (str):
The name of the object being deprecated.
If not provided, this is automatically determined based on the decorated type.
Defaults to None. | Decorator that marks a function or class as deprecated.
When the function or class is used, a warning will be issued. | [
"Decorator",
"that",
"marks",
"a",
"function",
"or",
"class",
"as",
"deprecated",
".",
"When",
"the",
"function",
"or",
"class",
"is",
"used",
"a",
"warning",
"will",
"be",
"issued",
"."
] | def deprecate(remove_in, use_instead, module_name=None, name=None):
    """
    Decorator that marks a function or class as deprecated.
    When the function or class is used, a warning will be issued.
    Args:
        remove_in (str):
            The version in which the decorated type will be removed.
        use_instead (str):
            The function or class to use instead.
        module_name (str):
            The name of the containing module. This will be used to
            generate more informative warnings.
            Defaults to None.
        name (str):
            The name of the object being deprecated.
            If not provided, this is automatically determined based on the decorated type.
            Defaults to None.
    """
    def deprecate_impl(obj):
        if config.INTERNAL_CORRECTNESS_CHECKS and version(polygraphy.__version__) >= version(remove_in):
            G_LOGGER.internal_error("{:} should have been removed in version: {:}".format(obj, remove_in))
        nonlocal name
        name = name or obj.__name__
        if inspect.ismodule(obj):
            class DeprecatedModule(object):
                def __getattr__(self, attr_name):
                    warn_deprecated(name, use_instead, remove_in, module_name)
                    self = obj
                    return getattr(self, attr_name)
                def __setattr__(self, attr_name, value):
                    warn_deprecated(name, use_instead, remove_in, module_name)
                    self = obj
                    return setattr(self, attr_name, value)
            DeprecatedModule.__doc__ = "Deprecated: Use {:} instead".format(use_instead)
            return DeprecatedModule()
        elif inspect.isclass(obj):
            class Deprecated(obj):
                def __init__(self, *args, **kwargs):
                    warn_deprecated(name, use_instead, remove_in, module_name)
                    super().__init__(*args, **kwargs)
            Deprecated.__doc__ = "Deprecated: Use {:} instead".format(use_instead)
            return Deprecated
        elif inspect.isfunction(obj):
            def wrapped(*args, **kwargs):
                warn_deprecated(name, use_instead, remove_in, module_name)
                return obj(*args, **kwargs)
            wrapped.__doc__ = "Deprecated: Use {:} instead".format(use_instead)
            return wrapped
        else:
            G_LOGGER.internal_error("deprecate is not implemented for: {:}".format(obj))
    return deprecate_impl | [
"def",
"deprecate",
"(",
"remove_in",
",",
"use_instead",
",",
"module_name",
"=",
"None",
",",
"name",
"=",
"None",
")",
":",
"def",
"deprecate_impl",
"(",
"obj",
")",
":",
"if",
"config",
".",
"INTERNAL_CORRECTNESS_CHECKS",
"and",
"version",
"(",
"polygrap... | https://github.com/NVIDIA/TensorRT/blob/42805f078052daad1a98bc5965974fcffaad0960/tools/Polygraphy/polygraphy/mod/exporter.py#L223-L285 | |
ricardoquesada/Spidermonkey | 4a75ea2543408bd1b2c515aa95901523eeef7858 | addon-sdk/source/python-lib/mozrunner/__init__.py | python | CLI.__init__ | (self) | Setup command line parser and parse arguments | Setup command line parser and parse arguments | [
"Setup",
"command",
"line",
"parser",
"and",
"parse",
"arguments"
] | def __init__(self):
    """ Setup command line parser and parse arguments """
    self.metadata = self.get_metadata_from_egg()
    self.parser = optparse.OptionParser(version="%prog " + self.metadata["Version"])
    for names, opts in self.parser_options.items():
        self.parser.add_option(*names, **opts)
    (self.options, self.args) = self.parser.parse_args()
    if self.options.info:
        self.print_metadata()
        sys.exit(0)
    # XXX should use action='append' instead of rolling our own
    try:
        self.addons = self.options.addons.split(',')
    except:
        self.addons = [] | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"metadata",
"=",
"self",
".",
"get_metadata_from_egg",
"(",
")",
"self",
".",
"parser",
"=",
"optparse",
".",
"OptionParser",
"(",
"version",
"=",
"\"%prog \"",
"+",
"self",
".",
"metadata",
"[",
"\"... | https://github.com/ricardoquesada/Spidermonkey/blob/4a75ea2543408bd1b2c515aa95901523eeef7858/addon-sdk/source/python-lib/mozrunner/__init__.py#L621-L637 | ||
deepmind/open_spiel | 4ca53bea32bb2875c7385d215424048ae92f78c8 | open_spiel/python/algorithms/dqn.py | python | DQN.copy_with_noise | (self, sigma=0.0, copy_weights=True) | return copied_object | Copies the object and perturbates it with noise.
Args:
sigma: gaussian dropout variance term : Multiplicative noise following
(1+sigma*epsilon), epsilon standard gaussian variable, multiplies each
model weight. sigma=0 means no perturbation.
copy_weights: Boolean determining whether to copy model weights (True) or
just model hyperparameters.
Returns:
Perturbated copy of the model. | Copies the object and perturbates it with noise. | [
"Copies",
"the",
"object",
"and",
"perturbates",
"it",
"with",
"noise",
"."
] | def copy_with_noise(self, sigma=0.0, copy_weights=True):
    """Copies the object and perturbates it with noise.
    Args:
      sigma: gaussian dropout variance term : Multiplicative noise following
        (1+sigma*epsilon), epsilon standard gaussian variable, multiplies each
        model weight. sigma=0 means no perturbation.
      copy_weights: Boolean determining whether to copy model weights (True) or
        just model hyperparameters.
    Returns:
      Perturbated copy of the model.
    """
    _ = self._kwargs.pop("self", None)
    copied_object = DQN(**self._kwargs)
    q_network = getattr(copied_object, "_q_network")
    target_q_network = getattr(copied_object, "_target_q_network")
    if copy_weights:
        copy_weights = tf.group(*[
            va.assign(vb * (1 + sigma * tf.random.normal(vb.shape)))
            for va, vb in zip(q_network.variables, self._q_network.variables)
        ])
        self._session.run(copy_weights)
        copy_target_weights = tf.group(*[
            va.assign(vb * (1 + sigma * tf.random.normal(vb.shape)))
            for va, vb in zip(target_q_network.variables,
                              self._target_q_network.variables)
        ])
        self._session.run(copy_target_weights)
    return copied_object | [
"def",
"copy_with_noise",
"(",
"self",
",",
"sigma",
"=",
"0.0",
",",
"copy_weights",
"=",
"True",
")",
":",
"_",
"=",
"self",
".",
"_kwargs",
".",
"pop",
"(",
"\"self\"",
",",
"None",
")",
"copied_object",
"=",
"DQN",
"(",
"*",
"*",
"self",
".",
"... | https://github.com/deepmind/open_spiel/blob/4ca53bea32bb2875c7385d215424048ae92f78c8/open_spiel/python/algorithms/dqn.py#L436-L468 | |
miyosuda/TensorFlowAndroidMNIST | 7b5a4603d2780a8a2834575706e9001977524007 | jni-build/jni/include/tensorflow/contrib/layers/python/layers/feature_column_ops.py | python | check_feature_columns | (feature_columns) | Checks the validity of the set of FeatureColumns.
Args:
feature_columns: A set of instances or subclasses of FeatureColumn.
Raises:
ValueError: If there are duplicate feature column keys. | Checks the validity of the set of FeatureColumns. | [
"Checks",
"the",
"validity",
"of",
"the",
"set",
"of",
"FeatureColumns",
"."
] | def check_feature_columns(feature_columns):
    """Checks the validity of the set of FeatureColumns.
    Args:
      feature_columns: A set of instances or subclasses of FeatureColumn.
    Raises:
      ValueError: If there are duplicate feature column keys.
    """
    seen_keys = set()
    for f in feature_columns:
        key = f.key
        if key in seen_keys:
            raise ValueError('Duplicate feature column key found for column: {}. '
                             'This usually means that the column is almost identical '
                             'to another column, and one must be discarded.'.format(
                                 f.name))
        seen_keys.add(key) | [
"def",
"check_feature_columns",
"(",
"feature_columns",
")",
":",
"seen_keys",
"=",
"set",
"(",
")",
"for",
"f",
"in",
"feature_columns",
":",
"key",
"=",
"f",
".",
"key",
"if",
"key",
"in",
"seen_keys",
":",
"raise",
"ValueError",
"(",
"'Duplicate feature c... | https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/7b5a4603d2780a8a2834575706e9001977524007/jni-build/jni/include/tensorflow/contrib/layers/python/layers/feature_column_ops.py#L302-L319 | ||
H-uru/Plasma | c2140ea046e82e9c199e257a7f2e7edb42602871 | Scripts/Python/tldnPwrTwrPeriscope.py | python | tldnPwrTwrPeriscope.__del__ | (self) | make sure the dialog is unloaded | make sure the dialog is unloaded | [
"make",
"sure",
"the",
"dialog",
"is",
"unloaded"
] | def __del__(self):
    "make sure the dialog is unloaded"
    PtUnloadDialog(Vignette.value) | [
"def",
"__del__",
"(",
"self",
")",
":",
"PtUnloadDialog",
"(",
"Vignette",
".",
"value",
")"
] | https://github.com/H-uru/Plasma/blob/c2140ea046e82e9c199e257a7f2e7edb42602871/Scripts/Python/tldnPwrTwrPeriscope.py#L239-L241 | ||
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/python/profiler/option_builder.py | python | ProfileOptionBuilder.order_by | (self, attribute) | return self | Order the displayed profiler nodes based on a attribute.
Supported attribute includes micros, bytes, occurrence, params, etc.
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md
Args:
attribute: An attribute the profiler node has.
Returns:
self | Order the displayed profiler nodes based on a attribute. | [
"Order",
"the",
"displayed",
"profiler",
"nodes",
"based",
"on",
"a",
"attribute",
"."
] | def order_by(self, attribute):
    # pylint: disable=line-too-long
    """Order the displayed profiler nodes based on a attribute.
    Supported attribute includes micros, bytes, occurrence, params, etc.
    https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md
    Args:
      attribute: An attribute the profiler node has.
    Returns:
      self
    """
    # pylint: enable=line-too-long
    self._options['order_by'] = attribute
    return self | [
"def",
"order_by",
"(",
"self",
",",
"attribute",
")",
":",
"# pylint: disable=line-too-long",
"# pylint: enable=line-too-long",
"self",
".",
"_options",
"[",
"'order_by'",
"]",
"=",
"attribute",
"return",
"self"
] | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/python/profiler/option_builder.py#L419-L433 | |
apiaryio/snowcrash | b5b39faa85f88ee17459edf39fdc6fe4fc70d2e3 | tools/gyp/pylib/gyp/xcode_emulation.py | python | XcodeArchsDefault._VariableMapping | (self, sdkroot) | Returns the dictionary of variable mapping depending on the SDKROOT. | Returns the dictionary of variable mapping depending on the SDKROOT. | [
"Returns",
"the",
"dictionary",
"of",
"variable",
"mapping",
"depending",
"on",
"the",
"SDKROOT",
"."
] | def _VariableMapping(self, sdkroot):
    """Returns the dictionary of variable mapping depending on the SDKROOT."""
    sdkroot = sdkroot.lower()
    if 'iphoneos' in sdkroot:
        return self._archs['ios']
    elif 'iphonesimulator' in sdkroot:
        return self._archs['iossim']
    else:
        return self._archs['mac'] | [
"def",
"_VariableMapping",
"(",
"self",
",",
"sdkroot",
")",
":",
"sdkroot",
"=",
"sdkroot",
".",
"lower",
"(",
")",
"if",
"'iphoneos'",
"in",
"sdkroot",
":",
"return",
"self",
".",
"_archs",
"[",
"'ios'",
"]",
"elif",
"'iphonesimulator'",
"in",
"sdkroot",... | https://github.com/apiaryio/snowcrash/blob/b5b39faa85f88ee17459edf39fdc6fe4fc70d2e3/tools/gyp/pylib/gyp/xcode_emulation.py#L53-L61 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/pandas/core/groupby/grouper.py | python | Grouper._get_grouper | (self, obj, validate: bool = True) | return self.binner, self.grouper, self.obj | Parameters
----------
obj : the subject object
validate : boolean, default True
if True, validate the grouper
Returns
-------
a tuple of binner, grouper, obj (possibly sorted) | Parameters
----------
obj : the subject object
validate : boolean, default True
if True, validate the grouper | [
"Parameters",
"----------",
"obj",
":",
"the",
"subject",
"object",
"validate",
":",
"boolean",
"default",
"True",
"if",
"True",
"validate",
"the",
"grouper"
] | def _get_grouper(self, obj, validate: bool = True):
    """
    Parameters
    ----------
    obj : the subject object
    validate : boolean, default True
        if True, validate the grouper
    Returns
    -------
    a tuple of binner, grouper, obj (possibly sorted)
    """
    self._set_grouper(obj)
    self.grouper, _, self.obj = get_grouper(
        self.obj,
        [self.key],
        axis=self.axis,
        level=self.level,
        sort=self.sort,
        validate=validate,
    )
    return self.binner, self.grouper, self.obj | [
"def",
"_get_grouper",
"(",
"self",
",",
"obj",
",",
"validate",
":",
"bool",
"=",
"True",
")",
":",
"self",
".",
"_set_grouper",
"(",
"obj",
")",
"self",
".",
"grouper",
",",
"_",
",",
"self",
".",
"obj",
"=",
"get_grouper",
"(",
"self",
".",
"obj... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/pandas/core/groupby/grouper.py#L120-L142 | |
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/x86/toolchain/lib/python2.7/SocketServer.py | python | BaseServer.server_close | (self) | Called to clean-up the server.
May be overridden. | Called to clean-up the server. | [
"Called",
"to",
"clean",
"-",
"up",
"the",
"server",
"."
] | def server_close(self):
    """Called to clean-up the server.
    May be overridden.
    """
    pass | [
"def",
"server_close",
"(",
"self",
")",
":",
"pass"
] | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/SocketServer.py#L324-L330 | ||
mongodb/mongo | d8ff665343ad29cf286ee2cf4a1960d29371937b | buildscripts/idl/idl/errors.py | python | ParserContext.add_empty_access_check | (self, location) | Add an error about specifying one of ignore, none, simple or complex in an access check. | Add an error about specifying one of ignore, none, simple or complex in an access check. | [
"Add",
"an",
"error",
"about",
"specifying",
"one",
"of",
"ignore",
"none",
"simple",
"or",
"complex",
"in",
"an",
"access",
"check",
"."
] | def add_empty_access_check(self, location):
    # type: (common.SourceLocation) -> None
    """Add an error about specifying one of ignore, none, simple or complex in an access check."""
    # pylint: disable=invalid-name
    self._add_error(
        location, ERROR_ID_EMPTY_ACCESS_CHECK,
        "Must one and only one of either a 'ignore', 'none', 'simple', or 'complex' in an access_check."
    ) | [
"def",
"add_empty_access_check",
"(",
"self",
",",
"location",
")",
":",
"# type: (common.SourceLocation) -> None",
"# pylint: disable=invalid-name",
"self",
".",
"_add_error",
"(",
"location",
",",
"ERROR_ID_EMPTY_ACCESS_CHECK",
",",
"\"Must one and only one of either a 'ignore'... | https://github.com/mongodb/mongo/blob/d8ff665343ad29cf286ee2cf4a1960d29371937b/buildscripts/idl/idl/errors.py#L956-L963 | ||
Vipermdl/OCR_detection_IC15 | 8eebd353d6fac97f5832a138d7af3bd3071670db | base/base_data_loader.py | python | BaseDataLoader.split_validation | (self) | return valid_data_loader | Split validation data from data loader based on self.config['validation'] | Split validation data from data loader based on self.config['validation'] | [
"Split",
"validation",
"data",
"from",
"data",
"loader",
"based",
"on",
"self",
".",
"config",
"[",
"validation",
"]"
] | def split_validation(self):
    """
    Split validation data from data loader based on self.config['validation']
    """
    validation_split = self.config['validation']['validation_split']
    shuffle = self.config['validation']['shuffle']
    if validation_split == 0.0:
        return None
    if shuffle:
        self._shuffle_data()
    valid_data_loader = copy(self)
    split = int(self._n_samples() * validation_split)
    packed = self._pack_data()
    train_data = self._unpack_data(packed[split:])
    val_data = self._unpack_data(packed[:split])
    valid_data_loader._update_data(val_data)
    self._update_data(train_data)
    return valid_data_loader | [
"def",
"split_validation",
"(",
"self",
")",
":",
"validation_split",
"=",
"self",
".",
"config",
"[",
"'validation'",
"]",
"[",
"'validation_split'",
"]",
"shuffle",
"=",
"self",
".",
"config",
"[",
"'validation'",
"]",
"[",
"'shuffle'",
"]",
"if",
"validat... | https://github.com/Vipermdl/OCR_detection_IC15/blob/8eebd353d6fac97f5832a138d7af3bd3071670db/base/base_data_loader.py#L84-L101 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/AWSPythonSDK/1.5.8/docutils/utils/math/math2html.py | python | Globable.globincluding | (self, magicchar) | return glob | Glob a bit of text up to (including) the magic char. | Glob a bit of text up to (including) the magic char. | [
"Glob",
"a",
"bit",
"of",
"text",
"up",
"to",
"(",
"including",
")",
"the",
"magic",
"char",
"."
] | def globincluding(self, magicchar):
    "Glob a bit of text up to (including) the magic char."
    glob = self.glob(lambda: self.current() != magicchar) + magicchar
    self.skip(magicchar)
    return glob | [
"def",
"globincluding",
"(",
"self",
",",
"magicchar",
")",
":",
"glob",
"=",
"self",
".",
"glob",
"(",
"lambda",
":",
"self",
".",
"current",
"(",
")",
"!=",
"magicchar",
")",
"+",
"magicchar",
"self",
".",
"skip",
"(",
"magicchar",
")",
"return",
"... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/AWSPythonSDK/1.5.8/docutils/utils/math/math2html.py#L1906-L1910 | |
mhammond/pywin32 | 44afd86ba8485194df93234639243252deeb40d5 | com/win32comext/axdebug/gateways.py | python | DebugCodeContext.SetBreakPoint | (self, bps) | bps -- an integer with flags. | bps -- an integer with flags. | [
"bps",
"--",
"an",
"integer",
"with",
"flags",
"."
] | def SetBreakPoint(self, bps):
    """bps -- an integer with flags."""
    RaiseNotImpl("SetBreakPoint") | [
"def",
"SetBreakPoint",
"(",
"self",
",",
"bps",
")",
":",
"RaiseNotImpl",
"(",
"\"SetBreakPoint\"",
")"
] | https://github.com/mhammond/pywin32/blob/44afd86ba8485194df93234639243252deeb40d5/com/win32comext/axdebug/gateways.py#L315-L317 | ||
dmlc/xgboost | 2775c2a1abd4b5b759ff517617434c8b9aeb4cc0 | python-package/xgboost/core.py | python | DMatrix.get_base_margin | (self) | return self.get_float_info('base_margin') | Get the base margin of the DMatrix.
Returns
-------
base_margin | Get the base margin of the DMatrix. | [
"Get",
"the",
"base",
"margin",
"of",
"the",
"DMatrix",
"."
] | def get_base_margin(self) -> np.ndarray:
    """Get the base margin of the DMatrix.
    Returns
    -------
    base_margin
    """
    return self.get_float_info('base_margin') | [
"def",
"get_base_margin",
"(",
"self",
")",
"->",
"np",
".",
"ndarray",
":",
"return",
"self",
".",
"get_float_info",
"(",
"'base_margin'",
")"
] | https://github.com/dmlc/xgboost/blob/2775c2a1abd4b5b759ff517617434c8b9aeb4cc0/python-package/xgboost/core.py#L885-L892 | |
FreeCAD/FreeCAD | ba42231b9c6889b89e064d6d563448ed81e376ec | src/Mod/Draft/draftobjects/label.py | python | Label.onDocumentRestored | (self, obj) | Execute code when the document is restored.
It calls the parent class to add missing annotation properties. | Execute code when the document is restored. | [
"Execute",
"code",
"when",
"the",
"document",
"is",
"restored",
"."
] | def onDocumentRestored(self, obj):
    """Execute code when the document is restored.
    It calls the parent class to add missing annotation properties.
    """
    super(Label, self).onDocumentRestored(obj) | [
"def",
"onDocumentRestored",
"(",
"self",
",",
"obj",
")",
":",
"super",
"(",
"Label",
",",
"self",
")",
".",
"onDocumentRestored",
"(",
"obj",
")"
] | https://github.com/FreeCAD/FreeCAD/blob/ba42231b9c6889b89e064d6d563448ed81e376ec/src/Mod/Draft/draftobjects/label.py#L222-L227 | ||
KhronosGroup/SPIR | f33c27876d9f3d5810162b60fa89cc13d2b55725 | bindings/python/clang/cindex.py | python | Token.kind | (self) | return TokenKind.from_value(conf.lib.clang_getTokenKind(self)) | Obtain the TokenKind of the current token. | Obtain the TokenKind of the current token. | [
"Obtain",
"the",
"TokenKind",
"of",
"the",
"current",
"token",
"."
] | def kind(self):
    """Obtain the TokenKind of the current token."""
    return TokenKind.from_value(conf.lib.clang_getTokenKind(self)) | [
"def",
"kind",
"(",
"self",
")",
":",
"return",
"TokenKind",
".",
"from_value",
"(",
"conf",
".",
"lib",
".",
"clang_getTokenKind",
"(",
"self",
")",
")"
] | https://github.com/KhronosGroup/SPIR/blob/f33c27876d9f3d5810162b60fa89cc13d2b55725/bindings/python/clang/cindex.py#L2420-L2422 | |
natanielruiz/android-yolo | 1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f | jni-build/jni/include/tensorflow/python/training/supervisor.py | python | Supervisor.summary_computed | (self, sess, summary, global_step=None) | Indicate that a summary was computed.
Args:
sess: A `Session` object.
summary: A Summary proto, or a string holding a serialized summary proto.
global_step: Int. global step this summary is associated with. If `None`,
it will try to fetch the current step.
Raises:
TypeError: if 'summary' is not a Summary proto or a string.
RuntimeError: if the Supervisor was created without a `logdir`. | Indicate that a summary was computed. | [
"Indicate",
"that",
"a",
"summary",
"was",
"computed",
"."
] | def summary_computed(self, sess, summary, global_step=None):
    """Indicate that a summary was computed.
    Args:
      sess: A `Session` object.
      summary: A Summary proto, or a string holding a serialized summary proto.
      global_step: Int. global step this summary is associated with. If `None`,
        it will try to fetch the current step.
    Raises:
      TypeError: if 'summary' is not a Summary proto or a string.
      RuntimeError: if the Supervisor was created without a `logdir`.
    """
    if not self._summary_writer:
        raise RuntimeError("Writing a summary requires a summary writer.")
    if global_step is None and self.global_step is not None:
        global_step = training_util.global_step(sess, self.global_step)
    self._summary_writer.add_summary(summary, global_step) | [
"def",
"summary_computed",
"(",
"self",
",",
"sess",
",",
"summary",
",",
"global_step",
"=",
"None",
")",
":",
"if",
"not",
"self",
".",
"_summary_writer",
":",
"raise",
"RuntimeError",
"(",
"\"Writing a summary requires a summary writer.\"",
")",
"if",
"global_s... | https://github.com/natanielruiz/android-yolo/blob/1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f/jni-build/jni/include/tensorflow/python/training/supervisor.py#L805-L822 | ||
Kitware/VTK | 5b4df4d90a4f31194d97d3c639dd38ea8f81e8b8 | Wrapping/Python/vtkmodules/wx/wxVTKRenderWindow.py | python | wxVTKRenderWindow.SetDesiredUpdateRate | (self, rate) | Mirrors the method with the same name in
vtkRenderWindowInteractor. | Mirrors the method with the same name in
vtkRenderWindowInteractor. | [
"Mirrors",
"the",
"method",
"with",
"the",
"same",
"name",
"in",
"vtkRenderWindowInteractor",
"."
] | def SetDesiredUpdateRate(self, rate):
    """Mirrors the method with the same name in
    vtkRenderWindowInteractor.
    """
    self._DesiredUpdateRate = rate | [
"def",
"SetDesiredUpdateRate",
"(",
"self",
",",
"rate",
")",
":",
"self",
".",
"_DesiredUpdateRate",
"=",
"rate"
] | https://github.com/Kitware/VTK/blob/5b4df4d90a4f31194d97d3c639dd38ea8f81e8b8/Wrapping/Python/vtkmodules/wx/wxVTKRenderWindow.py#L242-L246 | ||
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/_gdi.py | python | DC.DrawRotatedTextPoint | (*args, **kwargs) | return _gdi_.DC_DrawRotatedTextPoint(*args, **kwargs) | DrawRotatedTextPoint(self, String text, Point pt, double angle)
Draws the text rotated by *angle* degrees, if supported by the platform.
**NOTE**: Under Win9x only TrueType fonts can be drawn by this
function. In particular, a font different from ``wx.NORMAL_FONT``
should be used as the it is not normally a TrueType
font. ``wx.SWISS_FONT`` is an example of a font which is. | DrawRotatedTextPoint(self, String text, Point pt, double angle) | [
"DrawRotatedTextPoint",
"(",
"self",
"String",
"text",
"Point",
"pt",
"double",
"angle",
")"
] | def DrawRotatedTextPoint(*args, **kwargs):
    """
    DrawRotatedTextPoint(self, String text, Point pt, double angle)
    Draws the text rotated by *angle* degrees, if supported by the platform.
    **NOTE**: Under Win9x only TrueType fonts can be drawn by this
    function. In particular, a font different from ``wx.NORMAL_FONT``
    should be used as the it is not normally a TrueType
    font. ``wx.SWISS_FONT`` is an example of a font which is.
    """
    return _gdi_.DC_DrawRotatedTextPoint(*args, **kwargs) | [
"def",
"DrawRotatedTextPoint",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_gdi_",
".",
"DC_DrawRotatedTextPoint",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_gdi.py#L3758-L3769 | |
rrwick/Unicycler | 96ffea71e3a78d63ade19d6124946773e65cf129 | unicycler/assembly_graph.py | python | split_path_multiple | (path, segs) | return path_parts | Like split_path, but segs is a list, all of which split the path. | Like split_path, but segs is a list, all of which split the path. | [
"Like",
"split_path",
"but",
"segs",
"is",
"a",
"list",
"all",
"of",
"which",
"split",
"the",
"path",
"."
] | def split_path_multiple(path, segs):
    """
    Like split_path, but segs is a list, all of which split the path.
    """
    path_parts = [path]
    for seg in segs:
        new_path_parts = []
        for part in path_parts:
            new_path_parts += split_path(part, seg)
        path_parts = new_path_parts
    return path_parts | [
"def",
"split_path_multiple",
"(",
"path",
",",
"segs",
")",
":",
"path_parts",
"=",
"[",
"path",
"]",
"for",
"seg",
"in",
"segs",
":",
"new_path_parts",
"=",
"[",
"]",
"for",
"part",
"in",
"path_parts",
":",
"new_path_parts",
"+=",
"split_path",
"(",
"p... | https://github.com/rrwick/Unicycler/blob/96ffea71e3a78d63ade19d6124946773e65cf129/unicycler/assembly_graph.py#L2626-L2636 | |
ChromiumWebApps/chromium | c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7 | tools/bisect-perf-regression.py | python | BisectPerformanceMetrics.RunPostSync | (self, depot) | return True | Performs any work after syncing.
Returns:
True if successful. | Performs any work after syncing. | [
"Performs",
"any",
"work",
"after",
"syncing",
"."
] | def RunPostSync(self, depot):
    """Performs any work after syncing.
    Returns:
      True if successful.
    """
    if self.opts.target_platform == 'android':
        if not bisect_utils.SetupAndroidBuildEnvironment(self.opts,
                                                         path_to_src=self.src_cwd):
            return False
    if depot == 'cros':
        return self.CreateCrosChroot()
    else:
        return self.RunGClientHooks()
    return True | [
"def",
"RunPostSync",
"(",
"self",
",",
"depot",
")",
":",
"if",
"self",
".",
"opts",
".",
"target_platform",
"==",
"'android'",
":",
"if",
"not",
"bisect_utils",
".",
"SetupAndroidBuildEnvironment",
"(",
"self",
".",
"opts",
",",
"path_to_src",
"=",
"self",... | https://github.com/ChromiumWebApps/chromium/blob/c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7/tools/bisect-perf-regression.py#L1516-L1531 | |
thalium/icebox | 99d147d5b9269222225443ce171b4fd46d8985d4 | third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py | python | xmlNode.xpointerNewContext | (self, doc, origin) | return __tmp | Create a new XPointer context | Create a new XPointer context | [
"Create",
"a",
"new",
"XPointer",
"context"
] | def xpointerNewContext(self, doc, origin):
    """Create a new XPointer context """
    if doc is None: doc__o = None
    else: doc__o = doc._o
    if origin is None: origin__o = None
    else: origin__o = origin._o
    ret = libxml2mod.xmlXPtrNewContext(doc__o, self._o, origin__o)
    if ret is None:raise treeError('xmlXPtrNewContext() failed')
    __tmp = xpathContext(_obj=ret)
    return __tmp | [
"def",
"xpointerNewContext",
"(",
"self",
",",
"doc",
",",
"origin",
")",
":",
"if",
"doc",
"is",
"None",
":",
"doc__o",
"=",
"None",
"else",
":",
"doc__o",
"=",
"doc",
".",
"_o",
"if",
"origin",
"is",
"None",
":",
"origin__o",
"=",
"None",
"else",
... | https://github.com/thalium/icebox/blob/99d147d5b9269222225443ce171b4fd46d8985d4/third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py#L3926-L3935 | |
mongodb/mongo | d8ff665343ad29cf286ee2cf4a1960d29371937b | buildscripts/resmokelib/run/__init__.py | python | RunPlugin._add_run | (cls, subparsers) | Create and add the parser for the Run subcommand. | Create and add the parser for the Run subcommand. | [
"Create",
"and",
"add",
"the",
"parser",
"for",
"the",
"Run",
"subcommand",
"."
] | def _add_run(cls, subparsers): # pylint: disable=too-many-statements
"""Create and add the parser for the Run subcommand."""
parser = subparsers.add_parser("run", help="Runs the specified tests.")
parser.set_defaults(dry_run="off", shuffle="auto", stagger_jobs="off",
majority_read_concern="on")
parser.add_argument("test_files", metavar="TEST_FILES", nargs="*",
help="Explicit test files to run")
parser.add_argument(
"--suites", dest="suite_files", metavar="SUITE1,SUITE2",
help=("Comma separated list of YAML files that each specify the configuration"
" of a suite. If the file is located in the resmokeconfig/suites/"
" directory, then the basename without the .yml extension can be"
" specified, e.g. 'core'. If a list of files is passed in as"
" positional arguments, they will be run using the suites'"
" configurations."))
parser.add_argument("--configDir", dest="config_dir", metavar="CONFIG_DIR",
help="Directory to search for resmoke configuration files")
parser.add_argument("--installDir", dest="install_dir", metavar="INSTALL_DIR",
help="Directory to search for MongoDB binaries")
parser.add_argument(
"--alwaysUseLogFiles", dest="always_use_log_files", action="store_true",
help=("Logs server output to a file located in the db path and prevents the"
" cleaning of dbpaths after testing. Note that conflicting options"
" passed in from test files may cause an error."))
parser.add_argument(
"--basePort", dest="base_port", metavar="PORT",
help=("The starting port number to use for mongod and mongos processes"
" spawned by resmoke.py or the tests themselves. Each fixture and Job"
" allocates a contiguous range of ports."))
parser.add_argument("--continueOnFailure", action="store_true", dest="continue_on_failure",
help="Executes all tests in all suites, even if some of them fail.")
parser.add_argument("--dbtest", dest="dbtest_executable", metavar="PATH",
help="The path to the dbtest executable for resmoke to use.")
parser.add_argument(
"--excludeWithAnyTags", action="append", dest="exclude_with_any_tags",
metavar="TAG1,TAG2",
help=("Comma separated list of tags. Any jstest that contains any of the"
" specified tags will be excluded from any suites that are run."
" The tag '{}' is implicitly part of this list.".format(config.EXCLUDED_TAG)))
parser.add_argument("--genny", dest="genny_executable", metavar="PATH",
help="The path to the genny executable for resmoke to use.")
parser.add_argument(
"--spawnUsing", dest="spawn_using", choices=("python", "jasper"),
help=("Allows you to spawn resmoke processes using python or Jasper."
"Defaults to python. Options are 'python' or 'jasper'."))
parser.add_argument(
"--includeWithAnyTags", action="append", dest="include_with_any_tags",
metavar="TAG1,TAG2",
help=("Comma separated list of tags. For the jstest portion of the suite(s),"
" only tests which have at least one of the specified tags will be"
" run."))
parser.add_argument(
"--includeWithAllTags", action="append", dest="include_with_all_tags",
metavar="TAG1,TAG2",
            help=("Comma separated list of tags. For the jstest portion of the suite(s),"
                  " tests that have all of the specified tags will be run."))
parser.add_argument("-n", action="store_const", const="tests", dest="dry_run",
help="Outputs the tests that would be run.")
parser.add_argument(
"--recordWith", dest="undo_recorder_path", metavar="PATH",
            help="Record execution of mongo, mongod and mongos processes;"
            " specify the path to UndoDB's 'live-record' binary.")
# TODO: add support for --dryRun=commands
parser.add_argument(
"--dryRun", action="store", dest="dry_run", choices=("off", "tests"), metavar="MODE",
help=("Instead of running the tests, outputs the tests that would be run"
" (if MODE=tests). Defaults to MODE=%(default)s."))
parser.add_argument(
"-j", "--jobs", type=int, dest="jobs", metavar="JOBS",
help=("The number of Job instances to use. Each instance will receive its"
" own MongoDB deployment to dispatch tests to."))
parser.set_defaults(logger_file="console")
parser.add_argument(
"--mongocryptdSetParameters", dest="mongocryptd_set_parameters", action="append",
metavar="{key1: value1, key2: value2, ..., keyN: valueN}",
help=("Passes one or more --setParameter options to all mongocryptd processes"
" started by resmoke.py. The argument is specified as bracketed YAML -"
" i.e. JSON with support for single quoted and unquoted keys."))
parser.add_argument("--nojournal", action="store_true", dest="no_journal",
help="Disables journaling for all mongod's.")
parser.add_argument("--numClientsPerFixture", type=int, dest="num_clients_per_fixture",
help="Number of clients running tests per fixture.")
parser.add_argument(
"--shellConnString", dest="shell_conn_string", metavar="CONN_STRING",
help="Overrides the default fixture and connects with a mongodb:// connection"
" string to an existing MongoDB cluster instead. This is useful for"
" connecting to a MongoDB deployment started outside of resmoke.py including"
" one running in a debugger.")
parser.add_argument(
"--shellPort", dest="shell_port", metavar="PORT",
help="Convenience form of --shellConnString for connecting to an"
" existing MongoDB cluster with the URL mongodb://localhost:[PORT]."
" This is useful for connecting to a server running in a debugger.")
parser.add_argument("--repeat", "--repeatSuites", type=int, dest="repeat_suites",
metavar="N",
help="Repeats the given suite(s) N times, or until one fails.")
parser.add_argument(
"--repeatTests", type=int, dest="repeat_tests", metavar="N",
help="Repeats the tests inside each suite N times. This applies to tests"
" defined in the suite configuration as well as tests defined on the command"
" line.")
parser.add_argument(
"--repeatTestsMax", type=int, dest="repeat_tests_max", metavar="N",
help="Repeats the tests inside each suite no more than N time when"
" --repeatTestsSecs is specified. This applies to tests defined in the suite"
" configuration as well as tests defined on the command line.")
parser.add_argument(
"--repeatTestsMin", type=int, dest="repeat_tests_min", metavar="N",
help="Repeats the tests inside each suite at least N times when"
" --repeatTestsSecs is specified. This applies to tests defined in the suite"
" configuration as well as tests defined on the command line.")
parser.add_argument(
"--repeatTestsSecs", type=float, dest="repeat_tests_secs", metavar="SECONDS",
help="Repeats the tests inside each suite this amount of time. Note that"
" this option is mutually exclusive with --repeatTests. This applies to"
" tests defined in the suite configuration as well as tests defined on the"
" command line.")
parser.add_argument(
"--seed", type=int, dest="seed", metavar="SEED",
help=("Seed for the random number generator. Useful in combination with the"
" --shuffle option for producing a consistent test execution order."))
parser.add_argument("--mongo", dest="mongo_executable", metavar="PATH",
help="The path to the mongo shell executable for resmoke.py to use.")
parser.add_argument(
"--shuffle", action="store_const", const="on", dest="shuffle",
help=("Randomizes the order in which tests are executed. This is equivalent"
" to specifying --shuffleMode=on."))
parser.add_argument(
"--shuffleMode", action="store", dest="shuffle", choices=("on", "off", "auto"),
metavar="ON|OFF|AUTO",
help=("Controls whether to randomize the order in which tests are executed."
" Defaults to auto when not supplied. auto enables randomization in"
" all cases except when the number of jobs requested is 1."))
parser.add_argument(
"--executor", dest="executor_file",
            help="OBSOLETE: Superseded by --suites; specify --suites=SUITE path/to/test"
" to run a particular test under a particular suite configuration.")
parser.add_argument(
"--linearChain", action="store", dest="linear_chain", choices=("on", "off"),
metavar="ON|OFF", help="Enable or disable linear chaining for tests using "
"ReplicaSetFixture.")
parser.add_argument(
"--backupOnRestartDir", action="store", type=str, dest="backup_on_restart_dir",
metavar="DIRECTORY", help=
"Every time a mongod restarts on existing data files, the data files will be backed up underneath the input directory."
)
parser.add_argument(
"--replayFile", action="store", type=str, dest="replay_file", metavar="FILE", help=
"Run the tests listed in the input file. This is an alternative to passing test files as positional arguments on the command line. Each line in the file must be a path to a test file relative to the current working directory. A short-hand for `resmoke run --replay_file foo` is `resmoke run @foo`."
)
parser.add_argument(
"--mrlog", action="store_const", const="mrlog", dest="mrlog", help=
"Pipe output through the `mrlog` binary for converting logv2 logs to human readable logs."
)
parser.add_argument(
"--userFriendlyOutput", action="store", type=str, dest="user_friendly_output",
metavar="FILE", help=
"Have resmoke redirect all output to FILE. Additionally, stdout will contain lines that typically indicate that the test is making progress, or an error has happened. If `mrlog` is in the path it will be used. `tee` and `egrep` must be in the path."
)
parser.add_argument(
"--runAllFeatureFlagTests", dest="run_all_feature_flag_tests", action="store_true",
help=
            "Run MongoDB servers with all feature flags enabled and only run tests tagged with these feature flags"
)
parser.add_argument(
"--runAllFeatureFlagsNoTests", dest="run_all_feature_flags_no_tests",
action="store_true", help=
"Run MongoDB servers with all feature flags enabled but don't run any tests tagged with these feature flags; used for multiversion suites"
)
parser.add_argument("--additionalFeatureFlags", dest="additional_feature_flags",
action="append", metavar="featureFlag1, featureFlag2, ...",
help="Additional feature flags")
parser.add_argument("--maxTestQueueSize", type=int, dest="max_test_queue_size",
help=argparse.SUPPRESS)
mongodb_server_options = parser.add_argument_group(
title=_MONGODB_SERVER_OPTIONS_TITLE,
description=("Options related to starting a MongoDB cluster that are forwarded from"
" resmoke.py to the fixture."))
mongodb_server_options.add_argument(
"--mongod", dest="mongod_executable", metavar="PATH",
help="The path to the mongod executable for resmoke.py to use.")
mongodb_server_options.add_argument(
"--mongos", dest="mongos_executable", metavar="PATH",
help="The path to the mongos executable for resmoke.py to use.")
mongodb_server_options.add_argument(
"--mongodSetParameters", dest="mongod_set_parameters", action="append",
metavar="{key1: value1, key2: value2, ..., keyN: valueN}",
help=("Passes one or more --setParameter options to all mongod processes"
" started by resmoke.py. The argument is specified as bracketed YAML -"
" i.e. JSON with support for single quoted and unquoted keys."))
mongodb_server_options.add_argument(
"--mongosSetParameters", dest="mongos_set_parameters", action="append",
metavar="{key1: value1, key2: value2, ..., keyN: valueN}",
help=("Passes one or more --setParameter options to all mongos processes"
" started by resmoke.py. The argument is specified as bracketed YAML -"
" i.e. JSON with support for single quoted and unquoted keys."))
mongodb_server_options.add_argument(
"--dbpathPrefix", dest="dbpath_prefix", metavar="PATH",
help=("The directory which will contain the dbpaths of any mongod's started"
" by resmoke.py or the tests themselves."))
mongodb_server_options.add_argument(
            "--majorityReadConcern", action="store", dest="majority_read_concern",
            choices=("on", "off"), metavar="ON|OFF",
            help=("Enable or disable majority read concern support."
                  " Defaults to %(default)s."))
mongodb_server_options.add_argument("--flowControl", action="store", dest="flow_control",
choices=("on", "off"), metavar="ON|OFF",
help=("Enable or disable flow control."))
mongodb_server_options.add_argument("--flowControlTicketOverride", type=int, action="store",
dest="flow_control_tickets", metavar="TICKET_OVERRIDE",
help=("Number of tickets available for flow control."))
mongodb_server_options.add_argument("--storageEngine", dest="storage_engine",
metavar="ENGINE",
help="The storage engine used by dbtests and jstests.")
mongodb_server_options.add_argument(
"--storageEngineCacheSizeGB", dest="storage_engine_cache_size_gb", metavar="CONFIG",
help="Sets the storage engine cache size configuration"
" setting for all mongod's.")
mongodb_server_options.add_argument(
"--numReplSetNodes", type=int, dest="num_replset_nodes", metavar="N",
help="The number of nodes to initialize per ReplicaSetFixture. This is also "
"used to indicate the number of replica set members per shard in a "
"ShardedClusterFixture.")
mongodb_server_options.add_argument(
"--numShards", type=int, dest="num_shards", metavar="N",
help="The number of shards to use in a ShardedClusterFixture.")
mongodb_server_options.add_argument(
"--wiredTigerCollectionConfigString", dest="wt_coll_config", metavar="CONFIG",
help="Sets the WiredTiger collection configuration setting for all mongod's.")
mongodb_server_options.add_argument(
"--wiredTigerEngineConfigString", dest="wt_engine_config", metavar="CONFIG",
help="Sets the WiredTiger engine configuration setting for all mongod's.")
mongodb_server_options.add_argument(
"--wiredTigerIndexConfigString", dest="wt_index_config", metavar="CONFIG",
help="Sets the WiredTiger index configuration setting for all mongod's.")
mongodb_server_options.add_argument("--transportLayer", dest="transport_layer",
metavar="TRANSPORT",
help="The transport layer used by jstests")
mongodb_server_options.add_argument(
"--fuzzMongodConfigs", dest="fuzz_mongod_configs", action="store_true",
help="Will randomly choose storage configs that were not specified.")
mongodb_server_options.add_argument("--configFuzzSeed", dest="config_fuzz_seed",
metavar="PATH",
help="Sets the seed used by storage config fuzzer")
internal_options = parser.add_argument_group(
title=_INTERNAL_OPTIONS_TITLE,
description=("Internal options for advanced users and resmoke developers."
" These are not meant to be invoked when running resmoke locally."))
internal_options.add_argument(
"--log", dest="logger_file", metavar="LOGGER",
help=("A YAML file that specifies the logging configuration. If the file is"
" located in the resmokeconfig/suites/ directory, then the basename"
" without the .yml extension can be specified, e.g. 'console'."))
# Used for testing resmoke.
#
# `is_inner_level`:
        #   Marks the resmoke process as a child of a parent resmoke process, meaning that
        #   it was started by a shell process which itself was started by a top-level
        #   resmoke process. This is used to ensure the hang-analyzer is called properly.
#
# `test_archival`:
# Allows unit testing of resmoke's archival feature where we write out the names
# of the files to be archived, instead of doing the actual archival, which can
# be time and resource intensive.
#
# `test_analysis`:
# When specified, the hang-analyzer writes out the pids it will analyze without
# actually running analysis, which can be time and resource intensive.
internal_options.add_argument("--internalParam", action="append", dest="internal_params",
help=argparse.SUPPRESS)
internal_options.add_argument("--perfReportFile", dest="perf_report_file",
metavar="PERF_REPORT",
help="Writes a JSON file with performance test results.")
internal_options.add_argument("--cedarReportFile", dest="cedar_report_file",
metavar="CEDAR_REPORT",
help="Writes a JSON file with performance test results.")
internal_options.add_argument(
"--reportFailureStatus", action="store", dest="report_failure_status",
choices=("fail", "silentfail"), metavar="STATUS",
help="Controls if the test failure status should be reported as failed"
" or be silently ignored (STATUS=silentfail). Dynamic test failures will"
" never be silently ignored. Defaults to STATUS=%(default)s.")
internal_options.add_argument(
"--reportFile", dest="report_file", metavar="REPORT",
help="Writes a JSON file with test status and timing information.")
internal_options.add_argument(
"--staggerJobs", action="store", dest="stagger_jobs", choices=("on", "off"),
metavar="ON|OFF", help=("Enables or disables the stagger of launching resmoke jobs."
" Defaults to %(default)s."))
internal_options.add_argument(
"--exportMongodConfig", dest="export_mongod_config", choices=("off", "regular",
"detailed"),
help=("Exports a yaml containing the history of each mongod config option to"
" {nodeName}_config.yml."
" Defaults to 'off'. A 'detailed' export will include locations of accesses."))
evergreen_options = parser.add_argument_group(
title=_EVERGREEN_ARGUMENT_TITLE, description=(
"Options used to propagate information about the Evergreen task running this"
" script."))
evergreen_options.add_argument("--evergreenURL", dest="evergreen_url",
metavar="EVERGREEN_URL",
help=("The URL of the Evergreen service."))
evergreen_options.add_argument(
"--archiveLimitMb", type=int, dest="archive_limit_mb", metavar="ARCHIVE_LIMIT_MB",
help=("Sets the limit (in MB) for archived files to S3. A value of 0"
" indicates there is no limit."))
evergreen_options.add_argument(
"--archiveLimitTests", type=int, dest="archive_limit_tests",
metavar="ARCHIVE_LIMIT_TESTS",
help=("Sets the maximum number of tests to archive to S3. A value"
" of 0 indicates there is no limit."))
evergreen_options.add_argument("--buildId", dest="build_id", metavar="BUILD_ID",
help="Sets the build ID of the task.")
evergreen_options.add_argument("--buildloggerUrl", action="store", dest="buildlogger_url",
metavar="URL",
help="The root url of the buildlogger server.")
evergreen_options.add_argument(
"--distroId", dest="distro_id", metavar="DISTRO_ID",
help=("Sets the identifier for the Evergreen distro running the"
" tests."))
evergreen_options.add_argument(
"--executionNumber", type=int, dest="execution_number", metavar="EXECUTION_NUMBER",
help=("Sets the number for the Evergreen execution running the"
" tests."))
evergreen_options.add_argument(
"--gitRevision", dest="git_revision", metavar="GIT_REVISION",
help=("Sets the git revision for the Evergreen task running the"
" tests."))
# We intentionally avoid adding a new command line option that starts with --suite so it doesn't
# become ambiguous with the --suites option and break how engineers run resmoke.py locally.
evergreen_options.add_argument(
"--originSuite", dest="origin_suite", metavar="SUITE",
help=("Indicates the name of the test suite prior to the"
" evergreen_generate_resmoke_tasks.py script splitting it"
" up."))
evergreen_options.add_argument(
"--patchBuild", action="store_true", dest="patch_build",
help=("Indicates that the Evergreen task running the tests is a"
" patch build."))
evergreen_options.add_argument(
"--projectName", dest="project_name", metavar="PROJECT_NAME",
help=("Sets the name of the Evergreen project running the tests."))
evergreen_options.add_argument("--revisionOrderId", dest="revision_order_id",
metavar="REVISION_ORDER_ID",
help="Sets the chronological order number of this commit.")
evergreen_options.add_argument("--tagFile", action="append", dest="tag_files",
metavar="TAG_FILES",
help="One or more YAML files that associate tests and tags.")
evergreen_options.add_argument(
"--taskName", dest="task_name", metavar="TASK_NAME",
help="Sets the name of the Evergreen task running the tests.")
evergreen_options.add_argument("--taskId", dest="task_id", metavar="TASK_ID",
help="Sets the Id of the Evergreen task running the tests.")
evergreen_options.add_argument(
"--variantName", dest="variant_name", metavar="VARIANT_NAME",
help=("Sets the name of the Evergreen build variant running the"
" tests."))
evergreen_options.add_argument("--versionId", dest="version_id", metavar="VERSION_ID",
help="Sets the version ID of the task.")
cedar_options = parser.add_argument_group(
title=_CEDAR_ARGUMENT_TITLE,
description=("Options used to propagate Cedar service connection information."))
cedar_options.add_argument("--cedarURL", dest="cedar_url", metavar="CEDAR_URL",
help=("The URL of the Cedar service."))
cedar_options.add_argument("--cedarRPCPort", dest="cedar_rpc_port",
metavar="CEDAR_RPC_PORT",
help=("The RPC port of the Cedar service."))
benchmark_options = parser.add_argument_group(
title=_BENCHMARK_ARGUMENT_TITLE,
description="Options for running Benchmark/Benchrun tests")
benchmark_options.add_argument("--benchmarkFilter", type=str, dest="benchmark_filter",
metavar="BENCHMARK_FILTER",
help="Regex to filter Google benchmark tests to run.")
benchmark_options.add_argument(
"--benchmarkListTests",
dest="benchmark_list_tests",
action="store_true",
# metavar="BENCHMARK_LIST_TESTS",
help=("Lists all Google benchmark test configurations in each"
" test file."))
benchmark_min_time_help = (
"Minimum time to run each benchmark/benchrun test for. Use this option instead of "
"--benchmarkRepetitions to make a test run for a longer or shorter duration.")
benchmark_options.add_argument("--benchmarkMinTimeSecs", type=int,
dest="benchmark_min_time_secs", metavar="BENCHMARK_MIN_TIME",
help=benchmark_min_time_help)
benchmark_repetitions_help = (
"Set --benchmarkRepetitions=1 if you'd like to run the benchmark/benchrun tests only once."
" By default, each test is run multiple times to provide statistics on the variance"
" between runs; use --benchmarkMinTimeSecs if you'd like to run a test for a longer or"
" shorter duration.")
benchmark_options.add_argument(
"--benchmarkRepetitions", type=int, dest="benchmark_repetitions",
            metavar="BENCHMARK_REPETITIONS", help=benchmark_repetitions_help)
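The registration pattern above can be exercised with a minimal, self-contained sketch. Note this is illustrative only and not from the MongoDB source: `add_run_subcommand`, the flag subset, and the sample test path are hypothetical stand-ins showing how a subcommand parser like `_add_run`'s is attached to a top-level parser, how `set_defaults` seeds values such as `dry_run`, and how parsed options come back on the namespace.

```python
import argparse

# Hypothetical sketch (names are illustrative, not from the MongoDB source):
# a cut-down analogue of _add_run() wiring a "run" subcommand.
def add_run_subcommand(subparsers):
    parser = subparsers.add_parser("run", help="Runs the specified tests.")
    # set_defaults() seeds option values that later flags may override,
    # mirroring the dry_run/shuffle defaults set in _add_run().
    parser.set_defaults(dry_run="off", shuffle="auto")
    parser.add_argument("test_files", metavar="TEST_FILES", nargs="*",
                        help="Explicit test files to run")
    parser.add_argument("-j", "--jobs", type=int, dest="jobs", default=1,
                        help="The number of Job instances to use.")

top_level = argparse.ArgumentParser(prog="resmoke.py")
subparsers = top_level.add_subparsers(dest="command")
add_run_subcommand(subparsers)

# Parse a sample invocation; defaults fill in options not given on the line.
args = top_level.parse_args(["run", "-j", "4", "jstests/core/foo.js"])
print(args.command, args.jobs, args.test_files, args.dry_run)
# → run 4 ['jstests/core/foo.js'] off
```

Because `set_defaults` is applied per-subparser, each subcommand can carry its own baseline configuration without polluting the top-level namespace.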
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.