Each entry below lists the repository (`nwo`), file `path`, function `identifier`, the function source with its docstring, and a source `url` pinned to the commit `sha`.
### wxWidgets/wxPython-Classic · src/osx_carbon/_windows.py · `MDIClientWindow.CreateClient`

```python
def CreateClient(*args, **kwargs):
    """CreateClient(self, MDIParentFrame parent, long style=wxVSCROLL|wxHSCROLL) -> bool"""
    return _windows_.MDIClientWindow_CreateClient(*args, **kwargs)
```

Source: https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_windows.py#L4112-L4114
### aws/lumberyard · dev/Gems/CloudGemDefectReporter/v1/AWS/common-code/Lib/setuptools/sandbox.py · `DirectorySandbox._remap_input`

```python
def _remap_input(self, operation, path, *args, **kw):
    """Called for path inputs"""
    if operation in self.write_ops and not self._ok(path):
        self._violation(operation, os.path.realpath(path), *args, **kw)
    return path
```

Source: https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemDefectReporter/v1/AWS/common-code/Lib/setuptools/sandbox.py#L449-L453
### mantidproject/mantid · Framework/PythonInterface/plugins/algorithms/WorkflowAlgorithms/DirectILLDiagnostics.py · `DirectILLDiagnostics._finalize`

```python
def _finalize(self, outWS):
    """Do final cleanup and set the output property."""
    self.setProperty(common.PROP_OUTPUT_WS, outWS)
    self._cleanup.cleanup(outWS)
    self._cleanup.finalCleanup()
    self._report.toLog(self.log())
```

Source: https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/Framework/PythonInterface/plugins/algorithms/WorkflowAlgorithms/DirectILLDiagnostics.py#L706-L711
### aws/lumberyard · dev/Gems/CloudGemDefectReporter/v1/AWS/common-code/Lib/pkg_resources/_vendor/pyparsing.py · `ParserElement.enablePackrat`

```python
def enablePackrat(cache_size_limit=128):
    """Enables "packrat" parsing, which adds memoizing to the parsing logic.

    Repeated parse attempts at the same string location (which happens
    often in many complex grammars) can immediately return a cached value,
    instead of re-executing parsing/validating code. Memoizing is done of
    both valid results and parsing exceptions.

    Parameters:
    - cache_size_limit - (default=C{128}) - if an integer value is provided
      will limit the size of the packrat cache; if None is passed, then
      the cache size will be unbounded; if 0 is passed, the cache will
      be effectively disabled.

    This speedup may break existing programs that use parse actions that
    have side-effects. For this reason, packrat parsing is disabled when
    you first import pyparsing. To activate the packrat feature, your
    program must call the class method C{ParserElement.enablePackrat()}. If
    your program uses C{psyco} to "compile as you go", you must call
    C{enablePackrat} before calling C{psyco.full()}. If you do not do this,
    Python will crash. For best results, call C{enablePackrat()} immediately
    after importing pyparsing.

    Example::
        import pyparsing
        pyparsing.ParserElement.enablePackrat()
    """
    if not ParserElement._packratEnabled:
        ParserElement._packratEnabled = True
        if cache_size_limit is None:
            ParserElement.packrat_cache = ParserElement._UnboundedCache()
        else:
            ParserElement.packrat_cache = ParserElement._FifoCache(cache_size_limit)
        ParserElement._parse = ParserElement._parseCache
```

Source: https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemDefectReporter/v1/AWS/common-code/Lib/pkg_resources/_vendor/pyparsing.py#L1537-L1569
### llvm/llvm-project · libcxx/utils/gdb/libcxx/printers.py · `_typename_for_nth_generic_argument`

```python
def _typename_for_nth_generic_argument(gdb_type, n):
    """Returns a pretty string for the nth argument of the given type.

    Arguments:
      gdb_type(gdb.Type): A type object, such as the one for std::map<int, int>
      n: The (zero indexed) index of the argument to return.

    Returns:
      A string for the nth argument, such a "std::string"
    """
    element_type = gdb_type.template_argument(n)
    return _prettify_typename(element_type)
```

Source: https://github.com/llvm/llvm-project/blob/ffa6262cb4e2a335d26416fad39a581b4f98c5f4/libcxx/utils/gdb/libcxx/printers.py#L97-L108
### snap-stanford/snap-python · setup/snap.py · `TBool.__init__`

```python
def __init__(self, *args):
    """
    __init__(TBool self) -> TBool
    __init__(TBool self, bool const & _Val) -> TBool

    Parameters:
        _Val: bool const &

    __init__(TBool self, TSIn SIn) -> TBool

    Parameters:
        SIn: TSIn &
    """
    _snap.TBool_swiginit(self, _snap.new_TBool(*args))
```

Source: https://github.com/snap-stanford/snap-python/blob/d53c51b0a26aa7e3e7400b014cdf728948fde80a/setup/snap.py#L12096-L12110
### microsoft/ivy · ivy/ivy_parser.py · `p_optelse_else_fmla`

```python
def p_optelse_else_fmla(p):
    'optelse : ELSE fmla'
    p[0] = [p[2]]
```

Source: https://github.com/microsoft/ivy/blob/9f3c7ecc0b2383129fdd0953e10890d98d09a82d/ivy/ivy_parser.py#L2549-L2551
### wxWidgets/wxPython-Classic · src/msw/_gdi.py · `AlphaPixelData.__nonzero__`

```python
def __nonzero__(*args, **kwargs):
    """__nonzero__(self) -> bool"""
    return _gdi_.AlphaPixelData___nonzero__(*args, **kwargs)
```

Source: https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_gdi.py#L1175-L1177
### wxWidgets/wxPython-Classic · src/gtk/dataview.py · `DataViewTreeCtrl.GetItemIcon`

```python
def GetItemIcon(*args, **kwargs):
    """GetItemIcon(self, DataViewItem item) -> Icon"""
    return _dataview.DataViewTreeCtrl_GetItemIcon(*args, **kwargs)
```

Source: https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/dataview.py#L2544-L2546
### deepmind/open_spiel · open_spiel/python/pytorch/rcfr.py · `_descendant_states`

```python
def _descendant_states(state, depth_limit, depth, include_terminals,
                       include_chance_states):
  """Recursive descendant state generator.

  Decision states are always yielded.

  Args:
    state: The current state.
    depth_limit: The descendant depth limit. Zero will ensure only
      `initial_state` is generated and negative numbers specify the absence of a
      limit.
    depth: The current descendant depth.
    include_terminals: Whether or not to include terminal states.
    include_chance_states: Whether or not to include chance states.

  Yields:
    `State`, a state that is `initial_state` or one of its descendants.
  """
  if state.is_terminal():
    if include_terminals:
      yield state
    return
  if depth > depth_limit >= 0:
    return
  if not state.is_chance_node() or include_chance_states:
    yield state
  for action in state.legal_actions():
    state_for_search = state.child(action)
    for substate in _descendant_states(state_for_search, depth_limit, depth + 1,
                                       include_terminals,
                                       include_chance_states):
      yield substate
```

Source: https://github.com/deepmind/open_spiel/blob/4ca53bea32bb2875c7385d215424048ae92f78c8/open_spiel/python/pytorch/rcfr.py#L421-L455
### turi-code/SFrame · oss_src/unity/python/sframe/toolkits/_model.py · `Model.save`

```python
def save(self, location):
    """
    Save the model. The model is saved as a directory which can then be
    loaded using the :py:func:`~graphlab.load_model` method.

    Parameters
    ----------
    location : string
        Target destination for the model. Can be a local path or remote URL.

    See Also
    ----------
    graphlab.load_model

    Examples
    ----------
    >>> model.save('my_model_file')
    >>> loaded_model = graphlab.load_model('my_model_file')
    """
    _mt._get_metric_tracker().track('toolkit.model.save')
    return glconnect.get_unity().save_model(self, _make_internal_url(location))
```

Source: https://github.com/turi-code/SFrame/blob/796b9bdfb2fa1b881d82080754643c7e68629cd2/oss_src/unity/python/sframe/toolkits/_model.py#L549-L570
### apple/swift-lldb · scripts/Python/static-binding/lldb.py · `SBAttachInfo.SetUserID`

```python
def SetUserID(self, uid):
    """SetUserID(SBAttachInfo self, uint32_t uid)"""
    return _lldb.SBAttachInfo_SetUserID(self, uid)
```

Source: https://github.com/apple/swift-lldb/blob/d74be846ef3e62de946df343e8c234bde93a8912/scripts/Python/static-binding/lldb.py#L1142-L1144
### xhzdeng/crpn · caffe-fast-rcnn/scripts/cpp_lint.py · `RemoveMultiLineComments`

```python
def RemoveMultiLineComments(filename, lines, error):
  """Removes multiline (c-style) comments from lines."""
  lineix = 0
  while lineix < len(lines):
    lineix_begin = FindNextMultiLineCommentStart(lines, lineix)
    if lineix_begin >= len(lines):
      return
    lineix_end = FindNextMultiLineCommentEnd(lines, lineix_begin)
    if lineix_end >= len(lines):
      error(filename, lineix_begin + 1, 'readability/multiline_comment', 5,
            'Could not find end of multi-line comment')
      return
    RemoveMultiLineCommentsFromRange(lines, lineix_begin, lineix_end + 1)
    lineix = lineix_end + 1
```

Source: https://github.com/xhzdeng/crpn/blob/a5aef0f80dbe486103123f740c634fb01e6cc9a1/caffe-fast-rcnn/scripts/cpp_lint.py#L1155-L1168
### mindspore-ai/mindspore · mindspore/python/mindspore/ops/_grad/grad_quant_ops.py · `get_bprop_fakequant_with_minmax_per_channel_update`

```python
def get_bprop_fakequant_with_minmax_per_channel_update(self):
    """Generate bprop for MinMaxUpdatePerChannel for Ascend"""
    def bprop(x, x_min, x_max, out, dout):
        return zeros_like(x), zeros_like(x_min), zeros_like(x_max)

    return bprop
```

Source: https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/ops/_grad/grad_quant_ops.py#L175-L181
### qt/qt · src/3rdparty/webkit/Source/ThirdParty/gyp/pylib/gyp/easy_xml.py · `EasyXml.SetAttributes`

```python
def SetAttributes(self, element, attribute_description):
    """Sets the attributes of an element.

    Args:
      element: The node to which the child will be added.
      attribute_description: A dictionary that maps attribute names to
        attribute values.
    """
    for attribute, value in attribute_description.iteritems():
        element.setAttribute(attribute, value)
```

Source: https://github.com/qt/qt/blob/0a2f2382541424726168804be2c90b91381608c6/src/3rdparty/webkit/Source/ThirdParty/gyp/pylib/gyp/easy_xml.py#L96-L105
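`SetAttributes` is a thin loop over DOM `setAttribute`. The same pattern works against the standard-library `xml.dom.minidom` (using Python 3's `items()` in place of the original's Python 2 `iteritems()`; the `Project` element and attribute names are chosen purely for illustration):

```python
from xml.dom import minidom

def set_attributes(element, attribute_description):
    """Set attributes on a DOM element from a name -> value dict."""
    for attribute, value in attribute_description.items():
        element.setAttribute(attribute, value)

doc = minidom.Document()
project = doc.createElement("Project")
set_attributes(project, {"ToolsVersion": "4.0", "DefaultTargets": "Build"})
doc.appendChild(project)
```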
### wlanjie/AndroidFFmpeg · tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/lib-tk/turtle.py · `TPen.color`

```python
def color(self, *args):
    """Return or set the pencolor and fillcolor.

    Arguments:
    Several input formats are allowed.
    They use 0, 1, 2, or 3 arguments as follows:

    color()
        Return the current pencolor and the current fillcolor
        as a pair of color specification strings as are returned
        by pencolor and fillcolor.
    color(colorstring), color((r,g,b)), color(r,g,b)
        inputs as in pencolor, set both, fillcolor and pencolor,
        to the given value.
    color(colorstring1, colorstring2),
    color((r1,g1,b1), (r2,g2,b2))
        equivalent to pencolor(colorstring1) and fillcolor(colorstring2)
        and analogously, if the other input format is used.

    If turtleshape is a polygon, outline and interior of that polygon
    is drawn with the newly set colors.
    For mor info see: pencolor, fillcolor

    Example (for a Turtle instance named turtle):
    >>> turtle.color('red', 'green')
    >>> turtle.color()
    ('red', 'green')
    >>> colormode(255)
    >>> color((40, 80, 120), (160, 200, 240))
    >>> color()
    ('#285078', '#a0c8f0')
    """
    if args:
        l = len(args)
        if l == 1:
            pcolor = fcolor = args[0]
        elif l == 2:
            pcolor, fcolor = args
        elif l == 3:
            pcolor = fcolor = args
        pcolor = self._colorstr(pcolor)
        fcolor = self._colorstr(fcolor)
        self.pen(pencolor=pcolor, fillcolor=fcolor)
    else:
        return self._color(self._pencolor), self._color(self._fillcolor)
```

Source: https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/lib-tk/turtle.py#L2090-L2134
### aws/lumberyard · dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/npdatetime.py · `get_timedelta_conversion_factor`

```python
def get_timedelta_conversion_factor(src_unit, dest_unit):
    """
    Return an integer multiplier allowing to convert from timedeltas
    of *src_unit* to *dest_unit*.
    """
    return _get_conversion_multiplier(DATETIME_UNITS[src_unit],
                                      DATETIME_UNITS[dest_unit])
```

Source: https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/npdatetime.py#L111-L117
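The numba helper returns an integer multiplier between two datetime units, looked up through its internal `DATETIME_UNITS` table. A standalone sketch of the same idea with an assumed table of nanoseconds per unit (the unit codes and table here are illustrative, not numba's internals, which also handle non-linear units like months and years):

```python
# Assumed unit table: nanoseconds per unit, for illustration only.
UNIT_NS = {"s": 10**9, "ms": 10**6, "us": 10**3, "ns": 1}

def conversion_factor(src_unit, dest_unit):
    """Integer multiplier turning a timedelta count in src_unit into dest_unit."""
    factor, remainder = divmod(UNIT_NS[src_unit], UNIT_NS[dest_unit])
    if remainder:
        # Converting toward a coarser unit has no exact integer factor.
        raise ValueError(f"no integer factor from {src_unit!r} to {dest_unit!r}")
    return factor
```

A count in seconds multiplied by `conversion_factor("s", "ms")` gives the same duration expressed in milliseconds.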
### aws/lumberyard · dev/Tools/Python/3.7.10/windows/Lib/idlelib/configdialog.py · `ConfigDialog.__init__`

```python
def __init__(self, parent, title='', *, _htest=False, _utest=False):
    """Show the tabbed dialog for user configuration.

    Args:
        parent - parent of this dialog
        title - string which is the title of this popup dialog
        _htest - bool, change box location when running htest
        _utest - bool, don't wait_window when running unittest

    Note: Focus set on font page fontlist.

    Methods:
        create_widgets
        cancel: Bound to DELETE_WINDOW protocol.
    """
    Toplevel.__init__(self, parent)
    self.parent = parent
    if _htest:
        parent.instance_dict = {}
    if not _utest:
        self.withdraw()

    self.configure(borderwidth=5)
    self.title(title or 'IDLE Preferences')
    x = parent.winfo_rootx() + 20
    y = parent.winfo_rooty() + (30 if not _htest else 150)
    self.geometry(f'+{x}+{y}')
    # Each theme element key is its display name.
    # The first value of the tuple is the sample area tag name.
    # The second value is the display name list sort index.
    self.create_widgets()
    self.resizable(height=FALSE, width=FALSE)
    self.transient(parent)
    self.protocol("WM_DELETE_WINDOW", self.cancel)
    self.fontpage.fontlist.focus_set()
    # XXX Decide whether to keep or delete these key bindings.
    # Key bindings for this dialog.
    # self.bind('<Escape>', self.Cancel) #dismiss dialog, no save
    # self.bind('<Alt-a>', self.Apply) #apply changes, save
    # self.bind('<F1>', self.Help) #context help
    # Attach callbacks after loading config to avoid calling them.
    tracers.attach()

    if not _utest:
        self.grab_set()
        self.wm_deiconify()
        self.wait_window()
```

Source: https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/idlelib/configdialog.py#L48-L94
### Xilinx/Vitis-AI · tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/ops/rnn.py · `dynamic_rnn`

````python
def dynamic_rnn(cell,
                inputs,
                sequence_length=None,
                initial_state=None,
                dtype=None,
                parallel_iterations=None,
                swap_memory=False,
                time_major=False,
                scope=None):
  """Creates a recurrent neural network specified by RNNCell `cell`.

  Performs fully dynamic unrolling of `inputs`.

  Example:

  ```python
  # create a BasicRNNCell
  rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

  # 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]

  # defining initial state
  initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

  # 'state' is a tensor of shape [batch_size, cell_state_size]
  outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                               initial_state=initial_state,
                                               dtype=tf.float32)
  ```

  ```python
  # create 2 LSTMCells
  rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]

  # create a RNN cell composed sequentially of a number of RNNCells
  multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)

  # 'outputs' is a tensor of shape [batch_size, max_time, 256]
  # 'state' is a N-tuple where N is the number of LSTMCells containing a
  # tf.nn.rnn_cell.LSTMStateTuple for each cell
  outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
                                               inputs=data,
                                               dtype=tf.float32)
  ```

  Args:
    cell: An instance of RNNCell.
    inputs: The RNN inputs.
      If `time_major == False` (default), this must be a `Tensor` of shape:
      `[batch_size, max_time, ...]`, or a nested tuple of such elements.
      If `time_major == True`, this must be a `Tensor` of shape: `[max_time,
      batch_size, ...]`, or a nested tuple of such elements. This may also be
      a (possibly nested) tuple of Tensors satisfying this property. The
      first two dimensions must match across all the inputs, but otherwise the
      ranks and other shape components may differ. In this case, input to
      `cell` at each time-step will replicate the structure of these tuples,
      except for the time dimension (from which the time is taken). The input
      to `cell` at each time step will be a `Tensor` or (possibly nested)
      tuple of Tensors each with dimensions `[batch_size, ...]`.
    sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used
      to copy-through state and zero-out outputs when past a batch element's
      sequence length. This parameter enables users to extract the last valid
      state and properly padded outputs, so it is provided for correctness.
    initial_state: (optional) An initial state for the RNN. If `cell.state_size`
      is an integer, this must be a `Tensor` of appropriate type and shape
      `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this
      should be a tuple of tensors having shapes `[batch_size, s] for s in
      cell.state_size`.
    dtype: (optional) The data type for the initial state and expected output.
      Required if initial_state is not provided or RNN state has a heterogeneous
      dtype.
    parallel_iterations: (Default: 32). The number of iterations to run in
      parallel. Those operations which do not have any temporal dependency and
      can be run in parallel, will be. This parameter trades off time for
      space. Values >> 1 use more memory but take less time, while smaller
      values use less memory but computations take longer.
    swap_memory: Transparently swap the tensors produced in forward inference
      but needed for back prop from GPU to CPU. This allows training RNNs which
      would typically not fit on a single GPU, with very minimal (or no)
      performance penalty.
    time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
      these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
      these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
      `time_major = True` is a bit more efficient because it avoids transposes
      at the beginning and end of the RNN calculation. However, most TensorFlow
      data is batch-major, so by default this function accepts input and emits
      output in batch-major form.
    scope: VariableScope for the created subgraph; defaults to "rnn".

  Returns:
    A pair (outputs, state) where:

    outputs: The RNN output `Tensor`.
      If time_major == False (default), this will be a `Tensor` shaped:
      `[batch_size, max_time, cell.output_size]`.
      If time_major == True, this will be a `Tensor` shaped:
      `[max_time, batch_size, cell.output_size]`.
      Note, if `cell.output_size` is a (possibly nested) tuple of integers
      or `TensorShape` objects, then `outputs` will be a tuple having the
      same structure as `cell.output_size`, containing Tensors having shapes
      corresponding to the shape data in `cell.output_size`.

    state: The final state. If `cell.state_size` is an int, this
      will be shaped `[batch_size, cell.state_size]`. If it is a
      `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
      If it is a (possibly nested) tuple of ints or `TensorShape`, this will
      be a tuple having the corresponding shapes. If cells are `LSTMCells`
      `state` will be a tuple containing a `LSTMStateTuple` for each cell.

  Raises:
    TypeError: If `cell` is not an instance of RNNCell.
    ValueError: If inputs is None or an empty list.
  """
````
dtype=tf.float32)
```
```python
# create 2 LSTMCells
rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.nn.rnn_cell.LSTMStateTuple for each cell
outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
```
Args:
cell: An instance of RNNCell.
inputs: The RNN inputs.
If `time_major == False` (default), this must be a `Tensor` of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such elements.
If `time_major == True`, this must be a `Tensor` of shape: `[max_time,
batch_size, ...]`, or a nested tuple of such elements. This may also be
a (possibly nested) tuple of Tensors satisfying this property. The
first two dimensions must match across all the inputs, but otherwise the
ranks and other shape components may differ. In this case, input to
`cell` at each time-step will replicate the structure of these tuples,
except for the time dimension (from which the time is taken). The input
to `cell` at each time step will be a `Tensor` or (possibly nested)
tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used
to copy-through state and zero-out outputs when past a batch element's
sequence length. This parameter enables users to extract the last valid
state and properly padded outputs, so it is provided for correctness.
initial_state: (optional) An initial state for the RNN. If `cell.state_size`
is an integer, this must be a `Tensor` of appropriate type and shape
`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this
should be a tuple of tensors having shapes `[batch_size, s] for s in
cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and
can be run in parallel, will be. This parameter trades off time for
space. Values >> 1 use more memory but take less time, while smaller
values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs which
would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
`time_major = True` is a bit more efficient because it avoids transposes
at the beginning and end of the RNN calculation. However, most TensorFlow
data is batch-major, so by default this function accepts input and emits
output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "rnn".
Returns:
A pair (outputs, state) where:
outputs: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
Note, if `cell.output_size` is a (possibly nested) tuple of integers
or `TensorShape` objects, then `outputs` will be a tuple having the
same structure as `cell.output_size`, containing Tensors having shapes
corresponding to the shape data in `cell.output_size`.
state: The final state. If `cell.state_size` is an int, this
will be shaped `[batch_size, cell.state_size]`. If it is a
`TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
If it is a (possibly nested) tuple of ints or `TensorShape`, this will
be a tuple having the corresponding shapes. If cells are `LSTMCells`
`state` will be a tuple containing a `LSTMStateTuple` for each cell.
Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.
"""
rnn_cell_impl.assert_like_rnncell("cell", cell)
with vs.variable_scope(scope or "rnn") as varscope:
# Create a new scope in which the caching device is either
# determined by the parent scope, or is set to place the cached
# Variable using the same placement as for the rest of the RNN.
if _should_cache():
if varscope.caching_device is None:
varscope.set_caching_device(lambda op: op.device)
# By default, time_major==False and inputs are batch-major: shaped
# [batch, time, depth]
# For internal calculations, we transpose to [time, batch, depth]
flat_input = nest.flatten(inputs)
if not time_major:
# (B,T,D) => (T,B,D)
flat_input = [ops.convert_to_tensor(input_) for input_ in flat_input]
flat_input = tuple(_transpose_batch_time(input_) for input_ in flat_input)
parallel_iterations = parallel_iterations or 32
if sequence_length is not None:
sequence_length = math_ops.cast(sequence_length, dtypes.int32)
if sequence_length.get_shape().rank not in (None, 1):
raise ValueError(
"sequence_length must be a vector of length batch_size, "
"but saw shape: %s" % sequence_length.get_shape())
sequence_length = array_ops.identity( # Just to find it in the graph.
sequence_length,
name="sequence_length")
batch_size = _best_effort_input_batch_size(flat_input)
if initial_state is not None:
state = initial_state
else:
if not dtype:
raise ValueError("If there is no initial_state, you must give a dtype.")
if getattr(cell, "get_initial_state", None) is not None:
state = cell.get_initial_state(
inputs=None, batch_size=batch_size, dtype=dtype)
else:
state = cell.zero_state(batch_size, dtype)
def _assert_has_shape(x, shape):
x_shape = array_ops.shape(x)
packed_shape = array_ops.stack(shape)
return control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(x_shape, packed_shape)), [
"Expected shape for Tensor %s is " % x.name, packed_shape,
" but saw shape: ", x_shape
])
if not context.executing_eagerly() and sequence_length is not None:
# Perform some shape validation
with ops.control_dependencies(
[_assert_has_shape(sequence_length, [batch_size])]):
sequence_length = array_ops.identity(
sequence_length, name="CheckSeqLen")
inputs = nest.pack_sequence_as(structure=inputs, flat_sequence=flat_input)
(outputs, final_state) = _dynamic_rnn_loop(
cell,
inputs,
state,
parallel_iterations=parallel_iterations,
swap_memory=swap_memory,
sequence_length=sequence_length,
dtype=dtype)
# Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].
# If we are performing batch-major calculations, transpose output back
# to shape [batch, time, depth]
if not time_major:
# (T,B,D) => (B,T,D)
outputs = nest.map_structure(_transpose_batch_time, outputs)
return (outputs, final_state) | [
"def",
"dynamic_rnn",
"(",
"cell",
",",
"inputs",
",",
"sequence_length",
"=",
"None",
",",
"initial_state",
"=",
"None",
",",
"dtype",
"=",
"None",
",",
"parallel_iterations",
"=",
"None",
",",
"swap_memory",
"=",
"False",
",",
"time_major",
"=",
"False",
... | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/ops/rnn.py#L521-L716 | ||
SoarGroup/Soar | a1c5e249499137a27da60533c72969eef3b8ab6b | scons/scons-local-4.1.0/SCons/Environment.py | python | Base._update_onlynew | (self, other) | Private method to add new items to an environment's consvar dict.
Only adds items from `other` whose keys do not already appear in
the existing dict; values from `other` are not used for replacement.
Bypasses the normal checks that occur when users try to set items. | Private method to add new items to an environment's consvar dict. | [
"Private",
"method",
"to",
"add",
"new",
"items",
"to",
"an",
"environment",
"s",
"consvar",
"dict",
"."
] | def _update_onlynew(self, other):
"""Private method to add new items to an environment's consvar dict.
Only adds items from `other` whose keys do not already appear in
the existing dict; values from `other` are not used for replacement.
Bypasses the normal checks that occur when users try to set items.
"""
for k, v in other.items():
if k not in self._dict:
self._dict[k] = v | [
"def",
"_update_onlynew",
"(",
"self",
",",
"other",
")",
":",
"for",
"k",
",",
"v",
"in",
"other",
".",
"items",
"(",
")",
":",
"if",
"k",
"not",
"in",
"self",
".",
"_dict",
":",
"self",
".",
"_dict",
"[",
"k",
"]",
"=",
"v"
] | https://github.com/SoarGroup/Soar/blob/a1c5e249499137a27da60533c72969eef3b8ab6b/scons/scons-local-4.1.0/SCons/Environment.py#L1137-L1146 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/stride_tricks.py | python | _broadcast_shape | (*args) | return b.shape | Returns the shape of the arrays that would result from broadcasting the
supplied arrays against each other. | Returns the shape of the arrays that would result from broadcasting the
supplied arrays against each other. | [
"Returns",
"the",
"shape",
"of",
"the",
"arrays",
"that",
"would",
"result",
"from",
"broadcasting",
"the",
"supplied",
"arrays",
"against",
"each",
"other",
"."
] | def _broadcast_shape(*args):
"""Returns the shape of the arrays that would result from broadcasting the
supplied arrays against each other.
"""
# use the old-iterator because np.nditer does not handle size 0 arrays
# consistently
b = np.broadcast(*args[:32])
# unfortunately, it cannot handle 32 or more arguments directly
for pos in range(32, len(args), 31):
# ironically, np.broadcast does not properly handle np.broadcast
# objects (it treats them as scalars)
# use broadcasting to avoid allocating the full array
b = broadcast_to(0, b.shape)
b = np.broadcast(b, *args[pos:(pos + 31)])
return b.shape | [
"def",
"_broadcast_shape",
"(",
"*",
"args",
")",
":",
"# use the old-iterator because np.nditer does not handle size 0 arrays",
"# consistently",
"b",
"=",
"np",
".",
"broadcast",
"(",
"*",
"args",
"[",
":",
"32",
"]",
")",
"# unfortunately, it cannot handle 32 or more a... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/stride_tricks.py#L185-L199 | |
blackberry/Boost | fc90c3fde129c62565c023f091eddc4a7ed9902b | tools/build/v2/util/path.py | python | glob | (dirs, patterns) | return result | Returns the list of files matching the given pattern in the
specified directory. Both directories and patterns are
supplied as portable paths. Each pattern should be non-absolute
path, and can't contain "." or ".." elements. Each slash separated
element of pattern can contain the following special characters:
- '?', which match any character
- '*', which matches arbitrary number of characters.
A file $(d)/e1/e2/e3 (where 'd' is in $(dirs)) matches pattern p1/p2/p3
if and only if e1 matches p1, e2 matches p2 and so on.
For example:
[ glob . : *.cpp ]
[ glob . : */build/Jamfile ] | Returns the list of files matching the given pattern in the
specified directory. Both directories and patterns are
supplied as portable paths. Each pattern should be non-absolute
path, and can't contain "." or ".." elements. Each slash separated
element of pattern can contain the following special characters:
- '?', which match any character
- '*', which matches arbitrary number of characters.
A file $(d)/e1/e2/e3 (where 'd' is in $(dirs)) matches pattern p1/p2/p3
if and only if e1 matches p1, e2 matches p2 and so on.
For example:
[ glob . : *.cpp ]
[ glob . : */build/Jamfile ] | [
"Returns",
"the",
"list",
"of",
"files",
"matching",
"the",
"given",
"pattern",
"in",
"the",
"specified",
"directory",
".",
"Both",
"directories",
"and",
"patterns",
"are",
"supplied",
"as",
"portable",
"paths",
".",
"Each",
"pattern",
"should",
"be",
"non",
... | def glob (dirs, patterns):
""" Returns the list of files matching the given pattern in the
specified directory. Both directories and patterns are
supplied as portable paths. Each pattern should be non-absolute
path, and can't contain "." or ".." elements. Each slash separated
element of pattern can contain the following special characters:
- '?', which match any character
- '*', which matches arbitrary number of characters.
A file $(d)/e1/e2/e3 (where 'd' is in $(dirs)) matches pattern p1/p2/p3
if and only if e1 matches p1, e2 matches p2 and so on.
For example:
[ glob . : *.cpp ]
[ glob . : */build/Jamfile ]
"""
# {
# local result ;
# if $(patterns:D)
# {
# # When a pattern has a directory element, we first glob for
# # directory, and then glob for file name is the found directories.
# for local p in $(patterns)
# {
# # First glob for directory part.
# local globbed-dirs = [ glob $(dirs) : $(p:D) ] ;
# result += [ glob $(globbed-dirs) : $(p:D="") ] ;
# }
# }
# else
# {
# # When a pattern has not directory, we glob directly.
# # Take care of special ".." value. The "GLOB" rule simply ignores
# # the ".." element (and ".") element in directory listings. This is
# # needed so that
# #
# # [ glob libs/*/Jamfile ]
# #
# # don't return
# #
# # libs/../Jamfile (which is the same as ./Jamfile)
# #
# # On the other hand, when ".." is explicitly present in the pattern
# # we need to return it.
# #
# for local dir in $(dirs)
# {
# for local p in $(patterns)
# {
# if $(p) != ".."
# {
# result += [ sequence.transform make
# : [ GLOB [ native $(dir) ] : $(p) ] ] ;
# }
# else
# {
# result += [ path.join $(dir) .. ] ;
# }
# }
# }
# }
# return $(result) ;
# }
#
# TODO: (PF) I replaced the code above by this. I think it should work but needs to be tested.
result = []
dirs = to_seq (dirs)
patterns = to_seq (patterns)
splitdirs = []
for dir in dirs:
splitdirs += dir.split (os.pathsep)
for dir in splitdirs:
for pattern in patterns:
p = os.path.join (dir, pattern)
import glob
result.extend (glob.glob (p))
return result | [
"def",
"glob",
"(",
"dirs",
",",
"patterns",
")",
":",
"# {",
"# local result ;",
"# if $(patterns:D)",
"# {",
"# # When a pattern has a directory element, we first glob for",
"# # directory, and then glob for file name is the found directories.",
... | https://github.com/blackberry/Boost/blob/fc90c3fde129c62565c023f091eddc4a7ed9902b/tools/build/v2/util/path.py#L231-L309 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python/src/Lib/decimal.py | python | Decimal._isinfinity | (self) | return 0 | Returns whether the number is infinite
0 if finite or not a number
1 if +INF
-1 if -INF | Returns whether the number is infinite | [
"Returns",
"whether",
"the",
"number",
"is",
"infinite"
] | def _isinfinity(self):
"""Returns whether the number is infinite
0 if finite or not a number
1 if +INF
-1 if -INF
"""
if self._exp == 'F':
if self._sign:
return -1
return 1
return 0 | [
"def",
"_isinfinity",
"(",
"self",
")",
":",
"if",
"self",
".",
"_exp",
"==",
"'F'",
":",
"if",
"self",
".",
"_sign",
":",
"return",
"-",
"1",
"return",
"1",
"return",
"0"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/decimal.py#L714-L725 | |
pmq20/node-packer | 12c46c6e44fbc14d9ee645ebd17d5296b324f7e0 | lts/deps/v8/tools/grokdump.py | python | FullDump | (reader, heap) | Dump all available memory regions. | Dump all available memory regions. | [
"Dump",
"all",
"available",
"memory",
"regions",
"."
] | def FullDump(reader, heap):
"""Dump all available memory regions."""
def dump_region(reader, start, size, location):
print()
while start & 3 != 0:
start += 1
size -= 1
location += 1
is_executable = reader.IsProbableExecutableRegion(location, size)
is_ascii = reader.IsProbableASCIIRegion(location, size)
if is_executable is not False:
lines = reader.GetDisasmLines(start, size)
for line in lines:
print(FormatDisasmLine(start, heap, line))
print()
if is_ascii is not False:
# Output in the same format as the Unix hd command
addr = start
for i in range(0, size, 16):
slot = i + location
hex_line = ""
asc_line = ""
for i in range(16):
if slot + i < location + size:
byte = ctypes.c_uint8.from_buffer(reader.minidump, slot + i).value
if byte >= 0x20 and byte < 0x7f:
asc_line += chr(byte)
else:
asc_line += "."
hex_line += " %02x" % (byte)
else:
hex_line += " "
if i == 7:
hex_line += " "
print("%s %s |%s|" % (reader.FormatIntPtr(addr),
hex_line,
asc_line))
addr += 16
if is_executable is not True and is_ascii is not True:
print("%s - %s" % (reader.FormatIntPtr(start),
reader.FormatIntPtr(start + size)))
print(start + size + 1);
for i in range(0, size, reader.PointerSize()):
slot = start + i
maybe_address = reader.ReadUIntPtr(slot)
heap_object = heap.FindObject(maybe_address)
print("%s: %s" % (reader.FormatIntPtr(slot),
reader.FormatIntPtr(maybe_address)))
if heap_object:
heap_object.Print(Printer())
print()
reader.ForEachMemoryRegion(dump_region) | [
"def",
"FullDump",
"(",
"reader",
",",
"heap",
")",
":",
"def",
"dump_region",
"(",
"reader",
",",
"start",
",",
"size",
",",
"location",
")",
":",
"print",
"(",
")",
"while",
"start",
"&",
"3",
"!=",
"0",
":",
"start",
"+=",
"1",
"size",
"-=",
"... | https://github.com/pmq20/node-packer/blob/12c46c6e44fbc14d9ee645ebd17d5296b324f7e0/lts/deps/v8/tools/grokdump.py#L126-L181 | ||
INK-USC/USC-DS-RelationExtraction | eebcfa7fd2eda5bba92f3ef8158797cdf91e6981 | code/Model/baselines/sentence-level-models/vocab.py | python | entity_masks | () | return masks | Get all entity mask tokens as a list. | Get all entity mask tokens as a list. | [
"Get",
"all",
"entity",
"mask",
"tokens",
"as",
"a",
"list",
"."
] | def entity_masks():
""" Get all entity mask tokens as a list. """
masks = []
subj_entities = list(utils.ner2id.keys())[2:]
obj_entities = list(utils.ner2id.keys())[2:]
masks += ["SUBJ-" + e for e in subj_entities]
masks += ["OBJ-" + e for e in obj_entities]
return masks | [
"def",
"entity_masks",
"(",
")",
":",
"masks",
"=",
"[",
"]",
"subj_entities",
"=",
"list",
"(",
"utils",
".",
"ner2id",
".",
"keys",
"(",
")",
")",
"[",
"2",
":",
"]",
"obj_entities",
"=",
"list",
"(",
"utils",
".",
"ner2id",
".",
"keys",
"(",
"... | https://github.com/INK-USC/USC-DS-RelationExtraction/blob/eebcfa7fd2eda5bba92f3ef8158797cdf91e6981/code/Model/baselines/sentence-level-models/vocab.py#L201-L208 | |
gem5/gem5 | 141cc37c2d4b93959d4c249b8f7e6a8b2ef75338 | src/python/gem5/components/processors/gups_generator_ep.py | python | GUPSGeneratorEP._create_cores | (
self,
num_cores: int,
start_addr: Addr,
mem_size: str,
update_limit: int,
clk_freq: Optional[str],
) | return [
GUPSGeneratorCore(
start_addr=start_addr + i * chunk_size,
mem_size=table_size,
update_limit=update_limit,
clk_freq=clk_freq
)
for i in range(num_cores)
] | Helper function to create cores. | Helper function to create cores. | [
"Helper",
"function",
"to",
"create",
"cores",
"."
] | def _create_cores(
self,
num_cores: int,
start_addr: Addr,
mem_size: str,
update_limit: int,
clk_freq: Optional[str],
):
"""
Helper function to create cores.
"""
mem_size_int = toMemorySize(mem_size)
chunk_size = int(mem_size_int / num_cores)
table_size = str(chunk_size) + "B"
return [
GUPSGeneratorCore(
start_addr=start_addr + i * chunk_size,
mem_size=table_size,
update_limit=update_limit,
clk_freq=clk_freq
)
for i in range(num_cores)
] | [
"def",
"_create_cores",
"(",
"self",
",",
"num_cores",
":",
"int",
",",
"start_addr",
":",
"Addr",
",",
"mem_size",
":",
"str",
",",
"update_limit",
":",
"int",
",",
"clk_freq",
":",
"Optional",
"[",
"str",
"]",
",",
")",
":",
"mem_size_int",
"=",
"toM... | https://github.com/gem5/gem5/blob/141cc37c2d4b93959d4c249b8f7e6a8b2ef75338/src/python/gem5/components/processors/gups_generator_ep.py#L69-L91 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/tools/Editra/src/syntax/synxml.py | python | Syntax.GetSubElements | (self) | return xml | Get the SubElements xml string
@return: string | Get the SubElements xml string
@return: string | [
"Get",
"the",
"SubElements",
"xml",
"string",
"@return",
":",
"string"
] | def GetSubElements(self):
"""Get the SubElements xml string
@return: string
"""
xml = EditraXml.GetSubElements(self)
ident = self.GetIndentationStr() + (self.Indentation * u" ")
xml += os.linesep
cpat = u" ".join(self.GetCommentPattern())
comment = u"<%s %s=\"%s\"/>" % (EXML_COMMENTPAT, EXML_VALUE, cpat.strip())
xml += os.linesep
xml += (ident + comment)
xml += os.linesep
fileext = u"<%s %s=\"%s\"/>" % (EXML_ASSOCIATIONS, EXML_VALUE, u" ".join(self.file_ext))
xml += (ident + fileext)
return xml | [
"def",
"GetSubElements",
"(",
"self",
")",
":",
"xml",
"=",
"EditraXml",
".",
"GetSubElements",
"(",
"self",
")",
"ident",
"=",
"self",
".",
"GetIndentationStr",
"(",
")",
"+",
"(",
"self",
".",
"Indentation",
"*",
"u\" \"",
")",
"xml",
"+=",
"os",
"."... | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/tools/Editra/src/syntax/synxml.py#L530-L545 | |
eventql/eventql | 7ca0dbb2e683b525620ea30dc40540a22d5eb227 | deps/3rdparty/spidermonkey/mozjs/config/configobj.py | python | ConfigObj._handle_repeat | (self, section, configspec) | Dynamically assign configspec for repeated section. | Dynamically assign configspec for repeated section. | [
"Dynamically",
"assign",
"configspec",
"for",
"repeated",
"section",
"."
] | def _handle_repeat(self, section, configspec):
"""Dynamically assign configspec for repeated section."""
try:
section_keys = configspec.sections
scalar_keys = configspec.scalars
except AttributeError:
section_keys = [entry for entry in configspec
if isinstance(configspec[entry], dict)]
scalar_keys = [entry for entry in configspec
if not isinstance(configspec[entry], dict)]
if '__many__' in section_keys and len(section_keys) > 1:
# FIXME: can we supply any useful information here ?
raise RepeatSectionError
scalars = {}
sections = {}
for entry in scalar_keys:
val = configspec[entry]
scalars[entry] = val
for entry in section_keys:
val = configspec[entry]
if entry == '__many__':
scalars[entry] = val
continue
sections[entry] = val
#
section.configspec = scalars
for entry in sections:
if not section.has_key(entry):
section[entry] = {}
self._handle_repeat(section[entry], sections[entry]) | [
"def",
"_handle_repeat",
"(",
"self",
",",
"section",
",",
"configspec",
")",
":",
"try",
":",
"section_keys",
"=",
"configspec",
".",
"sections",
"scalar_keys",
"=",
"configspec",
".",
"scalars",
"except",
"AttributeError",
":",
"section_keys",
"=",
"[",
"ent... | https://github.com/eventql/eventql/blob/7ca0dbb2e683b525620ea30dc40540a22d5eb227/deps/3rdparty/spidermonkey/mozjs/config/configobj.py#L1826-L1855 | ||
mindspore-ai/mindspore | fb8fd3338605bb34fa5cea054e535a8b1d753fab | mindspore/python/mindspore/ops/_op_impl/cpu/gather_nd.py | python | _gather_nd_cpu | () | return | GatherNd cpu register | GatherNd cpu register | [
"GatherNd",
"cpu",
"register"
] | def _gather_nd_cpu():
"""GatherNd cpu register"""
return | [
"def",
"_gather_nd_cpu",
"(",
")",
":",
"return"
] | https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/ops/_op_impl/cpu/gather_nd.py#L38-L40 | |
klzgrad/naiveproxy | ed2c513637c77b18721fe428d7ed395b4d284c83 | src/tools/grit/grit/node/base.py | python | Node.GetRoot | (self) | return curr | Returns the root Node in the tree this Node belongs to. | Returns the root Node in the tree this Node belongs to. | [
"Returns",
"the",
"root",
"Node",
"in",
"the",
"tree",
"this",
"Node",
"belongs",
"to",
"."
] | def GetRoot(self):
'''Returns the root Node in the tree this Node belongs to.'''
curr = self
while curr.parent:
curr = curr.parent
return curr | [
"def",
"GetRoot",
"(",
"self",
")",
":",
"curr",
"=",
"self",
"while",
"curr",
".",
"parent",
":",
"curr",
"=",
"curr",
".",
"parent",
"return",
"curr"
] | https://github.com/klzgrad/naiveproxy/blob/ed2c513637c77b18721fe428d7ed395b4d284c83/src/tools/grit/grit/node/base.py#L97-L102 | |
opengauss-mirror/openGauss-server | e383f1b77720a00ddbe4c0655bc85914d9b02a2b | src/gausskernel/dbmind/tools/ai_manager/module/anomaly_detection/install_remote.py | python | RemoteInstaller.remote_copy_env_file | (self) | Copy env file to remote node. | Copy env file to remote node. | [
"Copy",
"env",
"file",
"to",
"remote",
"node",
"."
] | def remote_copy_env_file(self):
"""
Copy env file to remote node.
"""
for node in self.agent_nodes:
ip = node.get(Constant.NODE_IP)
uname = node.get(Constant.NODE_USER)
pwd = node.get(Constant.NODE_PWD)
status, output = CommonTools.remote_copy_files(
ip, uname, pwd, ENV_FILE, ENV_FILE)
if status != 0:
raise Exception(Errors.EXECUTE_RESULT['gauss_0406'] % (ip, output))
else:
g.logger.info('Successfully copy env file to node[%s].' % ip) | [
"def",
"remote_copy_env_file",
"(",
"self",
")",
":",
"for",
"node",
"in",
"self",
".",
"agent_nodes",
":",
"ip",
"=",
"node",
".",
"get",
"(",
"Constant",
".",
"NODE_IP",
")",
"uname",
"=",
"node",
".",
"get",
"(",
"Constant",
".",
"NODE_USER",
")",
... | https://github.com/opengauss-mirror/openGauss-server/blob/e383f1b77720a00ddbe4c0655bc85914d9b02a2b/src/gausskernel/dbmind/tools/ai_manager/module/anomaly_detection/install_remote.py#L94-L108 | ||
oracle/graaljs | 36a56e8e993d45fc40939a3a4d9c0c24990720f1 | graal-nodejs/deps/v8/third_party/jinja2/environment.py | python | Environment._compile | (self, source, filename) | return compile(source, filename, 'exec') | Internal hook that can be overridden to hook a different compile
method in.
.. versionadded:: 2.5 | Internal hook that can be overridden to hook a different compile
method in. | [
"Internal",
"hook",
"that",
"can",
"be",
"overridden",
"to",
"hook",
"a",
"different",
"compile",
"method",
"in",
"."
] | def _compile(self, source, filename):
"""Internal hook that can be overridden to hook a different compile
method in.
.. versionadded:: 2.5
"""
return compile(source, filename, 'exec') | [
"def",
"_compile",
"(",
"self",
",",
"source",
",",
"filename",
")",
":",
"return",
"compile",
"(",
"source",
",",
"filename",
",",
"'exec'",
")"
] | https://github.com/oracle/graaljs/blob/36a56e8e993d45fc40939a3a4d9c0c24990720f1/graal-nodejs/deps/v8/third_party/jinja2/environment.py#L545-L551 | |
root-project/root | fcd3583bb14852bf2e8cd2415717cbaac0e75896 | interpreter/llvm/src/tools/clang/tools/scan-build-py/libscanbuild/report.py | python | parse_bug_plist | (filename) | Returns the generator of bugs from a single .plist file. | Returns the generator of bugs from a single .plist file. | [
"Returns",
"the",
"generator",
"of",
"bugs",
"from",
"a",
"single",
".",
"plist",
"file",
"."
] | def parse_bug_plist(filename):
""" Returns the generator of bugs from a single .plist file. """
content = plistlib.readPlist(filename)
files = content.get('files')
for bug in content.get('diagnostics', []):
if len(files) <= int(bug['location']['file']):
logging.warning('Parsing bug from "%s" failed', filename)
continue
yield {
'result': filename,
'bug_type': bug['type'],
'bug_category': bug['category'],
'bug_line': int(bug['location']['line']),
'bug_path_length': int(bug['location']['col']),
'bug_file': files[int(bug['location']['file'])]
} | [
"def",
"parse_bug_plist",
"(",
"filename",
")",
":",
"content",
"=",
"plistlib",
".",
"readPlist",
"(",
"filename",
")",
"files",
"=",
"content",
".",
"get",
"(",
"'files'",
")",
"for",
"bug",
"in",
"content",
".",
"get",
"(",
"'diagnostics'",
",",
"[",
... | https://github.com/root-project/root/blob/fcd3583bb14852bf2e8cd2415717cbaac0e75896/interpreter/llvm/src/tools/clang/tools/scan-build-py/libscanbuild/report.py#L281-L298 | ||
AngoraFuzzer/Angora | 80e81c8590077bc0ac069dbd367da8ce405ff618 | llvm_mode/dfsan_rt/sanitizer_common/scripts/cpplint.py | python | _Filters | () | return _cpplint_state.filters | Returns the module's list of output filters, as a list. | Returns the module's list of output filters, as a list. | [
"Returns",
"the",
"module",
"s",
"list",
"of",
"output",
"filters",
"as",
"a",
"list",
"."
] | def _Filters():
"""Returns the module's list of output filters, as a list."""
return _cpplint_state.filters | [
"def",
"_Filters",
"(",
")",
":",
"return",
"_cpplint_state",
".",
"filters"
] | https://github.com/AngoraFuzzer/Angora/blob/80e81c8590077bc0ac069dbd367da8ce405ff618/llvm_mode/dfsan_rt/sanitizer_common/scripts/cpplint.py#L656-L658 | |
ApolloAuto/apollo-platform | 86d9dc6743b496ead18d597748ebabd34a513289 | ros/ros_comm/rosmaster/src/rosmaster/paramserver.py | python | ParamDictionary.delete_param | (self, key, notify_task=None) | Delete the parameter in the parameter dictionary.
@param key str: parameter key
@param notify_task fn(updates): function to call with
subscriber updates. updates is of the form
[(subscribers, param_key, param_value)*]. The empty dictionary
represents an unset parameter. | Delete the parameter in the parameter dictionary. | [
"Delete",
"the",
"parameter",
"in",
"the",
"parameter",
"dictionary",
"."
] | def delete_param(self, key, notify_task=None):
"""
Delete the parameter in the parameter dictionary.
@param key str: parameter key
@param notify_task fn(updates): function to call with
subscriber updates. updates is of the form
[(subscribers, param_key, param_value)*]. The empty dictionary
represents an unset parameter.
"""
try:
self.lock.acquire()
if key == GLOBALNS:
raise KeyError("cannot delete root of parameter tree")
else:
# key is global, so first split is empty
namespaces = [x for x in key.split(SEP) if x]
# - last namespace is the actual key we're deleting
value_key = namespaces[-1]
namespaces = namespaces[:-1]
d = self.parameters
# - descend tree to the node we're setting
for ns in namespaces:
if type(d) != dict or not ns in d:
raise KeyError(key)
else:
d = d[ns]
if not value_key in d:
raise KeyError(key)
else:
del d[value_key]
# ParamDictionary needs to queue updates so that the updates are thread-safe
if notify_task:
updates = compute_param_updates(self.reg_manager.param_subscribers, key, {})
if updates:
notify_task(updates)
finally:
            self.lock.release() | ["def", "delete_param", "(", "self", ",", "key", ",", "notify_task", "=", "None", ")", ":", "try", ":", "self", ".", "lock", ".", "acquire", "(", ")", "if", "key", "==", "GLOBALNS", ":", "raise", "KeyError", "(", "\"cannot delete root of parameter tree\"", ... | https://github.com/ApolloAuto/apollo-platform/blob/86d9dc6743b496ead18d597748ebabd34a513289/ros/ros_comm/rosmaster/src/rosmaster/paramserver.py#L275-L313 | ||
linyouhappy/kongkongxiyou | 7a69b2913eb29f4be77f9a62fb90cdd72c4160f1 | cocosjs/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py | python | TokenKind.__init__ | (self, value, name) | | | Create a new TokenKind instance from a numeric value and a name. | Create a new TokenKind instance from a numeric value and a name. | ["Create", "a", "new", "TokenKind", "instance", "from", "a", "numeric", "value", "and", "a", "name", "."] | def __init__(self, value, name):
"""Create a new TokenKind instance from a numeric value and a name."""
self.value = value
        self.name = name | ["def", "__init__", "(", "self", ",", "value", ",", "name", ")", ":", "self", ".", "value", "=", "value", "self", ".", "name", "=", "name"] | https://github.com/linyouhappy/kongkongxiyou/blob/7a69b2913eb29f4be77f9a62fb90cdd72c4160f1/cocosjs/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py#L556-L559 | |
google/earthenterprise | 0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9 | earth_enterprise/src/google/protobuf-py/google/protobuf/internal/wire_format.py | python | IsTypePackable | (field_type) | return field_type not in NON_PACKABLE_TYPES | Return true iff packable = true is valid for fields of this type.
Args:
field_type: a FieldDescriptor::Type value.
Returns:
    True iff fields of this type are packable. | Return true iff packable = true is valid for fields of this type. | ["Return", "true", "iff", "packable", "=", "true", "is", "valid", "for", "fields", "of", "this", "type", "."] | def IsTypePackable(field_type):
"""Return true iff packable = true is valid for fields of this type.
Args:
field_type: a FieldDescriptor::Type value.
Returns:
True iff fields of this type are packable.
"""
  return field_type not in NON_PACKABLE_TYPES | ["def", "IsTypePackable", "(", "field_type", ")", ":", "return", "field_type", "not", "in", "NON_PACKABLE_TYPES"] | https://github.com/google/earthenterprise/blob/0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9/earth_enterprise/src/google/protobuf-py/google/protobuf/internal/wire_format.py#L258-L267 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/msw/_windows.py | python | PageSetupDialogData.EnableMargins | (*args, **kwargs) | | return _windows_.PageSetupDialogData_EnableMargins(*args, **kwargs) | EnableMargins(self, bool flag) | EnableMargins(self, bool flag) | ["EnableMargins", "(", "self", "bool", "flag", ")"] | def EnableMargins(*args, **kwargs):
"""EnableMargins(self, bool flag)"""
        return _windows_.PageSetupDialogData_EnableMargins(*args, **kwargs) | ["def", "EnableMargins", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_windows_", ".", "PageSetupDialogData_EnableMargins", "(", "*", "args", ",", "*", "*", "kwargs", ")"] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_windows.py#L4878-L4880 | |
keystone-engine/keystone | 1475885daa7e566c064ae9754706e1a0ba24be3b | llvm/utils/llvm-build/llvmbuild/main.py | python | LLVMProjectInfo.write_cmake_exports_fragment | (self, output_path, enabled_optional_components) | write_cmake_exports_fragment(output_path) -> None
Generate a CMake fragment which includes LLVMBuild library
dependencies expressed similarly to how CMake would write
        them via install(EXPORT). | write_cmake_exports_fragment(output_path) -> None | ["write_cmake_exports_fragment", "(", "output_path", ")", "-", ">", "None"] | def write_cmake_exports_fragment(self, output_path, enabled_optional_components):
"""
write_cmake_exports_fragment(output_path) -> None
Generate a CMake fragment which includes LLVMBuild library
dependencies expressed similarly to how CMake would write
them via install(EXPORT).
"""
dependencies = list(self.get_fragment_dependencies())
# Write out the CMake exports fragment.
make_install_dir(os.path.dirname(output_path))
f = open(output_path, 'w')
f.write("""\
# Explicit library dependency information.
#
# The following property assignments tell CMake about link
# dependencies of libraries imported from LLVM.
""")
self.foreach_cmake_library(
lambda ci:
f.write("""\
set_property(TARGET %s PROPERTY IMPORTED_LINK_INTERFACE_LIBRARIES %s)\n""" % (
ci.get_prefixed_library_name(), " ".join(sorted(
dep.get_prefixed_library_name()
for dep in self.get_required_libraries_for_component(ci)))))
,
enabled_optional_components,
skip_disabled = True,
skip_not_installed = True # Do not export internal libraries like gtest
)
        f.close() | ["def", "write_cmake_exports_fragment", "(", "self", ",", "output_path", ",", "enabled_optional_components", ")", ":", "dependencies", "=", "list", "(", "self", ".", "get_fragment_dependencies", "(", ")", ")", "# Write out the CMake exports fragment.", "make_install_dir", ... | https://github.com/keystone-engine/keystone/blob/1475885daa7e566c064ae9754706e1a0ba24be3b/llvm/utils/llvm-build/llvmbuild/main.py#L618-L652 | ||
RamadhanAmizudin/malware | 2c6c53c8b0d556f5d8078d6ca0fc4448f4697cf1 | Fuzzbunch/fuzzbunch/command.py | python | FbCmd.completenames | (self, text, *ignored) | return [ a[3:] for a in self.ctx.get_names() if a.startswith(dotext) ] +\
               [ a[3:] for a in self.get_names() if a.startswith(dotext) ] | Return a list of command names for command completion. | Return a list of command names for command completion. | ["Return", "a", "list", "of", "command", "names", "for", "command", "completion", "."] | def completenames(self, text, *ignored):
"""Return a list of command names for command completion."""
dotext = 'do_' + text
return [ a[3:] for a in self.ctx.get_names() if a.startswith(dotext) ] +\
               [ a[3:] for a in self.get_names() if a.startswith(dotext) ] | ["def", "completenames", "(", "self", ",", "text", ",", "*", "ignored", ")", ":", "dotext", "=", "'do_'", "+", "text", "return", "[", "a", "[", "3", ":", "]", "for", "a", "in", "self", ".", "ctx", ".", "get_names", "(", ")", "if", "a", ".", "sta... | https://github.com/RamadhanAmizudin/malware/blob/2c6c53c8b0d556f5d8078d6ca0fc4448f4697cf1/Fuzzbunch/fuzzbunch/command.py#L299-L303 | |
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/logging/__init__.py | python | Filterer.filter | (self, record) | return rv | Determine if a record is loggable by consulting all the filters.
The default is to allow the record to be logged; any filter can veto
this and the record is then dropped. Returns a zero value if a record
        is to be dropped, else non-zero. | Determine if a record is loggable by consulting all the filters. | ["Determine", "if", "a", "record", "is", "loggable", "by", "consulting", "all", "the", "filters", "."] | def filter(self, record):
"""
Determine if a record is loggable by consulting all the filters.
The default is to allow the record to be logged; any filter can veto
this and the record is then dropped. Returns a zero value if a record
is to be dropped, else non-zero.
"""
rv = 1
for f in self.filters:
if not f.filter(record):
rv = 0
break
        return rv | ["def", "filter", "(", "self", ",", "record", ")", ":", "rv", "=", "1", "for", "f", "in", "self", ".", "filters", ":", "if", "not", "f", ".", "filter", "(", "record", ")", ":", "rv", "=", "0", "break", "return", "rv"] | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/logging/__init__.py#L598-L611 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/pandas/core/internals/blocks.py | python | Block.quantile | (self, qs, interpolation="linear", axis=0) | return make_block(result, placement=np.arange(len(result)), ndim=ndim) | compute the quantiles of the
Parameters
----------
qs: a scalar or list of the quantiles to be computed
interpolation: type of interpolation, default 'linear'
axis: axis to compute, default 0
Returns
-------
        Block | compute the quantiles of the | ["compute", "the", "quantiles", "of", "the"] | def quantile(self, qs, interpolation="linear", axis=0):
"""
compute the quantiles of the
Parameters
----------
qs: a scalar or list of the quantiles to be computed
interpolation: type of interpolation, default 'linear'
axis: axis to compute, default 0
Returns
-------
Block
"""
# We should always have ndim == 2 because Series dispatches to DataFrame
assert self.ndim == 2
values = self.get_values()
is_empty = values.shape[axis] == 0
orig_scalar = not is_list_like(qs)
if orig_scalar:
# make list-like, unpack later
qs = [qs]
if is_empty:
# create the array of na_values
# 2d len(values) * len(qs)
result = np.repeat(
np.array([self.fill_value] * len(qs)), len(values)
).reshape(len(values), len(qs))
else:
# asarray needed for Sparse, see GH#24600
mask = np.asarray(isna(values))
result = nanpercentile(
values,
np.array(qs) * 100,
axis=axis,
na_value=self.fill_value,
mask=mask,
ndim=values.ndim,
interpolation=interpolation,
)
result = np.array(result, copy=False)
result = result.T
if orig_scalar and not lib.is_scalar(result):
# result could be scalar in case with is_empty and self.ndim == 1
assert result.shape[-1] == 1, result.shape
result = result[..., 0]
result = lib.item_from_zerodim(result)
ndim = np.ndim(result)
        return make_block(result, placement=np.arange(len(result)), ndim=ndim) | ["def", "quantile", "(", "self", ",", "qs", ",", "interpolation", "=", "\"linear\"", ",", "axis", "=", "0", ")", ":", "# We should always have ndim == 2 because Series dispatches to DataFrame", "assert", "self", ".", "ndim", "==", "2", "values", "=", "self", ".", ... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/pandas/core/internals/blocks.py#L1491-L1545 | |
panda3d/panda3d | 833ad89ebad58395d0af0b7ec08538e5e4308265 | direct/src/directtools/DirectLights.py | python | DirectLights.setOff | (self, directLight) | | | Turn off the given directLight | Turn off the given directLight | ["Turn", "off", "the", "given", "directLight"] | def setOff(self, directLight):
"""
Turn off the given directLight
"""
        render.clearLight(directLight) | ["def", "setOff", "(", "self", ",", "directLight", ")", ":", "render", ".", "clearLight", "(", "directLight", ")"] | https://github.com/panda3d/panda3d/blob/833ad89ebad58395d0af0b7ec08538e5e4308265/direct/src/directtools/DirectLights.py#L130-L134 | |
google/clif | cab24d6a105609a65c95a36a1712ae3c20c7b5df | clif/python/astutils.py | python | FuncParamStr | (fdecl, arg_name=None, true_cpp_type=False) | | return TupleStr('%s %s%d' % (a, arg_name, i) for i, a in enumerate(args)) | Constructs the "(params)" string for the func declaration proto. | Constructs the "(params)" string for the func declaration proto. | ["Constructs", "the", "(", "params", ")", "string", "for", "the", "func", "declaration", "proto", "."] | def FuncParamStr(fdecl, arg_name=None, true_cpp_type=False):
"""Constructs the "(params)" string for the func declaration proto."""
if not arg_name:
return TupleStr(itertools.chain((Type(a) for a in fdecl.params),
FuncReturns(fdecl)))
assert true_cpp_type, 'arg_name make sense only for true_cpp_type'
# Skip returns[0] if not void.
returns = fdecl.returns if fdecl.cpp_void_return else fdecl.returns[1:]
args = ([ExactTypeOrType(a) for a in fdecl.params] +
[ExactTypeOrType(a, '*') for a in returns])
  return TupleStr('%s %s%d' % (a, arg_name, i) for i, a in enumerate(args)) | ["def", "FuncParamStr", "(", "fdecl", ",", "arg_name", "=", "None", ",", "true_cpp_type", "=", "False", ")", ":", "if", "not", "arg_name", ":", "return", "TupleStr", "(", "itertools", ".", "chain", "(", "(", "Type", "(", "a", ")", "for", "a", "in", "f... | https://github.com/google/clif/blob/cab24d6a105609a65c95a36a1712ae3c20c7b5df/clif/python/astutils.py#L56-L66 | |
Tencent/CMONGO | c40380caa14e05509f46993aa8b8da966b09b0b5 | src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Util.py | python | silent_intern | (x) | Perform sys.intern() on the passed argument and return the result.
If the input is ineligible (e.g. a unicode string) the original argument is
returned and no exception is thrown. | Perform sys.intern() on the passed argument and return the result.
If the input is ineligible (e.g. a unicode string) the original argument is
    returned and no exception is thrown. | ["Perform", "sys", ".", "intern", "()", "on", "the", "passed", "argument", "and", "return", "the", "result", ".", "If", "the", "input", "is", "ineligible", "(", "e", ".", "g", ".", "a", "unicode", "string", ")", "the", "original", "argument", "is", "ret... | def silent_intern(x):
"""
Perform sys.intern() on the passed argument and return the result.
If the input is ineligible (e.g. a unicode string) the original argument is
returned and no exception is thrown.
"""
try:
return sys.intern(x)
except TypeError:
        return x | ["def", "silent_intern", "(", "x", ")", ":", "try", ":", "return", "sys", ".", "intern", "(", "x", ")", "except", "TypeError", ":", "return", "x"] | https://github.com/Tencent/CMONGO/blob/c40380caa14e05509f46993aa8b8da966b09b0b5/src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Util.py#L1463-L1472 | |
OSGeo/gdal | 3748fc4ba4fba727492774b2b908a2130c864a83 | swig/python/osgeo/ogr.py | python | Geometry.SetPointZM | (self, *args, **kwargs) | | return _ogr.Geometry_SetPointZM(self, *args, **kwargs) | r"""SetPointZM(Geometry self, int point, double x, double y, double z, double m) | r"""SetPointZM(Geometry self, int point, double x, double y, double z, double m) | ["r", "SetPointZM", "(", "Geometry", "self", "int", "point", "double", "x", "double", "y", "double", "z", "double", "m", ")"] | def SetPointZM(self, *args, **kwargs):
r"""SetPointZM(Geometry self, int point, double x, double y, double z, double m)"""
        return _ogr.Geometry_SetPointZM(self, *args, **kwargs) | ["def", "SetPointZM", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_ogr", ".", "Geometry_SetPointZM", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")"] | https://github.com/OSGeo/gdal/blob/3748fc4ba4fba727492774b2b908a2130c864a83/swig/python/osgeo/ogr.py#L6014-L6016 | |
mysql/mysql-workbench | 2f35f9034f015cbcd22139a60e1baa2e3e8e795c | plugins/wb.admin/frontend/wb_admin_perfschema_instrumentation.py | python | SetupThreads.thread_edited | (self, node, column, value) | | | This method will be used to enable/disable the instruments. | This method will be used to enable/disable the instruments. | ["This", "method", "will", "be", "used", "to", "enable", "/", "disable", "the", "instruments", "."] | def thread_edited(self, node, column, value):
"""
This method will be used to enable/disable the instruments.
"""
if column == 2:
value = True if value == "1" else False
self._threads[node.get_long(0)].instrumented = value
            node.set_bool(2, value) | ["def", "thread_edited", "(", "self", ",", "node", ",", "column", ",", "value", ")", ":", "if", "column", "==", "2", ":", "value", "=", "True", "if", "value", "==", "\"1\"", "else", "False", "self", ".", "_threads", "[", "node", ".", "get_long", "(", ... | https://github.com/mysql/mysql-workbench/blob/2f35f9034f015cbcd22139a60e1baa2e3e8e795c/plugins/wb.admin/frontend/wb_admin_perfschema_instrumentation.py#L1366-L1374 | ||
linyouhappy/kongkongxiyou | 7a69b2913eb29f4be77f9a62fb90cdd72c4160f1 | cocosjs/frameworks/cocos2d-x/tools/bindings-generator/backup/clang-llvm-3.3-pybinding/cindex.py | python | Cursor.enum_value | (self) | | return self._enum_value | Return the value of an enum constant. | Return the value of an enum constant. | ["Return", "the", "value", "of", "an", "enum", "constant", "."] | def enum_value(self):
"""Return the value of an enum constant."""
if not hasattr(self, '_enum_value'):
assert self.kind == CursorKind.ENUM_CONSTANT_DECL
# Figure out the underlying type of the enum to know if it
# is a signed or unsigned quantity.
underlying_type = self.type
if underlying_type.kind == TypeKind.ENUM:
underlying_type = underlying_type.get_declaration().enum_type
if underlying_type.kind in (TypeKind.CHAR_U,
TypeKind.UCHAR,
TypeKind.CHAR16,
TypeKind.CHAR32,
TypeKind.USHORT,
TypeKind.UINT,
TypeKind.ULONG,
TypeKind.ULONGLONG,
TypeKind.UINT128):
self._enum_value = \
conf.lib.clang_getEnumConstantDeclUnsignedValue(self)
else:
self._enum_value = conf.lib.clang_getEnumConstantDeclValue(self)
        return self._enum_value | ["def", "enum_value", "(", "self", ")", ":", "if", "not", "hasattr", "(", "self", ",", "'_enum_value'", ")", ":", "assert", "self", ".", "kind", "==", "CursorKind", ".", "ENUM_CONSTANT_DECL", "# Figure out the underlying type of the enum to know if it", "# is a signed ... | https://github.com/linyouhappy/kongkongxiyou/blob/7a69b2913eb29f4be77f9a62fb90cdd72c4160f1/cocosjs/frameworks/cocos2d-x/tools/bindings-generator/backup/clang-llvm-3.3-pybinding/cindex.py#L1210-L1232 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/_controls.py | python | Slider.GetTickFreq | (*args, **kwargs) | | return _controls_.Slider_GetTickFreq(*args, **kwargs) | GetTickFreq(self) -> int | GetTickFreq(self) -> int | ["GetTickFreq", "(", "self", ")", "-", ">", "int"] | def GetTickFreq(*args, **kwargs):
"""GetTickFreq(self) -> int"""
        return _controls_.Slider_GetTickFreq(*args, **kwargs) | ["def", "GetTickFreq", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_controls_", ".", "Slider_GetTickFreq", "(", "*", "args", ",", "*", "*", "kwargs", ")"] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_controls.py#L2899-L2901 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/msw/propgrid.py | python | PGCommonValue.__init__ | (self, *args, **kwargs) | | | __init__(self, String label, renderer) -> PGCommonValue | __init__(self, String label, renderer) -> PGCommonValue | ["__init__", "(", "self", "String", "label", "renderer", ")", "-", ">", "PGCommonValue"] | def __init__(self, *args, **kwargs):
"""__init__(self, String label, renderer) -> PGCommonValue"""
        _propgrid.PGCommonValue_swiginit(self,_propgrid.new_PGCommonValue(*args, **kwargs)) | ["def", "__init__", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "_propgrid", ".", "PGCommonValue_swiginit", "(", "self", ",", "_propgrid", ".", "new_PGCommonValue", "(", "*", "args", ",", "*", "*", "kwargs", ")", ")"] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/propgrid.py#L1852-L1854 | |
GoSSIP-SJTU/Armariris | ad5d868482956b2194a77b39c8d543c7c2318200 | tools/clang/bindings/python/clang/cindex.py | python | SourceRange.start | (self) | return conf.lib.clang_getRangeStart(self) | Return a SourceLocation representing the first character within a
source range. | Return a SourceLocation representing the first character within a
        source range. | ["Return", "a", "SourceLocation", "representing", "the", "first", "character", "within", "a", "source", "range", "."] | def start(self):
"""
Return a SourceLocation representing the first character within a
source range.
"""
        return conf.lib.clang_getRangeStart(self) | ["def", "start", "(", "self", ")", ":", "return", "conf", ".", "lib", ".", "clang_getRangeStart", "(", "self", ")"] | https://github.com/GoSSIP-SJTU/Armariris/blob/ad5d868482956b2194a77b39c8d543c7c2318200/tools/clang/bindings/python/clang/cindex.py#L248-L253 | |
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/eager/monitoring.py | python | StringGauge.__init__ | (self, name, description, *labels) | Creates a new StringGauge.
Args:
name: name of the new metric.
description: description of the new metric.
      *labels: The label list of the new metric. | Creates a new StringGauge. | ["Creates", "a", "new", "StringGauge", "."] | def __init__(self, name, description, *labels):
"""Creates a new StringGauge.
Args:
name: name of the new metric.
description: description of the new metric.
*labels: The label list of the new metric.
"""
super(StringGauge, self).__init__('StringGauge', _string_gauge_methods,
                                     len(labels), name, description, *labels) | ["def", "__init__", "(", "self", ",", "name", ",", "description", ",", "*", "labels", ")", ":", "super", "(", "StringGauge", ",", "self", ")", ".", "__init__", "(", "'StringGauge'", ",", "_string_gauge_methods", ",", "len", "(", "labels", ")", ",", "name"... | https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/eager/monitoring.py#L295-L304 | |
openvinotoolkit/openvino | dedcbeafa8b84cccdc55ca64b8da516682b381c7 | tools/mo/openvino/tools/mo/middle/CustomSubgraphCall.py | python | CustomSubgraphCall.add_sub_graph_call_output_tensors_transposes | (node: Node) | Adds transpose operations to the output nodes if they are 4D to change layout from NCHW to NHWC.
:param node: the node to add transposes to the output nodes to.
:return: None | Adds transpose operations to the output nodes if they are 4D to change layout from NCHW to NHWC.
:param node: the node to add transposes to the output nodes to.
    :return: None | ["Adds", "transpose", "operations", "to", "the", "output", "nodes", "if", "they", "are", "4D", "to", "change", "layout", "from", "NCHW", "to", "NHWC", ".", ":", "param", "node", ":", "the", "node", "to", "add", "transposes", "to", "the", "output", "nodes"... | def add_sub_graph_call_output_tensors_transposes(node: Node):
"""
Adds transpose operations to the output nodes if they are 4D to change layout from NCHW to NHWC.
:param node: the node to add transposes to the output nodes to.
:return: None
"""
try:
import tensorflow.compat.v1 as tf_v1
# disable eager execution of TensorFlow 2 environment immediately
tf_v1.disable_eager_execution()
except ImportError:
import tensorflow as tf_v1
# in some environment suppressing through TF_CPP_MIN_LOG_LEVEL does not work
tf_v1.get_logger().setLevel("ERROR")
from openvino.tools.mo.front.tf.partial_infer.tf import get_subgraph_output_tensors, add_node_def_to_subgraph
_, output_tensors = get_subgraph_output_tensors(node)
# transpose permutation constant
nhwc_to_nchw_constant = tf_v1.constant(nhwc_to_nchw_permute, dtype=tf_v1.int32, name=nhwc_to_nchw_constant_name)
# dummy node which we can refer to as input in the transpose for the output node
dummy_node = tf_v1.constant(value=[[[[1]]]], dtype=tf_v1.float32, name='random_dummy_name')
new_out_tensor_names = list()
for out_tensor_name in node['output_tensors_names']:
out_name, out_port = out_tensor_name.split(':')
if len(output_tensors[
int(out_port)].shape) == 4: # TODO think about better check whether transpose is required
out_transpose_name = out_name + '_port_' + out_port + '_transpose'
transpose = tf_v1.transpose(dummy_node, nhwc_to_nchw_constant, name=out_transpose_name)
# starting from TF 1.8 it is not possible to modify the "node_def" of the "tf.op", so we create a copy,
# update it and use further
new_input_names = transpose.op.node_def.input[:]
new_input_names[0] = out_tensor_name
new_node_def = copy.deepcopy(transpose.op.node_def)
new_node_def.input[:] = new_input_names
add_node_def_to_subgraph(node, new_node_def, position=len(node['nodes_order']))
new_out_tensor_names.append(out_transpose_name)
else:
new_out_tensor_names.append(out_tensor_name)
# update output tensor names with transposes operations
    node['output_tensors_names'] = new_out_tensor_names | ["def", "add_sub_graph_call_output_tensors_transposes", "(", "node", ":", "Node", ")", ":", "try", ":", "import", "tensorflow", ".", "compat", ".", "v1", "as", "tf_v1", "# disable eager execution of TensorFlow 2 environment immediately", "tf_v1", ".", "disable_eager_executio... | https://github.com/openvinotoolkit/openvino/blob/dedcbeafa8b84cccdc55ca64b8da516682b381c7/tools/mo/openvino/tools/mo/middle/CustomSubgraphCall.py#L273-L317 | |
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | tools/grit/grit/node/include.py | python | IncludeNode.GetHtmlResourceFilenames | (self) | return grit.format.html_inline.GetResourceFilenames(
self.ToRealPath(self.GetInputPath()),
        allow_external_script=allow_external_script) | Returns a set of all filenames inlined by this file. | Returns a set of all filenames inlined by this file. | ["Returns", "a", "set", "of", "all", "filenames", "inlined", "by", "this", "file", "."] | def GetHtmlResourceFilenames(self):
"""Returns a set of all filenames inlined by this file."""
allow_external_script = self.attrs['allowexternalscript'] == 'true'
return grit.format.html_inline.GetResourceFilenames(
self.ToRealPath(self.GetInputPath()),
        allow_external_script=allow_external_script) | ["def", "GetHtmlResourceFilenames", "(", "self", ")", ":", "allow_external_script", "=", "self", ".", "attrs", "[", "'allowexternalscript'", "]", "==", "'true'", "return", "grit", ".", "format", ".", "html_inline", ".", "GetResourceFilenames", "(", "self", ".", "... | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/tools/grit/grit/node/include.py#L141-L146 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/joblib/joblib/compressor.py | python | CompressorWrapper.decompressor_file | (self, fileobj) | | return self.fileobj_factory(fileobj, 'rb') | Returns an instance of a decompressor file object. | Returns an instance of a decompressor file object. | ["Returns", "an", "instance", "of", "a", "decompressor", "file", "object", "."] | def decompressor_file(self, fileobj):
"""Returns an instance of a decompressor file object."""
        return self.fileobj_factory(fileobj, 'rb') | ["def", "decompressor_file", "(", "self", ",", "fileobj", ")", ":", "return", "self", ".", "fileobj_factory", "(", "fileobj", ",", "'rb'", ")"] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/joblib/joblib/compressor.py#L110-L112 | |
koth/kcws | 88efbd36a7022de4e6e90f5a1fb880cf87cfae9f | third_party/python/cpplint/cpplint.py | python | _DropCommonSuffixes | (filename) | return os.path.splitext(filename)[0] | Drops common suffixes like _test.cc or -inl.h from filename.
For example:
>>> _DropCommonSuffixes('foo/foo-inl.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/bar/foo.cc')
'foo/bar/foo'
>>> _DropCommonSuffixes('foo/foo_internal.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/foo_unusualinternal.h')
'foo/foo_unusualinternal'
Args:
filename: The input filename.
Returns:
    The filename with the common suffix removed. | Drops common suffixes like _test.cc or -inl.h from filename. | ["Drops", "common", "suffixes", "like", "_test", ".", "cc", "or", "-", "inl", ".", "h", "from", "filename", "."] | def _DropCommonSuffixes(filename):
"""Drops common suffixes like _test.cc or -inl.h from filename.
For example:
>>> _DropCommonSuffixes('foo/foo-inl.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/bar/foo.cc')
'foo/bar/foo'
>>> _DropCommonSuffixes('foo/foo_internal.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/foo_unusualinternal.h')
'foo/foo_unusualinternal'
Args:
filename: The input filename.
Returns:
The filename with the common suffix removed.
"""
for suffix in ('test.cc', 'regtest.cc', 'unittest.cc',
'inl.h', 'impl.h', 'internal.h'):
if (filename.endswith(suffix) and len(filename) > len(suffix) and
filename[-len(suffix) - 1] in ('-', '_')):
return filename[:-len(suffix) - 1]
  return os.path.splitext(filename)[0] | ["def", "_DropCommonSuffixes", "(", "filename", ")", ":", "for", "suffix", "in", "(", "'test.cc'", ",", "'regtest.cc'", ",", "'unittest.cc'", ",", "'inl.h'", ",", "'impl.h'", ",", "'internal.h'", ")", ":", "if", "(", "filename", ".", "endswith", "(", "suffix"... | https://github.com/koth/kcws/blob/88efbd36a7022de4e6e90f5a1fb880cf87cfae9f/third_party/python/cpplint/cpplint.py#L4502-L4526 | |
weichengkuo/DeepBox | c4f8c065b6a51cf296540cc453a44f0519aaacc9 | caffe-fast-rcnn/scripts/cpp_lint.py | python | _ClassifyInclude | (fileinfo, include, is_system) | return _OTHER_HEADER | Figures out what kind of header 'include' is.
Args:
fileinfo: The current file cpplint is running over. A FileInfo instance.
include: The path to a #included file.
is_system: True if the #include used <> rather than "".
Returns:
One of the _XXX_HEADER constants.
For example:
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)
_C_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)
_CPP_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)
_LIKELY_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),
... 'bar/foo_other_ext.h', False)
_POSSIBLE_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)
    _OTHER_HEADER | Figures out what kind of header 'include' is. | ["Figures", "out", "what", "kind", "of", "header", "include", "is", "."] | def _ClassifyInclude(fileinfo, include, is_system):
"""Figures out what kind of header 'include' is.
Args:
fileinfo: The current file cpplint is running over. A FileInfo instance.
include: The path to a #included file.
is_system: True if the #include used <> rather than "".
Returns:
One of the _XXX_HEADER constants.
For example:
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)
_C_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)
_CPP_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)
_LIKELY_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),
... 'bar/foo_other_ext.h', False)
_POSSIBLE_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)
_OTHER_HEADER
"""
# This is a list of all standard c++ header files, except
# those already checked for above.
is_cpp_h = include in _CPP_HEADERS
if is_system:
if is_cpp_h:
return _CPP_SYS_HEADER
else:
return _C_SYS_HEADER
# If the target file and the include we're checking share a
# basename when we drop common extensions, and the include
# lives in . , then it's likely to be owned by the target file.
target_dir, target_base = (
os.path.split(_DropCommonSuffixes(fileinfo.RepositoryName())))
include_dir, include_base = os.path.split(_DropCommonSuffixes(include))
if target_base == include_base and (
include_dir == target_dir or
include_dir == os.path.normpath(target_dir + '/../public')):
return _LIKELY_MY_HEADER
# If the target and include share some initial basename
# component, it's possible the target is implementing the
# include, so it's allowed to be first, but we'll never
# complain if it's not there.
target_first_component = _RE_FIRST_COMPONENT.match(target_base)
include_first_component = _RE_FIRST_COMPONENT.match(include_base)
if (target_first_component and include_first_component and
target_first_component.group(0) ==
include_first_component.group(0)):
return _POSSIBLE_MY_HEADER
  return _OTHER_HEADER | ["def", "_ClassifyInclude", "(", "fileinfo", ",", "include", ",", "is_system", ")", ":", "# This is a list of all standard c++ header files, except", "# those already checked for above.", "is_cpp_h", "=", "include", "in", "_CPP_HEADERS", "if", "is_system", ":", "if", "is_cpp... | https://github.com/weichengkuo/DeepBox/blob/c4f8c065b6a51cf296540cc453a44f0519aaacc9/caffe-fast-rcnn/scripts/cpp_lint.py#L3620-L3676 | |
microsoft/CNTK | e9396480025b9ca457d26b6f33dd07c474c6aa04 | bindings/python/cntk/contrib/netopt/factorization.py | python | dense_factored | (shapes, #(shape1, shape2)
activation=default_override_or(identity),
init={'W1':None, 'W2':None},
input_rank=None,
map_rank=None,
bias=default_override_or(True),
init_bias=default_override_or(0),
name='') | return dense | Perform the new model creation using the factored inputs W1 and W2.
The returend function represents the new model.
Args:
shapes : dimensions of the input matrices.
activation : activation function used for the model.
init : the two matrices corresponding to the factorization.
input_rank : rank of the input tensor.
map_rank : ???
bias : bias for the model.
init_bias : initial bias value.
name : name of the block function that creates the new model.
Returns:
a model that is factored and projected (reduced). | Perform the new model creation using the factored inputs W1 and W2.
The returend function represents the new model. | [
"Perform", "the", "new", "model", "creation", "using", "the", "factored", "inputs", "W1", "and", "W2", ".", "The", "returend", "function", "represents", "the", "new", "model", "." ] | def dense_factored(shapes, #(shape1, shape2)
activation=default_override_or(identity),
init={'W1':None, 'W2':None},
input_rank=None,
map_rank=None,
bias=default_override_or(True),
init_bias=default_override_or(0),
name=''):
'''
Perform the new model creation using the factored inputs W1 and W2.
The returend function represents the new model.
Args:
shapes : dimensions of the input matrices.
activation : activation function used for the model.
init : the two matrices corresponding to the factorization.
input_rank : rank of the input tensor.
map_rank : ???
bias : bias for the model.
init_bias : initial bias value.
name : name of the block function that creates the new model.
Returns:
a model that is factored and projected (reduced).
'''
# matthaip: Not sure how to handle input tensor of rank > 1
# or selective flattening of ranks
assert(input_rank is None and
map_rank is None and
all(isinstance(s,int) for s in list(shapes)))
activation = get_default_override(cntk.layers.Dense, activation=activation)
bias = get_default_override(cntk.layers.Dense, bias=bias)
init_bias = get_default_override(cntk.layers.Dense, init_bias=init_bias)
# how to use get_default_override for init parameeter?
output_shape1 = _as_tuple(shapes[0])
output_shape2 = _as_tuple(shapes[1])
if input_rank is not None and map_rank is not None:
raise ValueError("Dense: input_rank and map_rank cannot be specified at the same time.")
# If input_rank not given then pass a single _INFERRED;
# map_rank if given will determine the input_rank.
# The dimension inference may still create multiple axes.
input_shape = _INFERRED
# parameters bound to this Function
# init_weights = _initializer_for(init, Record(output_rank=output_rank))
init_weights = init
W1 = Parameter(input_shape + output_shape1, init=init_weights['W1'], name='W1')
W2 = Parameter(output_shape1 + output_shape2, init=init_weights['W2'], name='W2')
b = Parameter(output_shape2, init=init_bias, name='b') if bias else None
# expression of this function
@BlockFunction('DenseFactored', name)
def dense(x):
r = times(x, W1)
r = times(r, W2)
if b:
r = r + b
if activation is not None:
r = activation(r)
return r
return dense | [
"def", "dense_factored", "(", "shapes", ",", "#(shape1, shape2)", "activation", "=", "default_override_or", "(", "identity", ")", ",", "init", "=", "{", "'W1'", ":", "None", ",", "'W2'", ":", "None", "}", ",", "input_rank", "=", "None", ",", "map_rank", "=... | https://github.com/microsoft/CNTK/blob/e9396480025b9ca457d26b6f33dd07c474c6aa04/bindings/python/cntk/contrib/netopt/factorization.py#L89-L154 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python/src/Lib/mhlib.py | python | SubMessage.__init__ | (self, f, n, fp) | Constructor. | Constructor. | [
"Constructor", "." ] | def __init__(self, f, n, fp):
"""Constructor."""
Message.__init__(self, f, n, fp)
if self.getmaintype() == 'multipart':
self.body = Message.getbodyparts(self)
else:
self.body = Message.getbodytext(self)
self.bodyencoded = Message.getbodytext(self, decode=0) | [
"def", "__init__", "(", "self", ",", "f", ",", "n", ",", "fp", ")", ":", "Message", ".", "__init__", "(", "self", ",", "f", ",", "n", ",", "fp", ")", "if", "self", ".", "getmaintype", "(", ")", "==", "'multipart'", ":", "self", ".", "body", "="... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/mhlib.py#L742-L749 | |
krishauser/Klampt | 972cc83ea5befac3f653c1ba20f80155768ad519 | Python/klampt/math/se3.py | python | ndarray | (T : RigidTransform) | return numpy.array(homogeneous(T)) | Returns the 4x4 homogeneous transform corresponding to T. | Returns the 4x4 homogeneous transform corresponding to T. | [
"Returns", "the", "4x4", "homogeneous", "transform", "corresponding", "to", "T", "." ] | def ndarray(T : RigidTransform) -> "ndarray":
"""Returns the 4x4 homogeneous transform corresponding to T."""
import numpy
return numpy.array(homogeneous(T)) | [
"def", "ndarray", "(", "T", ":", "RigidTransform", ")", "->", "\"ndarray\"", ":", "import", "numpy", "return", "numpy", ".", "array", "(", "homogeneous", "(", "T", ")", ")" ] | https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/klampt/math/se3.py#L84-L87 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/scipy/stats/mstats_basic.py | python | spearmanr | (x, y, use_ties=True) | return SpearmanrResult(rho, prob) | Calculates a Spearman rank-order correlation coefficient and the p-value
to test for non-correlation.
The Spearman correlation is a nonparametric measure of the linear
relationship between two datasets. Unlike the Pearson correlation, the
Spearman correlation does not assume that both datasets are normally
distributed. Like other correlation coefficients, this one varies
between -1 and +1 with 0 implying no correlation. Correlations of -1 or
+1 imply an exact linear relationship. Positive correlations imply that
as `x` increases, so does `y`. Negative correlations imply that as `x`
increases, `y` decreases.
Missing values are discarded pair-wise: if a value is missing in `x`, the
corresponding value in `y` is masked.
The p-value roughly indicates the probability of an uncorrelated system
producing datasets that have a Spearman correlation at least as extreme
as the one computed from these datasets. The p-values are not entirely
reliable but are probably reasonable for datasets larger than 500 or so.
Parameters
----------
x : array_like
The length of `x` must be > 2.
y : array_like
The length of `y` must be > 2.
use_ties : bool, optional
Whether the correction for ties should be computed.
Returns
-------
correlation : float
Spearman correlation coefficient
pvalue : float
2-tailed p-value.
References
----------
[CRCProbStat2000] section 14.7 | Calculates a Spearman rank-order correlation coefficient and the p-value
to test for non-correlation. | [
"Calculates", "a", "Spearman", "rank", "-", "order", "correlation", "coefficient", "and", "the", "p", "-", "value", "to", "test", "for", "non", "-", "correlation", "." ] | def spearmanr(x, y, use_ties=True):
"""
Calculates a Spearman rank-order correlation coefficient and the p-value
to test for non-correlation.
The Spearman correlation is a nonparametric measure of the linear
relationship between two datasets. Unlike the Pearson correlation, the
Spearman correlation does not assume that both datasets are normally
distributed. Like other correlation coefficients, this one varies
between -1 and +1 with 0 implying no correlation. Correlations of -1 or
+1 imply an exact linear relationship. Positive correlations imply that
as `x` increases, so does `y`. Negative correlations imply that as `x`
increases, `y` decreases.
Missing values are discarded pair-wise: if a value is missing in `x`, the
corresponding value in `y` is masked.
The p-value roughly indicates the probability of an uncorrelated system
producing datasets that have a Spearman correlation at least as extreme
as the one computed from these datasets. The p-values are not entirely
reliable but are probably reasonable for datasets larger than 500 or so.
Parameters
----------
x : array_like
The length of `x` must be > 2.
y : array_like
The length of `y` must be > 2.
use_ties : bool, optional
Whether the correction for ties should be computed.
Returns
-------
correlation : float
Spearman correlation coefficient
pvalue : float
2-tailed p-value.
References
----------
[CRCProbStat2000] section 14.7
"""
(x, y, n) = _chk_size(x, y)
(x, y) = (x.ravel(), y.ravel())
m = ma.mask_or(ma.getmask(x), ma.getmask(y))
n -= m.sum()
if m is not nomask:
x = ma.array(x, mask=m, copy=True)
y = ma.array(y, mask=m, copy=True)
df = n-2
if df < 0:
raise ValueError("The input must have at least 3 entries!")
# Gets the ranks and rank differences
rankx = rankdata(x)
ranky = rankdata(y)
dsq = np.add.reduce((rankx-ranky)**2)
# Tie correction
if use_ties:
xties = count_tied_groups(x)
yties = count_tied_groups(y)
corr_x = np.sum(v*k*(k**2-1) for (k,v) in iteritems(xties))/12.
corr_y = np.sum(v*k*(k**2-1) for (k,v) in iteritems(yties))/12.
else:
corr_x = corr_y = 0
denom = n*(n**2 - 1)/6.
if corr_x != 0 or corr_y != 0:
rho = denom - dsq - corr_x - corr_y
rho /= ma.sqrt((denom-2*corr_x)*(denom-2*corr_y))
else:
rho = 1. - dsq/denom
t = ma.sqrt(ma.divide(df,(rho+1.0)*(1.0-rho))) * rho
if t is masked:
prob = 0.
else:
prob = _betai(0.5*df, 0.5, df/(df + t * t))
return SpearmanrResult(rho, prob) | [
"def", "spearmanr", "(", "x", ",", "y", ",", "use_ties", "=", "True", ")", ":", "(", "x", ",", "y", ",", "n", ")", "=", "_chk_size", "(", "x", ",", "y", ")", "(", "x", ",", "y", ")", "=", "(", "x", ".", "ravel", "(", ")", ",", "y", ".",... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/scipy/stats/mstats_basic.py#L410-L491 | |
Xilinx/Vitis-AI | fc74d404563d9951b57245443c73bef389f3657f | tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/lite/python/convert.py | python | toco_convert_graph_def | (input_data, input_arrays_with_shape, output_arrays,
enable_mlir_converter, *args, **kwargs) | return data | Convert a model using TOCO.
This function is used to convert GraphDefs that cannot be loaded into
TensorFlow to TFLite. Conversion can be customized by providing arguments
that are forwarded to `build_toco_convert_protos` (see documentation for
details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_arrays_with_shape: Tuple of strings representing input tensor names
and list of integers representing input shapes
(e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded
into TensorFlow and when `input_tensors` is None. (default None)
output_arrays: List of output tensors to freeze graph with. Use only when
graph cannot be loaded into TensorFlow and when `output_tensors` is None.
(default None)
enable_mlir_converter: Enables the MLIR converter instead of the TOCO
converter.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. | Convert a model using TOCO. | [
"Convert", "a", "model", "using", "TOCO", "." ] | def toco_convert_graph_def(input_data, input_arrays_with_shape, output_arrays,
enable_mlir_converter, *args, **kwargs):
""""Convert a model using TOCO.
This function is used to convert GraphDefs that cannot be loaded into
TensorFlow to TFLite. Conversion can be customized by providing arguments
that are forwarded to `build_toco_convert_protos` (see documentation for
details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_arrays_with_shape: Tuple of strings representing input tensor names
and list of integers representing input shapes
(e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded
into TensorFlow and when `input_tensors` is None. (default None)
output_arrays: List of output tensors to freeze graph with. Use only when
graph cannot be loaded into TensorFlow and when `output_tensors` is None.
(default None)
enable_mlir_converter: Enables the MLIR converter instead of the TOCO
converter.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`.
"""
model_flags, toco_flags, _ = build_toco_convert_protos(
input_tensors=[], output_tensors=[], *args, **kwargs)
for idx, (name, shape) in enumerate(input_arrays_with_shape):
input_array = model_flags.input_arrays.add()
if toco_flags.inference_input_type == _types_pb2.QUANTIZED_UINT8:
if (("quantized_input_stats" not in kwargs) or
(not kwargs["quantized_input_stats"])):
raise ValueError("std_dev and mean must be defined when "
"inference_input_type is QUANTIZED_UINT8.")
input_array.mean_value, input_array.std_value = kwargs[
"quantized_input_stats"][idx]
input_array.name = name
input_array.shape.dims.extend(map(int, shape))
for name in output_arrays:
model_flags.output_arrays.append(name)
data = toco_convert_protos(
model_flags.SerializeToString(),
toco_flags.SerializeToString(),
input_data.SerializeToString(),
enable_mlir_converter=enable_mlir_converter)
return data | [
"def", "toco_convert_graph_def", "(", "input_data", ",", "input_arrays_with_shape", ",", "output_arrays", ",", "enable_mlir_converter", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "model_flags", ",", "toco_flags", ",", "_", "=", "build_toco_convert_protos",... | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/lite/python/convert.py#L360-L413 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/setuptools/py3/setuptools/sandbox.py | python | AbstractSandbox._validate_path | (self, path) | return path | Called to remap or validate any path, whether input or output | Called to remap or validate any path, whether input or output | [
"Called", "to", "remap", "or", "validate", "any", "path", "whether", "input", "or", "output" ] | def _validate_path(self, path):
"""Called to remap or validate any path, whether input or output"""
return path | [
"def", "_validate_path", "(", "self", ",", "path", ")", ":", "return", "path" ] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/setuptools/py3/setuptools/sandbox.py#L382-L384 | |
wy1iu/LargeMargin_Softmax_Loss | c3e9f20e4f16e2b4daf7d358a614366b9b39a6ec | scripts/cpp_lint.py | python | FindEndOfExpressionInLine | (line, startpos, depth, startchar, endchar) | return (-1, depth) | Find the position just after the matching endchar.
Args:
line: a CleansedLines line.
startpos: start searching at this position.
depth: nesting level at startpos.
startchar: expression opening character.
endchar: expression closing character.
Returns:
On finding matching endchar: (index just after matching endchar, 0)
Otherwise: (-1, new depth at end of this line) | Find the position just after the matching endchar. | [
"Find", "the", "position", "just", "after", "the", "matching", "endchar", "." ] | def FindEndOfExpressionInLine(line, startpos, depth, startchar, endchar):
"""Find the position just after the matching endchar.
Args:
line: a CleansedLines line.
startpos: start searching at this position.
depth: nesting level at startpos.
startchar: expression opening character.
endchar: expression closing character.
Returns:
On finding matching endchar: (index just after matching endchar, 0)
Otherwise: (-1, new depth at end of this line)
"""
for i in xrange(startpos, len(line)):
if line[i] == startchar:
depth += 1
elif line[i] == endchar:
depth -= 1
if depth == 0:
return (i + 1, 0)
return (-1, depth) | [
"def", "FindEndOfExpressionInLine", "(", "line", ",", "startpos", ",", "depth", ",", "startchar", ",", "endchar", ")", ":", "for", "i", "in", "xrange", "(", "startpos", ",", "len", "(", "line", ")", ")", ":", "if", "line", "[", "i", "]", "==", "start... | https://github.com/wy1iu/LargeMargin_Softmax_Loss/blob/c3e9f20e4f16e2b4daf7d358a614366b9b39a6ec/scripts/cpp_lint.py#L1230-L1251 | |
RamadhanAmizudin/malware | 2c6c53c8b0d556f5d8078d6ca0fc4448f4697cf1 | Fuzzbunch/fuzzbunch/pyreadline/modes/emacs.py | python | EmacsMode.end_of_history | (self, e) | Move to the end of the input history, i.e., the line currently
being entered. | Move to the end of the input history, i.e., the line currently
being entered. | [
"Move", "to", "the", "end", "of", "the", "input", "history", "i", ".", "e", ".", "the", "line", "currently", "being", "entered", "." ] | def end_of_history(self, e): # (M->)
'''Move to the end of the input history, i.e., the line currently
being entered.'''
self._history.end_of_history(self.l_buffer) | [
"def", "end_of_history", "(", "self", ",", "e", ")", ":", "# (M->)", "self", ".", "_history", ".", "end_of_history", "(", "self", ".", "l_buffer", ")" ] | https://github.com/RamadhanAmizudin/malware/blob/2c6c53c8b0d556f5d8078d6ca0fc4448f4697cf1/Fuzzbunch/fuzzbunch/pyreadline/modes/emacs.py#L150-L153 | |
francinexue/xuefu | b6ff79747a42e020588c0c0a921048e08fe4680c | ctpx/ctp3/ctptd.py | python | CtpTd.onRspQryTrade | (self, TradeField, RspInfoField, requestId, final) | 请求查询成交响应 | 请求查询成交响应 | [
"请求查询成交响应" ] | def onRspQryTrade(self, TradeField, RspInfoField, requestId, final):
"""请求查询成交响应"""
pass | [
"def", "onRspQryTrade", "(", "self", ",", "TradeField", ",", "RspInfoField", ",", "requestId", ",", "final", ")", ":", "pass" ] | https://github.com/francinexue/xuefu/blob/b6ff79747a42e020588c0c0a921048e08fe4680c/ctpx/ctp3/ctptd.py#L191-L193 | |
kamyu104/LeetCode-Solutions | 77605708a927ea3b85aee5a479db733938c7c211 | Python/reordered-power-of-2.py | python | Solution.reorderedPowerOf2 | (self, N) | return any(count == collections.Counter(str(1 << i))
for i in xrange(31)) | :type N: int
:rtype: bool | :type N: int
:rtype: bool | [
":", "type", "N", ":", "int", ":", "rtype", ":", "bool" ] | def reorderedPowerOf2(self, N):
"""
:type N: int
:rtype: bool
"""
count = collections.Counter(str(N))
return any(count == collections.Counter(str(1 << i))
for i in xrange(31)) | [
"def", "reorderedPowerOf2", "(", "self", ",", "N", ")", ":", "count", "=", "collections", ".", "Counter", "(", "str", "(", "N", ")", ")", "return", "any", "(", "count", "==", "collections", ".", "Counter", "(", "str", "(", "1", "<<", "i", ")", ")",... | https://github.com/kamyu104/LeetCode-Solutions/blob/77605708a927ea3b85aee5a479db733938c7c211/Python/reordered-power-of-2.py#L8-L15 | |
ApolloAuto/apollo-platform | 86d9dc6743b496ead18d597748ebabd34a513289 | ros/third_party/lib_aarch64/python2.7/dist-packages/geodesy/wu_point.py | python | WuPoint.toWayPoint | (self) | return self.way_pt | :returns: Corresponding `geographic_msgs/WayPoint`_ message. | :returns: Corresponding `geographic_msgs/WayPoint`_ message. | [
":", "returns", ":", "Corresponding", "geographic_msgs", "/", "WayPoint", "_", "message", "." ] | def toWayPoint(self):
""":returns: Corresponding `geographic_msgs/WayPoint`_ message. """
return self.way_pt | [
"def", "toWayPoint", "(", "self", ")", ":", "return", "self", ".", "way_pt" ] | https://github.com/ApolloAuto/apollo-platform/blob/86d9dc6743b496ead18d597748ebabd34a513289/ros/third_party/lib_aarch64/python2.7/dist-packages/geodesy/wu_point.py#L106-L108 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/xrc.py | python | XmlNode.GetLineNumber | (*args, **kwargs) | return _xrc.XmlNode_GetLineNumber(*args, **kwargs) | GetLineNumber(self) -> int | GetLineNumber(self) -> int | [
"GetLineNumber", "(", "self", ")", "-", ">", "int" ] | def GetLineNumber(*args, **kwargs):
"""GetLineNumber(self) -> int"""
return _xrc.XmlNode_GetLineNumber(*args, **kwargs) | [
"def", "GetLineNumber", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_xrc", ".", "XmlNode_GetLineNumber", "(", "*", "args", ",", "*", "*", "kwargs", ")" ] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/xrc.py#L426-L428 | |
SequoiaDB/SequoiaDB | 2894ed7e5bd6fe57330afc900cf76d0ff0df9f64 | tools/server/php_linux/libxml2/lib/python2.4/site-packages/libxml2.py | python | htmlIsScriptAttribute | (name) | return ret | Check if an attribute is of content type Script | Check if an attribute is of content type Script | [
"Check", "if", "an", "attribute", "is", "of", "content", "type", "Script" ] | def htmlIsScriptAttribute(name):
"""Check if an attribute is of content type Script """
ret = libxml2mod.htmlIsScriptAttribute(name)
return ret | [
"def", "htmlIsScriptAttribute", "(", "name", ")", ":", "ret", "=", "libxml2mod", ".", "htmlIsScriptAttribute", "(", "name", ")", "return", "ret" ] | https://github.com/SequoiaDB/SequoiaDB/blob/2894ed7e5bd6fe57330afc900cf76d0ff0df9f64/tools/server/php_linux/libxml2/lib/python2.4/site-packages/libxml2.py#L750-L753 | |
eclipse/sumo | 7132a9b8b6eea734bdec38479026b4d8c4336d03 | tools/visualization/plot_csv_timeline.py | python | main | (args=None) | The main function; parses options and plots | The main function; parses options and plots | [
"The", "main", "function", ";", "parses", "options", "and", "plots" ] | def main(args=None):
"""The main function; parses options and plots"""
# ---------- build and read options ----------
from optparse import OptionParser
optParser = OptionParser()
optParser.add_option("-i", "--input", dest="input", metavar="FILE",
help="Defines the input file to use")
optParser.add_option("-v", "--verbose", dest="verbose", action="store_true",
default=False, help="If set, the script says what it's doing")
optParser.add_option("-c", "--columns", dest="columns",
default=None, help="Defines which columns shall be plotted")
# standard plot options
helpers.addInteractionOptions(optParser)
helpers.addPlotOptions(optParser)
# parse
options, _ = optParser.parse_args(args=args)
if options.input is None:
print("Error: an input file must be given")
sys.exit(1)
minV = 0
maxV = 0
if options.columns is not None:
options.columns = [int(i) for i in options.columns.split(",")]
nums = readValues(options.input, options.verbose, options.columns)
for f in nums:
maxV = max(maxV, len(nums[f]))
ts = range(minV, maxV + 1)
fig, ax = helpers.openFigure(options)
for i in nums:
v = nums[i]
ci = i
if options.columns is not None:
ci = options.columns.index(i)
c = helpers.getColor(options, ci, len(nums))
plt.plot(ts[0:len(v)], v, label=helpers.getLabel(str(i), ci, options), color=c)
helpers.closeFigure(fig, ax, options) | [
"def", "main", "(", "args", "=", "None", ")", ":", "# ---------- build and read options ----------", "from", "optparse", "import", "OptionParser", "optParser", "=", "OptionParser", "(", ")", "optParser", ".", "add_option", "(", "\"-i\"", ",", "\"--input\"", ",", "... | https://github.com/eclipse/sumo/blob/7132a9b8b6eea734bdec38479026b4d8c4336d03/tools/visualization/plot_csv_timeline.py#L55-L93 | |
apple/swift-lldb | d74be846ef3e62de946df343e8c234bde93a8912 | utils/lui/lldbutil.py | python | is_exe | (fpath) | return os.path.isfile(fpath) and os.access(fpath, os.X_OK) | Returns True if fpath is an executable. | Returns True if fpath is an executable. | [
"Returns", "True", "if", "fpath", "is", "an", "executable", "." ] | def is_exe(fpath):
"""Returns True if fpath is an executable."""
return os.path.isfile(fpath) and os.access(fpath, os.X_OK) | [
"def", "is_exe", "(", "fpath", ")", ":", "return", "os", ".", "path", ".", "isfile", "(", "fpath", ")", "and", "os", ".", "access", "(", "fpath", ",", "os", ".", "X_OK", ")" ] | https://github.com/apple/swift-lldb/blob/d74be846ef3e62de946df343e8c234bde93a8912/utils/lui/lldbutil.py#L27-L29 | |
cmu-db/noisepage | 79276e68fe83322f1249e8a8be96bd63c583ae56 | build-support/cpplint.py | python | _ClassifyInclude | (fileinfo, include, is_system) | return _OTHER_HEADER | Figures out what kind of header 'include' is.
Args:
fileinfo: The current file cpplint is running over. A FileInfo instance.
include: The path to a #included file.
is_system: True if the #include used <> rather than "".
Returns:
One of the _XXX_HEADER constants.
For example:
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)
_C_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)
_CPP_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)
_LIKELY_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),
... 'bar/foo_other_ext.h', False)
_POSSIBLE_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)
_OTHER_HEADER | Figures out what kind of header 'include' is. | [
"Figures", "out", "what", "kind", "of", "header", "include", "is", "." ] | def _ClassifyInclude(fileinfo, include, is_system):
"""Figures out what kind of header 'include' is.
Args:
fileinfo: The current file cpplint is running over. A FileInfo instance.
include: The path to a #included file.
is_system: True if the #include used <> rather than "".
Returns:
One of the _XXX_HEADER constants.
For example:
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)
_C_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)
_CPP_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)
_LIKELY_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),
... 'bar/foo_other_ext.h', False)
_POSSIBLE_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)
_OTHER_HEADER
"""
# This is a list of all standard c++ header files, except
# those already checked for above.
is_cpp_h = include in _CPP_HEADERS
# Headers with C++ extensions shouldn't be considered C system headers
if is_system and os.path.splitext(include)[1] in ['.hpp', '.hxx', '.h++']:
is_system = False
if is_system:
if is_cpp_h:
return _CPP_SYS_HEADER
else:
return _C_SYS_HEADER
# If the target file and the include we're checking share a
# basename when we drop common extensions, and the include
# lives in . , then it's likely to be owned by the target file.
target_dir, target_base = (
os.path.split(_DropCommonSuffixes(fileinfo.RepositoryName())))
include_dir, include_base = os.path.split(_DropCommonSuffixes(include))
target_dir_pub = os.path.normpath(target_dir + '/../public')
target_dir_pub = target_dir_pub.replace('\\', '/')
if target_base == include_base and (
include_dir == target_dir or
include_dir == target_dir_pub):
return _LIKELY_MY_HEADER
# If the target and include share some initial basename
# component, it's possible the target is implementing the
# include, so it's allowed to be first, but we'll never
# complain if it's not there.
target_first_component = _RE_FIRST_COMPONENT.match(target_base)
include_first_component = _RE_FIRST_COMPONENT.match(include_base)
if (target_first_component and include_first_component and
target_first_component.group(0) ==
include_first_component.group(0)):
return _POSSIBLE_MY_HEADER
return _OTHER_HEADER | [
"def", "_ClassifyInclude", "(", "fileinfo", ",", "include", ",", "is_system", ")", ":", "# This is a list of all standard c++ header files, except", "# those already checked for above.", "is_cpp_h", "=", "include", "in", "_CPP_HEADERS", "# Headers with C++ extensions shouldn't be c... | https://github.com/cmu-db/noisepage/blob/79276e68fe83322f1249e8a8be96bd63c583ae56/build-support/cpplint.py#L4711-L4773 | |
google/llvm-propeller | 45c226984fe8377ebfb2ad7713c680d652ba678d | compiler-rt/lib/sanitizer_common/scripts/cpplint.py | python | CheckPrintf | (filename, clean_lines, linenum, error) | Check for printf related issues.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found. | Check for printf related issues. | [
"Check", "for", "printf", "related", "issues", "." ] | def CheckPrintf(filename, clean_lines, linenum, error):
"""Check for printf related issues.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
# When snprintf is used, the second argument shouldn't be a literal.
match = Search(r'snprintf\s*\(([^,]*),\s*([0-9]*)\s*,', line)
if match and match.group(2) != '0':
# If 2nd arg is zero, snprintf is used to calculate size.
error(filename, linenum, 'runtime/printf', 3,
'If you can, use sizeof(%s) instead of %s as the 2nd arg '
'to snprintf.' % (match.group(1), match.group(2)))
# Check if some verboten C functions are being used.
if Search(r'\bsprintf\s*\(', line):
error(filename, linenum, 'runtime/printf', 5,
'Never use sprintf. Use snprintf instead.')
match = Search(r'\b(strcpy|strcat)\s*\(', line)
if match:
error(filename, linenum, 'runtime/printf', 4,
'Almost always, snprintf is better than %s' % match.group(1)) | [
"def", "CheckPrintf", "(", "filename", ",", "clean_lines", ",", "linenum", ",", "error", ")", ":", "line", "=", "clean_lines", ".", "elided", "[", "linenum", "]", "# When snprintf is used, the second argument shouldn't be a literal.", "match", "=", "Search", "(", "r... | https://github.com/google/llvm-propeller/blob/45c226984fe8377ebfb2ad7713c680d652ba678d/compiler-rt/lib/sanitizer_common/scripts/cpplint.py#L4904-L4930 | |
Slicer/Slicer | ba9fadf332cb0303515b68d8d06a344c82e3e3e5 | Modules/Scripted/DataProbe/DataProbe.py | python | DataProbeTest.setUp | (self) | Do whatever is needed to reset the state - typically a scene clear will be enough. | Do whatever is needed to reset the state - typically a scene clear will be enough. | [
"Do", "whatever", "is", "needed", "to", "reset", "the", "state", "-", "typically", "a", "scene", "clear", "will", "be", "enough", "." ] | def setUp(self):
""" Do whatever is needed to reset the state - typically a scene clear will be enough.
"""
pass | [
"def", "setUp", "(", "self", ")", ":", "pass" ] | https://github.com/Slicer/Slicer/blob/ba9fadf332cb0303515b68d8d06a344c82e3e3e5/Modules/Scripted/DataProbe/DataProbe.py#L570-L573 | |
leosac/leosac | 932a2a90bd2e75483d46b24fdbc8f02e0809d731 | python/leosacpy/cli/dev/doc.py | python | build | (ctx) | Build Leosac documentation. | [] | def build(ctx):
"""
Build Leosac documentation.
"""
call('cd {} && doxygen doc/Doxyfile'.format(ctx.obj.root_dir),
shell=True) | [
"def", "build", "(", "ctx", ")", ":", "call", "(", "'cd {} && doxygen doc/Doxyfile'", ".", "format", "(", "ctx", ".", "obj", ".", "root_dir", ")", ",", "shell", "=", "True", ")" ] | https://github.com/leosac/leosac/blob/932a2a90bd2e75483d46b24fdbc8f02e0809d731/python/leosacpy/cli/dev/doc.py#L18-L25 | |
hughperkins/tf-coriander | 970d3df6c11400ad68405f22b0c42a52374e94ca | tensorflow/contrib/learn/python/learn/utils/export.py | python | regression_signature_fn | (examples, unused_features, predictions) | return default_signature, {} | Creates regression signature from given examples and predictions.
Args:
examples: `Tensor`.
unused_features: `dict` of `Tensor`s.
predictions: `Tensor`.
Returns:
Tuple of default regression signature and empty named signatures.
Raises:
ValueError: If examples is `None`. | Creates regression signature from given examples and predictions. | [
"Creates", "regression", "signature", "from", "given", "examples", "and", "predictions", "." ] | def regression_signature_fn(examples, unused_features, predictions):
"""Creates regression signature from given examples and predictions.
Args:
examples: `Tensor`.
unused_features: `dict` of `Tensor`s.
predictions: `Tensor`.
Returns:
Tuple of default regression signature and empty named signatures.
Raises:
ValueError: If examples is `None`.
"""
if examples is None:
raise ValueError('examples cannot be None when using this signature fn.')
default_signature = exporter.regression_signature(
input_tensor=examples, output_tensor=predictions)
return default_signature, {} | [
"def", "regression_signature_fn", "(", "examples", ",", "unused_features", ",", "predictions", ")", ":", "if", "examples", "is", "None", ":", "raise", "ValueError", "(", "'examples cannot be None when using this signature fn.'", ")", "default_signature", "=", "exporter", ... | https://github.com/hughperkins/tf-coriander/blob/970d3df6c11400ad68405f22b0c42a52374e94ca/tensorflow/contrib/learn/python/learn/utils/export.py#L167-L186 | |
perilouswithadollarsign/cstrike15_src | f82112a2388b841d72cb62ca48ab1846dfcc11c8 | thirdparty/protobuf-2.5.0/python/google/protobuf/internal/encoder.py | python | _TagSize | (field_number) | return _VarintSize(wire_format.PackTag(field_number, 0)) | Returns the number of bytes required to serialize a tag with this field
number. | Returns the number of bytes required to serialize a tag with this field
number. | [
"Returns",
"the",
"number",
"of",
"bytes",
"required",
"to",
"serialize",
"a",
"tag",
"with",
"this",
"field",
"number",
"."
] | def _TagSize(field_number):
"""Returns the number of bytes required to serialize a tag with this field
number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarintSize(wire_format.PackTag(field_number, 0)) | [
"def",
"_TagSize",
"(",
"field_number",
")",
":",
"# Just pass in type 0, since the type won't affect the tag+type size.",
"return",
"_VarintSize",
"(",
"wire_format",
".",
"PackTag",
"(",
"field_number",
",",
"0",
")",
")"
] | https://github.com/perilouswithadollarsign/cstrike15_src/blob/f82112a2388b841d72cb62ca48ab1846dfcc11c8/thirdparty/protobuf-2.5.0/python/google/protobuf/internal/encoder.py#L108-L112 | |
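The record above leans on two protobuf internals (`wire_format.PackTag` and `_VarintSize`). As a rough self-contained sketch of what it computes — not the library code itself — a tag is the field number shifted left three bits (the low bits hold the wire type), and the tag size is that value's varint byte length:

```python
def varint_size(value):
    # Count how many 7-bit groups are needed to encode `value` as a varint.
    size = 1
    while value > 0x7F:
        value >>= 7
        size += 1
    return size

def tag_size(field_number):
    # The wire type occupies the low 3 bits, so the tag value is
    # field_number << 3 (the wire type never changes the tag's size).
    return varint_size(field_number << 3)

print(tag_size(1))     # 1
print(tag_size(15))    # 1
print(tag_size(16))    # 2
print(tag_size(2048))  # 3
```

Field numbers 1-15 therefore encode in a single byte, which is why protobuf style guides reserve them for the most frequently set fields.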
Xilinx/Vitis-AI | fc74d404563d9951b57245443c73bef389f3657f | tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/framework/func_graph.py | python | FuncGraph.deferred_internal_captures | (self) | return [c[1] for c in self._deferred_captures.values()] | List of nest of placeholders which at call time will be fed. | List of nest of placeholders which at call time will be fed. | [
"List",
"of",
"nest",
"of",
"placeholders",
"which",
"at",
"call",
"time",
"will",
"be",
"fed",
"."
] | def deferred_internal_captures(self):
"""List of nest of placeholders which at call time will be fed."""
return [c[1] for c in self._deferred_captures.values()] | [
"def",
"deferred_internal_captures",
"(",
"self",
")",
":",
"return",
"[",
"c",
"[",
"1",
"]",
"for",
"c",
"in",
"self",
".",
"_deferred_captures",
".",
"values",
"(",
")",
"]"
] | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/framework/func_graph.py#L697-L699 | |
google/clif | cab24d6a105609a65c95a36a1712ae3c20c7b5df | clif/python/gen.py | python | WrapperClassDef | (name, ctype, cname, is_iter, has_iter, iter_ns,
enable_instance_dict) | Generate wrapper class. | Generate wrapper class. | [
"Generate",
"wrapper",
"class",
"."
] | def WrapperClassDef(name, ctype, cname, is_iter, has_iter, iter_ns,
enable_instance_dict):
"""Generate wrapper class."""
assert not (has_iter and is_iter)
yield ''
yield 'struct %s {' % name
yield I+'PyObject_HEAD'
if is_iter:
assert not enable_instance_dict
yield I+'iterator iter;'
else:
yield I+'::clif::Instance<%s> cpp;' % ctype
yield I+'PyObject* instance_dict = nullptr;'
yield I+'PyObject* weakrefs = nullptr;'
yield '};'
if has_iter:
yield ''
yield 'namespace %s {' % iter_ns
yield 'typedef ::clif::Iterator<%s, %s> iterator;' % (cname, has_iter)
yield '}' | [
"def",
"WrapperClassDef",
"(",
"name",
",",
"ctype",
",",
"cname",
",",
"is_iter",
",",
"has_iter",
",",
"iter_ns",
",",
"enable_instance_dict",
")",
":",
"assert",
"not",
"(",
"has_iter",
"and",
"is_iter",
")",
"yield",
"''",
"yield",
"'struct %s {'",
"%",
... | https://github.com/google/clif/blob/cab24d6a105609a65c95a36a1712ae3c20c7b5df/clif/python/gen.py#L334-L353 | ||
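A small way to exercise the generator style used above: each call yields C++ source lines that the caller joins. This reduced sketch keeps only the non-iterator branch and assumes a two-space indent constant `I` (the real value is defined elsewhere in `gen.py`):

```python
I = '  '  # assumed indent constant; the real one lives in gen.py

def wrapper_class_def(name, ctype):
    # Reduced, non-iterator form of the record above: emit a C++ struct
    # wrapping `ctype` behind a Python object header, one line per yield.
    yield ''
    yield 'struct %s {' % name
    yield I + 'PyObject_HEAD'
    yield I + '::clif::Instance<%s> cpp;' % ctype
    yield I + 'PyObject* instance_dict = nullptr;'
    yield I + 'PyObject* weakrefs = nullptr;'
    yield '};'

print('\n'.join(wrapper_class_def('wrapFoo', 'Foo')))
```

Keeping the emitter lazy means callers can interleave lines from several such generators before writing the final source file.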
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/gsutil/third_party/boto/boto/gs/bucket.py | python | Bucket.copy_key | (self, new_key_name, src_bucket_name, src_key_name,
metadata=None, src_version_id=None, storage_class='STANDARD',
preserve_acl=False, encrypt_key=False, headers=None,
query_args=None, src_generation=None) | return super(Bucket, self).copy_key(
new_key_name, src_bucket_name, src_key_name, metadata=metadata,
storage_class=storage_class, preserve_acl=preserve_acl,
encrypt_key=encrypt_key, headers=headers, query_args=query_args) | Create a new key in the bucket by copying an existing key.
:type new_key_name: string
:param new_key_name: The name of the new key
:type src_bucket_name: string
:param src_bucket_name: The name of the source bucket
:type src_key_name: string
:param src_key_name: The name of the source key
:type src_generation: int
:param src_generation: The generation number of the source key to copy.
If not specified, the latest generation is copied.
:type metadata: dict
:param metadata: Metadata to be associated with new key. If
metadata is supplied, it will replace the metadata of the
source key being copied. If no metadata is supplied, the
source key's metadata will be copied to the new key.
:type version_id: string
:param version_id: Unused in this subclass.
:type storage_class: string
:param storage_class: The storage class of the new key. By
default, the new key will use the standard storage class.
Possible values are: STANDARD | DURABLE_REDUCED_AVAILABILITY
:type preserve_acl: bool
:param preserve_acl: If True, the ACL from the source key will
be copied to the destination key. If False, the
destination key will have the default ACL. Note that
preserving the ACL in the new key object will require two
additional API calls to GCS, one to retrieve the current
ACL and one to set that ACL on the new object. If you
don't care about the ACL (or if you have a default ACL set
on the bucket), a value of False will be significantly more
efficient.
:type encrypt_key: bool
:param encrypt_key: Included for compatibility with S3. This argument is
ignored.
:type headers: dict
:param headers: A dictionary of header name/value pairs.
:type query_args: string
:param query_args: A string of additional querystring arguments
to append to the request
:rtype: :class:`boto.gs.key.Key`
:returns: An instance of the newly created key object | Create a new key in the bucket by copying an existing key. | [
"Create",
"a",
"new",
"key",
"in",
"the",
"bucket",
"by",
"copying",
"an",
"existing",
"key",
"."
] | def copy_key(self, new_key_name, src_bucket_name, src_key_name,
metadata=None, src_version_id=None, storage_class='STANDARD',
preserve_acl=False, encrypt_key=False, headers=None,
query_args=None, src_generation=None):
"""Create a new key in the bucket by copying an existing key.
:type new_key_name: string
:param new_key_name: The name of the new key
:type src_bucket_name: string
:param src_bucket_name: The name of the source bucket
:type src_key_name: string
:param src_key_name: The name of the source key
:type src_generation: int
:param src_generation: The generation number of the source key to copy.
If not specified, the latest generation is copied.
:type metadata: dict
:param metadata: Metadata to be associated with new key. If
metadata is supplied, it will replace the metadata of the
source key being copied. If no metadata is supplied, the
source key's metadata will be copied to the new key.
:type version_id: string
:param version_id: Unused in this subclass.
:type storage_class: string
:param storage_class: The storage class of the new key. By
default, the new key will use the standard storage class.
Possible values are: STANDARD | DURABLE_REDUCED_AVAILABILITY
:type preserve_acl: bool
:param preserve_acl: If True, the ACL from the source key will
be copied to the destination key. If False, the
destination key will have the default ACL. Note that
preserving the ACL in the new key object will require two
additional API calls to GCS, one to retrieve the current
ACL and one to set that ACL on the new object. If you
don't care about the ACL (or if you have a default ACL set
on the bucket), a value of False will be significantly more
efficient.
:type encrypt_key: bool
:param encrypt_key: Included for compatibility with S3. This argument is
ignored.
:type headers: dict
:param headers: A dictionary of header name/value pairs.
:type query_args: string
:param query_args: A string of additional querystring arguments
to append to the request
:rtype: :class:`boto.gs.key.Key`
:returns: An instance of the newly created key object
"""
if src_generation:
headers = headers or {}
headers['x-goog-copy-source-generation'] = str(src_generation)
return super(Bucket, self).copy_key(
new_key_name, src_bucket_name, src_key_name, metadata=metadata,
storage_class=storage_class, preserve_acl=preserve_acl,
encrypt_key=encrypt_key, headers=headers, query_args=query_args) | [
"def",
"copy_key",
"(",
"self",
",",
"new_key_name",
",",
"src_bucket_name",
",",
"src_key_name",
",",
"metadata",
"=",
"None",
",",
"src_version_id",
"=",
"None",
",",
"storage_class",
"=",
"'STANDARD'",
",",
"preserve_acl",
"=",
"False",
",",
"encrypt_key",
... | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/gsutil/third_party/boto/boto/gs/bucket.py#L118-L182 | |
naver/sling | 5671cd445a2caae0b4dd0332299e4cfede05062c | webkit/Tools/Scripts/webkitpy/common/system/filesystem.py | python | FileSystem.read_text_file | (self, path) | Return the contents of the file at the given path as a Unicode string.
The file is read assuming it is a UTF-8 encoded file with no BOM. | Return the contents of the file at the given path as a Unicode string. | [
"Return",
"the",
"contents",
"of",
"the",
"file",
"at",
"the",
"given",
"path",
"as",
"a",
"Unicode",
"string",
"."
] | def read_text_file(self, path):
"""Return the contents of the file at the given path as a Unicode string.
The file is read assuming it is a UTF-8 encoded file with no BOM."""
with codecs.open(path, 'r', 'utf8') as f:
return f.read() | [
"def",
"read_text_file",
"(",
"self",
",",
"path",
")",
":",
"with",
"codecs",
".",
"open",
"(",
"path",
",",
"'r'",
",",
"'utf8'",
")",
"as",
"f",
":",
"return",
"f",
".",
"read",
"(",
")"
] | https://github.com/naver/sling/blob/5671cd445a2caae0b4dd0332299e4cfede05062c/webkit/Tools/Scripts/webkitpy/common/system/filesystem.py#L247-L252 | ||
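The helper above is thin enough to reproduce with the standard library alone; a round-trip sketch (the temporary file is illustrative, not part of the original API):

```python
import codecs
import os
import tempfile

def read_text_file(path):
    # Read the file at `path` as UTF-8 (no BOM handling) into a Unicode string.
    with codecs.open(path, 'r', 'utf8') as f:
        return f.read()

# Round-trip a non-ASCII string through a temporary file.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    with codecs.open(path, 'w', 'utf8') as f:
        f.write(u'caf\u00e9')
    assert read_text_file(path) == u'caf\u00e9'
finally:
    os.remove(path)
```

On Python 3 the same behavior is available as `open(path, encoding='utf8')`; `codecs.open` is what kept the original portable across Python 2 and 3.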
intel/llvm | e6d0547e9d99b5a56430c4749f6c7e328bf221ab | clang-tools-extra/clang-tidy/tool/clang-tidy-diff.py | python | merge_replacement_files | (tmpdir, mergefile) | Merge all replacement files in a directory into a single file | Merge all replacement files in a directory into a single file | [
"Merge",
"all",
"replacement",
"files",
"in",
"a",
"directory",
"into",
"a",
"single",
"file"
] | def merge_replacement_files(tmpdir, mergefile):
"""Merge all replacement files in a directory into a single file"""
# The fixes suggested by clang-tidy >= 4.0.0 are given under
# the top level key 'Diagnostics' in the output yaml files
mergekey = "Diagnostics"
merged = []
for replacefile in glob.iglob(os.path.join(tmpdir, '*.yaml')):
content = yaml.safe_load(open(replacefile, 'r'))
if not content:
continue # Skip empty files.
merged.extend(content.get(mergekey, []))
if merged:
# MainSourceFile: The key is required by the definition inside
# include/clang/Tooling/ReplacementsYaml.h, but the value
# is actually never used inside clang-apply-replacements,
# so we set it to '' here.
output = {'MainSourceFile': '', mergekey: merged}
with open(mergefile, 'w') as out:
yaml.safe_dump(output, out)
else:
# Empty the file:
open(mergefile, 'w').close() | [
"def",
"merge_replacement_files",
"(",
"tmpdir",
",",
"mergefile",
")",
":",
"# The fixes suggested by clang-tidy >= 4.0.0 are given under",
"# the top level key 'Diagnostics' in the output yaml files",
"mergekey",
"=",
"\"Diagnostics\"",
"merged",
"=",
"[",
"]",
"for",
"replacef... | https://github.com/intel/llvm/blob/e6d0547e9d99b5a56430c4749f6c7e328bf221ab/clang-tools-extra/clang-tidy/tool/clang-tidy-diff.py#L93-L115 | ||
ArduPilot/ardupilot | 6e684b3496122b8158ac412b609d00004b7ac306 | libraries/AP_HAL_ChibiOS/hwdef/scripts/chibios_hwdef.py | python | generic_pin.get_OSPEEDR | (self) | return "PIN_O%s(%uU)" % (self.get_OSPEEDR_value(), self.pin) | return one of SPEED_VERYLOW, SPEED_LOW, SPEED_MEDIUM, SPEED_HIGH | return one of SPEED_VERYLOW, SPEED_LOW, SPEED_MEDIUM, SPEED_HIGH | [
"return",
"one",
"of",
"SPEED_VERYLOW",
"SPEED_LOW",
"SPEED_MEDIUM",
"SPEED_HIGH"
] | def get_OSPEEDR(self):
'''return one of SPEED_VERYLOW, SPEED_LOW, SPEED_MEDIUM, SPEED_HIGH'''
return "PIN_O%s(%uU)" % (self.get_OSPEEDR_value(), self.pin) | [
"def",
"get_OSPEEDR",
"(",
"self",
")",
":",
"return",
"\"PIN_O%s(%uU)\"",
"%",
"(",
"self",
".",
"get_OSPEEDR_value",
"(",
")",
",",
"self",
".",
"pin",
")"
] | https://github.com/ArduPilot/ardupilot/blob/6e684b3496122b8158ac412b609d00004b7ac306/libraries/AP_HAL_ChibiOS/hwdef/scripts/chibios_hwdef.py#L356-L358 | |
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/decimal.py | python | Decimal.logical_xor | (self, other, context=None) | return _dec_from_triple(0, result.lstrip('0') or '0', 0) | Applies an 'xor' operation between self and other's digits. | Applies an 'xor' operation between self and other's digits. | [
"Applies",
"an",
"xor",
"operation",
"between",
"self",
"and",
"other",
"s",
"digits",
"."
] | def logical_xor(self, other, context=None):
"""Applies an 'xor' operation between self and other's digits."""
if context is None:
context = getcontext()
other = _convert_other(other, raiseit=True)
if not self._islogical() or not other._islogical():
return context._raise_error(InvalidOperation)
# fill to context.prec
(opa, opb) = self._fill_logical(context, self._int, other._int)
# make the operation, and clean starting zeroes
result = "".join([str(int(a)^int(b)) for a,b in zip(opa,opb)])
return _dec_from_triple(0, result.lstrip('0') or '0', 0) | [
"def",
"logical_xor",
"(",
"self",
",",
"other",
",",
"context",
"=",
"None",
")",
":",
"if",
"context",
"is",
"None",
":",
"context",
"=",
"getcontext",
"(",
")",
"other",
"=",
"_convert_other",
"(",
"other",
",",
"raiseit",
"=",
"True",
")",
"if",
... | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/decimal.py#L3317-L3332 | |
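Although the record shows the internal method, the same operation is reachable on any `Decimal` whose digits are all 0 or 1 (a "logical" operand); a quick sketch with the public API:

```python
from decimal import Decimal

# Logical operands are non-negative Decimals with exponent 0 whose digits
# are all 0 or 1; the operation is applied digit-wise.
a = Decimal('1100')
b = Decimal('1010')

print(a.logical_xor(b))  # 110   (digit-wise result '0110', leading zero stripped)
print(a.logical_and(b))  # 1000
print(a.logical_or(b))   # 1110
```

Passing a non-logical operand (for example `Decimal('2')`) raises `InvalidOperation` under the default context, matching the `_islogical` check in the record.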
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/x86/toolchain/lib/python2.7/textwrap.py | python | TextWrapper._split | (self, text) | return chunks | _split(text : string) -> [string]
Split the text to wrap into indivisible chunks. Chunks are
not quite the same as words; see _wrap_chunks() for full
details. As an example, the text
Look, goof-ball -- use the -b option!
breaks into the following chunks:
'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', 'option!'
if break_on_hyphens is True, or in:
'Look,', ' ', 'goof-ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', option!'
otherwise. | _split(text : string) -> [string] | [
"_split",
"(",
"text",
":",
"string",
")",
"-",
">",
"[",
"string",
"]"
] | def _split(self, text):
"""_split(text : string) -> [string]
Split the text to wrap into indivisible chunks. Chunks are
not quite the same as words; see _wrap_chunks() for full
details. As an example, the text
Look, goof-ball -- use the -b option!
breaks into the following chunks:
'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', 'option!'
if break_on_hyphens is True, or in:
'Look,', ' ', 'goof-ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', option!'
otherwise.
"""
if isinstance(text, _unicode):
if self.break_on_hyphens:
pat = self.wordsep_re_uni
else:
pat = self.wordsep_simple_re_uni
else:
if self.break_on_hyphens:
pat = self.wordsep_re
else:
pat = self.wordsep_simple_re
chunks = pat.split(text)
chunks = filter(None, chunks) # remove empty chunks
return chunks | [
"def",
"_split",
"(",
"self",
",",
"text",
")",
":",
"if",
"isinstance",
"(",
"text",
",",
"_unicode",
")",
":",
"if",
"self",
".",
"break_on_hyphens",
":",
"pat",
"=",
"self",
".",
"wordsep_re_uni",
"else",
":",
"pat",
"=",
"self",
".",
"wordsep_simpl... | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/textwrap.py#L163-L190 | |
linyouhappy/kongkongxiyou | 7a69b2913eb29f4be77f9a62fb90cdd72c4160f1 | cocosjs/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py | python | CompilationDatabase.fromDirectory | (buildDir) | return cdb | Builds a CompilationDatabase from the database found in buildDir | Builds a CompilationDatabase from the database found in buildDir | [
"Builds",
"a",
"CompilationDatabase",
"from",
"the",
"database",
"found",
"in",
"buildDir"
] | def fromDirectory(buildDir):
"""Builds a CompilationDatabase from the database found in buildDir"""
errorCode = c_uint()
try:
cdb = conf.lib.clang_CompilationDatabase_fromDirectory(buildDir,
byref(errorCode))
except CompilationDatabaseError as e:
raise CompilationDatabaseError(int(errorCode.value),
"CompilationDatabase loading failed")
return cdb | [
"def",
"fromDirectory",
"(",
"buildDir",
")",
":",
"errorCode",
"=",
"c_uint",
"(",
")",
"try",
":",
"cdb",
"=",
"conf",
".",
"lib",
".",
"clang_CompilationDatabase_fromDirectory",
"(",
"buildDir",
",",
"byref",
"(",
"errorCode",
")",
")",
"except",
"Compila... | https://github.com/linyouhappy/kongkongxiyou/blob/7a69b2913eb29f4be77f9a62fb90cdd72c4160f1/cocosjs/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py#L2634-L2643 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/pyparsing/py2/pyparsing.py | python | traceParseAction | (f) | return z | Decorator for debugging parse actions.
When the parse action is called, this decorator will print
``">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)"``.
When the parse action completes, the decorator will print
``"<<"`` followed by the returned value, or any exception that the parse action raised.
Example::
wd = Word(alphas)
@traceParseAction
def remove_duplicate_chars(tokens):
return ''.join(sorted(set(''.join(tokens))))
wds = OneOrMore(wd).setParseAction(remove_duplicate_chars)
print(wds.parseString("slkdjs sld sldd sdlf sdljf"))
prints::
>>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
<<leaving remove_duplicate_chars (ret: 'dfjkls')
['dfjkls'] | Decorator for debugging parse actions. | [
"Decorator",
"for",
"debugging",
"parse",
"actions",
"."
] | def traceParseAction(f):
"""Decorator for debugging parse actions.
When the parse action is called, this decorator will print
``">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)"``.
When the parse action completes, the decorator will print
``"<<"`` followed by the returned value, or any exception that the parse action raised.
Example::
wd = Word(alphas)
@traceParseAction
def remove_duplicate_chars(tokens):
return ''.join(sorted(set(''.join(tokens))))
wds = OneOrMore(wd).setParseAction(remove_duplicate_chars)
print(wds.parseString("slkdjs sld sldd sdlf sdljf"))
prints::
>>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
<<leaving remove_duplicate_chars (ret: 'dfjkls')
['dfjkls']
"""
f = _trim_arity(f)
def z(*paArgs):
thisFunc = f.__name__
s, l, t = paArgs[-3:]
if len(paArgs) > 3:
thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc
sys.stderr.write(">>entering %s(line: '%s', %d, %r)\n" % (thisFunc, line(l, s), l, t))
try:
ret = f(*paArgs)
except Exception as exc:
sys.stderr.write("<<leaving %s (exception: %s)\n" % (thisFunc, exc))
raise
sys.stderr.write("<<leaving %s (ret: %r)\n" % (thisFunc, ret))
return ret
try:
z.__name__ = f.__name__
except AttributeError:
pass
return z | [
"def",
"traceParseAction",
"(",
"f",
")",
":",
"f",
"=",
"_trim_arity",
"(",
"f",
")",
"def",
"z",
"(",
"*",
"paArgs",
")",
":",
"thisFunc",
"=",
"f",
".",
"__name__",
"s",
",",
"l",
",",
"t",
"=",
"paArgs",
"[",
"-",
"3",
":",
"]",
"if",
"le... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pyparsing/py2/pyparsing.py#L5281-L5324 | |
pyroscope/rtorrent-ps | ee296b11fb3d609dfdba97ded57f89782f18e4ad | tasks.py | python | docs | (ctx, open_tab=False) | Start watchdog to build the Sphinx docs. | Start watchdog to build the Sphinx docs. | [
"Start",
"watchdog",
"to",
"build",
"the",
"Sphinx",
"docs",
"."
] | def docs(ctx, open_tab=False):
"""Start watchdog to build the Sphinx docs."""
build_dir = 'docs/_build'
index_html = build_dir + '/html/index.html'
stop(ctx)
if os.path.exists(build_dir):
shutil.rmtree(build_dir)
print("\n*** Generating HTML doc ***\n")
ctx.run('builtin cd docs'
' && . {pwd}/.pyvenv/*/bin/activate'
' && nohup {pwd}/docs/Makefile SPHINXBUILD="sphinx-autobuild -p {port:d}'
' -i \'.*\' -i \'*.log\' -i \'*.png\' -i \'*.txt\'" html >autobuild.log 2>&1 &'
.format(port=SPHINX_AUTOBUILD_PORT, pwd=os.getcwd()), pty=False)
for i in range(25):
time.sleep(2.5)
pid = watchdog_pid(ctx)
if pid:
ctx.run("touch docs/index.rst")
ctx.run('ps {}'.format(pid), pty=False)
url = 'http://localhost:{port:d}/'.format(port=SPHINX_AUTOBUILD_PORT)
if open_tab:
webbrowser.open_new_tab(url)
else:
print("\n*** Open '{}' in your browser...".format(url))
break | [
"def",
"docs",
"(",
"ctx",
",",
"open_tab",
"=",
"False",
")",
":",
"build_dir",
"=",
"'docs/_build'",
"index_html",
"=",
"build_dir",
"+",
"'/html/index.html'",
"stop",
"(",
"ctx",
")",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"build_dir",
")",
":",... | https://github.com/pyroscope/rtorrent-ps/blob/ee296b11fb3d609dfdba97ded57f89782f18e4ad/tasks.py#L32-L59 | ||
digibyte/digibyte | 0b8a04fb06d5470a15168e2f675aec57bcc24dac | contrib/devtools/update-translations.py | python | split_format_specifiers | (specifiers) | return set(numeric),other | Split format specifiers between numeric (Qt) and others (strprintf) | Split format specifiers between numeric (Qt) and others (strprintf) | [
"Split",
"format",
"specifiers",
"between",
"numeric",
"(",
"Qt",
")",
"and",
"others",
"(",
"strprintf",
")"
] | def split_format_specifiers(specifiers):
'''Split format specifiers between numeric (Qt) and others (strprintf)'''
numeric = []
other = []
for s in specifiers:
if s in {'1','2','3','4','5','6','7','8','9'}:
numeric.append(s)
else:
other.append(s)
# If both numeric format specifiers and "others" are used, assume we're dealing
# with a Qt-formatted message. In the case of Qt formatting (see https://doc.qt.io/qt-5/qstring.html#arg)
# only numeric formats are replaced at all. This means "(percentage: %1%)" is valid, without needing
# any kind of escaping that would be necessary for strprintf. Without this, this function
# would wrongly detect '%)' as a printf format specifier.
if numeric:
other = []
# numeric (Qt) can be present in any order, others (strprintf) must be in specified order
return set(numeric),other | [
"def",
"split_format_specifiers",
"(",
"specifiers",
")",
":",
"numeric",
"=",
"[",
"]",
"other",
"=",
"[",
"]",
"for",
"s",
"in",
"specifiers",
":",
"if",
"s",
"in",
"{",
"'1'",
",",
"'2'",
",",
"'3'",
",",
"'4'",
",",
"'5'",
",",
"'6'",
",",
"'... | https://github.com/digibyte/digibyte/blob/0b8a04fb06d5470a15168e2f675aec57bcc24dac/contrib/devtools/update-translations.py#L59-L78 | |
google/earthenterprise | 0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9 | earth_enterprise/src/google/protobuf-py/google/protobuf/internal/cpp_message.py | python | _IsMessageSetExtension | (field) | return (field.is_extension and
field.containing_type.has_options and
field.containing_type.GetOptions().message_set_wire_format and
field.type == _TYPE_MESSAGE and
field.message_type == field.extension_scope and
field.label == _LABEL_OPTIONAL) | Checks if a field is a message set extension. | Checks if a field is a message set extension. | [
"Checks",
"if",
"a",
"field",
"is",
"a",
"message",
"set",
"extension",
"."
] | def _IsMessageSetExtension(field):
"""Checks if a field is a message set extension."""
return (field.is_extension and
field.containing_type.has_options and
field.containing_type.GetOptions().message_set_wire_format and
field.type == _TYPE_MESSAGE and
field.message_type == field.extension_scope and
field.label == _LABEL_OPTIONAL) | [
"def",
"_IsMessageSetExtension",
"(",
"field",
")",
":",
"return",
"(",
"field",
".",
"is_extension",
"and",
"field",
".",
"containing_type",
".",
"has_options",
"and",
"field",
".",
"containing_type",
".",
"GetOptions",
"(",
")",
".",
"message_set_wire_format",
... | https://github.com/google/earthenterprise/blob/0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9/earth_enterprise/src/google/protobuf-py/google/protobuf/internal/cpp_message.py#L475-L482 | |
sdhash/sdhash | b9eff63e4e5867e910f41fd69032bbb1c94a2a5e | sdhash-ui/cherrypy/lib/cptools.py | python | SessionAuth.do_login | (self, username, password, from_page='..', **kwargs) | Login. May raise redirect, or return True if request handled. | Login. May raise redirect, or return True if request handled. | [
"Login",
".",
"May",
"raise",
"redirect",
"or",
"return",
"True",
"if",
"request",
"handled",
"."
] | def do_login(self, username, password, from_page='..', **kwargs):
"""Login. May raise redirect, or return True if request handled."""
response = cherrypy.serving.response
error_msg = self.check_username_and_password(username, password)
if error_msg:
body = self.login_screen(from_page, username, error_msg)
response.body = body
if "Content-Length" in response.headers:
# Delete Content-Length header so finalize() recalcs it.
del response.headers["Content-Length"]
return True
else:
cherrypy.serving.request.login = username
cherrypy.session[self.session_key] = username
self.on_login(username)
raise cherrypy.HTTPRedirect(from_page or "/") | [
"def",
"do_login",
"(",
"self",
",",
"username",
",",
"password",
",",
"from_page",
"=",
"'..'",
",",
"*",
"*",
"kwargs",
")",
":",
"response",
"=",
"cherrypy",
".",
"serving",
".",
"response",
"error_msg",
"=",
"self",
".",
"check_username_and_password",
... | https://github.com/sdhash/sdhash/blob/b9eff63e4e5867e910f41fd69032bbb1c94a2a5e/sdhash-ui/cherrypy/lib/cptools.py#L313-L328 | ||
Evolving-AI-Lab/fooling | 66f097dd6bd2eb6794ade3e187a7adfdf1887688 | caffe/python/caffe/pycaffe.py | python | _Net_forward | (self, blobs=None, **kwargs) | return outs | Forward pass: prepare inputs and run the net forward.
Take
blobs: list of blobs to return in addition to output blobs.
kwargs: Keys are input blob names and values are blob ndarrays.
For formatting inputs for Caffe, see Net.preprocess().
If None, input is taken from data layers.
Give
outs: {blob name: blob ndarray} dict. | Forward pass: prepare inputs and run the net forward. | [
"Forward",
"pass",
":",
"prepare",
"inputs",
"and",
"run",
"the",
"net",
"forward",
"."
] | def _Net_forward(self, blobs=None, **kwargs):
"""
Forward pass: prepare inputs and run the net forward.
Take
blobs: list of blobs to return in addition to output blobs.
kwargs: Keys are input blob names and values are blob ndarrays.
For formatting inputs for Caffe, see Net.preprocess().
If None, input is taken from data layers.
Give
outs: {blob name: blob ndarray} dict.
"""
if blobs is None:
blobs = []
if kwargs:
if set(kwargs.keys()) != set(self.inputs):
raise Exception('Input blob arguments do not match net inputs.')
# Set input according to defined shapes and make arrays single and
# C-contiguous as Caffe expects.
for in_, blob in kwargs.iteritems():
if blob.shape[0] != self.blobs[in_].num:
raise Exception('Input is not batch sized')
if blob.ndim != 4:
raise Exception('{} blob is not 4-d'.format(in_))
self.blobs[in_].data[...] = blob
self._forward()
# Unpack blobs to extract
outs = {out: self.blobs[out].data for out in set(self.outputs + blobs)}
return outs | [
"def",
"_Net_forward",
"(",
"self",
",",
"blobs",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"blobs",
"is",
"None",
":",
"blobs",
"=",
"[",
"]",
"if",
"kwargs",
":",
"if",
"set",
"(",
"kwargs",
".",
"keys",
"(",
")",
")",
"!=",
"set"... | https://github.com/Evolving-AI-Lab/fooling/blob/66f097dd6bd2eb6794ade3e187a7adfdf1887688/caffe/python/caffe/pycaffe.py#L38-L70 | |
baidu-research/tensorflow-allreduce | 66d5b855e90b0949e9fa5cca5599fd729a70e874 | tensorflow/contrib/tpu/python/tpu/tpu_estimator.py | python | _InputsHolder.__init__ | (self, sharded_features=None, sharded_labels=None,
num_shards=None) | Constructor.
Args:
sharded_features: A list of features one for each shard. Once provided,
the corresponding shared_labels should be set also and this
`_InputsHolder` is frozen to prevent from future modification. If
`None`, it is expected to add features and labels for each shard by
calling `append_shard` later.
sharded_labels: A list of labels one for each shard.
num_shards: Number of shards in the TPU system. Must be provided unless it
can be deduced from `sharded_features`.
Raises:
ValueError: If both `sharded_features` and `num_shards` are `None`. | Constructor. | [
"Constructor",
"."
] | def __init__(self, sharded_features=None, sharded_labels=None,
num_shards=None):
"""Constructor.
Args:
sharded_features: A list of features one for each shard. Once provided,
the corresponding shared_labels should be set also and this
`_InputsHolder` is frozen to prevent from future modification. If
`None`, it is expected to add features and labels for each shard by
calling `append_shard` later.
sharded_labels: A list of labels one for each shard.
num_shards: Number of shards in the TPU system. Must be provided unless it
can be deduced from `sharded_features`.
Raises:
ValueError: If both `sharded_features` and `num_shards` are `None`.
"""
# Holds the features and labels for all shards.
self._feature_list = []
self._label_list = []
# Holds the structure of inputs
self._feature_names = []
self._label_names = []
self._has_labels = False
# Internal state.
self._initialized = False
self._frozen = False
if sharded_features is None:
if num_shards is None:
raise ValueError(
'`sharded_features` and `num_shards` cannot be both None')
self._num_shards = num_shards
else:
self._from_sharded_inputs(sharded_features, sharded_labels, num_shards) | [
"def",
"__init__",
"(",
"self",
",",
"sharded_features",
"=",
"None",
",",
"sharded_labels",
"=",
"None",
",",
"num_shards",
"=",
"None",
")",
":",
"# Holds the features and labels for all shards.",
"self",
".",
"_feature_list",
"=",
"[",
"]",
"self",
".",
"_lab... | https://github.com/baidu-research/tensorflow-allreduce/blob/66d5b855e90b0949e9fa5cca5599fd729a70e874/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py#L197-L233 | ||
krishauser/Klampt | 972cc83ea5befac3f653c1ba20f80155768ad519 | Python/klampt/model/coordinates.py | python | Group.direction | (self,coordinates=[0,0,0],frame='root') | return Direction(coordinates,self.frame(frame)) | Makes a Direction object with the given local coordinates in the
given frame. Does not add it to the list of managed points. | Makes a Direction object with the given local coordinates in the
given frame. Does not add it to the list of managed points. | [
"Makes",
"a",
"Direction",
"object",
"with",
"the",
"given",
"local",
"coordinates",
"in",
"the",
"given",
"frame",
".",
"Does",
"not",
"add",
"it",
"to",
"the",
"list",
"of",
"managed",
"points",
"."
] | def direction(self,coordinates=[0,0,0],frame='root'):
"""Makes a Direction object with the given local coordinates in the
given frame. Does not add it to the list of managed points."""
return Direction(coordinates,self.frame(frame)) | [
"def",
"direction",
"(",
"self",
",",
"coordinates",
"=",
"[",
"0",
",",
"0",
",",
"0",
"]",
",",
"frame",
"=",
"'root'",
")",
":",
"return",
"Direction",
"(",
"coordinates",
",",
"self",
".",
"frame",
"(",
"frame",
")",
")"
] | https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/klampt/model/coordinates.py#L507-L510 | |
eventql/eventql | 7ca0dbb2e683b525620ea30dc40540a22d5eb227 | deps/3rdparty/spidermonkey/mozjs/media/webrtc/trunk/tools/gyp/pylib/gyp/generator/xcode.py | python | ExpandXcodeVariables | (string, expansions) | return string | Expands Xcode-style $(VARIABLES) in string per the expansions dict.
In some rare cases, it is appropriate to expand Xcode variables when a
project file is generated. For any substring $(VAR) in string, if VAR is a
key in the expansions dict, $(VAR) will be replaced with expansions[VAR].
Any $(VAR) substring in string for which VAR is not a key in the expansions
dict will remain in the returned string. | Expands Xcode-style $(VARIABLES) in string per the expansions dict. | [ "Expands", "Xcode", "-", "style", "$", "(", "VARIABLES", ")", "in", "string", "per", "the", "expansions", "dict", "." ] | def ExpandXcodeVariables(string, expansions):
"""Expands Xcode-style $(VARIABLES) in string per the expansions dict.
In some rare cases, it is appropriate to expand Xcode variables when a
project file is generated. For any substring $(VAR) in string, if VAR is a
key in the expansions dict, $(VAR) will be replaced with expansions[VAR].
Any $(VAR) substring in string for which VAR is not a key in the expansions
dict will remain in the returned string.
"""
matches = _xcode_variable_re.findall(string)
if matches == None:
return string
matches.reverse()
for match in matches:
(to_replace, variable) = match
if not variable in expansions:
continue
replacement = expansions[variable]
string = re.sub(re.escape(to_replace), replacement, string)
return string | [ "def", "ExpandXcodeVariables", "(", "string", ",", "expansions", ")", ":", "matches", "=", "_xcode_variable_re", ".", "findall", "(", "string", ")", "if", "matches", "==", "None", ":", "return", "string", "matches", ".", "reverse", "(", ")", "for", "match", ... | https://github.com/eventql/eventql/blob/7ca0dbb2e683b525620ea30dc40540a22d5eb227/deps/3rdparty/spidermonkey/mozjs/media/webrtc/trunk/tools/gyp/pylib/gyp/generator/xcode.py#L556-L579 | |
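The `ExpandXcodeVariables` source in this row references a module-level `_xcode_variable_re` that is not shown. A self-contained, runnable sketch follows — the regex is an assumption matching gyp's `$(VAR)` syntax, and the record's `matches == None` comparison is tightened to the idiomatic `is None`:

```python
import re

# Assumed pattern for $(VARIABLE) references; the actual regex is not in the row above.
_xcode_variable_re = re.compile(r'(\$\((.*?)\))')

def ExpandXcodeVariables(string, expansions):
    """Expands Xcode-style $(VARIABLES) in string per the expansions dict.

    Any $(VAR) whose VAR is not a key in expansions remains in the result.
    """
    matches = _xcode_variable_re.findall(string)
    if matches is None:  # the record's source writes `== None`
        return string
    matches.reverse()
    for to_replace, variable in matches:
        if variable not in expansions:
            continue
        # re.escape keeps the `$(` and `)` literal in the substitution pattern.
        string = re.sub(re.escape(to_replace), expansions[variable], string)
    return string
```

Note that `re.findall` returns an empty list rather than `None`, so the early-return guard in the original is effectively dead code; it is kept here only for fidelity to the record.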
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/msw/richtext.py | python | RichTextBox.Copy | (*args, **kwargs) | return _richtext.RichTextBox_Copy(*args, **kwargs) | Copy(self, RichTextBox obj) | Copy(self, RichTextBox obj) | [ "Copy", "(", "self", "RichTextBox", "obj", ")" ] | def Copy(*args, **kwargs):
"""Copy(self, RichTextBox obj)"""
return _richtext.RichTextBox_Copy(*args, **kwargs) | [ "def", "Copy", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_richtext", ".", "RichTextBox_Copy", "(", "*", "args", ",", "*", "*", "kwargs", ")" ] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/richtext.py#L1878-L1880 | |
zerollzeng/tiny-tensorrt | e7bdb8f82934342a0f22ce68dfefdb8e15eb72b2 | third_party/pybind11/tools/clang/cindex.py | python | CompileCommand.arguments | (self) | Get an iterable object providing each argument in the
command line for the compiler invocation as a _CXString.
Invariant : the first argument is the compiler executable | Get an iterable object providing each argument in the
command line for the compiler invocation as a _CXString. | [ "Get", "an", "iterable", "object", "providing", "each", "argument", "in", "the", "command", "line", "for", "the", "compiler", "invocation", "as", "a", "_CXString", "." ] | def arguments(self):
"""
Get an iterable object providing each argument in the
command line for the compiler invocation as a _CXString.
Invariant : the first argument is the compiler executable
"""
length = conf.lib.clang_CompileCommand_getNumArgs(self.cmd)
for i in range(length):
yield conf.lib.clang_CompileCommand_getArg(self.cmd, i) | [ "def", "arguments", "(", "self", ")", ":", "length", "=", "conf", ".", "lib", ".", "clang_CompileCommand_getNumArgs", "(", "self", ".", "cmd", ")", "for", "i", "in", "range", "(", "length", ")", ":", "yield", "conf", ".", "lib", ".", "clang_CompileComman... | https://github.com/zerollzeng/tiny-tensorrt/blob/e7bdb8f82934342a0f22ce68dfefdb8e15eb72b2/third_party/pybind11/tools/clang/cindex.py#L2894-L2903 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/targets/numbers.py | python | number_item_impl | (context, builder, sig, args) | return args[0] | The no-op .item() method on booleans and numbers. | The no-op .item() method on booleans and numbers. | [ "The", "no", "-", "op", ".", "item", "()", "method", "on", "booleans", "and", "numbers", "." ] | def number_item_impl(context, builder, sig, args):
"""
The no-op .item() method on booleans and numbers.
"""
return args[0] | [ "def", "number_item_impl", "(", "context", ",", "builder", ",", "sig", ",", "args", ")", ":", "return", "args", "[", "0", "]" ] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/targets/numbers.py#L1187-L1191 | |
thalium/icebox | 99d147d5b9269222225443ce171b4fd46d8985d4 | third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py | python | xmlDoc.newDocComment | (self, content) | return __tmp | Creation of a new node containing a comment within a
document. | Creation of a new node containing a comment within a
document. | [ "Creation", "of", "a", "new", "node", "containing", "a", "comment", "within", "a", "document", "." ] | def newDocComment(self, content):
"""Creation of a new node containing a comment within a
document. """
ret = libxml2mod.xmlNewDocComment(self._o, content)
if ret is None:raise treeError('xmlNewDocComment() failed')
__tmp = xmlNode(_obj=ret)
return __tmp | [ "def", "newDocComment", "(", "self", ",", "content", ")", ":", "ret", "=", "libxml2mod", ".", "xmlNewDocComment", "(", "self", ".", "_o", ",", "content", ")", "if", "ret", "is", "None", ":", "raise", "treeError", "(", "'xmlNewDocComment() failed'", ")", "_... | https://github.com/thalium/icebox/blob/99d147d5b9269222225443ce171b4fd46d8985d4/third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py#L4317-L4323 |
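The `newDocComment` record shows the binding's common wrap-or-raise pattern: call the C-level function, raise `treeError` on a null result, otherwise wrap the raw pointer in a Python node. A self-contained sketch with a stub in place of `libxml2mod` (the real module is a C extension, so the stub's success/failure behavior is an assumption for illustration):

```python
class treeError(Exception):
    """Mirrors the tree-level error type raised by the libxml2 binding."""

class _StubLibxml2mod:
    """Stand-in for the C extension: returns an opaque node, or None on failure."""
    @staticmethod
    def xmlNewDocComment(doc, content):
        return ('comment-node', content) if content is not None else None

libxml2mod = _StubLibxml2mod()

class xmlNode:
    def __init__(self, _obj=None):
        self._o = _obj  # the wrapped opaque C-side object

class xmlDoc:
    def __init__(self, _obj=None):
        self._o = _obj

    def newDocComment(self, content):
        """Creation of a new node containing a comment within a document."""
        ret = libxml2mod.xmlNewDocComment(self._o, content)
        if ret is None:
            raise treeError('xmlNewDocComment() failed')
        return xmlNode(_obj=ret)
```

Converting null returns into exceptions at the boundary is what lets the rest of the Python API assume every wrapped node is valid.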