nwo stringlengths 5 86 | sha stringlengths 40 40 | path stringlengths 4 189 | language stringclasses 1 value | identifier stringlengths 1 94 | parameters stringlengths 2 4.03k | argument_list stringclasses 1 value | return_statement stringlengths 0 11.5k | docstring stringlengths 1 33.2k | docstring_summary stringlengths 0 5.15k | docstring_tokens list | function stringlengths 34 151k | function_tokens list | url stringlengths 90 278 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/stc.py | python | StyledTextCtrl.GetCurLine | (*args, **kwargs) | return _stc.StyledTextCtrl_GetCurLine(*args, **kwargs) | GetCurLine(self) -> (text, pos)
Retrieve the text of the line containing the caret, and also the index
of the caret on the line. | GetCurLine(self) -> (text, pos) | [
"GetCurLine",
"(",
"self",
")",
"-",
">",
"(",
"text",
"pos",
")"
] | def GetCurLine(*args, **kwargs):
"""
GetCurLine(self) -> (text, pos)
Retrieve the text of the line containing the caret, and also the index
of the caret on the line.
"""
return _stc.StyledTextCtrl_GetCurLine(*args, **kwargs) | [
"def",
"GetCurLine",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_stc",
".",
"StyledTextCtrl_GetCurLine",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/stc.py#L2228-L2235 | |
dmlc/nnvm | dab5ce8ab6adbf4edd8bd2fa89f1a99f343b6e38 | python/nnvm/frontend/common.py | python | AttrConverter._parse_bool | (self, value) | return bool(value) | Helper function to parse default boolean values. | Helper function to parse default boolean values. | [
"Helper",
"function",
"to",
"parse",
"default",
"boolean",
"values",
"."
] | def _parse_bool(self, value):
"""Helper function to parse default boolean values."""
if isinstance(value, string_types):
return value.strip().lower() in ['true', '1', 't', 'y', 'yes']
return bool(value) | [
"def",
"_parse_bool",
"(",
"self",
",",
"value",
")",
":",
"if",
"isinstance",
"(",
"value",
",",
"string_types",
")",
":",
"return",
"value",
".",
"strip",
"(",
")",
".",
"lower",
"(",
")",
"in",
"[",
"'true'",
",",
"'1'",
",",
"'t'",
",",
"'y'",
",",
"'yes'",
"]",
"return",
"bool",
"(",
"value",
")"
] | https://github.com/dmlc/nnvm/blob/dab5ce8ab6adbf4edd8bd2fa89f1a99f343b6e38/python/nnvm/frontend/common.py#L126-L130 | |
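The `_parse_bool` helper in the row above is a pure function and easy to exercise standalone. A minimal sketch (the original's `string_types` check, a `six`-style alias, is replaced here with a plain `str` test):

```python
def parse_bool(value):
    """Parse a default boolean value, mirroring nnvm's AttrConverter._parse_bool."""
    if isinstance(value, str):
        # Accept the usual truthy spellings, case- and whitespace-insensitive.
        return value.strip().lower() in ['true', '1', 't', 'y', 'yes']
    # Non-string values fall back to Python truthiness.
    return bool(value)

print(parse_bool(" True "))  # True -- stripped and lowercased before matching
print(parse_bool("no"))      # False -- not in the truthy set
print(parse_bool([1]))       # True -- non-strings use bool()
```

Note that any string outside the truthy set parses as `False`, including `"false"` and arbitrary garbage, so the function never raises on bad input.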
hpi-xnor/BMXNet | ed0b201da6667887222b8e4b5f997c4f6b61943d | python/mxnet/kvstore.py | python | KVStore.set_gradient_compression | (self, compression_params) | Specifies type of low-bit quantization for gradient compression \
and additional arguments depending on the type of compression being used.
2bit Gradient Compression takes a positive float `threshold`.
The technique works by thresholding values such that positive values in the
gradient above threshold will be set to threshold. Negative values whose absolute
values are higher than threshold, will be set to the negative of threshold.
Values whose absolute values are less than threshold will be set to 0.
By doing so, each value in the gradient is in one of three states. 2bits are
used to represent these states, and every 16 float values in the original
gradient can be represented using one float. This compressed representation
can reduce communication costs. The difference between these thresholded values and
original values is stored at the sender's end as residual and added to the
gradient in the next iteration.
When kvstore is 'local', gradient compression is used to reduce communication
between multiple devices (gpus). Gradient is quantized on each GPU which
computed the gradients, then sent to the GPU which merges the gradients. This
receiving GPU dequantizes the gradients and merges them. Note that this
increases memory usage on each GPU because of the residual array stored.
When kvstore is 'dist', gradient compression is used to reduce communication
from worker to server. Gradient is quantized on each worker which
computed the gradients, then sent to the server which dequantizes
this data and merges the gradients from each worker. Note that this
increases CPU memory usage on each worker because of the residual array stored.
Only worker to server communication is compressed in this setting.
If each machine has multiple GPUs, currently this GPU to GPU or GPU to CPU communication
is not compressed. Server to worker communication (in the case of pull)
is also not compressed.
To use 2bit compression, we need to specify `type` as `2bit`.
Only specifying `type` would use default value for the threshold.
To completely specify the arguments for 2bit compression, we would need to pass
a dictionary which includes `threshold` like:
{'type': '2bit', 'threshold': 0.5}
Parameters
----------
compression_params : dict
A dictionary specifying the type and parameters for gradient compression.
The key `type` in this dictionary is a
required string argument and specifies the type of gradient compression.
Currently `type` can be only `2bit`
Other keys in this dictionary are optional and specific to the type
of gradient compression. | Specifies type of low-bit quantization for gradient compression \
and additional arguments depending on the type of compression being used. | [
"Specifies",
"type",
"of",
"low",
"-",
"bit",
"quantization",
"for",
"gradient",
"compression",
"\\",
"and",
"additional",
"arguments",
"depending",
"on",
"the",
"type",
"of",
"compression",
"being",
"used",
"."
] | def set_gradient_compression(self, compression_params):
""" Specifies type of low-bit quantization for gradient compression \
and additional arguments depending on the type of compression being used.
2bit Gradient Compression takes a positive float `threshold`.
The technique works by thresholding values such that positive values in the
gradient above threshold will be set to threshold. Negative values whose absolute
values are higher than threshold, will be set to the negative of threshold.
Values whose absolute values are less than threshold will be set to 0.
By doing so, each value in the gradient is in one of three states. 2bits are
used to represent these states, and every 16 float values in the original
gradient can be represented using one float. This compressed representation
can reduce communication costs. The difference between these thresholded values and
original values is stored at the sender's end as residual and added to the
gradient in the next iteration.
When kvstore is 'local', gradient compression is used to reduce communication
between multiple devices (gpus). Gradient is quantized on each GPU which
computed the gradients, then sent to the GPU which merges the gradients. This
receiving GPU dequantizes the gradients and merges them. Note that this
increases memory usage on each GPU because of the residual array stored.
When kvstore is 'dist', gradient compression is used to reduce communication
from worker to server. Gradient is quantized on each worker which
computed the gradients, then sent to the server which dequantizes
this data and merges the gradients from each worker. Note that this
increases CPU memory usage on each worker because of the residual array stored.
Only worker to server communication is compressed in this setting.
If each machine has multiple GPUs, currently this GPU to GPU or GPU to CPU communication
is not compressed. Server to worker communication (in the case of pull)
is also not compressed.
To use 2bit compression, we need to specify `type` as `2bit`.
Only specifying `type` would use default value for the threshold.
To completely specify the arguments for 2bit compression, we would need to pass
a dictionary which includes `threshold` like:
{'type': '2bit', 'threshold': 0.5}
Parameters
----------
compression_params : dict
A dictionary specifying the type and parameters for gradient compression.
The key `type` in this dictionary is a
required string argument and specifies the type of gradient compression.
Currently `type` can be only `2bit`
Other keys in this dictionary are optional and specific to the type
of gradient compression.
"""
if ('device' in self.type) or ('dist' in self.type):
ckeys, cvals = _ctype_dict(compression_params)
check_call(_LIB.MXKVStoreSetGradientCompression(self.handle,
mx_uint(len(compression_params)),
ckeys, cvals))
else:
raise Exception('Gradient compression is not supported for this type of kvstore') | [
"def",
"set_gradient_compression",
"(",
"self",
",",
"compression_params",
")",
":",
"if",
"(",
"'device'",
"in",
"self",
".",
"type",
")",
"or",
"(",
"'dist'",
"in",
"self",
".",
"type",
")",
":",
"ckeys",
",",
"cvals",
"=",
"_ctype_dict",
"(",
"compression_params",
")",
"check_call",
"(",
"_LIB",
".",
"MXKVStoreSetGradientCompression",
"(",
"self",
".",
"handle",
",",
"mx_uint",
"(",
"len",
"(",
"compression_params",
")",
")",
",",
"ckeys",
",",
"cvals",
")",
")",
"else",
":",
"raise",
"Exception",
"(",
"'Gradient compression is not supported for this type of kvstore'",
")"
] | https://github.com/hpi-xnor/BMXNet/blob/ed0b201da6667887222b8e4b5f997c4f6b61943d/python/mxnet/kvstore.py#L363-L417 | ||
kristjankorjus/Replicating-DeepMind | 68539394e792b34a4d6b430a2eb73b8b8f91d8db | src/ale/preprocessor.py | python | Preprocessor.__init__ | (self) | Initialise preprocessor | Initialise preprocessor | [
"Initialise",
"preprocessor"
] | def __init__(self):
"""
Initialise preprocessor
"""
self.grayscale_array = self.get_grayscale_array() | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"grayscale_array",
"=",
"self",
".",
"get_grayscale_array",
"(",
")"
] | https://github.com/kristjankorjus/Replicating-DeepMind/blob/68539394e792b34a4d6b430a2eb73b8b8f91d8db/src/ale/preprocessor.py#L15-L19 | ||
gwaldron/osgearth | 4c521857d59a69743e4a9cedba00afe570f984e8 | src/third_party/tinygltf/deps/cpplint.py | python | _ShouldPrintError | (category, confidence, linenum) | return True | If confidence >= verbose, category passes filter and is not suppressed. | If confidence >= verbose, category passes filter and is not suppressed. | [
"If",
"confidence",
">",
"=",
"verbose",
"category",
"passes",
"filter",
"and",
"is",
"not",
"suppressed",
"."
] | def _ShouldPrintError(category, confidence, linenum):
"""If confidence >= verbose, category passes filter and is not suppressed."""
# There are three ways we might decide not to print an error message:
# a "NOLINT(category)" comment appears in the source,
# the verbosity level isn't high enough, or the filters filter it out.
if IsErrorSuppressedByNolint(category, linenum):
return False
if confidence < _cpplint_state.verbose_level:
return False
is_filtered = False
for one_filter in _Filters():
if one_filter.startswith('-'):
if category.startswith(one_filter[1:]):
is_filtered = True
elif one_filter.startswith('+'):
if category.startswith(one_filter[1:]):
is_filtered = False
else:
assert False # should have been checked for in SetFilter.
if is_filtered:
return False
return True | [
"def",
"_ShouldPrintError",
"(",
"category",
",",
"confidence",
",",
"linenum",
")",
":",
"# There are three ways we might decide not to print an error message:",
"# a \"NOLINT(category)\" comment appears in the source,",
"# the verbosity level isn't high enough, or the filters filter it out.",
"if",
"IsErrorSuppressedByNolint",
"(",
"category",
",",
"linenum",
")",
":",
"return",
"False",
"if",
"confidence",
"<",
"_cpplint_state",
".",
"verbose_level",
":",
"return",
"False",
"is_filtered",
"=",
"False",
"for",
"one_filter",
"in",
"_Filters",
"(",
")",
":",
"if",
"one_filter",
".",
"startswith",
"(",
"'-'",
")",
":",
"if",
"category",
".",
"startswith",
"(",
"one_filter",
"[",
"1",
":",
"]",
")",
":",
"is_filtered",
"=",
"True",
"elif",
"one_filter",
".",
"startswith",
"(",
"'+'",
")",
":",
"if",
"category",
".",
"startswith",
"(",
"one_filter",
"[",
"1",
":",
"]",
")",
":",
"is_filtered",
"=",
"False",
"else",
":",
"assert",
"False",
"# should have been checked for in SetFilter.",
"if",
"is_filtered",
":",
"return",
"False",
"return",
"True"
] | https://github.com/gwaldron/osgearth/blob/4c521857d59a69743e4a9cedba00afe570f984e8/src/third_party/tinygltf/deps/cpplint.py#L1064-L1089 | |
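The `+`/`-` filter loop inside `_ShouldPrintError` above is easier to see in isolation. A hedged standalone sketch of just that last-match-wins filtering (the NOLINT and verbosity checks are dropped):

```python
def is_filtered(category, filters):
    """Return True if `category` is suppressed by the filter list.

    Filters apply in order, so a later '+' can re-enable a category
    that an earlier '-' suppressed: the last matching filter wins.
    """
    filtered = False
    for f in filters:
        if f.startswith('-') and category.startswith(f[1:]):
            filtered = True
        elif f.startswith('+') and category.startswith(f[1:]):
            filtered = False
    return filtered

print(is_filtered('whitespace/indent', ['-whitespace']))                         # True
print(is_filtered('whitespace/indent', ['-whitespace', '+whitespace/indent']))   # False
```

Because matching uses `startswith`, a filter like `-whitespace` suppresses the entire `whitespace/*` family, which is why a more specific `+whitespace/indent` afterwards is a useful idiom.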
snap-stanford/snap-python | d53c51b0a26aa7e3e7400b014cdf728948fde80a | setup/snap.py | python | TForestFire_GenGraph | (*args) | return _snap.TForestFire_GenGraph(*args) | TForestFire_GenGraph(int const & Nodes, double const & FwdProb, double const & BckProb) -> PNGraph
Parameters:
Nodes: int const &
FwdProb: double const &
BckProb: double const & | TForestFire_GenGraph(int const & Nodes, double const & FwdProb, double const & BckProb) -> PNGraph | [
"TForestFire_GenGraph",
"(",
"int",
"const",
"&",
"Nodes",
"double",
"const",
"&",
"FwdProb",
"double",
"const",
"&",
"BckProb",
")",
"-",
">",
"PNGraph"
] | def TForestFire_GenGraph(*args):
"""
TForestFire_GenGraph(int const & Nodes, double const & FwdProb, double const & BckProb) -> PNGraph
Parameters:
Nodes: int const &
FwdProb: double const &
BckProb: double const &
"""
return _snap.TForestFire_GenGraph(*args) | [
"def",
"TForestFire_GenGraph",
"(",
"*",
"args",
")",
":",
"return",
"_snap",
".",
"TForestFire_GenGraph",
"(",
"*",
"args",
")"
] | https://github.com/snap-stanford/snap-python/blob/d53c51b0a26aa7e3e7400b014cdf728948fde80a/setup/snap.py#L1384-L1394 | |
gklz1982/caffe-yolov2 | ebb27029db4ddc0d40e520634633b0fa9cdcc10d | scripts/cpp_lint.py | python | _SetCountingStyle | (level) | Sets the module's counting options. | Sets the module's counting options. | [
"Sets",
"the",
"module",
"s",
"counting",
"options",
"."
] | def _SetCountingStyle(level):
"""Sets the module's counting options."""
_cpplint_state.SetCountingStyle(level) | [
"def",
"_SetCountingStyle",
"(",
"level",
")",
":",
"_cpplint_state",
".",
"SetCountingStyle",
"(",
"level",
")"
] | https://github.com/gklz1982/caffe-yolov2/blob/ebb27029db4ddc0d40e520634633b0fa9cdcc10d/scripts/cpp_lint.py#L787-L789 | ||
kmatheussen/radium | 90f113f2df1d63b987a0106be3bb09ab2ab8c233 | bin/import_midi.py | python | Events.split_by_channel | (self) | return [self] + other_channels | Returns a list of channel-specific Event instances, replacing 'self'.
If the instance already only contained events for one channel, it will return a list only containing itself.
Only the first of the returned Event instances will contain 'other_events'. | Returns a list of channel-specific Event instances, replacing 'self'.
If the instance already only contained events for one channel, it will return a list only containing itself. | [
"Returns",
"a",
"list",
"of",
"channel",
"-",
"specific",
"Event",
"instances",
"replacing",
"self",
".",
"If",
"the",
"instance",
"already",
"only",
"contained",
"events",
"for",
"one",
"channel",
"it",
"will",
"return",
"a",
"list",
"only",
"containing",
"itself",
"."
] | def split_by_channel(self):
"""
Returns a list of channel-specific Event instances, replacing 'self'.
If the instance already only contained events for one channel, it will return a list only containing itself.
Only the first of the returned Event instances will contain 'other_events'.
"""
other_channels = []
new = self.split_channels_helper()
while new:
other_channels.append(new)
new = self.split_channels_helper()
other_channels.reverse()
return [self] + other_channels | [
"def",
"split_by_channel",
"(",
"self",
")",
":",
"other_channels",
"=",
"[",
"]",
"new",
"=",
"self",
".",
"split_channels_helper",
"(",
")",
"while",
"new",
":",
"other_channels",
".",
"append",
"(",
"new",
")",
"new",
"=",
"self",
".",
"split_channels_helper",
"(",
")",
"other_channels",
".",
"reverse",
"(",
")",
"return",
"[",
"self",
"]",
"+",
"other_channels"
] | https://github.com/kmatheussen/radium/blob/90f113f2df1d63b987a0106be3bb09ab2ab8c233/bin/import_midi.py#L397-L413 | |
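The `split_by_channel` row above depends on the surrounding `Events` class, but the pattern its docstring describes — separating a mixed event stream into per-channel groups — can be sketched standalone. This uses hypothetical `(channel, data)` tuples, not the radium API:

```python
def split_by_channel(events):
    """Group (channel, data) event tuples into per-channel lists.

    Channels appear in order of first occurrence, and events keep
    their relative order within each channel.
    """
    by_channel = {}
    for channel, data in events:
        # dicts preserve insertion order (Python 3.7+), which gives us
        # channel groups ordered by first appearance for free.
        by_channel.setdefault(channel, []).append((channel, data))
    return list(by_channel.values())

mixed = [(0, 'a'), (1, 'b'), (0, 'c'), (2, 'd')]
print(split_by_channel(mixed))
# [[(0, 'a'), (0, 'c')], [(1, 'b')], [(2, 'd')]]
```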
tensorflow/io | 92b44e180674a8af0e12e405530f7343e3e693e4 | tensorflow_io/python/experimental/io_dataset_ops.py | python | IODataset.from_libsvm | (
cls,
filename,
num_features,
dtype=None,
label_dtype=None,
compression_type="",
**kwargs
) | Creates an `IODataset` from a libsvm file.
Args:
filename: A `tf.string` tensor containing one or more filenames.
num_features: The number of features.
dtype(Optional): The type of the output feature tensor.
Default to tf.float32.
label_dtype(Optional): The type of the output label tensor.
Default to tf.int64.
compression_type: (Optional.) A `tf.string` scalar evaluating to one of
`""` (no compression), `"ZLIB"`, or `"GZIP"`.
name: A name prefix for the IOTensor (optional).
Returns:
A `IODataset`. | Creates an `IODataset` from a libsvm file. | [
"Creates",
"an",
"IODataset",
"from",
"a",
"libsvm",
"file",
"."
] | def from_libsvm(
cls,
filename,
num_features,
dtype=None,
label_dtype=None,
compression_type="",
**kwargs
):
"""Creates an `IODataset` from a libsvm file.
Args:
filename: A `tf.string` tensor containing one or more filenames.
num_features: The number of features.
dtype(Optional): The type of the output feature tensor.
Default to tf.float32.
label_dtype(Optional): The type of the output label tensor.
Default to tf.int64.
compression_type: (Optional.) A `tf.string` scalar evaluating to one of
`""` (no compression), `"ZLIB"`, or `"GZIP"`.
name: A name prefix for the IOTensor (optional).
Returns:
A `IODataset`.
"""
with tf.name_scope(kwargs.get("name", "IOFromLibSVM")):
return libsvm_dataset_ops.LibSVMIODataset(
filename,
num_features,
dtype=dtype,
label_dtype=label_dtype,
compression_type=compression_type,
internal=True,
**kwargs
) | [
"def",
"from_libsvm",
"(",
"cls",
",",
"filename",
",",
"num_features",
",",
"dtype",
"=",
"None",
",",
"label_dtype",
"=",
"None",
",",
"compression_type",
"=",
"\"\"",
",",
"*",
"*",
"kwargs",
")",
":",
"with",
"tf",
".",
"name_scope",
"(",
"kwargs",
".",
"get",
"(",
"\"name\"",
",",
"\"IOFromLibSVM\"",
")",
")",
":",
"return",
"libsvm_dataset_ops",
".",
"LibSVMIODataset",
"(",
"filename",
",",
"num_features",
",",
"dtype",
"=",
"dtype",
",",
"label_dtype",
"=",
"label_dtype",
",",
"compression_type",
"=",
"compression_type",
",",
"internal",
"=",
"True",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/tensorflow/io/blob/92b44e180674a8af0e12e405530f7343e3e693e4/tensorflow_io/python/experimental/io_dataset_ops.py#L52-L86 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numba/caching.py | python | IndexDataCacheFile._load_index | (self) | Load the cache index and return it as a dictionary (possibly
empty if cache is empty or obsolete). | Load the cache index and return it as a dictionary (possibly
empty if cache is empty or obsolete). | [
"Load",
"the",
"cache",
"index",
"and",
"return",
"it",
"as",
"a",
"dictionary",
"(",
"possibly",
"empty",
"if",
"cache",
"is",
"empty",
"or",
"obsolete",
")",
"."
] | def _load_index(self):
"""
Load the cache index and return it as a dictionary (possibly
empty if cache is empty or obsolete).
"""
try:
with open(self._index_path, "rb") as f:
version = pickle.load(f)
data = f.read()
except EnvironmentError as e:
# Index doesn't exist yet?
if e.errno in (errno.ENOENT,):
return {}
raise
if version != self._version:
# This is another version. Avoid trying to unpickling the
# rest of the stream, as that may fail.
return {}
stamp, overloads = pickle.loads(data)
_cache_log("[cache] index loaded from %r", self._index_path)
if stamp != self._source_stamp:
# Cache is not fresh. Stale data files will be eventually
# overwritten, since they are numbered in incrementing order.
return {}
else:
return overloads | [
"def",
"_load_index",
"(",
"self",
")",
":",
"try",
":",
"with",
"open",
"(",
"self",
".",
"_index_path",
",",
"\"rb\"",
")",
"as",
"f",
":",
"version",
"=",
"pickle",
".",
"load",
"(",
"f",
")",
"data",
"=",
"f",
".",
"read",
"(",
")",
"except",
"EnvironmentError",
"as",
"e",
":",
"# Index doesn't exist yet?",
"if",
"e",
".",
"errno",
"in",
"(",
"errno",
".",
"ENOENT",
",",
")",
":",
"return",
"{",
"}",
"raise",
"if",
"version",
"!=",
"self",
".",
"_version",
":",
"# This is another version. Avoid trying to unpickling the",
"# rest of the stream, as that may fail.",
"return",
"{",
"}",
"stamp",
",",
"overloads",
"=",
"pickle",
".",
"loads",
"(",
"data",
")",
"_cache_log",
"(",
"\"[cache] index loaded from %r\"",
",",
"self",
".",
"_index_path",
")",
"if",
"stamp",
"!=",
"self",
".",
"_source_stamp",
":",
"# Cache is not fresh. Stale data files will be eventually",
"# overwritten, since they are numbered in incrementing order.",
"return",
"{",
"}",
"else",
":",
"return",
"overloads"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numba/caching.py#L508-L533 | ||
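The versioned-pickle pattern in `_load_index` above — a version header pickled first, then the payload, with any mismatch silently treated as an empty cache — can be exercised end-to-end in memory. A minimal sketch (not numba's actual on-disk format):

```python
import io
import pickle

def save_index(buf, version, stamp, overloads):
    # Version header goes first, so an old reader can bail out before
    # trying to unpickle a payload it may not understand.
    pickle.dump(version, buf)
    buf.write(pickle.dumps((stamp, overloads)))

def load_index(buf, expected_version, expected_stamp):
    # Pickle frames are self-delimiting, so this reads only the header.
    version = pickle.load(buf)
    data = buf.read()
    if version != expected_version:
        return {}  # another version: don't even attempt to unpickle
    stamp, overloads = pickle.loads(data)
    if stamp != expected_stamp:
        return {}  # source changed since the cache was written: stale
    return overloads

buf = io.BytesIO()
save_index(buf, version=2, stamp='mtime:123', overloads={'sig': 'file.1'})
buf.seek(0)
print(load_index(buf, expected_version=2, expected_stamp='mtime:123'))
# {'sig': 'file.1'}
```

Returning `{}` on every mismatch, rather than raising, is the design choice that makes a stale or foreign cache indistinguishable from an empty one — the caller just regenerates.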
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/mailbox.py | python | _singlefileMailbox.remove | (self, key) | Remove the keyed message; raise KeyError if it doesn't exist. | Remove the keyed message; raise KeyError if it doesn't exist. | [
"Remove",
"the",
"keyed",
"message",
";",
"raise",
"KeyError",
"if",
"it",
"doesn",
"t",
"exist",
"."
] | def remove(self, key):
"""Remove the keyed message; raise KeyError if it doesn't exist."""
self._lookup(key)
del self._toc[key]
self._pending = True | [
"def",
"remove",
"(",
"self",
",",
"key",
")",
":",
"self",
".",
"_lookup",
"(",
"key",
")",
"del",
"self",
".",
"_toc",
"[",
"key",
"]",
"self",
".",
"_pending",
"=",
"True"
] | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/mailbox.py#L594-L598 | ||
mantidproject/mantid | 03deeb89254ec4289edb8771e0188c2090a02f32 | Framework/PythonInterface/plugins/algorithms/GenerateGroupingSNSInelastic.py | python | GenerateGroupingSNSInelastic.category | (self) | return "Inelastic\\Utility;Transforms\\Grouping" | Mantid required | Mantid required | [
"Mantid",
"required"
] | def category(self):
""" Mantid required
"""
return "Inelastic\\Utility;Transforms\\Grouping" | [
"def",
"category",
"(",
"self",
")",
":",
"return",
"\"Inelastic\\\\Utility;Transforms\\\\Grouping\""
] | https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/Framework/PythonInterface/plugins/algorithms/GenerateGroupingSNSInelastic.py#L19-L22 | |
Xilinx/Vitis-AI | fc74d404563d9951b57245443c73bef389f3657f | tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/feature_column/feature_column_v2.py | python | _standardize_and_copy_config | (config) | return kwargs | Returns a shallow copy of config with lists turned to tuples.
Keras serialization uses nest to listify everything.
This causes problems with the NumericColumn shape, which becomes
unhashable. We could try to solve this on the Keras side, but that
would require lots of tracking to avoid changing existing behavior.
Instead, we ensure here that we revive correctly.
Args:
config: dict that will be used to revive a Feature Column
Returns:
Shallow copy of config with lists turned to tuples. | Returns a shallow copy of config with lists turned to tuples. | [
"Returns",
"a",
"shallow",
"copy",
"of",
"config",
"with",
"lists",
"turned",
"to",
"tuples",
"."
] | def _standardize_and_copy_config(config):
"""Returns a shallow copy of config with lists turned to tuples.
Keras serialization uses nest to listify everything.
This causes problems with the NumericColumn shape, which becomes
unhashable. We could try to solve this on the Keras side, but that
would require lots of tracking to avoid changing existing behavior.
Instead, we ensure here that we revive correctly.
Args:
config: dict that will be used to revive a Feature Column
Returns:
Shallow copy of config with lists turned to tuples.
"""
kwargs = config.copy()
for k, v in kwargs.items():
if isinstance(v, list):
kwargs[k] = tuple(v)
return kwargs | [
"def",
"_standardize_and_copy_config",
"(",
"config",
")",
":",
"kwargs",
"=",
"config",
".",
"copy",
"(",
")",
"for",
"k",
",",
"v",
"in",
"kwargs",
".",
"items",
"(",
")",
":",
"if",
"isinstance",
"(",
"v",
",",
"list",
")",
":",
"kwargs",
"[",
"k",
"]",
"=",
"tuple",
"(",
"v",
")",
"return",
"kwargs"
] | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/feature_column/feature_column_v2.py#L4604-L4624 | |
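`_standardize_and_copy_config` above is a pure function and easy to check directly: it shallow-copies a config dict and turns list values into (hashable) tuples, which is exactly what the unhashable-`shape` problem in its docstring needs:

```python
def standardize_and_copy_config(config):
    """Shallow-copy `config`, converting list values to tuples so they are hashable."""
    kwargs = config.copy()
    for k, v in kwargs.items():
        # Reassigning existing keys while iterating is safe; the dict
        # never changes size, only the values are swapped.
        if isinstance(v, list):
            kwargs[k] = tuple(v)
    return kwargs

config = {'shape': [1, 28, 28], 'dtype': 'float32'}
fixed = standardize_and_copy_config(config)
print(fixed['shape'])   # (1, 28, 28) -- now hashable
print(config['shape'])  # [1, 28, 28] -- the original is untouched
```

Note the copy is shallow: nested lists inside a value other than a top-level list (e.g. a list inside a dict value) would survive unconverted.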
baidu/tera | dbcd28af792d879d961bf9fc7eb60de81b437646 | src/sdk/python/TeraSdk.py | python | ScanDescriptor.SetEnd | (self, end_key) | | If this function is not called, end_key is treated as "infinity" (the scan is unbounded)
Args:
end_key(string): the end position of the scan operation; scan results do not include end_key | If this function is not called, end_key is treated as "infinity" (the scan is unbounded) | [
"If this function is not called, end_key is treated as \"infinity\" (the scan is unbounded)"
] | def SetEnd(self, end_key):
"""
If this function is not called, end_key is treated as "infinity" (the scan is unbounded)
Args:
end_key(string): the end position of the scan operation; scan results do not include end_key
"""
lib.tera_scan_descriptor_set_end(self.desc, end_key,
c_uint64(len(end_key))) | [
"def",
"SetEnd",
"(",
"self",
",",
"end_key",
")",
":",
"lib",
".",
"tera_scan_descriptor_set_end",
"(",
"self",
".",
"desc",
",",
"end_key",
",",
"c_uint64",
"(",
"len",
"(",
"end_key",
")",
")",
")"
] | https://github.com/baidu/tera/blob/dbcd28af792d879d961bf9fc7eb60de81b437646/src/sdk/python/TeraSdk.py#L77-L85 | ||
ZhouWeikuan/DouDiZhu | 0d84ff6c0bc54dba6ae37955de9ae9307513dc99 | code/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py | python | Type.element_type | (self) | return result | Retrieve the Type of elements within this Type.
If accessed on a type that is not an array, complex, or vector type, an
exception will be raised. | Retrieve the Type of elements within this Type. | [
"Retrieve",
"the",
"Type",
"of",
"elements",
"within",
"this",
"Type",
"."
] | def element_type(self):
"""Retrieve the Type of elements within this Type.
If accessed on a type that is not an array, complex, or vector type, an
exception will be raised.
"""
result = conf.lib.clang_getElementType(self)
if result.kind == TypeKind.INVALID:
raise Exception('Element type not available on this type.')
return result | [
"def",
"element_type",
"(",
"self",
")",
":",
"result",
"=",
"conf",
".",
"lib",
".",
"clang_getElementType",
"(",
"self",
")",
"if",
"result",
".",
"kind",
"==",
"TypeKind",
".",
"INVALID",
":",
"raise",
"Exception",
"(",
"'Element type not available on this type.'",
")",
"return",
"result"
] | https://github.com/ZhouWeikuan/DouDiZhu/blob/0d84ff6c0bc54dba6ae37955de9ae9307513dc99/code/frameworks/cocos2d-x/tools/bindings-generator/clang/cindex.py#L1680-L1690 | |
nvdla/sw | 79538ba1b52b040a4a4645f630e457fa01839e90 | umd/external/protobuf-2.6/python/google/protobuf/service_reflection.py | python | GeneratedServiceType.__init__ | (cls, name, bases, dictionary) | Creates a message service class.
Args:
name: Name of the class (ignored, but required by the metaclass
protocol).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type. | Creates a message service class. | [
"Creates",
"a",
"message",
"service",
"class",
"."
] | def __init__(cls, name, bases, dictionary):
"""Creates a message service class.
Args:
name: Name of the class (ignored, but required by the metaclass
protocol).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type.
"""
# Don't do anything if this class doesn't have a descriptor. This happens
# when a service class is subclassed.
if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary:
return
descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY]
service_builder = _ServiceBuilder(descriptor)
service_builder.BuildService(cls) | [
"def",
"__init__",
"(",
"cls",
",",
"name",
",",
"bases",
",",
"dictionary",
")",
":",
"# Don't do anything if this class doesn't have a descriptor. This happens",
"# when a service class is subclassed.",
"if",
"GeneratedServiceType",
".",
"_DESCRIPTOR_KEY",
"not",
"in",
"dictionary",
":",
"return",
"descriptor",
"=",
"dictionary",
"[",
"GeneratedServiceType",
".",
"_DESCRIPTOR_KEY",
"]",
"service_builder",
"=",
"_ServiceBuilder",
"(",
"descriptor",
")",
"service_builder",
".",
"BuildService",
"(",
"cls",
")"
] | https://github.com/nvdla/sw/blob/79538ba1b52b040a4a4645f630e457fa01839e90/umd/external/protobuf-2.6/python/google/protobuf/service_reflection.py#L64-L81 | ||
SpaceNetChallenge/BuildingDetectors | 3def3c44b5847c744cd2f3356182892d92496579 | qinhaifang/src/evalTools/script/preprocess_space_net_data.py | python | convert_wgs84geojson_to_pixgeojson_worker | (image_name) | docstring for convert_wgs84geojson_to_pixgeojson_worker | docstring for convert_wgs84geojson_to_pixgeojson_worker | [
"docstring",
"for",
"convert_wgs84geojson_to_pixgeojson_worker"
] | def convert_wgs84geojson_to_pixgeojson_worker(image_name):
"""docstring for convert_wgs84geojson_to_pixgeojson_worker"""
try:
image_id = '_'.join(image_name.split('.')[0].split('_')[1:])
image_name = os.path.join(setting.PIC_3BAND_DIR, image_name)
wgs_geojson_file = os.path.join(setting.GEO_JSON_DIR,
'{}_Geo.geojson'.format(image_id))
pixel_geojson_file = os.path.join(setting.PIXEL_GEO_JSON_DIR,
'{}_Pixel.geojson'.format(image_id))
building_list = gT.convert_wgs84geojson_to_pixgeojson(wgs_geojson_file,
image_name, image_id, pixel_geojson_file)
return image_id, 'Done'
except Exception as e:
logging.warning('Convert wgs_2_pix Exception[{}] image_id[{}]'.format(e,
image_id))
return image_id, e | [
"def",
"convert_wgs84geojson_to_pixgeojson_worker",
"(",
"image_name",
")",
":",
"try",
":",
"image_id",
"=",
"'_'",
".",
"join",
"(",
"image_name",
".",
"split",
"(",
"'.'",
")",
"[",
"0",
"]",
".",
"split",
"(",
"'_'",
")",
"[",
"1",
":",
"]",
")",
"image_name",
"=",
"os",
".",
"path",
".",
"join",
"(",
"setting",
".",
"PIC_3BAND_DIR",
",",
"image_name",
")",
"wgs_geojson_file",
"=",
"os",
".",
"path",
".",
"join",
"(",
"setting",
".",
"GEO_JSON_DIR",
",",
"'{}_Geo.geojson'",
".",
"format",
"(",
"image_id",
")",
")",
"pixel_geojson_file",
"=",
"os",
".",
"path",
".",
"join",
"(",
"setting",
".",
"PIXEL_GEO_JSON_DIR",
",",
"'{}_Pixel.geojson'",
".",
"format",
"(",
"image_id",
")",
")",
"building_list",
"=",
"gT",
".",
"convert_wgs84geojson_to_pixgeojson",
"(",
"wgs_geojson_file",
",",
"image_name",
",",
"image_id",
",",
"pixel_geojson_file",
")",
"return",
"image_id",
",",
"'Done'",
"except",
"Exception",
"as",
"e",
":",
"logging",
".",
"warning",
"(",
"'Convert wgs_2_pix Exception[{}] image_id[{}]'",
".",
"format",
"(",
"e",
",",
"image_id",
")",
")",
"return",
"image_id",
",",
"e"
] | https://github.com/SpaceNetChallenge/BuildingDetectors/blob/3def3c44b5847c744cd2f3356182892d92496579/qinhaifang/src/evalTools/script/preprocess_space_net_data.py#L126-L141 | ||
nasa/fprime | 595cf3682d8365943d86c1a6fe7c78f0a116acf0 | Autocoders/Python/src/fprime_ac/models/Channel.py | python | Channel.__init__ | (
self,
ids,
name,
ctype,
size,
abbrev=None,
format_string=None,
update=None,
limits=(None, None, None, None),
comment=None,
xml_filename=None,
component_name=None,
component_base_name=None,
) | Constructor
@param id: Numeric ID of channel
@param name: Name of channel
@param ctype: Type of channel
@param size: Size, if string type
@param abbrev: Abbreviation (to support AMPCS)
@param comment: A single or multi-line comment describing the channel
@param xml_filename: Component XML where channel is defined
@param component_name: Typename of component | Constructor | [
"Constructor"
] | def __init__(
self,
ids,
name,
ctype,
size,
abbrev=None,
format_string=None,
update=None,
limits=(None, None, None, None),
comment=None,
xml_filename=None,
component_name=None,
component_base_name=None,
):
"""
Constructor
@param id: Numeric ID of channel
@param name: Name of channel
@param ctype: Type of channel
@param size: Size, if string type
@param abbrev: Abbreviation (to support AMPCS)
@param comment: A single or multi-line comment describing the channel
@param xml_filename: Component XML where channel is defined
@param component_name: Typename of component
"""
self.__ids = ids
self.__name = name
self.__ctype = ctype
self.__size = size
self.__abbrev = abbrev
self.__comment = comment
self.__format_string = format_string
self.__update = update
self.__limits = limits
self.__xml_filename = xml_filename
self.__component_name = component_name
self.__component_base_name = component_base_name | [
"def",
"__init__",
"(",
"self",
",",
"ids",
",",
"name",
",",
"ctype",
",",
"size",
",",
"abbrev",
"=",
"None",
",",
"format_string",
"=",
"None",
",",
"update",
"=",
"None",
",",
"limits",
"=",
"(",
"None",
",",
"None",
",",
"None",
",",
"None",
")",
",",
"comment",
"=",
"None",
",",
"xml_filename",
"=",
"None",
",",
"component_name",
"=",
"None",
",",
"component_base_name",
"=",
"None",
",",
")",
":",
"self",
".",
"__ids",
"=",
"ids",
"self",
".",
"__name",
"=",
"name",
"self",
".",
"__ctype",
"=",
"ctype",
"self",
".",
"__size",
"=",
"size",
"self",
".",
"__abbrev",
"=",
"abbrev",
"self",
".",
"__comment",
"=",
"comment",
"self",
".",
"__format_string",
"=",
"format_string",
"self",
".",
"__update",
"=",
"update",
"self",
".",
"__limits",
"=",
"limits",
"self",
".",
"__xml_filename",
"=",
"xml_filename",
"self",
".",
"__component_name",
"=",
"component_name",
"self",
".",
"__component_base_name",
"=",
"component_base_name"
] | https://github.com/nasa/fprime/blob/595cf3682d8365943d86c1a6fe7c78f0a116acf0/Autocoders/Python/src/fprime_ac/models/Channel.py#L41-L78 | ||
Jittor/jittor | e9aca0444c2bdc8e2389d99122954cd0903eec46 | python/jittor/transform/__init__.py | python | hflip | (img) | return F_pil.hflip(img) | Function for horizontally flipping the given image.
Args::
[in] img(PIL Image.Image): Input image.
Example::
img = Image.open(...)
img_ = transform.hflip(img) | Function for horizontally flipping the given image.
Args::
[in] img(PIL Image.Image): Input image.
Example::
img = Image.open(...)
img_ = transform.hflip(img) | [
"Function",
"for",
"horizontally",
"flipping",
"the",
"given",
"image",
".",
"Args",
"::",
"[",
"in",
"]",
"img",
"(",
"PIL",
"Image",
".",
"Image",
")",
":",
"Input",
"image",
".",
"Example",
"::",
"img",
"=",
"Image",
".",
"open",
"(",
"...",
")",
"img_",
"=",
"transform",
".",
"hflip",
"(",
"img",
")"
] | def hflip(img):
"""
Function for horizontally flipping the given image.
Args::
[in] img(PIL Image.Image): Input image.
Example::
img = Image.open(...)
img_ = transform.hflip(img)
"""
return F_pil.hflip(img) | [
"def",
"hflip",
"(",
"img",
")",
":",
"return",
"F_pil",
".",
"hflip",
"(",
"img",
")"
] | https://github.com/Jittor/jittor/blob/e9aca0444c2bdc8e2389d99122954cd0903eec46/python/jittor/transform/__init__.py#L223-L233 | |
bulletphysics/bullet3 | f0f2a952e146f016096db6f85cf0c44ed75b0b9a | examples/pybullet/gym/pybullet_envs/agents/tools/in_graph_env.py | python | InGraphEnv.action | (self) | return self._action | Access the variable holding the last recieved action. | Access the variable holding the last recieved action. | [
"Access",
"the",
"variable",
"holding",
"the",
"last",
"recieved",
"action",
"."
] | def action(self):
"""Access the variable holding the last recieved action."""
return self._action | [
"def",
"action",
"(",
"self",
")",
":",
"return",
"self",
".",
"_action"
] | https://github.com/bulletphysics/bullet3/blob/f0f2a952e146f016096db6f85cf0c44ed75b0b9a/examples/pybullet/gym/pybullet_envs/agents/tools/in_graph_env.py#L113-L115 | |
PaddlePaddle/Paddle | 1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c | python/paddle/static/io.py | python | save_inference_model | (path_prefix, feed_vars, fetch_vars, executor,
**kwargs) | :api_attr: Static Graph
Save current model and its parameters to given path. i.e.
Given path_prefix = "/path/to/modelname", after invoking
save_inference_model(path_prefix, feed_vars, fetch_vars, executor),
you will find two files named modelname.pdmodel and modelname.pdiparams
under "/path/to", which represent your model and parameters respectively.
Args:
path_prefix(str): Directory path to save model + model name without suffix.
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
executor(Executor): The executor that saves the inference model. You can refer
to :ref:`api_guide_executor_en` for more details.
kwargs: Supported keys including 'program' and "clip_extra". Attention please, kwargs is used for backward compatibility mainly.
- program(Program): specify a program if you don't want to use default main program.
- clip_extra(bool): set to True if you want to clip extra information for every operator.
Returns:
None
Raises:
ValueError: If `feed_vars` is not a Variable or a list of Variable, an exception is thrown.
ValueError: If `fetch_vars` is not a Variable or a list of Variable, an exception is thrown.
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
path_prefix = "./infer_model"
# User defined network, here a softmax regession example
image = paddle.static.data(name='img', shape=[None, 28, 28], dtype='float32')
label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
predict = paddle.static.nn.fc(image, 10, activation='softmax')
loss = paddle.nn.functional.cross_entropy(predict, label)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# Feed data and train process
# Save inference model. Note we don't save label and loss in this example
paddle.static.save_inference_model(path_prefix, [image], [predict], exe)
# In this example, the save_inference_mode inference will prune the default
# main program according to the network's input node (img) and output node(predict).
# The pruned inference program is going to be saved in file "./infer_model.pdmodel"
# and parameters are going to be saved in file "./infer_model.pdiparams". | :api_attr: Static Graph | [
":",
"api_attr",
":",
"Static",
"Graph"
] | def save_inference_model(path_prefix, feed_vars, fetch_vars, executor,
**kwargs):
"""
:api_attr: Static Graph
Save current model and its parameters to given path. i.e.
Given path_prefix = "/path/to/modelname", after invoking
save_inference_model(path_prefix, feed_vars, fetch_vars, executor),
you will find two files named modelname.pdmodel and modelname.pdiparams
under "/path/to", which represent your model and parameters respectively.
Args:
path_prefix(str): Directory path to save model + model name without suffix.
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
executor(Executor): The executor that saves the inference model. You can refer
to :ref:`api_guide_executor_en` for more details.
kwargs: Supported keys including 'program' and "clip_extra". Attention please, kwargs is used for backward compatibility mainly.
- program(Program): specify a program if you don't want to use default main program.
- clip_extra(bool): set to True if you want to clip extra information for every operator.
Returns:
None
Raises:
ValueError: If `feed_vars` is not a Variable or a list of Variable, an exception is thrown.
ValueError: If `fetch_vars` is not a Variable or a list of Variable, an exception is thrown.
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
path_prefix = "./infer_model"
# User defined network, here a softmax regession example
image = paddle.static.data(name='img', shape=[None, 28, 28], dtype='float32')
label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
predict = paddle.static.nn.fc(image, 10, activation='softmax')
loss = paddle.nn.functional.cross_entropy(predict, label)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# Feed data and train process
# Save inference model. Note we don't save label and loss in this example
paddle.static.save_inference_model(path_prefix, [image], [predict], exe)
# In this example, the save_inference_mode inference will prune the default
# main program according to the network's input node (img) and output node(predict).
# The pruned inference program is going to be saved in file "./infer_model.pdmodel"
# and parameters are going to be saved in file "./infer_model.pdiparams".
"""
# check path_prefix, set model_path and params_path
path_prefix = _normalize_path_prefix(path_prefix)
try:
# mkdir may conflict if pserver and trainer are running on the same machine
dirname = os.path.dirname(path_prefix)
os.makedirs(dirname)
except OSError as e:
if e.errno != errno.EEXIST:
raise
model_path = path_prefix + ".pdmodel"
params_path = path_prefix + ".pdiparams"
if os.path.isdir(model_path):
raise ValueError("'{}' is an existing directory.".format(model_path))
if os.path.isdir(params_path):
raise ValueError("'{}' is an existing directory.".format(params_path))
# verify feed_vars
_check_vars('feed_vars', feed_vars)
# verify fetch_vars
_check_vars('fetch_vars', fetch_vars)
program = _get_valid_program(kwargs.get('program', None))
clip_extra = kwargs.get('clip_extra', False)
program = normalize_program(program, feed_vars, fetch_vars)
# serialize and save program
program_bytes = _serialize_program(
program._remove_training_info(clip_extra=clip_extra))
save_to_file(model_path, program_bytes)
# serialize and save params
params_bytes = _serialize_persistables(program, executor)
save_to_file(params_path, params_bytes) | [
"def",
"save_inference_model",
"(",
"path_prefix",
",",
"feed_vars",
",",
"fetch_vars",
",",
"executor",
",",
"*",
"*",
"kwargs",
")",
":",
"# check path_prefix, set model_path and params_path",
"path_prefix",
"=",
"_normalize_path_prefix",
"(",
"path_prefix",
")",
"try",
":",
"# mkdir may conflict if pserver and trainer are running on the same machine",
"dirname",
"=",
"os",
".",
"path",
".",
"dirname",
"(",
"path_prefix",
")",
"os",
".",
"makedirs",
"(",
"dirname",
")",
"except",
"OSError",
"as",
"e",
":",
"if",
"e",
".",
"errno",
"!=",
"errno",
".",
"EEXIST",
":",
"raise",
"model_path",
"=",
"path_prefix",
"+",
"\".pdmodel\"",
"params_path",
"=",
"path_prefix",
"+",
"\".pdiparams\"",
"if",
"os",
".",
"path",
".",
"isdir",
"(",
"model_path",
")",
":",
"raise",
"ValueError",
"(",
"\"'{}' is an existing directory.\"",
".",
"format",
"(",
"model_path",
")",
")",
"if",
"os",
".",
"path",
".",
"isdir",
"(",
"params_path",
")",
":",
"raise",
"ValueError",
"(",
"\"'{}' is an existing directory.\"",
".",
"format",
"(",
"params_path",
")",
")",
"# verify feed_vars",
"_check_vars",
"(",
"'feed_vars'",
",",
"feed_vars",
")",
"# verify fetch_vars",
"_check_vars",
"(",
"'fetch_vars'",
",",
"fetch_vars",
")",
"program",
"=",
"_get_valid_program",
"(",
"kwargs",
".",
"get",
"(",
"'program'",
",",
"None",
")",
")",
"clip_extra",
"=",
"kwargs",
".",
"get",
"(",
"'clip_extra'",
",",
"False",
")",
"program",
"=",
"normalize_program",
"(",
"program",
",",
"feed_vars",
",",
"fetch_vars",
")",
"# serialize and save program",
"program_bytes",
"=",
"_serialize_program",
"(",
"program",
".",
"_remove_training_info",
"(",
"clip_extra",
"=",
"clip_extra",
")",
")",
"save_to_file",
"(",
"model_path",
",",
"program_bytes",
")",
"# serialize and save params",
"params_bytes",
"=",
"_serialize_persistables",
"(",
"program",
",",
"executor",
")",
"save_to_file",
"(",
"params_path",
",",
"params_bytes",
")"
] | https://github.com/PaddlePaddle/Paddle/blob/1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c/python/paddle/static/io.py#L433-L521 | ||
omnisci/omniscidb | b9c95f1bd602b4ffc8b0edf18bfad61031e08d86 | QueryEngine/scripts/generate_TableFunctionsFactory_init.py | python | AnnotationNode.__init__ | (self, key, value) | Parameters
----------
key : str
value : {str, list} | Parameters
----------
key : str
value : {str, list} | [
"Parameters",
"----------",
"key",
":",
"str",
"value",
":",
"{",
"str",
"list",
"}"
] | def __init__(self, key, value):
"""
Parameters
----------
key : str
value : {str, list}
"""
self.key = key
self.value = value | [
"def",
"__init__",
"(",
"self",
",",
"key",
",",
"value",
")",
":",
"self",
".",
"key",
"=",
"key",
"self",
".",
"value",
"=",
"value"
] | https://github.com/omnisci/omniscidb/blob/b9c95f1bd602b4ffc8b0edf18bfad61031e08d86/QueryEngine/scripts/generate_TableFunctionsFactory_init.py#L1163-L1171 | ||
mindspore-ai/mindspore | fb8fd3338605bb34fa5cea054e535a8b1d753fab | mindspore/python/mindspore/dataset/engine/datasets.py | python | Dataset.reset | (self) | Reset the dataset for next epoch. | Reset the dataset for next epoch. | [
"Reset",
"the",
"dataset",
"for",
"next",
"epoch",
"."
] | def reset(self):
"""Reset the dataset for next epoch.""" | [
"def",
"reset",
"(",
"self",
")",
":"
] | https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/dataset/engine/datasets.py#L1786-L1787 | ||
PX4/PX4-Autopilot | 0b9f60a0370be53d683352c63fd92db3d6586e18 | Tools/mavlink_px4.py | python | MAVLink.param_request_list_encode | (self, target_system, target_component) | return msg | Request all parameters of this component. After his request, all
parameters are emitted.
target_system : System ID (uint8_t)
target_component : Component ID (uint8_t) | Request all parameters of this component. After his request, all
parameters are emitted. | [
"Request",
"all",
"parameters",
"of",
"this",
"component",
".",
"After",
"his",
"request",
"all",
"parameters",
"are",
"emitted",
"."
] | def param_request_list_encode(self, target_system, target_component):
'''
Request all parameters of this component. After his request, all
parameters are emitted.
target_system : System ID (uint8_t)
target_component : Component ID (uint8_t)
'''
msg = MAVLink_param_request_list_message(target_system, target_component)
msg.pack(self)
return msg | [
"def",
"param_request_list_encode",
"(",
"self",
",",
"target_system",
",",
"target_component",
")",
":",
"msg",
"=",
"MAVLink_param_request_list_message",
"(",
"target_system",
",",
"target_component",
")",
"msg",
".",
"pack",
"(",
"self",
")",
"return",
"msg"
] | https://github.com/PX4/PX4-Autopilot/blob/0b9f60a0370be53d683352c63fd92db3d6586e18/Tools/mavlink_px4.py#L2707-L2718 | |
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/py_vulcanize/third_party/rcssmin/_setup/py3/shell.py | python | rm_rf | (dest) | Remove a tree | Remove a tree | [
"Remove",
"a",
"tree"
] | def rm_rf(dest):
""" Remove a tree """
dest = native(dest)
if _os.path.exists(dest):
for path in files(dest, '*'):
_os.chmod(native(path), 0o644)
_shutil.rmtree(dest) | [
"def",
"rm_rf",
"(",
"dest",
")",
":",
"dest",
"=",
"native",
"(",
"dest",
")",
"if",
"_os",
".",
"path",
".",
"exists",
"(",
"dest",
")",
":",
"for",
"path",
"in",
"files",
"(",
"dest",
",",
"'*'",
")",
":",
"_os",
".",
"chmod",
"(",
"native",
"(",
"path",
")",
",",
"0o644",
")",
"_shutil",
".",
"rmtree",
"(",
"dest",
")"
] | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/py_vulcanize/third_party/rcssmin/_setup/py3/shell.py#L84-L90 | ||
FreeCAD/FreeCAD | ba42231b9c6889b89e064d6d563448ed81e376ec | src/Mod/Draft/drafttaskpanels/task_polararray.py | python | TaskPanelPolarArray.reset_point | (self) | Reset the center point to the original distance. | Reset the center point to the original distance. | [
"Reset",
"the",
"center",
"point",
"to",
"the",
"original",
"distance",
"."
] | def reset_point(self):
"""Reset the center point to the original distance."""
self.form.input_c_x.setProperty('rawValue', 0)
self.form.input_c_y.setProperty('rawValue', 0)
self.form.input_c_z.setProperty('rawValue', 0)
self.center = self.get_center()
_msg(translate("draft","Center reset:")
+ " ({0}, {1}, {2})".format(self.center.x,
self.center.y,
self.center.z)) | [
"def",
"reset_point",
"(",
"self",
")",
":",
"self",
".",
"form",
".",
"input_c_x",
".",
"setProperty",
"(",
"'rawValue'",
",",
"0",
")",
"self",
".",
"form",
".",
"input_c_y",
".",
"setProperty",
"(",
"'rawValue'",
",",
"0",
")",
"self",
".",
"form",
".",
"input_c_z",
".",
"setProperty",
"(",
"'rawValue'",
",",
"0",
")",
"self",
".",
"center",
"=",
"self",
".",
"get_center",
"(",
")",
"_msg",
"(",
"translate",
"(",
"\"draft\"",
",",
"\"Center reset:\"",
")",
"+",
"\" ({0}, {1}, {2})\"",
".",
"format",
"(",
"self",
".",
"center",
".",
"x",
",",
"self",
".",
"center",
".",
"y",
",",
"self",
".",
"center",
".",
"z",
")",
")"
] | https://github.com/FreeCAD/FreeCAD/blob/ba42231b9c6889b89e064d6d563448ed81e376ec/src/Mod/Draft/drafttaskpanels/task_polararray.py#L282-L292 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/ntpath.py | python | ismount | (path) | Test whether a path is a mount point (a drive root, the root of a
share, or a mounted volume) | Test whether a path is a mount point (a drive root, the root of a
share, or a mounted volume) | [
"Test",
"whether",
"a",
"path",
"is",
"a",
"mount",
"point",
"(",
"a",
"drive",
"root",
"the",
"root",
"of",
"a",
"share",
"or",
"a",
"mounted",
"volume",
")"
] | def ismount(path):
"""Test whether a path is a mount point (a drive root, the root of a
share, or a mounted volume)"""
path = os.fspath(path)
seps = _get_bothseps(path)
path = abspath(path)
root, rest = splitdrive(path)
if root and root[0] in seps:
return (not rest) or (rest in seps)
if rest in seps:
return True
if _getvolumepathname:
return path.rstrip(seps) == _getvolumepathname(path).rstrip(seps)
else:
return False | [
"def",
"ismount",
"(",
"path",
")",
":",
"path",
"=",
"os",
".",
"fspath",
"(",
"path",
")",
"seps",
"=",
"_get_bothseps",
"(",
"path",
")",
"path",
"=",
"abspath",
"(",
"path",
")",
"root",
",",
"rest",
"=",
"splitdrive",
"(",
"path",
")",
"if",
"root",
"and",
"root",
"[",
"0",
"]",
"in",
"seps",
":",
"return",
"(",
"not",
"rest",
")",
"or",
"(",
"rest",
"in",
"seps",
")",
"if",
"rest",
"in",
"seps",
":",
"return",
"True",
"if",
"_getvolumepathname",
":",
"return",
"path",
".",
"rstrip",
"(",
"seps",
")",
"==",
"_getvolumepathname",
"(",
"path",
")",
".",
"rstrip",
"(",
"seps",
")",
"else",
":",
"return",
"False"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/ntpath.py#L260-L275 | ||
baidu/bigflow | 449245016c0df7d1252e85581e588bfc60cefad3 | bigflow_python/python/bigflow/pcollection.py | python | PCollection.combine | (self, fn, **options) | | return transforms.combine(self, fn, **options) | Equivalent to :func:`bigflow.transforms.combine(self, fn) <bigflow.transforms.combine>`
Args:
fn (callable): the combine function
**options: configurable options
Returns:
PObject: the combined result
>>> _pipeline.parallelize([2, 4, 6, 10]).combine(sum).get()
22 | Equivalent to :func:`bigflow.transforms.combine(self, fn) <bigflow.transforms.combine>` | [
"等同于",
":",
"func",
":",
"bigflow",
".",
"transforms",
".",
"combine",
"(",
"self",
"fn",
")",
"<bigflow",
".",
"transforms",
".",
"combine",
">"
] | def combine(self, fn, **options):
"""
Equivalent to :func:`bigflow.transforms.combine(self, fn) <bigflow.transforms.combine>`
Args:
fn (callable): the combine function
**options: configurable options
Returns:
PObject: the combined result
>>> _pipeline.parallelize([2, 4, 6, 10]).combine(sum).get()
22
"""
return transforms.combine(self, fn, **options) | [
"def",
"combine",
"(",
"self",
",",
"fn",
",",
"*",
"*",
"options",
")",
":",
"return",
"transforms",
".",
"combine",
"(",
"self",
",",
"fn",
",",
"*",
"*",
"options",
")"
] | https://github.com/baidu/bigflow/blob/449245016c0df7d1252e85581e588bfc60cefad3/bigflow_python/python/bigflow/pcollection.py#L112-L126 | |
natanielruiz/android-yolo | 1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f | jni-build/jni/include/tensorflow/models/embedding/word2vec_optimized.py | python | Word2Vec.analogy | (self, w0, w1, w2) | return "unknown" | Predict word w3 as in w0:w1 vs w2:w3. | Predict word w3 as in w0:w1 vs w2:w3. | [
"Predict",
"word",
"w3",
"as",
"in",
"w0",
":",
"w1",
"vs",
"w2",
":",
"w3",
"."
] | def analogy(self, w0, w1, w2):
"""Predict word w3 as in w0:w1 vs w2:w3."""
wid = np.array([[self._word2id.get(w, 0) for w in [w0, w1, w2]]])
idx = self._predict(wid)
for c in [self._id2word[i] for i in idx[0, :]]:
if c not in [w0, w1, w2]:
return c
return "unknown" | [
"def",
"analogy",
"(",
"self",
",",
"w0",
",",
"w1",
",",
"w2",
")",
":",
"wid",
"=",
"np",
".",
"array",
"(",
"[",
"[",
"self",
".",
"_word2id",
".",
"get",
"(",
"w",
",",
"0",
")",
"for",
"w",
"in",
"[",
"w0",
",",
"w1",
",",
"w2",
"]",
"]",
"]",
")",
"idx",
"=",
"self",
".",
"_predict",
"(",
"wid",
")",
"for",
"c",
"in",
"[",
"self",
".",
"_id2word",
"[",
"i",
"]",
"for",
"i",
"in",
"idx",
"[",
"0",
",",
":",
"]",
"]",
":",
"if",
"c",
"not",
"in",
"[",
"w0",
",",
"w1",
",",
"w2",
"]",
":",
"return",
"c",
"return",
"\"unknown\""
] | https://github.com/natanielruiz/android-yolo/blob/1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f/jni-build/jni/include/tensorflow/models/embedding/word2vec_optimized.py#L378-L385 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/array_analysis.py | python | SymbolicEquivSet._get_or_set_rel | (self, name, func_ir=None) | Retrieve a definition pair for the given variable,
and if it is not already available, try to look it up
in the given func_ir, and remember it for future use. | Retrieve a definition pair for the given variable,
and if it is not already available, try to look it up
in the given func_ir, and remember it for future use. | [
"Retrieve",
"a",
"definition",
"pair",
"for",
"the",
"given",
"variable",
"and",
"if",
"it",
"is",
"not",
"already",
"available",
"try",
"to",
"look",
"it",
"up",
"in",
"the",
"given",
"func_ir",
"and",
"remember",
"it",
"for",
"future",
"use",
"."
] | def _get_or_set_rel(self, name, func_ir=None):
"""Retrieve a definition pair for the given variable,
and if it is not already available, try to look it up
in the given func_ir, and remember it for future use.
"""
if isinstance(name, ir.Var):
name = name.name
require(self.defs.get(name, 0) == 1)
if name in self.def_by:
return self.def_by[name]
else:
require(func_ir != None)
def plus(x, y):
x_is_const = isinstance(x, int)
y_is_const = isinstance(y, int)
if x_is_const:
if y_is_const:
return x + y
else:
(var, offset) = y
return (var, x + offset)
else:
(var, offset) = x
if y_is_const:
return (var, y + offset)
else:
return None
def minus(x, y):
if isinstance(y, int):
return plus(x, -y)
elif (isinstance(x, tuple) and isinstance(y, tuple) and
x[0] == y[0]):
return minus(x[1], y[1])
else:
return None
expr = get_definition(func_ir, name)
value = (name, 0) # default to its own name
if isinstance(expr, ir.Expr):
if expr.op == 'call':
fname, mod_name = find_callname(
func_ir, expr, typemap=self.typemap)
if fname == 'wrap_index' and mod_name == 'numba.array_analysis':
index = tuple(self.obj_to_ind.get(x.name, -1)
for x in expr.args)
if -1 in index:
return None
names = self.ext_shapes.get(index, [])
names.append(name)
if len(names) > 0:
self._insert(names)
self.ext_shapes[index] = names
elif expr.op == 'binop':
lhs = self._get_or_set_rel(expr.lhs, func_ir)
rhs = self._get_or_set_rel(expr.rhs, func_ir)
if expr.fn == operator.add:
value = plus(lhs, rhs)
elif expr.fn == operator.sub:
value = minus(lhs, rhs)
elif isinstance(expr, ir.Const) and isinstance(expr.value, int):
value = expr.value
require(value != None)
# update def_by table
self.def_by[name] = value
if isinstance(value, int) or (isinstance(value, tuple) and
(value[0] != name or value[1] != 0)):
# update ref_by table too
if isinstance(value, tuple):
(var, offset) = value
if not (var in self.ref_by):
self.ref_by[var] = []
self.ref_by[var].append((name, -offset))
# insert new equivalence if found
ind = self._get_ind(var)
if ind >= 0:
objs = self.ind_to_obj[ind]
names = []
for obj in objs:
if obj in self.ref_by:
names += [ x for (x, i) in self.ref_by[obj]
if i == -offset ]
if len(names) > 1:
super(SymbolicEquivSet, self)._insert(names)
return value | [
"def",
"_get_or_set_rel",
"(",
"self",
",",
"name",
",",
"func_ir",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"name",
",",
"ir",
".",
"Var",
")",
":",
"name",
"=",
"name",
".",
"name",
"require",
"(",
"self",
".",
"defs",
".",
"get",
"(",
"name",
",",
"0",
")",
"==",
"1",
")",
"if",
"name",
"in",
"self",
".",
"def_by",
":",
"return",
"self",
".",
"def_by",
"[",
"name",
"]",
"else",
":",
"require",
"(",
"func_ir",
"!=",
"None",
")",
"def",
"plus",
"(",
"x",
",",
"y",
")",
":",
"x_is_const",
"=",
"isinstance",
"(",
"x",
",",
"int",
")",
"y_is_const",
"=",
"isinstance",
"(",
"y",
",",
"int",
")",
"if",
"x_is_const",
":",
"if",
"y_is_const",
":",
"return",
"x",
"+",
"y",
"else",
":",
"(",
"var",
",",
"offset",
")",
"=",
"y",
"return",
"(",
"var",
",",
"x",
"+",
"offset",
")",
"else",
":",
"(",
"var",
",",
"offset",
")",
"=",
"x",
"if",
"y_is_const",
":",
"return",
"(",
"var",
",",
"y",
"+",
"offset",
")",
"else",
":",
"return",
"None",
"def",
"minus",
"(",
"x",
",",
"y",
")",
":",
"if",
"isinstance",
"(",
"y",
",",
"int",
")",
":",
"return",
"plus",
"(",
"x",
",",
"-",
"y",
")",
"elif",
"(",
"isinstance",
"(",
"x",
",",
"tuple",
")",
"and",
"isinstance",
"(",
"y",
",",
"tuple",
")",
"and",
"x",
"[",
"0",
"]",
"==",
"y",
"[",
"0",
"]",
")",
":",
"return",
"minus",
"(",
"x",
"[",
"1",
"]",
",",
"y",
"[",
"1",
"]",
")",
"else",
":",
"return",
"None",
"expr",
"=",
"get_definition",
"(",
"func_ir",
",",
"name",
")",
"value",
"=",
"(",
"name",
",",
"0",
")",
"# default to its own name",
"if",
"isinstance",
"(",
"expr",
",",
"ir",
".",
"Expr",
")",
":",
"if",
"expr",
".",
"op",
"==",
"'call'",
":",
"fname",
",",
"mod_name",
"=",
"find_callname",
"(",
"func_ir",
",",
"expr",
",",
"typemap",
"=",
"self",
".",
"typemap",
")",
"if",
"fname",
"==",
"'wrap_index'",
"and",
"mod_name",
"==",
"'numba.array_analysis'",
":",
"index",
"=",
"tuple",
"(",
"self",
".",
"obj_to_ind",
".",
"get",
"(",
"x",
".",
"name",
",",
"-",
"1",
")",
"for",
"x",
"in",
"expr",
".",
"args",
")",
"if",
"-",
"1",
"in",
"index",
":",
"return",
"None",
"names",
"=",
"self",
".",
"ext_shapes",
".",
"get",
"(",
"index",
",",
"[",
"]",
")",
"names",
".",
"append",
"(",
"name",
")",
"if",
"len",
"(",
"names",
")",
">",
"0",
":",
"self",
".",
"_insert",
"(",
"names",
")",
"self",
".",
"ext_shapes",
"[",
"index",
"]",
"=",
"names",
"elif",
"expr",
".",
"op",
"==",
"'binop'",
":",
"lhs",
"=",
"self",
".",
"_get_or_set_rel",
"(",
"expr",
".",
"lhs",
",",
"func_ir",
")",
"rhs",
"=",
"self",
".",
"_get_or_set_rel",
"(",
"expr",
".",
"rhs",
",",
"func_ir",
")",
"if",
"expr",
".",
"fn",
"==",
"operator",
".",
"add",
":",
"value",
"=",
"plus",
"(",
"lhs",
",",
"rhs",
")",
"elif",
"expr",
".",
"fn",
"==",
"operator",
".",
"sub",
":",
"value",
"=",
"minus",
"(",
"lhs",
",",
"rhs",
")",
"elif",
"isinstance",
"(",
"expr",
",",
"ir",
".",
"Const",
")",
"and",
"isinstance",
"(",
"expr",
".",
"value",
",",
"int",
")",
":",
"value",
"=",
"expr",
".",
"value",
"require",
"(",
"value",
"!=",
"None",
")",
"# update def_by table",
"self",
".",
"def_by",
"[",
"name",
"]",
"=",
"value",
"if",
"isinstance",
"(",
"value",
",",
"int",
")",
"or",
"(",
"isinstance",
"(",
"value",
",",
"tuple",
")",
"and",
"(",
"value",
"[",
"0",
"]",
"!=",
"name",
"or",
"value",
"[",
"1",
"]",
"!=",
"0",
")",
")",
":",
"# update ref_by table too",
"if",
"isinstance",
"(",
"value",
",",
"tuple",
")",
":",
"(",
"var",
",",
"offset",
")",
"=",
"value",
"if",
"not",
"(",
"var",
"in",
"self",
".",
"ref_by",
")",
":",
"self",
".",
"ref_by",
"[",
"var",
"]",
"=",
"[",
"]",
"self",
".",
"ref_by",
"[",
"var",
"]",
".",
"append",
"(",
"(",
"name",
",",
"-",
"offset",
")",
")",
"# insert new equivalence if found",
"ind",
"=",
"self",
".",
"_get_ind",
"(",
"var",
")",
"if",
"ind",
">=",
"0",
":",
"objs",
"=",
"self",
".",
"ind_to_obj",
"[",
"ind",
"]",
"names",
"=",
"[",
"]",
"for",
"obj",
"in",
"objs",
":",
"if",
"obj",
"in",
"self",
".",
"ref_by",
":",
"names",
"+=",
"[",
"x",
"for",
"(",
"x",
",",
"i",
")",
"in",
"self",
".",
"ref_by",
"[",
"obj",
"]",
"if",
"i",
"==",
"-",
"offset",
"]",
"if",
"len",
"(",
"names",
")",
">",
"1",
":",
"super",
"(",
"SymbolicEquivSet",
",",
"self",
")",
".",
"_insert",
"(",
"names",
")",
"return",
"value"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/array_analysis.py#L741-L823 | ||
NVIDIA/MDL-SDK | aa9642b2546ad7b6236b5627385d882c2ed83c5d | examples/mdl_python/modules/example_modules.py | python | load_module | (neuray, transaction, module_mdl_name) | return res >= 0 | Load the module given its name.
Returns true if the module is loaded to database | Load the module given its name.
Returns true if the module is loaded to database | [
"Load",
"the",
"module",
"given",
"its",
"name",
".",
"Returns",
"true",
"if",
"the",
"module",
"is",
"loaded",
"to",
"database"
] | def load_module(neuray, transaction, module_mdl_name):
"""Load the module given its name.
Returns true if the module is loaded to database"""
with neuray.get_api_component(pymdlsdk.IMdl_impexp_api) as imp_exp:
with neuray.get_api_component(pymdlsdk.IMdl_factory) as mdl_factory:
# for illustration, we don't use a `with` block for the `context`, instead we release manually
context = mdl_factory.create_execution_context()
res = imp_exp.load_module(transaction, module_mdl_name, context)
context.release() # same as: context = None
return res >= 0 | [
"def",
"load_module",
"(",
"neuray",
",",
"transaction",
",",
"module_mdl_name",
")",
":",
"with",
"neuray",
".",
"get_api_component",
"(",
"pymdlsdk",
".",
"IMdl_impexp_api",
")",
"as",
"imp_exp",
":",
"with",
"neuray",
".",
"get_api_component",
"(",
"pymdlsdk",
".",
"IMdl_factory",
")",
"as",
"mdl_factory",
":",
"# for illustration, we don't use a `with` block for the `context`, instead we release manually",
"context",
"=",
"mdl_factory",
".",
"create_execution_context",
"(",
")",
"res",
"=",
"imp_exp",
".",
"load_module",
"(",
"transaction",
",",
"module_mdl_name",
",",
"context",
")",
"context",
".",
"release",
"(",
")",
"# same as: context = None",
"return",
"res",
">=",
"0"
] | https://github.com/NVIDIA/MDL-SDK/blob/aa9642b2546ad7b6236b5627385d882c2ed83c5d/examples/mdl_python/modules/example_modules.py#L284-L295 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/animate.py | python | Animation.GetFramePosition | (*args, **kwargs) | return _animate.Animation_GetFramePosition(*args, **kwargs) | GetFramePosition(self, int frame) -> Point | GetFramePosition(self, int frame) -> Point | [
"GetFramePosition",
"(",
"self",
"int",
"frame",
")",
"-",
">",
"Point"
] | def GetFramePosition(*args, **kwargs):
"""GetFramePosition(self, int frame) -> Point"""
return _animate.Animation_GetFramePosition(*args, **kwargs) | [
"def",
"GetFramePosition",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_animate",
".",
"Animation_GetFramePosition",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/animate.py#L122-L124 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/pydoc.py | python | HTMLDoc.multicolumn | (self, list, format, cols=4) | return '<table width="100%%" summary="list"><tr>%s</tr></table>' % result | Format a list of items into a multi-column list. | Format a list of items into a multi-column list. | [
"Format",
"a",
"list",
"of",
"items",
"into",
"a",
"multi",
"-",
"column",
"list",
"."
] | def multicolumn(self, list, format, cols=4):
"""Format a list of items into a multi-column list."""
result = ''
rows = (len(list)+cols-1)//cols
for col in range(cols):
result = result + '<td width="%d%%" valign=top>' % (100//cols)
for i in range(rows*col, rows*col+rows):
if i < len(list):
result = result + format(list[i]) + '<br>\n'
result = result + '</td>'
return '<table width="100%%" summary="list"><tr>%s</tr></table>' % result | [
"def",
"multicolumn",
"(",
"self",
",",
"list",
",",
"format",
",",
"cols",
"=",
"4",
")",
":",
"result",
"=",
"''",
"rows",
"=",
"(",
"len",
"(",
"list",
")",
"+",
"cols",
"-",
"1",
")",
"//",
"cols",
"for",
"col",
"in",
"range",
"(",
"cols",
")",
":",
"result",
"=",
"result",
"+",
"'<td width=\"%d%%\" valign=top>'",
"%",
"(",
"100",
"//",
"cols",
")",
"for",
"i",
"in",
"range",
"(",
"rows",
"*",
"col",
",",
"rows",
"*",
"col",
"+",
"rows",
")",
":",
"if",
"i",
"<",
"len",
"(",
"list",
")",
":",
"result",
"=",
"result",
"+",
"format",
"(",
"list",
"[",
"i",
"]",
")",
"+",
"'<br>\\n'",
"result",
"=",
"result",
"+",
"'</td>'",
"return",
"'<table width=\"100%%\" summary=\"list\"><tr>%s</tr></table>'",
"%",
"result"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/pydoc.py#L535-L545 | |
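The `multicolumn` row above lays items out column-major, with `rows = (len(list) + cols - 1) // cols` items per column. A dependency-free sketch of just that slicing, with the HTML table wrapper dropped (the function name is kept, everything else is illustrative):

```python
def multicolumn(items, cols=4):
    # Ceil-divide so every column holds at most `rows` items.
    rows = (len(items) + cols - 1) // cols
    # Column-major: each column is a contiguous slice, so items read
    # top-to-bottom down a column, then continue in the next column.
    return [items[rows * col:rows * col + rows] for col in range(cols)]

print(multicolumn(list('abcdefg'), cols=3))  # -> [['a', 'b', 'c'], ['d', 'e', 'f'], ['g']]
```

With seven items and three columns, `rows` is 3, so the last column is short — the same behavior the original's `if i < len(list)` guard produces.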
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/contrib/labeled_tensor/python/ops/ops.py | python | batch | (labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
allow_smaller_final_batch=False,
name=None) | return _batch_helper('lt_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name) | Rebatch a tensor.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch". | Rebatch a tensor. | [
"Rebatch",
"a",
"tensor",
"."
] | def batch(labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
allow_smaller_final_batch=False,
name=None):
"""Rebatch a tensor.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch".
"""
def fn(tensors, scope):
return input.batch(
tensors,
batch_size=batch_size,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
allow_smaller_final_batch=allow_smaller_final_batch,
name=scope)
return _batch_helper('lt_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name) | [
"def",
"batch",
"(",
"labeled_tensors",
",",
"batch_size",
",",
"num_threads",
"=",
"1",
",",
"capacity",
"=",
"32",
",",
"enqueue_many",
"=",
"False",
",",
"allow_smaller_final_batch",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"def",
"fn",
"(",
"tensors",
",",
"scope",
")",
":",
"return",
"input",
".",
"batch",
"(",
"tensors",
",",
"batch_size",
"=",
"batch_size",
",",
"num_threads",
"=",
"num_threads",
",",
"capacity",
"=",
"capacity",
",",
"enqueue_many",
"=",
"enqueue_many",
",",
"allow_smaller_final_batch",
"=",
"allow_smaller_final_batch",
",",
"name",
"=",
"scope",
")",
"return",
"_batch_helper",
"(",
"'lt_batch'",
",",
"fn",
",",
"batch_size",
",",
"enqueue_many",
",",
"labeled_tensors",
",",
"allow_smaller_final_batch",
",",
"name",
")"
] | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/contrib/labeled_tensor/python/ops/ops.py#L452-L496 | |
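The `batch` row above delegates to TensorFlow's `input.batch`; the queue and threading machinery aside, the documented contract is simple grouping. A framework-free sketch of only the `batch_size` / `allow_smaller_final_batch` behavior (this is an illustration of the contract, not the real queued implementation):

```python
# Group consecutive examples into fixed-size batches; optionally emit the
# short final batch instead of dropping the leftovers.
def batch(examples, batch_size, allow_smaller_final_batch=False):
    batches, current = [], []
    for example in examples:
        current.append(example)
        if len(current) == batch_size:
            batches.append(current)
            current = []
    if current and allow_smaller_final_batch:
        batches.append(current)  # emit the short final batch
    return batches

print(batch(list(range(5)), 2, allow_smaller_final_batch=True))  # -> [[0, 1], [2, 3], [4]]
```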
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python3/src/Lib/textwrap.py | python | fill | (text, width=70, **kwargs) | return w.fill(text) | Fill a single paragraph of text, returning a new string.
Reformat the single paragraph in 'text' to fit in lines of no more
than 'width' columns, and return a new string containing the entire
wrapped paragraph. As with wrap(), tabs are expanded and other
whitespace characters converted to space. See TextWrapper class for
available keyword args to customize wrapping behaviour. | Fill a single paragraph of text, returning a new string. | [
"Fill",
"a",
"single",
"paragraph",
"of",
"text",
"returning",
"a",
"new",
"string",
"."
] | def fill(text, width=70, **kwargs):
"""Fill a single paragraph of text, returning a new string.
Reformat the single paragraph in 'text' to fit in lines of no more
than 'width' columns, and return a new string containing the entire
wrapped paragraph. As with wrap(), tabs are expanded and other
whitespace characters converted to space. See TextWrapper class for
available keyword args to customize wrapping behaviour.
"""
w = TextWrapper(width=width, **kwargs)
return w.fill(text) | [
"def",
"fill",
"(",
"text",
",",
"width",
"=",
"70",
",",
"*",
"*",
"kwargs",
")",
":",
"w",
"=",
"TextWrapper",
"(",
"width",
"=",
"width",
",",
"*",
"*",
"kwargs",
")",
"return",
"w",
".",
"fill",
"(",
"text",
")"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python3/src/Lib/textwrap.py#L381-L391 | |
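The `fill` row above is a thin convenience over `TextWrapper`; a quick standard-library illustration of the wrapping it documents:

```python
import textwrap

# fill() reflows one paragraph to the requested width (tabs expanded,
# runs of whitespace collapsed), returning a single newline-joined string.
wrapped = textwrap.fill("the quick brown fox jumps over the lazy dog", width=15)
print(wrapped)
# the quick brown
# fox jumps over
# the lazy dog
```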
arangodb/arangodb | 0d658689c7d1b721b314fa3ca27d38303e1570c8 | 3rdParty/V8/gyp/xcode_emulation.py | python | GetXcodeArchsDefault | () | return XCODE_ARCHS_DEFAULT_CACHE | Returns the |XcodeArchsDefault| object to use to expand ARCHS for the
installed version of Xcode. The default values used by Xcode for ARCHS
and the expansion of the variables depends on the version of Xcode used.
For all version anterior to Xcode 5.0 or posterior to Xcode 5.1 included
uses $(ARCHS_STANDARD) if ARCHS is unset, while Xcode 5.0 to 5.0.2 uses
$(ARCHS_STANDARD_INCLUDING_64_BIT). This variable was added to Xcode 5.0
and deprecated with Xcode 5.1.
For "macosx" SDKROOT, all version starting with Xcode 5.0 includes 64-bit
architecture as part of $(ARCHS_STANDARD) and default to only building it.
For "iphoneos" and "iphonesimulator" SDKROOT, 64-bit architectures are part
of $(ARCHS_STANDARD_INCLUDING_64_BIT) from Xcode 5.0. From Xcode 5.1, they
are also part of $(ARCHS_STANDARD).
All those rules are coded in the construction of the |XcodeArchsDefault|
object to use depending on the version of Xcode detected. The object is
cached for performance reasons. | Returns the |XcodeArchsDefault| object to use to expand ARCHS for the
installed version of Xcode. The default values used by Xcode for ARCHS
and the expansion of the variables depends on the version of Xcode used. | [
"Returns",
"the",
"|XcodeArchsDefault|",
"object",
"to",
"use",
"to",
"expand",
"ARCHS",
"for",
"the",
"installed",
"version",
"of",
"Xcode",
".",
"The",
"default",
"values",
"used",
"by",
"Xcode",
"for",
"ARCHS",
"and",
"the",
"expansion",
"of",
"the",
"variables",
"depends",
"on",
"the",
"version",
"of",
"Xcode",
"used",
"."
] | def GetXcodeArchsDefault():
"""Returns the |XcodeArchsDefault| object to use to expand ARCHS for the
installed version of Xcode. The default values used by Xcode for ARCHS
and the expansion of the variables depends on the version of Xcode used.
For all version anterior to Xcode 5.0 or posterior to Xcode 5.1 included
uses $(ARCHS_STANDARD) if ARCHS is unset, while Xcode 5.0 to 5.0.2 uses
$(ARCHS_STANDARD_INCLUDING_64_BIT). This variable was added to Xcode 5.0
and deprecated with Xcode 5.1.
For "macosx" SDKROOT, all version starting with Xcode 5.0 includes 64-bit
architecture as part of $(ARCHS_STANDARD) and default to only building it.
For "iphoneos" and "iphonesimulator" SDKROOT, 64-bit architectures are part
of $(ARCHS_STANDARD_INCLUDING_64_BIT) from Xcode 5.0. From Xcode 5.1, they
are also part of $(ARCHS_STANDARD).
All those rules are coded in the construction of the |XcodeArchsDefault|
object to use depending on the version of Xcode detected. The object is
cached for performance reasons."""
global XCODE_ARCHS_DEFAULT_CACHE
if XCODE_ARCHS_DEFAULT_CACHE:
return XCODE_ARCHS_DEFAULT_CACHE
xcode_version, _ = XcodeVersion()
if xcode_version < '0500':
XCODE_ARCHS_DEFAULT_CACHE = XcodeArchsDefault(
'$(ARCHS_STANDARD)',
XcodeArchsVariableMapping(['i386']),
XcodeArchsVariableMapping(['i386']),
XcodeArchsVariableMapping(['armv7']))
elif xcode_version < '0510':
XCODE_ARCHS_DEFAULT_CACHE = XcodeArchsDefault(
'$(ARCHS_STANDARD_INCLUDING_64_BIT)',
XcodeArchsVariableMapping(['x86_64'], ['x86_64']),
XcodeArchsVariableMapping(['i386'], ['i386', 'x86_64']),
XcodeArchsVariableMapping(
['armv7', 'armv7s'],
['armv7', 'armv7s', 'arm64']))
else:
XCODE_ARCHS_DEFAULT_CACHE = XcodeArchsDefault(
'$(ARCHS_STANDARD)',
XcodeArchsVariableMapping(['x86_64'], ['x86_64']),
XcodeArchsVariableMapping(['i386', 'x86_64'], ['i386', 'x86_64']),
XcodeArchsVariableMapping(
['armv7', 'armv7s', 'arm64'],
['armv7', 'armv7s', 'arm64']))
return XCODE_ARCHS_DEFAULT_CACHE | [
"def",
"GetXcodeArchsDefault",
"(",
")",
":",
"global",
"XCODE_ARCHS_DEFAULT_CACHE",
"if",
"XCODE_ARCHS_DEFAULT_CACHE",
":",
"return",
"XCODE_ARCHS_DEFAULT_CACHE",
"xcode_version",
",",
"_",
"=",
"XcodeVersion",
"(",
")",
"if",
"xcode_version",
"<",
"'0500'",
":",
"XCODE_ARCHS_DEFAULT_CACHE",
"=",
"XcodeArchsDefault",
"(",
"'$(ARCHS_STANDARD)'",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'i386'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'i386'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'armv7'",
"]",
")",
")",
"elif",
"xcode_version",
"<",
"'0510'",
":",
"XCODE_ARCHS_DEFAULT_CACHE",
"=",
"XcodeArchsDefault",
"(",
"'$(ARCHS_STANDARD_INCLUDING_64_BIT)'",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'x86_64'",
"]",
",",
"[",
"'x86_64'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'i386'",
"]",
",",
"[",
"'i386'",
",",
"'x86_64'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'armv7'",
",",
"'armv7s'",
"]",
",",
"[",
"'armv7'",
",",
"'armv7s'",
",",
"'arm64'",
"]",
")",
")",
"else",
":",
"XCODE_ARCHS_DEFAULT_CACHE",
"=",
"XcodeArchsDefault",
"(",
"'$(ARCHS_STANDARD)'",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'x86_64'",
"]",
",",
"[",
"'x86_64'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'i386'",
",",
"'x86_64'",
"]",
",",
"[",
"'i386'",
",",
"'x86_64'",
"]",
")",
",",
"XcodeArchsVariableMapping",
"(",
"[",
"'armv7'",
",",
"'armv7s'",
",",
"'arm64'",
"]",
",",
"[",
"'armv7'",
",",
"'armv7s'",
",",
"'arm64'",
"]",
")",
")",
"return",
"XCODE_ARCHS_DEFAULT_CACHE"
] | https://github.com/arangodb/arangodb/blob/0d658689c7d1b721b314fa3ca27d38303e1570c8/3rdParty/V8/gyp/xcode_emulation.py#L99-L145 | |
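The `GetXcodeArchsDefault` row above combines two small ideas: a module-level cache so the default is computed once per installed Xcode, and lexicographic comparison of zero-padded version strings (`'0500'`, `'0510'`), which orders the same way as the numeric versions. A pared-down sketch of just that pattern (the returned variable names are taken from the source; the helper name is illustrative):

```python
_ARCHS_DEFAULT_CACHE = None

def archs_default_var(xcode_version):
    global _ARCHS_DEFAULT_CACHE
    if _ARCHS_DEFAULT_CACHE is not None:
        return _ARCHS_DEFAULT_CACHE  # computed once per installed Xcode
    if xcode_version < '0500':
        _ARCHS_DEFAULT_CACHE = '$(ARCHS_STANDARD)'
    elif xcode_version < '0510':
        # Xcode 5.0 - 5.0.2 only: 64-bit hides behind this variable
        _ARCHS_DEFAULT_CACHE = '$(ARCHS_STANDARD_INCLUDING_64_BIT)'
    else:
        _ARCHS_DEFAULT_CACHE = '$(ARCHS_STANDARD)'
    return _ARCHS_DEFAULT_CACHE
```

Note the caching consequence: once primed, later calls return the first answer even for a different version argument — correct here because the installed Xcode does not change within a run.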
Tencent/Pebble | 68315f176d9e328a233ace29b7579a829f89879f | tools/blade/src/blade/java_jar_target.py | python | JavaJarTarget.scons_rules | (self) | scons_rules.
Description
-----------
It outputs the scons rules according to user options. | scons_rules. | [
"scons_rules",
"."
] | def scons_rules(self):
"""scons_rules.
Description
-----------
It outputs the scons rules according to user options.
"""
self._clone_env()
if self.type == 'prebuilt_java_jar':
self._prebuilt_java_jar()
return
env_name = self._env_name()
class_root = self._java_jar_gen_class_root(self.path,
self.name)
targets = self.blade.get_build_targets()
for key in self.expanded_deps:
self.cmd_var_list += targets[key].data.get('java_dep_var', [])
self.java_jar_dep_source_list = []
for key in self.expanded_deps:
dep_target = targets[key]
sources = dep_target.data.get('java_sources')
if sources:
self.java_jar_dep_source_list.append(sources)
self.java_classpath_list = []
for key in self.expanded_deps:
class_path = targets[key].data.get('jar_class_path')
if class_path:
self.java_classpath_list.append(class_path)
# make unique
self.java_jar_dep_source_list = list(set(self.java_jar_dep_source_list))
if not class_root in self.java_jar_cmd_list:
self._write_rule('%s.Command("%s", "", [Mkdir("%s")])' % (
env_name, class_root, class_root))
self.java_jar_cmd_list.append(class_root)
target_source_list = []
for src_dir in self.srcs:
java_src = os.path.join(self.path, src_dir)
if not java_src in target_source_list:
target_source_list.append(java_src)
new_src_dir = ''
src_dir = '%s_src' % self.name
new_src_dir = os.path.join(self.build_path, self.path, src_dir)
if not new_src_dir in self.java_jar_cmd_list:
self._write_rule('%s.Command("%s", "", [Mkdir("%s")])' % (
env_name,
new_src_dir,
new_src_dir))
self.java_jar_cmd_list.append(new_src_dir)
pack_list = []
classes_var_list = []
if self.java_jar_dep_source_list:
self._java_jar_rules_prepare_dep(new_src_dir)
self._java_jar_rules_compile_src(target_source_list,
new_src_dir,
pack_list,
classes_var_list)
self._java_jar_rules_make_jar(pack_list, classes_var_list) | [
"def",
"scons_rules",
"(",
"self",
")",
":",
"self",
".",
"_clone_env",
"(",
")",
"if",
"self",
".",
"type",
"==",
"'prebuilt_java_jar'",
":",
"self",
".",
"_prebuilt_java_jar",
"(",
")",
"return",
"env_name",
"=",
"self",
".",
"_env_name",
"(",
")",
"class_root",
"=",
"self",
".",
"_java_jar_gen_class_root",
"(",
"self",
".",
"path",
",",
"self",
".",
"name",
")",
"targets",
"=",
"self",
".",
"blade",
".",
"get_build_targets",
"(",
")",
"for",
"key",
"in",
"self",
".",
"expanded_deps",
":",
"self",
".",
"cmd_var_list",
"+=",
"targets",
"[",
"key",
"]",
".",
"data",
".",
"get",
"(",
"'java_dep_var'",
",",
"[",
"]",
")",
"self",
".",
"java_jar_dep_source_list",
"=",
"[",
"]",
"for",
"key",
"in",
"self",
".",
"expanded_deps",
":",
"dep_target",
"=",
"targets",
"[",
"key",
"]",
"sources",
"=",
"dep_target",
".",
"data",
".",
"get",
"(",
"'java_sources'",
")",
"if",
"sources",
":",
"self",
".",
"java_jar_dep_source_list",
".",
"append",
"(",
"sources",
")",
"self",
".",
"java_classpath_list",
"=",
"[",
"]",
"for",
"key",
"in",
"self",
".",
"expanded_deps",
":",
"class_path",
"=",
"targets",
"[",
"key",
"]",
".",
"data",
".",
"get",
"(",
"'jar_class_path'",
")",
"if",
"class_path",
":",
"self",
".",
"java_classpath_list",
".",
"append",
"(",
"class_path",
")",
"# make unique",
"self",
".",
"java_jar_dep_source_list",
"=",
"list",
"(",
"set",
"(",
"self",
".",
"java_jar_dep_source_list",
")",
")",
"if",
"not",
"class_root",
"in",
"self",
".",
"java_jar_cmd_list",
":",
"self",
".",
"_write_rule",
"(",
"'%s.Command(\"%s\", \"\", [Mkdir(\"%s\")])'",
"%",
"(",
"env_name",
",",
"class_root",
",",
"class_root",
")",
")",
"self",
".",
"java_jar_cmd_list",
".",
"append",
"(",
"class_root",
")",
"target_source_list",
"=",
"[",
"]",
"for",
"src_dir",
"in",
"self",
".",
"srcs",
":",
"java_src",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"path",
",",
"src_dir",
")",
"if",
"not",
"java_src",
"in",
"target_source_list",
":",
"target_source_list",
".",
"append",
"(",
"java_src",
")",
"new_src_dir",
"=",
"''",
"src_dir",
"=",
"'%s_src'",
"%",
"self",
".",
"name",
"new_src_dir",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"build_path",
",",
"self",
".",
"path",
",",
"src_dir",
")",
"if",
"not",
"new_src_dir",
"in",
"self",
".",
"java_jar_cmd_list",
":",
"self",
".",
"_write_rule",
"(",
"'%s.Command(\"%s\", \"\", [Mkdir(\"%s\")])'",
"%",
"(",
"env_name",
",",
"new_src_dir",
",",
"new_src_dir",
")",
")",
"self",
".",
"java_jar_cmd_list",
".",
"append",
"(",
"new_src_dir",
")",
"pack_list",
"=",
"[",
"]",
"classes_var_list",
"=",
"[",
"]",
"if",
"self",
".",
"java_jar_dep_source_list",
":",
"self",
".",
"_java_jar_rules_prepare_dep",
"(",
"new_src_dir",
")",
"self",
".",
"_java_jar_rules_compile_src",
"(",
"target_source_list",
",",
"new_src_dir",
",",
"pack_list",
",",
"classes_var_list",
")",
"self",
".",
"_java_jar_rules_make_jar",
"(",
"pack_list",
",",
"classes_var_list",
")"
] | https://github.com/Tencent/Pebble/blob/68315f176d9e328a233ace29b7579a829f89879f/tools/blade/src/blade/java_jar_target.py#L340-L410 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/site-packages/s3transfer/manager.py | python | TransferManager.shutdown | (self, cancel=False, cancel_msg='') | Shutdown the TransferManager
It will wait till all transfers complete before it completely shuts
down.
:type cancel: boolean
:param cancel: If True, calls TransferFuture.cancel() for
all in-progress in transfers. This is useful if you want the
shutdown to happen quicker.
:type cancel_msg: str
:param cancel_msg: The message to specify if canceling all in-progress
transfers. | Shutdown the TransferManager | [
"Shutdown",
"the",
"TransferManager"
] | def shutdown(self, cancel=False, cancel_msg=''):
"""Shutdown the TransferManager
It will wait till all transfers complete before it completely shuts
down.
:type cancel: boolean
:param cancel: If True, calls TransferFuture.cancel() for
all in-progress in transfers. This is useful if you want the
shutdown to happen quicker.
:type cancel_msg: str
:param cancel_msg: The message to specify if canceling all in-progress
transfers.
"""
self._shutdown(cancel, cancel, cancel_msg) | [
"def",
"shutdown",
"(",
"self",
",",
"cancel",
"=",
"False",
",",
"cancel_msg",
"=",
"''",
")",
":",
"self",
".",
"_shutdown",
"(",
"cancel",
",",
"cancel",
",",
"cancel_msg",
")"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/site-packages/s3transfer/manager.py#L541-L556 | ||
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/ops/distributions/bijector_impl.py | python | _Mapping.merge | (self, x=None, y=None, ildj_map=None, kwargs=None, mapping=None) | return _Mapping(
x=self._merge(self.x, mapping.x),
y=self._merge(self.y, mapping.y),
ildj_map=self._merge_dicts(self.ildj_map, mapping.ildj_map),
kwargs=self._merge(self.kwargs, mapping.kwargs)) | Returns new _Mapping with args merged with self.
Args:
x: `Tensor`. Forward.
y: `Tensor`. Inverse.
ildj_map: `Dictionary`. This is a mapping from event_ndims to a `Tensor`
representing the inverse log det jacobian.
kwargs: Python dictionary. Extra args supplied to
forward/inverse/etc functions.
mapping: Instance of _Mapping to merge. Can only be specified if no other
arg is specified.
Returns:
mapping: New instance of `_Mapping` which has inputs merged with self.
Raises:
ValueError: if mapping and any other arg is not `None`. | Returns new _Mapping with args merged with self. | [
"Returns",
"new",
"_Mapping",
"with",
"args",
"merged",
"with",
"self",
"."
] | def merge(self, x=None, y=None, ildj_map=None, kwargs=None, mapping=None):
"""Returns new _Mapping with args merged with self.
Args:
x: `Tensor`. Forward.
y: `Tensor`. Inverse.
ildj_map: `Dictionary`. This is a mapping from event_ndims to a `Tensor`
representing the inverse log det jacobian.
kwargs: Python dictionary. Extra args supplied to
forward/inverse/etc functions.
mapping: Instance of _Mapping to merge. Can only be specified if no other
arg is specified.
Returns:
mapping: New instance of `_Mapping` which has inputs merged with self.
Raises:
ValueError: if mapping and any other arg is not `None`.
"""
if mapping is None:
mapping = _Mapping(x=x, y=y, ildj_map=ildj_map, kwargs=kwargs)
elif any(arg is not None for arg in [x, y, ildj_map, kwargs]):
raise ValueError("Cannot simultaneously specify mapping and individual "
"arguments.")
return _Mapping(
x=self._merge(self.x, mapping.x),
y=self._merge(self.y, mapping.y),
ildj_map=self._merge_dicts(self.ildj_map, mapping.ildj_map),
kwargs=self._merge(self.kwargs, mapping.kwargs)) | [
"def",
"merge",
"(",
"self",
",",
"x",
"=",
"None",
",",
"y",
"=",
"None",
",",
"ildj_map",
"=",
"None",
",",
"kwargs",
"=",
"None",
",",
"mapping",
"=",
"None",
")",
":",
"if",
"mapping",
"is",
"None",
":",
"mapping",
"=",
"_Mapping",
"(",
"x",
"=",
"x",
",",
"y",
"=",
"y",
",",
"ildj_map",
"=",
"ildj_map",
",",
"kwargs",
"=",
"kwargs",
")",
"elif",
"any",
"(",
"arg",
"is",
"not",
"None",
"for",
"arg",
"in",
"[",
"x",
",",
"y",
",",
"ildj_map",
",",
"kwargs",
"]",
")",
":",
"raise",
"ValueError",
"(",
"\"Cannot simultaneously specify mapping and individual \"",
"\"arguments.\"",
")",
"return",
"_Mapping",
"(",
"x",
"=",
"self",
".",
"_merge",
"(",
"self",
".",
"x",
",",
"mapping",
".",
"x",
")",
",",
"y",
"=",
"self",
".",
"_merge",
"(",
"self",
".",
"y",
",",
"mapping",
".",
"y",
")",
",",
"ildj_map",
"=",
"self",
".",
"_merge_dicts",
"(",
"self",
".",
"ildj_map",
",",
"mapping",
".",
"ildj_map",
")",
",",
"kwargs",
"=",
"self",
".",
"_merge",
"(",
"self",
".",
"kwargs",
",",
"mapping",
".",
"kwargs",
")",
")"
] | https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/ops/distributions/bijector_impl.py#L72-L101 | |
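The `_Mapping.merge` row above enforces a merge-or-kwargs rule: a caller passes either a ready-made `mapping` or loose fields, never both. The `_merge` helper is not shown in the row, so its conflict policy below is an assumption (keep the already-set value, fill in `None`, reject disagreement) and the sketch tracks a single field instead of the full `(x, y, ildj_map, kwargs)` tuple:

```python
def _merge(old, new):
    # Assumed policy: None is "unset"; two set-but-different values conflict.
    if old is None:
        return new
    if new is None:
        return old
    if old != new:
        raise ValueError("Incompatible values: %r vs %r" % (old, new))
    return old

def merge(current_x, x=None, mapping=None):
    if mapping is None:
        mapping = x  # build the "mapping" from the loose argument
    elif x is not None:
        raise ValueError("Cannot simultaneously specify mapping and "
                         "individual arguments.")
    return _merge(current_x, mapping)
```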
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/numpy/py2/numpy/core/defchararray.py | python | lstrip | (a, chars=None) | return _vec_string(a_arr, a_arr.dtype, 'lstrip', (chars,)) | For each element in `a`, return a copy with the leading characters
removed.
Calls `str.lstrip` element-wise.
Parameters
----------
a : array-like, {str, unicode}
Input array.
chars : {str, unicode}, optional
The `chars` argument is a string specifying the set of
characters to be removed. If omitted or None, the `chars`
argument defaults to removing whitespace. The `chars` argument
is not a prefix; rather, all combinations of its values are
stripped.
Returns
-------
out : ndarray, {str, unicode}
Output array of str or unicode, depending on input type
See also
--------
str.lstrip
Examples
--------
>>> c = np.array(['aAaAaA', ' aA ', 'abBABba'])
>>> c
array(['aAaAaA', ' aA ', 'abBABba'],
dtype='|S7')
The 'a' variable is unstripped from c[1] because of leading whitespace.
>>> np.char.lstrip(c, 'a')
array(['AaAaA', ' aA ', 'bBABba'],
dtype='|S7')
>>> np.char.lstrip(c, 'A') # leaves c unchanged
array(['aAaAaA', ' aA ', 'abBABba'],
dtype='|S7')
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, '')).all()
... # XXX: is this a regression? this line now returns False
... # np.char.lstrip(c,'') does not modify c at all.
True
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, None)).all()
True | For each element in `a`, return a copy with the leading characters
removed. | [
"For",
"each",
"element",
"in",
"a",
"return",
"a",
"copy",
"with",
"the",
"leading",
"characters",
"removed",
"."
] | def lstrip(a, chars=None):
"""
For each element in `a`, return a copy with the leading characters
removed.
Calls `str.lstrip` element-wise.
Parameters
----------
a : array-like, {str, unicode}
Input array.
chars : {str, unicode}, optional
The `chars` argument is a string specifying the set of
characters to be removed. If omitted or None, the `chars`
argument defaults to removing whitespace. The `chars` argument
is not a prefix; rather, all combinations of its values are
stripped.
Returns
-------
out : ndarray, {str, unicode}
Output array of str or unicode, depending on input type
See also
--------
str.lstrip
Examples
--------
>>> c = np.array(['aAaAaA', ' aA ', 'abBABba'])
>>> c
array(['aAaAaA', ' aA ', 'abBABba'],
dtype='|S7')
The 'a' variable is unstripped from c[1] because of leading whitespace.
>>> np.char.lstrip(c, 'a')
array(['AaAaA', ' aA ', 'bBABba'],
dtype='|S7')
>>> np.char.lstrip(c, 'A') # leaves c unchanged
array(['aAaAaA', ' aA ', 'abBABba'],
dtype='|S7')
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, '')).all()
... # XXX: is this a regression? this line now returns False
... # np.char.lstrip(c,'') does not modify c at all.
True
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, None)).all()
True
"""
a_arr = numpy.asarray(a)
return _vec_string(a_arr, a_arr.dtype, 'lstrip', (chars,)) | [
"def",
"lstrip",
"(",
"a",
",",
"chars",
"=",
"None",
")",
":",
"a_arr",
"=",
"numpy",
".",
"asarray",
"(",
"a",
")",
"return",
"_vec_string",
"(",
"a_arr",
",",
"a_arr",
".",
"dtype",
",",
"'lstrip'",
",",
"(",
"chars",
",",
")",
")"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/numpy/py2/numpy/core/defchararray.py#L1055-L1109 | |
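The `lstrip` row above applies `str.lstrip` element-wise over an array; a dependency-free sketch of the same semantics over a plain list (`chars=None` strips leading whitespace, and a non-None `chars` is treated as a *set* of characters, not a prefix — exactly the point the docstring's examples make):

```python
def lstrip_all(strings, chars=None):
    # Element-wise str.lstrip, mirroring np.char.lstrip's behavior.
    return [s.lstrip(chars) for s in strings]

print(lstrip_all(['aAaAaA', '  aA  ', 'abBABba'], 'a'))
# -> ['AaAaA', '  aA  ', 'bBABba']
```

The middle string is untouched because its leading characters are spaces, which are not in the set `{'a'}`.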
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/llvmlite/ir/builder.py | python | IRBuilder.sitofp | (self, value, typ, name='') | Convert signed integer to floating-point:
name = (typ) value | Convert signed integer to floating-point:
name = (typ) value | [
"Convert",
"signed",
"integer",
"to",
"floating",
"-",
"point",
":",
"name",
"=",
"(",
"typ",
")",
"value"
] | def sitofp(self, value, typ, name=''):
"""
Convert signed integer to floating-point:
name = (typ) value
""" | [
"def",
"sitofp",
"(",
"self",
",",
"value",
",",
"typ",
",",
"name",
"=",
"''",
")",
":"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/llvmlite/ir/builder.py#L675-L679 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemFramework/v1/ResourceManager/lib/Crypto/Cipher/_mode_siv.py | python | SivMode.encrypt | (self, plaintext) | For SIV, encryption and MAC authentication must take place at the same
point. This method shall not be used.
Use `encrypt_and_digest` instead. | For SIV, encryption and MAC authentication must take place at the same
point. This method shall not be used. | [
"For",
"SIV",
"encryption",
"and",
"MAC",
"authentication",
"must",
"take",
"place",
"at",
"the",
"same",
"point",
".",
"This",
"method",
"shall",
"not",
"be",
"used",
"."
] | def encrypt(self, plaintext):
"""
For SIV, encryption and MAC authentication must take place at the same
point. This method shall not be used.
Use `encrypt_and_digest` instead.
"""
raise TypeError("encrypt() not allowed for SIV mode."
" Use encrypt_and_digest() instead.") | [
"def",
"encrypt",
"(",
"self",
",",
"plaintext",
")",
":",
"raise",
"TypeError",
"(",
"\"encrypt() not allowed for SIV mode.\"",
"\" Use encrypt_and_digest() instead.\"",
")"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemFramework/v1/ResourceManager/lib/Crypto/Cipher/_mode_siv.py#L176-L185 | ||
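The SIV row above shows an API-design guard: SIV must compute ciphertext and MAC together, so plain `encrypt()` is forbidden in favor of `encrypt_and_digest()`. A behavior-only sketch of that guard — no real cryptography, the placeholder return value is purely illustrative:

```python
class SivLike:
    def encrypt(self, plaintext):
        # Same error path as the original: steer callers to the AEAD call.
        raise TypeError("encrypt() not allowed for SIV mode."
                        " Use encrypt_and_digest() instead.")

    def encrypt_and_digest(self, plaintext):
        return plaintext, b"\x00" * 16  # (ciphertext, tag) placeholder only
```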
fabianschenk/RESLAM | 2e71a578b6d1a1ad1fb018641218e1f41dd9e330 | thirdparty/Sophus/py/sophus/so3.py | python | So3.matrix | (self) | return sympy.Matrix([[
1 - 2 * self.q.vec[1]**2 - 2 * self.q.vec[2]**2,
2 * self.q.vec[0] * self.q.vec[1] -
2 * self.q.vec[2] * self.q[3],
2 * self.q.vec[0] * self.q.vec[2] +
2 * self.q.vec[1] * self.q[3]
], [
2 * self.q.vec[0] * self.q.vec[1] +
2 * self.q.vec[2] * self.q[3],
1 - 2 * self.q.vec[0]**2 - 2 * self.q.vec[2]**2,
2 * self.q.vec[1] * self.q.vec[2] -
2 * self.q.vec[0] * self.q[3]
], [
2 * self.q.vec[0] * self.q.vec[2] -
2 * self.q.vec[1] * self.q[3],
2 * self.q.vec[1] * self.q.vec[2] +
2 * self.q.vec[0] * self.q[3],
1 - 2 * self.q.vec[0]**2 - 2 * self.q.vec[1]**2
]]) | returns matrix representation | returns matrix representation | [
"returns",
"matrix",
"representation"
] | def matrix(self):
""" returns matrix representation """
return sympy.Matrix([[
1 - 2 * self.q.vec[1]**2 - 2 * self.q.vec[2]**2,
2 * self.q.vec[0] * self.q.vec[1] -
2 * self.q.vec[2] * self.q[3],
2 * self.q.vec[0] * self.q.vec[2] +
2 * self.q.vec[1] * self.q[3]
], [
2 * self.q.vec[0] * self.q.vec[1] +
2 * self.q.vec[2] * self.q[3],
1 - 2 * self.q.vec[0]**2 - 2 * self.q.vec[2]**2,
2 * self.q.vec[1] * self.q.vec[2] -
2 * self.q.vec[0] * self.q[3]
], [
2 * self.q.vec[0] * self.q.vec[2] -
2 * self.q.vec[1] * self.q[3],
2 * self.q.vec[1] * self.q.vec[2] +
2 * self.q.vec[0] * self.q[3],
1 - 2 * self.q.vec[0]**2 - 2 * self.q.vec[1]**2
]]) | [
"def",
"matrix",
"(",
"self",
")",
":",
"return",
"sympy",
".",
"Matrix",
"(",
"[",
"[",
"1",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"**",
"2",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"**",
"2",
",",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
",",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"+",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
"]",
",",
"[",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"+",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
",",
"1",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"**",
"2",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"**",
"2",
",",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
"]",
",",
"[",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
",",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"2",
"]",
"+",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"*",
"self",
".",
"q",
"[",
"3",
"]",
",",
"1",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"0",
"]",
"**",
"2",
"-",
"2",
"*",
"self",
".",
"q",
".",
"vec",
"[",
"1",
"]",
"**",
"2",
"]",
"]",
")"
] | https://github.com/fabianschenk/RESLAM/blob/2e71a578b6d1a1ad1fb018641218e1f41dd9e330/thirdparty/Sophus/py/sophus/so3.py#L39-L59 | |
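The sympy expression in the `So3.matrix` row above is the standard quaternion-to-rotation-matrix expansion. Restated with plain floats, using the same index convention as the source — `q = (x, y, z, w)` with `vec = (x, y, z)` and `w = q[3]` as the scalar part:

```python
def quat_to_matrix(x, y, z, w):
    # Row-by-row transcription of the So3.matrix entries.
    return [
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ]
```

A quick sanity check on the sign conventions: the identity quaternion `(0, 0, 0, 1)` maps to the identity matrix.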
taichi-dev/taichi | 973c04d6ba40f34e9e3bd5a28ae0ee0802f136a6 | python/taichi/lang/ops.py | python | bit_and | (a, b) | return _binary_operation(_ti_core.expr_bit_and, _bt_ops_mod.and_, a, b) | Compute bitwise-and
Args:
a (Union[:class:`~taichi.lang.expr.Expr`, :class:`~taichi.lang.matrix.Matrix`]): value LHS
b (Union[:class:`~taichi.lang.expr.Expr`, :class:`~taichi.lang.matrix.Matrix`]): value RHS
Returns:
Union[:class:`~taichi.lang.expr.Expr`, bool]: LHS bitwise-and with RHS | Compute bitwise-and | [
"Compute",
"bitwise",
"-",
"and"
] | def bit_and(a, b):
"""Compute bitwise-and
Args:
a (Union[:class:`~taichi.lang.expr.Expr`, :class:`~taichi.lang.matrix.Matrix`]): value LHS
b (Union[:class:`~taichi.lang.expr.Expr`, :class:`~taichi.lang.matrix.Matrix`]): value RHS
Returns:
Union[:class:`~taichi.lang.expr.Expr`, bool]: LHS bitwise-and with RHS
"""
return _binary_operation(_ti_core.expr_bit_and, _bt_ops_mod.and_, a, b) | [
"def",
"bit_and",
"(",
"a",
",",
"b",
")",
":",
"return",
"_binary_operation",
"(",
"_ti_core",
".",
"expr_bit_and",
",",
"_bt_ops_mod",
".",
"and_",
",",
"a",
",",
"b",
")"
] | https://github.com/taichi-dev/taichi/blob/973c04d6ba40f34e9e3bd5a28ae0ee0802f136a6/python/taichi/lang/ops.py#L696-L707 | |
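The `bit_and` row above falls back to a Python-level bitwise-and for values outside Taichi expressions (assumption: `_bt_ops_mod` is the stdlib `operator` module). On plain ints that is just the `&` operator:

```python
import operator

# operator.and_(a, b) is the functional form of a & b.
print(operator.and_(0b1100, 0b1010))  # -> 8 (0b1000)
```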
microsoft/TSS.MSR | 0f2516fca2cd9929c31d5450e39301c9bde43688 | TSS.Py/src/TpmTypes.py | python | TPM2_Certify_REQUEST.inSchemeScheme | (self) | return inScheme.GetUnionSelector() if inScheme else TPM_ALG_ID.NULL | Scheme selector | Scheme selector | [
"Scheme",
"selector"
] | def inSchemeScheme(self): # TPM_ALG_ID
""" Scheme selector """
return inScheme.GetUnionSelector() if inScheme else TPM_ALG_ID.NULL | [
"def",
"inSchemeScheme",
"(",
"self",
")",
":",
"# TPM_ALG_ID",
"return",
"inScheme",
".",
"GetUnionSelector",
"(",
")",
"if",
"inScheme",
"else",
"TPM_ALG_ID",
".",
"NULL"
] | https://github.com/microsoft/TSS.MSR/blob/0f2516fca2cd9929c31d5450e39301c9bde43688/TSS.Py/src/TpmTypes.py#L12411-L12413 | |
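The `inSchemeScheme` row above is the TPM union-selector idiom: report the scheme's own selector, or the NULL algorithm ID when no scheme is set. A stand-alone sketch — the class is an illustrative stand-in, and `0x0010` is TPM's `TPM_ALG_NULL` value (stated here from the spec, not from the row):

```python
ALG_NULL = 0x0010  # TPM_ALG_NULL

class Scheme:
    def __init__(self, selector):
        self._selector = selector

    def GetUnionSelector(self):
        return self._selector

def scheme_selector(in_scheme):
    # Mirrors: inScheme.GetUnionSelector() if inScheme else TPM_ALG_ID.NULL
    return in_scheme.GetUnionSelector() if in_scheme else ALG_NULL
```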
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/x86/toolchain/lib/python2.7/plat-mac/findertools.py | python | _setlocation | (object_alias, (x, y)) | return (x,y) | _setlocation: Set the location of the icon for the object. | _setlocation: Set the location of the icon for the object. | [
"_setlocation",
":",
"Set",
"the",
"location",
"of",
"the",
"icon",
"for",
"the",
"object",
"."
] | def _setlocation(object_alias, (x, y)):
"""_setlocation: Set the location of the icon for the object."""
finder = _getfinder()
args = {}
attrs = {}
aeobj_00 = aetypes.ObjectSpecifier(want=aetypes.Type('cfol'), form="alis", seld=object_alias, fr=None)
aeobj_01 = aetypes.ObjectSpecifier(want=aetypes.Type('prop'), form="prop", seld=aetypes.Type('posn'), fr=aeobj_00)
args['----'] = aeobj_01
args["data"] = [x, y]
_reply, args, attrs = finder.send("core", "setd", args, attrs)
if 'errn' in args:
raise Error, aetools.decodeerror(args)
return (x,y) | [
"def",
"_setlocation",
"(",
"object_alias",
",",
"(",
"x",
",",
"y",
")",
")",
":",
"finder",
"=",
"_getfinder",
"(",
")",
"args",
"=",
"{",
"}",
"attrs",
"=",
"{",
"}",
"aeobj_00",
"=",
"aetypes",
".",
"ObjectSpecifier",
"(",
"want",
"=",
"aetypes",
".",
"Type",
"(",
"'cfol'",
")",
",",
"form",
"=",
"\"alis\"",
",",
"seld",
"=",
"object_alias",
",",
"fr",
"=",
"None",
")",
"aeobj_01",
"=",
"aetypes",
".",
"ObjectSpecifier",
"(",
"want",
"=",
"aetypes",
".",
"Type",
"(",
"'prop'",
")",
",",
"form",
"=",
"\"prop\"",
",",
"seld",
"=",
"aetypes",
".",
"Type",
"(",
"'posn'",
")",
",",
"fr",
"=",
"aeobj_00",
")",
"args",
"[",
"'----'",
"]",
"=",
"aeobj_01",
"args",
"[",
"\"data\"",
"]",
"=",
"[",
"x",
",",
"y",
"]",
"_reply",
",",
"args",
",",
"attrs",
"=",
"finder",
".",
"send",
"(",
"\"core\"",
",",
"\"setd\"",
",",
"args",
",",
"attrs",
")",
"if",
"'errn'",
"in",
"args",
":",
"raise",
"Error",
",",
"aetools",
".",
"decodeerror",
"(",
"args",
")",
"return",
"(",
"x",
",",
"y",
")"
] | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/plat-mac/findertools.py#L303-L315 | |
BlzFans/wke | b0fa21158312e40c5fbd84682d643022b6c34a93 | cygwin/lib/python2.6/fnmatch.py | python | fnmatchcase | (name, pat) | return _cache[pat].match(name) is not None | Test whether FILENAME matches PATTERN, including case.
This is a version of fnmatch() which doesn't case-normalize
its arguments. | Test whether FILENAME matches PATTERN, including case. | [
"Test",
"whether",
"FILENAME",
"matches",
"PATTERN",
"including",
"case",
"."
] | def fnmatchcase(name, pat):
"""Test whether FILENAME matches PATTERN, including case.
This is a version of fnmatch() which doesn't case-normalize
its arguments.
"""
if not pat in _cache:
res = translate(pat)
_cache[pat] = re.compile(res)
return _cache[pat].match(name) is not None | [
"def",
"fnmatchcase",
"(",
"name",
",",
"pat",
")",
":",
"if",
"not",
"pat",
"in",
"_cache",
":",
"res",
"=",
"translate",
"(",
"pat",
")",
"_cache",
"[",
"pat",
"]",
"=",
"re",
".",
"compile",
"(",
"res",
")",
"return",
"_cache",
"[",
"pat",
"]",
".",
"match",
"(",
"name",
")",
"is",
"not",
"None"
] | https://github.com/BlzFans/wke/blob/b0fa21158312e40c5fbd84682d643022b6c34a93/cygwin/lib/python2.6/fnmatch.py#L60-L70 | |
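The row above documents the stdlib's `fnmatch.fnmatchcase`, the variant that matches case-sensitively and skips OS-specific normalization; a quick demonstration:

```python
import fnmatch

assert fnmatch.fnmatchcase("report.TXT", "*.TXT")      # same case: matches
assert not fnmatch.fnmatchcase("report.TXT", "*.txt")  # case differs: no match
# Plain fnmatch.fnmatch(), by contrast, case-normalizes both arguments on
# case-insensitive platforms before comparing.
```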
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib-tk/Tkinter.py | python | Misc.winfo_id | (self) | return self.tk.getint(
self.tk.call('winfo', 'id', self._w)) | Return identifier ID for this widget. | Return identifier ID for this widget. | [
"Return",
"identifier",
"ID",
"for",
"this",
"widget",
"."
] | def winfo_id(self):
"""Return identifier ID for this widget."""
return self.tk.getint(
self.tk.call('winfo', 'id', self._w)) | [
"def",
"winfo_id",
"(",
"self",
")",
":",
"return",
"self",
".",
"tk",
".",
"getint",
"(",
"self",
".",
"tk",
".",
"call",
"(",
"'winfo'",
",",
"'id'",
",",
"self",
".",
"_w",
")",
")"
] | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib-tk/Tkinter.py#L783-L786 | |
shoaibrayeen/Programmers-Community | 1d352fb3e6ac5e2e7d9472d90527bdcc8d5ec355 | Bit Algorithm/Count Number of Set Bits in A Number/SolutionByTanmay.py | python | counSetBits | (num) | return st,st.count('1') | Objective : To Count The Number Of Set Bits In Binary Representation Of A Given Number
Input : A Decimal Number - Numeric Value
Return : No. Of Set Bits - Numeric Value | Objective : To Count The Number Of Set Bits In Binary Representation Of A Given Number
Input : A Decimal Number - Numeric Value
Return : No. Of Set Bits - Numeric Value | [
"Objective",
":",
"To",
"Count",
"The",
"Number",
"Of",
"Set",
"Bits",
"In",
"Binary",
"Representation",
"Of",
"A",
"Given",
"Number",
"Input",
":",
"A",
"Decimal",
"Number",
"-",
"Numeric",
"Value",
"Return",
":",
"No",
".",
"Of",
"Set",
"Bits",
"-",
"Numeric",
"Value"
] | def counSetBits(num):
'''
Objective : To Count The Number Of Set Bits In Binary Representation Of A Given Number
Input : A Decimal Number - Numeric Value
Return : No. Of Set Bits - Numeric Value
'''
if num == 0:
return 0, 0
st = ""
while num!=1:
if num%2==0:
r = num%2
num = (num+r)//2
st+=str(r)
else:
r = num%2
num = ((num + r)//2 - 1)
st+=str(r)
st += str(num)
st = st[::-1]
return st,st.count('1') | [
"def",
"counSetBits",
"(",
"num",
")",
":",
"if",
"num",
"==",
"0",
":",
"return",
"0",
",",
"0",
"st",
"=",
"\"\"",
"while",
"num",
"!=",
"1",
":",
"if",
"num",
"%",
"2",
"==",
"0",
":",
"r",
"=",
"num",
"%",
"2",
"num",
"=",
"(",
"num",
"+",
"r",
")",
"//",
"2",
"st",
"+=",
"str",
"(",
"r",
")",
"else",
":",
"r",
"=",
"num",
"%",
"2",
"num",
"=",
"(",
"(",
"num",
"+",
"r",
")",
"//",
"2",
"-",
"1",
")",
"st",
"+=",
"str",
"(",
"r",
")",
"st",
"+=",
"str",
"(",
"num",
")",
"st",
"=",
"st",
"[",
":",
":",
"-",
"1",
"]",
"return",
"st",
",",
"st",
".",
"count",
"(",
"'1'",
")"
] | https://github.com/shoaibrayeen/Programmers-Community/blob/1d352fb3e6ac5e2e7d9472d90527bdcc8d5ec355/Bit Algorithm/Count Number of Set Bits in A Number/SolutionByTanmay.py#L1-L24 | |
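The row above counts set bits by hand-building the binary string digit by digit. For comparison, the usual idiom is Kernighan's `n & (n - 1)` loop, which clears one set bit per iteration; a sketch checked against `bin(n).count('1')`:

```python
def popcount(n):
    """Count set bits by repeatedly clearing the lowest one (Kernighan)."""
    count = 0
    while n:
        n &= n - 1  # clears the lowest set bit
        count += 1
    return count

for value in (0, 1, 7, 10, 255, 1024):
    assert popcount(value) == bin(value).count("1")
```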
AojunZhou/Incremental-Network-Quantization | c7f6a609d5817d8424ce224209cf4c50f1e4de50 | python/caffe/draw.py | python | choose_color_by_layertype | (layertype) | return color | Define colors for nodes based on the layer type. | Define colors for nodes based on the layer type. | [
"Define",
"colors",
"for",
"nodes",
"based",
"on",
"the",
"layer",
"type",
"."
] | def choose_color_by_layertype(layertype):
"""Define colors for nodes based on the layer type.
"""
color = '#6495ED' # Default
if layertype == 'Convolution' or layertype == 'Deconvolution':
color = '#FF5050'
elif layertype == 'Pooling':
color = '#FF9900'
elif layertype == 'InnerProduct':
color = '#CC33FF'
return color | [
"def",
"choose_color_by_layertype",
"(",
"layertype",
")",
":",
"color",
"=",
"'#6495ED'",
"# Default",
"if",
"layertype",
"==",
"'Convolution'",
"or",
"layertype",
"==",
"'Deconvolution'",
":",
"color",
"=",
"'#FF5050'",
"elif",
"layertype",
"==",
"'Pooling'",
":",
"color",
"=",
"'#FF9900'",
"elif",
"layertype",
"==",
"'InnerProduct'",
":",
"color",
"=",
"'#CC33FF'",
"return",
"color"
] | https://github.com/AojunZhou/Incremental-Network-Quantization/blob/c7f6a609d5817d8424ce224209cf4c50f1e4de50/python/caffe/draw.py#L117-L127 | |
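The `elif` cascade in the row above maps layer types to fixed colors; the same behavior can be expressed as a dict lookup with a default. A behavior-preserving sketch using the colors from the row:

```python
LAYER_COLORS = {
    'Convolution': '#FF5050',
    'Deconvolution': '#FF5050',
    'Pooling': '#FF9900',
    'InnerProduct': '#CC33FF',
}

def choose_color_by_layertype(layertype):
    """Same colors as the row above, expressed as a lookup with a default."""
    return LAYER_COLORS.get(layertype, '#6495ED')

assert choose_color_by_layertype('Pooling') == '#FF9900'
assert choose_color_by_layertype('ReLU') == '#6495ED'  # default color
```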
mindspore-ai/mindspore | fb8fd3338605bb34fa5cea054e535a8b1d753fab | mindspore/python/mindspore/dataset/engine/validators.py | python | check_gnn_graphdata | (method) | return new_method | check the input arguments of graphdata. | check the input arguments of graphdata. | [
"check",
"the",
"input",
"arguments",
"of",
"graphdata",
"."
] | def check_gnn_graphdata(method):
"""check the input arguments of graphdata."""
@wraps(method)
def new_method(self, *args, **kwargs):
[dataset_file, num_parallel_workers, working_mode, hostname,
port, num_client, auto_shutdown], _ = parse_user_args(method, *args, **kwargs)
check_file(dataset_file)
if num_parallel_workers is not None:
check_num_parallel_workers(num_parallel_workers)
type_check(hostname, (str,), "hostname")
if check_hostname(hostname) is False:
raise ValueError("The hostname is illegal")
type_check(working_mode, (str,), "working_mode")
if working_mode not in {'local', 'client', 'server'}:
raise ValueError("Invalid working mode, please enter 'local', 'client' or 'server'.")
type_check(port, (int,), "port")
check_value(port, (1024, 65535), "port")
type_check(num_client, (int,), "num_client")
check_value(num_client, (1, 255), "num_client")
type_check(auto_shutdown, (bool,), "auto_shutdown")
return method(self, *args, **kwargs)
return new_method | [
"def",
"check_gnn_graphdata",
"(",
"method",
")",
":",
"@",
"wraps",
"(",
"method",
")",
"def",
"new_method",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"[",
"dataset_file",
",",
"num_parallel_workers",
",",
"working_mode",
",",
"hostname",
",",
"port",
",",
"num_client",
",",
"auto_shutdown",
"]",
",",
"_",
"=",
"parse_user_args",
"(",
"method",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"check_file",
"(",
"dataset_file",
")",
"if",
"num_parallel_workers",
"is",
"not",
"None",
":",
"check_num_parallel_workers",
"(",
"num_parallel_workers",
")",
"type_check",
"(",
"hostname",
",",
"(",
"str",
",",
")",
",",
"\"hostname\"",
")",
"if",
"check_hostname",
"(",
"hostname",
")",
"is",
"False",
":",
"raise",
"ValueError",
"(",
"\"The hostname is illegal\"",
")",
"type_check",
"(",
"working_mode",
",",
"(",
"str",
",",
")",
",",
"\"working_mode\"",
")",
"if",
"working_mode",
"not",
"in",
"{",
"'local'",
",",
"'client'",
",",
"'server'",
"}",
":",
"raise",
"ValueError",
"(",
"\"Invalid working mode, please enter 'local', 'client' or 'server'.\"",
")",
"type_check",
"(",
"port",
",",
"(",
"int",
",",
")",
",",
"\"port\"",
")",
"check_value",
"(",
"port",
",",
"(",
"1024",
",",
"65535",
")",
",",
"\"port\"",
")",
"type_check",
"(",
"num_client",
",",
"(",
"int",
",",
")",
",",
"\"num_client\"",
")",
"check_value",
"(",
"num_client",
",",
"(",
"1",
",",
"255",
")",
",",
"\"num_client\"",
")",
"type_check",
"(",
"auto_shutdown",
",",
"(",
"bool",
",",
")",
",",
"\"auto_shutdown\"",
")",
"return",
"method",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"return",
"new_method"
] | https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/dataset/engine/validators.py#L1563-L1586 | |
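The validator in the row above follows a common shape: a decorator that unpacks the call's arguments, type- and range-checks each one, and only then delegates to the wrapped method, preserving metadata with `functools.wraps`. A stripped-down sketch of that shape; the `check_port` decorator and `Server` class are made up for illustration, standing in for mindspore's `parse_user_args`/`type_check`/`check_value` helpers:

```python
from functools import wraps

def check_port(method):
    """Validate the port argument before delegating to the wrapped method."""
    @wraps(method)
    def new_method(self, port, *args, **kwargs):
        if not isinstance(port, int):
            raise TypeError("port must be an int")
        if not 1024 <= port <= 65535:
            raise ValueError("port out of range (1024, 65535)")
        return method(self, port, *args, **kwargs)
    return new_method

class Server:
    @check_port
    def connect(self, port):
        return "connected:%d" % port

assert Server().connect(8080) == "connected:8080"
assert Server.connect.__name__ == "connect"  # wraps preserves metadata
```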
hpi-xnor/BMXNet-v2 | af2b1859eafc5c721b1397cef02f946aaf2ce20d | python/mxnet/symbol/symbol.py | python | Symbol.eval | (self, ctx=None, **kwargs) | return self.bind(ctx, kwargs).forward() | Evaluates a symbol given arguments.
The `eval` method combines a call to `bind` (which returns an executor)
with a call to `forward` (executor method).
For the common use case, where you might repeatedly evaluate with same arguments,
eval is slow.
In that case, you should call `bind` once and then repeatedly call forward.
This function allows simpler syntax for less cumbersome introspection.
Example
-------
>>> a = mx.sym.Variable('a')
>>> b = mx.sym.Variable('b')
>>> c = a + b
>>> ex = c.eval(ctx = mx.cpu(), a = mx.nd.ones([2,3]), b = mx.nd.ones([2,3]))
>>> ex
[<NDArray 2x3 @cpu(0)>]
>>> ex[0].asnumpy()
array([[ 2., 2., 2.],
[ 2., 2., 2.]], dtype=float32)
Parameters
----------
ctx : Context
The device context the generated executor to run on.
kwargs : Keyword arguments of type `NDArray`
Input arguments to the symbol. All the arguments must be provided.
Returns
----------
result : a list of NDArrays corresponding to the values taken by each symbol when
evaluated on given args. When called on a single symbol (not a group),
the result will be a list with one element. | Evaluates a symbol given arguments. | [
"Evaluates",
"a",
"symbol",
"given",
"arguments",
"."
] | def eval(self, ctx=None, **kwargs):
"""Evaluates a symbol given arguments.
The `eval` method combines a call to `bind` (which returns an executor)
with a call to `forward` (executor method).
For the common use case, where you might repeatedly evaluate with same arguments,
eval is slow.
In that case, you should call `bind` once and then repeatedly call forward.
This function allows simpler syntax for less cumbersome introspection.
Example
-------
>>> a = mx.sym.Variable('a')
>>> b = mx.sym.Variable('b')
>>> c = a + b
>>> ex = c.eval(ctx = mx.cpu(), a = mx.nd.ones([2,3]), b = mx.nd.ones([2,3]))
>>> ex
[<NDArray 2x3 @cpu(0)>]
>>> ex[0].asnumpy()
array([[ 2., 2., 2.],
[ 2., 2., 2.]], dtype=float32)
Parameters
----------
ctx : Context
The device context the generated executor to run on.
kwargs : Keyword arguments of type `NDArray`
Input arguments to the symbol. All the arguments must be provided.
Returns
----------
result : a list of NDArrays corresponding to the values taken by each symbol when
evaluated on given args. When called on a single symbol (not a group),
the result will be a list with one element.
"""
if ctx is None:
ctx = current_context()
return self.bind(ctx, kwargs).forward() | [
"def",
"eval",
"(",
"self",
",",
"ctx",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"ctx",
"is",
"None",
":",
"ctx",
"=",
"current_context",
"(",
")",
"return",
"self",
".",
"bind",
"(",
"ctx",
",",
"kwargs",
")",
".",
"forward",
"(",
")"
] | https://github.com/hpi-xnor/BMXNet-v2/blob/af2b1859eafc5c721b1397cef02f946aaf2ce20d/python/mxnet/symbol/symbol.py#L1838-L1876 | |
gem5/gem5 | 141cc37c2d4b93959d4c249b8f7e6a8b2ef75338 | src/python/gem5/components/cachehierarchies/ruby/caches/abstract_dma_controller.py | python | AbstractDMAController.connectQueues | (self, network) | Connect all of the queues for this controller. | Connect all of the queues for this controller. | [
"Connect",
"all",
"of",
"the",
"queues",
"for",
"this",
"controller",
"."
] | def connectQueues(self, network):
"""Connect all of the queues for this controller."""
raise NotImplementedError | [
"def",
"connectQueues",
"(",
"self",
",",
"network",
")",
":",
"raise",
"NotImplementedError"
] | https://github.com/gem5/gem5/blob/141cc37c2d4b93959d4c249b8f7e6a8b2ef75338/src/python/gem5/components/cachehierarchies/ruby/caches/abstract_dma_controller.py#L48-L50 | ||
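The row above marks `connectQueues` as abstract by raising `NotImplementedError` at call time; the stdlib's `abc.abstractmethod` enforces the same contract earlier, at instantiation time. A gem5-free sketch (`MeshDMAController` is a made-up subclass for illustration):

```python
from abc import ABC, abstractmethod

class AbstractDMAController(ABC):
    @abstractmethod
    def connectQueues(self, network):
        """Connect all of the queues for this controller."""

class MeshDMAController(AbstractDMAController):
    def connectQueues(self, network):
        return "connected to %s" % network

try:
    AbstractDMAController()  # abstract: instantiation fails with TypeError
    raise AssertionError("should not be instantiable")
except TypeError:
    pass

assert MeshDMAController().connectQueues("mesh") == "connected to mesh"
```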
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/lib/agw/supertooltip.py | python | SuperToolTip.GetMessage | (self) | return self._message | Returns the main body message in :class:`SuperToolTip`. | Returns the main body message in :class:`SuperToolTip`. | [
"Returns",
"the",
"main",
"body",
"message",
"in",
":",
"class",
":",
"SuperToolTip",
"."
] | def GetMessage(self):
""" Returns the main body message in :class:`SuperToolTip`. """
return self._message | [
"def",
"GetMessage",
"(",
"self",
")",
":",
"return",
"self",
".",
"_message"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/agw/supertooltip.py#L1171-L1174 | |
MythTV/mythtv | d282a209cb8be85d036f85a62a8ec971b67d45f4 | mythtv/programs/scripts/metadata/Music/musicbrainzngs/musicbrainz.py | python | get_releases_in_collection | (collection, limit=None, offset=None) | return _do_mb_query("collection", "%s/releases" % collection, [], params) | List the releases in a collection.
Returns a dict with a 'collection' key, which again has a 'release-list'.
See `Browsing`_ for how to use `limit` and `offset`. | List the releases in a collection.
Returns a dict with a 'collection' key, which again has a 'release-list'. | [
"List",
"the",
"releases",
"in",
"a",
"collection",
".",
"Returns",
"a",
"dict",
"with",
"a",
"collection",
"key",
"which",
"again",
"has",
"a",
"release",
"-",
"list",
"."
] | def get_releases_in_collection(collection, limit=None, offset=None):
"""List the releases in a collection.
Returns a dict with a 'collection' key, which again has a 'release-list'.
See `Browsing`_ for how to use `limit` and `offset`.
"""
params = {}
if limit: params["limit"] = limit
if offset: params["offset"] = offset
return _do_mb_query("collection", "%s/releases" % collection, [], params) | [
"def",
"get_releases_in_collection",
"(",
"collection",
",",
"limit",
"=",
"None",
",",
"offset",
"=",
"None",
")",
":",
"params",
"=",
"{",
"}",
"if",
"limit",
":",
"params",
"[",
"\"limit\"",
"]",
"=",
"limit",
"if",
"offset",
":",
"params",
"[",
"\"offset\"",
"]",
"=",
"offset",
"return",
"_do_mb_query",
"(",
"\"collection\"",
",",
"\"%s/releases\"",
"%",
"collection",
",",
"[",
"]",
",",
"params",
")"
] | https://github.com/MythTV/mythtv/blob/d282a209cb8be85d036f85a62a8ec971b67d45f4/mythtv/programs/scripts/metadata/Music/musicbrainzngs/musicbrainz.py#L1185-L1194 | |
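The row above assembles query parameters with truthiness tests (`if limit:`), which would silently drop a legitimate `limit=0` or `offset=0`; filtering on `is not None` is the safer idiom. A self-contained sketch of the parameter-building step only (no MusicBrainz calls):

```python
def build_params(limit=None, offset=None):
    """Collect only the parameters the caller actually supplied."""
    candidates = {"limit": limit, "offset": offset}
    return {k: v for k, v in candidates.items() if v is not None}

assert build_params() == {}
assert build_params(limit=25) == {"limit": 25}
assert build_params(limit=25, offset=0) == {"limit": 25, "offset": 0}  # 0 survives
```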
kungfu-origin/kungfu | 90c84b2b590855654cb9a6395ed050e0f7763512 | core/deps/SQLiteCpp-2.3.0/cpplint.py | python | Search | (pattern, s) | return _regexp_compile_cache[pattern].search(s) | Searches the string for the pattern, caching the compiled regexp. | Searches the string for the pattern, caching the compiled regexp. | [
"Searches",
"the",
"string",
"for",
"the",
"pattern",
"caching",
"the",
"compiled",
"regexp",
"."
] | def Search(pattern, s):
"""Searches the string for the pattern, caching the compiled regexp."""
if pattern not in _regexp_compile_cache:
_regexp_compile_cache[pattern] = sre_compile.compile(pattern)
return _regexp_compile_cache[pattern].search(s) | [
"def",
"Search",
"(",
"pattern",
",",
"s",
")",
":",
"if",
"pattern",
"not",
"in",
"_regexp_compile_cache",
":",
"_regexp_compile_cache",
"[",
"pattern",
"]",
"=",
"sre_compile",
".",
"compile",
"(",
"pattern",
")",
"return",
"_regexp_compile_cache",
"[",
"pattern",
"]",
".",
"search",
"(",
"s",
")"
] | https://github.com/kungfu-origin/kungfu/blob/90c84b2b590855654cb9a6395ed050e0f7763512/core/deps/SQLiteCpp-2.3.0/cpplint.py#L531-L535 | |
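The row above memoizes compiled regexps in a module-level dict; `functools.lru_cache` over `re.compile` expresses the same cache in one line. A standalone sketch:

```python
import re
from functools import lru_cache

@lru_cache(maxsize=None)
def _compile(pattern):
    return re.compile(pattern)

def Search(pattern, s):
    """Searches the string for the pattern, caching the compiled regexp."""
    return _compile(pattern).search(s)

assert Search(r"\d+", "abc123") is not None
assert _compile(r"\d+") is _compile(r"\d+")  # cached object is reused
```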
miyosuda/TensorFlowAndroidDemo | 35903e0221aa5f109ea2dbef27f20b52e317f42d | jni-build/jni/include/tensorflow/python/framework/ops.py | python | Graph.colocate_with | (self, op, ignore_existing=False) | Returns a context manager that specifies an op to colocate with.
Note: this function is not for public use, only for internal libraries.
For example:
```python
a = tf.Variable([1.0])
with g.colocate_with(a):
b = tf.constant(1.0)
c = tf.add(a, b)
```
`b` and `c` will always be colocated with `a`, no matter where `a`
is eventually placed.
Args:
op: The op to colocate all created ops with.
ignore_existing: If true, only applies colocation of this op within
the context, rather than applying all colocation properties
on the stack.
Raises:
ValueError: if op is None.
Yields:
A context manager that specifies the op with which to colocate
newly created ops. | Returns a context manager that specifies an op to colocate with. | [
"Returns",
"a",
"context",
"manager",
"that",
"specifies",
"an",
"op",
"to",
"colocate",
"with",
"."
] | def colocate_with(self, op, ignore_existing=False):
"""Returns a context manager that specifies an op to colocate with.
Note: this function is not for public use, only for internal libraries.
For example:
```python
a = tf.Variable([1.0])
with g.colocate_with(a):
b = tf.constant(1.0)
c = tf.add(a, b)
```
`b` and `c` will always be colocated with `a`, no matter where `a`
is eventually placed.
Args:
op: The op to colocate all created ops with.
ignore_existing: If true, only applies colocation of this op within
the context, rather than applying all colocation properties
on the stack.
Raises:
ValueError: if op is None.
Yields:
A context manager that specifies the op with which to colocate
newly created ops.
"""
if op is None:
raise ValueError("Tried to colocate with None")
if not isinstance(op, Operation):
# We always want to colocate with the reference op.
op = convert_to_tensor_or_indexed_slices(op, as_ref=True).op
# By default, colocate_with resets the device function stack,
# since colocate_with is typically used in specific internal
# library functions where colocation is intended to be "stronger"
# than device functions.
#
# In the future, a caller may specify that device_functions win
# over colocation, in which case we can add support.
device_fn_tmp = self._device_function_stack
self._device_function_stack = []
if ignore_existing:
current_stack = self._colocation_stack
self._colocation_stack = []
self._colocation_stack.append(op)
try:
yield
finally:
# Restore device function stack
self._device_function_stack = device_fn_tmp
self._colocation_stack.pop()
# Reset the colocation stack if requested.
if ignore_existing:
self._colocation_stack = current_stack | [
"def",
"colocate_with",
"(",
"self",
",",
"op",
",",
"ignore_existing",
"=",
"False",
")",
":",
"if",
"op",
"is",
"None",
":",
"raise",
"ValueError",
"(",
"\"Tried to colocate with None\"",
")",
"if",
"not",
"isinstance",
"(",
"op",
",",
"Operation",
")",
":",
"# We always want to colocate with the reference op.",
"op",
"=",
"convert_to_tensor_or_indexed_slices",
"(",
"op",
",",
"as_ref",
"=",
"True",
")",
".",
"op",
"# By default, colocate_with resets the device function stack,",
"# since colocate_with is typically used in specific internal",
"# library functions where colocation is intended to be \"stronger\"",
"# than device functions.",
"#",
"# In the future, a caller may specify that device_functions win",
"# over colocation, in which case we can add support.",
"device_fn_tmp",
"=",
"self",
".",
"_device_function_stack",
"self",
".",
"_device_function_stack",
"=",
"[",
"]",
"if",
"ignore_existing",
":",
"current_stack",
"=",
"self",
".",
"_colocation_stack",
"self",
".",
"_colocation_stack",
"=",
"[",
"]",
"self",
".",
"_colocation_stack",
".",
"append",
"(",
"op",
")",
"try",
":",
"yield",
"finally",
":",
"# Restore device function stack",
"self",
".",
"_device_function_stack",
"=",
"device_fn_tmp",
"self",
".",
"_colocation_stack",
".",
"pop",
"(",
")",
"# Reset the colocation stack if requested.",
"if",
"ignore_existing",
":",
"self",
".",
"_colocation_stack",
"=",
"current_stack"
] | https://github.com/miyosuda/TensorFlowAndroidDemo/blob/35903e0221aa5f109ea2dbef27f20b52e317f42d/jni-build/jni/include/tensorflow/python/framework/ops.py#L2873-L2936 | ||
francinexue/xuefu | b6ff79747a42e020588c0c0a921048e08fe4680c | api/stock/histmd/to_mongodb_md/k_min_dao.py | python | KminDbCache._update_cache | (self,collection_name = 'k_data_5',code=None,ktype='5') | | return update_one_code(code,collection_name) | :return: updates 5-minute K-line data by default, caching one code at a time
start='2015-01-01' | :return: updates 5-minute K-line data by default, caching one code at a time
start='2015-01-01' | [
":",
"return",
":",
"updates 5-minute K-line data by default, caching one code at a time",
"start",
"=",
"2015",
"-",
"01",
"-",
"01"
] | def _update_cache(self,collection_name = 'k_data_5',code=None,ktype='5'):
"""
:return: updates 5-minute K-line data by default, caching one code at a time
start='2015-01-01'
"""
date_fmt = '%Y-%m-%d %H:%M:%S'
today = ct._TODAY_
client = MongoClient(ct._MONGODB_ENGIN_)
def update_one_code(code,collection_name):
ktype = collection_name[7:]
cursor = client[ct._MONGODB_DATABASE_][collection_name].find({"code": code}).sort([("date", -1)]).limit(1) # descending order
if cursor.count() <= 0:
print("%s is not in the db" % (code))
'''re-collect the data and insert it'''
start_date = '2015-01-01'
end_date = today
if start_date is None or start_date == end_date:
return
df = self.gm.get_k_data(code, start_date, end_date,ktype=ktype)
else:
print ("%s have data in the db" % (code)), cursor[0]
start_date = cursor[0]['date']
end_date = today
df = self.gm.get_k_data(code, start_date, end_date,ktype=ktype)
if isinstance(df, pd.DataFrame) and not df.empty:
print("insert data to db of code:%s\n" % (code))
# print json.loads(df.to_json(orient='records'))
client[ct._MONGODB_DATABASE_][collection_name].insert(json.loads(df.to_json(orient='records')))
return df
return update_one_code(code,collection_name)
print 'update is done' | [
"def",
"_update_cache",
"(",
"self",
",",
"collection_name",
"=",
"'k_data_5'",
",",
"code",
"=",
"None",
",",
"ktype",
"=",
"'5'",
")",
":",
"date_fmt",
"=",
"'%Y-%m-%d %H:%M:%S'",
"today",
"=",
"ct",
".",
"_TODAY_",
"client",
"=",
"MongoClient",
"(",
"ct",
".",
"_MONGODB_ENGIN_",
")",
"def",
"update_one_code",
"(",
"code",
",",
"collection_name",
")",
":",
"ktype",
"=",
"collection_name",
"[",
"7",
":",
"]",
"cursor",
"=",
"client",
"[",
"ct",
".",
"_MONGODB_DATABASE_",
"]",
"[",
"collection_name",
"]",
".",
"find",
"(",
"{",
"\"code\"",
":",
"code",
"}",
")",
".",
"sort",
"(",
"[",
"(",
"\"date\"",
",",
"-",
"1",
")",
"]",
")",
".",
"limit",
"(",
"1",
")",
"# descending order",
"if",
"cursor",
".",
"count",
"(",
")",
"<=",
"0",
":",
"print",
"(",
"\"%s is not in the db\"",
"%",
"(",
"code",
")",
")",
"'''re-collect the data and insert it'''",
"start_date",
"=",
"'2015-01-01'",
"end_date",
"=",
"today",
"if",
"start_date",
"is",
"None",
"or",
"start_date",
"==",
"end_date",
":",
"return",
"df",
"=",
"self",
".",
"gm",
".",
"get_k_data",
"(",
"code",
",",
"start_date",
",",
"end_date",
",",
"ktype",
"=",
"ktype",
")",
"else",
":",
"print",
"(",
"\"%s have data in the db\"",
"%",
"(",
"code",
")",
")",
",",
"cursor",
"[",
"0",
"]",
"start_date",
"=",
"cursor",
"[",
"0",
"]",
"[",
"'date'",
"]",
"end_date",
"=",
"today",
"df",
"=",
"self",
".",
"gm",
".",
"get_k_data",
"(",
"code",
",",
"start_date",
",",
"end_date",
",",
"ktype",
"=",
"ktype",
")",
"if",
"isinstance",
"(",
"df",
",",
"pd",
".",
"DataFrame",
")",
"and",
"not",
"df",
".",
"empty",
":",
"print",
"(",
"\"insert data to db of code:%s\\n\"",
"%",
"(",
"code",
")",
")",
"# print json.loads(df.to_json(orient='records'))",
"client",
"[",
"ct",
".",
"_MONGODB_DATABASE_",
"]",
"[",
"collection_name",
"]",
".",
"insert",
"(",
"json",
".",
"loads",
"(",
"df",
".",
"to_json",
"(",
"orient",
"=",
"'records'",
")",
")",
")",
"return",
"df",
"return",
"update_one_code",
"(",
"code",
",",
"collection_name",
")",
"print",
"'update is done'"
] | https://github.com/francinexue/xuefu/blob/b6ff79747a42e020588c0c0a921048e08fe4680c/api/stock/histmd/to_mongodb_md/k_min_dao.py#L50-L83 | |
giuspen/cherrytree | 84712f206478fcf9acf30174009ad28c648c6344 | pygtk2/modules/core.py | python | CherryTree.apply_tag_exist_or_create | (self, tag_property, property_value) | return str(tag_name) | Check into the Tags Table whether the Tag Exists, if Not Creates it | Check into the Tags Table whether the Tag Exists, if Not Creates it | [
"Check",
"into",
"the",
"Tags",
"Table",
"whether",
"the",
"Tag",
"Exists",
"if",
"Not",
"Creates",
"it"
] | def apply_tag_exist_or_create(self, tag_property, property_value):
"""Check into the Tags Table whether the Tag Exists, if Not Creates it"""
if property_value == "large": property_value = cons.TAG_PROP_H1
elif property_value == "largo": property_value = cons.TAG_PROP_H2
tag_name = tag_property + "_" + property_value
tag = self.tag_table.lookup(str(tag_name))
if tag == None:
tag = gtk.TextTag(str(tag_name))
if property_value == cons.TAG_PROP_HEAVY: tag.set_property(tag_property, pango.WEIGHT_HEAVY)
elif property_value == cons.TAG_PROP_SMALL: tag.set_property(tag_property, pango.SCALE_SMALL)
elif property_value == cons.TAG_PROP_H1: tag.set_property(tag_property, pango.SCALE_XX_LARGE)
elif property_value == cons.TAG_PROP_H2: tag.set_property(tag_property, pango.SCALE_X_LARGE)
elif property_value == cons.TAG_PROP_H3: tag.set_property(tag_property, pango.SCALE_LARGE)
elif property_value == cons.TAG_PROP_ITALIC: tag.set_property(tag_property, pango.STYLE_ITALIC)
elif property_value == cons.TAG_PROP_SINGLE: tag.set_property(tag_property, pango.UNDERLINE_SINGLE)
elif property_value == cons.TAG_PROP_TRUE: tag.set_property(tag_property, True)
elif property_value == cons.TAG_PROP_LEFT: tag.set_property(tag_property, gtk.JUSTIFY_LEFT)
elif property_value == cons.TAG_PROP_RIGHT: tag.set_property(tag_property, gtk.JUSTIFY_RIGHT)
elif property_value == cons.TAG_PROP_CENTER: tag.set_property(tag_property, gtk.JUSTIFY_CENTER)
elif property_value == cons.TAG_PROP_FILL: tag.set_property(tag_property, gtk.JUSTIFY_FILL)
elif property_value == cons.TAG_PROP_MONOSPACE:
tag.set_property(tag_property, property_value)
if self.monospace_bg:
tag.set_property(cons.TAG_BACKGROUND, self.monospace_bg)
elif property_value == cons.TAG_PROP_SUB:
tag.set_property(cons.TAG_SCALE, pango.SCALE_X_SMALL)
rise = pango.FontDescription(self.rt_font).get_size() / -4
tag.set_property("rise", rise)
elif property_value == cons.TAG_PROP_SUP:
tag.set_property(cons.TAG_SCALE, pango.SCALE_X_SMALL)
rise = pango.FontDescription(self.rt_font).get_size() / 2
tag.set_property("rise", rise)
elif property_value[0:4] == cons.LINK_TYPE_WEBS:
if self.links_underline: tag.set_property(cons.TAG_UNDERLINE, pango.UNDERLINE_SINGLE)
tag.set_property(cons.TAG_FOREGROUND, self.col_link_webs)
elif property_value[0:4] == cons.LINK_TYPE_NODE:
if self.links_underline: tag.set_property(cons.TAG_UNDERLINE, pango.UNDERLINE_SINGLE)
tag.set_property(cons.TAG_FOREGROUND, self.col_link_node)
elif property_value[0:4] == cons.LINK_TYPE_FILE:
if self.links_underline: tag.set_property(cons.TAG_UNDERLINE, pango.UNDERLINE_SINGLE)
tag.set_property(cons.TAG_FOREGROUND, self.col_link_file)
elif property_value[0:4] == cons.LINK_TYPE_FOLD:
if self.links_underline: tag.set_property(cons.TAG_UNDERLINE, pango.UNDERLINE_SINGLE)
tag.set_property(cons.TAG_FOREGROUND, self.col_link_fold)
else: tag.set_property(tag_property, property_value)
self.tag_table.add(tag)
return str(tag_name) | [
"def",
"apply_tag_exist_or_create",
"(",
"self",
",",
"tag_property",
",",
"property_value",
")",
":",
"if",
"property_value",
"==",
"\"large\"",
":",
"property_value",
"=",
"cons",
".",
"TAG_PROP_H1",
"elif",
"property_value",
"==",
"\"largo\"",
":",
"property_value",
"=",
"cons",
".",
"TAG_PROP_H2",
"tag_name",
"=",
"tag_property",
"+",
"\"_\"",
"+",
"property_value",
"tag",
"=",
"self",
".",
"tag_table",
".",
"lookup",
"(",
"str",
"(",
"tag_name",
")",
")",
"if",
"tag",
"==",
"None",
":",
"tag",
"=",
"gtk",
".",
"TextTag",
"(",
"str",
"(",
"tag_name",
")",
")",
"if",
"property_value",
"==",
"cons",
".",
"TAG_PROP_HEAVY",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"WEIGHT_HEAVY",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_SMALL",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"SCALE_SMALL",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_H1",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"SCALE_XX_LARGE",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_H2",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"SCALE_X_LARGE",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_H3",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"SCALE_LARGE",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_ITALIC",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"STYLE_ITALIC",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_SINGLE",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"pango",
".",
"UNDERLINE_SINGLE",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_TRUE",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"True",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_LEFT",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"gtk",
".",
"JUSTIFY_LEFT",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_RIGHT",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"gtk",
".",
"JUSTIFY_RIGHT",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_CENTER",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"gtk",
".",
"JUSTIFY_CENTER",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_FILL",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"gtk",
".",
"JUSTIFY_FILL",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_MONOSPACE",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"property_value",
")",
"if",
"self",
".",
"monospace_bg",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_BACKGROUND",
",",
"self",
".",
"monospace_bg",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_SUB",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_SCALE",
",",
"pango",
".",
"SCALE_X_SMALL",
")",
"rise",
"=",
"pango",
".",
"FontDescription",
"(",
"self",
".",
"rt_font",
")",
".",
"get_size",
"(",
")",
"/",
"-",
"4",
"tag",
".",
"set_property",
"(",
"\"rise\"",
",",
"rise",
")",
"elif",
"property_value",
"==",
"cons",
".",
"TAG_PROP_SUP",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_SCALE",
",",
"pango",
".",
"SCALE_X_SMALL",
")",
"rise",
"=",
"pango",
".",
"FontDescription",
"(",
"self",
".",
"rt_font",
")",
".",
"get_size",
"(",
")",
"/",
"2",
"tag",
".",
"set_property",
"(",
"\"rise\"",
",",
"rise",
")",
"elif",
"property_value",
"[",
"0",
":",
"4",
"]",
"==",
"cons",
".",
"LINK_TYPE_WEBS",
":",
"if",
"self",
".",
"links_underline",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_UNDERLINE",
",",
"pango",
".",
"UNDERLINE_SINGLE",
")",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_FOREGROUND",
",",
"self",
".",
"col_link_webs",
")",
"elif",
"property_value",
"[",
"0",
":",
"4",
"]",
"==",
"cons",
".",
"LINK_TYPE_NODE",
":",
"if",
"self",
".",
"links_underline",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_UNDERLINE",
",",
"pango",
".",
"UNDERLINE_SINGLE",
")",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_FOREGROUND",
",",
"self",
".",
"col_link_node",
")",
"elif",
"property_value",
"[",
"0",
":",
"4",
"]",
"==",
"cons",
".",
"LINK_TYPE_FILE",
":",
"if",
"self",
".",
"links_underline",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_UNDERLINE",
",",
"pango",
".",
"UNDERLINE_SINGLE",
")",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_FOREGROUND",
",",
"self",
".",
"col_link_file",
")",
"elif",
"property_value",
"[",
"0",
":",
"4",
"]",
"==",
"cons",
".",
"LINK_TYPE_FOLD",
":",
"if",
"self",
".",
"links_underline",
":",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_UNDERLINE",
",",
"pango",
".",
"UNDERLINE_SINGLE",
")",
"tag",
".",
"set_property",
"(",
"cons",
".",
"TAG_FOREGROUND",
",",
"self",
".",
"col_link_fold",
")",
"else",
":",
"tag",
".",
"set_property",
"(",
"tag_property",
",",
"property_value",
")",
"self",
".",
"tag_table",
".",
"add",
"(",
"tag",
")",
"return",
"str",
"(",
"tag_name",
")"
] | https://github.com/giuspen/cherrytree/blob/84712f206478fcf9acf30174009ad28c648c6344/pygtk2/modules/core.py#L4778-L4824 | |
adobe/chromium | cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7 | third_party/closure_linter/closure_linter/statetracker.py | python | StateTracker.HasDocComment | (self, identifier) | return identifier in self._documented_identifiers | Returns whether the identifier has been documented yet.
Args:
identifier: The identifier.
Returns:
Whether the identifier has been documented yet. | Returns whether the identifier has been documented yet. | [
"Returns",
"whether",
"the",
"identifier",
"has",
"been",
"documented",
"yet",
"."
] | def HasDocComment(self, identifier):
"""Returns whether the identifier has been documented yet.
Args:
identifier: The identifier.
Returns:
Whether the identifier has been documented yet.
"""
return identifier in self._documented_identifiers | [
"def",
"HasDocComment",
"(",
"self",
",",
"identifier",
")",
":",
"return",
"identifier",
"in",
"self",
".",
"_documented_identifiers"
] | https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/third_party/closure_linter/closure_linter/statetracker.py#L762-L771 | |
mindspore-ai/mindspore | fb8fd3338605bb34fa5cea054e535a8b1d753fab | mindspore/python/mindspore/mindrecord/shardsegment.py | python | ShardSegment.read_category_info | (self) | return self._segment.read_category_info() | Get the group info by the current category field.
Returns:
str, description of group information. | Get the group info by the current category field. | [
"Get",
"the",
"group",
"info",
"by",
"the",
"current",
"category",
"field",
"."
] | def read_category_info(self):
"""
Get the group info by the current category field.
Returns:
str, description of group information.
"""
return self._segment.read_category_info() | [
"def",
"read_category_info",
"(",
"self",
")",
":",
"return",
"self",
".",
"_segment",
".",
"read_category_info",
"(",
")"
] | https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/mindrecord/shardsegment.py#L82-L90 | |
ROCmSoftwarePlatform/rocBLAS | 3738f8b098cdc1db1bdfc164ceb689d073116c98 | scripts/performance/blas/commandrunner.py | python | ArgumentSetABC.collect_timing | (self, run_configuration) | Use a RunConfiguration to find the data files on disk and process them. | Use a RunConfiguration to find the data files on disk and process them. | [
"Use",
"a",
"RunConfiguration",
"to",
"find",
"the",
"data",
"files",
"on",
"disk",
"and",
"process",
"them",
"."
] | def collect_timing(self, run_configuration):
'''Use a RunConfiguration to find the data files on disk and process them.'''
raise NotImplementedError('ArgumentSetABC.collect_timing is meant to be pure virtual') | [
"def",
"collect_timing",
"(",
"self",
",",
"run_configuration",
")",
":",
"raise",
"NotImplementedError",
"(",
"'ArgumentSetABC.collect_timing is meant to be pure virtual'",
")"
] | https://github.com/ROCmSoftwarePlatform/rocBLAS/blob/3738f8b098cdc1db1bdfc164ceb689d073116c98/scripts/performance/blas/commandrunner.py#L320-L322 | ||
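The `collect_timing` row above emulates a pure virtual method by raising `NotImplementedError` in the body. The stdlib `abc` module expresses the same contract and fails earlier, at instantiation time rather than at call time; a minimal sketch (class names here are illustrative, not rocBLAS's actual hierarchy):

```python
from abc import ABC, abstractmethod

class ArgumentSetBase(ABC):
    @abstractmethod
    def collect_timing(self, run_configuration):
        """Locate this run's data files on disk and process them."""

class DummyArgumentSet(ArgumentSetBase):
    def collect_timing(self, run_configuration):
        # Concrete subclass must override the abstract method.
        return {'config': run_configuration, 'times': []}

try:
    ArgumentSetBase()          # abstract class: instantiation is blocked
except TypeError as exc:
    print('blocked:', type(exc).__name__)

print(DummyArgumentSet().collect_timing('run0'))
```

The `raise NotImplementedError` idiom used in the source still works, but only detects a missing override when the method is actually called.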
mantidproject/mantid | 03deeb89254ec4289edb8771e0188c2090a02f32 | Framework/PythonInterface/plugins/algorithms/WorkflowAlgorithms/DirectILLReduction.py | python | DirectILLReduction.PyInit | (self) | Initialize the algorithm's input and output properties. | Initialize the algorithm's input and output properties. | [
"Initialize",
"the",
"algorithm",
"s",
"input",
"and",
"output",
"properties",
"."
] | def PyInit(self):
"""Initialize the algorithm's input and output properties."""
PROPGROUP_REBINNING = 'Rebinning for SofQW'
inputWorkspaceValidator = CompositeValidator()
inputWorkspaceValidator.add(InstrumentValidator())
inputWorkspaceValidator.add(WorkspaceUnitValidator('TOF'))
positiveFloat = FloatBoundedValidator(0., exclusive=True)
validRebinParams = RebinParamsValidator(AllowEmpty=True)
# Properties.
self.declareProperty(MatrixWorkspaceProperty(
name=common.PROP_INPUT_WS,
defaultValue='',
validator=inputWorkspaceValidator,
direction=Direction.Input),
doc='A workspace to reduce.')
self.declareProperty(WorkspaceProperty(name=common.PROP_OUTPUT_WS,
defaultValue='',
direction=Direction.Output),
doc='The reduced S(Q, DeltaE) workspace.')
self.declareProperty(name=common.PROP_CLEANUP_MODE,
defaultValue=utils.Cleanup.ON,
validator=StringListValidator([
utils.Cleanup.ON,
utils.Cleanup.OFF]),
direction=Direction.Input,
doc='What to do with intermediate workspaces.')
self.declareProperty(name=common.PROP_SUBALG_LOGGING,
defaultValue=common.SUBALG_LOGGING_OFF,
validator=StringListValidator([
common.SUBALG_LOGGING_OFF,
common.SUBALG_LOGGING_ON]),
direction=Direction.Input,
doc='Enable or disable subalgorithms to print in the logs.')
self.declareProperty(MatrixWorkspaceProperty(
name=common.PROP_VANA_WS,
defaultValue='',
validator=inputWorkspaceValidator,
direction=Direction.Input,
optional=PropertyMode.Optional),
doc='An integrated vanadium workspace.')
self.declareProperty(name=common.PROP_ABSOLUTE_UNITS,
defaultValue=common.ABSOLUTE_UNITS_OFF,
validator=StringListValidator([
common.ABSOLUTE_UNITS_OFF,
common.ABSOLUTE_UNITS_ON]),
direction=Direction.Input,
doc='Enable or disable normalisation to absolute units.')
self.declareProperty(MatrixWorkspaceProperty(
name=common.PROP_DIAGNOSTICS_WS,
defaultValue='',
direction=Direction.Input,
optional=PropertyMode.Optional),
doc='Detector diagnostics workspace for masking.')
self.declareProperty(name=common.PROP_GROUPING_ANGLE_STEP,
defaultValue=Property.EMPTY_DBL,
validator=positiveFloat,
doc='A scattering angle step to which to group detectors, in degrees.')
self.declareProperty(FloatArrayProperty(name=common.PROP_REBINNING_PARAMS_W, validator=validRebinParams),
doc='Manual energy rebinning parameters.')
self.setPropertyGroup(common.PROP_REBINNING_PARAMS_W, PROPGROUP_REBINNING)
self.declareProperty(name=common.PROP_REBINNING_W,
defaultValue='',
doc='Energy rebinning when mixing manual and automatic binning parameters.')
self.declareProperty(FloatArrayProperty(name=common.PROP_BINNING_PARAMS_Q, validator=validRebinParams),
doc='Manual q rebinning parameters.')
self.setPropertyGroup(common.PROP_BINNING_PARAMS_Q, PROPGROUP_REBINNING)
self.declareProperty(name=common.PROP_TRANSPOSE_SAMPLE_OUTPUT,
defaultValue=common.TRANSPOSING_ON,
validator=StringListValidator([
common.TRANSPOSING_ON,
common.TRANSPOSING_OFF]),
direction=Direction.Input,
doc='Enable or disable ' + common.PROP_OUTPUT_WS + ' transposing.')
self.declareProperty(WorkspaceProperty(
name=common.PROP_OUTPUT_THETA_W_WS,
defaultValue='',
direction=Direction.Output,
optional=PropertyMode.Optional),
doc='Output workspace for reduced S(theta, DeltaE).')
self.setPropertyGroup(common.PROP_OUTPUT_THETA_W_WS,
common.PROPGROUP_OPTIONAL_OUTPUT) | [
"def",
"PyInit",
"(",
"self",
")",
":",
"PROPGROUP_REBINNING",
"=",
"'Rebinning for SofQW'",
"inputWorkspaceValidator",
"=",
"CompositeValidator",
"(",
")",
"inputWorkspaceValidator",
".",
"add",
"(",
"InstrumentValidator",
"(",
")",
")",
"inputWorkspaceValidator",
".",
"add",
"(",
"WorkspaceUnitValidator",
"(",
"'TOF'",
")",
")",
"positiveFloat",
"=",
"FloatBoundedValidator",
"(",
"0.",
",",
"exclusive",
"=",
"True",
")",
"validRebinParams",
"=",
"RebinParamsValidator",
"(",
"AllowEmpty",
"=",
"True",
")",
"# Properties.",
"self",
".",
"declareProperty",
"(",
"MatrixWorkspaceProperty",
"(",
"name",
"=",
"common",
".",
"PROP_INPUT_WS",
",",
"defaultValue",
"=",
"''",
",",
"validator",
"=",
"inputWorkspaceValidator",
",",
"direction",
"=",
"Direction",
".",
"Input",
")",
",",
"doc",
"=",
"'A workspace to reduce.'",
")",
"self",
".",
"declareProperty",
"(",
"WorkspaceProperty",
"(",
"name",
"=",
"common",
".",
"PROP_OUTPUT_WS",
",",
"defaultValue",
"=",
"''",
",",
"direction",
"=",
"Direction",
".",
"Output",
")",
",",
"doc",
"=",
"'The reduced S(Q, DeltaE) workspace.'",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_CLEANUP_MODE",
",",
"defaultValue",
"=",
"utils",
".",
"Cleanup",
".",
"ON",
",",
"validator",
"=",
"StringListValidator",
"(",
"[",
"utils",
".",
"Cleanup",
".",
"ON",
",",
"utils",
".",
"Cleanup",
".",
"OFF",
"]",
")",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"doc",
"=",
"'What to do with intermediate workspaces.'",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_SUBALG_LOGGING",
",",
"defaultValue",
"=",
"common",
".",
"SUBALG_LOGGING_OFF",
",",
"validator",
"=",
"StringListValidator",
"(",
"[",
"common",
".",
"SUBALG_LOGGING_OFF",
",",
"common",
".",
"SUBALG_LOGGING_ON",
"]",
")",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"doc",
"=",
"'Enable or disable subalgorithms to print in the logs.'",
")",
"self",
".",
"declareProperty",
"(",
"MatrixWorkspaceProperty",
"(",
"name",
"=",
"common",
".",
"PROP_VANA_WS",
",",
"defaultValue",
"=",
"''",
",",
"validator",
"=",
"inputWorkspaceValidator",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"optional",
"=",
"PropertyMode",
".",
"Optional",
")",
",",
"doc",
"=",
"'An integrated vanadium workspace.'",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_ABSOLUTE_UNITS",
",",
"defaultValue",
"=",
"common",
".",
"ABSOLUTE_UNITS_OFF",
",",
"validator",
"=",
"StringListValidator",
"(",
"[",
"common",
".",
"ABSOLUTE_UNITS_OFF",
",",
"common",
".",
"ABSOLUTE_UNITS_ON",
"]",
")",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"doc",
"=",
"'Enable or disable normalisation to absolute units.'",
")",
"self",
".",
"declareProperty",
"(",
"MatrixWorkspaceProperty",
"(",
"name",
"=",
"common",
".",
"PROP_DIAGNOSTICS_WS",
",",
"defaultValue",
"=",
"''",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"optional",
"=",
"PropertyMode",
".",
"Optional",
")",
",",
"doc",
"=",
"'Detector diagnostics workspace for masking.'",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_GROUPING_ANGLE_STEP",
",",
"defaultValue",
"=",
"Property",
".",
"EMPTY_DBL",
",",
"validator",
"=",
"positiveFloat",
",",
"doc",
"=",
"'A scattering angle step to which to group detectors, in degrees.'",
")",
"self",
".",
"declareProperty",
"(",
"FloatArrayProperty",
"(",
"name",
"=",
"common",
".",
"PROP_REBINNING_PARAMS_W",
",",
"validator",
"=",
"validRebinParams",
")",
",",
"doc",
"=",
"'Manual energy rebinning parameters.'",
")",
"self",
".",
"setPropertyGroup",
"(",
"common",
".",
"PROP_REBINNING_PARAMS_W",
",",
"PROPGROUP_REBINNING",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_REBINNING_W",
",",
"defaultValue",
"=",
"''",
",",
"doc",
"=",
"'Energy rebinning when mixing manual and automatic binning parameters.'",
")",
"self",
".",
"declareProperty",
"(",
"FloatArrayProperty",
"(",
"name",
"=",
"common",
".",
"PROP_BINNING_PARAMS_Q",
",",
"validator",
"=",
"validRebinParams",
")",
",",
"doc",
"=",
"'Manual q rebinning parameters.'",
")",
"self",
".",
"setPropertyGroup",
"(",
"common",
".",
"PROP_BINNING_PARAMS_Q",
",",
"PROPGROUP_REBINNING",
")",
"self",
".",
"declareProperty",
"(",
"name",
"=",
"common",
".",
"PROP_TRANSPOSE_SAMPLE_OUTPUT",
",",
"defaultValue",
"=",
"common",
".",
"TRANSPOSING_ON",
",",
"validator",
"=",
"StringListValidator",
"(",
"[",
"common",
".",
"TRANSPOSING_ON",
",",
"common",
".",
"TRANSPOSING_OFF",
"]",
")",
",",
"direction",
"=",
"Direction",
".",
"Input",
",",
"doc",
"=",
"'Enable or disable '",
"+",
"common",
".",
"PROP_OUTPUT_WS",
"+",
"' transposing.'",
")",
"self",
".",
"declareProperty",
"(",
"WorkspaceProperty",
"(",
"name",
"=",
"common",
".",
"PROP_OUTPUT_THETA_W_WS",
",",
"defaultValue",
"=",
"''",
",",
"direction",
"=",
"Direction",
".",
"Output",
",",
"optional",
"=",
"PropertyMode",
".",
"Optional",
")",
",",
"doc",
"=",
"'Output workspace for reduced S(theta, DeltaE).'",
")",
"self",
".",
"setPropertyGroup",
"(",
"common",
".",
"PROP_OUTPUT_THETA_W_WS",
",",
"common",
".",
"PROPGROUP_OPTIONAL_OUTPUT",
")"
] | https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/Framework/PythonInterface/plugins/algorithms/WorkflowAlgorithms/DirectILLReduction.py#L307-L388 | ||
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/_core.py | python | MenuBar.IsEnabled | (*args, **kwargs) | return _core_.MenuBar_IsEnabled(*args, **kwargs) | IsEnabled(self, int id) -> bool | IsEnabled(self, int id) -> bool | [
"IsEnabled",
"(",
"self",
"int",
"id",
")",
"-",
">",
"bool"
] | def IsEnabled(*args, **kwargs):
"""IsEnabled(self, int id) -> bool"""
return _core_.MenuBar_IsEnabled(*args, **kwargs) | [
"def",
"IsEnabled",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"MenuBar_IsEnabled",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_core.py#L12343-L12345 | |
CRYTEK/CRYENGINE | 232227c59a220cbbd311576f0fbeba7bb53b2a8c | Code/Tools/waf-1.7.13/waflib/Node.py | python | Node.__eq__ | (self, node) | return id(self) == id(node) | Node comparison, based on the IDs | Node comparison, based on the IDs | [
"Node",
"comparison",
"based",
"on",
"the",
"IDs"
] | def __eq__(self, node):
"Node comparison, based on the IDs"
return id(self) == id(node) | [
"def",
"__eq__",
"(",
"self",
",",
"node",
")",
":",
"return",
"id",
"(",
"self",
")",
"==",
"id",
"(",
"node",
")"
] | https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Code/Tools/waf-1.7.13/waflib/Node.py#L138-L140 | |
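`Node.__eq__` above compares by object identity (`id(self) == id(node)`). When `__eq__` is overridden this way, defining `__hash__` alongside it keeps `dict`/`set` behavior consistent; a minimal illustration of the pattern (not waf's actual `Node` class):

```python
class IdNode:
    """Equality and hashing by identity, mirroring waf's Node comparison."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return id(self) == id(other)
    def __ne__(self, other):
        return id(self) != id(other)
    def __hash__(self):
        return id(self)

a, b = IdNode('x'), IdNode('x')
print(a == a, a == b)   # True False: same name, but distinct objects differ
print(len({a, b}))      # 2: identity hashing keeps both nodes in a set
```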
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/py2/scipy/linalg/decomp_lu.py | python | lu_solve | (lu_and_piv, b, trans=0, overwrite_b=False, check_finite=True) | Solve an equation system, a x = b, given the LU factorization of a
Parameters
----------
(lu, piv)
Factorization of the coefficient matrix a, as given by lu_factor
b : array
Right-hand side
trans : {0, 1, 2}, optional
Type of system to solve:
===== =========
trans system
===== =========
0 a x = b
1 a^T x = b
2 a^H x = b
===== =========
overwrite_b : bool, optional
Whether to overwrite data in b (may increase performance)
check_finite : bool, optional
Whether to check that the input matrices contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns
-------
x : array
Solution to the system
See also
--------
lu_factor : LU factorize a matrix
Examples
--------
>>> from scipy.linalg import lu_factor, lu_solve
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> b = np.array([1, 1, 1, 1])
>>> lu, piv = lu_factor(A)
>>> x = lu_solve((lu, piv), b)
>>> np.allclose(A @ x - b, np.zeros((4,)))
True | Solve an equation system, a x = b, given the LU factorization of a | [
"Solve",
"an",
"equation",
"system",
"a",
"x",
"=",
"b",
"given",
"the",
"LU",
"factorization",
"of",
"a"
] | def lu_solve(lu_and_piv, b, trans=0, overwrite_b=False, check_finite=True):
"""Solve an equation system, a x = b, given the LU factorization of a
Parameters
----------
(lu, piv)
Factorization of the coefficient matrix a, as given by lu_factor
b : array
Right-hand side
trans : {0, 1, 2}, optional
Type of system to solve:
===== =========
trans system
===== =========
0 a x = b
1 a^T x = b
2 a^H x = b
===== =========
overwrite_b : bool, optional
Whether to overwrite data in b (may increase performance)
check_finite : bool, optional
Whether to check that the input matrices contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns
-------
x : array
Solution to the system
See also
--------
lu_factor : LU factorize a matrix
Examples
--------
>>> from scipy.linalg import lu_factor, lu_solve
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> b = np.array([1, 1, 1, 1])
>>> lu, piv = lu_factor(A)
>>> x = lu_solve((lu, piv), b)
>>> np.allclose(A @ x - b, np.zeros((4,)))
True
"""
(lu, piv) = lu_and_piv
if check_finite:
b1 = asarray_chkfinite(b)
else:
b1 = asarray(b)
overwrite_b = overwrite_b or _datacopied(b1, b)
if lu.shape[0] != b1.shape[0]:
raise ValueError("incompatible dimensions.")
getrs, = get_lapack_funcs(('getrs',), (lu, b1))
x, info = getrs(lu, piv, b1, trans=trans, overwrite_b=overwrite_b)
if info == 0:
return x
raise ValueError('illegal value in %d-th argument of internal gesv|posv'
% -info) | [
"def",
"lu_solve",
"(",
"lu_and_piv",
",",
"b",
",",
"trans",
"=",
"0",
",",
"overwrite_b",
"=",
"False",
",",
"check_finite",
"=",
"True",
")",
":",
"(",
"lu",
",",
"piv",
")",
"=",
"lu_and_piv",
"if",
"check_finite",
":",
"b1",
"=",
"asarray_chkfinite",
"(",
"b",
")",
"else",
":",
"b1",
"=",
"asarray",
"(",
"b",
")",
"overwrite_b",
"=",
"overwrite_b",
"or",
"_datacopied",
"(",
"b1",
",",
"b",
")",
"if",
"lu",
".",
"shape",
"[",
"0",
"]",
"!=",
"b1",
".",
"shape",
"[",
"0",
"]",
":",
"raise",
"ValueError",
"(",
"\"incompatible dimensions.\"",
")",
"getrs",
",",
"=",
"get_lapack_funcs",
"(",
"(",
"'getrs'",
",",
")",
",",
"(",
"lu",
",",
"b1",
")",
")",
"x",
",",
"info",
"=",
"getrs",
"(",
"lu",
",",
"piv",
",",
"b1",
",",
"trans",
"=",
"trans",
",",
"overwrite_b",
"=",
"overwrite_b",
")",
"if",
"info",
"==",
"0",
":",
"return",
"x",
"raise",
"ValueError",
"(",
"'illegal value in %d-th argument of internal gesv|posv'",
"%",
"-",
"info",
")"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/py2/scipy/linalg/decomp_lu.py#L90-L150 | ||
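Internally, `lu_solve` hands the factorization to LAPACK's `getrs`, which performs two triangular solves: `L y = b` (forward substitution) then `U x = y` (back substitution). A pure-Python sketch of that core step, for illustration only — no pivoting, whereas SciPy's `getrs` handles the pivot vector `piv` and the `trans=` variants:

```python
def lu_factor_nopivot(A):
    """Doolittle LU factorization: A = L @ U, with L unit lower triangular."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve_nopivot(L, U, b):
    """Solve A x = b via L y = b (forward) then U x = y (backward)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_factor_nopivot(A)
x = lu_solve_nopivot(L, U, [10.0, 12.0])
print([round(v, 6) for v in x])  # [1.0, 2.0]
```

Factoring once and reusing `(L, U)` for many right-hand sides is exactly the use case the `lu_factor`/`lu_solve` pair is designed for.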
stan-dev/math | 5fd79f89933269a4ca4d8dd1fde2a36d53d4768c | lib/boost_1.75.0/tools/build/src/util/utility.py | python | split_action_id | (id) | return (toolset, name) | Splits an id in the toolset and specific rule parts. E.g.
'gcc.compile.c++' returns ('gcc', 'compile.c++') | Splits an id in the toolset and specific rule parts. E.g.
'gcc.compile.c++' returns ('gcc', 'compile.c++') | [
"Splits",
"an",
"id",
"in",
"the",
"toolset",
"and",
"specific",
"rule",
"parts",
".",
"E",
".",
"g",
".",
"gcc",
".",
"compile",
".",
"c",
"++",
"returns",
"(",
"gcc",
"compile",
".",
"c",
"++",
")"
] | def split_action_id (id):
""" Splits an id in the toolset and specific rule parts. E.g.
'gcc.compile.c++' returns ('gcc', 'compile.c++')
"""
assert isinstance(id, basestring)
split = id.split ('.', 1)
toolset = split [0]
name = ''
if len (split) > 1:
name = split [1]
return (toolset, name) | [
"def",
"split_action_id",
"(",
"id",
")",
":",
"assert",
"isinstance",
"(",
"id",
",",
"basestring",
")",
"split",
"=",
"id",
".",
"split",
"(",
"'.'",
",",
"1",
")",
"toolset",
"=",
"split",
"[",
"0",
"]",
"name",
"=",
"''",
"if",
"len",
"(",
"split",
")",
">",
"1",
":",
"name",
"=",
"split",
"[",
"1",
"]",
"return",
"(",
"toolset",
",",
"name",
")"
] | https://github.com/stan-dev/math/blob/5fd79f89933269a4ca4d8dd1fde2a36d53d4768c/lib/boost_1.75.0/tools/build/src/util/utility.py#L141-L151 | |
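The split behavior documented above ('gcc.compile.c++' → ('gcc', 'compile.c++')) can be reproduced compactly with `str.partition`; a Python 3 sketch of the same helper (the original's Python 2 `basestring` check replaced with `str`):

```python
def split_action_id(id):
    """Split a 'toolset.rule' id into (toolset, rule); rule may be empty."""
    assert isinstance(id, str)
    toolset, _, name = id.partition('.')   # splits on the first '.' only
    return (toolset, name)

print(split_action_id('gcc.compile.c++'))  # ('gcc', 'compile.c++')
print(split_action_id('gcc'))              # ('gcc', '')
```

Like the original's `split('.', 1)`, `partition` cuts only at the first dot, so rule names containing further dots survive intact.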
LiquidPlayer/LiquidCore | 9405979363f2353ac9a71ad8ab59685dd7f919c9 | deps/node-10.15.3/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/ninja.py | python | NinjaWriter.WriteSourcesForArch | (self, ninja_file, config_name, config, sources,
predepends, precompiled_header, spec, arch=None) | return outputs | Write build rules to compile all of |sources|. | Write build rules to compile all of |sources|. | [
"Write",
"build",
"rules",
"to",
"compile",
"all",
"of",
"|sources|",
"."
] | def WriteSourcesForArch(self, ninja_file, config_name, config, sources,
predepends, precompiled_header, spec, arch=None):
"""Write build rules to compile all of |sources|."""
extra_defines = []
if self.flavor == 'mac':
cflags = self.xcode_settings.GetCflags(config_name, arch=arch)
cflags_c = self.xcode_settings.GetCflagsC(config_name)
cflags_cc = self.xcode_settings.GetCflagsCC(config_name)
cflags_objc = ['$cflags_c'] + \
self.xcode_settings.GetCflagsObjC(config_name)
cflags_objcc = ['$cflags_cc'] + \
self.xcode_settings.GetCflagsObjCC(config_name)
elif self.flavor == 'win':
asmflags = self.msvs_settings.GetAsmflags(config_name)
cflags = self.msvs_settings.GetCflags(config_name)
cflags_c = self.msvs_settings.GetCflagsC(config_name)
cflags_cc = self.msvs_settings.GetCflagsCC(config_name)
extra_defines = self.msvs_settings.GetComputedDefines(config_name)
# See comment at cc_command for why there's two .pdb files.
pdbpath_c = pdbpath_cc = self.msvs_settings.GetCompilerPdbName(
config_name, self.ExpandSpecial)
if not pdbpath_c:
obj = 'obj'
if self.toolset != 'target':
obj += '.' + self.toolset
pdbpath = os.path.normpath(os.path.join(obj, self.base_dir, self.name))
pdbpath_c = pdbpath + '.c.pdb'
pdbpath_cc = pdbpath + '.cc.pdb'
self.WriteVariableList(ninja_file, 'pdbname_c', [pdbpath_c])
self.WriteVariableList(ninja_file, 'pdbname_cc', [pdbpath_cc])
self.WriteVariableList(ninja_file, 'pchprefix', [self.name])
else:
cflags = config.get('cflags', [])
cflags_c = config.get('cflags_c', [])
cflags_cc = config.get('cflags_cc', [])
# Respect environment variables related to build, but target-specific
# flags can still override them.
if self.toolset == 'target':
cflags_c = (os.environ.get('CPPFLAGS', '').split() +
os.environ.get('CFLAGS', '').split() + cflags_c)
cflags_cc = (os.environ.get('CPPFLAGS', '').split() +
os.environ.get('CXXFLAGS', '').split() + cflags_cc)
elif self.toolset == 'host':
cflags_c = (os.environ.get('CPPFLAGS_host', '').split() +
os.environ.get('CFLAGS_host', '').split() + cflags_c)
cflags_cc = (os.environ.get('CPPFLAGS_host', '').split() +
os.environ.get('CXXFLAGS_host', '').split() + cflags_cc)
defines = config.get('defines', []) + extra_defines
self.WriteVariableList(ninja_file, 'defines',
[Define(d, self.flavor) for d in defines])
if self.flavor == 'win':
self.WriteVariableList(ninja_file, 'asmflags',
map(self.ExpandSpecial, asmflags))
self.WriteVariableList(ninja_file, 'rcflags',
[QuoteShellArgument(self.ExpandSpecial(f), self.flavor)
for f in self.msvs_settings.GetRcflags(config_name,
self.GypPathToNinja)])
include_dirs = config.get('include_dirs', [])
env = self.GetToolchainEnv()
if self.flavor == 'win':
include_dirs = self.msvs_settings.AdjustIncludeDirs(include_dirs,
config_name)
self.WriteVariableList(ninja_file, 'includes',
[QuoteShellArgument('-I' + self.GypPathToNinja(i, env), self.flavor)
for i in include_dirs])
if self.flavor == 'win':
midl_include_dirs = config.get('midl_include_dirs', [])
midl_include_dirs = self.msvs_settings.AdjustMidlIncludeDirs(
midl_include_dirs, config_name)
self.WriteVariableList(ninja_file, 'midl_includes',
[QuoteShellArgument('-I' + self.GypPathToNinja(i, env), self.flavor)
for i in midl_include_dirs])
pch_commands = precompiled_header.GetPchBuildCommands(arch)
if self.flavor == 'mac':
# Most targets use no precompiled headers, so only write these if needed.
for ext, var in [('c', 'cflags_pch_c'), ('cc', 'cflags_pch_cc'),
('m', 'cflags_pch_objc'), ('mm', 'cflags_pch_objcc')]:
include = precompiled_header.GetInclude(ext, arch)
if include: ninja_file.variable(var, include)
arflags = config.get('arflags', [])
self.WriteVariableList(ninja_file, 'cflags',
map(self.ExpandSpecial, cflags))
self.WriteVariableList(ninja_file, 'cflags_c',
map(self.ExpandSpecial, cflags_c))
self.WriteVariableList(ninja_file, 'cflags_cc',
map(self.ExpandSpecial, cflags_cc))
if self.flavor == 'mac':
self.WriteVariableList(ninja_file, 'cflags_objc',
map(self.ExpandSpecial, cflags_objc))
self.WriteVariableList(ninja_file, 'cflags_objcc',
map(self.ExpandSpecial, cflags_objcc))
self.WriteVariableList(ninja_file, 'arflags',
map(self.ExpandSpecial, arflags))
ninja_file.newline()
outputs = []
has_rc_source = False
for source in sources:
filename, ext = os.path.splitext(source)
ext = ext[1:]
obj_ext = self.obj_ext
if ext in ('cc', 'cpp', 'cxx'):
command = 'cxx'
self.uses_cpp = True
elif ext == 'c' or (ext == 'S' and self.flavor != 'win'):
command = 'cc'
elif ext == 's' and self.flavor != 'win': # Doesn't generate .o.d files.
command = 'cc_s'
elif (self.flavor == 'win' and ext == 'asm' and
not self.msvs_settings.HasExplicitAsmRules(spec)):
command = 'asm'
# Add the _asm suffix as msvs is capable of handling .cc and
# .asm files of the same name without collision.
obj_ext = '_asm.obj'
elif self.flavor == 'mac' and ext == 'm':
command = 'objc'
elif self.flavor == 'mac' and ext == 'mm':
command = 'objcxx'
self.uses_cpp = True
elif self.flavor == 'win' and ext == 'rc':
command = 'rc'
obj_ext = '.res'
has_rc_source = True
else:
# Ignore unhandled extensions.
continue
input = self.GypPathToNinja(source)
output = self.GypPathToUniqueOutput(filename + obj_ext)
if arch is not None:
output = AddArch(output, arch)
implicit = precompiled_header.GetObjDependencies([input], [output], arch)
variables = []
if self.flavor == 'win':
variables, output, implicit = precompiled_header.GetFlagsModifications(
input, output, implicit, command, cflags_c, cflags_cc,
self.ExpandSpecial)
ninja_file.build(output, command, input,
implicit=[gch for _, _, gch in implicit],
order_only=predepends, variables=variables)
outputs.append(output)
if has_rc_source:
resource_include_dirs = config.get('resource_include_dirs', include_dirs)
self.WriteVariableList(ninja_file, 'resource_includes',
[QuoteShellArgument('-I' + self.GypPathToNinja(i, env), self.flavor)
for i in resource_include_dirs])
self.WritePchTargets(ninja_file, pch_commands)
ninja_file.newline()
return outputs | [
"def",
"WriteSourcesForArch",
"(",
"self",
",",
"ninja_file",
",",
"config_name",
",",
"config",
",",
"sources",
",",
"predepends",
",",
"precompiled_header",
",",
"spec",
",",
"arch",
"=",
"None",
")",
":",
"extra_defines",
"=",
"[",
"]",
"if",
"self",
".",
"flavor",
"==",
"'mac'",
":",
"cflags",
"=",
"self",
".",
"xcode_settings",
".",
"GetCflags",
"(",
"config_name",
",",
"arch",
"=",
"arch",
")",
"cflags_c",
"=",
"self",
".",
"xcode_settings",
".",
"GetCflagsC",
"(",
"config_name",
")",
"cflags_cc",
"=",
"self",
".",
"xcode_settings",
".",
"GetCflagsCC",
"(",
"config_name",
")",
"cflags_objc",
"=",
"[",
"'$cflags_c'",
"]",
"+",
"self",
".",
"xcode_settings",
".",
"GetCflagsObjC",
"(",
"config_name",
")",
"cflags_objcc",
"=",
"[",
"'$cflags_cc'",
"]",
"+",
"self",
".",
"xcode_settings",
".",
"GetCflagsObjCC",
"(",
"config_name",
")",
"elif",
"self",
".",
"flavor",
"==",
"'win'",
":",
"asmflags",
"=",
"self",
".",
"msvs_settings",
".",
"GetAsmflags",
"(",
"config_name",
")",
"cflags",
"=",
"self",
".",
"msvs_settings",
".",
"GetCflags",
"(",
"config_name",
")",
"cflags_c",
"=",
"self",
".",
"msvs_settings",
".",
"GetCflagsC",
"(",
"config_name",
")",
"cflags_cc",
"=",
"self",
".",
"msvs_settings",
".",
"GetCflagsCC",
"(",
"config_name",
")",
"extra_defines",
"=",
"self",
".",
"msvs_settings",
".",
"GetComputedDefines",
"(",
"config_name",
")",
"# See comment at cc_command for why there's two .pdb files.",
"pdbpath_c",
"=",
"pdbpath_cc",
"=",
"self",
".",
"msvs_settings",
".",
"GetCompilerPdbName",
"(",
"config_name",
",",
"self",
".",
"ExpandSpecial",
")",
"if",
"not",
"pdbpath_c",
":",
"obj",
"=",
"'obj'",
"if",
"self",
".",
"toolset",
"!=",
"'target'",
":",
"obj",
"+=",
"'.'",
"+",
"self",
".",
"toolset",
"pdbpath",
"=",
"os",
".",
"path",
".",
"normpath",
"(",
"os",
".",
"path",
".",
"join",
"(",
"obj",
",",
"self",
".",
"base_dir",
",",
"self",
".",
"name",
")",
")",
"pdbpath_c",
"=",
"pdbpath",
"+",
"'.c.pdb'",
"pdbpath_cc",
"=",
"pdbpath",
"+",
"'.cc.pdb'",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'pdbname_c'",
",",
"[",
"pdbpath_c",
"]",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'pdbname_cc'",
",",
"[",
"pdbpath_cc",
"]",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'pchprefix'",
",",
"[",
"self",
".",
"name",
"]",
")",
"else",
":",
"cflags",
"=",
"config",
".",
"get",
"(",
"'cflags'",
",",
"[",
"]",
")",
"cflags_c",
"=",
"config",
".",
"get",
"(",
"'cflags_c'",
",",
"[",
"]",
")",
"cflags_cc",
"=",
"config",
".",
"get",
"(",
"'cflags_cc'",
",",
"[",
"]",
")",
"# Respect environment variables related to build, but target-specific",
"# flags can still override them.",
"if",
"self",
".",
"toolset",
"==",
"'target'",
":",
"cflags_c",
"=",
"(",
"os",
".",
"environ",
".",
"get",
"(",
"'CPPFLAGS'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"os",
".",
"environ",
".",
"get",
"(",
"'CFLAGS'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"cflags_c",
")",
"cflags_cc",
"=",
"(",
"os",
".",
"environ",
".",
"get",
"(",
"'CPPFLAGS'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"os",
".",
"environ",
".",
"get",
"(",
"'CXXFLAGS'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"cflags_cc",
")",
"elif",
"self",
".",
"toolset",
"==",
"'host'",
":",
"cflags_c",
"=",
"(",
"os",
".",
"environ",
".",
"get",
"(",
"'CPPFLAGS_host'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"os",
".",
"environ",
".",
"get",
"(",
"'CFLAGS_host'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"cflags_c",
")",
"cflags_cc",
"=",
"(",
"os",
".",
"environ",
".",
"get",
"(",
"'CPPFLAGS_host'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"os",
".",
"environ",
".",
"get",
"(",
"'CXXFLAGS_host'",
",",
"''",
")",
".",
"split",
"(",
")",
"+",
"cflags_cc",
")",
"defines",
"=",
"config",
".",
"get",
"(",
"'defines'",
",",
"[",
"]",
")",
"+",
"extra_defines",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'defines'",
",",
"[",
"Define",
"(",
"d",
",",
"self",
".",
"flavor",
")",
"for",
"d",
"in",
"defines",
"]",
")",
"if",
"self",
".",
"flavor",
"==",
"'win'",
":",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'asmflags'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"asmflags",
")",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'rcflags'",
",",
"[",
"QuoteShellArgument",
"(",
"self",
".",
"ExpandSpecial",
"(",
"f",
")",
",",
"self",
".",
"flavor",
")",
"for",
"f",
"in",
"self",
".",
"msvs_settings",
".",
"GetRcflags",
"(",
"config_name",
",",
"self",
".",
"GypPathToNinja",
")",
"]",
")",
"include_dirs",
"=",
"config",
".",
"get",
"(",
"'include_dirs'",
",",
"[",
"]",
")",
"env",
"=",
"self",
".",
"GetToolchainEnv",
"(",
")",
"if",
"self",
".",
"flavor",
"==",
"'win'",
":",
"include_dirs",
"=",
"self",
".",
"msvs_settings",
".",
"AdjustIncludeDirs",
"(",
"include_dirs",
",",
"config_name",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'includes'",
",",
"[",
"QuoteShellArgument",
"(",
"'-I'",
"+",
"self",
".",
"GypPathToNinja",
"(",
"i",
",",
"env",
")",
",",
"self",
".",
"flavor",
")",
"for",
"i",
"in",
"include_dirs",
"]",
")",
"if",
"self",
".",
"flavor",
"==",
"'win'",
":",
"midl_include_dirs",
"=",
"config",
".",
"get",
"(",
"'midl_include_dirs'",
",",
"[",
"]",
")",
"midl_include_dirs",
"=",
"self",
".",
"msvs_settings",
".",
"AdjustMidlIncludeDirs",
"(",
"midl_include_dirs",
",",
"config_name",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'midl_includes'",
",",
"[",
"QuoteShellArgument",
"(",
"'-I'",
"+",
"self",
".",
"GypPathToNinja",
"(",
"i",
",",
"env",
")",
",",
"self",
".",
"flavor",
")",
"for",
"i",
"in",
"midl_include_dirs",
"]",
")",
"pch_commands",
"=",
"precompiled_header",
".",
"GetPchBuildCommands",
"(",
"arch",
")",
"if",
"self",
".",
"flavor",
"==",
"'mac'",
":",
"# Most targets use no precompiled headers, so only write these if needed.",
"for",
"ext",
",",
"var",
"in",
"[",
"(",
"'c'",
",",
"'cflags_pch_c'",
")",
",",
"(",
"'cc'",
",",
"'cflags_pch_cc'",
")",
",",
"(",
"'m'",
",",
"'cflags_pch_objc'",
")",
",",
"(",
"'mm'",
",",
"'cflags_pch_objcc'",
")",
"]",
":",
"include",
"=",
"precompiled_header",
".",
"GetInclude",
"(",
"ext",
",",
"arch",
")",
"if",
"include",
":",
"ninja_file",
".",
"variable",
"(",
"var",
",",
"include",
")",
"arflags",
"=",
"config",
".",
"get",
"(",
"'arflags'",
",",
"[",
"]",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'cflags'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"cflags",
")",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'cflags_c'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"cflags_c",
")",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'cflags_cc'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"cflags_cc",
")",
")",
"if",
"self",
".",
"flavor",
"==",
"'mac'",
":",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'cflags_objc'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"cflags_objc",
")",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'cflags_objcc'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"cflags_objcc",
")",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'arflags'",
",",
"map",
"(",
"self",
".",
"ExpandSpecial",
",",
"arflags",
")",
")",
"ninja_file",
".",
"newline",
"(",
")",
"outputs",
"=",
"[",
"]",
"has_rc_source",
"=",
"False",
"for",
"source",
"in",
"sources",
":",
"filename",
",",
"ext",
"=",
"os",
".",
"path",
".",
"splitext",
"(",
"source",
")",
"ext",
"=",
"ext",
"[",
"1",
":",
"]",
"obj_ext",
"=",
"self",
".",
"obj_ext",
"if",
"ext",
"in",
"(",
"'cc'",
",",
"'cpp'",
",",
"'cxx'",
")",
":",
"command",
"=",
"'cxx'",
"self",
".",
"uses_cpp",
"=",
"True",
"elif",
"ext",
"==",
"'c'",
"or",
"(",
"ext",
"==",
"'S'",
"and",
"self",
".",
"flavor",
"!=",
"'win'",
")",
":",
"command",
"=",
"'cc'",
"elif",
"ext",
"==",
"'s'",
"and",
"self",
".",
"flavor",
"!=",
"'win'",
":",
"# Doesn't generate .o.d files.",
"command",
"=",
"'cc_s'",
"elif",
"(",
"self",
".",
"flavor",
"==",
"'win'",
"and",
"ext",
"==",
"'asm'",
"and",
"not",
"self",
".",
"msvs_settings",
".",
"HasExplicitAsmRules",
"(",
"spec",
")",
")",
":",
"command",
"=",
"'asm'",
"# Add the _asm suffix as msvs is capable of handling .cc and",
"# .asm files of the same name without collision.",
"obj_ext",
"=",
"'_asm.obj'",
"elif",
"self",
".",
"flavor",
"==",
"'mac'",
"and",
"ext",
"==",
"'m'",
":",
"command",
"=",
"'objc'",
"elif",
"self",
".",
"flavor",
"==",
"'mac'",
"and",
"ext",
"==",
"'mm'",
":",
"command",
"=",
"'objcxx'",
"self",
".",
"uses_cpp",
"=",
"True",
"elif",
"self",
".",
"flavor",
"==",
"'win'",
"and",
"ext",
"==",
"'rc'",
":",
"command",
"=",
"'rc'",
"obj_ext",
"=",
"'.res'",
"has_rc_source",
"=",
"True",
"else",
":",
"# Ignore unhandled extensions.",
"continue",
"input",
"=",
"self",
".",
"GypPathToNinja",
"(",
"source",
")",
"output",
"=",
"self",
".",
"GypPathToUniqueOutput",
"(",
"filename",
"+",
"obj_ext",
")",
"if",
"arch",
"is",
"not",
"None",
":",
"output",
"=",
"AddArch",
"(",
"output",
",",
"arch",
")",
"implicit",
"=",
"precompiled_header",
".",
"GetObjDependencies",
"(",
"[",
"input",
"]",
",",
"[",
"output",
"]",
",",
"arch",
")",
"variables",
"=",
"[",
"]",
"if",
"self",
".",
"flavor",
"==",
"'win'",
":",
"variables",
",",
"output",
",",
"implicit",
"=",
"precompiled_header",
".",
"GetFlagsModifications",
"(",
"input",
",",
"output",
",",
"implicit",
",",
"command",
",",
"cflags_c",
",",
"cflags_cc",
",",
"self",
".",
"ExpandSpecial",
")",
"ninja_file",
".",
"build",
"(",
"output",
",",
"command",
",",
"input",
",",
"implicit",
"=",
"[",
"gch",
"for",
"_",
",",
"_",
",",
"gch",
"in",
"implicit",
"]",
",",
"order_only",
"=",
"predepends",
",",
"variables",
"=",
"variables",
")",
"outputs",
".",
"append",
"(",
"output",
")",
"if",
"has_rc_source",
":",
"resource_include_dirs",
"=",
"config",
".",
"get",
"(",
"'resource_include_dirs'",
",",
"include_dirs",
")",
"self",
".",
"WriteVariableList",
"(",
"ninja_file",
",",
"'resource_includes'",
",",
"[",
"QuoteShellArgument",
"(",
"'-I'",
"+",
"self",
".",
"GypPathToNinja",
"(",
"i",
",",
"env",
")",
",",
"self",
".",
"flavor",
")",
"for",
"i",
"in",
"resource_include_dirs",
"]",
")",
"self",
".",
"WritePchTargets",
"(",
"ninja_file",
",",
"pch_commands",
")",
"ninja_file",
".",
"newline",
"(",
")",
"return",
"outputs"
] | https://github.com/LiquidPlayer/LiquidCore/blob/9405979363f2353ac9a71ad8ab59685dd7f919c9/deps/node-10.15.3/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/ninja.py#L884-L1042 | |
Kitware/ParaView | f760af9124ff4634b23ebbeab95a4f56e0261955 | Wrapping/Python/paraview/simple.py | python | ResetCameraToDirection

    def ResetCameraToDirection(position, direction, up=None, view=None):
        """Resets the settings of the camera to the given position and direction"""
        if not view:
            view = active_objects.view
        if hasattr(view, "CameraFocalPoint"):
            view.CameraFocalPoint = position
        if hasattr(view, "CameraPosition"):
            for i in range(3):
                view.CameraPosition[i] = position[i] - direction[i]
        if hasattr(view, "CameraViewUp") and up:
            view.CameraViewUp = up
        ResetCamera(view)

https://github.com/Kitware/ParaView/blob/f760af9124ff4634b23ebbeab95a4f56e0261955/Wrapping/Python/paraview/simple.py#L361-L372
mongodb/mongo | d8ff665343ad29cf286ee2cf4a1960d29371937b | src/third_party/scons-3.1.2/scons-local-3.1.2/SCons/Platform/virtualenv.py | python | _inject_venv_path

    def _inject_venv_path(env, path_list=None):
        """Modify environment such that SCons will take into account its virtualenv
        when running external tools."""
        if path_list is None:
            path_list = os.getenv('PATH')
        env.PrependENVPath('PATH', select_paths_in_venv(path_list))

https://github.com/mongodb/mongo/blob/d8ff665343ad29cf286ee2cf4a1960d29371937b/src/third_party/scons-3.1.2/scons-local-3.1.2/SCons/Platform/virtualenv.py#L77-L82
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/gsutil/third_party/boto/boto/kms/layer1.py | python | KMSConnection.revoke_grant

    def revoke_grant(self, key_id, grant_id):
        """
        Revokes a grant. You can revoke a grant to actively deny
        operations that depend on it.

        :type key_id: string
        :param key_id: Unique identifier of the key associated with the grant.

        :type grant_id: string
        :param grant_id: Identifier of the grant to be revoked.
        """
        params = {'KeyId': key_id, 'GrantId': grant_id, }
        return self.make_request(action='RevokeGrant',
                                 body=json.dumps(params))

https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/gsutil/third_party/boto/boto/kms/layer1.py#L767-L781
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/tempfile.py | python | _get_default_tempdir

    def _get_default_tempdir():
        """Calculate the default directory to use for temporary files.
        This routine should be called exactly once.

        We determine whether or not a candidate temp dir is usable by
        trying to create and write to a file in that directory.  If this
        is successful, the test file is deleted.  To prevent denial of
        service, the name of the test file must be randomized."""

        namer = _RandomNameSequence()
        dirlist = _candidate_tempdir_list()

        for dir in dirlist:
            if dir != _os.curdir:
                dir = _os.path.abspath(dir)
            # Try only a few names per directory.
            for seq in range(100):
                name = next(namer)
                filename = _os.path.join(dir, name)
                try:
                    fd = _os.open(filename, _bin_openflags, 0o600)
                    try:
                        try:
                            with _io.open(fd, 'wb', closefd=False) as fp:
                                fp.write(b'blat')
                        finally:
                            _os.close(fd)
                    finally:
                        _os.unlink(filename)
                    return dir
                except FileExistsError:
                    pass
                except PermissionError:
                    # This exception is thrown when a directory with the chosen name
                    # already exists on windows.
                    if (_os.name == 'nt' and _os.path.isdir(dir) and
                        _os.access(dir, _os.W_OK)):
                        continue
                    break   # no point trying more names in this directory
                except OSError:
                    break   # no point trying more names in this directory
        raise FileNotFoundError(_errno.ENOENT,
                                "No usable temporary directory found in %s" %
                                dirlist)

https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/tempfile.py#L186-L229
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/py2/scipy/interpolate/interpolate.py | python | PPoly.roots

    def roots(self, discontinuity=True, extrapolate=None):
        """
        Find real roots of the piecewise polynomial.

        Parameters
        ----------
        discontinuity : bool, optional
            Whether to report sign changes across discontinuities at
            breakpoints as roots.
        extrapolate : {bool, 'periodic', None}, optional
            If bool, determines whether to return roots from the polynomial
            extrapolated based on first and last intervals, 'periodic' works
            the same as False. If None (default), use `self.extrapolate`.

        Returns
        -------
        roots : ndarray
            Roots of the polynomial(s).

            If the PPoly object describes multiple polynomials, the
            return value is an object array whose each element is an
            ndarray containing the roots.

        See Also
        --------
        PPoly.solve
        """
        return self.solve(0, discontinuity, extrapolate)

https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/py2/scipy/interpolate/interpolate.py#L1242-L1269
kamyu104/LeetCode-Solutions | 77605708a927ea3b85aee5a479db733938c7c211 | Python/lowest-common-ancestor-of-a-binary-tree-iii.py | python | Solution.lowestCommonAncestor

    def lowestCommonAncestor(self, p, q):
        """
        :type node: Node
        :rtype: Node
        """
        a, b = p, q
        while a != b:
            a = a.parent if a else q
            b = b.parent if b else p
        return a

https://github.com/kamyu104/LeetCode-Solutions/blob/77605708a927ea3b85aee5a479db733938c7c211/Python/lowest-common-ancestor-of-a-binary-tree-iii.py#L10-L19
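The two-pointer walk above can be exercised standalone; the `Node` class below is a minimal hypothetical stand-in for the problem's node type (only a `parent` link is needed). Each pointer restarts from the *other* node when it falls off the root, so both traverse the same total distance and meet at the lowest common ancestor.

```python
class Node(object):
    """Minimal hypothetical stand-in for the problem's Node type."""
    def __init__(self, val, parent=None):
        self.val = val
        self.parent = parent

def lowest_common_ancestor(p, q):
    # Walk both pointers upward; a pointer that reaches None restarts
    # from the other start node, equalizing the path lengths.
    a, b = p, q
    while a != b:
        a = a.parent if a else q
        b = b.parent if b else p
    return a

# Build a tiny tree:   3
#                     / \
#                    5   1
root = Node(3)
left, right = Node(5, root), Node(1, root)
assert lowest_common_ancestor(left, right) is root
assert lowest_common_ancestor(left, root) is root
```

The same trick is commonly used to find the intersection of two linked lists.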
kamyu104/LeetCode-Solutions | 77605708a927ea3b85aee5a479db733938c7c211 | Python/masking-personal-information.py | python | Solution.maskPII

    def maskPII(self, S):
        """
        :type S: str
        :rtype: str
        """
        if '@' in S:
            first, after = S.split('@')
            return "{}*****{}@{}".format(first[0], first[-1], after).lower()
        digits = filter(lambda x: x.isdigit(), S)
        local = "***-***-{}".format(digits[-4:])
        if len(digits) == 10:
            return local
        return "+{}-{}".format('*' * (len(digits) - 10), local)

https://github.com/kamyu104/LeetCode-Solutions/blob/77605708a927ea3b85aee5a479db733938c7c211/Python/masking-personal-information.py#L5-L18
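The solution above is Python 2 code: there `filter` returns a list, which is then sliced and measured directly. A Python 3 sketch of the same masking logic materializes the digits into a list first; otherwise the behavior is unchanged.

```python
def mask_pii(s):
    # Python 3 adaptation of the solution above: emails keep only the
    # first and last letter of the local part; phone numbers keep only
    # the last four digits, with one '*' per extra country-code digit.
    if '@' in s:
        first, after = s.split('@')
        return "{}*****{}@{}".format(first[0], first[-1], after).lower()
    digits = [c for c in s if c.isdigit()]   # list, unlike py2 filter()
    local = "***-***-{}".format(''.join(digits[-4:]))
    if len(digits) == 10:
        return local
    return "+{}-{}".format('*' * (len(digits) - 10), local)

assert mask_pii("LeetCode@LeetCode.com") == "l*****e@leetcode.com"
assert mask_pii("1(234)567-890") == "***-***-7890"
assert mask_pii("86-(10)12345678") == "+**-***-***-5678"
```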
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/aui.py | python | AuiToolBarItem.Assign

    def Assign(*args, **kwargs):
        """Assign(self, AuiToolBarItem c)"""
        return _aui.AuiToolBarItem_Assign(*args, **kwargs)

https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/aui.py#L1725-L1727
natanielruiz/android-yolo | 1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f | jni-build/jni/include/tensorflow/contrib/distributions/python/ops/dirichlet.py | python | Dirichlet.get_event_shape

    def get_event_shape(self):
      """`TensorShape` available at graph construction time.

      Same meaning as `event_shape`. May be only partially defined.

      Returns:
        event shape
      """
      return self._get_event_shape

https://github.com/natanielruiz/android-yolo/blob/1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f/jni-build/jni/include/tensorflow/contrib/distributions/python/ops/dirichlet.py#L221-L229
alexgkendall/caffe-segnet | 344c113bf1832886f1cbe9f33ffe28a3beeaf412 | scripts/cpp_lint.py | python | RemoveMultiLineCommentsFromRange

    def RemoveMultiLineCommentsFromRange(lines, begin, end):
      """Clears a range of lines for multi-line comments."""
      # Having // dummy comments makes the lines non-empty, so we will not get
      # unnecessary blank line warnings later in the code.
      for i in range(begin, end):
        lines[i] = '// dummy'

https://github.com/alexgkendall/caffe-segnet/blob/344c113bf1832886f1cbe9f33ffe28a3beeaf412/scripts/cpp_lint.py#L1143-L1148
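The helper mutates the `lines` list in place, blanking the half-open range `[begin, end)` with a non-empty dummy comment so later blank-line checks are not triggered. A quick usage sketch (a hypothetical snake_case copy of the helper, so it runs outside cpplint):

```python
def remove_multi_line_comments_from_range(lines, begin, end):
    # In-place: overwrite lines [begin, end) with a non-empty dummy comment.
    for i in range(begin, end):
        lines[i] = '// dummy'

source = ['int x;', '/*', 'old code', '*/', 'int y;']
remove_multi_line_comments_from_range(source, 1, 4)
assert source == ['int x;', '// dummy', '// dummy', '// dummy', 'int y;']
```

Note that `end` is exclusive, matching Python's usual `range` convention.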
schwehr/libais | 1e19605942c8e155cd02fde6d1acde75ecd15d75 | third_party/gmock/scripts/upload.py | python | SubversionVCS._CollapseKeywords

    def _CollapseKeywords(self, content, keyword_str):
      """Collapses SVN keywords."""
      # svn cat translates keywords but svn diff doesn't. As a result of this
      # behavior patching.PatchChunks() fails with a chunk mismatch error.
      # This part was originally written by the Review Board development team
      # who had the same problem (http://reviews.review-board.org/r/276/).
      # Mapping of keywords to known aliases
      svn_keywords = {
        # Standard keywords
        'Date':                ['Date', 'LastChangedDate'],
        'Revision':            ['Revision', 'LastChangedRevision', 'Rev'],
        'Author':              ['Author', 'LastChangedBy'],
        'HeadURL':             ['HeadURL', 'URL'],
        'Id':                  ['Id'],

        # Aliases
        'LastChangedDate':     ['LastChangedDate', 'Date'],
        'LastChangedRevision': ['LastChangedRevision', 'Rev', 'Revision'],
        'LastChangedBy':       ['LastChangedBy', 'Author'],
        'URL':                 ['URL', 'HeadURL'],
      }

      def repl(m):
        if m.group(2):
          return "$%s::%s$" % (m.group(1), " " * len(m.group(3)))
        return "$%s$" % m.group(1)

      keywords = [keyword
                  for name in keyword_str.split(" ")
                  for keyword in svn_keywords.get(name, [])]
      return re.sub(r"\$(%s):(:?)([^\$]+)\$" % '|'.join(keywords), repl, content)

https://github.com/schwehr/libais/blob/1e19605942c8e155cd02fde6d1acde75ecd15d75/third_party/gmock/scripts/upload.py#L805-L834
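The keyword-collapsing regex depends only on `re`, so it can be sketched as a module-level function (a hypothetical adaptation of the method above that takes the alias map as a parameter). Expanded keywords like `$Id: file.py 123 $` collapse back to `$Id$`, which is what an `svn diff` against a pristine checkout expects.

```python
import re

def collapse_keywords(content, keyword_str, svn_keywords):
    # Same regex strategy as the method above: match "$Keyword: value $"
    # (or the fixed-length "$Keyword:: value $" form) and collapse it.
    def repl(m):
        if m.group(2):
            return "$%s::%s$" % (m.group(1), " " * len(m.group(3)))
        return "$%s$" % m.group(1)
    keywords = [kw for name in keyword_str.split(" ")
                for kw in svn_keywords.get(name, [])]
    return re.sub(r"\$(%s):(:?)([^\$]+)\$" % '|'.join(keywords), repl, content)

aliases = {'Id': ['Id'], 'Date': ['Date', 'LastChangedDate']}
assert collapse_keywords("$Id: file.py 123 $", "Id", aliases) == "$Id$"
```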
LiquidPlayer/LiquidCore | 9405979363f2353ac9a71ad8ab59685dd7f919c9 | deps/node-10.15.3/tools/gyp/pylib/gyp/generator/cmake.py | python | CreateCMakeTargetBaseName

    def CreateCMakeTargetBaseName(qualified_target):
      """This is the name we would like the target to have."""
      _, gyp_target_name, gyp_target_toolset = (
          gyp.common.ParseQualifiedTarget(qualified_target))
      cmake_target_base_name = gyp_target_name
      if gyp_target_toolset and gyp_target_toolset != 'target':
        cmake_target_base_name += '_' + gyp_target_toolset
      return StringToCMakeTargetName(cmake_target_base_name)

https://github.com/LiquidPlayer/LiquidCore/blob/9405979363f2353ac9a71ad8ab59685dd7f919c9/deps/node-10.15.3/tools/gyp/pylib/gyp/generator/cmake.py#L552-L559
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/util/proxy.py | python | connection_requires_http_tunnel

    def connection_requires_http_tunnel(
        proxy_url=None, proxy_config=None, destination_scheme=None
    ):
        """
        Returns True if the connection requires an HTTP CONNECT through the proxy.

        :param URL proxy_url:
            URL of the proxy.
        :param ProxyConfig proxy_config:
            Proxy configuration from poolmanager.py
        :param str destination_scheme:
            The scheme of the destination. (i.e https, http, etc)
        """
        # If we're not using a proxy, no way to use a tunnel.
        if proxy_url is None:
            return False

        # HTTP destinations never require tunneling, we always forward.
        if destination_scheme == "http":
            return False

        # Support for forwarding with HTTPS proxies and HTTPS destinations.
        if (
            proxy_url.scheme == "https"
            and proxy_config
            and proxy_config.use_forwarding_for_https
        ):
            return False

        # Otherwise always use a tunnel.
        return True

https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/util/proxy.py#L4-L34
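The decision logic can be exercised with lightweight stand-ins for urllib3's `URL` and `ProxyConfig` objects; the namedtuples below are hypothetical and model only the two attributes the function actually reads.

```python
from collections import namedtuple

# Hypothetical stand-ins: urllib3's real URL / ProxyConfig carry more fields.
FakeURL = namedtuple('FakeURL', 'scheme')
FakeProxyConfig = namedtuple('FakeProxyConfig', 'use_forwarding_for_https')

def connection_requires_http_tunnel(proxy_url=None, proxy_config=None,
                                    destination_scheme=None):
    # Mirrors the function above: tunnel (CONNECT) unless there is no proxy,
    # the destination is plain HTTP, or HTTPS-forwarding is enabled.
    if proxy_url is None:
        return False
    if destination_scheme == "http":
        return False
    if (proxy_url.scheme == "https" and proxy_config
            and proxy_config.use_forwarding_for_https):
        return False
    return True

assert not connection_requires_http_tunnel()                        # no proxy
assert not connection_requires_http_tunnel(FakeURL("http"), None, "http")
assert connection_requires_http_tunnel(FakeURL("http"), None, "https")
assert not connection_requires_http_tunnel(
    FakeURL("https"), FakeProxyConfig(True), "https")               # forwarding
```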
adobe/chromium | cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7 | webkit/support/setup_third_party.py | python | TryToMakeDir

    def TryToMakeDir(dir_name):
      """Create the directory dir_name if it doesn't exist."""
      try:
        os.makedirs(dir_name)
      except OSError, e:
        if e.errno != errno.EEXIST:
          raise e

https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/webkit/support/setup_third_party.py#L81-L87
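The helper above uses Python 2 `except` syntax (`except OSError, e`). A Python 3 sketch of the same idempotent mkdir, exercised in a throwaway temporary directory (`os.makedirs(..., exist_ok=True)` would also work, but this mirrors the original's errno check):

```python
import errno
import os
import tempfile

def try_to_make_dir(dir_name):
    # Create dir_name (and parents); swallow only the "already exists" error.
    try:
        os.makedirs(dir_name)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, 'a', 'b')
    try_to_make_dir(target)
    try_to_make_dir(target)          # second call is a no-op
    assert os.path.isdir(target)
```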
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/lib/plot.py | python | PlotCanvas.Printout

    def Printout(self, paper=None):
        """Print current plot."""
        if paper != None:
            self.print_data.SetPaperId(paper)
        pdd = wx.PrintDialogData(self.print_data)
        printer = wx.Printer(pdd)
        out = PlotPrintout(self)
        print_ok = printer.Print(self.parent, out)
        if print_ok:
            self._print_data = wx.PrintData(
                printer.GetPrintDialogData().GetPrintData())
        out.Destroy()

https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/plot.py#L777-L788
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/ops/ragged/ragged_shape.py | python | _get_layer_broadcasters_from_rps

    def _get_layer_broadcasters_from_rps(zero_broadcaster, source_rps, target_rps):
      """Get LayerBroadcasters from RowPartitions.

      *--zero_broadcaster->*
      |                    |
      source_rps[0]        target_rps[0]
      |                    |
      V                    V
      *---result[1]------->*
      |                    |
      source_rps[1]        target_rps[1]
      |                    |
      V                    V
      *---result[2]------->*
      .
      .
      .
      *---result[k-1]----->*
      |                    |
      source_rps[k]        target_rps[k]
      |                    |
      V                    V
      *---result[k]------->*

      Note: result[0] = zero_broadcaster

      Args:
        zero_broadcaster: a broadcaster between the source and target row
          partitions' rows, and equal to result[0].
        source_rps: source row partitions.
        target_rps: target row partitions (same length as source_rps).

      Returns:
        result: a list of LayerBroadcasters.
      """
      if not isinstance(zero_broadcaster, _LayerBroadcaster):
        raise TypeError("Not a _LayerBroadcaster: " + str(zero_broadcaster))
      assert len(source_rps) == len(target_rps)
      if not source_rps:
        return [zero_broadcaster]
      next_broadcaster = zero_broadcaster.next_layer(source_rps[0], target_rps[0])
      tail_broadcasters = _get_layer_broadcasters_from_rps(next_broadcaster,
                                                           source_rps[1:],
                                                           target_rps[1:])
      return [zero_broadcaster] + tail_broadcasters

https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/ops/ragged/ragged_shape.py#L1375-L1419
ChromiumWebApps/chromium | c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7 | build/android/pylib/cmd_helper.py | python | RunCmd

    def RunCmd(args, cwd=None):
      """Opens a subprocess to execute a program and returns its return value.

      Args:
        args: A string or a sequence of program arguments. The program to execute is
          the string or the first item in the args sequence.
        cwd: If not None, the subprocess's current directory will be changed to
          |cwd| before it's executed.

      Returns:
        Return code from the command execution.
      """
      logging.info(str(args) + ' ' + (cwd or ''))
      return Call(args, cwd=cwd)

https://github.com/ChromiumWebApps/chromium/blob/c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7/build/android/pylib/cmd_helper.py#L30-L43
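The `RunCmd` record above delegates to a `Call` helper defined elsewhere in that repo; a minimal self-contained equivalent using only the standard library (the `run_cmd` name and logging format here are illustrative, not Chromium's actual helpers) might look like:

```python
import logging
import subprocess


def run_cmd(args, cwd=None):
    """Run a command, log it, and return its exit code.

    Mirrors the RunCmd/Call pattern above: ``args`` is a program plus its
    arguments; ``cwd``, if given, becomes the child's working directory.
    """
    logging.info('%s %s', args, cwd or '')
    # subprocess.call blocks until the child exits and returns its returncode.
    return subprocess.call(args, cwd=cwd)
```

For example, `run_cmd([sys.executable, '-c', 'pass'])` returns the child's exit code, `0` for a no-op script.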
stellar-deprecated/stellard | 67eabb2217bdfa9a6ea317f62338fb6bca458c90 | src/protobuf/python/google/protobuf/internal/cpp_message.py | python | NewMessage

    def NewMessage(bases, message_descriptor, dictionary):
      """Creates a new protocol message *class*."""
      _AddClassAttributesForNestedExtensions(message_descriptor, dictionary)
      _AddEnumValues(message_descriptor, dictionary)
      _AddDescriptors(message_descriptor, dictionary)
      return bases

https://github.com/stellar-deprecated/stellard/blob/67eabb2217bdfa9a6ea317f62338fb6bca458c90/src/protobuf/python/google/protobuf/internal/cpp_message.py#L374-L379
hakuna-m/wubiuefi | caec1af0a09c78fd5a345180ada1fe45e0c63493 | src/openpgp/sap/crypto.py | python | pad_rsa

    def pad_rsa(alg_hash, hashed_msg, rsa_n_bit_length):
        """Pad RSA signature hashed data with "full hash prefixes".

        :Parameters:
            - `alg_hash`: integer hash algorithm constant
            - `hashed_msg`: string of [already] hashed data
            - `rsa_n_bit_length`: integer RSA MPI "n" bit length

        :Returns: string hashed data padded according to rfc2440 5.2.2.
        """
        # "full hash prefixes"
        if HASH_MD5 == alg_hash:
            prefix = '\x30\x20\x30\x0C\x06\x08\x2A\x86\x48\x86\xF7\x0D\x02\x05\x05\x00\x04\x10'
        elif HASH_SHA1 == alg_hash:
            prefix = '\x30\x21\x30\x09\x06\x05\x2b\x0E\x03\x02\x1A\x05\x00\x04\x14'
        elif HASH_SHA224 == alg_hash:
            prefix = '\x30\x2d\x30\x0d\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x04\x05\x00\x04\x1c'
        elif HASH_SHA256 == alg_hash:
            prefix = '\x30\x31\x30\x0d\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20'
        elif HASH_SHA384 == alg_hash:
            prefix = '\x30\x41\x30\x0d\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x02\x05\x00\x04\x30'
        elif HASH_SHA512 == alg_hash:
            prefix = '\x30\x51\x30\x0d\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x03\x05\x00\x04\x40'
        else:
            raise NotImplementedError, "Prefix unassigned for RSA signature hash->(%s)" % alg_hash
        padlen = ((rsa_n_bit_length + 7)/8) - len(prefix) - len(hashed_msg) - 3
        padding = ''.join(['\xff' for x in range(padlen)])
        return ''.join(['\x00\x01', padding, '\x00', prefix, hashed_msg])

https://github.com/hakuna-m/wubiuefi/blob/caec1af0a09c78fd5a345180ada1fe45e0c63493/src/openpgp/sap/crypto.py#L192-L219
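The record above is Python 2 and hard-codes the RFC 2440 "full hash prefixes". The same EMSA-PKCS1-v1_5 layout (`00 01 FF..FF 00 <DigestInfo> <hash>`) can be sketched in Python 3 for SHA-256 only; treat this as an illustrative sketch, not a drop-in replacement for the function above:

```python
import hashlib

# DigestInfo prefix for SHA-256, the same byte string as the HASH_SHA256
# branch in the record above (see RFC 8017, section 9.2).
SHA256_PREFIX = bytes.fromhex('3031300d060960864801650304020105000420')


def pad_rsa_sha256(message: bytes, rsa_n_bit_length: int) -> bytes:
    """EMSA-PKCS1-v1_5 encoding of SHA-256(message) for an RSA modulus
    of the given bit length: 00 01 FF..FF 00 <prefix> <hash>."""
    digest = hashlib.sha256(message).digest()
    em_len = (rsa_n_bit_length + 7) // 8
    pad_len = em_len - len(SHA256_PREFIX) - len(digest) - 3
    if pad_len < 8:
        raise ValueError("modulus too small for this hash")
    return b'\x00\x01' + b'\xff' * pad_len + b'\x00' + SHA256_PREFIX + digest
```

The encoded message is always exactly the octet length of the modulus, so for a 2048-bit key `len(pad_rsa_sha256(msg, 2048))` is 256.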
hpi-xnor/BMXNet-v2 | af2b1859eafc5c721b1397cef02f946aaf2ce20d | python/mxnet/contrib/onnx/onnx2mx/_op_translations.py | python | softplus

    def softplus(attrs, inputs, proto_obj):
        """Applies the softplus activation function element-wise to the input."""
        new_attrs = translation_utils._add_extra_attributes(attrs, {'act_type' : 'softrelu'})
        return 'Activation', new_attrs, inputs

https://github.com/hpi-xnor/BMXNet-v2/blob/af2b1859eafc5c721b1397cef02f946aaf2ce20d/python/mxnet/contrib/onnx/onnx2mx/_op_translations.py#L312-L315
PaddlePaddle/Paddle | 1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c | python/paddle/fluid/transpiler/collective.py | python | MultiThread._insert_allgather_ops

    def _insert_allgather_ops(self):
        """
        insert allgather op to the main_program
        """
        block = self.main_program.global_block()
        ring_id = -1
        grad = None
        for idx, op in reversed(list(enumerate(block.ops))):
            if self._is_backward_op(op) and \
                    self.op_role_var_key in op.attr_names:
                op_role_var = op.all_attrs()[self.op_role_var_key]
                if len(op_role_var) == 0:
                    continue
                assert len(op_role_var) % 2 == 0
                offset = idx
                for i in range(0, len(op_role_var), 2):
                    param = block.vars[op_role_var[i]]
                    new_grad_var = block.create_var(
                        name=op_role_var[i] + "_allgather",
                        shape=[self.allgather_ranks] + list(param.shape),
                        persistable=False,
                        dtype=core.VarDesc.VarType.FP32,
                        stop_gradient=True)
                    grad = block.vars[op_role_var[i + 1]]
                    if param.is_distributed:  # no need to care: used in PLSC
                        continue
                    if offset == idx:
                        offset += 1
                        block._insert_op(
                            offset,
                            type='c_sync_calc_stream',
                            inputs={'X': grad},
                            outputs={'Out': grad},
                            attrs={self.op_role_key: OpRole.Backward})
                        offset += 1
                    # As we search ops reversedly, we should insert c_allgather
                    # op in the same way to keep the ring_id alternate
                    ring_id = (ring_id + 1) % self.nrings
                    block._insert_op(
                        offset,
                        type='c_allgather',
                        inputs={'X': grad},
                        outputs={'Out': new_grad_var},
                        attrs={
                            'nranks': self.allgather_ranks,
                            'ring_id': ring_id,
                            self.op_role_key: OpRole.Backward
                        })

        if grad is None:
            return

        for idx, op in enumerate(block.ops):
            if self._is_optimizer_op(op):
                for ring_id in range(self.nrings):
                    block._insert_op(
                        idx + ring_id,
                        type='c_sync_comm_stream',
                        inputs={'X': grad},
                        outputs={'Out': grad},
                        attrs={
                            'ring_id': ring_id,
                            self.op_role_key: OpRole.Backward
                        })
                break

https://github.com/PaddlePaddle/Paddle/blob/1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c/python/paddle/fluid/transpiler/collective.py#L483-L550
mhammond/pywin32 | 44afd86ba8485194df93234639243252deeb40d5 | com/win32comext/adsi/demos/scp.py | python | do_ScpDelete

    def do_ScpDelete(po):
        """Delete a Service Connection Point"""
        sc = _get_option(po, "service_class")
        try:
            ScpDelete(sc)
        except adsi.error as details:
            if details[0] != winerror.ERROR_DS_OBJ_NOT_FOUND:
                raise
            log(2, "ScpDelete ignoring ERROR_DS_OBJ_NOT_FOUND for service-class '%s'", sc)
        return sc

https://github.com/mhammond/pywin32/blob/44afd86ba8485194df93234639243252deeb40d5/com/win32comext/adsi/demos/scp.py#L302-L311
troldal/OpenXLSX | 3eb9c748e3ecd865203fb9946ea86d3c02b3f7d9 | Benchmarks/gbench/tools/strip_asm.py | python | process_asm

    def process_asm(asm):
        """
        Strip the ASM of unwanted directives and lines
        """
        new_contents = ''
        asm = transform_labels(asm)

        # TODO: Add more things we want to remove
        discard_regexes = [
            re.compile("\s+\..*$"), # directive
            re.compile("\s*#(NO_APP|APP)$"), #inline ASM
            re.compile("\s*#.*$"), # comment line
            re.compile("\s*\.globa?l\s*([.a-zA-Z_][a-zA-Z0-9$_.]*)"), #global directive
            re.compile("\s*\.(string|asciz|ascii|[1248]?byte|short|word|long|quad|value|zero)"),
        ]
        keep_regexes = [
        ]
        fn_label_def = re.compile("^[a-zA-Z_][a-zA-Z0-9_.]*:")
        for l in asm.splitlines():
            # Remove Mach-O attribute
            l = l.replace('@GOTPCREL', '')
            add_line = True
            for reg in discard_regexes:
                if reg.match(l) is not None:
                    add_line = False
                    break
            for reg in keep_regexes:
                if reg.match(l) is not None:
                    add_line = True
                    break
            if add_line:
                if fn_label_def.match(l) and len(new_contents) != 0:
                    new_contents += '\n'
                l = process_identifiers(l)
                new_contents += l
                new_contents += '\n'
        return new_contents

https://github.com/troldal/OpenXLSX/blob/3eb9c748e3ecd865203fb9946ea86d3c02b3f7d9/Benchmarks/gbench/tools/strip_asm.py#L84-L121
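The filter loop in `process_asm` above generalizes to any line-level discard list; the sketch below keeps two of the patterns from the record (abbreviated, and without the record's `transform_labels`/`process_identifiers` helpers):

```python
import re

# A reduced set of the discard patterns used above: assembler
# directives (leading whitespace then a dot) and comment-only lines.
DISCARD = [
    re.compile(r"\s+\..*$"),   # directive such as "  .text"
    re.compile(r"\s*#.*$"),    # comment line
]


def strip_asm_lines(asm: str) -> str:
    """Drop lines matching any discard pattern; keep everything else."""
    kept = []
    for line in asm.splitlines():
        # re.match anchors at the start of the line, as in the record above.
        if any(rx.match(line) for rx in DISCARD):
            continue
        kept.append(line)
    return "\n".join(kept) + ("\n" if kept else "")
```

On `"main:\n  .globl main\n# note\n  mov eax, 1\n"` this keeps the label and the `mov` while dropping the directive and the comment.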
Polidea/SiriusObfuscator | b0e590d8130e97856afe578869b83a209e2b19be | SymbolExtractorAndRenamer/lldb/scripts/Python/static-binding/lldb.py | python | SBValue.GetDisplayTypeName

    def GetDisplayTypeName(self):
        """GetDisplayTypeName(self) -> str"""
        return _lldb.SBValue_GetDisplayTypeName(self)

https://github.com/Polidea/SiriusObfuscator/blob/b0e590d8130e97856afe578869b83a209e2b19be/SymbolExtractorAndRenamer/lldb/scripts/Python/static-binding/lldb.py#L11815-L11817
pytorch/pytorch | 7176c92687d3cc847cc046bf002269c6949a21c2 | benchmarks/sparse/dlmc/utils.py | python | load_spmm_dataset

    def load_spmm_dataset(dataset_path, hidden_size, sparsity, spmm_type, device, n_limit=math.inf):
        """load_spmm_dataset loads a DLMC dataset for a sparse matrix-matrix multiplication (SPMM) performance test.
        Args:
            dataset_path:
                path of the dataset from DLMC collection.
            hidden_size
                This value allows tensors of varying sizes.
            sparsity:
                This value allows tensors of varying sparsities.
            spmm_type:
                This value allows tensors for `sparse@sparse` or `sparse@dense` operations.
            device:
                Whether to place the Tensor on a GPU or CPU.
            n_limit:
                This value allows a dataset with some limit size.
        """
        current_folder_path = f"{dataset_path}/{sparsity}"
        path = Path(current_folder_path)
        files = path.glob('**/*.smtx')
        print(dataset_path, hidden_size, sparsity)
        index = 0
        x_files, y_files = [], []
        for f in files:
            if index >= n_limit:
                break
            print('.', end='')
            size, nnz = read_matrix_params(f.as_posix())
            if size[1] == hidden_size:
                x_files.append(f.as_posix())
            if size[0] == hidden_size:
                y_files.append(f.as_posix())
            index += 1
        print()

        for fx, fy in zip(x_files, y_files):
            x = load_sparse_matrix(fx, device)
            y = gen_matrix(fy, device) if spmm_type == 'sparse@dense' else load_sparse_matrix(fy, device)
            yield (x, y)

https://github.com/pytorch/pytorch/blob/7176c92687d3cc847cc046bf002269c6949a21c2/benchmarks/sparse/dlmc/utils.py#L107-L144
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python/src/Lib/xml/sax/xmlreader.py | python | IncrementalParser.feed

    def feed(self, data):
        """This method gives the raw XML data in the data parameter to
        the parser and makes it parse the data, emitting the
        corresponding events. It is allowed for XML constructs to be
        split across several calls to feed.

        feed may raise SAXException."""
        raise NotImplementedError("This method must be implemented!")

https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/xml/sax/xmlreader.py#L127-L134
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/tools/Editra/src/eclib/platebtn.py | python | PlateButton.__InitColors

    def __InitColors(self):
        """Initialize the default colors"""
        color = GetHighlightColour()
        pcolor = AdjustColour(color, -12)
        colors = dict(default=True,
                      hlight=color,
                      press=pcolor,
                      htxt=BestLabelColour(self.GetForegroundColour()))
        return colors

https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/tools/Editra/src/eclib/platebtn.py#L320-L328
v8mips/v8mips | f0c9cc0bbfd461c7f516799d9a58e9a7395f737e | tools/stats-viewer.py | python | ChromeCounterCollection.CountersInUse

    def CountersInUse(self):
      """Return the number of counters in active use."""
      for i in xrange(self.max_counters):
        name_offset = self.counter_names_offset + i * self._COUNTER_NAME_SIZE
        if self.data.ByteAt(name_offset) == 0:
          return i
      return self.max_counters

https://github.com/v8mips/v8mips/blob/f0c9cc0bbfd461c7f516799d9a58e9a7395f737e/tools/stats-viewer.py#L436-L442
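The slot-scanning idea in `CountersInUse` above can be sketched against a plain bytes buffer; the 64-byte slot width below is an assumption for illustration (the real class derives its offsets from the shared-memory counter layout):

```python
COUNTER_NAME_SIZE = 64  # illustrative fixed slot width, not the real viewer's value


def counters_in_use(names_blob: bytes, max_counters: int) -> int:
    """Scan fixed-width name slots in a byte buffer.

    The first slot whose name starts with a NUL byte marks the end of
    the counters in active use, mirroring the loop in the record above.
    """
    for i in range(max_counters):
        if names_blob[i * COUNTER_NAME_SIZE] == 0:
            return i
    return max_counters
```

For a buffer with two named slots followed by an empty one, the function returns 2; if every slot is named, it returns `max_counters`.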
microsoft/EdgeML | ef9f8a77f096acbdeb941014791f8eda1c1bc35b | examples/pytorch/vision/Face_Detection/models/RPool_Face_C.py | python | S3FD.__init__

    def __init__(self, phase, base, head, num_classes):
        super(S3FD, self).__init__()
        self.phase = phase
        self.num_classes = num_classes
        '''
        self.priorbox = PriorBox(size,cfg)
        self.priors = Variable(self.priorbox.forward(), volatile=True)
        '''
        # SSD network
        self.unfold = nn.Unfold(kernel_size=(8,8),stride=(4,4))
        self.rnn_model = RNNPool(8, 8, 16, 16, 3)
        self.mob = nn.ModuleList(base)

        # Layer learns to scale the l2 normalized features from conv4_3
        self.L2Norm3_3 = L2Norm(24, 10)
        self.L2Norm4_3 = L2Norm(32, 8)
        self.L2Norm5_3 = L2Norm(64, 5)

        self.loc = nn.ModuleList(head[0])
        self.conf = nn.ModuleList(head[1])

        if self.phase == 'test':
            self.softmax = nn.Softmax(dim=-1)

https://github.com/microsoft/EdgeML/blob/ef9f8a77f096acbdeb941014791f8eda1c1bc35b/examples/pytorch/vision/Face_Detection/models/RPool_Face_C.py#L39-L64
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/lib/plot.py | python | PolyLine.getSymExtent

    def getSymExtent(self, printerScale):
        """Width and Height of Marker"""
        h = self.attributes['width'] * printerScale * self._pointSize[0]
        w = 5 * h
        return (w, h)

https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/plot.py#L264-L268
Polidea/SiriusObfuscator | b0e590d8130e97856afe578869b83a209e2b19be | SymbolExtractorAndRenamer/lldb/utils/lui/lldbutil.py | python | get_FPRs

    def get_FPRs(frame):
        """Returns the floating point registers of the frame as an SBValue.

        The returned SBValue object is iterable.  An example:
            ...
            from lldbutil import get_FPRs
            regs = get_FPRs(frame)
            for reg in regs:
                print "%s => %s" % (reg.GetName(), reg.GetValue())
            ...
        """
        return get_registers(frame, "floating point")

https://github.com/Polidea/SiriusObfuscator/blob/b0e590d8130e97856afe578869b83a209e2b19be/SymbolExtractorAndRenamer/lldb/utils/lui/lldbutil.py#L927-L938
apache/incubator-mxnet | f03fb23f1d103fec9541b5ae59ee06b1734a51d9 | python/mxnet/image/detection.py | python | ImageDetIter.reshape

    def reshape(self, data_shape=None, label_shape=None):
        """Reshape iterator for data_shape or label_shape.

        Parameters
        ----------
        data_shape : tuple or None
            Reshape the data_shape to the new shape if not None
        label_shape : tuple or None
            Reshape label shape to new shape if not None
        """
        if data_shape is not None:
            self.check_data_shape(data_shape)
            self.provide_data = [(self.provide_data[0][0], (self.batch_size,) + data_shape)]
            self.data_shape = data_shape
        if label_shape is not None:
            self.check_label_shape(label_shape)
            self.provide_label = [(self.provide_label[0][0], (self.batch_size,) + label_shape)]
            self.label_shape = label_shape

https://github.com/apache/incubator-mxnet/blob/f03fb23f1d103fec9541b5ae59ee06b1734a51d9/python/mxnet/image/detection.py#L743-L760