| nwo (string, 5–86 chars) | sha (string, 40 chars) | path (string, 4–189 chars) | language (1 class) | identifier (string, 1–94 chars) | parameters (string, 2–4.03k chars) | argument_list (1 class) | return_statement (string, 0–11.5k chars) | docstring (string, 1–33.2k chars) | docstring_summary (string, 0–5.15k chars) | docstring_tokens (list) | function (string, 34–151k chars) | function_tokens (list) | url (string, 90–278 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
apple/turicreate | cce55aa5311300e3ce6af93cb45ba791fd1bdf49 | src/python/turicreate/toolkits/text_classifier/_text_classifier.py | python | TextClassifier.evaluate | (self, dataset, metric="auto", **kwargs) | return m.evaluate(test, metric, **kwargs) | Evaluate the model by making predictions of target values and comparing
these to actual values.
Parameters
----------
dataset : SFrame
An SFrame having the same feature columns as provided when creating
the model.
metric : str, optional
Name of the evaluation metric. Possible values are:
- 'auto' : Returns all available metrics.
- 'accuracy' : Classification accuracy (micro average).
- 'auc' : Area under the ROC curve (macro average)
- 'precision' : Precision score (macro average)
- 'recall' : Recall score (macro average)
- 'f1_score' : F1 score (macro average)
- 'log_loss' : Log loss
- 'confusion_matrix' : An SFrame with counts of possible prediction/true label combinations.
- 'roc_curve' : An SFrame containing information needed for an ROC curve
For more flexibility in calculating evaluation metrics, use the
:class:`~turicreate.evaluation` module.
Returns
-------
out : dict
Dictionary of evaluation results where the key is the name of the
evaluation metric (e.g. `accuracy`) and the value is the evaluation
score.
See Also
----------
create, predict, classify | Evaluate the model by making predictions of target values and comparing
these to actual values. | [
"Evaluate",
"the",
"model",
"by",
"making",
"predictions",
"of",
"target",
"values",
"and",
"comparing",
"these",
"to",
"actual",
"values",
"."
] | def evaluate(self, dataset, metric="auto", **kwargs):
"""
Evaluate the model by making predictions of target values and comparing
these to actual values.
Parameters
----------
dataset : SFrame
An SFrame having the same feature columns as provided when creating
the model.
metric : str, optional
Name of the evaluation metric. Possible values are:
- 'auto' : Returns all available metrics.
- 'accuracy' : Classification accuracy (micro average).
- 'auc' : Area under the ROC curve (macro average)
- 'precision' : Precision score (macro average)
- 'recall' : Recall score (macro average)
- 'f1_score' : F1 score (macro average)
- 'log_loss' : Log loss
- 'confusion_matrix' : An SFrame with counts of possible prediction/true label combinations.
- 'roc_curve' : An SFrame containing information needed for an ROC curve
For more flexibility in calculating evaluation metrics, use the
:class:`~turicreate.evaluation` module.
Returns
-------
out : dict
Dictionary of evaluation results where the key is the name of the
evaluation metric (e.g. `accuracy`) and the value is the evaluation
score.
See Also
----------
create, predict, classify
"""
m = self.__proxy__["classifier"]
target = self.__proxy__["target"]
f = _BOW_FEATURE_EXTRACTOR
test = f(dataset, target)
return m.evaluate(test, metric, **kwargs) | [
"def",
"evaluate",
"(",
"self",
",",
"dataset",
",",
"metric",
"=",
"\"auto\"",
",",
"*",
"*",
"kwargs",
")",
":",
"m",
"=",
"self",
".",
"__proxy__",
"[",
"\"classifier\"",
"]",
"target",
"=",
"self",
".",
"__proxy__",
"[",
"\"target\"",
"]",
"f",
"... | https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/src/python/turicreate/toolkits/text_classifier/_text_classifier.py#L318-L361 | |
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/contrib/distributions/python/ops/binomial.py | python | Binomial._maybe_assert_valid_sample | (self, counts) | return control_flow_ops.with_dependencies([
check_ops.assert_less_equal(
counts, self.total_count,
message="counts are not less than or equal to n."),
], counts) | Check counts for proper shape, values, then return tensor version. | Check counts for proper shape, values, then return tensor version. | [
"Check",
"counts",
"for",
"proper",
"shape",
"values",
"then",
"return",
"tensor",
"version",
"."
] | def _maybe_assert_valid_sample(self, counts):
"""Check counts for proper shape, values, then return tensor version."""
if not self.validate_args:
return counts
counts = distribution_util.embed_check_nonnegative_integer_form(counts)
return control_flow_ops.with_dependencies([
check_ops.assert_less_equal(
counts, self.total_count,
message="counts are not less than or equal to n."),
], counts) | [
"def",
"_maybe_assert_valid_sample",
"(",
"self",
",",
"counts",
")",
":",
"if",
"not",
"self",
".",
"validate_args",
":",
"return",
"counts",
"counts",
"=",
"distribution_util",
".",
"embed_check_nonnegative_integer_form",
"(",
"counts",
")",
"return",
"control_flo... | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/contrib/distributions/python/ops/binomial.py#L275-L284 | |
OpenChemistry/tomviz | 0a903679318f191cb7dd3eb5ff5bc3a7d3320d9a | tomviz/python/tomviz/utils.py | python | get_coordinate_arrays | (dataobject) | return (xx, yy, zz) | Returns a triple of Numpy arrays containing x, y, and z coordinates for
each point in the dataset. This can be used to evaluate a function at each
point, for instance. | Returns a triple of Numpy arrays containing x, y, and z coordinates for
each point in the dataset. This can be used to evaluate a function at each
point, for instance. | [
"Returns",
"a",
"triple",
"of",
"Numpy",
"arrays",
"containing",
"x",
"y",
"and",
"z",
"coordinates",
"for",
"each",
"point",
"in",
"the",
"dataset",
".",
"This",
"can",
"be",
"used",
"to",
"evaluate",
"a",
"function",
"at",
"each",
"point",
"for",
"inst... | def get_coordinate_arrays(dataobject):
"""Returns a triple of Numpy arrays containing x, y, and z coordinates for
each point in the dataset. This can be used to evaluate a function at each
point, for instance.
"""
assert dataobject.IsA("vtkImageData"), "Dataset must be a vtkImageData"
# Create meshgrid for image
spacing = dataobject.GetSpacing()
origin = dataobject.GetOrigin()
dims = dataobject.GetDimensions()
x = [origin[0] + (spacing[0] * i) for i in range(dims[0])]
y = [origin[1] + (spacing[1] * i) for i in range(dims[1])]
z = [origin[2] + (spacing[2] * i) for i in range(dims[2])]
# The funny ordering is to match VTK's convention for point storage
yy, xx, zz = np.meshgrid(y, x, z)
return (xx, yy, zz) | [
"def",
"get_coordinate_arrays",
"(",
"dataobject",
")",
":",
"assert",
"dataobject",
".",
"IsA",
"(",
"\"vtkImageData\"",
")",
",",
"\"Dataset must be a vtkImageData\"",
"# Create meshgrid for image",
"spacing",
"=",
"dataobject",
".",
"GetSpacing",
"(",
")",
"origin",
... | https://github.com/OpenChemistry/tomviz/blob/0a903679318f191cb7dd3eb5ff5bc3a7d3320d9a/tomviz/python/tomviz/utils.py#L173-L191 | |
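Each coordinate list in `get_coordinate_arrays` is built as `origin[k] + spacing[k] * i` along its axis before the meshgrid step. A dependency-free sketch of that per-axis computation (illustrative helper, no VTK or NumPy):

```python
def axis_coordinates(origin, spacing, dims):
    """Per-axis point coordinates, origin[k] + spacing[k] * i, as in the VTK helper."""
    return [[origin[k] + spacing[k] * i for i in range(dims[k])]
            for k in range(3)]

x, y, z = axis_coordinates((0.0, 0.0, 0.0), (0.5, 1.0, 2.0), (3, 2, 2))
```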
albertz/openlierox | d316c14a8eb57848ef56e9bfa7b23a56f694a51b | tools/DedicatedServerVideo/atom/http_core.py | python | HttpRequest.add_form_inputs | (self, form_data,
mime_type='application/x-www-form-urlencoded') | Form-encodes and adds data to the request body.
Args:
form_data: dict or sequence of two-member tuples which contains the
form keys and values.
mime_type: str The MIME type of the form data being sent. Defaults
to 'application/x-www-form-urlencoded'. | Form-encodes and adds data to the request body. | [
"Form",
"-",
"encodes",
"and",
"adds",
"data",
"to",
"the",
"request",
"body",
"."
] | def add_form_inputs(self, form_data,
mime_type='application/x-www-form-urlencoded'):
"""Form-encodes and adds data to the request body.
Args:
form_data: dict or sequence of two-member tuples which contains the
form keys and values.
mime_type: str The MIME type of the form data being sent. Defaults
to 'application/x-www-form-urlencoded'.
"""
body = urllib.urlencode(form_data)
self.add_body_part(body, mime_type) | [
"def",
"add_form_inputs",
"(",
"self",
",",
"form_data",
",",
"mime_type",
"=",
"'application/x-www-form-urlencoded'",
")",
":",
"body",
"=",
"urllib",
".",
"urlencode",
"(",
"form_data",
")",
"self",
".",
"add_body_part",
"(",
"body",
",",
"mime_type",
")"
] | https://github.com/albertz/openlierox/blob/d316c14a8eb57848ef56e9bfa7b23a56f694a51b/tools/DedicatedServerVideo/atom/http_core.py#L153-L164 | ||
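The recorded function targets Python 2's `urllib.urlencode`; under Python 3 the same call lives in `urllib.parse`. A sketch of the form-encoding step (helper name is illustrative, not part of the atom library):

```python
from urllib.parse import urlencode

def form_body(form_data):
    """Form-encode a dict or sequence of two-member tuples, as add_form_inputs
    does before handing the body to add_body_part."""
    return urlencode(form_data)

body = form_body([("q", "open lierox"), ("page", 2)])
```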
ApolloAuto/apollo-platform | 86d9dc6743b496ead18d597748ebabd34a513289 | ros/bond_core/smclib/python/smclib/statemap.py | python | FSMContext.getState | (self) | return self._state | Returns the current state. | Returns the current state. | [
"Returns",
"the",
"current",
"state",
"."
] | def getState(self):
"""Returns the current state."""
if self._state == None:
raise StateUndefinedException
return self._state | [
"def",
"getState",
"(",
"self",
")",
":",
"if",
"self",
".",
"_state",
"==",
"None",
":",
"raise",
"StateUndefinedException",
"return",
"self",
".",
"_state"
] | https://github.com/ApolloAuto/apollo-platform/blob/86d9dc6743b496ead18d597748ebabd34a513289/ros/bond_core/smclib/python/smclib/statemap.py#L101-L105 | |
trailofbits/llvm-sanitizer-tutorial | d29dfeec7f51fbf234fd0080f28f2b30cd0b6e99 | llvm/tools/clang/bindings/python/clang/cindex.py | python | Cursor.underlying_typedef_type | (self) | return self._underlying_type | Return the underlying type of a typedef declaration.
Returns a Type for the typedef this cursor is a declaration for. If
the current cursor is not a typedef, this raises. | Return the underlying type of a typedef declaration. | [
"Return",
"the",
"underlying",
"type",
"of",
"a",
"typedef",
"declaration",
"."
] | def underlying_typedef_type(self):
"""Return the underlying type of a typedef declaration.
Returns a Type for the typedef this cursor is a declaration for. If
the current cursor is not a typedef, this raises.
"""
if not hasattr(self, '_underlying_type'):
assert self.kind.is_declaration()
self._underlying_type = \
conf.lib.clang_getTypedefDeclUnderlyingType(self)
return self._underlying_type | [
"def",
"underlying_typedef_type",
"(",
"self",
")",
":",
"if",
"not",
"hasattr",
"(",
"self",
",",
"'_underlying_type'",
")",
":",
"assert",
"self",
".",
"kind",
".",
"is_declaration",
"(",
")",
"self",
".",
"_underlying_type",
"=",
"conf",
".",
"lib",
"."... | https://github.com/trailofbits/llvm-sanitizer-tutorial/blob/d29dfeec7f51fbf234fd0080f28f2b30cd0b6e99/llvm/tools/clang/bindings/python/clang/cindex.py#L1685-L1696 | |
FreeCAD/FreeCAD | ba42231b9c6889b89e064d6d563448ed81e376ec | src/Mod/Arch/ArchEquipment.py | python | _Equipment.addSketchArchFeatures | (self,obj,linkObj=None,mode=None) | To add features in the SketchArch External Add-on, if present (https://github.com/paullee0/FreeCAD_SketchArch)
- import ArchSketchObject module, and
- set properties that are common to ArchObjects (including Links) and ArchSketch
to support the additional features
To install SketchArch External Add-on, see https://github.com/paullee0/FreeCAD_SketchArch#iv-install | To add features in the SketchArch External Add-on, if present (https://github.com/paullee0/FreeCAD_SketchArch)
- import ArchSketchObject module, and
- set properties that are common to ArchObjects (including Links) and ArchSketch
to support the additional features | [
"To",
"add",
"features",
"in",
"the",
"SketchArch",
"External",
"Add",
"-",
"on",
"if",
"present",
"(",
"https",
":",
"//",
"github",
".",
"com",
"/",
"paullee0",
"/",
"FreeCAD_SketchArch",
")",
"-",
"import",
"ArchSketchObject",
"module",
"and",
"-",
"set... | def addSketchArchFeatures(self,obj,linkObj=None,mode=None):
'''
To add features in the SketchArch External Add-on, if present (https://github.com/paullee0/FreeCAD_SketchArch)
- import ArchSketchObject module, and
- set properties that are common to ArchObjects (including Links) and ArchSketch
to support the additional features
To install SketchArch External Add-on, see https://github.com/paullee0/FreeCAD_SketchArch#iv-install
'''
try:
import ArchSketchObject
ArchSketchObject.ArchSketch.setPropertiesLinkCommon(self, obj, linkObj, mode)
except:
pass | [
"def",
"addSketchArchFeatures",
"(",
"self",
",",
"obj",
",",
"linkObj",
"=",
"None",
",",
"mode",
"=",
"None",
")",
":",
"try",
":",
"import",
"ArchSketchObject",
"ArchSketchObject",
".",
"ArchSketch",
".",
"setPropertiesLinkCommon",
"(",
"self",
",",
"obj",
... | https://github.com/FreeCAD/FreeCAD/blob/ba42231b9c6889b89e064d6d563448ed81e376ec/src/Mod/Arch/ArchEquipment.py#L287-L301 | ||
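The bare `except: pass` above swallows every error, including bugs inside the add-on. A narrower sketch of the optional-import pattern it implements (the module name `arch_sketch_object` is hypothetical):

```python
def add_optional_features(obj):
    """Try an optional add-on import; silently skip only when it is absent,
    catching ImportError rather than a bare except."""
    try:
        import arch_sketch_object  # hypothetical add-on module
    except ImportError:
        return False  # add-on not installed; feature quietly unavailable
    arch_sketch_object.setup(obj)
    return True

available = add_optional_features(object())
```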
baidu-research/tensorflow-allreduce | 66d5b855e90b0949e9fa5cca5599fd729a70e874 | tensorflow/python/framework/ops.py | python | get_stats_for_node_def | (graph, node, statistic_type) | return result | Looks up the node's statistics function in the registry and calls it.
This function takes a Graph object and a NodeDef from a GraphDef, and if
there's an associated statistics method, calls it and returns a result. If no
function has been registered for the particular node type, it returns an empty
statistics object.
Args:
graph: A Graph object that's been set up with the node's graph.
node: A NodeDef describing the operator.
statistic_type: A string identifying the statistic we're interested in.
Returns:
An OpStats object containing information about resource usage. | Looks up the node's statistics function in the registry and calls it. | [
"Looks",
"up",
"the",
"node",
"s",
"statistics",
"function",
"in",
"the",
"registry",
"and",
"calls",
"it",
"."
] | def get_stats_for_node_def(graph, node, statistic_type):
"""Looks up the node's statistics function in the registry and calls it.
This function takes a Graph object and a NodeDef from a GraphDef, and if
there's an associated statistics method, calls it and returns a result. If no
function has been registered for the particular node type, it returns an empty
statistics object.
Args:
graph: A Graph object that's been set up with the node's graph.
node: A NodeDef describing the operator.
statistic_type: A string identifying the statistic we're interested in.
Returns:
An OpStats object containing information about resource usage.
"""
try:
stats_func = _stats_registry.lookup(node.op + "," + statistic_type)
result = stats_func(graph, node)
except LookupError:
result = OpStats(statistic_type)
return result | [
"def",
"get_stats_for_node_def",
"(",
"graph",
",",
"node",
",",
"statistic_type",
")",
":",
"try",
":",
"stats_func",
"=",
"_stats_registry",
".",
"lookup",
"(",
"node",
".",
"op",
"+",
"\",\"",
"+",
"statistic_type",
")",
"result",
"=",
"stats_func",
"(",
... | https://github.com/baidu-research/tensorflow-allreduce/blob/66d5b855e90b0949e9fa5cca5599fd729a70e874/tensorflow/python/framework/ops.py#L2045-L2066 | |
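The lookup key in `get_stats_for_node_def` is the op name and statistic type joined by a comma, with a fallback to an empty result when nothing is registered. A simplified stand-alone sketch of that pattern (the registry dict and `OpStats` fields here are illustrative, not TensorFlow's actual registry class):

```python
class OpStats:
    """Minimal stand-in for TF's OpStats: a statistic name and a value."""
    def __init__(self, statistic_type, value=0):
        self.statistic_type = statistic_type
        self.value = value

_stats_registry = {
    "MatMul,flops": lambda node: OpStats("flops", 2 * 64 * 64 * 64),
}

def get_stats(node_op, statistic_type):
    """Look up '<op>,<statistic>'; fall back to an empty OpStats on a miss."""
    try:
        return _stats_registry[node_op + "," + statistic_type](None)
    except KeyError:
        return OpStats(statistic_type)

hit = get_stats("MatMul", "flops")
miss = get_stats("UnknownOp", "flops")
```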
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/_core.py | python | Menu.GetStyle | (*args, **kwargs) | return _core_.Menu_GetStyle(*args, **kwargs) | GetStyle(self) -> long | GetStyle(self) -> long | [
"GetStyle",
"(",
"self",
")",
"-",
">",
"long"
] | def GetStyle(*args, **kwargs):
"""GetStyle(self) -> long"""
return _core_.Menu_GetStyle(*args, **kwargs) | [
"def",
"GetStyle",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"Menu_GetStyle",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_core.py#L12218-L12220 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/_windows.py | python | LayoutAlgorithm.LayoutMDIFrame | (*args, **kwargs) | return _windows_.LayoutAlgorithm_LayoutMDIFrame(*args, **kwargs) | LayoutMDIFrame(self, MDIParentFrame frame, Rect rect=None) -> bool | LayoutMDIFrame(self, MDIParentFrame frame, Rect rect=None) -> bool | [
"LayoutMDIFrame",
"(",
"self",
"MDIParentFrame",
"frame",
"Rect",
"rect",
"=",
"None",
")",
"-",
">",
"bool"
] | def LayoutMDIFrame(*args, **kwargs):
"""LayoutMDIFrame(self, MDIParentFrame frame, Rect rect=None) -> bool"""
return _windows_.LayoutAlgorithm_LayoutMDIFrame(*args, **kwargs) | [
"def",
"LayoutMDIFrame",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_windows_",
".",
"LayoutAlgorithm_LayoutMDIFrame",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_windows.py#L2093-L2095 | |
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/contrib/bayesflow/python/ops/stochastic_tensor_impl.py | python | ObservedStochasticTensor.__init__ | (self, dist, value, name=None) | Construct an `ObservedStochasticTensor`.
`ObservedStochasticTensor` is backed by distribution `dist` and uses the
provided value instead of using the current value type to draw a value from
the distribution. The provided value argument must be appropriately shaped
to have come from the distribution.
Args:
dist: an instance of `Distribution`.
value: a Tensor containing the observed value
name: a name for this `ObservedStochasticTensor` and its ops.
Raises:
TypeError: if `dist` is not an instance of `Distribution`.
ValueError: if `value` is not compatible with the distribution. | Construct an `ObservedStochasticTensor`. | [
"Construct",
"an",
"ObservedStochasticTensor",
"."
] | def __init__(self, dist, value, name=None):
"""Construct an `ObservedStochasticTensor`.
`ObservedStochasticTensor` is backed by distribution `dist` and uses the
provided value instead of using the current value type to draw a value from
the distribution. The provided value argument must be appropriately shaped
to have come from the distribution.
Args:
dist: an instance of `Distribution`.
value: a Tensor containing the observed value
name: a name for this `ObservedStochasticTensor` and its ops.
Raises:
TypeError: if `dist` is not an instance of `Distribution`.
ValueError: if `value` is not compatible with the distribution.
"""
if not isinstance(dist, distribution.Distribution):
raise TypeError("dist must be an instance of Distribution")
with ops.name_scope(name, "ObservedStochasticTensor", [value]) as scope:
self._name = scope
self._dist = dist
dist_shape = self._dist.batch_shape.concatenate(
self._dist.event_shape)
value = ops.convert_to_tensor(value)
value_shape = value.get_shape()
if not value_shape.is_compatible_with(dist_shape):
if value_shape.ndims < dist_shape.ndims:
raise ValueError(
"Rank of observed value (%d) must be >= rank of a sample from the"
" distribution (%d)." % (value_shape.ndims, dist_shape.ndims))
sample_shape = value_shape[(value_shape.ndims - dist_shape.ndims):]
if not sample_shape.is_compatible_with(dist_shape):
raise ValueError(
"Shape of observed value %s is incompatible with the shape of a "
"sample from the distribution %s." % (value_shape, dist_shape))
if value.dtype != self._dist.dtype:
raise ValueError("Type of observed value (%s) does not match type of "
"distribution (%s)." % (value.dtype, self._dist.dtype))
self._value = array_ops.identity(value)
# pylint: disable=non-parent-init-called
BaseStochasticTensor.__init__(self) | [
"def",
"__init__",
"(",
"self",
",",
"dist",
",",
"value",
",",
"name",
"=",
"None",
")",
":",
"if",
"not",
"isinstance",
"(",
"dist",
",",
"distribution",
".",
"Distribution",
")",
":",
"raise",
"TypeError",
"(",
"\"dist must be an instance of Distribution\""... | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/contrib/bayesflow/python/ops/stochastic_tensor_impl.py#L421-L463 | ||
baidu/AnyQ | d94d450d2aaa5f7ed73424b10aa4539835b97527 | tools/simnet/train/tf/tools/tf_record_writer.py | python | int_feature | (v) | return tf.train.Feature(int64_list=tf.train.Int64List(value=v)) | int feature | int feature | [
"int",
"feature"
] | def int_feature(v):
"""
int feature
"""
return tf.train.Feature(int64_list=tf.train.Int64List(value=v)) | [
"def",
"int_feature",
"(",
"v",
")",
":",
"return",
"tf",
".",
"train",
".",
"Feature",
"(",
"int64_list",
"=",
"tf",
".",
"train",
".",
"Int64List",
"(",
"value",
"=",
"v",
")",
")"
] | https://github.com/baidu/AnyQ/blob/d94d450d2aaa5f7ed73424b10aa4539835b97527/tools/simnet/train/tf/tools/tf_record_writer.py#L27-L31 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/turtle.py | python | TNavigator.distance | (self, x, y=None) | return abs(pos - self._position) | Return the distance from the turtle to (x,y) in turtle step units.
Arguments:
x -- a number or a pair/vector of numbers or a turtle instance
y -- a number None None
call: distance(x, y) # two coordinates
--or: distance((x, y)) # a pair (tuple) of coordinates
--or: distance(vec) # e.g. as returned by pos()
--or: distance(mypen) # where mypen is another turtle
Example (for a Turtle instance named turtle):
>>> turtle.pos()
(0.00, 0.00)
>>> turtle.distance(30,40)
50.0
>>> pen = Turtle()
>>> pen.forward(77)
>>> turtle.distance(pen)
77.0 | Return the distance from the turtle to (x,y) in turtle step units. | [
"Return",
"the",
"distance",
"from",
"the",
"turtle",
"to",
"(",
"x",
"y",
")",
"in",
"turtle",
"step",
"units",
"."
] | def distance(self, x, y=None):
"""Return the distance from the turtle to (x,y) in turtle step units.
Arguments:
x -- a number or a pair/vector of numbers or a turtle instance
y -- a number None None
call: distance(x, y) # two coordinates
--or: distance((x, y)) # a pair (tuple) of coordinates
--or: distance(vec) # e.g. as returned by pos()
--or: distance(mypen) # where mypen is another turtle
Example (for a Turtle instance named turtle):
>>> turtle.pos()
(0.00, 0.00)
>>> turtle.distance(30,40)
50.0
>>> pen = Turtle()
>>> pen.forward(77)
>>> turtle.distance(pen)
77.0
"""
if y is not None:
pos = Vec2D(x, y)
if isinstance(x, Vec2D):
pos = x
elif isinstance(x, tuple):
pos = Vec2D(*x)
elif isinstance(x, TNavigator):
pos = x._position
return abs(pos - self._position) | [
"def",
"distance",
"(",
"self",
",",
"x",
",",
"y",
"=",
"None",
")",
":",
"if",
"y",
"is",
"not",
"None",
":",
"pos",
"=",
"Vec2D",
"(",
"x",
",",
"y",
")",
"if",
"isinstance",
"(",
"x",
",",
"Vec2D",
")",
":",
"pos",
"=",
"x",
"elif",
"is... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/turtle.py#L1828-L1858 | |
trilinos/Trilinos | 6168be6dd51e35e1cd681e9c4b24433e709df140 | packages/seacas/scripts/exomerge3.py | python | ExodusModel._get_closest_point_distance_brute | (self, points) | return dist | Return the distance between the two closest points. | Return the distance between the two closest points. | [
"Return",
"the",
"distance",
"between",
"the",
"two",
"closest",
"points",
"."
] | def _get_closest_point_distance_brute(self, points):
"""Return the distance between the two closest points."""
point_count = len(points)
dist = sys.float_info.max
for i in range(1, point_count):
for j in range(i):
dist = min(dist, self._distance_between(points[i], points[j]))
return dist | [
"def",
"_get_closest_point_distance_brute",
"(",
"self",
",",
"points",
")",
":",
"point_count",
"=",
"len",
"(",
"points",
")",
"dist",
"=",
"sys",
".",
"float_info",
".",
"max",
"for",
"i",
"in",
"range",
"(",
"1",
",",
"point_count",
")",
":",
"for",
... | https://github.com/trilinos/Trilinos/blob/6168be6dd51e35e1cd681e9c4b24433e709df140/packages/seacas/scripts/exomerge3.py#L8192-L8199 | |
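The exomerge helper scans every unordered pair, an O(n²) baseline (the `_brute` suffix suggests a faster method exists elsewhere). A self-contained sketch of the same loop using `math.dist` (Python 3.8+) in place of the class's `_distance_between`:

```python
import math
import sys

def closest_point_distance(points):
    """Brute-force O(n^2) scan over unordered pairs, as in the exomerge helper."""
    dist = sys.float_info.max
    for i in range(1, len(points)):
        for j in range(i):
            dist = min(dist, math.dist(points[i], points[j]))
    return dist

d = closest_point_distance([(0, 0), (3, 4), (10, 0)])
```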
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/_core.py | python | AcceleratorEntry.ToString | (*args, **kwargs) | return _core_.AcceleratorEntry_ToString(*args, **kwargs) | ToString(self) -> String
Returns a string representation for the this accelerator. The string
is formatted using the <flags>-<keycode> format where <flags> may be a
hyphen-separated list of "shift|alt|ctrl" | ToString(self) -> String
"ToString",
"(",
"self",
")",
"-",
">",
"String"
] | def ToString(*args, **kwargs):
"""
ToString(self) -> String
Returns a string representation for the this accelerator. The string
is formatted using the <flags>-<keycode> format where <flags> may be a
hyphen-separated list of "shift|alt|ctrl"
"""
return _core_.AcceleratorEntry_ToString(*args, **kwargs) | [
"def",
"ToString",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"AcceleratorEntry_ToString",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_core.py#L8960-L8969 | |
anestisb/oatdump_plus | ba858c1596598f0d9ae79c14d08c708cecc50af3 | tools/cpplint.py | python | _Quiet | () | return _cpplint_state.quiet | Returns the module's quiet setting. | Returns the module's quiet setting. | [
"Returns",
"the",
"module",
"s",
"quiet",
"setting",
"."
] | def _Quiet():
"""Returns the module's quiet setting."""
return _cpplint_state.quiet | [
"def",
"_Quiet",
"(",
")",
":",
"return",
"_cpplint_state",
".",
"quiet"
] | https://github.com/anestisb/oatdump_plus/blob/ba858c1596598f0d9ae79c14d08c708cecc50af3/tools/cpplint.py#L657-L659 | |
simsong/bulk_extractor | 738911df22b7066ca9e1662f4131fb44090a4196 | python/dfxml.py | python | ET_tostring | (*pargs, **kwargs) | return retval | The ElementTree XML interface produces redundant namespace
declarations if you print an element at a time. This method simply
removes all xmlns declarations from the string. | The ElementTree XML interface produces redundant namespace
declarations if you print an element at a time. This method simply
removes all xmlns declarations from the string. | [
"The",
"ElementTree",
"XML",
"interface",
"produces",
"redundant",
"namespace",
"declarations",
"if",
"you",
"print",
"an",
"element",
"at",
"a",
"time",
".",
"This",
"method",
"simply",
"removes",
"all",
"xmlns",
"delcarations",
"from",
"the",
"string",
"."
] | def ET_tostring(*pargs, **kwargs):
"""
The ElementTree XML interface produces redundant namespace
declarations if you print an element at a time. This method simply
removes all xmlns declarations from the string.
"""
global rx_xmlns
import xml.etree.ElementTree as ET
tempstring = ET.tostring(*pargs, **kwargs)
retval = re.sub(rx_xmlns, "", tempstring)
return retval | [
"def",
"ET_tostring",
"(",
"*",
"pargs",
",",
"*",
"*",
"kwargs",
")",
":",
"global",
"rx_xmlns",
"import",
"xml",
".",
"etree",
".",
"ElementTree",
"as",
"ET",
"tempstring",
"=",
"ET",
".",
"tostring",
"(",
"*",
"pargs",
",",
"*",
"*",
"kwargs",
")"... | https://github.com/simsong/bulk_extractor/blob/738911df22b7066ca9e1662f4131fb44090a4196/python/dfxml.py#L1554-L1564 | |
deepmind/open_spiel | 4ca53bea32bb2875c7385d215424048ae92f78c8 | open_spiel/python/algorithms/psro_v2/psro_v2.py | python | PSROSolver.update_empirical_gamestate | (self, seed=None) | return meta_games | Given new agents in _new_policies, update meta_games through simulations.
Args:
seed: Seed for environment generation.
Returns:
Meta game payoff matrix. | Given new agents in _new_policies, update meta_games through simulations. | [
"Given",
"new",
"agents",
"in",
"_new_policies",
"update",
"meta_games",
"through",
"simulations",
"."
] | def update_empirical_gamestate(self, seed=None):
"""Given new agents in _new_policies, update meta_games through simulations.
Args:
seed: Seed for environment generation.
Returns:
Meta game payoff matrix.
"""
if seed is not None:
np.random.seed(seed=seed)
assert self._oracle is not None
if self.symmetric_game:
# Switch to considering the game as a symmetric game where players have
# the same policies & new policies. This allows the empirical gamestate
# update to function normally.
self._policies = self._game_num_players * self._policies
self._new_policies = self._game_num_players * self._new_policies
self._num_players = self._game_num_players
# Concatenate both lists.
updated_policies = [
self._policies[k] + self._new_policies[k]
for k in range(self._num_players)
]
# Each metagame will be (num_strategies)^self._num_players.
# There are self._num_player metagames, one per player.
total_number_policies = [
len(updated_policies[k]) for k in range(self._num_players)
]
number_older_policies = [
len(self._policies[k]) for k in range(self._num_players)
]
number_new_policies = [
len(self._new_policies[k]) for k in range(self._num_players)
]
# Initializing the matrix with nans to recognize unestimated states.
meta_games = [
np.full(tuple(total_number_policies), np.nan)
for k in range(self._num_players)
]
# Filling the matrix with already-known values.
older_policies_slice = tuple(
[slice(len(self._policies[k])) for k in range(self._num_players)])
for k in range(self._num_players):
meta_games[k][older_policies_slice] = self._meta_games[k]
# Filling the matrix for newly added policies.
for current_player in range(self._num_players):
# Only iterate over new policies for current player ; compute on every
# policy for the other players.
range_iterators = [
range(total_number_policies[k]) for k in range(current_player)
] + [range(number_new_policies[current_player])] + [
range(total_number_policies[k])
for k in range(current_player + 1, self._num_players)
]
for current_index in itertools.product(*range_iterators):
used_index = list(current_index)
used_index[current_player] += number_older_policies[current_player]
if np.isnan(meta_games[current_player][tuple(used_index)]):
estimated_policies = [
updated_policies[k][current_index[k]]
for k in range(current_player)
] + [
self._new_policies[current_player][current_index[current_player]]
] + [
updated_policies[k][current_index[k]]
for k in range(current_player + 1, self._num_players)
]
if self.symmetric_game:
# TODO(author4): This update uses ~2**(n_players-1) * sims_per_entry
# samples to estimate each payoff table entry. This should be
# brought to sims_per_entry to coincide with expected behavior.
utility_estimates = self.sample_episodes(estimated_policies,
self._sims_per_entry)
player_permutations = list(itertools.permutations(list(range(
self._num_players))))
for permutation in player_permutations:
used_tuple = tuple([used_index[i] for i in permutation])
for player in range(self._num_players):
if np.isnan(meta_games[player][used_tuple]):
meta_games[player][used_tuple] = 0.0
meta_games[player][used_tuple] += utility_estimates[
permutation[player]] / len(player_permutations)
else:
utility_estimates = self.sample_episodes(estimated_policies,
self._sims_per_entry)
for k in range(self._num_players):
meta_games[k][tuple(used_index)] = utility_estimates[k]
if self.symmetric_game:
# Make PSRO consider that we only have one population again, as we
# consider that we are in a symmetric game (No difference between players)
self._policies = [self._policies[0]]
self._new_policies = [self._new_policies[0]]
updated_policies = [updated_policies[0]]
self._num_players = 1
self._meta_games = meta_games
self._policies = updated_policies
return meta_games | [
"def",
"update_empirical_gamestate",
"(",
"self",
",",
"seed",
"=",
"None",
")",
":",
"if",
"seed",
"is",
"not",
"None",
":",
"np",
".",
"random",
".",
"seed",
"(",
"seed",
"=",
"seed",
")",
"assert",
"self",
".",
"_oracle",
"is",
"not",
"None",
"if"... | https://github.com/deepmind/open_spiel/blob/4ca53bea32bb2875c7385d215424048ae92f78c8/open_spiel/python/algorithms/psro_v2/psro_v2.py#L353-L460 | |
Yijunmaverick/GenerativeFaceCompletion | f72dea0fa27c779fef7b65d2f01e82bcc23a0eb2 | scripts/cpp_lint.py | python | RemoveMultiLineComments | (filename, lines, error) | Removes multiline (c-style) comments from lines. | Removes multiline (c-style) comments from lines. | [
"Removes",
"multiline",
"(",
"c",
"-",
"style",
")",
"comments",
"from",
"lines",
"."
] | def RemoveMultiLineComments(filename, lines, error):
"""Removes multiline (c-style) comments from lines."""
lineix = 0
while lineix < len(lines):
lineix_begin = FindNextMultiLineCommentStart(lines, lineix)
if lineix_begin >= len(lines):
return
lineix_end = FindNextMultiLineCommentEnd(lines, lineix_begin)
if lineix_end >= len(lines):
error(filename, lineix_begin + 1, 'readability/multiline_comment', 5,
'Could not find end of multi-line comment')
return
RemoveMultiLineCommentsFromRange(lines, lineix_begin, lineix_end + 1)
lineix = lineix_end + 1 | [
"def",
"RemoveMultiLineComments",
"(",
"filename",
",",
"lines",
",",
"error",
")",
":",
"lineix",
"=",
"0",
"while",
"lineix",
"<",
"len",
"(",
"lines",
")",
":",
"lineix_begin",
"=",
"FindNextMultiLineCommentStart",
"(",
"lines",
",",
"lineix",
")",
"if",
... | https://github.com/Yijunmaverick/GenerativeFaceCompletion/blob/f72dea0fa27c779fef7b65d2f01e82bcc23a0eb2/scripts/cpp_lint.py#L1151-L1164 | ||
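The row above is cpplint's multi-line comment pass, which walks the line list looking for `/*` and `*/` markers. As a rough standalone illustration of the same idea (this is not cpplint's implementation, which works line-by-line and preserves line counts; the helper name and regex approach are ours), a minimal sketch:

```python
import re

def strip_block_comments(text):
    # Non-greedy match so adjacent comments are handled separately;
    # re.DOTALL lets a comment span line breaks, like the lint pass above.
    return re.sub(r"/\*.*?\*/", "/**/", text, flags=re.DOTALL)

print(strip_block_comments("int a; /* note */ int b;"))  # int a; /**/ int b;
```

Unlike this sketch, the real pass also reports an error when a comment is opened but never closed.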
freesurfer/freesurfer | 6dbe527d43ffa611acb2cd112e9469f9bfec8e36 | sscnn_skullstripping/sscnn_skullstripping/deeplearn_utils/DeepImageSynth.py | python | FeatureGenerator.build_seg_from_patches | (self, in_patches, indices, padded_img_size, patch_crop_size, step_size, center_voxel=False) | return label_img_data | patch_crop_size depends on the size of the cnn filter. If [3,3,3] then [1,1,1] | patch_crop_size depends on the size of the cnn filter. If [3,3,3] then [1,1,1] | [
"patch_crop_size",
"depends",
"on",
"the",
"size",
"of",
"the",
"cnn",
"filter",
".",
"If",
"[",
"3",
"3",
"3",
"]",
"then",
"[",
"1",
"1",
"1",
"]"
] | def build_seg_from_patches(self, in_patches, indices, padded_img_size, patch_crop_size, step_size, center_voxel=False):
    ''' patch_crop_size depends on the size of the cnn filter. If [3,3,3] then [1,1,1]'''
    print(padded_img_size)
    out_img_data = np.zeros(padded_img_size)
    count_img_data = np.zeros(padded_img_size)
    out_feature_shape = in_patches.shape[1:5]
    idx_x = indices[:, 0]
    idx_y = indices[:, 1]
    idx_z = indices[:, 2]
    # print('indices shape is :')
    print(indices.shape)
    if center_voxel == False:
        patch_mask = np.zeros(out_feature_shape)
        patch_mask[0 + patch_crop_size[0]: out_feature_shape[0] - patch_crop_size[0],
                   0 + patch_crop_size[1]: out_feature_shape[1] - patch_crop_size[1],
                   0 + patch_crop_size[2]: out_feature_shape[2] - patch_crop_size[2], :] = 1
        for patch_iter in range(len(idx_x)):
            out_img_data[idx_x[patch_iter]:idx_x[patch_iter] + out_feature_shape[0],
                         idx_y[patch_iter]:idx_y[patch_iter] + out_feature_shape[1],
                         idx_z[patch_iter]:idx_z[patch_iter] + out_feature_shape[2], :] += \
                np.multiply(np.reshape(in_patches[patch_iter, :], out_feature_shape), patch_mask)
            count_img_data[idx_x[patch_iter]:idx_x[patch_iter] + out_feature_shape[0],
                           idx_y[patch_iter]:idx_y[patch_iter] + out_feature_shape[1],
                           idx_z[patch_iter]:idx_z[patch_iter] + out_feature_shape[2], :] += patch_mask
        out_img_data = np.divide(out_img_data, count_img_data)
        out_img_data[np.isnan(out_img_data)] = 0
        # remove the padding
        # unpadded_img_size = padded_img_size[0:3] - np.multiply(self.feature_shape, 2)
        unpadded_img_size = padded_img_size[0:3] - np.multiply(np.asarray(self.feature_shape[:-1]) + step_size + 1, 2)
        padding = np.asarray(self.feature_shape[:-1]) + step_size + 1
        # print("padding is " + str(np.multiply(np.asarray(self.feature_shape[:-1]) + step_size + 1, 2)))
        # print("unpadded image size is " + str(unpadded_img_size))
        out_img_data = out_img_data[padding[0]:padding[0] + unpadded_img_size[0],
                                    padding[1]:padding[1] + unpadded_img_size[1],
                                    padding[2]:padding[2] + unpadded_img_size[2]]
        count_img_data = count_img_data[padding[0]:padding[0] + unpadded_img_size[0],
                                        padding[1]:padding[1] + unpadded_img_size[1],
                                        padding[2]:padding[2] + unpadded_img_size[2]]
    else:
        patch_mask = np.ones(out_feature_shape)
        for patch_iter in range(len(idx_x)):
            px = idx_x[patch_iter]
            py = idx_y[patch_iter]
            pz = idx_z[patch_iter]
            out_img_data[int(px) - self.feature_shape[0] // 2: int(px) + self.feature_shape[0] // 2,
                         int(py) - self.feature_shape[1] // 2: int(py) + self.feature_shape[1] // 2,
                         int(pz) - self.feature_shape[2] // 2: int(pz) + self.feature_shape[2] // 2, :] += \
                (np.reshape(in_patches[patch_iter, :], out_feature_shape))
            count_img_data[int(px) - self.feature_shape[0] // 2: int(px) + self.feature_shape[0] // 2,
                           int(py) - self.feature_shape[1] // 2: int(py) + self.feature_shape[1] // 2,
                           int(pz) - self.feature_shape[2] // 2:int(pz) + self.feature_shape[2] // 2, :] += \
                patch_mask
        count_img_data[count_img_data == 0] = 1
        out_img_data = np.divide(out_img_data, count_img_data)
        # out_img_data[np.isnan(out_img_data)] = 0
        unpadded_img_size = padded_img_size[0:3] - np.multiply(np.asarray(self.feature_shape[:-1]), 2)
        padding = np.asarray(self.feature_shape[:-1])
        # print('unpadded_img_size in build seg is')
        # print(unpadded_img_size)
        # print('padding in build seg is ')
        # print(padding)
        out_img_data = out_img_data[padding[0]:padding[0] + unpadded_img_size[0],
                                    padding[1]:padding[1] + unpadded_img_size[1],
                                    padding[2]:padding[2] + unpadded_img_size[2]]
        # count_img_data = count_img_data[padding[0]:padding[0] + unpadded_img_size[0],
        #                                 padding[1]:padding[1] + unpadded_img_size[1],
        #                                 padding[2]:padding[2] + unpadded_img_size[2]]
        # out_img_data = out_img_data[self.feature_shape[0]:self.feature_shape[0] + unpadded_img_size[0],
        #                             self.feature_shape[1]:self.feature_shape[1] + unpadded_img_size[1],
        #                             self.feature_shape[2]:self.feature_shape[2] + unpadded_img_size[2], :]
        # count_img_data = count_img_data[self.feature_shape[0]:self.feature_shape[0] + unpadded_img_size[0],
        #                                 self.feature_shape[1]:self.feature_shape[1] + unpadded_img_size[1],
        #                                 self.feature_shape[2]:self.feature_shape[2] + unpadded_img_size[2], :]
    print('calculating hard segmentation')
    label_img_data = np.argmax(out_img_data, axis=-1)
    label_img_data = self.map_inv_labels(label_img_data, self.labels)
    return label_img_data | [
"def",
"build_seg_from_patches",
"(",
"self",
",",
"in_patches",
",",
"indices",
",",
"padded_img_size",
",",
"patch_crop_size",
",",
"step_size",
",",
"center_voxel",
"=",
"False",
")",
":",
"print",
"(",
"padded_img_size",
")",
"out_img_data",
"=",
"np",
".",
... | https://github.com/freesurfer/freesurfer/blob/6dbe527d43ffa611acb2cd112e9469f9bfec8e36/sscnn_skullstripping/sscnn_skullstripping/deeplearn_utils/DeepImageSynth.py#L2148-L2246 | |
ChromiumWebApps/chromium | c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7 | gpu/command_buffer/build_gles2_cmd_buffer.py | python | DataHandler.WriteImmediateCmdSet | (self, func, file) | | | Overridden from TypeHandler. | Overridden from TypeHandler. | [
"Overrriden",
"from",
"TypeHandler",
"."
] | def WriteImmediateCmdSet(self, func, file):
"""Overrriden from TypeHandler."""
copy_args = func.MakeCmdArgString("_", False)
file.Write(" void* Set(void* cmd%s) {\n" %
func.MakeTypedCmdArgString("_", True))
self.WriteImmediateCmdGetTotalSize(func, file)
file.Write(" static_cast<ValueType*>(cmd)->Init(%s);\n" % copy_args)
file.Write(" return NextImmediateCmdAddressTotalSize<ValueType>("
"cmd, total_size);\n")
file.Write(" }\n")
file.Write("\n") | [
"def",
"WriteImmediateCmdSet",
"(",
"self",
",",
"func",
",",
"file",
")",
":",
"copy_args",
"=",
"func",
".",
"MakeCmdArgString",
"(",
"\"_\"",
",",
"False",
")",
"file",
".",
"Write",
"(",
"\" void* Set(void* cmd%s) {\\n\"",
"%",
"func",
".",
"MakeTypedCmdA... | https://github.com/ChromiumWebApps/chromium/blob/c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7/gpu/command_buffer/build_gles2_cmd_buffer.py#L3704-L3714 | ||
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/py_vulcanize/py_vulcanize/resource_loader.py | python | ResourceLoader.FindModuleResource | (self, requested_module_name) | return html_resource | Finds a module javascript file and returns a Resource, or none. | Finds a module javascript file and returns a Resource, or none. | [
"Finds",
"a",
"module",
"javascript",
"file",
"and",
"returns",
"a",
"Resource",
"or",
"none",
"."
] | def FindModuleResource(self, requested_module_name):
"""Finds a module javascript file and returns a Resource, or none."""
js_resource = self._FindResourceGivenNameAndSuffix(
requested_module_name, '.js', return_resource=True)
html_resource = self._FindResourceGivenNameAndSuffix(
requested_module_name, '.html', return_resource=True)
if js_resource and html_resource:
if html_module.IsHTMLResourceTheModuleGivenConflictingResourceNames(
js_resource, html_resource):
return html_resource
return js_resource
elif js_resource:
return js_resource
return html_resource | [
"def",
"FindModuleResource",
"(",
"self",
",",
"requested_module_name",
")",
":",
"js_resource",
"=",
"self",
".",
"_FindResourceGivenNameAndSuffix",
"(",
"requested_module_name",
",",
"'.js'",
",",
"return_resource",
"=",
"True",
")",
"html_resource",
"=",
"self",
... | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/py_vulcanize/py_vulcanize/resource_loader.py#L96-L109 | |
apache/incubator-mxnet | f03fb23f1d103fec9541b5ae59ee06b1734a51d9 | python/mxnet/symbol/symbol.py | python | load | (fname) | return Symbol(handle) | Loads symbol from a JSON file.
You can also use pickle to do the job if you only work on python.
The advantage of load/save is the file is language agnostic.
This means the file saved using save can be loaded by other language binding of mxnet.
You also get the benefit being able to directly load/save from cloud storage(S3, HDFS).
Parameters
----------
fname : str
The name of the file, examples:
- `s3://my-bucket/path/my-s3-symbol`
- `hdfs://my-bucket/path/my-hdfs-symbol`
- `/path-to/my-local-symbol`
Returns
-------
sym : Symbol
The loaded symbol.
See Also
--------
Symbol.save : Used to save symbol into file. | Loads symbol from a JSON file. | [
"Loads",
"symbol",
"from",
"a",
"JSON",
"file",
"."
] | def load(fname):
"""Loads symbol from a JSON file.
You can also use pickle to do the job if you only work on python.
The advantage of load/save is the file is language agnostic.
This means the file saved using save can be loaded by other language binding of mxnet.
You also get the benefit being able to directly load/save from cloud storage(S3, HDFS).
Parameters
----------
fname : str
The name of the file, examples:
- `s3://my-bucket/path/my-s3-symbol`
- `hdfs://my-bucket/path/my-hdfs-symbol`
- `/path-to/my-local-symbol`
Returns
-------
sym : Symbol
The loaded symbol.
See Also
--------
Symbol.save : Used to save symbol into file.
"""
if not isinstance(fname, string_types):
raise TypeError('fname need to be string')
handle = SymbolHandle()
check_call(_LIB.MXSymbolCreateFromFile(c_str(fname), ctypes.byref(handle)))
return Symbol(handle) | [
"def",
"load",
"(",
"fname",
")",
":",
"if",
"not",
"isinstance",
"(",
"fname",
",",
"string_types",
")",
":",
"raise",
"TypeError",
"(",
"'fname need to be string'",
")",
"handle",
"=",
"SymbolHandle",
"(",
")",
"check_call",
"(",
"_LIB",
".",
"MXSymbolCrea... | https://github.com/apache/incubator-mxnet/blob/f03fb23f1d103fec9541b5ae59ee06b1734a51d9/python/mxnet/symbol/symbol.py#L2810-L2840 | |
cksystemsgroup/scal | fa2208a97a77d65f4e90f85fef3404c27c1f2ac2 | tools/cpplint.py | python | CheckSpacing | (filename, clean_lines, linenum, nesting_state, error) | Checks for the correctness of various spacing issues in the code.
Things we check for: spaces around operators, spaces after
if/for/while/switch, no spaces around parens in function calls, two
spaces between code and comment, don't start a block with a blank
line, don't end a function with a blank line, don't add a blank line
after public/protected/private, don't have too many blank lines in a row.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
nesting_state: A NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: The function to call with any errors found. | Checks for the correctness of various spacing issues in the code. | [
"Checks",
"for",
"the",
"correctness",
"of",
"various",
"spacing",
"issues",
"in",
"the",
"code",
"."
] | def CheckSpacing(filename, clean_lines, linenum, nesting_state, error):
"""Checks for the correctness of various spacing issues in the code.
Things we check for: spaces around operators, spaces after
if/for/while/switch, no spaces around parens in function calls, two
spaces between code and comment, don't start a block with a blank
line, don't end a function with a blank line, don't add a blank line
after public/protected/private, don't have too many blank lines in a row.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
nesting_state: A NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: The function to call with any errors found.
"""
# Don't use "elided" lines here, otherwise we can't check commented lines.
# Don't want to use "raw" either, because we don't want to check inside C++11
# raw strings,
raw = clean_lines.lines_without_raw_strings
line = raw[linenum]
# Before nixing comments, check if the line is blank for no good
# reason. This includes the first line after a block is opened, and
# blank lines at the end of a function (ie, right before a line like '}'
#
# Skip all the blank line checks if we are immediately inside a
# namespace body. In other words, don't issue blank line warnings
# for this block:
# namespace {
#
# }
#
# A warning about missing end of namespace comments will be issued instead.
#
# Also skip blank line checks for 'extern "C"' blocks, which are formatted
# like namespaces.
if (IsBlankLine(line) and
not nesting_state.InNamespaceBody() and
not nesting_state.InExternC()):
elided = clean_lines.elided
prev_line = elided[linenum - 1]
prevbrace = prev_line.rfind('{')
# TODO(unknown): Don't complain if line before blank line, and line after,
# both start with alnums and are indented the same amount.
# This ignores whitespace at the start of a namespace block
# because those are not usually indented.
if prevbrace != -1 and prev_line[prevbrace:].find('}') == -1:
# OK, we have a blank line at the start of a code block. Before we
# complain, we check if it is an exception to the rule: The previous
# non-empty line has the parameters of a function header that are indented
# 4 spaces (because they did not fit in a 80 column line when placed on
# the same line as the function name). We also check for the case where
# the previous line is indented 6 spaces, which may happen when the
# initializers of a constructor do not fit into a 80 column line.
exception = False
if Match(r' {6}\w', prev_line): # Initializer list?
# We are looking for the opening column of initializer list, which
# should be indented 4 spaces to cause 6 space indentation afterwards.
search_position = linenum-2
while (search_position >= 0
and Match(r' {6}\w', elided[search_position])):
search_position -= 1
exception = (search_position >= 0
and elided[search_position][:5] == ' :')
else:
# Search for the function arguments or an initializer list. We use a
# simple heuristic here: If the line is indented 4 spaces; and we have a
# closing paren, without the opening paren, followed by an opening brace
# or colon (for initializer lists) we assume that it is the last line of
# a function header. If we have a colon indented 4 spaces, it is an
# initializer list.
exception = (Match(r' {4}\w[^\(]*\)\s*(const\s*)?(\{\s*$|:)',
prev_line)
or Match(r' {4}:', prev_line))
if not exception:
error(filename, linenum, 'whitespace/blank_line', 2,
'Redundant blank line at the start of a code block '
'should be deleted.')
# Ignore blank lines at the end of a block in a long if-else
# chain, like this:
# if (condition1) {
# // Something followed by a blank line
#
# } else if (condition2) {
# // Something else
# }
if linenum + 1 < clean_lines.NumLines():
next_line = raw[linenum + 1]
if (next_line
and Match(r'\s*}', next_line)
and next_line.find('} else ') == -1):
error(filename, linenum, 'whitespace/blank_line', 3,
'Redundant blank line at the end of a code block '
'should be deleted.')
matched = Match(r'\s*(public|protected|private):', prev_line)
if matched:
error(filename, linenum, 'whitespace/blank_line', 3,
'Do not leave a blank line after "%s:"' % matched.group(1))
# Next, check comments
next_line_start = 0
if linenum + 1 < clean_lines.NumLines():
next_line = raw[linenum + 1]
next_line_start = len(next_line) - len(next_line.lstrip())
CheckComment(line, filename, linenum, next_line_start, error)
# get rid of comments and strings
line = clean_lines.elided[linenum]
# You shouldn't have spaces before your brackets, except maybe after
# 'delete []' or 'return []() {};'
if Search(r'\w\s+\[', line) and not Search(r'(?:delete|return)\s+\[', line):
error(filename, linenum, 'whitespace/braces', 5,
'Extra space before [')
# In range-based for, we wanted spaces before and after the colon, but
# not around "::" tokens that might appear.
if (Search(r'for *\(.*[^:]:[^: ]', line) or
Search(r'for *\(.*[^: ]:[^:]', line)):
error(filename, linenum, 'whitespace/forcolon', 2,
'Missing space around colon in range-based for loop') | [
"def",
"CheckSpacing",
"(",
"filename",
",",
"clean_lines",
",",
"linenum",
",",
"nesting_state",
",",
"error",
")",
":",
"# Don't use \"elided\" lines here, otherwise we can't check commented lines.",
"# Don't want to use \"raw\" either, because we don't want to check inside C++11",
... | https://github.com/cksystemsgroup/scal/blob/fa2208a97a77d65f4e90f85fef3404c27c1f2ac2/tools/cpplint.py#L2999-L3124 | ||
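One of the checks in the CheckSpacing row above, the "Extra space before [" test, is easy to try standalone. This sketch reuses the two regular expressions from that function; the helper name is ours:

```python
import re

def extra_space_before_bracket(line):
    # Flag `word [` unless the bracket follows delete/return,
    # mirroring the two Search() calls in CheckSpacing.
    return bool(re.search(r'\w\s+\[', line)
                and not re.search(r'(?:delete|return)\s+\[', line))

print(extra_space_before_bracket('int a [5];'))   # True
print(extra_space_before_bracket('return [1];'))  # False
print(extra_space_before_bracket('a[5];'))        # False
```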
natanielruiz/android-yolo | 1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f | jni-build/jni/include/tensorflow/python/framework/importer.py | python | _MaybeDevice | (device) | Applies the given device only if device is not None or empty. | Applies the given device only if device is not None or empty. | [
"Applies",
"the",
"given",
"device",
"only",
"if",
"device",
"is",
"not",
"None",
"or",
"empty",
"."
] | def _MaybeDevice(device):
"""Applies the given device only if device is not None or empty."""
if device:
with ops.device(device):
yield
else:
yield | [
"def",
"_MaybeDevice",
"(",
"device",
")",
":",
"if",
"device",
":",
"with",
"ops",
".",
"device",
"(",
"device",
")",
":",
"yield",
"else",
":",
"yield"
] | https://github.com/natanielruiz/android-yolo/blob/1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f/jni-build/jni/include/tensorflow/python/framework/importer.py#L136-L142 | ||
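`_MaybeDevice` above is a small instance of a reusable pattern: a generator-based context manager that conditionally enters another context. A generic sketch of the same shape (the names here are ours, and an arbitrary context object stands in for `ops.device()`):

```python
import contextlib

@contextlib.contextmanager
def maybe(ctx):
    # Enter ctx only when it is truthy; otherwise yield directly,
    # exactly the branch structure of _MaybeDevice.
    if ctx:
        with ctx:
            yield
    else:
        yield

events = []

class Recorder:
    def __enter__(self):
        events.append("enter")
    def __exit__(self, *exc):
        events.append("exit")

with maybe(Recorder()):
    events.append("body")
with maybe(None):
    events.append("body-no-ctx")

print(events)  # ['enter', 'body', 'exit', 'body-no-ctx']
```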
miyosuda/TensorFlowAndroidMNIST | 7b5a4603d2780a8a2834575706e9001977524007 | jni-build/jni/include/external/bazel_tools/tools/cpp/wrapper/bin/pydir/msvc_link.py | python | MsvcLinker.Run | (self, argv) | Runs the linker using the passed clang/gcc style argument list.
Args:
argv: List of arguments
Returns:
The return code of the link.
Raises:
ValueError: if target architecture or compile mode isn't specified | Runs the linker using the passed clang/gcc style argument list. | [
"Runs",
"the",
"linker",
"using",
"the",
"passed",
"clang",
"/",
"gcc",
"style",
"argument",
"list",
"."
] | def Run(self, argv):
"""Runs the linker using the passed clang/gcc style argument list.
Args:
argv: List of arguments
Returns:
The return code of the link.
Raises:
ValueError: if target architecture or compile mode isn't specified
"""
# For now assume we are building a library.
tool = 'lib'
default_args = ['/nologo']
# Build argument list.
parser = msvc_tools.ArgParser(self, argv, LINKPATTERNS)
# Find the output file name.
name = ''
for arg in parser.options:
if '/OUT:' in arg:
name = arg[5:]
if not name:
raise msvc_tools.Error('No output file name specified!')
# Check if the library is empty, which is what happens when we create header
# or intermediate link-only libraries.
if (len(parser.options) == 2 and parser.options[0].startswith('/OUT') and
parser.options[1].startswith('/M')):
# Just "touch" the library to create the file.
with open(name, 'w'):
os.utime(name, None)
else:
# If the output name ends in .lo, or .a, it is a library, otherwise
# we need to use link to create an executable.
if os.path.splitext(name)[1] not in ['.a', '.lo']:
tool = 'link'
if not parser.target_arch:
raise ValueError('Must specify target architecture (-m32 or -m64)')
# Append explicit machine type.
if parser.target_arch == 'x64':
default_args.append('/MACHINE:X64')
else:
default_args.append('/MACHINE:X86')
# Args for buildng a console application. These must appear here since
# blaze will not properly pass them to the linker.
# /SUBSYSTEM:CONSOLE: Build a console application.
default_args += ['/SUBSYSTEM:CONSOLE']
# If there is no .o on the command line, then we need to add the
# run-time library for the target. Without this the linker gives a
# LNK4001 error and cannot find an entry point.
for arg in parser.options:
if arg.endswith('.o'):
break
else:
if not parser.compilation_mode:
raise ValueError('Must specify compilation mode '
'(-Xcompilation-mode={dbg,fastbuild,opt})')
if parser.compilation_mode == 'dbg':
default_args.insert(0, 'libcmtd.lib')
else:
default_args.insert(0, 'libcmt.lib')
return self.RunBinary(tool, default_args + parser.options,
parser.target_arch, parser) | [
"def",
"Run",
"(",
"self",
",",
"argv",
")",
":",
"# For now assume we are building a library.",
"tool",
"=",
"'lib'",
"default_args",
"=",
"[",
"'/nologo'",
"]",
"# Build argument list.",
"parser",
"=",
"msvc_tools",
".",
"ArgParser",
"(",
"self",
",",
"argv",
... | https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/7b5a4603d2780a8a2834575706e9001977524007/jni-build/jni/include/external/bazel_tools/tools/cpp/wrapper/bin/pydir/msvc_link.py#L51-L120 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/pandas/py2/pandas/core/computation/pytables.py | python | FilterBinOp.format | (self) | return [self.filter] | return the actual filter format | return the actual filter format | [
"return",
"the",
"actual",
"filter",
"format"
] | def format(self):
""" return the actual filter format """
return [self.filter] | [
"def",
"format",
"(",
"self",
")",
":",
"return",
"[",
"self",
".",
"filter",
"]"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pandas/py2/pandas/core/computation/pytables.py#L243-L245 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/_windows.py | python | PrintPreview.__init__ | (self, *args) | __init__(self, Printout printout, Printout printoutForPrinting, PrintDialogData data=None) -> PrintPreview
__init__(self, Printout printout, Printout printoutForPrinting, PrintData data) -> PrintPreview | __init__(self, Printout printout, Printout printoutForPrinting, PrintDialogData data=None) -> PrintPreview
__init__(self, Printout printout, Printout printoutForPrinting, PrintData data) -> PrintPreview | [
"__init__",
"(",
"self",
"Printout",
"printout",
"Printout",
"printoutForPrinting",
"PrintDialogData",
"data",
"=",
"None",
")",
"-",
">",
"PrintPreview",
"__init__",
"(",
"self",
"Printout",
"printout",
"Printout",
"printoutForPrinting",
"PrintData",
"data",
")",
"... | def __init__(self, *args):
"""
__init__(self, Printout printout, Printout printoutForPrinting, PrintDialogData data=None) -> PrintPreview
__init__(self, Printout printout, Printout printoutForPrinting, PrintData data) -> PrintPreview
"""
_windows_.PrintPreview_swiginit(self,_windows_.new_PrintPreview(*args)) | [
"def",
"__init__",
"(",
"self",
",",
"*",
"args",
")",
":",
"_windows_",
".",
"PrintPreview_swiginit",
"(",
"self",
",",
"_windows_",
".",
"new_PrintPreview",
"(",
"*",
"args",
")",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_windows.py#L5557-L5562 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scikit-learn/py3/sklearn/impute/_base.py | python | _BaseImputer._transform_indicator | (self, X) | | | Compute the indicator mask.
Note that X must be the original data as passed to the imputer before
any imputation, since imputation may be done inplace in some cases. | Compute the indicator mask. | [
"Compute",
"the",
"indicator",
"mask",
"."
] | def _transform_indicator(self, X):
"""Compute the indicator mask.'
Note that X must be the original data as passed to the imputer before
any imputation, since imputation may be done inplace in some cases.
"""
if self.add_indicator:
if not hasattr(self, 'indicator_'):
raise ValueError(
"Make sure to call _fit_indicator before "
"_transform_indicator"
)
return self.indicator_.transform(X) | [
"def",
"_transform_indicator",
"(",
"self",
",",
"X",
")",
":",
"if",
"self",
".",
"add_indicator",
":",
"if",
"not",
"hasattr",
"(",
"self",
",",
"'indicator_'",
")",
":",
"raise",
"ValueError",
"(",
"\"Make sure to call _fit_indicator before \"",
"\"_transform_i... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scikit-learn/py3/sklearn/impute/_base.py#L85-L97 | ||
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/_windows.py | python | HVScrolledWindow.EstimateTotalHeight | (*args, **kwargs) | return _windows_.HVScrolledWindow_EstimateTotalHeight(*args, **kwargs) | EstimateTotalHeight(self) -> int | EstimateTotalHeight(self) -> int | [
"EstimateTotalHeight",
"(",
"self",
")",
"-",
">",
"int"
] | def EstimateTotalHeight(*args, **kwargs):
"""EstimateTotalHeight(self) -> int"""
return _windows_.HVScrolledWindow_EstimateTotalHeight(*args, **kwargs) | [
"def",
"EstimateTotalHeight",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_windows_",
".",
"HVScrolledWindow_EstimateTotalHeight",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_windows.py#L2562-L2564 | |
emscripten-core/emscripten | 0d413d3c5af8b28349682496edc14656f5700c2f | third_party/ply/example/ansic/cparse.py | python | p_labeled_statement_1 | (t) | labeled_statement : ID COLON statement | labeled_statement : ID COLON statement | [
"labeled_statement",
":",
"ID",
"COLON",
"statement"
] | def p_labeled_statement_1(t):
    'labeled_statement : ID COLON statement'
    pass | [
"def",
"p_labeled_statement_1",
"(",
"t",
")",
":",
"pass"
] | https://github.com/emscripten-core/emscripten/blob/0d413d3c5af8b28349682496edc14656f5700c2f/third_party/ply/example/ansic/cparse.py#L466-L468 | ||
bareos/bareos | 56a10bb368b0a81e977bb51304033fe49d59efb0 | contrib/fd-plugins/openvz7/BareosFdPluginVz7CtFs.py | python | BareosFdPluginVz7CtFs.list_snapshots | (self, ) | return snapshots | Returns a list of existing snapshots for a container and returns a list of hashes containing
uuid of parent snapshot, the snapshot_uuid of the snapshot, its status and the path of delta image | Returns a list of existing snapshots for a container and returns a list of hashes containing
uuid of parent snapshot, the snapshot_uuid of the snapshot, its status and the path of delta image | [
"Returns",
"a",
"list",
"of",
"existing",
"snapshots",
"for",
"a",
"container",
"and",
"returns",
"a",
"list",
"of",
"hashes",
"containing",
"uuid",
"of",
"parent",
"snapshot",
"the",
"snapshot_uuid",
"of",
"the",
"snapshot",
"its",
"status",
"and",
"the",
"... | def list_snapshots(self, ):
    '''
    Returns a list of existing snapshots for a container and returns a list of hashes containing
    uuid of parent snapshot, the snapshot_uuid of the snapshot, its status and the path of delta image
    '''
    try:
        snapshot_list = subprocess.check_output(
            ['/usr/sbin/ploop', 'snapshot-list', self.disk_descriptor],
            universal_newlines=True)
    except subprocess.CalledProcessError:
        return []
    snapshot_list = snapshot_list.split("\n")
    snapshots = []
    while snapshot_list:
        parentuuid, status, snapshot_uuid, fname = [None, None, None, None]
        line = snapshot_list.pop(0)
        if line == "":
            continue
        tokens = line.split()
        if tokens[0] == "PARENT_UUID":
            continue
        if len(tokens) == 4:
            parentuuid, status, snapshot_uuid, fname = tokens[0], tokens[1], tokens[2], tokens[3]
            active = True
        elif len(tokens) == 3:
            parentuuid, snapshot_uuid, fname = tokens[0], tokens[1], tokens[2]
            active = False
        snapshots.append({'parentuuid': parentuuid,
                          'snapshot_uuid': snapshot_uuid.strip("{}"),
                          'fname': fname,
                          'active': active})
    bareosfd.DebugMessage(
        100,
        "Function list_snapshots: returning {}\n".format(
            str(snapshots)))
    return snapshots | [
"def",
"list_snapshots",
"(",
"self",
",",
")",
":",
"try",
":",
"snapshot_list",
"=",
"subprocess",
".",
"check_output",
"(",
"[",
"'/usr/sbin/ploop'",
",",
"'snapshot-list'",
",",
"self",
".",
"disk_descriptor",
"]",
",",
"universal_newlines",
"=",
"True",
"... | https://github.com/bareos/bareos/blob/56a10bb368b0a81e977bb51304033fe49d59efb0/contrib/fd-plugins/openvz7/BareosFdPluginVz7CtFs.py#L139-L174 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemFramework/v1/ResourceManager/resource_manager/uploader.py | python | ResourceGroupUploader.__init__ | (self, deployment_uploader, resource_group_name) | Initializes a ResourceGroupUploader object.
Args:
deployment_uploader: The DeploymentUploader object on which the uploader is based.
resource_group_name: The name of the resource group targeted by the uploader. | Initializes a ResourceGroupUploader object. | [
"Initializes",
"a",
"ResourceGroupUploader",
"object",
"."
] | def __init__(self, deployment_uploader, resource_group_name):
"""Initializes a ResourceGroupUploader object.
Args:
deployment_uploader: The DeploymentUploader object on which the uploader is based.
resource_group_name: The name of the resource group targeted by the uploader.
"""
Uploader.__init__(
self,
deployment_uploader.context,
deployment_uploader.bucket,
deployment_uploader.key + '/resource-group/' + resource_group_name)
self._deployment_uploader = deployment_uploader
self._resource_group_name = resource_group_name | [
"def",
"__init__",
"(",
"self",
",",
"deployment_uploader",
",",
"resource_group_name",
")",
":",
"Uploader",
".",
"__init__",
"(",
"self",
",",
"deployment_uploader",
".",
"context",
",",
"deployment_uploader",
".",
"bucket",
",",
"deployment_uploader",
".",
"key... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemFramework/v1/ResourceManager/resource_manager/uploader.py#L663-L681 | ||
natanielruiz/android-yolo | 1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f | jni-build/jni/include/tensorflow/python/ops/nn_ops.py | python | _TopKShape | (op) | return [output_shape, output_shape] | Shape function for TopK and TopKV2 ops. | Shape function for TopK and TopKV2 ops. | [
"Shape",
"function",
"for",
"TopK",
"and",
"TopKV2",
"ops",
"."
] | def _TopKShape(op):
"""Shape function for TopK and TopKV2 ops."""
input_shape = op.inputs[0].get_shape().with_rank_at_least(1)
if len(op.inputs) >= 2:
k = tensor_util.constant_value(op.inputs[1])
else:
k = op.get_attr("k")
last = input_shape[-1].value
if last is not None and k is not None and last < k:
raise ValueError("input.shape %s must have last dimension >= k = %d" %
(input_shape, k))
output_shape = input_shape[:-1].concatenate([k])
return [output_shape, output_shape] | [
"def",
"_TopKShape",
"(",
"op",
")",
":",
"input_shape",
"=",
"op",
".",
"inputs",
"[",
"0",
"]",
".",
"get_shape",
"(",
")",
".",
"with_rank_at_least",
"(",
"1",
")",
"if",
"len",
"(",
"op",
".",
"inputs",
")",
">=",
"2",
":",
"k",
"=",
"tensor_... | https://github.com/natanielruiz/android-yolo/blob/1ebb54f96a67a20ff83ddfc823ed83a13dc3a47f/jni-build/jni/include/tensorflow/python/ops/nn_ops.py#L722-L734 | |
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py | python | cluster_loss | (labels,
embeddings,
margin_multiplier,
enable_pam_finetuning=True,
margin_type='nmi',
print_losses=False) | return clustering_loss | Computes the clustering loss.
The following structured margins are supported:
nmi: normalized mutual information
ami: adjusted mutual information
ari: adjusted random index
vmeasure: v-measure
const: indicator checking whether the two clusterings are the same.
Args:
labels: 2-D Tensor of labels of shape [batch size, 1]
embeddings: 2-D Tensor of embeddings of shape
[batch size, embedding dimension]. Embeddings should be l2 normalized.
margin_multiplier: float32 scalar. multiplier on the structured margin term
See section 3.2 of paper for discussion.
enable_pam_finetuning: Boolean, Whether to run local pam refinement.
See section 3.4 of paper for discussion.
margin_type: Type of structured margin to use. See section 3.2 of
paper for discussion. Can be 'nmi', 'ami', 'ari', 'vmeasure', 'const'.
print_losses: Boolean. Option to print the loss.
Paper: https://arxiv.org/abs/1612.01213.
Returns:
clustering_loss: A float32 scalar `Tensor`.
Raises:
ImportError: If sklearn dependency is not installed. | Computes the clustering loss. | [
"Computes",
"the",
"clustering",
"loss",
"."
] | def cluster_loss(labels,
embeddings,
margin_multiplier,
enable_pam_finetuning=True,
margin_type='nmi',
print_losses=False):
"""Computes the clustering loss.
The following structured margins are supported:
nmi: normalized mutual information
ami: adjusted mutual information
ari: adjusted random index
vmeasure: v-measure
const: indicator checking whether the two clusterings are the same.
Args:
labels: 2-D Tensor of labels of shape [batch size, 1]
embeddings: 2-D Tensor of embeddings of shape
[batch size, embedding dimension]. Embeddings should be l2 normalized.
margin_multiplier: float32 scalar. multiplier on the structured margin term
See section 3.2 of paper for discussion.
enable_pam_finetuning: Boolean, Whether to run local pam refinement.
See section 3.4 of paper for discussion.
margin_type: Type of structured margin to use. See section 3.2 of
paper for discussion. Can be 'nmi', 'ami', 'ari', 'vmeasure', 'const'.
print_losses: Boolean. Option to print the loss.
Paper: https://arxiv.org/abs/1612.01213.
Returns:
clustering_loss: A float32 scalar `Tensor`.
Raises:
ImportError: If sklearn dependency is not installed.
"""
if not HAS_SKLEARN:
raise ImportError('Cluster loss depends on sklearn.')
pairwise_distances = pairwise_distance(embeddings)
labels = array_ops.squeeze(labels)
all_ids = math_ops.range(array_ops.shape(embeddings)[0])
# Compute the loss augmented inference and get the cluster centroids.
chosen_ids = compute_augmented_facility_locations(pairwise_distances, labels,
all_ids, margin_multiplier,
margin_type)
# Given the predicted centroids, compute the clustering score.
score_pred = compute_facility_energy(pairwise_distances, chosen_ids)
# Branch whether to use PAM finetuning.
if enable_pam_finetuning:
# Initialize with augmented facility solution.
chosen_ids = compute_augmented_facility_locations_pam(pairwise_distances,
labels,
margin_multiplier,
margin_type,
chosen_ids)
score_pred = compute_facility_energy(pairwise_distances, chosen_ids)
# Given the predicted centroids, compute the cluster assignments.
predictions = get_cluster_assignment(pairwise_distances, chosen_ids)
# Compute the clustering (i.e. NMI) score between the two assignments.
clustering_score_pred = compute_clustering_score(labels, predictions,
margin_type)
# Compute the clustering score from labels.
score_gt = compute_gt_cluster_score(pairwise_distances, labels)
# Compute the hinge loss.
clustering_loss = math_ops.maximum(
score_pred + margin_multiplier * (1.0 - clustering_score_pred) - score_gt,
0.0,
name='clustering_loss')
clustering_loss.set_shape([])
if print_losses:
clustering_loss = logging_ops.Print(
clustering_loss,
['clustering_loss: ', clustering_loss, array_ops.shape(
clustering_loss)])
# Clustering specific summary.
summary.scalar('losses/score_pred', score_pred)
summary.scalar('losses/' + margin_type, clustering_score_pred)
summary.scalar('losses/score_gt', score_gt)
return clustering_loss | [
"def",
"cluster_loss",
"(",
"labels",
",",
"embeddings",
",",
"margin_multiplier",
",",
"enable_pam_finetuning",
"=",
"True",
",",
"margin_type",
"=",
"'nmi'",
",",
"print_losses",
"=",
"False",
")",
":",
"if",
"not",
"HAS_SKLEARN",
":",
"raise",
"ImportError",
... | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py#L946-L1031 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/_core.py | python | TextAreaBase.GetLineLength | (*args, **kwargs) | return _core_.TextAreaBase_GetLineLength(*args, **kwargs) | GetLineLength(self, long lineNo) -> int | GetLineLength(self, long lineNo) -> int | [
"GetLineLength",
"(",
"self",
"long",
"lineNo",
")",
"-",
">",
"int"
] | def GetLineLength(*args, **kwargs):
"""GetLineLength(self, long lineNo) -> int"""
return _core_.TextAreaBase_GetLineLength(*args, **kwargs) | [
"def",
"GetLineLength",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"TextAreaBase_GetLineLength",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_core.py#L13384-L13386 | |
coinapi/coinapi-sdk | 854f21e7f69ea8599ae35c5403565cf299d8b795 | oeml-sdk/python/openapi_client/model/exec_inst.py | python | ExecInst.__init__ | (self, *args, **kwargs) | ExecInst - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] ([str]): Order execution instructions are documented in the separate section: <a href=\"#oeml-order-params-exec\">OEML / Starter Guide / Order parameters / Execution instructions</a> . # noqa: E501
Keyword Args:
value ([str]): Order execution instructions are documented in the separate section: <a href=\"#oeml-order-params-exec\">OEML / Starter Guide / Order parameters / Execution instructions</a> . # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,) | ExecInst - a model defined in OpenAPI | [
"ExecInst",
"-",
"a",
"model",
"defined",
"in",
"OpenAPI"
] | def __init__(self, *args, **kwargs):
"""ExecInst - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] ([str]): Order execution instructions are documented in the separate section: <a href=\"#oeml-order-params-exec\">OEML / Starter Guide / Order parameters / Execution instructions</a> . # noqa: E501
Keyword Args:
value ([str]): Order execution instructions are documented in the separate section: <a href=\"#oeml-order-params-exec\">OEML / Starter Guide / Order parameters / Execution instructions</a> . # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
) | [
"def",
"__init__",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"# required up here when default value is not given",
"_path_to_item",
"=",
"kwargs",
".",
"pop",
"(",
"'_path_to_item'",
",",
"(",
")",
")",
"if",
"'value'",
"in",
"kwargs",
... | https://github.com/coinapi/coinapi-sdk/blob/854f21e7f69ea8599ae35c5403565cf299d8b795/oeml-sdk/python/openapi_client/model/exec_inst.py#L99-L185 | ||
avast/retdec | b9879088a5f0278508185ec645494e6c5c57a455 | scripts/type_extractor/type_extractor/io.py | python | types_sub | (type_text) | return type_text | Substitutes type for lti type. | Substitutes type for lti type. | [
"Substitutes",
"type",
"for",
"lti",
"type",
"."
] | def types_sub(type_text):
"""Substitutes type for lti type."""
if type_text in LTI_TYPES.keys():
return LTI_TYPES[type_text]
return type_text | [
"def",
"types_sub",
"(",
"type_text",
")",
":",
"if",
"type_text",
"in",
"LTI_TYPES",
".",
"keys",
"(",
")",
":",
"return",
"LTI_TYPES",
"[",
"type_text",
"]",
"return",
"type_text"
] | https://github.com/avast/retdec/blob/b9879088a5f0278508185ec645494e6c5c57a455/scripts/type_extractor/type_extractor/io.py#L114-L118 | |
BlzFans/wke | b0fa21158312e40c5fbd84682d643022b6c34a93 | cygwin/lib/python2.6/logging/__init__.py | python | Logger.setLevel | (self, level) | Set the logging level of this logger. | Set the logging level of this logger. | [
"Set",
"the",
"logging",
"level",
"of",
"this",
"logger",
"."
] | def setLevel(self, level):
"""
Set the logging level of this logger.
"""
self.level = level | [
"def",
"setLevel",
"(",
"self",
",",
"level",
")",
":",
"self",
".",
"level",
"=",
"level"
] | https://github.com/BlzFans/wke/blob/b0fa21158312e40c5fbd84682d643022b6c34a93/cygwin/lib/python2.6/logging/__init__.py#L1020-L1024 | ||
H-uru/Plasma | c2140ea046e82e9c199e257a7f2e7edb42602871 | Scripts/Python/plasma/Plasma.py | python | PtLocalAvatarIsMoving | () | Returns true if the local avatar is moving (a movement key is held down) | Returns true if the local avatar is moving (a movement key is held down) | [
"Returns",
"true",
"if",
"the",
"local",
"avatar",
"is",
"moving",
"(",
"a",
"movement",
"key",
"is",
"held",
"down",
")"
] | def PtLocalAvatarIsMoving():
"""Returns true if the local avatar is moving (a movement key is held down)"""
pass | [
"def",
"PtLocalAvatarIsMoving",
"(",
")",
":",
"pass"
] | https://github.com/H-uru/Plasma/blob/c2140ea046e82e9c199e257a7f2e7edb42602871/Scripts/Python/plasma/Plasma.py#L630-L632 | ||
adobe/chromium | cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7 | third_party/mesa/MesaLib/src/mesa/main/APIspec.py | python | Spec.get_api | (self, name) | return API(self, self.api_nodes[name]) | Return an API. | Return an API. | [
"Return",
"an",
"API",
"."
] | def get_api(self, name):
"""Return an API."""
return API(self, self.api_nodes[name]) | [
"def",
"get_api",
"(",
"self",
",",
"name",
")",
":",
"return",
"API",
"(",
"self",
",",
"self",
".",
"api_nodes",
"[",
"name",
"]",
")"
] | https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/third_party/mesa/MesaLib/src/mesa/main/APIspec.py#L67-L69 | |
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/tools/saved_model_cli.py | python | show | (args) | Function triggered by show command.
Args:
args: A namespace parsed from command line. | Function triggered by show command. | [
"Function",
"triggered",
"by",
"show",
"command",
"."
] | def show(args):
"""Function triggered by show command.
Args:
args: A namespace parsed from command line.
"""
# If all tag is specified, display all information.
if args.all:
_show_all(args.dir)
else:
# If no tag is specified, display all tag_set, if no signature_def key is
# specified, display all SignatureDef keys, else show input output tensor
# information corresponding to the given SignatureDef key
if args.tag_set is None:
_show_tag_sets(args.dir)
else:
if args.signature_def is None:
_show_signature_def_map_keys(args.dir, args.tag_set)
else:
_show_inputs_outputs(args.dir, args.tag_set, args.signature_def) | [
"def",
"show",
"(",
"args",
")",
":",
"# If all tag is specified, display all information.",
"if",
"args",
".",
"all",
":",
"_show_all",
"(",
"args",
".",
"dir",
")",
"else",
":",
"# If no tag is specified, display all tag_set, if no signature_def key is",
"# specified, dis... | https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/tools/saved_model_cli.py#L726-L745 | ||
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_carbon/html.py | python | HtmlHelpController.AddBook | (*args, **kwargs) | return _html.HtmlHelpController_AddBook(*args, **kwargs) | AddBook(self, String book, int show_wait_msg=False) -> bool | AddBook(self, String book, int show_wait_msg=False) -> bool | [
"AddBook",
"(",
"self",
"String",
"book",
"int",
"show_wait_msg",
"=",
"False",
")",
"-",
">",
"bool"
] | def AddBook(*args, **kwargs):
"""AddBook(self, String book, int show_wait_msg=False) -> bool"""
return _html.HtmlHelpController_AddBook(*args, **kwargs) | [
"def",
"AddBook",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_html",
".",
"HtmlHelpController_AddBook",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/html.py#L1966-L1968 | |
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/feature_column/feature_column.py | python | _categorical_column_with_identity | (key, num_buckets, default_value=None) | return _IdentityCategoricalColumn(
key=key, num_buckets=num_buckets, default_value=default_value) | A `_CategoricalColumn` that returns identity values.
Use this when your inputs are integers in the range `[0, num_buckets)`, and
you want to use the input value itself as the categorical ID. Values outside
this range will result in `default_value` if specified, otherwise it will
fail.
Typically, this is used for contiguous ranges of integer indexes, but
it doesn't have to be. This might be inefficient, however, if many of IDs
are unused. Consider `categorical_column_with_hash_bucket` in that case.
For input dictionary `features`, `features[key]` is either `Tensor` or
`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int
and `''` for string, which will be dropped by this feature column.
In the following examples, each input in the range `[0, 1000000)` is assigned
the same value. All other inputs are assigned `default_value` 0. Note that a
literal 0 in inputs will result in the same default ID.
Linear model:
```python
video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [video_id, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
Embedding for a DNN model:
```python
columns = [embedding_column(video_id, 9),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
```
Args:
key: A unique string identifying the input feature. It is used as the
column name and the dictionary key for feature parsing configs, feature
`Tensor` objects, and feature columns.
num_buckets: Range of inputs and outputs is `[0, num_buckets)`.
default_value: If set, values outside of range `[0, num_buckets)` will
be replaced with this value. If not set, values >= num_buckets will
cause a failure while values < 0 will be dropped.
Returns:
A `_CategoricalColumn` that returns identity values.
Raises:
ValueError: if `num_buckets` is less than one.
ValueError: if `default_value` is not in range `[0, num_buckets)`. | A `_CategoricalColumn` that returns identity values. | [
"A",
"_CategoricalColumn",
"that",
"returns",
"identity",
"values",
"."
] | def _categorical_column_with_identity(key, num_buckets, default_value=None):
"""A `_CategoricalColumn` that returns identity values.
Use this when your inputs are integers in the range `[0, num_buckets)`, and
you want to use the input value itself as the categorical ID. Values outside
this range will result in `default_value` if specified, otherwise it will
fail.
Typically, this is used for contiguous ranges of integer indexes, but
it doesn't have to be. This might be inefficient, however, if many of IDs
are unused. Consider `categorical_column_with_hash_bucket` in that case.
For input dictionary `features`, `features[key]` is either `Tensor` or
`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int
and `''` for string, which will be dropped by this feature column.
In the following examples, each input in the range `[0, 1000000)` is assigned
the same value. All other inputs are assigned `default_value` 0. Note that a
literal 0 in inputs will result in the same default ID.
Linear model:
```python
video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [video_id, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
```
Embedding for a DNN model:
```python
columns = [embedding_column(video_id, 9),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
```
Args:
key: A unique string identifying the input feature. It is used as the
column name and the dictionary key for feature parsing configs, feature
`Tensor` objects, and feature columns.
num_buckets: Range of inputs and outputs is `[0, num_buckets)`.
default_value: If set, values outside of range `[0, num_buckets)` will
be replaced with this value. If not set, values >= num_buckets will
cause a failure while values < 0 will be dropped.
Returns:
A `_CategoricalColumn` that returns identity values.
Raises:
ValueError: if `num_buckets` is less than one.
ValueError: if `default_value` is not in range `[0, num_buckets)`.
"""
if num_buckets < 1:
raise ValueError(
'num_buckets {} < 1, column_name {}'.format(num_buckets, key))
if (default_value is not None) and (
(default_value < 0) or (default_value >= num_buckets)):
raise ValueError(
'default_value {} not in range [0, {}), column_name {}'.format(
default_value, num_buckets, key))
fc_utils.assert_key_is_string(key)
return _IdentityCategoricalColumn(
key=key, num_buckets=num_buckets, default_value=default_value) | [
"def",
"_categorical_column_with_identity",
"(",
"key",
",",
"num_buckets",
",",
"default_value",
"=",
"None",
")",
":",
"if",
"num_buckets",
"<",
"1",
":",
"raise",
"ValueError",
"(",
"'num_buckets {} < 1, column_name {}'",
".",
"format",
"(",
"num_buckets",
",",
... | https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/feature_column/feature_column.py#L1391-L1455 | |
ChromiumWebApps/chromium | c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7 | tools/bisect_utils.py | python | SubprocessCall | (cmd, cwd=None) | return subprocess.call(cmd, shell=shell, cwd=cwd) | Runs a subprocess with specified parameters.
Args:
params: A list of parameters to pass to gclient.
cwd: Working directory to run from.
Returns:
The return code of the call. | Runs a subprocess with specified parameters. | [
"Runs",
"a",
"subprocess",
"with",
"specified",
"parameters",
"."
] | def SubprocessCall(cmd, cwd=None):
"""Runs a subprocess with specified parameters.
Args:
params: A list of parameters to pass to gclient.
cwd: Working directory to run from.
Returns:
The return code of the call.
"""
if os.name == 'nt':
# "HOME" isn't normally defined on windows, but is needed
# for git to find the user's .netrc file.
if not os.getenv('HOME'):
os.environ['HOME'] = os.environ['USERPROFILE']
shell = os.name == 'nt'
return subprocess.call(cmd, shell=shell, cwd=cwd) | [
"def",
"SubprocessCall",
"(",
"cmd",
",",
"cwd",
"=",
"None",
")",
":",
"if",
"os",
".",
"name",
"==",
"'nt'",
":",
"# \"HOME\" isn't normally defined on windows, but is needed",
"# for git to find the user's .netrc file.",
"if",
"not",
"os",
".",
"getenv",
"(",
"'H... | https://github.com/ChromiumWebApps/chromium/blob/c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7/tools/bisect_utils.py#L147-L163 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/mailbox.py | python | MH.get_file | (self, key) | return _ProxyFile(f) | Return a file-like representation or raise a KeyError. | Return a file-like representation or raise a KeyError. | [
"Return",
"a",
"file",
"-",
"like",
"representation",
"or",
"raise",
"a",
"KeyError",
"."
] | def get_file(self, key):
"""Return a file-like representation or raise a KeyError."""
try:
f = open(os.path.join(self._path, str(key)), 'rb')
except OSError as e:
if e.errno == errno.ENOENT:
raise KeyError('No message with key: %s' % key)
else:
raise
return _ProxyFile(f) | [
"def",
"get_file",
"(",
"self",
",",
"key",
")",
":",
"try",
":",
"f",
"=",
"open",
"(",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"_path",
",",
"str",
"(",
"key",
")",
")",
",",
"'rb'",
")",
"except",
"OSError",
"as",
"e",
":",
"if",... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/mailbox.py#L1065-L1074 | |
Xilinx/Vitis-AI | fc74d404563d9951b57245443c73bef389f3657f | tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/factorization/python/ops/factorization_ops.py | python | WALSModel.row_factors | (self) | return self._row_factors | Returns a list of tensors corresponding to row factor shards. | Returns a list of tensors corresponding to row factor shards. | [
"Returns",
"a",
"list",
"of",
"tensors",
"corresponding",
"to",
"row",
"factor",
"shards",
"."
] | def row_factors(self):
"""Returns a list of tensors corresponding to row factor shards."""
return self._row_factors | [
"def",
"row_factors",
"(",
"self",
")",
":",
"return",
"self",
".",
"_row_factors"
] | https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/factorization/python/ops/factorization_ops.py#L299-L301 | |
microsoft/checkedc-clang | a173fefde5d7877b7750e7ce96dd08cf18baebf2 | mlir/utils/spirv/gen_spirv_dialect.py | python | gen_opcode | (instructions) | return opcode_str + '\n\n' + enum_attr | Generates the TableGen definition to map opname to opcode
Returns:
- A string containing the TableGen SPV_OpCode definition | Generates the TableGen definition to map opname to opcode | [
"Generates",
"the",
"TableGen",
"definition",
"to",
"map",
"opname",
"to",
"opcode"
] | def gen_opcode(instructions):
""" Generates the TableGen definition to map opname to opcode
Returns:
- A string containing the TableGen SPV_OpCode definition
"""
max_len = max([len(inst['opname']) for inst in instructions])
def_fmt_str = 'def SPV_OC_{name} {colon:>{offset}} '\
'I32EnumAttrCase<"{name}", {value}>;'
opcode_defs = [
def_fmt_str.format(
name=inst['opname'],
value=inst['opcode'],
colon=':',
offset=(max_len + 1 - len(inst['opname']))) for inst in instructions
]
opcode_str = '\n'.join(opcode_defs)
decl_fmt_str = 'SPV_OC_{name}'
opcode_list = [
decl_fmt_str.format(name=inst['opname']) for inst in instructions
]
opcode_list = split_list_into_sublists(opcode_list)
opcode_list = [
'{:6}'.format('') + ', '.join(sublist) for sublist in opcode_list
]
opcode_list = ',\n'.join(opcode_list)
enum_attr = 'def SPV_OpcodeAttr :\n'\
' SPV_I32EnumAttr<"{name}", "valid SPIR-V instructions", [\n'\
'{lst}\n'\
' ]>;'.format(name='Opcode', lst=opcode_list)
return opcode_str + '\n\n' + enum_attr | [
"def",
"gen_opcode",
"(",
"instructions",
")",
":",
"max_len",
"=",
"max",
"(",
"[",
"len",
"(",
"inst",
"[",
"'opname'",
"]",
")",
"for",
"inst",
"in",
"instructions",
"]",
")",
"def_fmt_str",
"=",
"'def SPV_OC_{name} {colon:>{offset}} '",
"'I32EnumAttrCase<\"{... | https://github.com/microsoft/checkedc-clang/blob/a173fefde5d7877b7750e7ce96dd08cf18baebf2/mlir/utils/spirv/gen_spirv_dialect.py#L408-L440 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | wx/lib/plot.py | python | PlotCanvas.GetEnablePointLabel | (self) | return self._pointLabelEnabled | True if pointLabel enabled. | True if pointLabel enabled. | [
"True",
"if",
"pointLabel",
"enabled",
"."
] | def GetEnablePointLabel(self):
"""True if pointLabel enabled."""
return self._pointLabelEnabled | [
"def",
"GetEnablePointLabel",
"(",
"self",
")",
":",
"return",
"self",
".",
"_pointLabelEnabled"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/plot.py#L984-L986 | |
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib-tk/Tkinter.py | python | Misc.pack_propagate | (self, flag=_noarg_) | Set or get the status for propagation of geometry information.
A boolean argument specifies whether the geometry information
of the slaves will determine the size of this widget. If no argument
is given the current setting will be returned. | Set or get the status for propagation of geometry information. | [
"Set",
"or",
"get",
"the",
"status",
"for",
"propagation",
"of",
"geometry",
"information",
"."
] | def pack_propagate(self, flag=_noarg_):
"""Set or get the status for propagation of geometry information.
A boolean argument specifies whether the geometry information
of the slaves will determine the size of this widget. If no argument
is given the current setting will be returned.
"""
if flag is Misc._noarg_:
return self._getboolean(self.tk.call(
'pack', 'propagate', self._w))
else:
self.tk.call('pack', 'propagate', self._w, flag) | [
"def",
"pack_propagate",
"(",
"self",
",",
"flag",
"=",
"_noarg_",
")",
":",
"if",
"flag",
"is",
"Misc",
".",
"_noarg_",
":",
"return",
"self",
".",
"_getboolean",
"(",
"self",
".",
"tk",
".",
"call",
"(",
"'pack'",
",",
"'propagate'",
",",
"self",
"... | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/lib-tk/Tkinter.py#L1281-L1292 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Gems/CloudGemFramework/v1/AWS/resource-manager-code/lib/pkg_resources/__init__.py | python | IMetadataProvider.run_script | (script_name, namespace) | Execute the named script in the supplied namespace dictionary | Execute the named script in the supplied namespace dictionary | [
"Execute",
"the",
"named",
"script",
"in",
"the",
"supplied",
"namespace",
"dictionary"
] | def run_script(script_name, namespace):
"""Execute the named script in the supplied namespace dictionary""" | [
"def",
"run_script",
"(",
"script_name",
",",
"namespace",
")",
":"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemFramework/v1/AWS/resource-manager-code/lib/pkg_resources/__init__.py#L522-L523 | ||
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/x86/toolchain/lib/python2.7/subprocess.py | python | Popen.__init__ | (self, args, bufsize=0, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=False, shell=False,
cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0) | Create new Popen instance. | Create new Popen instance. | [
"Create",
"new",
"Popen",
"instance",
"."
] | def __init__(self, args, bufsize=0, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=False, shell=False,
cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0):
"""Create new Popen instance."""
_cleanup()
self._child_created = False
if not isinstance(bufsize, (int, long)):
raise TypeError("bufsize must be an integer")
if mswindows:
if preexec_fn is not None:
raise ValueError("preexec_fn is not supported on Windows "
"platforms")
if close_fds and (stdin is not None or stdout is not None or
stderr is not None):
raise ValueError("close_fds is not supported on Windows "
"platforms if you redirect stdin/stdout/stderr")
else:
# POSIX
if startupinfo is not None:
raise ValueError("startupinfo is only supported on Windows "
"platforms")
if creationflags != 0:
raise ValueError("creationflags is only supported on Windows "
"platforms")
self.stdin = None
self.stdout = None
self.stderr = None
self.pid = None
self.returncode = None
self.universal_newlines = universal_newlines
# Input and output objects. The general principle is like
# this:
#
# Parent Child
# ------ -----
# p2cwrite ---stdin---> p2cread
# c2pread <--stdout--- c2pwrite
# errread <--stderr--- errwrite
#
# On POSIX, the child objects are file descriptors. On
# Windows, these are Windows file handles. The parent objects
# are file descriptors on both platforms. The parent objects
# are None when not using PIPEs. The child objects are None
# when not redirecting.
(p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
try:
self._execute_child(args, executable, preexec_fn, close_fds,
cwd, env, universal_newlines,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
except Exception:
# Preserve original exception in case os.close raises.
exc_type, exc_value, exc_trace = sys.exc_info()
to_close = []
# Only close the pipes we created.
if stdin == PIPE:
to_close.extend((p2cread, p2cwrite))
if stdout == PIPE:
to_close.extend((c2pread, c2pwrite))
if stderr == PIPE:
to_close.extend((errread, errwrite))
for fd in to_close:
try:
os.close(fd)
except EnvironmentError:
pass
raise exc_type, exc_value, exc_trace
if mswindows:
if p2cwrite is not None:
p2cwrite = msvcrt.open_osfhandle(p2cwrite.Detach(), 0)
if c2pread is not None:
c2pread = msvcrt.open_osfhandle(c2pread.Detach(), 0)
if errread is not None:
errread = msvcrt.open_osfhandle(errread.Detach(), 0)
if p2cwrite is not None:
self.stdin = os.fdopen(p2cwrite, 'wb', bufsize)
if c2pread is not None:
if universal_newlines:
self.stdout = os.fdopen(c2pread, 'rU', bufsize)
else:
self.stdout = os.fdopen(c2pread, 'rb', bufsize)
if errread is not None:
if universal_newlines:
self.stderr = os.fdopen(errread, 'rU', bufsize)
else:
self.stderr = os.fdopen(errread, 'rb', bufsize) | [
"def",
"__init__",
"(",
"self",
",",
"args",
",",
"bufsize",
"=",
"0",
",",
"executable",
"=",
"None",
",",
"stdin",
"=",
"None",
",",
"stdout",
"=",
"None",
",",
"stderr",
"=",
"None",
",",
"preexec_fn",
"=",
"None",
",",
"close_fds",
"=",
"False",
... | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/subprocess.py#L650-L752 | ||
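For context on the row above: it captures Python 2's `subprocess.Popen.__init__`, whose comment block documents the parent↔child pipe wiring (`p2cwrite → p2cread`, `c2pwrite → c2pread`). A small Python 3 usage sketch of that same wiring, using `PIPE` for both directions (the child command here is an illustrative stand-in):

```python
import subprocess
import sys

# stdin=PIPE creates the p2cread/p2cwrite pair; stdout=PIPE the c2pread/c2pwrite pair.
proc = subprocess.Popen(
    [sys.executable, "-c", "print(input())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate("hello")  # write to child stdin, read its stdout
print(out.strip())  # hello
```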
perilouswithadollarsign/cstrike15_src | f82112a2388b841d72cb62ca48ab1846dfcc11c8 | thirdparty/protobuf-2.5.0/python/google/protobuf/internal/decoder.py | python | MessageSetItemDecoder | (extensions_by_number) | return DecodeItem | Returns a decoder for a MessageSet item.
The parameter is the _extensions_by_number map for the message class.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
} | Returns a decoder for a MessageSet item. | [
"Returns",
"a",
"decoder",
"for",
"a",
"MessageSet",
"item",
"."
] | def MessageSetItemDecoder(extensions_by_number):
"""Returns a decoder for a MessageSet item.
The parameter is the _extensions_by_number map for the message class.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT)
message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)
item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP)
local_ReadTag = ReadTag
local_DecodeVarint = _DecodeVarint
local_SkipField = SkipField
def DecodeItem(buffer, pos, end, message, field_dict):
message_set_item_start = pos
type_id = -1
message_start = -1
message_end = -1
# Technically, type_id and message can appear in any order, so we need
# a little loop here.
while 1:
(tag_bytes, pos) = local_ReadTag(buffer, pos)
if tag_bytes == type_id_tag_bytes:
(type_id, pos) = local_DecodeVarint(buffer, pos)
elif tag_bytes == message_tag_bytes:
(size, message_start) = local_DecodeVarint(buffer, pos)
pos = message_end = message_start + size
elif tag_bytes == item_end_tag_bytes:
break
else:
pos = SkipField(buffer, pos, end, tag_bytes)
if pos == -1:
raise _DecodeError('Missing group end tag.')
if pos > end:
raise _DecodeError('Truncated message.')
if type_id == -1:
raise _DecodeError('MessageSet item missing type_id.')
if message_start == -1:
raise _DecodeError('MessageSet item missing message.')
extension = extensions_by_number.get(type_id)
if extension is not None:
value = field_dict.get(extension)
if value is None:
value = field_dict.setdefault(
extension, extension.message_type._concrete_class())
if value._InternalParse(buffer, message_start,message_end) != message_end:
# The only reason _InternalParse would return early is if it encountered
# an end-group tag.
raise _DecodeError('Unexpected end-group tag.')
else:
if not message._unknown_fields:
message._unknown_fields = []
message._unknown_fields.append((MESSAGE_SET_ITEM_TAG,
buffer[message_set_item_start:pos]))
return pos
return DecodeItem | [
"def",
"MessageSetItemDecoder",
"(",
"extensions_by_number",
")",
":",
"type_id_tag_bytes",
"=",
"encoder",
".",
"TagBytes",
"(",
"2",
",",
"wire_format",
".",
"WIRETYPE_VARINT",
")",
"message_tag_bytes",
"=",
"encoder",
".",
"TagBytes",
"(",
"3",
",",
"wire_forma... | https://github.com/perilouswithadollarsign/cstrike15_src/blob/f82112a2388b841d72cb62ca48ab1846dfcc11c8/thirdparty/protobuf-2.5.0/python/google/protobuf/internal/decoder.py#L556-L626 | |
miyosuda/TensorFlowAndroidDemo | 35903e0221aa5f109ea2dbef27f20b52e317f42d | jni-build/jni/include/tensorflow/python/ops/control_flow_ops.py | python | ControlFlowContext.outer_context | (self) | return self._outer_context | Return the context containing this context. | Return the context containing this context. | [
"Return",
"the",
"context",
"containing",
"this",
"context",
"."
] | def outer_context(self):
"""Return the context containing this context."""
return self._outer_context | [
"def",
"outer_context",
"(",
"self",
")",
":",
"return",
"self",
".",
"_outer_context"
] | https://github.com/miyosuda/TensorFlowAndroidDemo/blob/35903e0221aa5f109ea2dbef27f20b52e317f42d/jni-build/jni/include/tensorflow/python/ops/control_flow_ops.py#L1094-L1096 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/gtk/richtext.py | python | RichTextObject.Dereference | (*args, **kwargs) | return _richtext.RichTextObject_Dereference(*args, **kwargs) | Dereference(self) | Dereference(self) | [
"Dereference",
"(",
"self",
")"
] | def Dereference(*args, **kwargs):
"""Dereference(self)"""
return _richtext.RichTextObject_Dereference(*args, **kwargs) | [
"def",
"Dereference",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_richtext",
".",
"RichTextObject_Dereference",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/richtext.py#L1387-L1389 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/py2/scipy/optimize/nonlin.py | python | LowRankMatrix.restart_reduce | (self, rank) | Reduce the rank of the matrix by dropping all vectors. | Reduce the rank of the matrix by dropping all vectors. | [
"Reduce",
"the",
"rank",
"of",
"the",
"matrix",
"by",
"dropping",
"all",
"vectors",
"."
] | def restart_reduce(self, rank):
"""
Reduce the rank of the matrix by dropping all vectors.
"""
if self.collapsed is not None:
return
assert rank > 0
if len(self.cs) > rank:
del self.cs[:]
del self.ds[:] | [
"def",
"restart_reduce",
"(",
"self",
",",
"rank",
")",
":",
"if",
"self",
".",
"collapsed",
"is",
"not",
"None",
":",
"return",
"assert",
"rank",
">",
"0",
"if",
"len",
"(",
"self",
".",
"cs",
")",
">",
"rank",
":",
"del",
"self",
".",
"cs",
"["... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/py2/scipy/optimize/nonlin.py#L799-L808 | ||
hanpfei/chromium-net | 392cc1fa3a8f92f42e4071ab6e674d8e0482f83f | third_party/catapult/third_party/py_vulcanize/third_party/rjsmin/rjsmin.py | python | jsmin_for_posers | (script, keep_bang_comments=False) | return _re.sub(rex, subber, '\n%s\n' % script).strip() | r"""
Minify javascript based on `jsmin.c by Douglas Crockford`_\.
Instead of parsing the stream char by char, it uses a regular
expression approach which minifies the whole script with one big
substitution regex.
.. _jsmin.c by Douglas Crockford:
http://www.crockford.com/javascript/jsmin.c
:Warning: This function is the digest of a _make_jsmin() call. It just
utilizes the resulting regexes. It's here for fun and may
vanish any time. Use the `jsmin` function instead.
:Parameters:
`script` : ``str``
Script to minify
`keep_bang_comments` : ``bool``
Keep comments starting with an exclamation mark? (``/*!...*/``)
:Return: Minified script
:Rtype: ``str`` | r"""
Minify javascript based on `jsmin.c by Douglas Crockford`_\. | [
"r",
"Minify",
"javascript",
"based",
"on",
"jsmin",
".",
"c",
"by",
"Douglas",
"Crockford",
"_",
"\\",
"."
] | def jsmin_for_posers(script, keep_bang_comments=False):
r"""
Minify javascript based on `jsmin.c by Douglas Crockford`_\.
Instead of parsing the stream char by char, it uses a regular
expression approach which minifies the whole script with one big
substitution regex.
.. _jsmin.c by Douglas Crockford:
http://www.crockford.com/javascript/jsmin.c
:Warning: This function is the digest of a _make_jsmin() call. It just
utilizes the resulting regexes. It's here for fun and may
vanish any time. Use the `jsmin` function instead.
:Parameters:
`script` : ``str``
Script to minify
`keep_bang_comments` : ``bool``
Keep comments starting with an exclamation mark? (``/*!...*/``)
:Return: Minified script
:Rtype: ``str``
"""
if not keep_bang_comments:
rex = (
r'([^\047"/\000-\040]+)|((?:(?:\047[^\047\\\r\n]*(?:\\(?:[^\r\n]'
r'|\r?\n|\r)[^\047\\\r\n]*)*\047)|(?:"[^"\\\r\n]*(?:\\(?:[^\r\n]'
r'|\r?\n|\r)[^"\\\r\n]*)*"))[^\047"/\000-\040]*)|(?<=[(,=:\[!&|?'
r'{};\r\n])(?:[\000-\011\013\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*'
r'][^*]*\*+)*/))*(?:(?:(?://[^\r\n]*)?[\r\n])(?:[\000-\011\013\0'
r'14\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/))*)*((?:/(?![\r'
r'\n/*])[^/\\\[\r\n]*(?:(?:\\[^\r\n]|(?:\[[^\\\]\r\n]*(?:\\[^\r'
r'\n][^\\\]\r\n]*)*\]))[^/\\\[\r\n]*)*/)[^\047"/\000-\040]*)|(?<'
r'=[\000-#%-,./:-@\[-^`{-~-]return)(?:[\000-\011\013\014\016-\04'
r'0]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/))*(?:(?:(?://[^\r\n]*)?['
r'\r\n])(?:[\000-\011\013\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^'
r'*]*\*+)*/)))*((?:/(?![\r\n/*])[^/\\\[\r\n]*(?:(?:\\[^\r\n]|(?:'
r'\[[^\\\]\r\n]*(?:\\[^\r\n][^\\\]\r\n]*)*\]))[^/\\\[\r\n]*)*/)['
r'^\047"/\000-\040]*)|(?<=[^\000-!#%&(*,./:-@\[\\^`{|~])(?:[\000'
r'-\011\013\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/))*(?'
r':((?:(?://[^\r\n]*)?[\r\n]))(?:[\000-\011\013\014\016-\040]|(?'
r':/\*[^*]*\*+(?:[^/*][^*]*\*+)*/))*)+(?=[^\000-\040"#%-\047)*,.'
r'/:-@\\-^`|-~])|(?<=[^\000-#%-,./:-@\[-^`{-~-])((?:[\000-\011\0'
r'13\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/)))+(?=[^\00'
r'0-#%-,./:-@\[-^`{-~-])|(?<=\+)((?:[\000-\011\013\014\016-\040]'
r'|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/)))+(?=\+)|(?<=-)((?:[\000-'
r'\011\013\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/)))+(?'
r'=-)|(?:[\000-\011\013\014\016-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]'
r'*\*+)*/))+|(?:(?:(?://[^\r\n]*)?[\r\n])(?:[\000-\011\013\014\0'
r'16-\040]|(?:/\*[^*]*\*+(?:[^/*][^*]*\*+)*/))*)+'
)
def subber(match):
""" Substitution callback """
groups = match.groups()
return (
groups[0] or
groups[1] or
groups[2] or
groups[3] or
(groups[4] and '\n') or
(groups[5] and ' ') or
(groups[6] and ' ') or
(groups[7] and ' ') or
''
)
else:
rex = (
r'([^\047"/\000-\040]+)|((?:(?:\047[^\047\\\r\n]*(?:\\(?:[^\r\n]'
r'|\r?\n|\r)[^\047\\\r\n]*)*\047)|(?:"[^"\\\r\n]*(?:\\(?:[^\r\n]'
r'|\r?\n|\r)[^"\\\r\n]*)*"))[^\047"/\000-\040]*)|((?:/\*![^*]*\*'
r'+(?:[^/*][^*]*\*+)*/)[^\047"/\000-\040]*)|(?<=[(,=:\[!&|?{};\r'
r'\n])(?:[\000-\011\013\014\016-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*'
r'][^*]*\*+)*/))*(?:(?:(?://[^\r\n]*)?[\r\n])(?:[\000-\011\013\0'
r'14\016-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/))*)*((?:/('
r'?![\r\n/*])[^/\\\[\r\n]*(?:(?:\\[^\r\n]|(?:\[[^\\\]\r\n]*(?:'
r'\\[^\r\n][^\\\]\r\n]*)*\]))[^/\\\[\r\n]*)*/)[^\047"/\000-\040]'
r'*)|(?<=[\000-#%-,./:-@\[-^`{-~-]return)(?:[\000-\011\013\014\0'
r'16-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/))*(?:(?:(?://['
r'^\r\n]*)?[\r\n])(?:[\000-\011\013\014\016-\040]|(?:/\*(?!!)[^*'
r']*\*+(?:[^/*][^*]*\*+)*/)))*((?:/(?![\r\n/*])[^/\\\[\r\n]*(?:('
r'?:\\[^\r\n]|(?:\[[^\\\]\r\n]*(?:\\[^\r\n][^\\\]\r\n]*)*\]))[^/'
r'\\\[\r\n]*)*/)[^\047"/\000-\040]*)|(?<=[^\000-!#%&(*,./:-@\[\\'
r'^`{|~])(?:[\000-\011\013\014\016-\040]|(?:/\*(?!!)[^*]*\*+(?:['
r'^/*][^*]*\*+)*/))*(?:((?:(?://[^\r\n]*)?[\r\n]))(?:[\000-\011'
r'\013\014\016-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/))*)+'
r'(?=[^\000-\040"#%-\047)*,./:-@\\-^`|-~])|(?<=[^\000-#%-,./:-@'
r'\[-^`{-~-])((?:[\000-\011\013\014\016-\040]|(?:/\*(?!!)[^*]*\*'
r'+(?:[^/*][^*]*\*+)*/)))+(?=[^\000-#%-,./:-@\[-^`{-~-])|(?<=\+)'
r'((?:[\000-\011\013\014\016-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*][^'
r'*]*\*+)*/)))+(?=\+)|(?<=-)((?:[\000-\011\013\014\016-\040]|(?:'
r'/\*(?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/)))+(?=-)|(?:[\000-\011\013'
r'\014\016-\040]|(?:/\*(?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/))+|(?:(?'
r':(?://[^\r\n]*)?[\r\n])(?:[\000-\011\013\014\016-\040]|(?:/\*('
r'?!!)[^*]*\*+(?:[^/*][^*]*\*+)*/))*)+'
)
def subber(match):
""" Substitution callback """
groups = match.groups()
return (
groups[0] or
groups[1] or
groups[2] or
groups[3] or
groups[4] or
(groups[5] and '\n') or
(groups[6] and ' ') or
(groups[7] and ' ') or
(groups[8] and ' ') or
''
)
return _re.sub(rex, subber, '\n%s\n' % script).strip() | [
"def",
"jsmin_for_posers",
"(",
"script",
",",
"keep_bang_comments",
"=",
"False",
")",
":",
"if",
"not",
"keep_bang_comments",
":",
"rex",
"=",
"(",
"r'([^\\047\"/\\000-\\040]+)|((?:(?:\\047[^\\047\\\\\\r\\n]*(?:\\\\(?:[^\\r\\n]'",
"r'|\\r?\\n|\\r)[^\\047\\\\\\r\\n]*)*\\047)|(?:... | https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/py_vulcanize/third_party/rjsmin/rjsmin.py#L312-L427 | |
pyne/pyne | 0c2714d7c0d1b5e20be6ae6527da2c660dd6b1b3 | pyne/fluka.py | python | UsrbinTally._create_mesh | (self, part_data, error_data) | This will create the mesh object with the name of the tally
specified by the user. One mesh object contains both the part_data and
the error_data. | This will create the mesh object with the name of the tally
specified by the user. One mesh object contains both the part_data and
the error_data. | [
"This",
"will",
"create",
"the",
"mesh",
"object",
"with",
"the",
"name",
"of",
"the",
"tally",
"specified",
"by",
"the",
"user",
".",
"One",
"mesh",
"object",
"contains",
"both",
"the",
"part_data",
"and",
"the",
"error_data",
"."
] | def _create_mesh(self, part_data, error_data):
"""This will create the mesh object with the name of the tally
specified by the user. One mesh object contains both the part_data and
the error_data.
"""
super(UsrbinTally, self).__init__(structured_coords=[self.x_bounds,
self.y_bounds, self.z_bounds],
structured=True,
structured_ordering='zyx',
mats=None)
self.part_data_tag = NativeMeshTag(size=1, dtype=float, mesh=self,
name="part_data_{0}".format(self.particle))
self.error_data_tag = NativeMeshTag(size=1, dtype=float, mesh=self,
name="error_data_{0}".format(self.particle))
self.part_data_tag[:] = part_data
self.error_data_tag[:] = error_data | [
"def",
"_create_mesh",
"(",
"self",
",",
"part_data",
",",
"error_data",
")",
":",
"super",
"(",
"UsrbinTally",
",",
"self",
")",
".",
"__init__",
"(",
"structured_coords",
"=",
"[",
"self",
".",
"x_bounds",
",",
"self",
".",
"y_bounds",
",",
"self",
"."... | https://github.com/pyne/pyne/blob/0c2714d7c0d1b5e20be6ae6527da2c660dd6b1b3/pyne/fluka.py#L200-L215 | ||
cyberbotics/webots | af7fa7d68dcf7b4550f1f2e132092b41e83698fc | resources/osm_importer/utils/vector.py | python | Vector2D.__floordiv__ | (self, other) | return self.__truediv__(other) | Divide the vector to another. | Divide the vector to another. | [
"Divide",
"the",
"vector",
"to",
"another",
"."
] | def __floordiv__(self, other):
"""Divide the vector to another."""
return self.__truediv__(other) | [
"def",
"__floordiv__",
"(",
"self",
",",
"other",
")",
":",
"return",
"self",
".",
"__truediv__",
"(",
"other",
")"
] | https://github.com/cyberbotics/webots/blob/af7fa7d68dcf7b4550f1f2e132092b41e83698fc/resources/osm_importer/utils/vector.py#L74-L76 | |
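In the `Vector2D.__floordiv__` row above, `//` simply forwards to `__truediv__`, so floor division and true division behave identically for that vector type. A minimal stand-alone sketch of the delegation (hypothetical class, not the Webots one):

```python
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __truediv__(self, k):
        return Vec2(self.x / k, self.y / k)

    def __floordiv__(self, k):
        # Delegate: '//' intentionally behaves like '/' for this class.
        return self.__truediv__(k)

v = Vec2(3.0, 9.0) // 2
print((v.x, v.y))  # (1.5, 4.5)
```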
hpi-xnor/BMXNet-v2 | af2b1859eafc5c721b1397cef02f946aaf2ce20d | python/mxnet/ndarray/ndarray.py | python | NDArray.repeat | (self, *args, **kwargs) | return op.repeat(self, *args, **kwargs) | Convenience fluent method for :py:func:`repeat`.
The arguments are the same as for :py:func:`repeat`, with
this array as data. | Convenience fluent method for :py:func:`repeat`. | [
"Convenience",
"fluent",
"method",
"for",
":",
"py",
":",
"func",
":",
"repeat",
"."
] | def repeat(self, *args, **kwargs):
"""Convenience fluent method for :py:func:`repeat`.
The arguments are the same as for :py:func:`repeat`, with
this array as data.
"""
return op.repeat(self, *args, **kwargs) | [
"def",
"repeat",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"op",
".",
"repeat",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/hpi-xnor/BMXNet-v2/blob/af2b1859eafc5c721b1397cef02f946aaf2ce20d/python/mxnet/ndarray/ndarray.py#L1102-L1108 | |
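The fluent `NDArray.repeat` row above is a thin wrapper that forwards `*args`/`**kwargs` to a module-level operator with the array as first argument. A hedged, self-contained sketch of the pattern (toy `Array`, not MXNet's NDArray):

```python
def repeat_op(data, repeats):
    """Module-level operator: repeat each element `repeats` times."""
    return [x for x in data for _ in range(repeats)]

class Array:
    def __init__(self, data):
        self.data = list(data)

    def repeat(self, *args, **kwargs):
        """Convenience fluent method: same arguments as repeat_op."""
        return Array(repeat_op(self.data, *args, **kwargs))

print(Array([1, 2]).repeat(3).data)  # [1, 1, 1, 2, 2, 2]
```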
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/inspect.py | python | getcoroutinelocals | (coroutine) | Get the mapping of coroutine local variables to their current values.
A dict is returned, with the keys the local variable names and values the
bound values. | Get the mapping of coroutine local variables to their current values. | [
"Get",
"the",
"mapping",
"of",
"coroutine",
"local",
"variables",
"to",
"their",
"current",
"values",
"."
] | def getcoroutinelocals(coroutine):
"""
Get the mapping of coroutine local variables to their current values.
A dict is returned, with the keys the local variable names and values the
bound values."""
frame = getattr(coroutine, "cr_frame", None)
if frame is not None:
return frame.f_locals
else:
return {} | [
"def",
"getcoroutinelocals",
"(",
"coroutine",
")",
":",
"frame",
"=",
"getattr",
"(",
"coroutine",
",",
"\"cr_frame\"",
",",
"None",
")",
"if",
"frame",
"is",
"not",
"None",
":",
"return",
"frame",
".",
"f_locals",
"else",
":",
"return",
"{",
"}"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/inspect.py#L1679-L1689 | ||
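The `getcoroutinelocals` row above just reads `cr_frame.f_locals` when the frame exists. A small usage sketch — the `_Suspend` awaitable is an assumption added here so the coroutine can be paused without pulling in asyncio:

```python
import inspect

class _Suspend:
    """Minimal awaitable that yields once, suspending the coroutine."""
    def __await__(self):
        yield

async def worker():
    task_name = "demo"
    await _Suspend()  # pause here so locals can be inspected

coro = worker()
coro.send(None)  # advance to the await, leaving the frame suspended
print(inspect.getcoroutinelocals(coro))  # {'task_name': 'demo'}
coro.close()
```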
eventql/eventql | 7ca0dbb2e683b525620ea30dc40540a22d5eb227 | deps/3rdparty/spidermonkey/mozjs/python/psutil/psutil/__init__.py | python | Process.num_ctx_switches | (self) | return self._proc.num_ctx_switches() | Return the number of voluntary and involuntary context
switches performed by this process. | Return the number of voluntary and involuntary context
switches performed by this process. | [
"Return",
"the",
"number",
"of",
"voluntary",
"and",
"involuntary",
"context",
"switches",
"performed",
"by",
"this",
"process",
"."
] | def num_ctx_switches(self):
"""Return the number of voluntary and involuntary context
switches performed by this process.
"""
return self._proc.num_ctx_switches() | [
"def",
"num_ctx_switches",
"(",
"self",
")",
":",
"return",
"self",
".",
"_proc",
".",
"num_ctx_switches",
"(",
")"
] | https://github.com/eventql/eventql/blob/7ca0dbb2e683b525620ea30dc40540a22d5eb227/deps/3rdparty/spidermonkey/mozjs/python/psutil/psutil/__init__.py#L691-L695 | |
freeorion/freeorion | c266a40eccd3a99a17de8fe57c36ef6ba3771665 | default/python/universe_generation/galaxy.py | python | DisjointSets.complete_sets | (self) | return list(ret.values()) | returns a list of list of all sets O(n). | returns a list of list of all sets O(n). | [
"returns",
"a",
"list",
"of",
"list",
"of",
"all",
"sets",
"O",
"(",
"n",
")",
"."
] | def complete_sets(self):
"""returns a list of list of all sets O(n)."""
ret = defaultdict(list)
for pos in self.dsets.keys():
ret[self.root(pos)].append(pos)
return list(ret.values()) | [
"def",
"complete_sets",
"(",
"self",
")",
":",
"ret",
"=",
"defaultdict",
"(",
"list",
")",
"for",
"pos",
"in",
"self",
".",
"dsets",
".",
"keys",
"(",
")",
":",
"ret",
"[",
"self",
".",
"root",
"(",
"pos",
")",
"]",
".",
"append",
"(",
"pos",
... | https://github.com/freeorion/freeorion/blob/c266a40eccd3a99a17de8fe57c36ef6ba3771665/default/python/universe_generation/galaxy.py#L209-L214 | |
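The `DisjointSets.complete_sets` row above groups every element under its root with a `defaultdict`. A self-contained union-find sketch showing that grouping step in context (illustrative implementation, not the FreeOrion one):

```python
from collections import defaultdict

class DisjointSets:
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def root(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.root(a)] = self.root(b)

    def complete_sets(self):
        groups = defaultdict(list)       # root -> members, O(n) overall
        for x in self.parent:
            groups[self.root(x)].append(x)
        return list(groups.values())

ds = DisjointSets(range(5))
ds.union(0, 1)
ds.union(3, 4)
print(sorted(sorted(s) for s in ds.complete_sets()))  # [[0, 1], [2], [3, 4]]
```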
thalium/icebox | 99d147d5b9269222225443ce171b4fd46d8985d4 | third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py | python | createOutputBuffer | (file, encoding) | return outputBuffer(_obj=ret) | Create a libxml2 output buffer from a Python file | Create a libxml2 output buffer from a Python file | [
"Create",
"a",
"libxml2",
"output",
"buffer",
"from",
"a",
"Python",
"file"
] | def createOutputBuffer(file, encoding):
"""Create a libxml2 output buffer from a Python file """
ret = libxml2mod.xmlCreateOutputBuffer(file, encoding)
if ret is None:raise treeError('xmlCreateOutputBuffer() failed')
return outputBuffer(_obj=ret) | [
"def",
"createOutputBuffer",
"(",
"file",
",",
"encoding",
")",
":",
"ret",
"=",
"libxml2mod",
".",
"xmlCreateOutputBuffer",
"(",
"file",
",",
"encoding",
")",
"if",
"ret",
"is",
"None",
":",
"raise",
"treeError",
"(",
"'xmlCreateOutputBuffer() failed'",
")",
... | https://github.com/thalium/icebox/blob/99d147d5b9269222225443ce171b4fd46d8985d4/third_party/virtualbox/src/libs/libxml2-2.9.4/python/libxml2.py#L1565-L1569 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py | python | BaseManager._number_of_objects | (self) | Return the number of shared objects | Return the number of shared objects | [
"Return",
"the",
"number",
"of",
"shared",
"objects"
] | def _number_of_objects(self):
'''
Return the number of shared objects
'''
conn = self._Client(self._address, authkey=self._authkey)
try:
return dispatch(conn, None, 'number_of_objects')
finally:
conn.close() | [
"def",
"_number_of_objects",
"(",
"self",
")",
":",
"conn",
"=",
"self",
".",
"_Client",
"(",
"self",
".",
"_address",
",",
"authkey",
"=",
"self",
".",
"_authkey",
")",
"try",
":",
"return",
"dispatch",
"(",
"conn",
",",
"None",
",",
"'number_of_objects... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py#L633-L641 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/ipython/py3/IPython/core/interactiveshell.py | python | InteractiveShell._indent_current_str | (self) | return self.input_splitter.get_indent_spaces() * ' ' | return the current level of indentation as a string | return the current level of indentation as a string | [
"return",
"the",
"current",
"level",
"of",
"indentation",
"as",
"a",
"string"
] | def _indent_current_str(self):
"""return the current level of indentation as a string"""
return self.input_splitter.get_indent_spaces() * ' ' | [
"def",
"_indent_current_str",
"(",
"self",
")",
":",
"return",
"self",
".",
"input_splitter",
".",
"get_indent_spaces",
"(",
")",
"*",
"' '"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/ipython/py3/IPython/core/interactiveshell.py#L2166-L2168 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python3/src/Lib/_pydecimal.py | python | Decimal.copy_sign | (self, other, context=None) | return _dec_from_triple(other._sign, self._int,
self._exp, self._is_special) | Returns self with the sign of other. | Returns self with the sign of other. | [
"Returns",
"self",
"with",
"the",
"sign",
"of",
"other",
"."
] | def copy_sign(self, other, context=None):
"""Returns self with the sign of other."""
other = _convert_other(other, raiseit=True)
return _dec_from_triple(other._sign, self._int,
self._exp, self._is_special) | [
"def",
"copy_sign",
"(",
"self",
",",
"other",
",",
"context",
"=",
"None",
")",
":",
"other",
"=",
"_convert_other",
"(",
"other",
",",
"raiseit",
"=",
"True",
")",
"return",
"_dec_from_triple",
"(",
"other",
".",
"_sign",
",",
"self",
".",
"_int",
",... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python3/src/Lib/_pydecimal.py#L3030-L3034 | |
mindspore-ai/mindspore | fb8fd3338605bb34fa5cea054e535a8b1d753fab | mindspore/python/mindspore/ops/_grad/grad_math_ops.py | python | get_bprop_cumsum | (self) | return bprop | Grad definition for `CumSum` operation. | Grad definition for `CumSum` operation. | [
"Grad",
"definition",
"for",
"CumSum",
"operation",
"."
] | def get_bprop_cumsum(self):
"""Grad definition for `CumSum` operation."""
cumsum = P.CumSum(exclusive=self.exclusive, reverse=not self.reverse)
def bprop(x, axis, out, dout):
return cumsum(dout, axis), zeros_like(axis)
return bprop | [
"def",
"get_bprop_cumsum",
"(",
"self",
")",
":",
"cumsum",
"=",
"P",
".",
"CumSum",
"(",
"exclusive",
"=",
"self",
".",
"exclusive",
",",
"reverse",
"=",
"not",
"self",
".",
"reverse",
")",
"def",
"bprop",
"(",
"x",
",",
"axis",
",",
"out",
",",
"... | https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/ops/_grad/grad_math_ops.py#L686-L693 | |
metashell/metashell | f4177e4854ea00c8dbc722cadab26ef413d798ea | 3rd/templight/clang/tools/scan-build-py/libear/__init__.py | python | Toolset.add_definitions | (self, defines) | part of public interface | part of public interface | [
"part",
"of",
"public",
"interface"
] | def add_definitions(self, defines):
""" part of public interface """
self.c_flags.extend(defines) | [
"def",
"add_definitions",
"(",
"self",
",",
"defines",
")",
":",
"self",
".",
"c_flags",
".",
"extend",
"(",
"defines",
")"
] | https://github.com/metashell/metashell/blob/f4177e4854ea00c8dbc722cadab26ef413d798ea/3rd/templight/clang/tools/scan-build-py/libear/__init__.py#L94-L96 | ||
adobe/chromium | cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7 | third_party/protobuf/python/google/protobuf/text_format.py | python | _Tokenizer.NextToken | (self) | Reads the next meaningful token. | Reads the next meaningful token. | [
"Reads",
"the",
"next",
"meaningful",
"token",
"."
] | def NextToken(self):
"""Reads the next meaningful token."""
self._previous_line = self._line
self._previous_column = self._column
self._column += len(self.token)
self._SkipWhitespace()
if not self._lines and len(self._current_line) <= self._column:
self.token = ''
return
match = self._TOKEN.match(self._current_line, self._column)
if match:
token = match.group(0)
self.token = token
else:
self.token = self._current_line[self._column] | [
"def",
"NextToken",
"(",
"self",
")",
":",
"self",
".",
"_previous_line",
"=",
"self",
".",
"_line",
"self",
".",
"_previous_column",
"=",
"self",
".",
"_column",
"self",
".",
"_column",
"+=",
"len",
"(",
"self",
".",
"token",
")",
"self",
".",
"_SkipW... | https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/third_party/protobuf/python/google/protobuf/text_format.py#L640-L657 | ||
networkit/networkit | 695b7a786a894a303fa8587597d5ef916e797729 | scripts/DynamicBetweennessExperiments.py | python | setRandomWeights | (G, mu, sigma) | return G | Add random weights, normal distribution with mean mu and standard deviation sigma | Add random weights, normal distribution with mean mu and standard deviation sigma | [
"Add",
"random",
"weights",
"normal",
"distribution",
"with",
"mean",
"mu",
"and",
"standard",
"deviation",
"sigma"
] | def setRandomWeights(G, mu, sigma):
"""
Add random weights, normal distribution with mean mu and standard deviation sigma
"""
for (u, v) in G.iterEdges():
w = random.normalvariate(mu, sigma)
G.setWeight(u, v, w)
return G | [
"def",
"setRandomWeights",
"(",
"G",
",",
"mu",
",",
"sigma",
")",
":",
"for",
"(",
"u",
",",
"v",
")",
"in",
"G",
".",
"iterEdges",
"(",
")",
":",
"w",
"=",
"random",
".",
"normalvariate",
"(",
"mu",
",",
"sigma",
")",
"G",
".",
"setWeight",
"... | https://github.com/networkit/networkit/blob/695b7a786a894a303fa8587597d5ef916e797729/scripts/DynamicBetweennessExperiments.py#L35-L42 | |
windystrife/UnrealEngine_NVIDIAGameWorks | b50e6338a7c5b26374d66306ebc7807541ff815e | Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/mailbox.py | python | _ProxyFile.read | (self, size=None) | return self._read(size, self._file.read) | Read bytes. | Read bytes. | [
"Read",
"bytes",
"."
] | def read(self, size=None):
"""Read bytes."""
return self._read(size, self._file.read) | [
"def",
"read",
"(",
"self",
",",
"size",
"=",
"None",
")",
":",
"return",
"self",
".",
"_read",
"(",
"size",
",",
"self",
".",
"_file",
".",
"read",
")"
] | https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/mailbox.py#L1871-L1873 | |
TheLegendAli/DeepLab-Context | fb04e9e2fc2682490ad9f60533b9d6c4c0e0479c | scripts/cpp_lint.py | python | GetLineWidth | (line) | Determines the width of the line in column positions.
Args:
line: A string, which may be a Unicode string.
Returns:
The width of the line in column positions, accounting for Unicode
combining characters and wide characters. | Determines the width of the line in column positions. | [
"Determines",
"the",
"width",
"of",
"the",
"line",
"in",
"column",
"positions",
"."
] | def GetLineWidth(line):
"""Determines the width of the line in column positions.
Args:
line: A string, which may be a Unicode string.
Returns:
The width of the line in column positions, accounting for Unicode
combining characters and wide characters.
"""
if isinstance(line, unicode):
width = 0
for uc in unicodedata.normalize('NFC', line):
if unicodedata.east_asian_width(uc) in ('W', 'F'):
width += 2
elif not unicodedata.combining(uc):
width += 1
return width
else:
return len(line) | [
"def",
"GetLineWidth",
"(",
"line",
")",
":",
"if",
"isinstance",
"(",
"line",
",",
"unicode",
")",
":",
"width",
"=",
"0",
"for",
"uc",
"in",
"unicodedata",
".",
"normalize",
"(",
"'NFC'",
",",
"line",
")",
":",
"if",
"unicodedata",
".",
"east_asian_w... | https://github.com/TheLegendAli/DeepLab-Context/blob/fb04e9e2fc2682490ad9f60533b9d6c4c0e0479c/scripts/cpp_lint.py#L3437-L3456 | ||
ucbrise/confluo | 578883a4f7fbbb4aea78c342d366f5122ef598f7 | pyclient/confluo/rpc/data_types.py | python | DataType.pack | (self, data) | Args:
data: Python data
Returns:
Binary data | [] | def pack(self, data):
"""
Args:
data: Python data
Returns:
Binary data
"""
try:
return struct.pack(self.format_code(), data)
except Exception as e:
raise ValueError('Error converting {} to {}: {}'.format(data, to_string(self.type_id_, self.size_), e)) | [
"def",
"pack",
"(",
"self",
",",
"data",
")",
":",
"try",
":",
"return",
"struct",
".",
"pack",
"(",
"self",
".",
"format_code",
"(",
")",
",",
"data",
")",
"except",
"Exception",
"as",
"e",
":",
"raise",
"ValueError",
"(",
"'Error converting {} to {}: {... | https://github.com/ucbrise/confluo/blob/578883a4f7fbbb4aea78c342d366f5122ef598f7/pyclient/confluo/rpc/data_types.py#L129-L140 | |||
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/richtext.py | python | RichTextObject.Layout | (*args, **kwargs) | return _richtext.RichTextObject_Layout(*args, **kwargs) | Layout(self, DC dc, RichTextDrawingContext context, Rect rect, Rect parentRect,
int style) -> bool | Layout(self, DC dc, RichTextDrawingContext context, Rect rect, Rect parentRect,
int style) -> bool | [
"Layout",
"(",
"self",
"DC",
"dc",
"RichTextDrawingContext",
"context",
"Rect",
"rect",
"Rect",
"parentRect",
"int",
"style",
")",
"-",
">",
"bool"
] | def Layout(*args, **kwargs):
"""
Layout(self, DC dc, RichTextDrawingContext context, Rect rect, Rect parentRect,
int style) -> bool
"""
return _richtext.RichTextObject_Layout(*args, **kwargs) | [
"def",
"Layout",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_richtext",
".",
"RichTextObject_Layout",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/richtext.py#L1172-L1177 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scikit-learn/py3/sklearn/decomposition/_fastica.py | python | FastICA._fit | (self, X, compute_sources=False) | return S | Fit the model
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples
and n_features is the number of features.
compute_sources : bool
If False, sources are not computed but only the rotation matrix.
This can save memory when working with big data. Defaults to False.
Returns
-------
X_new : array-like, shape (n_samples, n_components) | Fit the model | [
"Fit",
"the",
"model"
] | def _fit(self, X, compute_sources=False):
"""Fit the model
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples
and n_features is the number of features.
compute_sources : bool
If False, sources are not computes but only the rotation matrix.
This can save memory when working with big data. Defaults to False.
Returns
-------
X_new : array-like, shape (n_samples, n_components)
"""
fun_args = {} if self.fun_args is None else self.fun_args
random_state = check_random_state(self.random_state)
# make interface compatible with other decompositions
# a copy is required only for non whitened data
X = check_array(X, copy=self.whiten, dtype=FLOAT_DTYPES,
ensure_min_samples=2).T
alpha = fun_args.get('alpha', 1.0)
if not 1 <= alpha <= 2:
raise ValueError('alpha must be in [1,2]')
if self.fun == 'logcosh':
g = _logcosh
elif self.fun == 'exp':
g = _exp
elif self.fun == 'cube':
g = _cube
elif callable(self.fun):
def g(x, fun_args):
return self.fun(x, **fun_args)
else:
exc = ValueError if isinstance(self.fun, str) else TypeError
raise exc(
"Unknown function %r;"
" should be one of 'logcosh', 'exp', 'cube' or callable"
% self.fun
)
n_samples, n_features = X.shape
n_components = self.n_components
if not self.whiten and n_components is not None:
n_components = None
warnings.warn('Ignoring n_components with whiten=False.')
if n_components is None:
n_components = min(n_samples, n_features)
if (n_components > min(n_samples, n_features)):
n_components = min(n_samples, n_features)
warnings.warn(
'n_components is too large: it will be set to %s'
% n_components
)
if self.whiten:
# Centering the columns (ie the variables)
X_mean = X.mean(axis=-1)
X -= X_mean[:, np.newaxis]
# Whitening and preprocessing by PCA
u, d, _ = linalg.svd(X, full_matrices=False)
del _
K = (u / d).T[:n_components] # see (6.33) p.140
del u, d
X1 = np.dot(K, X)
# see (13.6) p.267 Here X1 is white and data
# in X has been projected onto a subspace by PCA
X1 *= np.sqrt(n_features)
else:
# X must be casted to floats to avoid typing issues with numpy
# 2.0 and the line below
X1 = as_float_array(X, copy=False) # copy has been taken care of
w_init = self.w_init
if w_init is None:
w_init = np.asarray(random_state.normal(
size=(n_components, n_components)), dtype=X1.dtype)
else:
w_init = np.asarray(w_init)
if w_init.shape != (n_components, n_components):
raise ValueError(
'w_init has invalid shape -- should be %(shape)s'
% {'shape': (n_components, n_components)})
kwargs = {'tol': self.tol,
'g': g,
'fun_args': fun_args,
'max_iter': self.max_iter,
'w_init': w_init}
if self.algorithm == 'parallel':
W, n_iter = _ica_par(X1, **kwargs)
elif self.algorithm == 'deflation':
W, n_iter = _ica_def(X1, **kwargs)
else:
raise ValueError('Invalid algorithm: must be either `parallel` or'
' `deflation`.')
del X1
if compute_sources:
if self.whiten:
S = np.dot(np.dot(W, K), X).T
else:
S = np.dot(W, X).T
else:
S = None
self.n_iter_ = n_iter
if self.whiten:
self.components_ = np.dot(W, K)
self.mean_ = X_mean
self.whitening_ = K
else:
self.components_ = W
self.mixing_ = linalg.pinv(self.components_)
self._unmixing = W
if compute_sources:
self.__sources = S
return S | [
"def",
"_fit",
"(",
"self",
",",
"X",
",",
"compute_sources",
"=",
"False",
")",
":",
"fun_args",
"=",
"{",
"}",
"if",
"self",
".",
"fun_args",
"is",
"None",
"else",
"self",
".",
"fun_args",
"random_state",
"=",
"check_random_state",
"(",
"self",
".",
... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scikit-learn/py3/sklearn/decomposition/_fastica.py#L410-L542 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/windows/Lib/collections/__init__.py | python | OrderedDict.__init__ | (*args, **kwds) | Initialize an ordered dictionary. The signature is the same as
regular dictionaries. Keyword argument order is preserved. | Initialize an ordered dictionary. The signature is the same as
regular dictionaries. Keyword argument order is preserved. | [
"Initialize",
"an",
"ordered",
"dictionary",
".",
"The",
"signature",
"is",
"the",
"same",
"as",
"regular",
"dictionaries",
".",
"Keyword",
"argument",
"order",
"is",
"preserved",
"."
] | def __init__(*args, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries. Keyword argument order is preserved.
'''
if not args:
raise TypeError("descriptor '__init__' of 'OrderedDict' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
self.__hardroot = _Link()
self.__root = root = _proxy(self.__hardroot)
root.prev = root.next = root
self.__map = {}
self.__update(*args, **kwds) | [
"def",
"__init__",
"(",
"*",
"args",
",",
"*",
"*",
"kwds",
")",
":",
"if",
"not",
"args",
":",
"raise",
"TypeError",
"(",
"\"descriptor '__init__' of 'OrderedDict' object \"",
"\"needs an argument\"",
")",
"self",
",",
"",
"*",
"args",
"=",
"args",
"if",
"l... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/collections/__init__.py#L96-L113 | ||
apache/incubator-mxnet | f03fb23f1d103fec9541b5ae59ee06b1734a51d9 | python/mxnet/numpy/multiarray.py | python | log10 | (x, out=None, **kwargs) | return _mx_nd_np.log10(x, out=out, **kwargs) | Return the base 10 logarithm of the input array, element-wise.
Parameters
----------
x : ndarray or scalar
Input array or scalar.
out : ndarray or None
A location into which the result is stored. If provided, it
must have a shape that the inputs broadcast to. If not provided
or None, a freshly-allocated array is returned. The dtype of the
output is the same as that of the input if the input is an ndarray.
Returns
-------
y : ndarray or scalar
The logarithm to the base 10 of `x`, element-wise. NaNs are
returned where x is negative. This is a scalar if `x` is a scalar.
Notes
----
This function only supports input type of float.
Examples
--------
>>> np.log10(np.array([1e-15, -3.]))
array([-15., nan]) | Return the base 10 logarithm of the input array, element-wise. | [
"Return",
"the",
"base",
"10",
"logarithm",
"of",
"the",
"input",
"array",
"element",
"-",
"wise",
"."
] | def log10(x, out=None, **kwargs):
"""
Return the base 10 logarithm of the input array, element-wise.
Parameters
----------
x : ndarray or scalar
Input array or scalar.
out : ndarray or None
A location into which the result is stored. If provided, it
must have a shape that the inputs broadcast to. If not provided
or None, a freshly-allocated array is returned. The dtype of the
output is the same as that of the input if the input is an ndarray.
Returns
-------
y : ndarray or scalar
The logarithm to the base 10 of `x`, element-wise. NaNs are
returned where x is negative. This is a scalar if `x` is a scalar.
Notes
----
This function only supports input type of float.
Examples
--------
>>> np.log10(np.array([1e-15, -3.]))
array([-15., nan])
"""
return _mx_nd_np.log10(x, out=out, **kwargs) | [
"def",
"log10",
"(",
"x",
",",
"out",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_mx_nd_np",
".",
"log10",
"(",
"x",
",",
"out",
"=",
"out",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/apache/incubator-mxnet/blob/f03fb23f1d103fec9541b5ae59ee06b1734a51d9/python/mxnet/numpy/multiarray.py#L4229-L4258 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/_core.py | python | Window.CacheBestSize | (*args, **kwargs) | return _core_.Window_CacheBestSize(*args, **kwargs) | CacheBestSize(self, Size size)
Cache the best size so it doesn't need to be calculated again, (at least until
some properties of the window change.) | CacheBestSize(self, Size size) | [
"CacheBestSize",
"(",
"self",
"Size",
"size",
")"
] | def CacheBestSize(*args, **kwargs):
"""
CacheBestSize(self, Size size)
Cache the best size so it doesn't need to be calculated again, (at least until
some properties of the window change.)
"""
return _core_.Window_CacheBestSize(*args, **kwargs) | [
"def",
"CacheBestSize",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_core_",
".",
"Window_CacheBestSize",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_core.py#L9628-L9635 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/nturl2path.py | python | pathname2url | (p) | return path | OS-specific conversion from a file system path to a relative URL
of the 'file' scheme; not recommended for general use. | OS-specific conversion from a file system path to a relative URL
of the 'file' scheme; not recommended for general use. | [
"OS",
"-",
"specific",
"conversion",
"from",
"a",
"file",
"system",
"path",
"to",
"a",
"relative",
"URL",
"of",
"the",
"file",
"scheme",
";",
"not",
"recommended",
"for",
"general",
"use",
"."
] | def pathname2url(p):
"""OS-specific conversion from a file system path to a relative URL
of the 'file' scheme; not recommended for general use."""
# e.g.
# C:\foo\bar\spam.foo
# becomes
# ///C:/foo/bar/spam.foo
import urllib.parse
if not ':' in p:
# No drive specifier, just convert slashes and quote the name
if p[:2] == '\\\\':
# path is something like \\host\path\on\remote\host
# convert this to ////host/path/on/remote/host
# (notice doubling of slashes at the start of the path)
p = '\\\\' + p
components = p.split('\\')
return urllib.parse.quote('/'.join(components))
comp = p.split(':')
if len(comp) != 2 or len(comp[0]) > 1:
error = 'Bad path: ' + p
raise OSError(error)
drive = urllib.parse.quote(comp[0].upper())
components = comp[1].split('\\')
path = '///' + drive + ':'
for comp in components:
if comp:
path = path + '/' + urllib.parse.quote(comp)
return path | [
"def",
"pathname2url",
"(",
"p",
")",
":",
"# e.g.",
"# C:\\foo\\bar\\spam.foo",
"# becomes",
"# ///C:/foo/bar/spam.foo",
"import",
"urllib",
".",
"parse",
"if",
"not",
"':'",
"in",
"p",
":",
"# No drive specifier, just convert slashes and quote the name",
"if",
"p",
... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/nturl2path.py#L45-L73 | |
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/tools/python/src/Lib/idlelib/configDialog.py | python | ConfigDialog.create_extension_frame | (self, ext_name) | return | Create a frame holding the widgets to configure one extension | Create a frame holding the widgets to configure one extension | [
"Create",
"a",
"frame",
"holding",
"the",
"widgets",
"to",
"configure",
"one",
"extension"
] | def create_extension_frame(self, ext_name):
"""Create a frame holding the widgets to configure one extension"""
f = VerticalScrolledFrame(self.details_frame, height=250, width=250)
self.config_frame[ext_name] = f
entry_area = f.interior
# create an entry for each configuration option
for row, opt in enumerate(self.extensions[ext_name]):
# create a row with a label and entry/checkbutton
label = Label(entry_area, text=opt['name'])
label.grid(row=row, column=0, sticky=NW)
var = opt['var']
if opt['type'] == 'bool':
Checkbutton(entry_area, textvariable=var, variable=var,
onvalue='True', offvalue='False',
indicatoron=FALSE, selectcolor='', width=8
).grid(row=row, column=1, sticky=W, padx=7)
elif opt['type'] == 'int':
Entry(entry_area, textvariable=var, validate='key',
validatecommand=(self.is_int, '%P')
).grid(row=row, column=1, sticky=NSEW, padx=7)
else:
Entry(entry_area, textvariable=var
).grid(row=row, column=1, sticky=NSEW, padx=7)
return | [
"def",
"create_extension_frame",
"(",
"self",
",",
"ext_name",
")",
":",
"f",
"=",
"VerticalScrolledFrame",
"(",
"self",
".",
"details_frame",
",",
"height",
"=",
"250",
",",
"width",
"=",
"250",
")",
"self",
".",
"config_frame",
"[",
"ext_name",
"]",
"=",... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/idlelib/configDialog.py#L1330-L1354 | |
ComputationalRadiationPhysics/picongpu | 59e9b53605f9a5c1bf271eeb055bc74370a99052 | lib/python/picongpu/plugins/jupyter_widgets/base_widget.py | python | BaseWidget._create_sim_dropdown | (self, options) | return sim_drop | Provide the widget for selection of simulations.
Can be overridden in derived classes if some of those widgets
are not necessary.
Note: Make sure that no value of the widget is selected initially
since otherwise initial plotting after creation of the widget might
not work (since the constructor sets the value to the first available
which leads to the visualization callback being triggered.)
Returns
-------
a jupyter widget that allows selection of value(s) | Provide the widget for selection of simulations.
Can be overridden in derived classes if some of those widgets
are not necessary.
Note: Make sure that no value of the widget is selected initially
since otherwise initial plotting after creation of the widget might
not work (since the constructor sets the value to the first available
which leads to the visualization callback being triggered.) | [
"Provide",
"the",
"widget",
"for",
"selection",
"of",
"simulations",
".",
"Can",
"be",
"overridden",
"in",
"derived",
"classes",
"if",
"some",
"of",
"those",
"widgets",
"are",
"not",
"necessary",
".",
"Note",
":",
"Make",
"sure",
"that",
"no",
"value",
"of... | def _create_sim_dropdown(self, options):
"""
Provide the widget for selection of simulations.
Can be overridden in derived classes if some of those widgets
are not necessary.
Note: Make sure that no value of the widget is selected initially
since otherwise initial plotting after creation of the widget might
not work (since the constructor sets the value to the first available
which leads to the visualization callback being triggered.)
Returns
-------
a jupyter widget that allows selection of value(s)
"""
sim_drop = widgets.SelectMultiple(
description="Sims", options=options, value=())
return sim_drop | [
"def",
"_create_sim_dropdown",
"(",
"self",
",",
"options",
")",
":",
"sim_drop",
"=",
"widgets",
".",
"SelectMultiple",
"(",
"description",
"=",
"\"Sims\"",
",",
"options",
"=",
"options",
",",
"value",
"=",
"(",
")",
")",
"return",
"sim_drop"
] | https://github.com/ComputationalRadiationPhysics/picongpu/blob/59e9b53605f9a5c1bf271eeb055bc74370a99052/lib/python/picongpu/plugins/jupyter_widgets/base_widget.py#L236-L253 | |
wxWidgets/wxPython-Classic | 19571e1ae65f1ac445f5491474121998c97a1bf0 | src/osx_cocoa/_windows.py | python | TopLevelWindow.ShowFullScreen | (*args, **kwargs) | return _windows_.TopLevelWindow_ShowFullScreen(*args, **kwargs) | ShowFullScreen(self, bool show, long style=FULLSCREEN_ALL) -> bool | ShowFullScreen(self, bool show, long style=FULLSCREEN_ALL) -> bool | [
"ShowFullScreen",
"(",
"self",
"bool",
"show",
"long",
"style",
"=",
"FULLSCREEN_ALL",
")",
"-",
">",
"bool"
] | def ShowFullScreen(*args, **kwargs):
"""ShowFullScreen(self, bool show, long style=FULLSCREEN_ALL) -> bool"""
return _windows_.TopLevelWindow_ShowFullScreen(*args, **kwargs) | [
"def",
"ShowFullScreen",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"_windows_",
".",
"TopLevelWindow_ShowFullScreen",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_windows.py#L441-L443 | |
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap_external.py | python | _get_supported_file_loaders | () | return [extensions, source, bytecode] | Returns a list of file-based module loaders.
Each item is a tuple (loader, suffixes). | Returns a list of file-based module loaders. | [
"Returns",
"a",
"list",
"of",
"file",
"-",
"based",
"module",
"loaders",
"."
] | def _get_supported_file_loaders():
"""Returns a list of file-based module loaders.
Each item is a tuple (loader, suffixes).
"""
extensions = ExtensionFileLoader, _imp.extension_suffixes()
source = SourceFileLoader, SOURCE_SUFFIXES
bytecode = SourcelessFileLoader, BYTECODE_SUFFIXES
return [extensions, source, bytecode] | [
"def",
"_get_supported_file_loaders",
"(",
")",
":",
"extensions",
"=",
"ExtensionFileLoader",
",",
"_imp",
".",
"extension_suffixes",
"(",
")",
"source",
"=",
"SourceFileLoader",
",",
"SOURCE_SUFFIXES",
"bytecode",
"=",
"SourcelessFileLoader",
",",
"BYTECODE_SUFFIXES",... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap_external.py#L1482-L1490 | |
benoitsteiner/tensorflow-opencl | cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5 | tensorflow/python/ops/resource_variable_ops.py | python | ResourceVariable._init_from_args | (self,
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
dtype=None,
constraint=None) | Creates a variable.
Args:
initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
which is the initial value for the Variable. The initial value must have
a shape specified unless `validate_shape` is set to False. Can also be a
callable with no argument that returns the initial value when called.
(Note that initializer functions from init_ops.py must first be bound
to a shape before being used here.)
trainable: If `True`, the default, also adds the variable to the graph
collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
the default list of variables to use by the `Optimizer` classes.
collections: List of graph collections keys. The new variable is added to
these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
validate_shape: Ignored. Provided for compatibility with tf.Variable.
caching_device: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device. If not `None`, caches on another device. Typical use is to
cache on the device where the Ops using the Variable reside, to
deduplicate copying through `Switch` and other conditional statements.
name: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
dtype: If set, initial_value will be converted to the given type.
If None, either the datatype will be kept (if initial_value is
a Tensor) or float32 will be used (if it is a Python object convertible
to a Tensor).
constraint: An optional projection function to be applied to the variable
after being updated by an `Optimizer` (e.g. used to implement norm
constraints or value constraints for layer weights). The function must
take as input the unprojected Tensor representing the value of the
variable and return the Tensor for the projected value
(which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
Raises:
ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
@compatibility(eager)
When Eager Execution is enabled, variables are never added to collections.
It is not implicitly added to the `GLOBAL_VARIABLES` or
`TRAINABLE_VARIABLES` collections, and the `collections` argument is
ignored.
@end_compatibility | Creates a variable. | [
"Creates",
"a",
"variable",
"."
] | def _init_from_args(self,
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
dtype=None,
constraint=None):
"""Creates a variable.
Args:
initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
which is the initial value for the Variable. The initial value must have
a shape specified unless `validate_shape` is set to False. Can also be a
callable with no argument that returns the initial value when called.
(Note that initializer functions from init_ops.py must first be bound
to a shape before being used here.)
trainable: If `True`, the default, also adds the variable to the graph
collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
the default list of variables to use by the `Optimizer` classes.
collections: List of graph collections keys. The new variable is added to
these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
validate_shape: Ignored. Provided for compatibility with tf.Variable.
caching_device: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device. If not `None`, caches on another device. Typical use is to
cache on the device where the Ops using the Variable reside, to
deduplicate copying through `Switch` and other conditional statements.
name: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
dtype: If set, initial_value will be converted to the given type.
If None, either the datatype will be kept (if initial_value is
a Tensor) or float32 will be used (if it is a Python object convertible
to a Tensor).
constraint: An optional projection function to be applied to the variable
after being updated by an `Optimizer` (e.g. used to implement norm
constraints or value constraints for layer weights). The function must
take as input the unprojected Tensor representing the value of the
variable and return the Tensor for the projected value
(which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
Raises:
ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
@compatibility(eager)
When Eager Execution is enabled, variables are never added to collections.
It is not implicitly added to the `GLOBAL_VARIABLES` or
`TRAINABLE_VARIABLES` collections, and the `collections` argument is
ignored.
@end_compatibility
"""
if initial_value is None:
raise ValueError("initial_value must be specified.")
init_from_fn = callable(initial_value)
if collections is None:
collections = [ops.GraphKeys.GLOBAL_VARIABLES]
if not isinstance(collections, (list, tuple, set)):
raise ValueError(
"collections argument to Variable constructor must be a list, tuple, "
"or set. Got %s of type %s" % (collections, type(collections)))
if constraint is not None and not callable(constraint):
raise ValueError("The `constraint` argument must be a callable.")
self._trainable = trainable
if trainable and ops.GraphKeys.TRAINABLE_VARIABLES not in collections:
collections = list(collections) + [ops.GraphKeys.TRAINABLE_VARIABLES]
self._save_slice_info = None
self._in_graph_mode = context.in_graph_mode()
# Save the graph's container prefix for error checking. Reading the value of
# the ResourceVariable from another Graph in Eager mode is an error.
self._container_prefix = ops.get_default_graph()._container_prefix # pylint: disable=protected-access
if not self._in_graph_mode and not name:
# TODO(ashankar,josh11b): make this unnecessary using the same
# logic as in layer
raise ValueError("Variables need to have explicit names when eager "
"execution is enabled")
with ops.control_dependencies(None):
with ops.name_scope(name, "Variable", []
if init_from_fn else [initial_value]) as name:
# pylint: disable=protected-access
handle_name = ops._name_from_scope_name(name)
if init_from_fn:
# Use attr_scope and device(None) to simulate the behavior of
# colocate_with when the variable we want to colocate with doesn't
# yet exist.
if self._in_graph_mode:
attr = attr_value_pb2.AttrValue(
list=attr_value_pb2.AttrValue.ListValue(
s=[compat.as_bytes("loc:@%s" % handle_name)]))
with ops.get_default_graph()._attr_scope({"_class": attr}):
with ops.name_scope("Initializer"), ops.device(None):
initial_value = ops.convert_to_tensor(
initial_value(), name="initial_value", dtype=dtype)
self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name,
graph_mode=self._in_graph_mode)
self._handle_device = (
self._handle.device if self._in_graph_mode else
context.get_default_context().device_name)
self._shape = initial_value.get_shape()
else:
initial_value = initial_value()
with ops.name_scope("Initializer"):
initial_value = ops.convert_to_tensor(
initial_value, name="initial_value", dtype=dtype)
self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name,
graph_mode=False)
self._handle_device = (
self._handle.device if self._in_graph_mode else
context.get_default_context().device_name)
self._shape = initial_value.get_shape()
# pylint: enable=protected-access
# Or get the initial value from a Tensor or Python object.
else:
with ops.name_scope("Initializer"):
initial_value = ops.convert_to_tensor(
initial_value, name="initial_value", dtype=dtype)
# pylint: disable=protected-access
if (self._in_graph_mode and initial_value is not None and
initial_value.op._get_control_flow_context() is not None):
raise ValueError(
"Initializer for variable %s is from inside a control-flow "
"construct, such as a loop or conditional. When creating a "
"variable inside a loop or conditional, use a lambda as the "
"initializer." % name)
# pylint: enable=protected-access
self._handle = _eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=handle_name,
name=name,
graph_mode=self._in_graph_mode)
self._handle_device = (self._handle.device if self._in_graph_mode else
context.get_default_context().device_name)
self._shape = initial_value.get_shape()
self._initial_value = initial_value if self._in_graph_mode else None
self._handle_name = handle_name + ":0"
self._dtype = initial_value.dtype.base_dtype
self._constraint = constraint
if self._in_graph_mode:
with ops.name_scope("IsInitialized"):
self._is_initialized_op = (
gen_resource_variable_ops.var_is_initialized_op(self._handle))
if initial_value is not None:
with ops.name_scope("Assign") as n, ops.colocate_with(self._handle):
self._initializer_op = (
gen_resource_variable_ops.assign_variable_op(
self._handle,
self._build_initializer_expr(initial_value),
name=n))
with ops.name_scope("Read"), ops.colocate_with(self._handle):
# Manually assign reads to the handle's device to avoid log
# messages.
with ops.device(self._handle_device):
value = self._read_variable_op()
self._graph_element = value
if caching_device is not None:
# Variables may be created in a tf.device() or ops.colocate_with()
# context. At the same time, users would expect caching device to
# be independent of this context, and/or would not expect the
# current device context to be merged with the caching device
# spec. Therefore we reset the colocation stack before creating
# the cached value. Note that resetting the colocation stack will
# also reset the device stack.
with ops.colocate_with(None, ignore_existing=True):
with ops.device(caching_device):
self._cached_value = array_ops.identity(value)
else:
self._cached_value = None
else:
gen_resource_variable_ops.assign_variable_op(self._handle,
initial_value)
self._is_initialized_op = None
self._initializer_op = None
self._graph_element = None
if caching_device:
with ops.device(caching_device):
self._cached_value = self._read_variable_op()
else:
self._cached_value = None
if context.in_graph_mode():
ops.add_to_collections(collections, self)
elif ops.GraphKeys.GLOBAL_STEP in collections:
ops.add_to_collections(ops.GraphKeys.GLOBAL_STEP, self) | [
"def",
"_init_from_args",
"(",
"self",
",",
"initial_value",
"=",
"None",
",",
"trainable",
"=",
"True",
",",
"collections",
"=",
"None",
",",
"validate_shape",
"=",
"True",
",",
"caching_device",
"=",
"None",
",",
"name",
"=",
"None",
",",
"dtype",
"=",
... | https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/python/ops/resource_variable_ops.py#L211-L409 | ||
fifengine/fifengine | 4b62c42e85bec19893cef8e63e6855927cff2c47 | engine/python/fife/extensions/pychan/widgets/widget.py | python | Widget.isModalMouseInputFocusable | (self) | return self.real_widget.isModalMouseInputFocusable() | Checks if a widget is modal mouse input focusable.
True if no other widget has modal mouse input focus, false otherwise. | Checks if a widget is modal mouse input focusable.
True if no other widget has modal mouse input focus, false otherwise. | [
"Checks",
"if",
"a",
"widget",
"is",
"modal",
"mouse",
"input",
"focusable",
".",
"True",
"if",
"no",
"other",
"widget",
"has",
"modal",
"mouse",
"input",
"focus",
"false",
"otherwise",
"."
] | def isModalMouseInputFocusable(self):
"""
Checks if a widget is modal mouse input focusable.
True if no other widget has modal mouse input focus, false otherwise.
"""
return self.real_widget.isModalMouseInputFocusable() | [
"def",
"isModalMouseInputFocusable",
"(",
"self",
")",
":",
"return",
"self",
".",
"real_widget",
".",
"isModalMouseInputFocusable",
"(",
")"
] | https://github.com/fifengine/fifengine/blob/4b62c42e85bec19893cef8e63e6855927cff2c47/engine/python/fife/extensions/pychan/widgets/widget.py#L358-L363 | |
potassco/clingo | e0c91d8f95cc28de1c480a871f9c97c30de83d40 | libpyclingo/clingo/backend.py | python | Backend.add_atom | (self, symbol: Optional[Symbol]=None) | return _c_call('clingo_atom_t', _lib.clingo_backend_add_atom, self._rep, p_sym, handler=self._error) | Return a fresh program atom or the atom associated with the given symbol.
If the given symbol does not exist in the atom base, it is added first. Such
atoms will be used in subequents calls to ground for instantiation.
Parameters
----------
symbol
The symbol associated with the atom.
Returns
-------
The program atom representing the atom. | Return a fresh program atom or the atom associated with the given symbol. | [
"Return",
"a",
"fresh",
"program",
"atom",
"or",
"the",
"atom",
"associated",
"with",
"the",
"given",
"symbol",
"."
] | def add_atom(self, symbol: Optional[Symbol]=None) -> int:
'''
Return a fresh program atom or the atom associated with the given symbol.
If the given symbol does not exist in the atom base, it is added first. Such
atoms will be used in subequents calls to ground for instantiation.
Parameters
----------
symbol
The symbol associated with the atom.
Returns
-------
The program atom representing the atom.
'''
# pylint: disable=protected-access
if symbol is None:
p_sym = _ffi.NULL
else:
p_sym = _ffi.new('clingo_symbol_t*', symbol._rep)
self._error.clear()
return _c_call('clingo_atom_t', _lib.clingo_backend_add_atom, self._rep, p_sym, handler=self._error) | [
"def",
"add_atom",
"(",
"self",
",",
"symbol",
":",
"Optional",
"[",
"Symbol",
"]",
"=",
"None",
")",
"->",
"int",
":",
"# pylint: disable=protected-access",
"if",
"symbol",
"is",
"None",
":",
"p_sym",
"=",
"_ffi",
".",
"NULL",
"else",
":",
"p_sym",
"=",... | https://github.com/potassco/clingo/blob/e0c91d8f95cc28de1c480a871f9c97c30de83d40/libpyclingo/clingo/backend.py#L546-L568 | |
syoyo/tinygltf | e7f1ff5c59d3ca2489923beb239bdf93d863498f | deps/cpplint.py | python | _CppLintState.SetOutputFormat | (self, output_format) | Sets the output format for errors. | Sets the output format for errors. | [
"Sets",
"the",
"output",
"format",
"for",
"errors",
"."
] | def SetOutputFormat(self, output_format):
"""Sets the output format for errors."""
self.output_format = output_format | [
"def",
"SetOutputFormat",
"(",
"self",
",",
"output_format",
")",
":",
"self",
".",
"output_format",
"=",
"output_format"
] | https://github.com/syoyo/tinygltf/blob/e7f1ff5c59d3ca2489923beb239bdf93d863498f/deps/cpplint.py#L775-L777 | ||
RapidsAtHKUST/CommunityDetectionCodes | 23dbafd2e57ab0f5f0528b1322c4a409f21e5892 | Prensentation/graph_tool_usage/intro_graph_tool/nx2gt.py | python | nx2gt | (nxG) | return gtG | Converts a networkx graph to a graph-tool graph. | Converts a networkx graph to a graph-tool graph. | [
"Converts",
"a",
"networkx",
"graph",
"to",
"a",
"graph",
"-",
"tool",
"graph",
"."
] | def nx2gt(nxG):
"""
Converts a networkx graph to a graph-tool graph.
"""
# Phase 0: Create a directed or undirected graph-tool Graph
gtG = gt.Graph(directed=nxG.is_directed())
# Add the Graph properties as "internal properties"
for key, value in nxG.graph.items():
# Convert the value and key into a type for graph-tool
tname, value, key = get_prop_type(value, key)
prop = gtG.new_graph_property(tname) # Create the PropertyMap
gtG.graph_properties[key] = prop # Set the PropertyMap
gtG.graph_properties[key] = value # Set the actual value
# Phase 1: Add the vertex and edge property maps
# Go through all nodes and edges and add seen properties
# Add the node properties first
nprops = set() # cache keys to only add properties once
for node, data in nxG.nodes_iter(data=True):
# Go through all the properties if not seen and add them.
for key, val in data.items():
if key in nprops: continue # Skip properties already added
# Convert the value and key into a type for graph-tool
tname, _, key = get_prop_type(val, key)
prop = gtG.new_vertex_property(tname) # Create the PropertyMap
gtG.vertex_properties[key] = prop # Set the PropertyMap
# Add the key to the already seen properties
nprops.add(key)
# Also add the node id: in NetworkX a node can be any hashable type, but
# in graph-tool node are defined as indices. So we capture any strings
# in a special PropertyMap called 'id' -- modify as needed!
gtG.vertex_properties['id'] = gtG.new_vertex_property('string')
# Add the edge properties second
eprops = set() # cache keys to only add properties once
for src, dst, data in nxG.edges_iter(data=True):
# Go through all the edge properties if not seen and add them.
for key, val in data.items():
if key in eprops: continue # Skip properties already added
# Convert the value and key into a type for graph-tool
tname, _, key = get_prop_type(val, key)
prop = gtG.new_edge_property(tname) # Create the PropertyMap
gtG.edge_properties[key] = prop # Set the PropertyMap
# Add the key to the already seen properties
eprops.add(key)
# Phase 2: Actually add all the nodes and vertices with their properties
# Add the nodes
vertices = {} # vertex mapping for tracking edges later
for node, data in nxG.nodes_iter(data=True):
# Create the vertex and annotate for our edges later
v = gtG.add_vertex()
vertices[node] = v
# Set the vertex properties, not forgetting the id property
data['id'] = str(node)
for key, value in data.items():
gtG.vp[key][v] = value # vp is short for vertex_properties
# Add the edges
for src, dst, data in nxG.edges_iter(data=True):
# Look up the vertex structs from our vertices mapping and add edge.
e = gtG.add_edge(vertices[src], vertices[dst])
# Add the edge properties
for key, value in data.items():
gtG.ep[key][e] = value # ep is short for edge_properties
# Done, finally!
return gtG | [
"def",
"nx2gt",
"(",
"nxG",
")",
":",
"# Phase 0: Create a directed or undirected graph-tool Graph",
"gtG",
"=",
"gt",
".",
"Graph",
"(",
"directed",
"=",
"nxG",
".",
"is_directed",
"(",
")",
")",
"# Add the Graph properties as \"internal properties\"",
"for",
"key",
... | https://github.com/RapidsAtHKUST/CommunityDetectionCodes/blob/23dbafd2e57ab0f5f0528b1322c4a409f21e5892/Prensentation/graph_tool_usage/intro_graph_tool/nx2gt.py#L40-L122 | |
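The `nx2gt` body above leans on a "seen set" so each property map is created only once per key; here is the idiom in isolation, with a plain dict standing in for graph-tool's `new_vertex_property` (a deliberate simplification — the node data is invented):

```python
nodes = [("a", {"color": "red"}), ("b", {"color": "blue", "weight": 2})]

seen = set()          # cache keys so each property is registered only once
property_maps = {}
for _node, data in nodes:
    for key, value in data.items():
        if key in seen:
            continue  # skip properties already added
        # stand-in for gtG.new_vertex_property(tname): record the type name
        property_maps[key] = type(value).__name__
        seen.add(key)
```

The same pattern appears three times in the row: once for graph properties, once for vertex properties (`nprops`), and once for edge properties (`eprops`).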
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/scipy/scipy/io/netcdf.py | python | netcdf_variable.assignValue | (self, value) | Assign a scalar value to a `netcdf_variable` of length one.
Parameters
----------
value : scalar
Scalar value (of compatible type) to assign to a length-one netcdf
variable. This value will be written to file.
Raises
------
ValueError
If the input is not a scalar, or if the destination is not a length-one
netcdf variable. | Assign a scalar value to a `netcdf_variable` of length one. | [
"Assign",
"a",
"scalar",
"value",
"to",
"a",
"netcdf_variable",
"of",
"length",
"one",
"."
] | def assignValue(self, value):
"""
Assign a scalar value to a `netcdf_variable` of length one.
Parameters
----------
value : scalar
Scalar value (of compatible type) to assign to a length-one netcdf
variable. This value will be written to file.
Raises
------
ValueError
If the input is not a scalar, or if the destination is not a length-one
netcdf variable.
"""
if not self.data.flags.writeable:
# Work-around for a bug in NumPy. Calling itemset() on a read-only
# memory-mapped array causes a seg. fault.
# See NumPy ticket #1622, and SciPy ticket #1202.
# This check for `writeable` can be removed when the oldest version
# of numpy still supported by scipy contains the fix for #1622.
raise RuntimeError("variable is not writeable")
self.data.itemset(value) | [
"def",
"assignValue",
"(",
"self",
",",
"value",
")",
":",
"if",
"not",
"self",
".",
"data",
".",
"flags",
".",
"writeable",
":",
"# Work-around for a bug in NumPy. Calling itemset() on a read-only",
"# memory-mapped array causes a seg. fault.",
"# See NumPy ticket #1622, an... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/scipy/io/netcdf.py#L889-L914 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/distro.py | python | LinuxDistribution.__init__ | (self,
include_lsb=True,
os_release_file='',
distro_release_file='',
include_uname=True) | The initialization method of this class gathers information from the
available data sources, and stores that in private instance attributes.
Subsequent access to the information items uses these private instance
attributes, so that the data sources are read only once.
Parameters:
* ``include_lsb`` (bool): Controls whether the
`lsb_release command output`_ is included as a data source.
If the lsb_release command is not available in the program execution
path, the data source for the lsb_release command will be empty.
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is to be used as a data source.
An empty string (the default) will cause the default path name to
be used (see `os-release file`_ for details).
If the specified or defaulted os-release file does not exist, the
data source for the os-release file will be empty.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is to be used as a data source.
An empty string (the default) will cause a default search algorithm
to be used (see `distro release file`_ for details).
If the specified distro release file does not exist, or if no default
distro release file can be found, the data source for the distro
release file will be empty.
* ``include_uname`` (bool): Controls whether uname command output is
included as a data source. If the uname command is not available in
the program execution path the data source for the uname command will
be empty.
Public instance attributes:
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
This controls whether the lsb information will be loaded.
* ``include_uname`` (bool): The result of the ``include_uname``
parameter. This controls whether the uname information will
be loaded.
Raises:
* :py:exc:`IOError`: Some I/O issue with an os-release file or distro
release file.
* :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
some issue (other than not being available in the program execution
path).
* :py:exc:`UnicodeError`: A data source has unexpected characters or
uses an unexpected encoding. | The initialization method of this class gathers information from the
available data sources, and stores that in private instance attributes.
Subsequent access to the information items uses these private instance
attributes, so that the data sources are read only once. | [
"The",
"initialization",
"method",
"of",
"this",
"class",
"gathers",
"information",
"from",
"the",
"available",
"data",
"sources",
"and",
"stores",
"that",
"in",
"private",
"instance",
"attributes",
".",
"Subsequent",
"access",
"to",
"the",
"information",
"items",... | def __init__(self,
include_lsb=True,
os_release_file='',
distro_release_file='',
include_uname=True):
"""
The initialization method of this class gathers information from the
available data sources, and stores that in private instance attributes.
Subsequent access to the information items uses these private instance
attributes, so that the data sources are read only once.
Parameters:
* ``include_lsb`` (bool): Controls whether the
`lsb_release command output`_ is included as a data source.
If the lsb_release command is not available in the program execution
path, the data source for the lsb_release command will be empty.
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is to be used as a data source.
An empty string (the default) will cause the default path name to
be used (see `os-release file`_ for details).
If the specified or defaulted os-release file does not exist, the
data source for the os-release file will be empty.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is to be used as a data source.
An empty string (the default) will cause a default search algorithm
to be used (see `distro release file`_ for details).
If the specified distro release file does not exist, or if no default
distro release file can be found, the data source for the distro
release file will be empty.
* ``include_uname`` (bool): Controls whether uname command output is
included as a data source. If the uname command is not available in
the program execution path the data source for the uname command will
be empty.
Public instance attributes:
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
This controls whether the lsb information will be loaded.
* ``include_uname`` (bool): The result of the ``include_uname``
parameter. This controls whether the uname information will
be loaded.
Raises:
* :py:exc:`IOError`: Some I/O issue with an os-release file or distro
release file.
* :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
some issue (other than not being available in the program execution
path).
* :py:exc:`UnicodeError`: A data source has unexpected characters or
uses an unexpected encoding.
"""
self.os_release_file = os_release_file or \
os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME)
self.distro_release_file = distro_release_file or '' # updated later
self.include_lsb = include_lsb
self.include_uname = include_uname | [
"def",
"__init__",
"(",
"self",
",",
"include_lsb",
"=",
"True",
",",
"os_release_file",
"=",
"''",
",",
"distro_release_file",
"=",
"''",
",",
"include_uname",
"=",
"True",
")",
":",
"self",
".",
"os_release_file",
"=",
"os_release_file",
"or",
"os",
".",
... | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/distro.py#L578-L654 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/pandas/py3/pandas/io/stata.py | python | _dtype_to_default_stata_fmt | (
dtype, column: Series, dta_version: int = 114, force_strl: bool = False
) | Map numpy dtype to stata's default format for this type. Not terribly
important since users can change this in Stata. Semantics are
object -> "%DDs" where DD is the length of the string. If not a string,
raise ValueError
float64 -> "%10.0g"
float32 -> "%9.0g"
int64 -> "%9.0g"
int32 -> "%12.0g"
int16 -> "%8.0g"
int8 -> "%8.0g"
strl -> "%9s" | Map numpy dtype to stata's default format for this type. Not terribly
important since users can change this in Stata. Semantics are | [
"Map",
"numpy",
"dtype",
"to",
"stata",
"s",
"default",
"format",
"for",
"this",
"type",
".",
"Not",
"terribly",
"important",
"since",
"users",
"can",
"change",
"this",
"in",
"Stata",
".",
"Semantics",
"are"
] | def _dtype_to_default_stata_fmt(
dtype, column: Series, dta_version: int = 114, force_strl: bool = False
) -> str:
"""
Map numpy dtype to stata's default format for this type. Not terribly
important since users can change this in Stata. Semantics are
object -> "%DDs" where DD is the length of the string. If not a string,
raise ValueError
float64 -> "%10.0g"
float32 -> "%9.0g"
int64 -> "%9.0g"
int32 -> "%12.0g"
int16 -> "%8.0g"
int8 -> "%8.0g"
strl -> "%9s"
"""
# TODO: Refactor to combine type with format
# TODO: expand this to handle a default datetime format?
if dta_version < 117:
max_str_len = 244
else:
max_str_len = 2045
if force_strl:
return "%9s"
if dtype.type == np.object_:
itemsize = max_len_string_array(ensure_object(column._values))
if itemsize > max_str_len:
if dta_version >= 117:
return "%9s"
else:
raise ValueError(excessive_string_length_error.format(column.name))
return "%" + str(max(itemsize, 1)) + "s"
elif dtype == np.float64:
return "%10.0g"
elif dtype == np.float32:
return "%9.0g"
elif dtype == np.int32:
return "%12.0g"
elif dtype == np.int8 or dtype == np.int16:
return "%8.0g"
else: # pragma : no cover
raise NotImplementedError(f"Data type {dtype} not supported.") | [
"def",
"_dtype_to_default_stata_fmt",
"(",
"dtype",
",",
"column",
":",
"Series",
",",
"dta_version",
":",
"int",
"=",
"114",
",",
"force_strl",
":",
"bool",
"=",
"False",
")",
"->",
"str",
":",
"# TODO: Refactor to combine type with format",
"# TODO: expand this to... | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pandas/py3/pandas/io/stata.py#L2065-L2107 | ||
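The dtype-to-format mapping above can be sketched as a self-contained function, with plain type-name strings standing in for NumPy dtypes and the maximum string length passed in directly (both are simplifications of the original):

```python
def default_stata_fmt(type_name: str, max_item_size: int = 0,
                      dta_version: int = 114, force_strl: bool = False) -> str:
    """Sketch of _dtype_to_default_stata_fmt with string type names."""
    max_str_len = 244 if dta_version < 117 else 2045
    if force_strl:
        return "%9s"
    if type_name == "object":                   # string column
        if max_item_size > max_str_len:
            if dta_version >= 117:
                return "%9s"                    # stored as strL
            raise ValueError("string column too long for this dta version")
        return "%" + str(max(max_item_size, 1)) + "s"
    table = {"float64": "%10.0g", "float32": "%9.0g",
             "int32": "%12.0g", "int16": "%8.0g", "int8": "%8.0g"}
    if type_name in table:
        return table[type_name]
    raise NotImplementedError(f"Data type {type_name} not supported.")
```

As in the excerpt, there is no explicit `int64` branch even though the docstring lists one; the sketch mirrors the code as shown.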
naver/sling | 5671cd445a2caae0b4dd0332299e4cfede05062c | webkit/Tools/Scripts/webkitpy/thirdparty/irc/irclib.py | python | IRC.add_global_handler | (self, event, handler, priority=0) | Adds a global handler function for a specific event type.
Arguments:
event -- Event type (a string). Check the values of the
numeric_events dictionary in irclib.py for possible event
types.
handler -- Callback function.
priority -- A number (the lower number, the higher priority).
The handler function is called whenever the specified event is
triggered in any of the connections. See documentation for
the Event class.
The handler functions are called in priority order (lowest
number is highest priority). If a handler function returns
\"NO MORE\", no more handlers will be called. | Adds a global handler function for a specific event type. | [
"Adds",
"a",
"global",
"handler",
"function",
"for",
"a",
"specific",
"event",
"type",
"."
] | def add_global_handler(self, event, handler, priority=0):
"""Adds a global handler function for a specific event type.
Arguments:
event -- Event type (a string). Check the values of the
numeric_events dictionary in irclib.py for possible event
types.
handler -- Callback function.
priority -- A number (the lower number, the higher priority).
The handler function is called whenever the specified event is
triggered in any of the connections. See documentation for
the Event class.
The handler functions are called in priority order (lowest
number is highest priority). If a handler function returns
\"NO MORE\", no more handlers will be called.
"""
if not event in self.handlers:
self.handlers[event] = []
bisect.insort(self.handlers[event], ((priority, handler))) | [
"def",
"add_global_handler",
"(",
"self",
",",
"event",
",",
"handler",
",",
"priority",
"=",
"0",
")",
":",
"if",
"not",
"event",
"in",
"self",
".",
"handlers",
":",
"self",
".",
"handlers",
"[",
"event",
"]",
"=",
"[",
"]",
"bisect",
".",
"insort",... | https://github.com/naver/sling/blob/5671cd445a2caae0b4dd0332299e4cfede05062c/webkit/Tools/Scripts/webkitpy/thirdparty/irc/irclib.py#L236-L259 | ||
devsisters/libquic | 8954789a056d8e7d5fcb6452fd1572ca57eb5c4e | src/third_party/protobuf/python/google/protobuf/json_format.py | python | _ConvertBool | (value, require_str) | return value | Convert a boolean value.
Args:
value: A scalar value to convert.
require_str: If True, value must be a str.
Returns:
The bool parsed.
Raises:
ParseError: If a boolean value couldn't be consumed. | Convert a boolean value. | [
"Convert",
"a",
"boolean",
"value",
"."
] | def _ConvertBool(value, require_str):
"""Convert a boolean value.
Args:
value: A scalar value to convert.
require_str: If True, value must be a str.
Returns:
The bool parsed.
Raises:
ParseError: If a boolean value couldn't be consumed.
"""
if require_str:
if value == 'true':
return True
elif value == 'false':
return False
else:
raise ParseError('Expected "true" or "false", not {0}.'.format(value))
if not isinstance(value, bool):
raise ParseError('Expected true or false without quotes.')
return value | [
"def",
"_ConvertBool",
"(",
"value",
",",
"require_str",
")",
":",
"if",
"require_str",
":",
"if",
"value",
"==",
"'true'",
":",
"return",
"True",
"elif",
"value",
"==",
"'false'",
":",
"return",
"False",
"else",
":",
"raise",
"ParseError",
"(",
"'Expected... | https://github.com/devsisters/libquic/blob/8954789a056d8e7d5fcb6452fd1572ca57eb5c4e/src/third_party/protobuf/python/google/protobuf/json_format.py#L605-L628 | |
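`_ConvertBool`'s `require_str` flag exists because JSON map keys are always strings, so a boolean key arrives as `'true'`/`'false'` while a boolean value arrives as a real `bool`. Both paths can be exercised with a local re-implementation (the `ParseError` class here is a stand-in for protobuf's):

```python
class ParseError(ValueError):
    """Local stand-in for google.protobuf.json_format.ParseError."""

def convert_bool(value, require_str):
    # Same logic as the row's _ConvertBool.
    if require_str:
        if value == 'true':
            return True
        elif value == 'false':
            return False
        raise ParseError(f'Expected "true" or "false", not {value}.')
    if not isinstance(value, bool):
        raise ParseError('Expected true or false without quotes.')
    return value
```

Because `bool` is checked with `isinstance`, an integer `1` is rejected even though Python would treat it as truthy.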
ApolloAuto/apollo | 463fb82f9e979d02dcb25044e60931293ab2dba0 | modules/tools/record_parse_save/record_parse_save.py | python | define_destinations | (parse_dict) | return dest_dict, parser_func | define destination for extracted files | define destination for extracted files | [
"define",
"destination",
"for",
"extracted",
"files"
] | def define_destinations(parse_dict):
"""
define destination for extracted files
"""
dest_dict = {
"channel_name": "",
"timestamp_file": "",
"destination_folder": ""
}
parse_type = parse_dict["parse_type"]
params = parse_dict["params"]
dest_folder = parse_dict["out_folder"]
prefix = parse_dict["prefix"]
parser_func = 'parse_' + parse_type
dest_dict['channel_name'] = params[parse_type]['channel_name']
dest_dict['timestamp_file'] = dest_folder + prefix + params[parse_type]['timestamp_file_extn']
dest_dict['destination_folder'] = dest_folder + \
prefix + params[parse_type]['out_folder_extn'] + '/'
if not os.path.exists(dest_dict["destination_folder"]):
os.makedirs(dest_dict["destination_folder"])
return dest_dict, parser_func | [
"def",
"define_destinations",
"(",
"parse_dict",
")",
":",
"dest_dict",
"=",
"{",
"\"channel_name\"",
":",
"\"\"",
",",
"\"timestamp_file\"",
":",
"\"\"",
",",
"\"destination_folder\"",
":",
"\"\"",
"}",
"parse_type",
"=",
"parse_dict",
"[",
"\"parse_type\"",
"]",... | https://github.com/ApolloAuto/apollo/blob/463fb82f9e979d02dcb25044e60931293ab2dba0/modules/tools/record_parse_save/record_parse_save.py#L73-L98 | |
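`define_destinations` is pure dictionary plumbing plus a `makedirs`; the same steps with a hypothetical `lidar` configuration and a temporary output folder (the channel name and file extensions are invented for illustration):

```python
import os
import tempfile

params = {  # hypothetical parse parameters mirroring the config file
    "lidar": {"channel_name": "/apollo/sensor/lidar",
              "timestamp_file_extn": "_timestamps.txt",
              "out_folder_extn": "_pcd"},
}
parse_dict = {"parse_type": "lidar", "params": params,
              "out_folder": tempfile.mkdtemp() + os.sep, "prefix": "run1"}

parse_type = parse_dict["parse_type"]
dest = {
    "channel_name": params[parse_type]["channel_name"],
    "timestamp_file": parse_dict["out_folder"] + parse_dict["prefix"]
                      + params[parse_type]["timestamp_file_extn"],
    "destination_folder": parse_dict["out_folder"] + parse_dict["prefix"]
                          + params[parse_type]["out_folder_extn"] + "/",
}
os.makedirs(dest["destination_folder"], exist_ok=True)
parser_func = "parse_" + parse_type   # resolved to a function by name elsewhere
```

The string `parser_func` is a name, not a callable; the caller is expected to look it up (e.g. via `globals()` or `getattr`) before dispatching.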
wlanjie/AndroidFFmpeg | 7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf | tools/fdk-aac-build/x86/toolchain/lib/python2.7/email/message.py | python | Message.add_header | (self, _name, _value, **_params) | Extended header setting.
name is the header field to add. keyword arguments can be used to set
additional parameters for the header field, with underscores converted
to dashes. Normally the parameter will be added as key="value" unless
value is None, in which case only the key will be added. If a
parameter value contains non-ASCII characters it must be specified as a
three-tuple of (charset, language, value), in which case it will be
encoded according to RFC2231 rules.
Example:
msg.add_header('content-disposition', 'attachment', filename='bud.gif') | Extended header setting. | [
"Extended",
"header",
"setting",
"."
] | def add_header(self, _name, _value, **_params):
"""Extended header setting.
name is the header field to add. keyword arguments can be used to set
additional parameters for the header field, with underscores converted
to dashes. Normally the parameter will be added as key="value" unless
value is None, in which case only the key will be added. If a
parameter value contains non-ASCII characters it must be specified as a
three-tuple of (charset, language, value), in which case it will be
encoded according to RFC2231 rules.
Example:
msg.add_header('content-disposition', 'attachment', filename='bud.gif')
"""
parts = []
for k, v in _params.items():
if v is None:
parts.append(k.replace('_', '-'))
else:
parts.append(_formatparam(k.replace('_', '-'), v))
if _value is not None:
parts.insert(0, _value)
self._headers.append((_name, SEMISPACE.join(parts))) | [
"def",
"add_header",
"(",
"self",
",",
"_name",
",",
"_value",
",",
"*",
"*",
"_params",
")",
":",
"parts",
"=",
"[",
"]",
"for",
"k",
",",
"v",
"in",
"_params",
".",
"items",
"(",
")",
":",
"if",
"v",
"is",
"None",
":",
"parts",
".",
"append",... | https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/email/message.py#L388-L411 | ||
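The `add_header` docstring's own example is worth running: keyword parameters become `key="value"` pairs (with underscores mapped to dashes), and a `None` value emits the bare parameter key:

```python
from email.message import Message

msg = Message()
msg.add_header('Content-Disposition', 'attachment', filename='bud.gif')
disposition = msg['Content-Disposition']   # 'attachment; filename="bud.gif"'

msg.add_header('X-Flags', 'seen', urgent=None)   # None -> bare parameter key
flags = msg['X-Flags']                           # 'seen; urgent'
```

The parameters are joined with `'; '` (the `SEMISPACE` constant in the module), and the main value is always inserted first.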
tensorflow/tensorflow | 419e3a6b650ea4bd1b0cba23c4348f8a69f3272e | tensorflow/python/keras/backend.py | python | set_session | (session) | Sets the global TensorFlow session.
Args:
session: A TF Session. | Sets the global TensorFlow session. | [
"Sets",
"the",
"global",
"TensorFlow",
"session",
"."
] | def set_session(session):
"""Sets the global TensorFlow session.
Args:
session: A TF Session.
"""
global _SESSION
_SESSION.session = session | [
"def",
"set_session",
"(",
"session",
")",
":",
"global",
"_SESSION",
"_SESSION",
".",
"session",
"=",
"session"
] | https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/keras/backend.py#L806-L813 | ||
catboost/catboost | 167f64f237114a4d10b2b4ee42adb4569137debe | contrib/python/numpy/py2/numpy/core/defchararray.py | python | isupper | (a) | return _vec_string(a, bool_, 'isupper') | Returns true for each element if all cased characters in the
string are uppercase and there is at least one character, false
otherwise.
Call `str.isupper` element-wise.
For 8-bit strings, this method is locale-dependent.
Parameters
----------
a : array_like of str or unicode
Returns
-------
out : ndarray
Output array of bools
See also
--------
str.isupper | Returns true for each element if all cased characters in the
string are uppercase and there is at least one character, false
otherwise. | [
"Returns",
"true",
"for",
"each",
"element",
"if",
"all",
"cased",
"characters",
"in",
"the",
"string",
"are",
"uppercase",
"and",
"there",
"is",
"at",
"least",
"one",
"character",
"false",
"otherwise",
"."
] | def isupper(a):
"""
Returns true for each element if all cased characters in the
string are uppercase and there is at least one character, false
otherwise.
Call `str.isupper` element-wise.
For 8-bit strings, this method is locale-dependent.
Parameters
----------
a : array_like of str or unicode
Returns
-------
out : ndarray
Output array of bools
See also
--------
str.isupper
"""
return _vec_string(a, bool_, 'isupper') | [
"def",
"isupper",
"(",
"a",
")",
":",
"return",
"_vec_string",
"(",
"a",
",",
"bool_",
",",
"'isupper'",
")"
] | https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/numpy/py2/numpy/core/defchararray.py#L916-L939 | |
google/earthenterprise | 0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9 | earth_enterprise/src/scons/getversion.py | python | OpenGeeVersion.set_long | (self, value) | Overrides the long version string by using the given value.
Overriding the long version string would indirectly override the short
version string, as well, unless the former is also overridden. | Overrides the long version string by using the given value.
Overriding the long version string would indirectly override the short
version string, as well, unless the former is also overridden. | [
"Overrides",
"the",
"long",
"version",
"string",
"by",
"using",
"the",
"given",
"value",
".",
"Overriding",
"the",
"long",
"version",
"string",
"would",
"indirectly",
"override",
"the",
"short",
"version",
"string",
"as",
"well",
"unless",
"the",
"former",
"is... | def set_long(self, value):
"""Overrides the long version string by using the given value.
Overriding the long version string would indirectly override the short
version string, as well, unless the former is also overridden.
"""
self.long_version_string = value | [
"def",
"set_long",
"(",
"self",
",",
"value",
")",
":",
"self",
".",
"long_version_string",
"=",
"value"
] | https://github.com/google/earthenterprise/blob/0fe84e29be470cd857e3a0e52e5d0afd5bb8cee9/earth_enterprise/src/scons/getversion.py#L358-L364 | ||
aws/lumberyard | f85344403c1c2e77ec8c75deb2c116e97b713217 | dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/logging/__init__.py | python | Formatter.formatStack | (self, stack_info) | return stack_info | This method is provided as an extension point for specialized
formatting of stack information.
The input data is a string as returned from a call to
:func:`traceback.print_stack`, but with the last trailing newline
removed.
The base implementation just returns the value passed in. | This method is provided as an extension point for specialized
formatting of stack information. | [
"This",
"method",
"is",
"provided",
"as",
"an",
"extension",
"point",
"for",
"specialized",
"formatting",
"of",
"stack",
"information",
"."
] | def formatStack(self, stack_info):
"""
This method is provided as an extension point for specialized
formatting of stack information.
The input data is a string as returned from a call to
:func:`traceback.print_stack`, but with the last trailing newline
removed.
The base implementation just returns the value passed in.
"""
return stack_info | [
"def",
"formatStack",
"(",
"self",
",",
"stack_info",
")",
":",
"return",
"stack_info"
] | https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/logging/__init__.py#L582-L593 |