| nwo | sha | path | language | identifier | parameters | argument_list | return_statement | docstring | docstring_summary | docstring_tokens | function | function_tokens | url |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gangly/datafaker | 18291adb450cb6c8b87b4e5cc44ed3e8f3dadd89 | datafaker/reg.py | python | reg_int | (data) | return int(ret[0]) if ret else 0 | Match an integer
:param data:
:return: | Match an integer
:param data:
:return: | [
"匹配整数",
":",
"param",
"data",
":",
":",
"return",
":"
] | def reg_int(data):
"""
Match an integer
:param data:
:return:
"""
ret = reg_integer(data)
return int(ret[0]) if ret else 0 | [
"def",
"reg_int",
"(",
"data",
")",
":",
"ret",
"=",
"reg_integer",
"(",
"data",
")",
"return",
"int",
"(",
"ret",
"[",
"0",
"]",
")",
"if",
"ret",
"else",
"0"
] | https://github.com/gangly/datafaker/blob/18291adb450cb6c8b87b4e5cc44ed3e8f3dadd89/datafaker/reg.py#L82-L90 | |
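The row above does not include the `reg_integer` helper that `reg_int` calls. A minimal, hypothetical sketch of how the pair might fit together, assuming `reg_integer` simply returns all integer substrings found in the input:

```python
import re

def reg_integer(data):
    # Hypothetical stand-in for datafaker's reg_integer helper (not shown
    # in this row): return every integer substring found in the input.
    return re.findall(r'-?\d+', data)

def reg_int(data):
    """Match an integer: return the first one found in data, else 0."""
    ret = reg_integer(data)
    return int(ret[0]) if ret else 0
```

With this stand-in, `reg_int("width: 42px")` returns `42` and an input with no digits falls back to `0`.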
runawayhorse001/LearningApacheSpark | 67f3879dce17553195f094f5728b94a01badcf24 | pyspark/mllib/regression.py | python | RidgeRegressionWithSGD.train | (cls, data, iterations=100, step=1.0, regParam=0.01,
miniBatchFraction=1.0, initialWeights=None, intercept=False,
validateData=True, convergenceTol=0.001) | return _regression_train_wrapper(train, RidgeRegressionModel, data, initialWeights) | Train a regression model with L2-regularization using Stochastic
Gradient Descent. This solves the l2-regularized least squares
regression formulation
f(weights) = 1/(2n) ||A weights - y||^2 + regParam/2 ||weights||^2
Here the data matrix has n rows, and the input RDD holds the set
of rows of A, each with its corresponding right hand side label y.
See also the documentation for the precise formulation.
:param data:
The training data, an RDD of LabeledPoint.
:param iterations:
The number of iterations.
(default: 100)
:param step:
The step parameter used in SGD.
(default: 1.0)
:param regParam:
The regularizer parameter.
(default: 0.01)
:param miniBatchFraction:
Fraction of data to be used for each SGD iteration.
(default: 1.0)
:param initialWeights:
The initial weights.
(default: None)
:param intercept:
Boolean parameter which indicates the use or not of the
augmented representation for training data (i.e. whether bias
features are activated or not).
(default: False)
:param validateData:
Boolean parameter which indicates if the algorithm should
validate data before training.
(default: True)
:param convergenceTol:
A condition which decides iteration termination.
(default: 0.001) | Train a regression model with L2-regularization using Stochastic
Gradient Descent. This solves the l2-regularized least squares
regression formulation | [
"Train",
"a",
"regression",
"model",
"with",
"L2",
"-",
"regularization",
"using",
"Stochastic",
"Gradient",
"Descent",
".",
"This",
"solves",
"the",
"l2",
"-",
"regularized",
"least",
"squares",
"regression",
"formulation"
] | def train(cls, data, iterations=100, step=1.0, regParam=0.01,
miniBatchFraction=1.0, initialWeights=None, intercept=False,
validateData=True, convergenceTol=0.001):
"""
Train a regression model with L2-regularization using Stochastic
Gradient Descent. This solves the l2-regularized least squares
regression formulation
f(weights) = 1/(2n) ||A weights - y||^2 + regParam/2 ||weights||^2
Here the data matrix has n rows, and the input RDD holds the set
of rows of A, each with its corresponding right hand side label y.
See also the documentation for the precise formulation.
:param data:
The training data, an RDD of LabeledPoint.
:param iterations:
The number of iterations.
(default: 100)
:param step:
The step parameter used in SGD.
(default: 1.0)
:param regParam:
The regularizer parameter.
(default: 0.01)
:param miniBatchFraction:
Fraction of data to be used for each SGD iteration.
(default: 1.0)
:param initialWeights:
The initial weights.
(default: None)
:param intercept:
Boolean parameter which indicates the use or not of the
augmented representation for training data (i.e. whether bias
features are activated or not).
(default: False)
:param validateData:
Boolean parameter which indicates if the algorithm should
validate data before training.
(default: True)
:param convergenceTol:
A condition which decides iteration termination.
(default: 0.001)
"""
warnings.warn(
"Deprecated in 2.0.0. Use ml.regression.LinearRegression with elasticNetParam = 0.0. "
"Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for "
"LinearRegression.", DeprecationWarning)
def train(rdd, i):
return callMLlibFunc("trainRidgeModelWithSGD", rdd, int(iterations), float(step),
float(regParam), float(miniBatchFraction), i, bool(intercept),
bool(validateData), float(convergenceTol))
return _regression_train_wrapper(train, RidgeRegressionModel, data, initialWeights) | [
"def",
"train",
"(",
"cls",
",",
"data",
",",
"iterations",
"=",
"100",
",",
"step",
"=",
"1.0",
",",
"regParam",
"=",
"0.01",
",",
"miniBatchFraction",
"=",
"1.0",
",",
"initialWeights",
"=",
"None",
",",
"intercept",
"=",
"False",
",",
"validateData",
... | https://github.com/runawayhorse001/LearningApacheSpark/blob/67f3879dce17553195f094f5728b94a01badcf24/pyspark/mllib/regression.py#L526-L580 | |
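The l2-regularized objective in the row above has a closed form in the single-feature case, which makes the effect of `regParam` easy to check without Spark. This is a sketch of the objective, not the MLlib implementation: setting the derivative of f(w) = 1/(2n)·Σ(w·x − y)² + regParam/2·w² to zero gives w = Σxy / (Σx² + n·regParam).

```python
def ridge_1d(xs, ys, reg_param):
    # Minimizer of f(w) = 1/(2n) * sum((w*x - y)**2) + reg_param/2 * w**2.
    # Setting df/dw = (1/n) * sum(x*(w*x - y)) + reg_param*w to zero yields:
    n = len(xs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + n * reg_param)
```

With `reg_param=0` this reduces to ordinary least squares; a positive `reg_param` shrinks the weight toward zero, which is the behavior the docstring describes.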
koaning/scikit-lego | 028597fd0ba9ac387b9faa6f06050a7ee05e6cba | sklego/common.py | python | expanding_list | (list_to_extent, return_type=list) | return [return_type(listed[: n + 1]) for n in range(len(listed))] | Make an expanding list of lists by making tuples of the first element, the first 2 elements etc.
:param list_to_extent:
:param return_type: type of the elements of the list (tuple or list)
:Example:
>>> expanding_list('test')
[['test']]
>>> expanding_list(['test1', 'test2', 'test3'])
[['test1'], ['test1', 'test2'], ['test1', 'test2', 'test3']]
>>> expanding_list(['test1', 'test2', 'test3'], tuple)
[('test1',), ('test1', 'test2'), ('test1', 'test2', 'test3')] | Make an expanding list of lists by making tuples of the first element, the first 2 elements etc. | [
"Make",
"a",
"expanding",
"list",
"of",
"lists",
"by",
"making",
"tuples",
"of",
"the",
"first",
"element",
"the",
"first",
"2",
"elements",
"etc",
"."
] | def expanding_list(list_to_extent, return_type=list):
"""
Make an expanding list of lists by making tuples of the first element, the first 2 elements etc.
:param list_to_extent:
:param return_type: type of the elements of the list (tuple or list)
:Example:
>>> expanding_list('test')
[['test']]
>>> expanding_list(['test1', 'test2', 'test3'])
[['test1'], ['test1', 'test2'], ['test1', 'test2', 'test3']]
>>> expanding_list(['test1', 'test2', 'test3'], tuple)
[('test1',), ('test1', 'test2'), ('test1', 'test2', 'test3')]
"""
listed = as_list(list_to_extent)
if len(listed) <= 1:
return [listed]
return [return_type(listed[: n + 1]) for n in range(len(listed))] | [
"def",
"expanding_list",
"(",
"list_to_extent",
",",
"return_type",
"=",
"list",
")",
":",
"listed",
"=",
"as_list",
"(",
"list_to_extent",
")",
"if",
"len",
"(",
"listed",
")",
"<=",
"1",
":",
"return",
"[",
"listed",
"]",
"return",
"[",
"return_type",
... | https://github.com/koaning/scikit-lego/blob/028597fd0ba9ac387b9faa6f06050a7ee05e6cba/sklego/common.py#L146-L168 | |
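The `as_list` helper used above comes from the same module and is not shown in this row. With a minimal stand-in for it, the doctests in the row can be reproduced; note the single-element case returns `[listed]` without applying `return_type`, exactly as written:

```python
def as_list(val):
    # Minimal stand-in for sklego's as_list helper (not shown in this row):
    # wrap a bare string in a list, pass other iterables through as a list.
    return [val] if isinstance(val, str) else list(val)

def expanding_list(list_to_extent, return_type=list):
    """Make an expanding list of lists: first element, first 2 elements, etc."""
    listed = as_list(list_to_extent)
    if len(listed) <= 1:
        return [listed]
    return [return_type(listed[: n + 1]) for n in range(len(listed))]
```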
ansible-community/molecule | be98c8db07666fd1125f69020419b67fda48b559 | src/molecule/command/create.py | python | Create.execute | (self) | Execute the actions necessary to perform a `molecule create` and \
returns None.
:return: None | Execute the actions necessary to perform a `molecule create` and \
returns None. | [
"Execute",
"the",
"actions",
"necessary",
"to",
"perform",
"a",
"molecule",
"create",
"and",
"\\",
"returns",
"None",
"."
] | def execute(self):
"""
Execute the actions necessary to perform a `molecule create` and \
returns None.
:return: None
"""
self._config.state.change_state("driver", self._config.driver.name)
if self._config.driver.delegated and not self._config.driver.managed:
msg = "Skipping, instances are delegated."
LOG.warning(msg)
return
if self._config.state.created:
msg = "Skipping, instances already created."
LOG.warning(msg)
return
self._config.provisioner.create()
self._config.state.change_state("created", True) | [
"def",
"execute",
"(",
"self",
")",
":",
"self",
".",
"_config",
".",
"state",
".",
"change_state",
"(",
"\"driver\"",
",",
"self",
".",
"_config",
".",
"driver",
".",
"name",
")",
"if",
"self",
".",
"_config",
".",
"driver",
".",
"delegated",
"and",
... | https://github.com/ansible-community/molecule/blob/be98c8db07666fd1125f69020419b67fda48b559/src/molecule/command/create.py#L75-L96 | ||
digidotcom/xbee-python | 0757f4be0017530c205175fbee8f9f61be9614d1 | digi/xbee/devices.py | python | XBeeDevice.send_data_broadcast | (self, data, transmit_options=TransmitOptions.NONE.value) | return self._send_data_64(XBee64BitAddress.BROADCAST_ADDRESS, data,
transmit_options=transmit_options) | Sends the provided data to all the XBee nodes of the network (broadcast).
This method blocks until a success or error transmit status arrives or
the configured receive timeout expires.
The receive timeout is configured using method
:meth:`.AbstractXBeeDevice.set_sync_ops_timeout` and can be consulted
with :meth:`.AbstractXBeeDevice.get_sync_ops_timeout` method.
Args:
data (String or Bytearray): Data to send.
transmit_options (Integer, optional): Transmit options, bitfield of
:class:`.TransmitOptions`. Default to `TransmitOptions.NONE.value`.
Raises:
TimeoutException: If response is not received before the read
timeout expires.
InvalidOperatingModeException: If the XBee's operating mode is not
API or ESCAPED API. This method only checks the cached value of
the operating mode.
TransmitException: If the status of the response received is not OK.
XBeeException: If the XBee's communication interface is closed. | Sends the provided data to all the XBee nodes of the network (broadcast). | [
"Sends",
"the",
"provided",
"data",
"to",
"all",
"the",
"XBee",
"nodes",
"of",
"the",
"network",
"(",
"broadcast",
")",
"."
] | def send_data_broadcast(self, data, transmit_options=TransmitOptions.NONE.value):
"""
Sends the provided data to all the XBee nodes of the network (broadcast).
This method blocks until a success or error transmit status arrives or
the configured receive timeout expires.
The receive timeout is configured using method
:meth:`.AbstractXBeeDevice.set_sync_ops_timeout` and can be consulted
with :meth:`.AbstractXBeeDevice.get_sync_ops_timeout` method.
Args:
data (String or Bytearray): Data to send.
transmit_options (Integer, optional): Transmit options, bitfield of
:class:`.TransmitOptions`. Default to `TransmitOptions.NONE.value`.
Raises:
TimeoutException: If response is not received before the read
timeout expires.
InvalidOperatingModeException: If the XBee's operating mode is not
API or ESCAPED API. This method only checks the cached value of
the operating mode.
TransmitException: If the status of the response received is not OK.
XBeeException: If the XBee's communication interface is closed.
"""
return self._send_data_64(XBee64BitAddress.BROADCAST_ADDRESS, data,
transmit_options=transmit_options) | [
"def",
"send_data_broadcast",
"(",
"self",
",",
"data",
",",
"transmit_options",
"=",
"TransmitOptions",
".",
"NONE",
".",
"value",
")",
":",
"return",
"self",
".",
"_send_data_64",
"(",
"XBee64BitAddress",
".",
"BROADCAST_ADDRESS",
",",
"data",
",",
"transmit_o... | https://github.com/digidotcom/xbee-python/blob/0757f4be0017530c205175fbee8f9f61be9614d1/digi/xbee/devices.py#L3026-L3052 | |
LGE-ARC-AdvancedAI/auptimizer | 50f6e3b4e0cb9146ca90fd74b9b24ca97ae22617 | src/aup/Proposer/spearmint/chooser/cma.py | python | DEAPCMADataLogger.load | (self, filenameprefix=None) | return dat | loads data from files written and return a data dictionary, *not*
a prerequisite for using `plot()` or `disp()`.
Argument `filenameprefix` is the filename prefix of data to be loaded (five files),
by default ``'outcmaes'``.
Return data dictionary with keys `xrecent`, `xmean`, `f`, `D`, `std` | loads data from files written and return a data dictionary, *not*
a prerequisite for using `plot()` or `disp()`. | [
"loads",
"data",
"from",
"files",
"written",
"and",
"return",
"a",
"data",
"dictionary",
"*",
"not",
"*",
"a",
"prerequisite",
"for",
"using",
"plot",
"()",
"or",
"disp",
"()",
"."
] | def load(self, filenameprefix=None):
"""loads data from files written and return a data dictionary, *not*
a prerequisite for using `plot()` or `disp()`.
Argument `filenameprefix` is the filename prefix of data to be loaded (five files),
by default ``'outcmaes'``.
Return data dictionary with keys `xrecent`, `xmean`, `f`, `D`, `std`
"""
if not filenameprefix:
filenameprefix = self.name_prefix
dat = self # historical
# dat.xrecent = _fileToMatrix(filenameprefix + 'xrecentbest.dat')
dat.xmean = _fileToMatrix(filenameprefix + 'xmean.dat')
dat.std = _fileToMatrix(filenameprefix + 'stddev' + '.dat')
# a hack to later write something into the last entry
for key in ['xmean', 'std']: # 'xrecent',
dat.__dict__[key].append(dat.__dict__[key][-1]) # copy last row to later fill in annotation position for display
dat.__dict__[key] = array(dat.__dict__[key], copy=False)
dat.f = array(_fileToMatrix(filenameprefix + 'fit.dat'))
dat.D = array(_fileToMatrix(filenameprefix + 'axlen' + '.dat'))
return dat | [
"def",
"load",
"(",
"self",
",",
"filenameprefix",
"=",
"None",
")",
":",
"if",
"not",
"filenameprefix",
":",
"filenameprefix",
"=",
"self",
".",
"name_prefix",
"dat",
"=",
"self",
"# historical",
"# dat.xrecent = _fileToMatrix(filenameprefix + 'xrecentbest.dat')",
"d... | https://github.com/LGE-ARC-AdvancedAI/auptimizer/blob/50f6e3b4e0cb9146ca90fd74b9b24ca97ae22617/src/aup/Proposer/spearmint/chooser/cma.py#L4263-L4285 | |
pyparallel/pyparallel | 11e8c6072d48c8f13641925d17b147bf36ee0ba3 | Lib/site-packages/traitlets-4.0.0-py3.3.egg/traitlets/config/application.py | python | Application.initialize | (self, argv=None) | Do the basic steps to configure me.
Override in subclasses. | Do the basic steps to configure me. | [
"Do",
"the",
"basic",
"steps",
"to",
"configure",
"me",
"."
] | def initialize(self, argv=None):
"""Do the basic steps to configure me.
Override in subclasses.
"""
self.parse_command_line(argv) | [
"def",
"initialize",
"(",
"self",
",",
"argv",
"=",
"None",
")",
":",
"self",
".",
"parse_command_line",
"(",
"argv",
")"
] | https://github.com/pyparallel/pyparallel/blob/11e8c6072d48c8f13641925d17b147bf36ee0ba3/Lib/site-packages/traitlets-4.0.0-py3.3.egg/traitlets/config/application.py#L254-L259 | ||
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/kms/v20190118/models.py | python | DescribeKeysRequest.__init__ | (self) | r"""
:param KeyIds: The list of CMK IDs to query; a single batch query supports at most 100 KeyIds.
:type KeyIds: list of str | r"""
:param KeyIds: The list of CMK IDs to query; a single batch query supports at most 100 KeyIds.
:type KeyIds: list of str | [
"r",
":",
"param",
"KeyIds",
":",
"查询CMK的ID列表,批量查询一次最多支持100个KeyId",
":",
"type",
"KeyIds",
":",
"list",
"of",
"str"
] | def __init__(self):
r"""
:param KeyIds: The list of CMK IDs to query; a single batch query supports at most 100 KeyIds.
:type KeyIds: list of str
"""
self.KeyIds = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"KeyIds",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/kms/v20190118/models.py#L709-L714 | ||
dmnfarrell/pandastable | 9c268b3e2bfe2e718eaee4a30bd02832a0ad1614 | pandastable/core.py | python | Table.pasteTable | (self, event=None) | return | Paste a new table from the clipboard | Paste a new table from the clipboard | [
"Paste",
"a",
"new",
"table",
"from",
"the",
"clipboard"
] | def pasteTable(self, event=None):
"""Paste a new table from the clipboard"""
self.storeCurrent()
try:
# read the clipboard once; the original read it a second time below
# without a try/except, and the clipboard could change between reads
df = pd.read_clipboard(sep=',', index_col=0, error_bad_lines=False)
except Exception as e:
messagebox.showwarning("Could not read data", e,
parent=self.parentframe)
return
if len(df) == 0:
return
model = TableModel(df)
self.updateModel(model)
self.redraw()
return | [
"def",
"pasteTable",
"(",
"self",
",",
"event",
"=",
"None",
")",
":",
"self",
".",
"storeCurrent",
"(",
")",
"try",
":",
"df",
"=",
"pd",
".",
"read_clipboard",
"(",
"sep",
"=",
"','",
",",
"error_bad_lines",
"=",
"False",
")",
"except",
"Exception",
... | https://github.com/dmnfarrell/pandastable/blob/9c268b3e2bfe2e718eaee4a30bd02832a0ad1614/pandastable/core.py#L2452-L2469 | |
SiCKRAGE/SiCKRAGE | 45fb67c0c730fc22a34c695b5a62b11970621c53 | sickrage/core/blackandwhitelist.py | python | BlackAndWhiteList.set_white_keywords | (self, values) | Sets whitelist to new value
:param values: Complete list of keywords to be set as whitelist | Sets whitelist to new value | [
"Sets",
"whitelist",
"to",
"new",
"value"
] | def set_white_keywords(self, values):
"""
Sets whitelist to new value
:param values: Complete list of keywords to be set as whitelist
"""
session = sickrage.app.main_db.session()
session.query(MainDB.Whitelist).filter_by(series_id=self.series_id, series_provider_id=self.series_provider_id).delete()
session.commit()
self._add_keywords(MainDB.Whitelist, values)
self.whitelist = values
sickrage.app.log.debug('Whitelist set to: %s' % self.whitelist) | [
"def",
"set_white_keywords",
"(",
"self",
",",
"values",
")",
":",
"session",
"=",
"sickrage",
".",
"app",
".",
"main_db",
".",
"session",
"(",
")",
"session",
".",
"query",
"(",
"MainDB",
".",
"Whitelist",
")",
".",
"filter_by",
"(",
"series_id",
"=",
... | https://github.com/SiCKRAGE/SiCKRAGE/blob/45fb67c0c730fc22a34c695b5a62b11970621c53/sickrage/core/blackandwhitelist.py#L86-L100 | ||
saltstack/salt | fae5bc757ad0f1716483ce7ae180b451545c2058 | salt/states/monit.py | python | monitor | (name) | return ret | Get the summary from module monit and try to see if service is
being monitored. If not then monitor the service. | Get the summary from module monit and try to see if service is
being monitored. If not then monitor the service. | [
"Get",
"the",
"summary",
"from",
"module",
"monit",
"and",
"try",
"to",
"see",
"if",
"service",
"is",
"being",
"monitored",
".",
"If",
"not",
"then",
"monitor",
"the",
"service",
"."
] | def monitor(name):
"""
Get the summary from module monit and try to see if service is
being monitored. If not then monitor the service.
"""
ret = {"result": None, "name": name, "comment": "", "changes": {}}
result = __salt__["monit.summary"](name)
try:
for key, value in result.items():
if "Running" in value[name]:
ret["comment"] = "{} is being monitored.".format(name)
ret["result"] = True
else:
if __opts__["test"]:
ret["comment"] = "Service {} is set to be monitored.".format(name)
ret["result"] = None
return ret
__salt__["monit.monitor"](name)
ret["comment"] = "{} started to be monitored.".format(name)
ret["changes"][name] = "Running"
ret["result"] = True
break
except KeyError:
ret["comment"] = "{} not found in configuration.".format(name)
ret["result"] = False
return ret | [
"def",
"monitor",
"(",
"name",
")",
":",
"ret",
"=",
"{",
"\"result\"",
":",
"None",
",",
"\"name\"",
":",
"name",
",",
"\"comment\"",
":",
"\"\"",
",",
"\"changes\"",
":",
"{",
"}",
"}",
"result",
"=",
"__salt__",
"[",
"\"monit.summary\"",
"]",
"(",
... | https://github.com/saltstack/salt/blob/fae5bc757ad0f1716483ce7ae180b451545c2058/salt/states/monit.py#L32-L59 | |
jgagneastro/coffeegrindsize | 22661ebd21831dba4cf32bfc6ba59fe3d49f879c | App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/font_manager.py | python | FontManager.score_style | (self, style1, style2) | return 1.0 | Returns a match score between *style1* and *style2*.
An exact match returns 0.0.
A match between 'italic' and 'oblique' returns 0.1.
No match returns 1.0. | Returns a match score between *style1* and *style2*. | [
"Returns",
"a",
"match",
"score",
"between",
"*",
"style1",
"*",
"and",
"*",
"style2",
"*",
"."
] | def score_style(self, style1, style2):
"""
Returns a match score between *style1* and *style2*.
An exact match returns 0.0.
A match between 'italic' and 'oblique' returns 0.1.
No match returns 1.0.
"""
if style1 == style2:
return 0.0
elif style1 in ('italic', 'oblique') and \
style2 in ('italic', 'oblique'):
return 0.1
return 1.0 | [
"def",
"score_style",
"(",
"self",
",",
"style1",
",",
"style2",
")",
":",
"if",
"style1",
"==",
"style2",
":",
"return",
"0.0",
"elif",
"style1",
"in",
"(",
"'italic'",
",",
"'oblique'",
")",
"and",
"style2",
"in",
"(",
"'italic'",
",",
"'oblique'",
"... | https://github.com/jgagneastro/coffeegrindsize/blob/22661ebd21831dba4cf32bfc6ba59fe3d49f879c/App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/font_manager.py#L1065-L1080 | |
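The scoring scheme above is self-contained and easy to exercise standalone; a direct re-statement of its three cases, outside the `FontManager` class:

```python
def score_style(style1, style2):
    # 0.0 for an exact match, 0.1 for the italic/oblique near-match,
    # 1.0 otherwise -- lower scores mean better font matches.
    if style1 == style2:
        return 0.0
    elif style1 in ('italic', 'oblique') and style2 in ('italic', 'oblique'):
        return 0.1
    return 1.0
```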
stevearc/flywheel | ac6eea314f6d88b593cf809336d8723df0b78f6f | git_hooks/hook.py | python | copy_index | (tmpdir) | Copy the git repo's index into a temporary directory | Copy the git repo's index into a temporary directory | [
"Copy",
"the",
"git",
"repo",
"s",
"index",
"into",
"a",
"temporary",
"directory"
] | def copy_index(tmpdir):
""" Copy the git repo's index into a temporary directory """
# Put the code being checked-in into the temp dir
subprocess.check_call(['git', 'checkout-index', '-a', '-f', '--prefix=%s/'
% tmpdir])
# Go to each recursive submodule and use a 'git archive' tarpipe to copy
# the correct ref into the temporary directory
output = check_output(['git', 'submodule', 'status', '--recursive',
'--cached'])
for line in output.splitlines():
ref, path, _ = line.split()
ref = ref.strip('+')
with pushd(path):
archive = subprocess.Popen(['git', 'archive', '--format=tar', ref],
stdout=subprocess.PIPE)
untar_cmd = ['tar', '-x', '-C', '%s/%s/' % (tmpdir, path)]
untar = subprocess.Popen(untar_cmd, stdin=archive.stdout,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
out = untar.communicate()[0]
if untar.returncode != 0:
raise subprocess.CalledProcessError(untar.returncode,
untar_cmd, out) | [
"def",
"copy_index",
"(",
"tmpdir",
")",
":",
"# Put the code being checked-in into the temp dir",
"subprocess",
".",
"check_call",
"(",
"[",
"'git'",
",",
"'checkout-index'",
",",
"'-a'",
",",
"'-f'",
",",
"'--prefix=%s/'",
"%",
"tmpdir",
"]",
")",
"# Go to each re... | https://github.com/stevearc/flywheel/blob/ac6eea314f6d88b593cf809336d8723df0b78f6f/git_hooks/hook.py#L103-L126 | ||
kubernetes-client/python | 47b9da9de2d02b2b7a34fbe05afb44afd130d73a | kubernetes/client/models/core_v1_event.py | python | CoreV1Event.count | (self) | return self._count | Gets the count of this CoreV1Event. # noqa: E501
The number of times this event has occurred. # noqa: E501
:return: The count of this CoreV1Event. # noqa: E501
:rtype: int | Gets the count of this CoreV1Event. # noqa: E501 | [
"Gets",
"the",
"count",
"of",
"this",
"CoreV1Event",
".",
"#",
"noqa",
":",
"E501"
] | def count(self):
"""Gets the count of this CoreV1Event. # noqa: E501
The number of times this event has occurred. # noqa: E501
:return: The count of this CoreV1Event. # noqa: E501
:rtype: int
"""
return self._count | [
"def",
"count",
"(",
"self",
")",
":",
"return",
"self",
".",
"_count"
] | https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/models/core_v1_event.py#L180-L188 | |
nadineproject/nadine | c41c8ef7ffe18f1853029c97eecc329039b4af6c | doors/hid_control.py | python | HIDDoorController.is_locked | (self) | return "set" == relay | [] | def is_locked(self):
door_xml = self.__send_xml(list_doors())
relay = get_attribute(door_xml, "relayState")
return "set" == relay | [
"def",
"is_locked",
"(",
"self",
")",
":",
"door_xml",
"=",
"self",
".",
"__send_xml",
"(",
"list_doors",
"(",
")",
")",
"relay",
"=",
"get_attribute",
"(",
"door_xml",
",",
"\"relayState\"",
")",
"return",
"\"set\"",
"==",
"relay"
] | https://github.com/nadineproject/nadine/blob/c41c8ef7ffe18f1853029c97eecc329039b4af6c/doors/hid_control.py#L185-L188 | |||
OpenMDAO/OpenMDAO-Framework | f2e37b7de3edeaaeb2d251b375917adec059db9b | openmdao.main/src/openmdao/main/interfaces.py | python | IHasIneqConstraints.get_ineq_constraints | () | Returns an ordered dict of inequality constraint objects. | Returns an ordered dict of inequality constraint objects. | [
"Returns",
"an",
"ordered",
"dict",
"of",
"inequality",
"constraint",
"objects",
"."
] | def get_ineq_constraints():
"""Returns an ordered dict of inequality constraint objects.""" | [
"def",
"get_ineq_constraints",
"(",
")",
":"
] | https://github.com/OpenMDAO/OpenMDAO-Framework/blob/f2e37b7de3edeaaeb2d251b375917adec059db9b/openmdao.main/src/openmdao/main/interfaces.py#L703-L704 | ||
AppScale/gts | 46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9 | AppServer/google/appengine/ext/mapreduce/input_readers.py | python | AbstractDatastoreInputReader.validate | (cls, mapper_spec) | Inherit docs. | Inherit docs. | [
"Inherit",
"docs",
"."
] | def validate(cls, mapper_spec):
"""Inherit docs."""
params = _get_params(mapper_spec)
if cls.ENTITY_KIND_PARAM not in params:
raise BadReaderParamsError("Missing input reader parameter 'entity_kind'")
if cls.BATCH_SIZE_PARAM in params:
try:
batch_size = int(params[cls.BATCH_SIZE_PARAM])
if batch_size < 1:
raise BadReaderParamsError("Bad batch size: %s" % batch_size)
except ValueError, e:
raise BadReaderParamsError("Bad batch size: %s" % e)
if cls.NAMESPACE_PARAM in params:
if not isinstance(params[cls.NAMESPACE_PARAM],
(str, unicode, type(None))):
raise BadReaderParamsError(
"Expected a single namespace string")
if cls.NAMESPACES_PARAM in params:
raise BadReaderParamsError("Multiple namespaces are no longer supported")
if cls.FILTERS_PARAM in params:
filters = params[cls.FILTERS_PARAM]
if not isinstance(filters, list):
raise BadReaderParamsError("Expected list for filters parameter")
for f in filters:
if not isinstance(f, (tuple, list)):
raise BadReaderParamsError("Filter should be a tuple or list: %s", f)
if len(f) != 3:
raise BadReaderParamsError("Filter should be a 3-tuple: %s", f)
prop, op, _ = f
if not isinstance(prop, basestring):
raise BadReaderParamsError("Property should be string: %s", prop)
if not isinstance(op, basestring):
raise BadReaderParamsError("Operator should be string: %s", op) | [
"def",
"validate",
"(",
"cls",
",",
"mapper_spec",
")",
":",
"params",
"=",
"_get_params",
"(",
"mapper_spec",
")",
"if",
"cls",
".",
"ENTITY_KIND_PARAM",
"not",
"in",
"params",
":",
"raise",
"BadReaderParamsError",
"(",
"\"Missing input reader parameter 'entity_kin... | https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/google/appengine/ext/mapreduce/input_readers.py#L545-L577 | ||
yinboc/few-shot-meta-baseline | 779fae39dad3537e7c801049c858923e2a352dfe | meta-dataset/meta_dataset/analysis/select_best_model.py | python | extract_best_from_event_file | (event_path, log_details=False) | return best_acc, best_step | Returns the best accuracy and the step at which it occurs in the given events.
This searches the summaries written in a given event file, which may be only a
subset of the total summaries of a run, since the summaries of a run are
sometimes split into multiple event files.
Args:
event_path: A string. The path to an event file.
log_details: A boolean. Whether to log details regarding skipped event paths
in which locating the tag "mean valid acc" failed. | Returns the best accuracy and the step at which it occurs in the given events. | [
"Returns",
"the",
"best",
"accuracy",
"and",
"the",
"step",
"it",
"occurs",
"in",
"in",
"the",
"given",
"events",
"."
] | def extract_best_from_event_file(event_path, log_details=False):
"""Returns the best accuracy and the step at which it occurs in the given events.
This searches the summaries written in a given event file, which may be only a
subset of the total summaries of a run, since the summaries of a run are
sometimes split into multiple event files.
Args:
event_path: A string. The path to an event file.
log_details: A boolean. Whether to log details regarding skipped event paths
in which locating the tag "mean valid acc" failed.
"""
steps, valid_accs = [], []
try:
for event in tf.train.summary_iterator(event_path):
step = event.step
for value in event.summary.value:
if value.tag == 'mean valid acc':
steps.append(step)
valid_accs.append(value.simple_value)
except tf.errors.DataLossError:
if log_details:
tf.logging.info(
'Omitting events from event_path {} because '
'tf.train.summary_iterator(event_path) failed.'.format(event_path))
return 0, 0
if not valid_accs:
# Could happen if there is no DataLossError above but for some reason
# there is no 'mean valid acc' tag found in the summary values.
tf.logging.info(
'Did not find any "mean valid acc" tags in event_path {}'.format(
event_path))
return 0, 0
argmax_ind = np.argmax(valid_accs)
best_acc = valid_accs[argmax_ind]
best_step = steps[argmax_ind]
if log_details:
tf.logging.info('Successfully read event_path {} with best_acc {}'.format(
event_path, best_acc))
return best_acc, best_step | [
"def",
"extract_best_from_event_file",
"(",
"event_path",
",",
"log_details",
"=",
"False",
")",
":",
"steps",
",",
"valid_accs",
"=",
"[",
"]",
",",
"[",
"]",
"try",
":",
"for",
"event",
"in",
"tf",
".",
"train",
".",
"summary_iterator",
"(",
"event_path"... | https://github.com/yinboc/few-shot-meta-baseline/blob/779fae39dad3537e7c801049c858923e2a352dfe/meta-dataset/meta_dataset/analysis/select_best_model.py#L251-L290 | |
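Stripped of the TF event-file parsing, the selection step above is just an argmax over the collected pairs. A small sketch that mirrors it, including the `(0, 0)` fallback and numpy's first-maximum tie-breaking:

```python
def best_acc_and_step(steps, valid_accs):
    # Mirror of the selection logic above, without TensorFlow:
    # return the highest validation accuracy and the step it occurred at.
    if not valid_accs:
        return 0, 0
    # like np.argmax, take the index of the first maximum
    argmax_ind = max(range(len(valid_accs)), key=valid_accs.__getitem__)
    return valid_accs[argmax_ind], steps[argmax_ind]
```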
jchanvfx/NodeGraphQt | 8b810ef469f839176f9c26bdd6496ff34d9b64a2 | NodeGraphQt/base/menu.py | python | NodeGraphMenu.all_commands | (self) | return [NodeGraphCommand(self._graph, a) for a in child_actions] | Returns all child and sub child commands from the current context menu.
Returns:
list[NodeGraphQt.MenuCommand]: list of commands. | Returns all child and sub child commands from the current context menu. | [
"Returns",
"all",
"child",
"and",
"sub",
"child",
"commands",
"from",
"the",
"current",
"context",
"menu",
"."
] | def all_commands(self):
"""
Returns all child and sub child commands from the current context menu.
Returns:
list[NodeGraphQt.MenuCommand]: list of commands.
"""
def get_actions(menu):
actions = []
for action in menu.actions():
if not action.menu():
if not action.isSeparator():
actions.append(action)
else:
actions += get_actions(action.menu())
return actions
child_actions = get_actions(self.qmenu)
return [NodeGraphCommand(self._graph, a) for a in child_actions] | [
"def",
"all_commands",
"(",
"self",
")",
":",
"def",
"get_actions",
"(",
"menu",
")",
":",
"actions",
"=",
"[",
"]",
"for",
"action",
"in",
"menu",
".",
"actions",
"(",
")",
":",
"if",
"not",
"action",
".",
"menu",
"(",
")",
":",
"if",
"not",
"ac... | https://github.com/jchanvfx/NodeGraphQt/blob/8b810ef469f839176f9c26bdd6496ff34d9b64a2/NodeGraphQt/base/menu.py#L83-L100 | |
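The nested `get_actions` helper above recurses into submenus, skips separators, and collects leaf actions. Its behavior can be checked with lightweight stand-ins for the Qt menu and action classes; these stand-ins are hypothetical, not NodeGraphQt's API:

```python
class FakeAction:
    # Hypothetical stand-in for a QAction: a leaf unless it owns a submenu.
    def __init__(self, name, submenu=None, separator=False):
        self.name = name
        self._submenu = submenu
        self._separator = separator

    def menu(self):
        return self._submenu

    def isSeparator(self):
        return self._separator


class FakeMenu:
    # Hypothetical stand-in for a QMenu holding an ordered list of actions.
    def __init__(self, actions):
        self._actions = actions

    def actions(self):
        return self._actions


def get_actions(menu):
    # Same recursion as in all_commands: leaf actions are collected,
    # separators are skipped, submenus are descended into depth-first.
    actions = []
    for action in menu.actions():
        if not action.menu():
            if not action.isSeparator():
                actions.append(action)
        else:
            actions += get_actions(action.menu())
    return actions
```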
CvvT/dumpDex | 92ab3b7e996194a06bf1dd5538a4954e8a5ee9c1 | python/idc.py | python | OpAlt | (ea, n, opstr) | return idaapi.set_forced_operand(ea, n, opstr) | Specify operand representation manually.
@param ea: linear address
@param n: number of operand
- 0 - the first operand
- 1 - the second, third and all other operands
- -1 - all operands
@param opstr: a string representation of the operand
@note: IDA will not check the specified operand, it will simply display
it instead of the original representation of the operand. | Specify operand representation manually. | [
"Specify",
"operand",
"represenation",
"manually",
"."
] | def OpAlt(ea, n, opstr):
"""
Specify operand representation manually.
@param ea: linear address
@param n: number of operand
- 0 - the first operand
- 1 - the second, third and all other operands
- -1 - all operands
@param opstr: a string representation of the operand
@note: IDA will not check the specified operand, it will simply display
it instead of the original representation of the operand.
"""
return idaapi.set_forced_operand(ea, n, opstr) | [
"def",
"OpAlt",
"(",
"ea",
",",
"n",
",",
"opstr",
")",
":",
"return",
"idaapi",
".",
"set_forced_operand",
"(",
"ea",
",",
"n",
",",
"opstr",
")"
] | https://github.com/CvvT/dumpDex/blob/92ab3b7e996194a06bf1dd5538a4954e8a5ee9c1/python/idc.py#L1218-L1232 | |
mrJean1/PyGeodesy | 7da5ca71aa3edb7bc49e219e0b8190686e1a7965 | pygeodesy/ltp.py | python | Frustum.hfov | (self) | | return Degrees(hfov=self._h_2 * _2_0) | Get the horizontal C{fov} (C{degrees}). |

def hfov(self):
    '''Get the horizontal C{fov} (C{degrees}).
    '''
    return Degrees(hfov=self._h_2 * _2_0)

https://github.com/mrJean1/PyGeodesy/blob/7da5ca71aa3edb7bc49e219e0b8190686e1a7965/pygeodesy/ltp.py#L344-L347 |
neurolib-dev/neurolib | 8d8ed2ceb422e9a1367193495a7e2df96cf4e4a3 | neurolib/optimize/evolution/evolution.py | python | Evolution.dfEvolution | (self, outputs=False) | | return dfEvolution | Returns a `pandas` DataFrame with the individuals of the the whole evolution. |

def dfEvolution(self, outputs=False):
    """Returns a `pandas` DataFrame with the individuals of the the whole evolution.
    This method can be usef after loading an evolution from disk using loadEvolution()

    :return: Pandas DataFrame with all individuals and their parameters
    :rtype: `pandas.core.frame.DataFrame`
    """
    parameters = self.parameterSpace.parameterNames
    allIndividuals = [p for gen, pop in self.history.items() for p in pop]
    popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in allIndividuals]).T
    dfEvolution = pd.DataFrame(popArray, index=parameters).T
    # add more information to the dataframe
    scores = [float(p.fitness.score) for p in allIndividuals]
    indIds = [p.id for p in allIndividuals]
    dfEvolution["score"] = scores
    dfEvolution["id"] = indIds
    dfEvolution["gen"] = [p.gIdx for p in allIndividuals]
    if outputs:
        dfEvolution = self._outputToDf(allIndividuals, dfEvolution)
    # add fitness columns
    # NOTE: have to do this with wvalues and divide by weights later, why?
    # Because after loading the evolution with dill, somehow multiple fitnesses
    # dissappear and only the first one is left. However, wvalues still has all
    # fitnesses, and we have acces to weightList, so this hack kind of helps
    n_fitnesses = len(self.pop[0].fitness.wvalues)
    for i in range(n_fitnesses):
        for ip, p in enumerate(allIndividuals):
            dfEvolution.loc[ip, f"f{i}"] = p.fitness.wvalues[i] / self.weightList[i]
    # the history keeps all individuals of all generations
    # there can be duplicates (in elitism for example), which we filter
    # out for the dataframe
    dfEvolution = self._dropDuplicatesFromDf(dfEvolution)
    dfEvolution = dfEvolution.reset_index(drop=True)
    return dfEvolution

https://github.com/neurolib-dev/neurolib/blob/8d8ed2ceb422e9a1367193495a7e2df96cf4e4a3/neurolib/optimize/evolution/evolution.py#L870-L906 |
NIHOPA/NLPre | 1f7c05734026b39467fe521adbea7e799b91037a | nlpre/replace_acronyms.py | python | replace_acronyms.check_acronym | (self, token) | | return token in self.acronym_dict | Check if a token is an acronym to be replaced |

def check_acronym(self, token):
    """
    Check if a token is an acronym to be replaced

    Args:
        token: a string token
    Returns:
        a boolean
    """
    if token.lower() == token:
        return False
    return token in self.acronym_dict

https://github.com/NIHOPA/NLPre/blob/1f7c05734026b39467fe521adbea7e799b91037a/nlpre/replace_acronyms.py#L127-L140 |
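The `check_acronym` entry above gates replacement on two cheap tests: a token that is already all-lowercase is never treated as an acronym; otherwise membership in the acronym dictionary decides. A self-contained sketch of that logic (the sample dictionary here is illustrative, not NLPre's data):

```python
def check_acronym(token, acronym_dict):
    # All-lowercase tokens are never acronyms (e.g. "nasa" vs "NASA").
    if token.lower() == token:
        return False
    return token in acronym_dict

acronyms = {"NASA": "National Aeronautics and Space Administration"}
hit = check_acronym("NASA", acronyms)   # upper case and known -> True
miss = check_acronym("nasa", acronyms)  # lowercase short-circuits -> False
```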
nlloyd/SubliminalCollaborator | 5c619e17ddbe8acb9eea8996ec038169ddcd50a1 | libs/twisted/protocols/policies.py | python | WrappingFactory.unregisterProtocol | (self, p) | | | Called by protocols when they go away. |

def unregisterProtocol(self, p):
    """
    Called by protocols when they go away.
    """
    del self.protocols[p]

https://github.com/nlloyd/SubliminalCollaborator/blob/5c619e17ddbe8acb9eea8996ec038169ddcd50a1/libs/twisted/protocols/policies.py#L179-L183 |
securesystemslab/zippy | ff0e84ac99442c2c55fe1d285332cfd4e185e089 | zippy/benchmarks/src/benchmarks/sympy/sympy/physics/quantum/piab.py | python | PIABKet._represent_default_basis | (self, **options) | | return self._represent_XOp(None, **options) | |

def _represent_default_basis(self, **options):
    return self._represent_XOp(None, **options)

https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/benchmarks/src/benchmarks/sympy/sympy/physics/quantum/piab.py#L47-L48 |
NeuralEnsemble/PyNN | 96f5d80a5e814e3ccf05958b0c1fb39e98cc981b | pyNN/serialization/sonata.py | python | EdgeGroup.from_data | (cls, id, edge_types_array, index, source_ids, target_ids, h5_data, edge_types_map, config) | | return obj | Create an EdgeGroup instance from data. |

def from_data(cls, id, edge_types_array, index, source_ids, target_ids,
              h5_data, edge_types_map, config):
    """Create an EdgeGroup instance from data.

    Arguments
    ---------
    id : integer
        Taken from the SONATA edges HDF5 file.
    node_types_array : NumPy array
        Subset of the data from "/edges/<population_name>/edge_type_id"
        that applies to this group.
    index : list
        Subset of the data from "/edges/<population_name>/edge_group_index"
        that applies to this group.
    h5_data : HDF5 Group
        The "/edges/<population_name>/<group_id>" group.
    edge_types_map : dict
        Data loaded from edge types CSV file. Top-level keys are edge type ids.
    config : dict
        Circuit config loaded from JSON.
    """
    obj = cls()
    obj.id = id
    obj.edge_types_array = edge_types_array
    obj.source_ids = source_ids
    obj.target_ids = target_ids
    parameters = defaultdict(dict)
    edge_type_ids = np.unique(edge_types_array)
    # parameters defined directly in edge_types csv file
    for edge_type_id in edge_type_ids:
        for name, value in edge_types_map[edge_type_id].items():
            parameters[name][edge_type_id] = cast(value)
    # parameters defined in json files referenced from edge_types.csv
    if "dynamics_params" in parameters:
        for edge_type_id in edge_type_ids:
            parameter_file_name = parameters["dynamics_params"][edge_type_id]
            parameter_file_path = join(config["components"]["synaptic_models_dir"],
                                       parameter_file_name)
            with open(parameter_file_path) as fp:
                dynamics_params = json.load(fp)
            for name, value in dynamics_params.items():
                parameters[name][edge_type_id] = value
    # parameters defined in .h5 files
    if 'dynamics_params' in h5_data:
        dynamics_params_group = h5_data['dynamics_params']
        # not sure the next bit is using `index` correctly
        for key in dynamics_params_group.keys():
            parameters[key] = dynamics_params_group[key][index]
    if 'nsyns' in h5_data:
        parameters['nsyns'] = h5_data['nsyns'][index]
    if 'syn_weight' in h5_data:
        parameters['syn_weight'] = h5_data['syn_weight'][index]
    obj.parameters = parameters
    obj.config = config
    logger.info(parameters)
    return obj

https://github.com/NeuralEnsemble/PyNN/blob/96f5d80a5e814e3ccf05958b0c1fb39e98cc981b/pyNN/serialization/sonata.py#L916-L978 |
meduza-corp/interstellar | 40a801ccd7856491726f5a126621d9318cabe2e1 | gsutil/gslib/commands/defacl.py | python | DefAclCommand.ApplyAclChanges | (self, url) | | | Applies the changes in self.changes to the provided URL. |

def ApplyAclChanges(self, url):
    """Applies the changes in self.changes to the provided URL."""
    bucket = self.gsutil_api.GetBucket(
        url.bucket_name, provider=url.scheme,
        fields=['defaultObjectAcl', 'metageneration'])
    # Default object ACLs can be blank if the ACL was set to private, or
    # if the user doesn't have permission. We warn about this with defacl get,
    # so just try the modification here and if the user doesn't have
    # permission they'll get an AccessDeniedException.
    current_acl = bucket.defaultObjectAcl
    modification_count = 0
    for change in self.changes:
        modification_count += change.Execute(
            url, current_acl, 'defacl', self.logger)
    if modification_count == 0:
        self.logger.info('No changes to %s', url)
        return
    try:
        preconditions = Preconditions(meta_gen_match=bucket.metageneration)
        bucket_metadata = apitools_messages.Bucket(defaultObjectAcl=current_acl)
        self.gsutil_api.PatchBucket(url.bucket_name, bucket_metadata,
                                    preconditions=preconditions,
                                    provider=url.scheme, fields=['id'])
    except BadRequestException as e:
        # Don't retry on bad requests, e.g. invalid email address.
        raise CommandException('Received bad request from server: %s' % str(e))
    except AccessDeniedException:
        self._WarnServiceAccounts()
        raise CommandException('Failed to set acl for %s. Please ensure you have '
                               'OWNER-role access to this resource.' % url)
    self.logger.info('Updated default ACL on %s', url)

https://github.com/meduza-corp/interstellar/blob/40a801ccd7856491726f5a126621d9318cabe2e1/gsutil/gslib/commands/defacl.py#L213-L247 |
agronholm/anyio | ac3e7c619913bd0ddf9c36b6e633b278d07405b7 | src/anyio/_core/_compat.py | python | maybe_async_cm | (cm: Union[ContextManager[T], AsyncContextManager[T]]) | | return _ContextManagerWrapper(cm) | Wrap a regular context manager as an async one if necessary. |

def maybe_async_cm(cm: Union[ContextManager[T], AsyncContextManager[T]]) -> AsyncContextManager[T]:
    """
    Wrap a regular context manager as an async one if necessary.

    This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
    methods were changed to return regular context managers instead of async ones.

    :param cm: a regular or async context manager
    :return: an async context manager

    .. versionadded:: 2.2
    """
    if not isinstance(cm, AbstractContextManager):
        raise TypeError('Given object is not an context manager')

    return _ContextManagerWrapper(cm)

https://github.com/agronholm/anyio/blob/ac3e7c619913bd0ddf9c36b6e633b278d07405b7/src/anyio/_core/_compat.py#L69-L85 |
tp4a/teleport | 1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad | server/www/packages/packages-windows/x86/mako/util.py | python | LRUCache.values | (self) | | return [i.value for i in dict.values(self)] | |

def values(self):
    return [i.value for i in dict.values(self)]

https://github.com/tp4a/teleport/blob/1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad/server/www/packages/packages-windows/x86/mako/util.py#L200-L201 |
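Mako's `LRUCache.values` above unwraps the `.value` attribute of the record objects the cache stores internally. A minimal sketch of that record-wrapping pattern (the `_Item` record and `WrappingDict` class are illustrative, not Mako's actual classes):

```python
class _Item:
    # Wrapper record; a real LRU cache would also keep a timestamp
    # here to decide which entry to evict.
    def __init__(self, key, value):
        self.key = key
        self.value = value

class WrappingDict(dict):
    def __setitem__(self, key, value):
        dict.__setitem__(self, key, _Item(key, value))

    def values(self):
        # Unwrap each stored record, mirroring LRUCache.values
        return [i.value for i in dict.values(self)]

d = WrappingDict()
d["a"] = 1
d["b"] = 2
vals = sorted(d.values())
```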
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/setuptools/_vendor/pyparsing.py | python | _ParseResultsWithOffset.__repr__ | (self) | | return repr(self.tup[0]) | |

def __repr__(self):
    return repr(self.tup[0])

https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/setuptools/_vendor/pyparsing.py#L296-L297 |
dropbox/dropbox-sdk-python | 015437429be224732990041164a21a0501235db1 | dropbox/sharing.py | python | PermissionDeniedReason.is_user_not_allowed_by_owner | (self) | | return self._tag == 'user_not_allowed_by_owner' | Check if the union tag is ``user_not_allowed_by_owner``. |

def is_user_not_allowed_by_owner(self):
    """
    Check if the union tag is ``user_not_allowed_by_owner``.

    :rtype: bool
    """
    return self._tag == 'user_not_allowed_by_owner'

https://github.com/dropbox/dropbox-sdk-python/blob/015437429be224732990041164a21a0501235db1/dropbox/sharing.py#L6793-L6799 |
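The generated SDK method above is one instance of a tagged-union accessor pattern: the union stores its active variant as a string in `_tag` and exposes one `is_*` predicate per variant. A hedged, simplified sketch of that pattern (an illustrative class, not the real Stone-generated code):

```python
class PermissionDeniedReason:
    # Simplified stand-in for a generated tagged union:
    # the active variant is recorded as a string in _tag.
    def __init__(self, tag):
        self._tag = tag

    def is_user_not_allowed_by_owner(self):
        return self._tag == 'user_not_allowed_by_owner'

    def is_user_not_same_team_as_owner(self):
        return self._tag == 'user_not_same_team_as_owner'

reason = PermissionDeniedReason('user_not_allowed_by_owner')
```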
scikit-multiflow/scikit-multiflow | d073a706b5006cba2584761286b7fa17e74e87be | src/skmultiflow/meta/additive_expert_ensemble.py | python | AdditiveExpertEnsembleClassifier.predict | (self, X) | | return (aggregate + 0.5).astype(int) | Predicts the class labels of X in a general classification setting. |

def predict(self, X):
    """ Predicts the class labels of X in a general classification setting.

    The predict function will take an average of the predictions of its
    learners, weighted by their respective weights, and return the most
    likely class.

    Parameters
    ----------
    X: numpy.ndarray of shape (n_samples, n_features)
        A matrix of the samples we want to predict.

    Returns
    -------
    numpy.ndarray
        A numpy.ndarray with the label prediction for all the samples in X.
    """
    preds = np.array([np.array(exp.estimator.predict(X)) * exp.weight
                      for exp in self.experts])
    sum_weights = sum(exp.weight for exp in self.experts)
    aggregate = np.sum(preds / sum_weights, axis=0)
    return (aggregate + 0.5).astype(int)

https://github.com/scikit-multiflow/scikit-multiflow/blob/d073a706b5006cba2584761286b7fa17e74e87be/src/skmultiflow/meta/additive_expert_ensemble.py#L157-L178 |
nanoporetech/medaka | 2b83074fe3b6a6ec971614bfc6804f543fe1e5f0 | medaka/training.py | python | qscore | (y_true, y_pred) | | return -10.0 * 0.434294481 * K.log(error) | Keras metric function for calculating scaled error. |

def qscore(y_true, y_pred):
    """Keras metric function for calculating scaled error.

    :param y_true: tensor of true class labels.
    :param y_pred: class output scores from network.

    :returns: class error expressed as a phred score.
    """
    from tensorflow.keras import backend as K
    error = K.cast(K.not_equal(
        K.max(y_true, axis=-1), K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
        K.floatx()
    )
    error = K.sum(error) / K.sum(K.ones_like(error))
    return -10.0 * 0.434294481 * K.log(error)

https://github.com/nanoporetech/medaka/blob/2b83074fe3b6a6ec971614bfc6804f543fe1e5f0/medaka/training.py#L13-L27 |
rbgirshick/py-faster-rcnn | 781a917b378dbfdedb45b6a56189a31982da1b43 | lib/fast_rcnn/train.py | python | SolverWrapper.train_model | (self, max_iters) | | return model_paths | Network training loop. |

def train_model(self, max_iters):
    """Network training loop."""
    last_snapshot_iter = -1
    timer = Timer()
    model_paths = []
    while self.solver.iter < max_iters:
        # Make one SGD update
        timer.tic()
        self.solver.step(1)
        timer.toc()
        if self.solver.iter % (10 * self.solver_param.display) == 0:
            print 'speed: {:.3f}s / iter'.format(timer.average_time)
        if self.solver.iter % cfg.TRAIN.SNAPSHOT_ITERS == 0:
            last_snapshot_iter = self.solver.iter
            model_paths.append(self.snapshot())
    if last_snapshot_iter != self.solver.iter:
        model_paths.append(self.snapshot())
    return model_paths

https://github.com/rbgirshick/py-faster-rcnn/blob/781a917b378dbfdedb45b6a56189a31982da1b43/lib/fast_rcnn/train.py#L93-L112 |
timkpaine/pyEX | 254acd2b0cf7cb7183100106f4ecc11d1860c46a | pyEX/cryptocurrency/cryptocurrency.py | python | cryptoPriceDF | (*args, **kwargs) | | return pd.DataFrame(cryptoPrice(*args, **kwargs)) | |

def cryptoPriceDF(*args, **kwargs):
    return pd.DataFrame(cryptoPrice(*args, **kwargs))

https://github.com/timkpaine/pyEX/blob/254acd2b0cf7cb7183100106f4ecc11d1860c46a/pyEX/cryptocurrency/cryptocurrency.py#L82-L83 |
lxc/pylxd | d82e4bbf81cb2a932d62179e895c955c489066fd | pylxd/deprecated/profiles.py | python | LXDProfile.profile_update | (self, profile, config) | | return self.connection.get_status("PUT", "/1.0/profiles/%s" % profile, json.dumps(config)) | Update the LXD profile (not implemented) |

def profile_update(self, profile, config):
    """Update the LXD profile (not implemented)"""
    return self.connection.get_status(
        "PUT", "/1.0/profiles/%s" % profile, json.dumps(config)
    )

https://github.com/lxc/pylxd/blob/d82e4bbf81cb2a932d62179e895c955c489066fd/pylxd/deprecated/profiles.py#L37-L41 |
theotherp/nzbhydra | 4b03d7f769384b97dfc60dade4806c0fc987514e | libs/werkzeug/routing.py | python | MapAdapter.build | (self, endpoint, values=None, method=None, force_external=False, append_unknown=True) | | return str('%s//%s%s/%s' % (self.url_scheme + ':' if self.url_scheme else '', host, self.script_name[:-1], path.lstrip('/'))) | Building URLs works pretty much the other way round. |

def build(self, endpoint, values=None, method=None, force_external=False,
          append_unknown=True):
    """Building URLs works pretty much the other way round. Instead of
    `match` you call `build` and pass it the endpoint and a dict of
    arguments for the placeholders.

    The `build` function also accepts an argument called `force_external`
    which, if you set it to `True` will force external URLs. Per default
    external URLs (include the server name) will only be used if the
    target URL is on a different subdomain.

    >>> m = Map([
    ...     Rule('/', endpoint='index'),
    ...     Rule('/downloads/', endpoint='downloads/index'),
    ...     Rule('/downloads/<int:id>', endpoint='downloads/show')
    ... ])
    >>> urls = m.bind("example.com", "/")
    >>> urls.build("index", {})
    '/'
    >>> urls.build("downloads/show", {'id': 42})
    '/downloads/42'
    >>> urls.build("downloads/show", {'id': 42}, force_external=True)
    'http://example.com/downloads/42'

    Because URLs cannot contain non ASCII data you will always get
    bytestrings back. Non ASCII characters are urlencoded with the
    charset defined on the map instance.

    Additional values are converted to unicode and appended to the URL as
    URL querystring parameters:

    >>> urls.build("index", {'q': 'My Searchstring'})
    '/?q=My+Searchstring'

    When processing those additional values, lists are furthermore
    interpreted as multiple values (as per
    :py:class:`werkzeug.datastructures.MultiDict`):

    >>> urls.build("index", {'q': ['a', 'b', 'c']})
    '/?q=a&q=b&q=c'

    If a rule does not exist when building a `BuildError` exception is
    raised.

    The build method accepts an argument called `method` which allows you
    to specify the method you want to have an URL built for if you have
    different methods for the same endpoint specified.

    .. versionadded:: 0.6
       the `append_unknown` parameter was added.

    :param endpoint: the endpoint of the URL to build.
    :param values: the values for the URL to build. Unhandled values are
                   appended to the URL as query parameters.
    :param method: the HTTP method for the rule if there are different
                   URLs for different methods on the same endpoint.
    :param force_external: enforce full canonical external URLs. If the URL
                           scheme is not provided, this will generate
                           a protocol-relative URL.
    :param append_unknown: unknown parameters are appended to the generated
                           URL as query string argument. Disable this
                           if you want the builder to ignore those.
    """
    self.map.update()
    if values:
        if isinstance(values, MultiDict):
            valueiter = iteritems(values, multi=True)
        else:
            valueiter = iteritems(values)
        values = dict((k, v) for k, v in valueiter if v is not None)
    else:
        values = {}

    rv = self._partial_build(endpoint, values, method, append_unknown)
    if rv is None:
        raise BuildError(endpoint, values, method)
    domain_part, path = rv

    host = self.get_host(domain_part)

    # shortcut this.
    if not force_external and (
        (self.map.host_matching and host == self.server_name) or
        (not self.map.host_matching and domain_part == self.subdomain)):
        return str(url_join(self.script_name, './' + path.lstrip('/')))
    return str('%s//%s%s/%s' % (
        self.url_scheme + ':' if self.url_scheme else '',
        host,
        self.script_name[:-1],
        path.lstrip('/')
    ))

https://github.com/theotherp/nzbhydra/blob/4b03d7f769384b97dfc60dade4806c0fc987514e/libs/werkzeug/routing.py#L1603-L1693 |
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_hxb2/lib/python3.5/site-packages/django/db/models/query.py | python | RawQuerySet.__getitem__ | (self, k) | | return list(self)[k] | |

def __getitem__(self, k):
    return list(self)[k]

https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_hxb2/lib/python3.5/site-packages/django/db/models/query.py#L1276-L1277 |
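The `RawQuerySet.__getitem__` one-liner above supports indexing by materialising the whole lazy query into a list first, so `raw_qs[k]` runs the query and then indexes the result. The same trade-off can be sketched with any lazy iterable (plain Python, no Django):

```python
class LazySeq:
    # Mimics RawQuerySet's approach: __iter__ produces results lazily,
    # but __getitem__ materialises everything before indexing.
    def __init__(self, make_iter):
        self._make_iter = make_iter

    def __iter__(self):
        return iter(self._make_iter())

    def __getitem__(self, k):
        return list(self)[k]

seq = LazySeq(lambda: (n * n for n in range(5)))
item = seq[3]   # -> 9
tail = seq[2:]  # slicing works too, because list(self) is a real list
```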
annoviko/pyclustering | bf4f51a472622292627ec8c294eb205585e50f52 | pyclustering/nnet/legion.py | python | legion_dynamic.allocate_sync_ensembles | (self, tolerance = 0.1) | | return allocate_sync_ensembles(self.__output, tolerance) | Allocate clusters in line with ensembles of synchronous oscillators where each synchronous ensemble corresponds to only one cluster. |

def allocate_sync_ensembles(self, tolerance = 0.1):
    """!
    @brief Allocate clusters in line with ensembles of synchronous oscillators where each synchronous ensemble corresponds to only one cluster.

    @param[in] tolerance (double): Maximum error for allocation of synchronous ensemble oscillators.

    @return (list) Grours of indexes of synchronous oscillators, for example, [ [index_osc1, index_osc3], [index_osc2], [index_osc4, index_osc5] ].
    """
    if (self.__ccore_legion_dynamic_pointer is not None):
        self.__output = wrapper.legion_dynamic_get_output(self.__ccore_legion_dynamic_pointer);

    return allocate_sync_ensembles(self.__output, tolerance);

https://github.com/annoviko/pyclustering/blob/bf4f51a472622292627ec8c294eb205585e50f52/pyclustering/nnet/legion.py#L178-L191 |
dropbox/stone | b7b64320631b3a4d2f10681dca64e0718ebe68ee | stone/frontend/lexer.py | python | Lexer.t_INITIAL_NEWLINE | (self, newline_token) | | | r'\n+' |

def t_INITIAL_NEWLINE(self, newline_token):
    r'\n+'
    newline_token.lexer.lineno += newline_token.value.count('\n')
    dent_tokens = self._create_tokens_for_next_line_dent(newline_token)
    if dent_tokens:
        dent_tokens.tokens.insert(0, newline_token)
        return dent_tokens
    else:
        return newline_token

https://github.com/dropbox/stone/blob/b7b64320631b3a4d2f10681dca64e0718ebe68ee/stone/frontend/lexer.py#L332-L340 |
PyLops/pylops | 33eb807c6f429dd2efe697627c0d3955328af81f | pylops/LinearOperator.py | python | LinearOperator._matmat | (self, X) | | return y | Matrix-matrix multiplication handler. |

def _matmat(self, X):
    """Matrix-matrix multiplication handler.

    Modified version of scipy _matmat to avoid having trailing dimension
    in col when provided to matvec
    """
    if sp.sparse.issparse(X):
        y = np.vstack([self.matvec(col.toarray().reshape(-1)) for col in X.T]).T
    else:
        y = np.vstack([self.matvec(col.reshape(-1)) for col in X.T]).T
    return y

https://github.com/PyLops/pylops/blob/33eb807c6f429dd2efe697627c0d3955328af81f/pylops/LinearOperator.py#L66-L76 |
francisck/DanderSpritz_docs | 86bb7caca5a957147f120b18bb5c31f299914904 | Python/Core/Lib/urlparse.py | python | urlsplit | (url, scheme='', allow_fragments=True) | Parse a URL into 5 components:
<scheme>://<netloc>/<path>?<query>#<fragment>
Return a 5-tuple: (scheme, netloc, path, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes. | Parse a URL into 5 components:
<scheme>://<netloc>/<path>?<query>#<fragment>
Return a 5-tuple: (scheme, netloc, path, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes. | [
"Parse",
"a",
"URL",
"into",
"5",
"components",
":",
"<scheme",
">",
":",
"//",
"<netloc",
">",
"/",
"<path",
">",
"?<query",
">",
"#<fragment",
">",
"Return",
"a",
"5",
"-",
"tuple",
":",
"(",
"scheme",
"netloc",
"path",
"query",
"fragment",
")",
".... | def urlsplit(url, scheme='', allow_fragments=True):
"""Parse a URL into 5 components:
<scheme>://<netloc>/<path>?<query>#<fragment>
Return a 5-tuple: (scheme, netloc, path, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes."""
allow_fragments = bool(allow_fragments)
key = (url, scheme, allow_fragments, type(url), type(scheme))
cached = _parse_cache.get(key, None)
if cached:
return cached
else:
if len(_parse_cache) >= MAX_CACHE_SIZE:
clear_cache()
netloc = query = fragment = ''
i = url.find(':')
if i > 0:
if url[:i] == 'http':
scheme = url[:i].lower()
url = url[i + 1:]
if url[:2] == '//':
netloc, url = _splitnetloc(url, 2)
if '[' in netloc and ']' not in netloc or ']' in netloc and '[' not in netloc:
raise ValueError('Invalid IPv6 URL')
if allow_fragments and '#' in url:
url, fragment = url.split('#', 1)
if '?' in url:
url, query = url.split('?', 1)
v = SplitResult(scheme, netloc, url, query, fragment)
_parse_cache[key] = v
return v
for c in url[:i]:
if c not in scheme_chars:
break
else:
try:
_testportnum = int(url[i + 1:])
except ValueError:
scheme, url = url[:i].lower(), url[i + 1:]
if url[:2] == '//':
netloc, url = _splitnetloc(url, 2)
if '[' in netloc and ']' not in netloc or ']' in netloc and '[' not in netloc:
raise ValueError('Invalid IPv6 URL')
if allow_fragments and scheme in uses_fragment and '#' in url:
url, fragment = url.split('#', 1)
if scheme in uses_query and '?' in url:
url, query = url.split('?', 1)
v = SplitResult(scheme, netloc, url, query, fragment)
_parse_cache[key] = v
return v | [
"def",
"urlsplit",
"(",
"url",
",",
"scheme",
"=",
"''",
",",
"allow_fragments",
"=",
"True",
")",
":",
"allow_fragments",
"=",
"bool",
"(",
"allow_fragments",
")",
"key",
"=",
"(",
"url",
",",
"scheme",
",",
"allow_fragments",
",",
"type",
"(",
"url",
... | https://github.com/francisck/DanderSpritz_docs/blob/86bb7caca5a957147f120b18bb5c31f299914904/Python/Core/Lib/urlparse.py#L164-L214 | ||
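The vendored `urlsplit` above (plus a parse cache and an `http` fast path) mirrors the standard library's behavior, so the 5-tuple it documents can be checked against the stdlib version directly:

```python
from urllib.parse import urlsplit

# <scheme>://<netloc>/<path>?<query>#<fragment>
parts = urlsplit("http://example.com/path?q=1#frag")
```

As the docstring says, the netloc stays a single string (no user/host/port breakdown here) and `%` escapes are not expanded.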
wxGlade/wxGlade | 44ed0d1cba78f27c5c0a56918112a737653b7b27 | bugdialog.py | python | BugReport.SetContent | (self, msg, exc) | Prepare given exception information and show it as dialog content.
msg: Short description of the action that has raised this error
exc: Caught exception (Exception instance)
see: SetContentEI() | Prepare given exception information and show it as dialog content. | [
"Prepare",
"given",
"exception",
"information",
"and",
"show",
"it",
"as",
"dialog",
"content",
"."
] | def SetContent(self, msg, exc):
"""Prepare given exception information and show it as dialog content.
msg: Short description of the action that has raised this error
exc: Caught exception (Exception instance)
see: SetContentEI()"""
if self._disabled:
return
exc_type = exc.__class__.__name__
exc_msg = str(exc)
header = self.st_header.GetLabel() % {'action': msg}
log.exception_orig(header)
self._fill_dialog(exc_msg, exc_type, header) | [
"def",
"SetContent",
"(",
"self",
",",
"msg",
",",
"exc",
")",
":",
"if",
"self",
".",
"_disabled",
":",
"return",
"exc_type",
"=",
"exc",
".",
"__class__",
".",
"__name__",
"exc_msg",
"=",
"str",
"(",
"exc",
")",
"header",
"=",
"self",
".",
"st_head... | https://github.com/wxGlade/wxGlade/blob/44ed0d1cba78f27c5c0a56918112a737653b7b27/bugdialog.py#L27-L40 | ||
doorstop-dev/doorstop | 03aa287e5069e29da6979274e1cb6714ee450d3a | doorstop/gui/application.py | python | Application.link | (self) | Add the specified link to the current item. | Add the specified link to the current item. | [
"Add",
"the",
"specified",
"link",
"to",
"the",
"current",
"item",
"."
] | def link(self):
"""Add the specified link to the current item."""
# Add the specified link to the list
uid = self.stringvar_link.get()
if uid:
self.listbox_links.insert(tk.END, uid)
self.stringvar_link.set('')
# Update the current item
self.update_item() | [
"def",
"link",
"(",
"self",
")",
":",
"# Add the specified link to the list",
"uid",
"=",
"self",
".",
"stringvar_link",
".",
"get",
"(",
")",
"if",
"uid",
":",
"self",
".",
"listbox_links",
".",
"insert",
"(",
"tk",
".",
"END",
",",
"uid",
")",
"self",
... | https://github.com/doorstop-dev/doorstop/blob/03aa287e5069e29da6979274e1cb6714ee450d3a/doorstop/gui/application.py#L734-L743 | ||
maas/maas | db2f89970c640758a51247c59bf1ec6f60cf4ab5 | src/provisioningserver/prometheus/metrics.py | python | set_global_labels | (**labels) | Update global labels for metrics. | Update global labels for metrics. | [
"Update",
"global",
"labels",
"for",
"metrics",
"."
] | def set_global_labels(**labels):
"""Update global labels for metrics."""
global GLOBAL_LABELS
GLOBAL_LABELS.update(labels) | [
"def",
"set_global_labels",
"(",
"*",
"*",
"labels",
")",
":",
"global",
"GLOBAL_LABELS",
"GLOBAL_LABELS",
".",
"update",
"(",
"labels",
")"
] | https://github.com/maas/maas/blob/db2f89970c640758a51247c59bf1ec6f60cf4ab5/src/provisioningserver/prometheus/metrics.py#L103-L106 | ||
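`set_global_labels` above uses a common pattern: a module-level dict merged with keyword arguments. A self-contained sketch of the same shape (a local `GLOBAL_LABELS`, mirroring the snippet rather than importing MAAS):

```python
GLOBAL_LABELS = {}

def set_global_labels(**labels):
    """Update global labels for metrics."""
    global GLOBAL_LABELS
    # merge the new labels into the shared mapping, as in the snippet above
    GLOBAL_LABELS.update(labels)

set_global_labels(service="rackd", host="maas-1")
```

Because `update` merges rather than replaces, a later call can override a single label while leaving the rest intact.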
giantbranch/python-hacker-code | addbc8c73e7e6fb9e4fcadcec022fa1d3da4b96d | 我手敲的代码(中文注释)/chapter11/volatility-2.3/build/lib/volatility/plugins/gui/messagehooks.py | python | MessageHooks.render_text | (self, outfd, data) | Render output in table form | Render output in table form | [
"Render",
"output",
"in",
"table",
"form"
] | def render_text(self, outfd, data):
"""Render output in table form"""
self.table_header(outfd,
[("Offset(V)", "[addrpad]"),
("Sess", "<6"),
("Desktop", "20"),
("Thread", "30"),
("Filter", "20"),
("Flags", "20"),
("Function", "[addrpad]"),
("Module", ""),
])
for winsta, atom_tables in data:
for desk in winsta.desktops():
for name, hook in desk.hooks():
module = self.translate_hmod(winsta, atom_tables, hook.ihmod)
self.table_row(outfd,
hook.obj_offset,
winsta.dwSessionId,
"{0}\\{1}".format(winsta.Name, desk.Name),
"<any>", name,
str(hook.flags),
hook.offPfn,
module,
)
for thrd in desk.threads():
info = "{0} ({1} {2})".format(
thrd.pEThread.Cid.UniqueThread,
thrd.ppi.Process.ImageFileName,
thrd.ppi.Process.UniqueProcessId
)
for name, hook in thrd.hooks():
module = self.translate_hmod(winsta, atom_tables, hook.ihmod)
self.table_row(outfd,
hook.obj_offset,
winsta.dwSessionId,
"{0}\\{1}".format(winsta.Name, desk.Name),
info, name,
str(hook.flags),
hook.offPfn,
module,
) | [
"def",
"render_text",
"(",
"self",
",",
"outfd",
",",
"data",
")",
":",
"self",
".",
"table_header",
"(",
"outfd",
",",
"[",
"(",
"\"Offset(V)\"",
",",
"\"[addrpad]\"",
")",
",",
"(",
"\"Sess\"",
",",
"\"<6\"",
")",
",",
"(",
"\"Desktop\"",
",",
"\"20\... | https://github.com/giantbranch/python-hacker-code/blob/addbc8c73e7e6fb9e4fcadcec022fa1d3da4b96d/我手敲的代码(中文注释)/chapter11/volatility-2.3/build/lib/volatility/plugins/gui/messagehooks.py#L194-L238 | ||
saltstack/salt | fae5bc757ad0f1716483ce7ae180b451545c2058 | salt/states/boto_apigateway.py | python | _Swagger.deployment_label_json | (self) | return _dict_to_json_pretty(self.deployment_label) | this property returns the unique description in pretty printed json for
a particular api deployment | this property returns the unique description in pretty printed json for
a particular api deployment | [
"this",
"property",
"returns",
"the",
"unique",
"description",
"in",
"pretty",
"printed",
"json",
"for",
"a",
"particular",
"api",
"deployment"
] | def deployment_label_json(self):
"""
this property returns the unique description in pretty printed json for
a particular api deployment
"""
return _dict_to_json_pretty(self.deployment_label) | [
"def",
"deployment_label_json",
"(",
"self",
")",
":",
"return",
"_dict_to_json_pretty",
"(",
"self",
".",
"deployment_label",
")"
] | https://github.com/saltstack/salt/blob/fae5bc757ad0f1716483ce7ae180b451545c2058/salt/states/boto_apigateway.py#L1057-L1062 | |
oaubert/python-vlc | 908ffdbd0844dc1849728c456e147788798c99da | generated/dev/vlc.py | python | MediaDiscoverer.release | (self) | return libvlc_media_discoverer_release(self) | Release media discover object. If the reference count reaches 0, then
the object will be released. | Release media discover object. If the reference count reaches 0, then
the object will be released. | [
"Release",
"media",
"discover",
"object",
".",
"If",
"the",
"reference",
"count",
"reaches",
"0",
"then",
"the",
"object",
"will",
"be",
"released",
"."
] | def release(self):
'''Release media discover object. If the reference count reaches 0, then
the object will be released.
'''
return libvlc_media_discoverer_release(self) | [
"def",
"release",
"(",
"self",
")",
":",
"return",
"libvlc_media_discoverer_release",
"(",
"self",
")"
] | https://github.com/oaubert/python-vlc/blob/908ffdbd0844dc1849728c456e147788798c99da/generated/dev/vlc.py#L2845-L2849 | |
ant4g0nist/lisa.py | fb74a309a314d041d4902944a8d449650afc76db | lisa.py | python | CapstoneDisassembleCommand.name | (self) | return "csdis" | [] | def name(self):
return "csdis" | [
"def",
"name",
"(",
"self",
")",
":",
"return",
"\"csdis\""
] | https://github.com/ant4g0nist/lisa.py/blob/fb74a309a314d041d4902944a8d449650afc76db/lisa.py#L3382-L3383 | |||
oilshell/oil | 94388e7d44a9ad879b12615f6203b38596b5a2d3 | Python-2.7.13/Lib/decimal.py | python | _div_nearest | (a, b) | return q + (2*r + (q&1) > b) | Closest integer to a/b, a and b positive integers; rounds to even
in the case of a tie. | Closest integer to a/b, a and b positive integers; rounds to even
in the case of a tie. | [
"Closest",
"integer",
"to",
"a",
"/",
"b",
"a",
"and",
"b",
"positive",
"integers",
";",
"rounds",
"to",
"even",
"in",
"the",
"case",
"of",
"a",
"tie",
"."
] | def _div_nearest(a, b):
"""Closest integer to a/b, a and b positive integers; rounds to even
in the case of a tie.
"""
q, r = divmod(a, b)
return q + (2*r + (q&1) > b) | [
"def",
"_div_nearest",
"(",
"a",
",",
"b",
")",
":",
"q",
",",
"r",
"=",
"divmod",
"(",
"a",
",",
"b",
")",
"return",
"q",
"+",
"(",
"2",
"*",
"r",
"+",
"(",
"q",
"&",
"1",
")",
">",
"b",
")"
] | https://github.com/oilshell/oil/blob/94388e7d44a9ad879b12615f6203b38596b5a2d3/Python-2.7.13/Lib/decimal.py#L5558-L5564 | |
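`_div_nearest` above rounds `a/b` to the nearest integer using only integer arithmetic, breaking exact ties toward the even neighbor (banker's rounding). The logic, copied with a comment on the tie-breaking term:

```python
def div_nearest(a, b):
    # q + 1 when the remainder is past the midpoint (2*r > b), or exactly
    # at the midpoint (2*r == b) while q is odd -- so ties land on even.
    q, r = divmod(a, b)
    return q + (2 * r + (q & 1) > b)
```

So 7/2 = 3.5 rounds up to 4 (even), while 5/2 = 2.5 rounds down to 2 (also even).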
tuckerbalch/QSTK | 4981506c37227a72404229d5e1e0887f797a5d57 | epydoc-3.0.1/epydoc/markup/epytext.py | python | _add_section | (doc, heading_token, stack, indent_stack, errors) | Add a new section to the DOM tree, with the given heading. | Add a new section to the DOM tree, with the given heading. | [
"Add",
"a",
"new",
"section",
"to",
"the",
"DOM",
"tree",
"with",
"the",
"given",
"heading",
"."
] | def _add_section(doc, heading_token, stack, indent_stack, errors):
"""Add a new section to the DOM tree, with the given heading."""
if indent_stack[-1] == None:
indent_stack[-1] = heading_token.indent
elif indent_stack[-1] != heading_token.indent:
estr = "Improper heading indentation."
errors.append(StructuringError(estr, heading_token.startline))
# Check for errors.
for tok in stack[2:]:
if tok.tag != "section":
estr = "Headings must occur at the top level."
errors.append(StructuringError(estr, heading_token.startline))
break
if (heading_token.level+2) > len(stack):
estr = "Wrong underline character for heading."
errors.append(StructuringError(estr, heading_token.startline))
# Pop the appropriate number of headings so we're at the
# correct level.
stack[heading_token.level+2:] = []
indent_stack[heading_token.level+2:] = []
# Colorize the heading
head = _colorize(doc, heading_token, errors, 'heading')
# Add the section's and heading's DOM elements.
sec = Element("section")
stack[-1].children.append(sec)
stack.append(sec)
sec.children.append(head)
indent_stack.append(None) | [
"def",
"_add_section",
"(",
"doc",
",",
"heading_token",
",",
"stack",
",",
"indent_stack",
",",
"errors",
")",
":",
"if",
"indent_stack",
"[",
"-",
"1",
"]",
"==",
"None",
":",
"indent_stack",
"[",
"-",
"1",
"]",
"=",
"heading_token",
".",
"indent",
"... | https://github.com/tuckerbalch/QSTK/blob/4981506c37227a72404229d5e1e0887f797a5d57/epydoc-3.0.1/epydoc/markup/epytext.py#L411-L442 | ||
IJDykeman/wangTiles | 7c1ee2095ebdf7f72bce07d94c6484915d5cae8b | experimental_code/tiles_3d/venv/lib/python2.7/site-packages/pip/wheel.py | python | WheelBuilder.build | (self) | Build wheels. | Build wheels. | [
"Build",
"wheels",
"."
] | def build(self):
"""Build wheels."""
#unpack and constructs req set
self.requirement_set.prepare_files(self.finder)
reqset = self.requirement_set.requirements.values()
buildset = [req for req in reqset if not req.is_wheel]
if not buildset:
return
#build the wheels
logger.notify(
'Building wheels for collected packages: %s' %
','.join([req.name for req in buildset])
)
logger.indent += 2
build_success, build_failure = [], []
for req in buildset:
if self._build_one(req):
build_success.append(req)
else:
build_failure.append(req)
logger.indent -= 2
#notify success/failure
if build_success:
logger.notify('Successfully built %s' % ' '.join([req.name for req in build_success]))
if build_failure:
logger.notify('Failed to build %s' % ' '.join([req.name for req in build_failure])) | [
"def",
"build",
"(",
"self",
")",
":",
"#unpack and constructs req set",
"self",
".",
"requirement_set",
".",
"prepare_files",
"(",
"self",
".",
"finder",
")",
"reqset",
"=",
"self",
".",
"requirement_set",
".",
"requirements",
".",
"values",
"(",
")",
"builds... | https://github.com/IJDykeman/wangTiles/blob/7c1ee2095ebdf7f72bce07d94c6484915d5cae8b/experimental_code/tiles_3d/venv/lib/python2.7/site-packages/pip/wheel.py#L521-L552 | ||
sicara/tf-explain | 3d37ece2445570b2468d51b5ab4cfaf614b21f82 | tf_explain/callbacks/occlusion_sensitivity.py | python | OcclusionSensitivityCallback.on_epoch_end | (self, epoch, logs=None) | Draw Occlusion Sensitivity outputs at each epoch end to Tensorboard.
Args:
epoch (int): Epoch index
logs (dict): Additional information on epoch | Draw Occlusion Sensitivity outputs at each epoch end to Tensorboard. | [
"Draw",
"Occlusion",
"Sensitivity",
"outputs",
"at",
"each",
"epoch",
"end",
"to",
"Tensorboard",
"."
] | def on_epoch_end(self, epoch, logs=None):
"""
Draw Occlusion Sensitivity outputs at each epoch end to Tensorboard.
Args:
epoch (int): Epoch index
logs (dict): Additional information on epoch
"""
explainer = OcclusionSensitivity()
grid = explainer.explain(
self.validation_data, self.model, self.class_index, self.patch_size
)
# Using the file writer, log the reshaped image.
with self.file_writer.as_default():
tf.summary.image("Occlusion Sensitivity", np.array([grid]), step=epoch) | [
"def",
"on_epoch_end",
"(",
"self",
",",
"epoch",
",",
"logs",
"=",
"None",
")",
":",
"explainer",
"=",
"OcclusionSensitivity",
"(",
")",
"grid",
"=",
"explainer",
".",
"explain",
"(",
"self",
".",
"validation_data",
",",
"self",
".",
"model",
",",
"self... | https://github.com/sicara/tf-explain/blob/3d37ece2445570b2468d51b5ab4cfaf614b21f82/tf_explain/callbacks/occlusion_sensitivity.py#L46-L61 | ||
analyticalmindsltd/smote_variants | dedbc3d00b266954fedac0ae87775e1643bc920a | smote_variants/_smote_variants.py | python | NRSBoundary_SMOTE.__init__ | (self,
proportion=1.0,
n_neighbors=5,
w=0.005,
n_jobs=1,
random_state=None) | Constructor of the sampling object
Args:
proportion (float): proportion of the difference of n_maj and n_min
to sample e.g. 1.0 means that after sampling
the number of minority samples will be equal to
the number of majority samples
n_neighbors (int): number of neighbors in nearest neighbors
component
w (float): used to set neighborhood radius
n_jobs (int): number of parallel jobs
random_state (int/RandomState/None): initializer of random_state,
like in sklearn | Constructor of the sampling object | [
"Constructor",
"of",
"the",
"sampling",
"object"
] | def __init__(self,
proportion=1.0,
n_neighbors=5,
w=0.005,
n_jobs=1,
random_state=None):
"""
Constructor of the sampling object
Args:
proportion (float): proportion of the difference of n_maj and n_min
to sample e.g. 1.0 means that after sampling
the number of minority samples will be equal to
the number of majority samples
n_neighbors (int): number of neighbors in nearest neighbors
component
w (float): used to set neighborhood radius
n_jobs (int): number of parallel jobs
random_state (int/RandomState/None): initializer of random_state,
like in sklearn
"""
super().__init__()
self.check_greater_or_equal(proportion, "proportion", 0)
self.check_greater_or_equal(n_neighbors, "n_neighbors", 1)
self.check_greater_or_equal(w, "w", 0)
self.check_n_jobs(n_jobs, 'n_jobs')
self.proportion = proportion
self.n_neighbors = n_neighbors
self.w = w
self.n_jobs = n_jobs
self.set_random_state(random_state) | [
"def",
"__init__",
"(",
"self",
",",
"proportion",
"=",
"1.0",
",",
"n_neighbors",
"=",
"5",
",",
"w",
"=",
"0.005",
",",
"n_jobs",
"=",
"1",
",",
"random_state",
"=",
"None",
")",
":",
"super",
"(",
")",
".",
"__init__",
"(",
")",
"self",
".",
"... | https://github.com/analyticalmindsltd/smote_variants/blob/dedbc3d00b266954fedac0ae87775e1643bc920a/smote_variants/_smote_variants.py#L5713-L5745 | ||
plotly/plotly.py | cfad7862594b35965c0e000813bd7805e8494a5b | packages/python/plotly/plotly/graph_objs/bar/marker/_line.py | python | Line.widthsrc | (self) | return self["widthsrc"] | Sets the source reference on Chart Studio Cloud for `width`.
The 'widthsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str | Sets the source reference on Chart Studio Cloud for `width`.
The 'widthsrc' property must be specified as a string or
as a plotly.grid_objs.Column object | [
"Sets",
"the",
"source",
"reference",
"on",
"Chart",
"Studio",
"Cloud",
"for",
"width",
".",
"The",
"widthsrc",
"property",
"must",
"be",
"specified",
"as",
"a",
"string",
"or",
"as",
"a",
"plotly",
".",
"grid_objs",
".",
"Column",
"object"
] | def widthsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `width`.
The 'widthsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["widthsrc"] | [
"def",
"widthsrc",
"(",
"self",
")",
":",
"return",
"self",
"[",
"\"widthsrc\"",
"]"
] | https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/bar/marker/_line.py#L363-L374 | |
spyder-ide/spyder | 55da47c032dfcf519600f67f8b30eab467f965e7 | spyder/widgets/tabs.py | python | TabBar.dropEvent | (self, event) | Override Qt method | Override Qt method | [
"Override",
"Qt",
"method"
] | def dropEvent(self, event):
"""Override Qt method"""
mimeData = event.mimeData()
index_from = int(mimeData.data("source-index"))
index_to = self.tabAt(event.pos())
if index_to == -1:
index_to = self.count()
if int(mimeData.data("tabbar-id")) != id(self):
tabwidget_from = to_text_string(mimeData.data("tabwidget-id"))
# We pass self object ID as a QString, because otherwise it would
# depend on the platform: long for 64bit, int for 32bit. Replacing
# by long all the time is not working on some 32bit platforms.
# See spyder-ide/spyder#1094 and spyder-ide/spyder#1098.
self.sig_move_tab[(str, int, int)].emit(tabwidget_from, index_from,
index_to)
event.acceptProposedAction()
elif index_from != index_to:
self.sig_move_tab.emit(index_from, index_to)
event.acceptProposedAction()
QTabBar.dropEvent(self, event) | [
"def",
"dropEvent",
"(",
"self",
",",
"event",
")",
":",
"mimeData",
"=",
"event",
".",
"mimeData",
"(",
")",
"index_from",
"=",
"int",
"(",
"mimeData",
".",
"data",
"(",
"\"source-index\"",
")",
")",
"index_to",
"=",
"self",
".",
"tabAt",
"(",
"event"... | https://github.com/spyder-ide/spyder/blob/55da47c032dfcf519600f67f8b30eab467f965e7/spyder/widgets/tabs.py#L204-L224 | ||
geventhttpclient/geventhttpclient | 6a04eabe4bff70bbece136966f050ed8986c1b12 | src/geventhttpclient/useragent.py | python | UserAgent._verify_status | (self, status_code, url=None) | Hook for subclassing | Hook for subclassing | [
"Hook",
"for",
"subclassing"
] | def _verify_status(self, status_code, url=None):
""" Hook for subclassing
"""
if status_code not in self.valid_response_codes:
raise BadStatusCode(url, code=status_code) | [
"def",
"_verify_status",
"(",
"self",
",",
"status_code",
",",
"url",
"=",
"None",
")",
":",
"if",
"status_code",
"not",
"in",
"self",
".",
"valid_response_codes",
":",
"raise",
"BadStatusCode",
"(",
"url",
",",
"code",
"=",
"status_code",
")"
] | https://github.com/geventhttpclient/geventhttpclient/blob/6a04eabe4bff70bbece136966f050ed8986c1b12/src/geventhttpclient/useragent.py#L311-L315 | ||
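`_verify_status` above is a small validation hook: any response code outside the accepted set raises. A sketch with a hypothetical module-level set of valid codes (in the real library the set lives on the `UserAgent` instance, and the hook exists so subclasses can override it):

```python
class BadStatusCode(Exception):
    pass

VALID_RESPONSE_CODES = {200, 206, 301, 302}

def verify_status(status_code, url=None):
    # mirror the hook: reject any code outside the accepted set
    if status_code not in VALID_RESPONSE_CODES:
        raise BadStatusCode(f"{url!r} returned {status_code}")
```

A valid code passes silently; an invalid one raises with the offending URL in the message.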
explosion/srsly | 8617ecc099d1f34a60117b5287bef5424ea2c837 | srsly/ruamel_yaml/tokens.py | python | StreamStartToken.__init__ | (self, start_mark=None, end_mark=None, encoding=None) | [] | def __init__(self, start_mark=None, end_mark=None, encoding=None):
# type: (Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.encoding = encoding | [
"def",
"__init__",
"(",
"self",
",",
"start_mark",
"=",
"None",
",",
"end_mark",
"=",
"None",
",",
"encoding",
"=",
"None",
")",
":",
"# type: (Any, Any, Any) -> None",
"Token",
".",
"__init__",
"(",
"self",
",",
"start_mark",
",",
"end_mark",
")",
"self",
... | https://github.com/explosion/srsly/blob/8617ecc099d1f34a60117b5287bef5424ea2c837/srsly/ruamel_yaml/tokens.py#L137-L140 | ||||
azavea/raster-vision | fc181a6f31f085affa1ee12f0204bdbc5a6bf85a | rastervision_pipeline/rastervision/pipeline/cli.py | python | run_command | (cfg_json_uri: str, command: str, split_ind: Optional[int],
num_splits: Optional[int], runner: str) | Run a single COMMAND using a serialized PipelineConfig in CFG_JSON_URI. | Run a single COMMAND using a serialized PipelineConfig in CFG_JSON_URI. | [
"Run",
"a",
"single",
"COMMAND",
"using",
"a",
"serialized",
"PipelineConfig",
"in",
"CFG_JSON_URI",
"."
] | def run_command(cfg_json_uri: str, command: str, split_ind: Optional[int],
num_splits: Optional[int], runner: str):
"""Run a single COMMAND using a serialized PipelineConfig in CFG_JSON_URI."""
_run_command(
cfg_json_uri,
command,
split_ind=split_ind,
num_splits=num_splits,
runner=runner) | [
"def",
"run_command",
"(",
"cfg_json_uri",
":",
"str",
",",
"command",
":",
"str",
",",
"split_ind",
":",
"Optional",
"[",
"int",
"]",
",",
"num_splits",
":",
"Optional",
"[",
"int",
"]",
",",
"runner",
":",
"str",
")",
":",
"_run_command",
"(",
"cfg_j... | https://github.com/azavea/raster-vision/blob/fc181a6f31f085affa1ee12f0204bdbc5a6bf85a/rastervision_pipeline/rastervision/pipeline/cli.py#L232-L240 | ||
EiNSTeiN-/decompiler | f7923b54233d5aa2118686d99ba93380063993ed | src/graph.py | python | graph_t.jump_targets | (self) | return | find each point in the function which is the
destination of a jump (conditional or not).
jump destinations are the points that delimit new
blocks. | find each point in the function which is the
destination of a jump (conditional or not). | [
"find",
"each",
"point",
"in",
"the",
"function",
"which",
"is",
"the",
"destination",
"of",
"a",
"jump",
"(",
"conditional",
"or",
"not",
")",
"."
] | def jump_targets(self):
""" find each point in the function which is the
destination of a jump (conditional or not).
jump destinations are the points that delimit new
blocks. """
for item in self.func_items:
if self.arch.has_jump(item):
for dest in self.arch.jump_branches(item):
if type(dest) == value_t and dest.value in self.func_items:
ea = dest.value
yield ea
return | [
"def",
"jump_targets",
"(",
"self",
")",
":",
"for",
"item",
"in",
"self",
".",
"func_items",
":",
"if",
"self",
".",
"arch",
".",
"has_jump",
"(",
"item",
")",
":",
"for",
"dest",
"in",
"self",
".",
"arch",
".",
"jump_branches",
"(",
"item",
")",
... | https://github.com/EiNSTeiN-/decompiler/blob/f7923b54233d5aa2118686d99ba93380063993ed/src/graph.py#L64-L77 | |
IntelLabs/nlp-architect | 60afd0dd1bfd74f01b4ac8f613cb484777b80284 | solutions/InterpreT/application/tasks.py | python | WSCTask.get_dist_per_layer | (
self, option: str, model_selector_val: str, selected_sentences: pd.DataFrame
) | return span_mean_distance_0, span_mean_distance_1 | Computing the mean distance per layer between span tokens for target/pred == 1 and target/pred == 0. | Computing the mean distance per layer between span tokens for target/pred == 1 and target/pred == 0. | [
"Computing",
"the",
"mean",
"distance",
"per",
"layer",
"between",
"span",
"tokens",
"for",
"target",
"/",
"pred",
"==",
"1",
"and",
"target",
"/",
"pred",
"==",
"0",
"."
] | def get_dist_per_layer(
self, option: str, model_selector_val: str, selected_sentences: pd.DataFrame
) -> Tuple[np.array, np.array]:
"""Computing the mean distance per layer between span tokens for target/pred == 1 and target/pred == 0."""
# Getting example rows and sentence rows (minus example rows) in df
curr_model_full_df = self.map_model_to_full_df[
self.map_model_name_to_id[model_selector_val]
]
selected_sentence_rows = curr_model_full_df.loc[
curr_model_full_df["sentence_idx"].isin(selected_sentences)
]
all_ex_ids = selected_sentence_rows["id"].unique()
span_agg_distance_0 = np.zeros(self.num_layers + 1)
span_agg_distance_1 = np.zeros(self.num_layers + 1)
count_0 = 0
count_1 = 0
for ex_id in all_ex_ids:
ex_rows = selected_sentence_rows[selected_sentence_rows["id"] == ex_id]
span1 = ex_rows["span1"].iloc[0]
span2 = ex_rows["span2"].iloc[0]
span1_rows = ex_rows[ex_rows["span"] == span1]
span2_rows = ex_rows[ex_rows["span"] == span2]
span1_coords = [
np.array(
(
span1_rows[f"layer_{layer:02}_tsne_x"].mean(),
span1_rows[f"layer_{layer:02}_tsne_y"].mean(),
)
)
for layer in range(self.num_layers + 1)
]
span2_coords = [
np.array(
(
span2_rows[f"layer_{layer:02}_tsne_x"].mean(),
span1_rows[f"layer_{layer:02}_tsne_y"].mean(),
)
)
for layer in range(self.num_layers + 1)
]
dist_per_layer = np.array(
[
np.linalg.norm(span1_coord - span2_coord)
for span1_coord, span2_coord in zip(span1_coords, span2_coords)
]
)
if ex_rows[option].iloc[0] == 1:
span_agg_distance_1 += dist_per_layer
count_1 += 1
else:
span_agg_distance_0 += dist_per_layer
count_0 += 1
# Averaging by number of examples
span_mean_distance_0 = span_agg_distance_0 / count_0
span_mean_distance_1 = span_agg_distance_1 / count_1
return span_mean_distance_0, span_mean_distance_1 | [
"def",
"get_dist_per_layer",
"(",
"self",
",",
"option",
":",
"str",
",",
"model_selector_val",
":",
"str",
",",
"selected_sentences",
":",
"pd",
".",
"DataFrame",
")",
"->",
"Tuple",
"[",
"np",
".",
"array",
",",
"np",
".",
"array",
"]",
":",
"# Getting... | https://github.com/IntelLabs/nlp-architect/blob/60afd0dd1bfd74f01b4ac8f613cb484777b80284/solutions/InterpreT/application/tasks.py#L357-L417 | |
holzschu/Carnets | 44effb10ddfc6aa5c8b0687582a724ba82c6b547 | Library/lib/python3.7/site-packages/zmq/ssh/tunnel.py | python | try_passwordless_ssh | (server, keyfile, paramiko=None) | return f(server, keyfile) | Attempt to make an ssh connection without a password.
This is mainly used for requiring password input only once
when many tunnels may be connected to the same server.
If paramiko is None, the default for the platform is chosen. | Attempt to make an ssh connection without a password.
This is mainly used for requiring password input only once
when many tunnels may be connected to the same server.
If paramiko is None, the default for the platform is chosen. | [
"Attempt",
"to",
"make",
"an",
"ssh",
"connection",
"without",
"a",
"password",
".",
"This",
"is",
"mainly",
"used",
"for",
"requiring",
"password",
"input",
"only",
"once",
"when",
"many",
"tunnels",
"may",
"be",
"connected",
"to",
"the",
"same",
"server",
... | def try_passwordless_ssh(server, keyfile, paramiko=None):
"""Attempt to make an ssh connection without a password.
This is mainly used for requiring password input only once
when many tunnels may be connected to the same server.
If paramiko is None, the default for the platform is chosen.
"""
if paramiko is None:
paramiko = sys.platform == 'win32'
if not paramiko:
f = _try_passwordless_openssh
else:
f = _try_passwordless_paramiko
return f(server, keyfile) | [
"def",
"try_passwordless_ssh",
"(",
"server",
",",
"keyfile",
",",
"paramiko",
"=",
"None",
")",
":",
"if",
"paramiko",
"is",
"None",
":",
"paramiko",
"=",
"sys",
".",
"platform",
"==",
"'win32'",
"if",
"not",
"paramiko",
":",
"f",
"=",
"_try_passwordless_... | https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/zmq/ssh/tunnel.py#L61-L74 | |
exaile/exaile | a7b58996c5c15b3aa7b9975ac13ee8f784ef4689 | xl/trax/track.py | python | Track.get_type | (self) | return Gio.File.new_for_uri(self.get_loc_for_io()).get_uri_scheme() | Get the URI schema the file uses, e.g. file, http, smb. | Get the URI schema the file uses, e.g. file, http, smb. | [
"Get",
"the",
"URI",
"schema",
"the",
"file",
"uses",
"e",
".",
"g",
".",
"file",
"http",
"smb",
"."
] | def get_type(self):
"""
Get the URI schema the file uses, e.g. file, http, smb.
"""
return Gio.File.new_for_uri(self.get_loc_for_io()).get_uri_scheme() | [
"def",
"get_type",
"(",
"self",
")",
":",
"return",
"Gio",
".",
"File",
".",
"new_for_uri",
"(",
"self",
".",
"get_loc_for_io",
"(",
")",
")",
".",
"get_uri_scheme",
"(",
")"
] | https://github.com/exaile/exaile/blob/a7b58996c5c15b3aa7b9975ac13ee8f784ef4689/xl/trax/track.py#L345-L349 | |
dbrattli/aioreactive | e057264a5905964c68d443b98b3e602279b3b9ed | aioreactive/__init__.py | python | from_async_iterable | (iter: AsyncIterable[TSource]) | return AsyncRx(of_async_iterable(iter)) | Convert an async iterable to an async observable stream.
Example:
>>> xs = rx.from_async_iterable(async_iterable)
Returns:
The source stream whose elements are pulled from the given
(async) iterable sequence. | Convert an async iterable to an async observable stream. | [
"Convert",
"an",
"async",
"iterable",
"to",
"an",
"async",
"observable",
"stream",
"."
] | def from_async_iterable(iter: AsyncIterable[TSource]) -> "AsyncObservable[TSource]":
"""Convert an async iterable to an async observable stream.
Example:
>>> xs = rx.from_async_iterable(async_iterable)
Returns:
The source stream whose elements are pulled from the given
(async) iterable sequence.
"""
from .create import of_async_iterable
return AsyncRx(of_async_iterable(iter)) | [
"def",
"from_async_iterable",
"(",
"iter",
":",
"AsyncIterable",
"[",
"TSource",
"]",
")",
"->",
"\"AsyncObservable[TSource]\"",
":",
"from",
".",
"create",
"import",
"of_async_iterable",
"return",
"AsyncRx",
"(",
"of_async_iterable",
"(",
"iter",
")",
")"
] | https://github.com/dbrattli/aioreactive/blob/e057264a5905964c68d443b98b3e602279b3b9ed/aioreactive/__init__.py#L659-L671 | |
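`from_async_iterable` above wraps an async iterable so its elements can be pulled as an observable stream. The pull side it relies on can be sketched with the standard library alone (no aioreactive here, just async iteration):

```python
import asyncio

async def source():
    # any async iterable works; an async generator is the simplest
    for i in range(3):
        yield i

async def drain(ait):
    # consume the async iterable the way a subscriber pulls elements
    return [item async for item in ait]

items = asyncio.run(drain(source()))
```

The observable version adds subscription and cancellation semantics on top, but the element flow is this same `async for` pull.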
mchristopher/PokemonGo-DesktopMap | ec37575f2776ee7d64456e2a1f6b6b78830b4fe0 | app/pylibs/win32/Cryptodome/PublicKey/ECC.py | python | EccKey.pointQ | (self) | | return self._point | An `EccPoint`, representing the public component | An `EccPoint`, representing the public component | [
"An",
"EccPoint",
"representating",
"the",
"public",
"component"
] | def pointQ(self):
"""An `EccPoint`, representating the public component"""
if self._point is None:
self._point = _curve.G * self._d
return self._point | [
"def",
"pointQ",
"(",
"self",
")",
":",
"if",
"self",
".",
"_point",
"is",
"None",
":",
"self",
".",
"_point",
"=",
"_curve",
".",
"G",
"*",
"self",
".",
"_d",
"return",
"self",
".",
"_point"
] | https://github.com/mchristopher/PokemonGo-DesktopMap/blob/ec37575f2776ee7d64456e2a1f6b6b78830b4fe0/app/pylibs/win32/Cryptodome/PublicKey/ECC.py#L370-L374 | |
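The `pointQ` row above is a compute-on-first-access cache: the public point is derived from the private scalar only when requested, then stored. A toy sketch of the same pattern with integers standing in for the elliptic-curve scalar multiplication (names and arithmetic here are placeholders, not Cryptodome's API):

```python
class LazyKey:
    """Cache an expensive derived value on first access, as EccKey.pointQ does."""

    def __init__(self, d, base):
        self._d = d
        self._base = base
        self._point = None  # not computed yet

    @property
    def pointQ(self):
        if self._point is None:
            # stand-in for the expensive scalar multiplication _curve.G * d
            self._point = self._base * self._d
        return self._point

key = LazyKey(d=3, base=7)
assert key._point is None   # nothing computed at construction
assert key.pointQ == 21     # computed on first access
assert key._point == 21     # cached for later accesses
```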
openstack/cinder | 23494a6d6c51451688191e1847a458f1d3cdcaa5 | cinder/wsgi/common.py | python | Middleware.factory | (cls, global_config, **local_config) | return _factory | Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [filter:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[filter:analytics]
redis_host = 127.0.0.1
paste.filter_factory = cinder.api.analytics:Analytics.factory
which would result in a call to the `Analytics` class as
import cinder.api.analytics
analytics.Analytics(app_from_paste, redis_host='127.0.0.1')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary. | Used for paste app factories in paste.deploy config files. | [
"Used",
"for",
"paste",
"app",
"factories",
"in",
"paste",
".",
"deploy",
"config",
"files",
"."
] | def factory(cls, global_config, **local_config):
"""Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [filter:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[filter:analytics]
redis_host = 127.0.0.1
paste.filter_factory = cinder.api.analytics:Analytics.factory
which would result in a call to the `Analytics` class as
import cinder.api.analytics
analytics.Analytics(app_from_paste, redis_host='127.0.0.1')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary.
"""
def _factory(app):
return cls(app, **local_config)
return _factory | [
"def",
"factory",
"(",
"cls",
",",
"global_config",
",",
"*",
"*",
"local_config",
")",
":",
"def",
"_factory",
"(",
"app",
")",
":",
"return",
"cls",
"(",
"app",
",",
"*",
"*",
"local_config",
")",
"return",
"_factory"
] | https://github.com/openstack/cinder/blob/23494a6d6c51451688191e1847a458f1d3cdcaa5/cinder/wsgi/common.py#L103-L127 | |
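The `factory` classmethod above is the standard paste.deploy filter-factory shape: called once with configuration, it returns a closure that is later called with the WSGI app. A self-contained sketch of that two-step call (this `Middleware` is a minimal stand-in, not cinder's full class):

```python
class Middleware:
    def __init__(self, app, **local_config):
        self.app = app
        self.local_config = local_config

    @classmethod
    def factory(cls, global_config, **local_config):
        # paste.deploy calls factory(config) first, then the result with the app
        def _factory(app):
            return cls(app, **local_config)
        return _factory

make_filter = Middleware.factory({}, redis_host="127.0.0.1")  # step 1: config
mw = make_filter("wsgi_app")                                  # step 2: wrap app
assert mw.app == "wsgi_app"
assert mw.local_config == {"redis_host": "127.0.0.1"}
```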
automl/SMAC3 | d4cb7ed76e0fbdd9edf6ab5360ff75de67ac2195 | smac/optimizer/acquisition.py | python | EI._compute | (self, X: np.ndarray) | return f | Computes the EI value and its derivatives.
Parameters
----------
X: np.ndarray(N, D), The input points where the acquisition function
should be evaluated. The dimensionality of X is (N, D), with N as
the number of points to evaluate at and D is the number of
dimensions of one X.
Returns
-------
np.ndarray(N,1)
Expected Improvement of X | Computes the EI value and its derivatives. | [
"Computes",
"the",
"EI",
"value",
"and",
"its",
"derivatives",
"."
] | def _compute(self, X: np.ndarray) -> np.ndarray:
"""Computes the EI value and its derivatives.
Parameters
----------
X: np.ndarray(N, D), The input points where the acquisition function
should be evaluated. The dimensionality of X is (N, D), with N as
the number of points to evaluate at and D is the number of
dimensions of one X.
Returns
-------
np.ndarray(N,1)
Expected Improvement of X
"""
if len(X.shape) == 1:
X = X[:, np.newaxis]
m, v = self.model.predict_marginalized_over_instances(X)
s = np.sqrt(v)
if self.eta is None:
raise ValueError('No current best specified. Call update('
'eta=<int>) to inform the acquisition function '
'about the current best value.')
def calculate_f():
z = (self.eta - m - self.par) / s
return (self.eta - m - self.par) * norm.cdf(z) + s * norm.pdf(z)
if np.any(s == 0.0):
# if std is zero, we have observed x on all instances
# using a RF, std should be never exactly 0.0
# Avoid zero division by setting all zeros in s to one.
# Consider the corresponding results in f to be zero.
self.logger.warning("Predicted std is 0.0 for at least one sample.")
s_copy = np.copy(s)
s[s_copy == 0.0] = 1.0
f = calculate_f()
f[s_copy == 0.0] = 0.0
else:
f = calculate_f()
if (f < 0).any():
raise ValueError(
"Expected Improvement is smaller than 0 for at least one "
"sample.")
return f | [
"def",
"_compute",
"(",
"self",
",",
"X",
":",
"np",
".",
"ndarray",
")",
"->",
"np",
".",
"ndarray",
":",
"if",
"len",
"(",
"X",
".",
"shape",
")",
"==",
"1",
":",
"X",
"=",
"X",
"[",
":",
",",
"np",
".",
"newaxis",
"]",
"m",
",",
"v",
"... | https://github.com/automl/SMAC3/blob/d4cb7ed76e0fbdd9edf6ab5360ff75de67ac2195/smac/optimizer/acquisition.py#L211-L258 | |
oracle/graalpython | 577e02da9755d916056184ec441c26e00b70145c | graalpython/lib-python/3/idlelib/config.py | python | IdleConf.__init__ | (self, _utest=False) | [] | def __init__(self, _utest=False):
self.config_types = ('main', 'highlight', 'keys', 'extensions')
self.defaultCfg = {}
self.userCfg = {}
self.cfg = {} # TODO use to select userCfg vs defaultCfg
# self.blink_off_time = <first editor text>['insertofftime']
# See https://bugs.python.org/issue4630, msg356516.
if not _utest:
self.CreateConfigHandlers()
self.LoadCfgFiles() | [
"def",
"__init__",
"(",
"self",
",",
"_utest",
"=",
"False",
")",
":",
"self",
".",
"config_types",
"=",
"(",
"'main'",
",",
"'highlight'",
",",
"'keys'",
",",
"'extensions'",
")",
"self",
".",
"defaultCfg",
"=",
"{",
"}",
"self",
".",
"userCfg",
"=",
... | https://github.com/oracle/graalpython/blob/577e02da9755d916056184ec441c26e00b70145c/graalpython/lib-python/3/idlelib/config.py#L156-L166 | ||||
mitmproxy/mitmproxy | 1abb8f69217910c8623bd1339da2502aed98ff0d | mitmproxy/contrib/kaitaistruct/vlq_base128_le.py | python | VlqBase128Le.value | (self) | return self._m_value if hasattr(self, '_m_value') else None | Resulting value as normal integer. | Resulting value as normal integer. | [
"Resulting",
"value",
"as",
"normal",
"integer",
"."
] | def value(self):
"""Resulting value as normal integer."""
if hasattr(self, '_m_value'):
return self._m_value if hasattr(self, '_m_value') else None
self._m_value = (((((((self.groups[0].value + ((self.groups[1].value << 7) if self.len >= 2 else 0)) + ((self.groups[2].value << 14) if self.len >= 3 else 0)) + ((self.groups[3].value << 21) if self.len >= 4 else 0)) + ((self.groups[4].value << 28) if self.len >= 5 else 0)) + ((self.groups[5].value << 35) if self.len >= 6 else 0)) + ((self.groups[6].value << 42) if self.len >= 7 else 0)) + ((self.groups[7].value << 49) if self.len >= 8 else 0))
return self._m_value if hasattr(self, '_m_value') else None | [
"def",
"value",
"(",
"self",
")",
":",
"if",
"hasattr",
"(",
"self",
",",
"'_m_value'",
")",
":",
"return",
"self",
".",
"_m_value",
"if",
"hasattr",
"(",
"self",
",",
"'_m_value'",
")",
"else",
"None",
"self",
".",
"_m_value",
"=",
"(",
"(",
"(",
... | https://github.com/mitmproxy/mitmproxy/blob/1abb8f69217910c8623bd1339da2502aed98ff0d/mitmproxy/contrib/kaitaistruct/vlq_base128_le.py#L86-L92 | |
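The `value` property above reassembles up to eight 7-bit groups, least-significant group first. The same decoding can be written as a loop over raw bytes, where the high bit of each byte marks continuation (a generic LEB128-style sketch, not tied to Kaitai's parsed `groups` objects):

```python
def vlq_base128_le_decode(data: bytes) -> int:
    # Each byte contributes its low 7 bits; groups are little-endian,
    # and a clear high bit marks the final group.
    value = 0
    for i, b in enumerate(data):
        value |= (b & 0x7F) << (7 * i)
        if not (b & 0x80):
            break
    return value

assert vlq_base128_le_decode(b"\x7f") == 127
assert vlq_base128_le_decode(b"\x80\x01") == 128
assert vlq_base128_le_decode(b"\xe5\x8e\x26") == 624485  # classic LEB128 example
```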
deepmind/dm_control | 806a10e896e7c887635328bfa8352604ad0fedae | dm_control/locomotion/soccer/task.py | python | Task.after_compile | (self, physics, random_state) | [] | def after_compile(self, physics, random_state):
super().after_compile(physics, random_state)
for camera in self._tracking_cameras:
camera.after_compile(physics) | [
"def",
"after_compile",
"(",
"self",
",",
"physics",
",",
"random_state",
")",
":",
"super",
"(",
")",
".",
"after_compile",
"(",
"physics",
",",
"random_state",
")",
"for",
"camera",
"in",
"self",
".",
"_tracking_cameras",
":",
"camera",
".",
"after_compile... | https://github.com/deepmind/dm_control/blob/806a10e896e7c887635328bfa8352604ad0fedae/dm_control/locomotion/soccer/task.py#L135-L138 | ||||
rst2pdf/rst2pdf | dac0653f8eb894aa5b83cf0877ca3420cdfaf4b2 | rst2pdf/styles.py | python | StyleSheet.findStyle | (self, fn) | return result | Find the absolute file name for a given style filename.
Given a style filename, searches for it in StyleSearchPath
and returns the real file name. | Find the absolute file name for a given style filename. | [
"Find",
"the",
"absolute",
"file",
"name",
"for",
"a",
"given",
"style",
"filename",
"."
] | def findStyle(self, fn):
"""Find the absolute file name for a given style filename.
Given a style filename, searches for it in StyleSearchPath
and returns the real file name.
"""
def innerFind(path, fn):
if os.path.isabs(fn):
if os.path.isfile(fn):
return fn
else:
for D in path:
tfn = os.path.join(D, fn)
if os.path.isfile(tfn):
return tfn
return None
for ext in ['', '.yaml', '.yml', '.style', '.json']:
result = innerFind(self.StyleSearchPath, fn + ext)
if result:
break
if result is None:
log.warning("Can't find stylesheet %s" % fn)
return result | [
"def",
"findStyle",
"(",
"self",
",",
"fn",
")",
":",
"def",
"innerFind",
"(",
"path",
",",
"fn",
")",
":",
"if",
"os",
".",
"path",
".",
"isabs",
"(",
"fn",
")",
":",
"if",
"os",
".",
"path",
".",
"isfile",
"(",
"fn",
")",
":",
"return",
"fn... | https://github.com/rst2pdf/rst2pdf/blob/dac0653f8eb894aa5b83cf0877ca3420cdfaf4b2/rst2pdf/styles.py#L616-L641 | |
joblib/joblib | 7742f5882273889f7aaf1d483a8a1c72a97d57e3 | joblib/_store_backends.py | python | StoreBackendMixin.clear_path | (self, path) | Clear all items with a common path in the store. | Clear all items with a common path in the store. | [
"Clear",
"all",
"items",
"with",
"a",
"common",
"path",
"in",
"the",
"store",
"."
] | def clear_path(self, path):
"""Clear all items with a common path in the store."""
func_path = os.path.join(self.location, *path)
if self._item_exists(func_path):
self.clear_location(func_path) | [
"def",
"clear_path",
"(",
"self",
",",
"path",
")",
":",
"func_path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"location",
",",
"*",
"path",
")",
"if",
"self",
".",
"_item_exists",
"(",
"func_path",
")",
":",
"self",
".",
"clear_locatio... | https://github.com/joblib/joblib/blob/7742f5882273889f7aaf1d483a8a1c72a97d57e3/joblib/_store_backends.py#L244-L248 | ||
mikew/ss-plex.bundle | 031566c06205e08a8cb15c57a0c143fba5270493 | Contents/Libraries/Shared/ss/mechanize/_sgmllib_copy.py | python | SGMLParser.reset | (self) | Reset this instance. Loses all unprocessed data. | Reset this instance. Loses all unprocessed data. | [
"Reset",
"this",
"instance",
".",
"Loses",
"all",
"unprocessed",
"data",
"."
] | def reset(self):
"""Reset this instance. Loses all unprocessed data."""
self.__starttag_text = None
self.rawdata = ''
self.stack = []
self.lasttag = '???'
self.nomoretags = 0
self.literal = 0
markupbase.ParserBase.reset(self) | [
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"__starttag_text",
"=",
"None",
"self",
".",
"rawdata",
"=",
"''",
"self",
".",
"stack",
"=",
"[",
"]",
"self",
".",
"lasttag",
"=",
"'???'",
"self",
".",
"nomoretags",
"=",
"0",
"self",
".",
"lit... | https://github.com/mikew/ss-plex.bundle/blob/031566c06205e08a8cb15c57a0c143fba5270493/Contents/Libraries/Shared/ss/mechanize/_sgmllib_copy.py#L77-L85 | ||
GoogleCloudPlatform/ml-on-gcp | ffd88931674e08ef6b0b20de27700ed1da61772c | example_zoo/tensorflow/models/keras_imagenet_main/official/utils/logs/hooks_helper.py | python | get_examples_per_second_hook | (every_n_steps=100,
batch_size=128,
warm_steps=5,
**kwargs) | return hooks.ExamplesPerSecondHook(
batch_size=batch_size, every_n_steps=every_n_steps,
warm_steps=warm_steps, metric_logger=logger.get_benchmark_logger()) | Function to get ExamplesPerSecondHook.
Args:
every_n_steps: `int`, print current and average examples per second every
N steps.
batch_size: `int`, total batch size used to calculate examples/second from
global time.
warm_steps: skip this number of steps before logging and running average.
**kwargs: a dictionary of arguments to ExamplesPerSecondHook.
Returns:
Returns a ProfilerHook that writes out timelines that can be loaded into
profiling tools like chrome://tracing. | Function to get ExamplesPerSecondHook. | [
"Function",
"to",
"get",
"ExamplesPerSecondHook",
"."
] | def get_examples_per_second_hook(every_n_steps=100,
batch_size=128,
warm_steps=5,
**kwargs): # pylint: disable=unused-argument
"""Function to get ExamplesPerSecondHook.
Args:
every_n_steps: `int`, print current and average examples per second every
N steps.
batch_size: `int`, total batch size used to calculate examples/second from
global time.
warm_steps: skip this number of steps before logging and running average.
**kwargs: a dictionary of arguments to ExamplesPerSecondHook.
Returns:
Returns a ProfilerHook that writes out timelines that can be loaded into
profiling tools like chrome://tracing.
"""
return hooks.ExamplesPerSecondHook(
batch_size=batch_size, every_n_steps=every_n_steps,
warm_steps=warm_steps, metric_logger=logger.get_benchmark_logger()) | [
"def",
"get_examples_per_second_hook",
"(",
"every_n_steps",
"=",
"100",
",",
"batch_size",
"=",
"128",
",",
"warm_steps",
"=",
"5",
",",
"*",
"*",
"kwargs",
")",
":",
"# pylint: disable=unused-argument",
"return",
"hooks",
".",
"ExamplesPerSecondHook",
"(",
"batc... | https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/ffd88931674e08ef6b0b20de27700ed1da61772c/example_zoo/tensorflow/models/keras_imagenet_main/official/utils/logs/hooks_helper.py#L112-L132 | |
sabri-zaki/EasY_HaCk | 2a39ac384dd0d6fc51c0dd22e8d38cece683fdb9 | .modules/.metagoofil/pdfminer/utils.py | python | get_bound | (pts) | return (x0,y0,x1,y1) | Compute a minimal rectangle that covers all the points. | Compute a minimal rectangle that covers all the points. | [
"Compute",
"a",
"minimal",
"rectangle",
"that",
"covers",
"all",
"the",
"points",
"."
] | def get_bound(pts):
"""Compute a minimal rectangle that covers all the points."""
(x0, y0, x1, y1) = (INF, INF, -INF, -INF)
for (x,y) in pts:
x0 = min(x0, x)
y0 = min(y0, y)
x1 = max(x1, x)
y1 = max(y1, y)
return (x0,y0,x1,y1) | [
"def",
"get_bound",
"(",
"pts",
")",
":",
"(",
"x0",
",",
"y0",
",",
"x1",
",",
"y1",
")",
"=",
"(",
"INF",
",",
"INF",
",",
"-",
"INF",
",",
"-",
"INF",
")",
"for",
"(",
"x",
",",
"y",
")",
"in",
"pts",
":",
"x0",
"=",
"min",
"(",
"x0"... | https://github.com/sabri-zaki/EasY_HaCk/blob/2a39ac384dd0d6fc51c0dd22e8d38cece683fdb9/.modules/.metagoofil/pdfminer/utils.py#L70-L78 | |
CiscoDevNet/webexteamssdk | 673312779b8e05cf0535bea8b96599015cccbff1 | webexteamssdk/models/mixins/person.py | python | PersonBasicPropertiesMixin.invitePending | (self) | return self._json_data.get("invitePending") | Whether or not an invite is pending for the user to complete account
activation.
Person Invite Pending Enum:
`true`: The person has been invited to Webex Teams but has not
created an account
`false`: An invite is not pending for this person | Whether or not an invite is pending for the user to complete account
activation. | [
"Whether",
"or",
"not",
"an",
"invite",
"is",
"pending",
"for",
"the",
"user",
"to",
"complete",
"account",
"activation",
"."
] | def invitePending(self):
"""Whether or not an invite is pending for the user to complete account
activation.
Person Invite Pending Enum:
`true`: The person has been invited to Webex Teams but has not
created an account
`false`: An invite is not pending for this person
"""
return self._json_data.get("invitePending") | [
"def",
"invitePending",
"(",
"self",
")",
":",
"return",
"self",
".",
"_json_data",
".",
"get",
"(",
"\"invitePending\"",
")"
] | https://github.com/CiscoDevNet/webexteamssdk/blob/673312779b8e05cf0535bea8b96599015cccbff1/webexteamssdk/models/mixins/person.py#L166-L176 | |
FederatedAI/FATE | 32540492623568ecd1afcb367360133616e02fa3 | python/fate_client/pipeline/runtime/entity.py | python | JobParameters.get_config | (self, *args, **kwargs) | return conf | need to implement | need to implement | [
"need",
"to",
"implement"
] | def get_config(self, *args, **kwargs):
"""need to implement"""
roles = kwargs["roles"]
common_param_conf = self.get_common_param_conf()
role_param_conf = self.get_role_param_conf(roles)
conf = {}
if common_param_conf:
conf['common'] = common_param_conf
if role_param_conf:
conf["role"] = role_param_conf
return conf | [
"def",
"get_config",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"roles",
"=",
"kwargs",
"[",
"\"roles\"",
"]",
"common_param_conf",
"=",
"self",
".",
"get_common_param_conf",
"(",
")",
"role_param_conf",
"=",
"self",
".",
"get_role_p... | https://github.com/FederatedAI/FATE/blob/32540492623568ecd1afcb367360133616e02fa3/python/fate_client/pipeline/runtime/entity.py#L122-L137 | |
leo-editor/leo-editor | 383d6776d135ef17d73d935a2f0ecb3ac0e99494 | leo/plugins/importers/javascript.py | python | JS_Importer.__init__ | (self, importCommands, force_at_others=False, **kwargs) | The ctor for the JS_ImportController class. | The ctor for the JS_ImportController class. | [
"The",
"ctor",
"for",
"the",
"JS_ImportController",
"class",
"."
] | def __init__(self, importCommands, force_at_others=False, **kwargs):
"""The ctor for the JS_ImportController class."""
# Init the base class.
super().__init__(
importCommands,
gen_refs=False, # Fix #639.
language='javascript',
state_class=JS_ScanState,
) | [
"def",
"__init__",
"(",
"self",
",",
"importCommands",
",",
"force_at_others",
"=",
"False",
",",
"*",
"*",
"kwargs",
")",
":",
"# Init the base class.",
"super",
"(",
")",
".",
"__init__",
"(",
"importCommands",
",",
"gen_refs",
"=",
"False",
",",
"# Fix #6... | https://github.com/leo-editor/leo-editor/blob/383d6776d135ef17d73d935a2f0ecb3ac0e99494/leo/plugins/importers/javascript.py#L15-L23 | ||
makehumancommunity/makehuman | 8006cf2cc851624619485658bb933a4244bbfd7c | makehuman/lib/qtgui.py | python | TextEdit.tabPressed | (self) | return False | Override and return True to override custom behaviour when TAB key
is pressed | Override and return True to override custom behaviour when TAB key
is pressed | [
"Override",
"and",
"return",
"True",
"to",
"override",
"custom",
"behaviour",
"when",
"TAB",
"key",
"is",
"pressed"
] | def tabPressed(self):
"""
Override and return True to override custom behaviour when TAB key
is pressed
"""
return False | [
"def",
"tabPressed",
"(",
"self",
")",
":",
"return",
"False"
] | https://github.com/makehumancommunity/makehuman/blob/8006cf2cc851624619485658bb933a4244bbfd7c/makehuman/lib/qtgui.py#L911-L916 | |
Hnfull/Intensio-Obfuscator | f66a22b50c19793edac673cfd7dc319405205c39 | src/intensio_obfuscator/obfuscation_examples/python/basic/input/basicRAT-example/basicRAT_server.py | python | EmptyFunc | () | comment | comment | [
"comment"
] | def EmptyFunc():
""" comment """ | [
"def",
"EmptyFunc",
"(",
")",
":"
] | https://github.com/Hnfull/Intensio-Obfuscator/blob/f66a22b50c19793edac673cfd7dc319405205c39/src/intensio_obfuscator/obfuscation_examples/python/basic/input/basicRAT-example/basicRAT_server.py#L84-L85 | ||
kakao/buffalo | 59d0d7caabf9c4bd9005e9c045bcb12366d37269 | buffalo/parallel/base.py | python | ParALS.topk_recommendation | (self, keys, topk=10, pool=None, repr=False) | return keys, topks, scores | See the documentation of Parallel. | See the documentation of Parallel. | [
"See",
"the",
"documentation",
"of",
"Parallel",
"."
] | def topk_recommendation(self, keys, topk=10, pool=None, repr=False):
"""See the documentation of Parallel."""
if self.algo.opt._nrz_P or self.algo.opt._nrz_Q:
raise RuntimeError('Cannot make topk recommendation with normalized factors')
# It is possible to skip making recommendations for keys that do not exist.
indexes = self.algo.get_index_pool(keys, group='user')
keys = [k for k, i in zip(keys, indexes) if i is not None]
indexes = np.array([i for i in indexes if i is not None], dtype=np.int32)
if pool is not None:
pool = self.algo.get_index_pool(pool, group='item')
if len(pool) == 0:
raise RuntimeError('pool is empty')
else:
# It assumes that an empty pool means all items
pool = np.array([], dtype=np.int32)
topks, scores = super()._topk_recommendation(indexes, self.algo.P, self.algo.Q, topk, pool)
if repr:
mo = np.int32(-1)
topks = [[self.algo._idmanager.itemids[t] for t in tt if t != mo]
for tt in topks]
return keys, topks, scores | [
"def",
"topk_recommendation",
"(",
"self",
",",
"keys",
",",
"topk",
"=",
"10",
",",
"pool",
"=",
"None",
",",
"repr",
"=",
"False",
")",
":",
"if",
"self",
".",
"algo",
".",
"opt",
".",
"_nrz_P",
"or",
"self",
".",
"algo",
".",
"opt",
".",
"_nrz... | https://github.com/kakao/buffalo/blob/59d0d7caabf9c4bd9005e9c045bcb12366d37269/buffalo/parallel/base.py#L118-L138 | |
chen0040/keras-english-resume-parser-and-analyzer | 900f4a5afab53e1ae80b90ffecff964d8347c9c4 | demo/dl_based_parser_train.py | python | main | () | [] | def main():
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from keras_en_parser_and_analyzer.library.dl_based_parser import ResumeParser
random_state = 42
np.random.seed(random_state)
current_dir = os.path.dirname(__file__)
current_dir = current_dir if current_dir != '' else '.'
output_dir_path = current_dir + '/models'
training_data_dir_path = current_dir + '/data/training_data'
classifier = ResumeParser()
batch_size = 64
epochs = 20
history = classifier.fit(training_data_dir_path=training_data_dir_path,
model_dir_path=output_dir_path,
batch_size=batch_size, epochs=epochs,
test_size=0.3,
random_state=random_state) | [
"def",
"main",
"(",
")",
":",
"sys",
".",
"path",
".",
"append",
"(",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"__file__",
")",
",",
"'..'",
")",
")",
"from",
"keras_en_parser_and_analyzer",
".",
"library",
".",
... | https://github.com/chen0040/keras-english-resume-parser-and-analyzer/blob/900f4a5afab53e1ae80b90ffecff964d8347c9c4/demo/dl_based_parser_train.py#L6-L26 | ||||
eoyilmaz/stalker | a35c041b79d953d00dc2a09cf8206956ca269bef | stalker/models/review.py | python | Review.approve | (self) | Finalizes the review by approving the task | Finalizes the review by approving the task | [
"Finalizes",
"the",
"review",
"by",
"approving",
"the",
"task"
] | def approve(self):
"""Finalizes the review by approving the task
"""
# set self status to APP
with DBSession.no_autoflush:
app = Status.query.filter_by(code='APP').first()
self.status = app
# call finalize review_set
self.finalize_review_set() | [
"def",
"approve",
"(",
"self",
")",
":",
"# set self status to APP",
"with",
"DBSession",
".",
"no_autoflush",
":",
"app",
"=",
"Status",
".",
"query",
".",
"filter_by",
"(",
"code",
"=",
"'APP'",
")",
".",
"first",
"(",
")",
"self",
".",
"status",
"=",
... | https://github.com/eoyilmaz/stalker/blob/a35c041b79d953d00dc2a09cf8206956ca269bef/stalker/models/review.py#L216-L225 | ||
frePPLe/frepple | 57aa612030b4fcd03cb9c613f83a7dac4f0e8d6d | freppledb/input/models.py | python | OperationResource.natural_key | (self) | return (
self.operation,
self.resource,
self.effective_start or datetime(1971, 1, 1),
) | [] | def natural_key(self):
return (
self.operation,
self.resource,
self.effective_start or datetime(1971, 1, 1),
) | [
"def",
"natural_key",
"(",
"self",
")",
":",
"return",
"(",
"self",
".",
"operation",
",",
"self",
".",
"resource",
",",
"self",
".",
"effective_start",
"or",
"datetime",
"(",
"1971",
",",
"1",
",",
"1",
")",
",",
")"
] | https://github.com/frePPLe/frepple/blob/57aa612030b4fcd03cb9c613f83a7dac4f0e8d6d/freppledb/input/models.py#L1182-L1187 | |||
Source-Python-Dev-Team/Source.Python | d0ffd8ccbd1e9923c9bc44936f20613c1c76b7fb | addons/source-python/packages/site-packages/sphinx/pycode/pgen2/driver.py | python | Driver.parse_stream | (self, stream, debug=False) | return self.parse_stream_raw(stream, debug) | Parse a stream and return the syntax tree. | Parse a stream and return the syntax tree. | [
"Parse",
"a",
"stream",
"and",
"return",
"the",
"syntax",
"tree",
"."
] | def parse_stream(self, stream, debug=False):
"""Parse a stream and return the syntax tree."""
return self.parse_stream_raw(stream, debug) | [
"def",
"parse_stream",
"(",
"self",
",",
"stream",
",",
"debug",
"=",
"False",
")",
":",
"return",
"self",
".",
"parse_stream_raw",
"(",
"stream",
",",
"debug",
")"
] | https://github.com/Source-Python-Dev-Team/Source.Python/blob/d0ffd8ccbd1e9923c9bc44936f20613c1c76b7fb/addons/source-python/packages/site-packages/sphinx/pycode/pgen2/driver.py#L89-L91 | |
kyb3r/modmail | a1aacbfb817f6410d9c8b4fce41bbd0e1e55b6b3 | core/utils.py | python | match_user_id | (text: str) | return -1 | Matches a user ID in the format of "User ID: 12345".
Parameters
----------
text : str
The text of the user ID.
Returns
-------
int
The user ID if found. Otherwise, -1. | Matches a user ID in the format of "User ID: 12345". | [
"Matches",
"a",
"user",
"ID",
"in",
"the",
"format",
"of",
"User",
"ID",
":",
"12345",
"."
] | def match_user_id(text: str) -> int:
"""
Matches a user ID in the format of "User ID: 12345".
Parameters
----------
text : str
The text of the user ID.
Returns
-------
int
The user ID if found. Otherwise, -1.
"""
match = TOPIC_UID_REGEX.search(text)
if match is not None:
return int(match.group(1))
return -1 | [
"def",
"match_user_id",
"(",
"text",
":",
"str",
")",
"->",
"int",
":",
"match",
"=",
"TOPIC_UID_REGEX",
".",
"search",
"(",
"text",
")",
"if",
"match",
"is",
"not",
"None",
":",
"return",
"int",
"(",
"match",
".",
"group",
"(",
"1",
")",
")",
"ret... | https://github.com/kyb3r/modmail/blob/a1aacbfb817f6410d9c8b4fce41bbd0e1e55b6b3/core/utils.py#L245-L262 | |
google/textfsm | 65ce6c13f0b0c798a6505366cf17dd54bf285d90 | textfsm/parser.py | python | TextFSM._GetValue | (self, name) | | Returns the TextFSMValue object matching the requested name. | Returns the TextFSMValue object matching the requested name. | [
"Returns",
"the",
"TextFSMValue",
"object",
"matching",
"the",
"requested",
"name",
"."
] | def _GetValue(self, name):
"""Returns the TextFSMValue object matching the requested name."""
for value in self.values:
if value.name == name:
return value | [
"def",
"_GetValue",
"(",
"self",
",",
"name",
")",
":",
"for",
"value",
"in",
"self",
".",
"values",
":",
"if",
"value",
".",
"name",
"==",
"name",
":",
"return",
"value"
] | https://github.com/google/textfsm/blob/65ce6c13f0b0c798a6505366cf17dd54bf285d90/textfsm/parser.py#L633-L637 | ||
gcollazo/BrowserRefresh-Sublime | daee0eda6480c07f8636ed24e5c555d24e088886 | win/pywinauto/controls/HwndWrapper.py | python | HwndWrapper.PressMouseInput | (self, button = "left", coords = (None, None)) | Press a mouse button using SendInput | Press a mouse button using SendInput | [
"Press",
"a",
"mouse",
"button",
"using",
"SendInput"
] | def PressMouseInput(self, button = "left", coords = (None, None)):
"Press a mouse button using SendInput"
_perform_click_input(self, button, coords, button_up = False) | [
"def",
"PressMouseInput",
"(",
"self",
",",
"button",
"=",
"\"left\"",
",",
"coords",
"=",
"(",
"None",
",",
"None",
")",
")",
":",
"_perform_click_input",
"(",
"self",
",",
"button",
",",
"coords",
",",
"button_up",
"=",
"False",
")"
] | https://github.com/gcollazo/BrowserRefresh-Sublime/blob/daee0eda6480c07f8636ed24e5c555d24e088886/win/pywinauto/controls/HwndWrapper.py#L854-L856 | ||
JenningsL/PointRCNN | 36b5e3226a230dcc89e7bb6cdd8d31cb7cdf8136 | dataset/frustum_dataset.py | python | FrustumDataset.find_match_label | (self, prop_corners, labels_corners) | return largest_idx, largest_iou | Find label with largest IOU. Label boxes can be rotated in xy plane | Find label with largest IOU. Label boxes can be rotated in xy plane | [
"Find",
"label",
"with",
"largest",
"IOU",
".",
"Label",
"boxes",
"can",
"be",
"rotated",
"in",
"xy",
"plane"
] | def find_match_label(self, prop_corners, labels_corners):
'''
Find label with largest IOU. Label boxes can be rotated in xy plane
'''
# labels = MultiPolygon(labels_corners)
labels = map(lambda corners: Polygon(corners), labels_corners)
target = Polygon(prop_corners)
largest_iou = 0
largest_idx = -1
for i, label in enumerate(labels):
area1 = label.area
area2 = target.area
intersection = target.intersection(label).area
iou = intersection / (area1 + area2 - intersection)
# if a proposal covers enough ground truth, take it as positive
#if intersection / area1 >= 0.8:
# iou = 0.66
# print(area1, area2, intersection)
# print(iou)
if iou > largest_iou:
largest_iou = iou
largest_idx = i
return largest_idx, largest_iou | [
"def",
"find_match_label",
"(",
"self",
",",
"prop_corners",
",",
"labels_corners",
")",
":",
"# labels = MultiPolygon(labels_corners)",
"labels",
"=",
"map",
"(",
"lambda",
"corners",
":",
"Polygon",
"(",
"corners",
")",
",",
"labels_corners",
")",
"target",
"=",... | https://github.com/JenningsL/PointRCNN/blob/36b5e3226a230dcc89e7bb6cdd8d31cb7cdf8136/dataset/frustum_dataset.py#L536-L558 | |
hakril/PythonForWindows | 61e027a678d5b87aa64fcf8a37a6661a86236589 | windows/winproxy/apis/kernel32.py | python | GetFirmwareEnvironmentVariableExW | (lpName, lpGuid, pBuffer, nSize, pdwAttribubutes) | return GetFirmwareEnvironmentVariableExW.ctypes_function(lpName, lpGuid, pBuffer, nSize, pdwAttribubutes) | [] | def GetFirmwareEnvironmentVariableExW(lpName, lpGuid, pBuffer, nSize, pdwAttribubutes):
return GetFirmwareEnvironmentVariableExW.ctypes_function(lpName, lpGuid, pBuffer, nSize, pdwAttribubutes) | [
"def",
"GetFirmwareEnvironmentVariableExW",
"(",
"lpName",
",",
"lpGuid",
",",
"pBuffer",
",",
"nSize",
",",
"pdwAttribubutes",
")",
":",
"return",
"GetFirmwareEnvironmentVariableExW",
".",
"ctypes_function",
"(",
"lpName",
",",
"lpGuid",
",",
"pBuffer",
",",
"nSize... | https://github.com/hakril/PythonForWindows/blob/61e027a678d5b87aa64fcf8a37a6661a86236589/windows/winproxy/apis/kernel32.py#L822-L823 | |||
mit-han-lab/data-efficient-gans | 6858275f08f43a33026844c8c2ac4e703e8a07ba | DiffAugment-stylegan2-pytorch/metrics/metric_main.py | python | kid50k | (opts) | return dict(kid50k=kid) | [] | def kid50k(opts):
opts.dataset_kwargs.update(max_size=None)
kid = kernel_inception_distance.compute_kid(opts, max_real=50000, num_gen=50000, num_subsets=100, max_subset_size=1000)
return dict(kid50k=kid) | [
"def",
"kid50k",
"(",
"opts",
")",
":",
"opts",
".",
"dataset_kwargs",
".",
"update",
"(",
"max_size",
"=",
"None",
")",
"kid",
"=",
"kernel_inception_distance",
".",
"compute_kid",
"(",
"opts",
",",
"max_real",
"=",
"50000",
",",
"num_gen",
"=",
"50000",
... | https://github.com/mit-han-lab/data-efficient-gans/blob/6858275f08f43a33026844c8c2ac4e703e8a07ba/DiffAugment-stylegan2-pytorch/metrics/metric_main.py#L121-L124 | |||
aneisch/home-assistant-config | 86e381fde9609cb8871c439c433c12989e4e225d | custom_components/aarlo/pyaarlo/util.py | python | http_get_img | (url, ignore_date=False) | return ret.content, date | Download HTTP image data. | Download HTTP image data. | [
"Download",
"HTTP",
"image",
"data",
"."
] | def http_get_img(url, ignore_date=False):
"""Download HTTP image data."""
ret = _http_get(url)
if ret is None:
return None, datetime.now().astimezone()
date = None
if not ignore_date:
date = ret.headers.get("Last-Modified", None)
if date is not None:
date = httptime_to_datetime(date)
if date is None:
date = datetime.now().astimezone()
return ret.content, date | [
"def",
"http_get_img",
"(",
"url",
",",
"ignore_date",
"=",
"False",
")",
":",
"ret",
"=",
"_http_get",
"(",
"url",
")",
"if",
"ret",
"is",
"None",
":",
"return",
"None",
",",
"datetime",
".",
"now",
"(",
")",
".",
"astimezone",
"(",
")",
"date",
"... | https://github.com/aneisch/home-assistant-config/blob/86e381fde9609cb8871c439c433c12989e4e225d/custom_components/aarlo/pyaarlo/util.py#L94-L109 | |
bnpy/bnpy | d5b311e8f58ccd98477f4a0c8a4d4982e3fca424 | bnpy/obsmodel/MultObsModel.py | python | MultObsModel.calcELBO_Memoized | (self, SS,
returnVec=0, afterGlobalStep=False, **kwargs) | return np.sum(elbo) | Calculate obsModel's objective using suff stats SS and Post.
Args
-------
SS : bnpy SuffStatBag
afterMStep : boolean flag
if 1, elbo calculated assuming M-step just completed
Returns
-------
obsELBO : scalar float
Equal to E[ log p(x) + log p(phi) - log q(phi)] | Calculate obsModel's objective using suff stats SS and Post. | [
"Calculate",
"obsModel",
"s",
"objective",
"using",
"suff",
"stats",
"SS",
"and",
"Post",
"."
] | def calcELBO_Memoized(self, SS,
returnVec=0, afterGlobalStep=False, **kwargs):
""" Calculate obsModel's objective using suff stats SS and Post.
Args
-------
SS : bnpy SuffStatBag
afterMStep : boolean flag
if 1, elbo calculated assuming M-step just completed
Returns
-------
obsELBO : scalar float
Equal to E[ log p(x) + log p(phi) - log q(phi)]
"""
elbo = np.zeros(SS.K)
Post = self.Post
Prior = self.Prior
if not afterGlobalStep:
Elogphi = self.GetCached('E_logphi', 'all') # K x V
for k in range(SS.K):
elbo[k] = self.prior_cFunc - self.GetCached('cFunc', k)
#elbo[k] = c_Diff(Prior.lam, Post.lam[k])
if not afterGlobalStep:
elbo[k] += np.inner(SS.WordCounts[k] + Prior.lam - Post.lam[k],
Elogphi[k])
if returnVec:
return elbo
return np.sum(elbo) | [
"def",
"calcELBO_Memoized",
"(",
"self",
",",
"SS",
",",
"returnVec",
"=",
"0",
",",
"afterGlobalStep",
"=",
"False",
",",
"*",
"*",
"kwargs",
")",
":",
"elbo",
"=",
"np",
".",
"zeros",
"(",
"SS",
".",
"K",
")",
"Post",
"=",
"self",
".",
"Post",
... | https://github.com/bnpy/bnpy/blob/d5b311e8f58ccd98477f4a0c8a4d4982e3fca424/bnpy/obsmodel/MultObsModel.py#L368-L396 | |
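The per-component loop in `calcELBO_Memoized` can be sketched with the cached cumulant values passed in explicitly. All inputs below are hypothetical stand-ins for `SS`, `Prior`, `Post`, and the `GetCached` results:

```python
import numpy as np

def elbo_per_component(word_counts, prior_lam, post_lam, e_logphi,
                       c_prior, c_post, return_vec=False):
    # same shape of computation as calcELBO_Memoized, with the cumulant
    # terms (prior_cFunc, cFunc per component) supplied as plain values
    K = post_lam.shape[0]
    elbo = np.zeros(K)
    for k in range(K):
        elbo[k] = c_prior - c_post[k]
        elbo[k] += np.inner(word_counts[k] + prior_lam - post_lam[k],
                            e_logphi[k])
    return elbo if return_vec else np.sum(elbo)

wc = np.array([[1.0, 2.0], [3.0, 4.0]])
lam0 = np.array([0.1, 0.1])
lam = np.array([[1.0, 1.0], [2.0, 2.0]])
elp = np.array([[-1.0, -2.0], [-0.5, -0.5]])
vec = elbo_per_component(wc, lam0, lam, elp, 1.0, np.array([0.5, 0.2]),
                         return_vec=True)
print(vec)  # approximately [-1.8, -0.8]
```

The `return_vec` flag plays the role of `returnVec` in the original: per-component terms versus their sum.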
entropy1337/infernal-twin | 10995cd03312e39a48ade0f114ebb0ae3a711bb8 | Modules/build/reportlab/src/reportlab/pdfgen/textobject.py | python | _PDFColorSetter.setStrokeGray | (self, gray, alpha=None) | Sets the gray level; 0.0=black, 1.0=white | Sets the gray level; 0.0=black, 1.0=white | [
"Sets",
"the",
"gray",
"level",
";",
"0",
".",
"0",
"=",
"black",
"1",
".",
"0",
"=",
"white"
] | def setStrokeGray(self, gray, alpha=None):
"""Sets the gray level; 0.0=black, 1.0=white"""
self._strokeColorObj = (gray, gray, gray)
self._code.append('%s G' % fp_str(gray))
if alpha is not None:
self.setFillAlpha(alpha) | [
"def",
"setStrokeGray",
"(",
"self",
",",
"gray",
",",
"alpha",
"=",
"None",
")",
":",
"self",
".",
"_strokeColorObj",
"=",
"(",
"gray",
",",
"gray",
",",
"gray",
")",
"self",
".",
"_code",
".",
"append",
"(",
"'%s G'",
"%",
"fp_str",
"(",
"gray",
... | https://github.com/entropy1337/infernal-twin/blob/10995cd03312e39a48ade0f114ebb0ae3a711bb8/Modules/build/reportlab/src/reportlab/pdfgen/textobject.py#L144-L149 | ||
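The string appended by `setStrokeGray` is plain PDF content-stream syntax: the `G` operator sets the stroking gray level. A minimal sketch, with a simple stand-in for reportlab's `fp_str` formatter:

```python
def fp_str(*args):
    # stand-in for reportlab's fp_str: compact float formatting
    return " ".join("%g" % a for a in args)

def stroke_gray_op(gray):
    # build the 'G' (set stroke gray) operator that setStrokeGray
    # appends to the PDF code stream
    return "%s G" % fp_str(gray)

print(stroke_gray_op(0.5))  # 0.5 G
```

The companion fill operator uses lowercase `g`; the method above only emits the stroke variant.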
tomcatmanager/tomcatmanager | 41fa645d3cfef8ee83d98f401e653f174d59bfd4 | src/tomcatmanager/interactive_tomcat_manager.py | python | InteractiveTomcatManager.do_help | (self, args: str) | Show available commands, or help on a specific command. | Show available commands, or help on a specific command. | [
"Show",
"available",
"commands",
"or",
"help",
"on",
"a",
"specific",
"command",
"."
] | def do_help(self, args: str):
"""Show available commands, or help on a specific command."""
# pylint: disable=too-many-statements
if args:
# they want help on a specific command, use cmd2 for that
super().do_help(args)
else:
# show a custom list of commands, organized by category
help_ = []
help_.append(
"tomcat-manager is a command line tool for managing a Tomcat server"
)
help_ = self._help_add_header(help_, "Connecting to a Tomcat server")
help_.append(f"connect {self.do_connect.__doc__}")
help_.append(f"which {self.do_which.__doc__}")
help_ = self._help_add_header(help_, "Managing applications")
help_.append(f"list {self.do_list.__doc__}")
help_.append(f"deploy {self.do_deploy.__doc__}")
help_.append(f"redeploy {self.do_redeploy.__doc__}")
help_.append(f"undeploy {self.do_undeploy.__doc__}")
help_.append(f"start {self.do_start.__doc__}")
help_.append(f"stop {self.do_stop.__doc__}")
help_.append(f"restart {self.do_restart.__doc__}")
help_.append(" reload Synonym for 'restart'.")
help_.append(f"sessions {self.do_sessions.__doc__}")
help_.append(f"expire {self.do_expire.__doc__}")
help_ = self._help_add_header(help_, "Server information")
help_.append(f"findleakers {self.do_findleakers.__doc__}")
help_.append(f"resources {self.do_resources.__doc__}")
help_.append(f"serverinfo {self.do_serverinfo.__doc__}")
help_.append(f"status {self.do_status.__doc__}")
help_.append(f"threaddump {self.do_threaddump.__doc__}")
help_.append(f"vminfo {self.do_vminfo.__doc__}")
help_ = self._help_add_header(help_, "TLS configuration")
help_.append(
f"sslconnectorciphers {self.do_sslconnectorciphers.__doc__}"
)
help_.append(
f"sslconnectorcerts {self.do_sslconnectorcerts.__doc__}"
)
help_.append(
f"sslconnectortrustedcerts {self.do_sslconnectortrustedcerts.__doc__}"
)
help_.append(f"sslreload {self.do_sslreload.__doc__}")
help_ = self._help_add_header(help_, "Settings, configuration, and tools")
help_.append(f"config {self.do_config.__doc__}")
help_.append("edit Edit a file in the preferred text editor.")
help_.append(f"exit_code {self.do_exit_code.__doc__}")
help_.append(
"history View, run, edit, and save previously entered commands."
)
help_.append("py Run an interactive python shell.")
help_.append("run_pyscript Run a file containing a python script.")
help_.append(f"set {self.do_set.__doc__}")
help_.append(f"show {self.do_show.__doc__}")
help_.append(" settings Synonym for 'show'.")
help_.append(
"shell Execute a command in the operating system shell."
)
help_.append("shortcuts Show shortcuts for other commands.")
help_ = self._help_add_header(help_, "Other")
help_.append("exit Exit this program.")
help_.append(" quit Synonym for 'exit'.")
help_.append(f"help {self.do_help.__doc__}")
help_.append(f"version {self.do_version.__doc__}")
help_.append(f"license {self.do_license.__doc__}")
for line in help_:
self.poutput(line)
self.exit_code = self.EXIT_SUCCESS | [
"def",
"do_help",
"(",
"self",
",",
"args",
":",
"str",
")",
":",
"# pylint: disable=too-many-statements",
"if",
"args",
":",
"# they want help on a specific command, use cmd2 for that",
"super",
"(",
")",
".",
"do_help",
"(",
"args",
")",
"else",
":",
"# show a cus... | https://github.com/tomcatmanager/tomcatmanager/blob/41fa645d3cfef8ee83d98f401e653f174d59bfd4/src/tomcatmanager/interactive_tomcat_manager.py#L426-L501 | ||
tensorflow/federated | 5a60a032360087b8f4c7fcfd97ed1c0131c3eac3 | tensorflow_federated/python/core/templates/aggregation_process.py | python | AggregationProcess.next | (self) | return super().next | A `tff.Computation` that runs one iteration of the process.
Its first argument should always be the current state (originally produced
by the `initialize` attribute), the second argument must be the input placed
at `CLIENTS`, and the return type must be a
`tff.templates.MeasuredProcessOutput` with each field placed at `SERVER`.
Returns:
A `tff.Computation`. | A `tff.Computation` that runs one iteration of the process. | [
"A",
"tff",
".",
"Computation",
"that",
"runs",
"one",
"iteration",
"of",
"the",
"process",
"."
] | def next(self) -> computation_base.Computation:
"""A `tff.Computation` that runs one iteration of the process.
Its first argument should always be the current state (originally produced
by the `initialize` attribute), the second argument must be the input placed
at `CLIENTS`, and the return type must be a
`tff.templates.MeasuredProcessOutput` with each field placed at `SERVER`.
Returns:
A `tff.Computation`.
"""
return super().next | [
"def",
"next",
"(",
"self",
")",
"->",
"computation_base",
".",
"Computation",
":",
"return",
"super",
"(",
")",
".",
"next"
] | https://github.com/tensorflow/federated/blob/5a60a032360087b8f4c7fcfd97ed1c0131c3eac3/tensorflow_federated/python/core/templates/aggregation_process.py#L165-L176 | |
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/kms/v20190118/models.py | python | GetParametersForImportResponse.__init__ | (self) | r"""
:param KeyId: CMK的唯一标识,用于指定目标导入密钥材料的CMK。
:type KeyId: str
:param ImportToken: 导入密钥材料需要的token,用于作为 ImportKeyMaterial 的参数。
:type ImportToken: str
:param PublicKey: 用于加密密钥材料的RSA公钥,base64编码。使用PublicKey base64解码后的公钥将导入密钥进行加密后作为 ImportKeyMaterial 的参数。
:type PublicKey: str
:param ParametersValidTo: 该导出token和公钥的有效期,超过该时间后无法导入,需要重新调用GetParametersForImport获取。
:type ParametersValidTo: int
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str | r"""
:param KeyId: CMK的唯一标识,用于指定目标导入密钥材料的CMK。
:type KeyId: str
:param ImportToken: 导入密钥材料需要的token,用于作为 ImportKeyMaterial 的参数。
:type ImportToken: str
:param PublicKey: 用于加密密钥材料的RSA公钥,base64编码。使用PublicKey base64解码后的公钥将导入密钥进行加密后作为 ImportKeyMaterial 的参数。
:type PublicKey: str
:param ParametersValidTo: 该导出token和公钥的有效期,超过该时间后无法导入,需要重新调用GetParametersForImport获取。
:type ParametersValidTo: int
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str | [
"r",
":",
"param",
"KeyId",
":",
"CMK的唯一标识,用于指定目标导入密钥材料的CMK。",
":",
"type",
"KeyId",
":",
"str",
":",
"param",
"ImportToken",
":",
"导入密钥材料需要的token,用于作为",
"ImportKeyMaterial",
"的参数。",
":",
"type",
"ImportToken",
":",
"str",
":",
"param",
"PublicKey",
":",
"用于加密密钥... | def __init__(self):
r"""
:param KeyId: CMK的唯一标识,用于指定目标导入密钥材料的CMK。
:type KeyId: str
:param ImportToken: 导入密钥材料需要的token,用于作为 ImportKeyMaterial 的参数。
:type ImportToken: str
:param PublicKey: 用于加密密钥材料的RSA公钥,base64编码。使用PublicKey base64解码后的公钥将导入密钥进行加密后作为 ImportKeyMaterial 的参数。
:type PublicKey: str
:param ParametersValidTo: 该导出token和公钥的有效期,超过该时间后无法导入,需要重新调用GetParametersForImport获取。
:type ParametersValidTo: int
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str
"""
self.KeyId = None
self.ImportToken = None
self.PublicKey = None
self.ParametersValidTo = None
self.RequestId = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"KeyId",
"=",
"None",
"self",
".",
"ImportToken",
"=",
"None",
"self",
".",
"PublicKey",
"=",
"None",
"self",
".",
"ParametersValidTo",
"=",
"None",
"self",
".",
"RequestId",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/kms/v20190118/models.py#L1750-L1767 | ||
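Each response model in the SDK pairs an `__init__` like this with a `_deserialize` that copies matching keys from the API reply. A simplified sketch of that pattern (the class name and payload below are illustrative, not the SDK's exact implementation):

```python
class ImportParamsResponse:
    def __init__(self):
        self.KeyId = None
        self.ImportToken = None
        self.PublicKey = None
        self.ParametersValidTo = None
        self.RequestId = None

    def _deserialize(self, params):
        # copy only attributes the model declares; ignore extras
        for key, value in params.items():
            if hasattr(self, key):
                setattr(self, key, value)

resp = ImportParamsResponse()
resp._deserialize({"KeyId": "cmk-123", "ParametersValidTo": 1700000000,
                   "Unknown": "ignored"})
print(resp.KeyId)  # cmk-123
```

Declaring every field as `None` up front gives callers a stable attribute surface even when the server omits optional fields.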
bruderstein/PythonScript | df9f7071ddf3a079e3a301b9b53a6dc78cf1208f | PythonLib/min/inspect.py | python | _has_code_flag | (f, flag) | return bool(f.__code__.co_flags & flag) | Return true if ``f`` is a function (or a method or functools.partial
wrapper wrapping a function) whose code object has the given ``flag``
set in its flags. | Return true if ``f`` is a function (or a method or functools.partial
wrapper wrapping a function) whose code object has the given ``flag``
set in its flags. | [
"Return",
"true",
"if",
"f",
"is",
"a",
"function",
"(",
"or",
"a",
"method",
"or",
"functools",
".",
"partial",
"wrapper",
"wrapping",
"a",
"function",
")",
"whose",
"code",
"object",
"has",
"the",
"given",
"flag",
"set",
"in",
"its",
"flags",
"."
] | def _has_code_flag(f, flag):
"""Return true if ``f`` is a function (or a method or functools.partial
wrapper wrapping a function) whose code object has the given ``flag``
set in its flags."""
while ismethod(f):
f = f.__func__
f = functools._unwrap_partial(f)
if not isfunction(f):
return False
return bool(f.__code__.co_flags & flag) | [
"def",
"_has_code_flag",
"(",
"f",
",",
"flag",
")",
":",
"while",
"ismethod",
"(",
"f",
")",
":",
"f",
"=",
"f",
".",
"__func__",
"f",
"=",
"functools",
".",
"_unwrap_partial",
"(",
"f",
")",
"if",
"not",
"isfunction",
"(",
"f",
")",
":",
"return"... | https://github.com/bruderstein/PythonScript/blob/df9f7071ddf3a079e3a301b9b53a6dc78cf1208f/PythonLib/min/inspect.py#L290-L299 | |
bendmorris/static-python | 2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473 | Lib/turtle.py | python | TNavigator.sety | (self, y) | Set the turtle's second coordinate to y
Argument:
y -- a number (integer or float)
Set the turtle's first coordinate to x, second coordinate remains
unchanged.
Example (for a Turtle instance named turtle):
>>> turtle.position()
(0.00, 40.00)
>>> turtle.sety(-10)
>>> turtle.position()
(0.00, -10.00) | Set the turtle's second coordinate to y | [
"Set",
"the",
"turtle",
"s",
"second",
"coordinate",
"to",
"y"
] | def sety(self, y):
"""Set the turtle's second coordinate to y
Argument:
y -- a number (integer or float)
Set the turtle's first coordinate to x, second coordinate remains
unchanged.
Example (for a Turtle instance named turtle):
>>> turtle.position()
(0.00, 40.00)
>>> turtle.sety(-10)
>>> turtle.position()
(0.00, -10.00)
"""
self._goto(Vec2D(self._position[0], y)) | [
"def",
"sety",
"(",
"self",
",",
"y",
")",
":",
"self",
".",
"_goto",
"(",
"Vec2D",
"(",
"self",
".",
"_position",
"[",
"0",
"]",
",",
"y",
")",
")"
] | https://github.com/bendmorris/static-python/blob/2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473/Lib/turtle.py#L1802-L1818 | ||
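The update itself is a pure coordinate swap: `_goto` receives a new `Vec2D` that keeps x and replaces y. A screen-free sketch with a minimal stand-in for turtle's `Vec2D` (the real class also defines vector arithmetic):

```python
class Vec2D(tuple):
    # minimal stand-in for turtle.Vec2D (a 2-tuple subclass)
    def __new__(cls, x, y):
        return tuple.__new__(cls, (x, y))

def sety(position, y):
    # mirrors TNavigator.sety: first coordinate unchanged
    return Vec2D(position[0], y)

pos = Vec2D(0.0, 40.0)
print(sety(pos, -10))  # (0.0, -10)
```

`setx` is the symmetric case, replacing the first coordinate and keeping the second.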
theotherp/nzbhydra | 4b03d7f769384b97dfc60dade4806c0fc987514e | libs/jinja2/parser.py | python | Parser.parse_statements | (self, end_tokens, drop_needle=False) | return result | Parse multiple statements into a list until one of the end tokens
is reached. This is used to parse the body of statements as it also
parses template data if appropriate. The parser checks first if the
current token is a colon and skips it if there is one. Then it checks
for the block end and parses until if one of the `end_tokens` is
reached. Per default the active token in the stream at the end of
the call is the matched end token. If this is not wanted `drop_needle`
can be set to `True` and the end token is removed. | Parse multiple statements into a list until one of the end tokens
is reached. This is used to parse the body of statements as it also
parses template data if appropriate. The parser checks first if the
current token is a colon and skips it if there is one. Then it checks
for the block end and parses until if one of the `end_tokens` is
reached. Per default the active token in the stream at the end of
the call is the matched end token. If this is not wanted `drop_needle`
can be set to `True` and the end token is removed. | [
"Parse",
"multiple",
"statements",
"into",
"a",
"list",
"until",
"one",
"of",
"the",
"end",
"tokens",
"is",
"reached",
".",
"This",
"is",
"used",
"to",
"parse",
"the",
"body",
"of",
"statements",
"as",
"it",
"also",
"parses",
"template",
"data",
"if",
"a... | def parse_statements(self, end_tokens, drop_needle=False):
"""Parse multiple statements into a list until one of the end tokens
is reached. This is used to parse the body of statements as it also
parses template data if appropriate. The parser checks first if the
current token is a colon and skips it if there is one. Then it checks
for the block end and parses until if one of the `end_tokens` is
reached. Per default the active token in the stream at the end of
the call is the matched end token. If this is not wanted `drop_needle`
can be set to `True` and the end token is removed.
"""
# the first token may be a colon for python compatibility
self.stream.skip_if('colon')
# in the future it would be possible to add whole code sections
# by adding some sort of end of statement token and parsing those here.
self.stream.expect('block_end')
result = self.subparse(end_tokens)
# we reached the end of the template too early, the subparser
# does not check for this, so we do that now
if self.stream.current.type == 'eof':
self.fail_eof(end_tokens)
if drop_needle:
next(self.stream)
return result | [
"def",
"parse_statements",
"(",
"self",
",",
"end_tokens",
",",
"drop_needle",
"=",
"False",
")",
":",
"# the first token may be a colon for python compatibility",
"self",
".",
"stream",
".",
"skip_if",
"(",
"'colon'",
")",
"# in the future it would be possible to add whole... | https://github.com/theotherp/nzbhydra/blob/4b03d7f769384b97dfc60dade4806c0fc987514e/libs/jinja2/parser.py#L140-L165 |