Dataset columns: nwo (repository) | sha | path | language | identifier | parameters | argument_list | return_statement | docstring | docstring_summary | docstring_tokens | function | function_tokens | url
omz/PythonistaAppTemplate | f560f93f8876d82a21d108977f90583df08d55af | PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/sqlalchemy/ext/orderinglist.py | python | OrderingList.__init__
https://github.com/omz/PythonistaAppTemplate/blob/f560f93f8876d82a21d108977f90583df08d55af/PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/sqlalchemy/ext/orderinglist.py#L217-L272

```python
def __init__(self, ordering_attr=None, ordering_func=None,
             reorder_on_append=False):
    """A custom list that manages position information for its children.

    ``OrderingList`` is a ``collection_class`` list implementation that
    syncs position in a Python list with a position attribute on the
    mapped objects.

    This implementation relies on the list starting in the proper order,
    so be **sure** to put an ``order_by`` on your relationship.

    :param ordering_attr:
      Name of the attribute that stores the object's order in the
      relationship.

    :param ordering_func: Optional.  A function that maps the position in
      the Python list to a value to store in the ``ordering_attr``.
      Values returned are usually (but need not be!) integers.

      An ``ordering_func`` is called with two positional parameters: the
      index of the element in the list, and the list itself.

      If omitted, Python list indexes are used for the attribute values.
      Two basic pre-built numbering functions are provided in this module:
      ``count_from_0`` and ``count_from_1``.  For more exotic examples
      like stepped numbering, alphabetical and Fibonacci numbering, see
      the unit tests.

    :param reorder_on_append:
      Default False.  When appending an object with an existing (non-None)
      ordering value, that value will be left untouched unless
      ``reorder_on_append`` is true.  This is an optimization to avoid a
      variety of dangerous unexpected database writes.

      SQLAlchemy will add instances to the list via append() when your
      object loads.  If for some reason the result set from the database
      skips a step in the ordering (say, row '1' is missing but you get
      '2', '3', and '4'), reorder_on_append=True would immediately
      renumber the items to '1', '2', '3'.  If you have multiple sessions
      making changes, any of whom happen to load this collection even in
      passing, all of the sessions would try to "clean up" the numbering
      in their commits, possibly causing all but one to fail with a
      concurrent modification error.

      Recommend leaving this with the default of False, and just call
      ``reorder()`` if you're doing ``append()`` operations with
      previously ordered instances or when doing some housekeeping after
      manual sql operations.
    """
    self.ordering_attr = ordering_attr
    if ordering_func is None:
        ordering_func = count_from_0
    self.ordering_func = ordering_func
    self.reorder_on_append = reorder_on_append
```
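The ``count_from_0``/``count_from_1`` helpers named in the docstring are plain position-to-value functions, and ``reorder()`` just re-stamps each child's ordering attribute from its list position. A minimal dependency-free sketch of that mechanism (the helper bodies and the `Bullet` class are illustrative assumptions, not copied from this file):

```python
def count_from_0(index, collection):
    """Numbering function: zero-based list positions (0, 1, 2, ...)."""
    return index

def count_from_1(index, collection):
    """Numbering function: one-based list positions (1, 2, 3, ...)."""
    return index + 1

def reorder(items, ordering_attr, ordering_func):
    """Stamp each object's ordering attribute from its current list position."""
    for index, item in enumerate(items):
        setattr(item, ordering_attr, ordering_func(index, items))

class Bullet:
    """Hypothetical mapped object with a 'position' column."""
    def __init__(self):
        self.position = None

bullets = [Bullet(), Bullet(), Bullet()]
reorder(bullets, 'position', count_from_1)
positions = [b.position for b in bullets]
```

Swapping in a stepped function such as `lambda i, coll: i * 10` would produce positions 0, 10, 20, which is the kind of "exotic" numbering the docstring alludes to.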
PyCQA/astroid | a815443f62faae05249621a396dcf0afd884a619 | astroid/nodes/node_classes.py | python | Dict.pytype
https://github.com/PyCQA/astroid/blob/a815443f62faae05249621a396dcf0afd884a619/astroid/nodes/node_classes.py#L2341-L2347

```python
def pytype(self):
    """Get the name of the type that this node represents.

    :returns: The name of the type.
    :rtype: str
    """
    return "builtins.dict"
```
dfki-ric/phobos | 63ac5f8496df4d6f0b224878936e8cc45d661854 | phobos/operators/editing.py | python | EditYAMLDictionary.poll
https://github.com/dfki-ric/phobos/blob/63ac5f8496df4d6f0b224878936e8cc45d661854/phobos/operators/editing.py#L1224-L1234

```python
def poll(cls, context):
    """
    Args:
        context:

    Returns:
    """
    ob = context.active_object
    return ob is not None and ob.mode == 'OBJECT' and len(context.selected_objects) > 0
```
securesystemslab/zippy | ff0e84ac99442c2c55fe1d285332cfd4e185e089 | zippy/benchmarks/src/benchmarks/whoosh/src/whoosh/query/qcore.py | python | Query.with_boost
https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/benchmarks/src/benchmarks/whoosh/src/whoosh/query/qcore.py#L475-L484

```python
def with_boost(self, boost):
    """Returns a COPY of this query with the boost set to the given value.

    If a query type does not accept a boost itself, it will try to pass the
    boost on to its children, if any.
    """
    q = self.copy()
    q.boost = boost
    return q
```
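The copy-then-mutate pattern above keeps queries immutable from the caller's point of view. A self-contained sketch of the same pattern with a minimal stand-in query class (the `Term` class here is invented for illustration; it is not Whoosh's real `Term`):

```python
import copy

class Term:
    """Minimal stand-in for a query node carrying a boost factor."""
    def __init__(self, text, boost=1.0):
        self.text = text
        self.boost = boost

    def copy(self):
        # Shallow copy is enough for a leaf node with immutable fields.
        return copy.copy(self)

    def with_boost(self, boost):
        """Return a COPY with the boost set; the original is untouched."""
        q = self.copy()
        q.boost = boost
        return q

q1 = Term("hello")
q2 = q1.with_boost(2.5)
```

Because `with_boost` never mutates `q1`, boosted variants of one query can be built freely without aliasing bugs.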
collinsctk/PyQYT | 7af3673955f94ff1b2df2f94220cd2dab2e252af | ExtentionPackages/paramiko/transport.py | python | Transport.request_port_forward
https://github.com/collinsctk/PyQYT/blob/7af3673955f94ff1b2df2f94220cd2dab2e252af/ExtentionPackages/paramiko/transport.py#L837-L880

```python
def request_port_forward(self, address, port, handler=None):
    """
    Ask the server to forward TCP connections from a listening port on
    the server, across this SSH session.

    If a handler is given, that handler is called from a different thread
    whenever a forwarded connection arrives.  The handler parameters are::

        handler(channel, (origin_addr, origin_port), (server_addr, server_port))

    where ``server_addr`` and ``server_port`` are the address and port that
    the server was listening on.

    If no handler is set, the default behavior is to send new incoming
    forwarded connections into the accept queue, to be picked up via
    `accept`.

    :param str address: the address to bind when forwarding
    :param int port:
        the port to forward, or 0 to ask the server to allocate any port
    :param callable handler:
        optional handler for incoming forwarded connections, of the form
        ``func(Channel, (str, int), (str, int))``.

    :return: the port number (`int`) allocated by the server

    :raises SSHException: if the server refused the TCP forward request
    """
    if not self.active:
        raise SSHException('SSH session not active')
    port = int(port)
    response = self.global_request('tcpip-forward', (address, port), wait=True)
    if response is None:
        raise SSHException('TCP forwarding request denied')
    if port == 0:
        # Server chose the port; read it back from the reply.
        port = response.get_int()
    if handler is None:
        def default_handler(channel, src_addr, dest_addr_port):
            self._queue_incoming_channel(channel)
        handler = default_handler
    self._tcp_handler = handler
    return port
```
google/jax | bebe9845a873b3203f8050395255f173ba3bbb71 | jax/_src/numpy/vectorize.py | python | _parse_gufunc_signature
https://github.com/google/jax/blob/bebe9845a873b3203f8050395255f173ba3bbb71/jax/_src/numpy/vectorize.py#L37-L55

```python
def _parse_gufunc_signature(
    signature: str,
) -> Tuple[List[CoreDims], List[CoreDims]]:
  """Parse string signatures for a generalized universal function.

  Args:
    signature: generalized universal function signature, e.g.,
      ``(m,n),(n,p)->(m,p)`` for ``jnp.matmul``.

  Returns:
    Input and output core dimensions parsed from the signature.
  """
  if not re.match(_SIGNATURE, signature):
    raise ValueError(
        'not a valid gufunc signature: {}'.format(signature))
  args, retvals = ([tuple(re.findall(_DIMENSION_NAME, arg))
                    for arg in re.findall(_ARGUMENT, arg_list)]
                   for arg_list in signature.split('->'))
  return args, retvals
```
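The helper patterns `_SIGNATURE`, `_ARGUMENT`, and `_DIMENSION_NAME` are defined elsewhere in the module and are not part of this record. The standalone sketch below reconstructs plausible equivalents modeled on NumPy's own gufunc-signature helpers; treat the exact regex definitions as assumptions, not the file's literal contents:

```python
import re

# Assumed helper patterns, modeled on NumPy's numpy/lib/function_base.py.
_DIMENSION_NAME = r'\w+'
_CORE_DIMENSION_LIST = '(?:{0:}(?:,{0:})*)?'.format(_DIMENSION_NAME)
_ARGUMENT = r'\({}\)'.format(_CORE_DIMENSION_LIST)
_ARGUMENT_LIST = '{0:}(?:,{0:})*'.format(_ARGUMENT)
_SIGNATURE = '^{0:}->{0:}$'.format(_ARGUMENT_LIST)

def parse_gufunc_signature(signature):
    """Split '(m,n),(n,p)->(m,p)' into input and output core-dimension tuples."""
    if not re.match(_SIGNATURE, signature):
        raise ValueError('not a valid gufunc signature: {}'.format(signature))
    # For each side of '->', find every parenthesized argument, then pull
    # out its comma-separated dimension names as a tuple.
    args, retvals = ([tuple(re.findall(_DIMENSION_NAME, arg))
                      for arg in re.findall(_ARGUMENT, arg_list)]
                     for arg_list in signature.split('->'))
    return args, retvals

ins, outs = parse_gufunc_signature('(m,n),(n,p)->(m,p)')
```

For the matmul signature this yields input core dims `[('m', 'n'), ('n', 'p')]` and output core dims `[('m', 'p')]`, which is all the vectorizer needs to match and broadcast shapes.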
mitmproxy/mitmproxy | 1abb8f69217910c8623bd1339da2502aed98ff0d | mitmproxy/addons/view.py | python | View.focus_prev
https://github.com/mitmproxy/mitmproxy/blob/1abb8f69217910c8623bd1339da2502aed98ff0d/mitmproxy/addons/view.py#L269-L278

```python
def focus_prev(self) -> None:
    """
    Set focus to the previous flow.
    """
    if self.focus.index is not None:
        idx = self.focus.index - 1
        if self.inbounds(idx):
            self.focus.flow = self[idx]
    else:
        pass
```
pymedusa/Medusa | 1405fbb6eb8ef4d20fcca24c32ddca52b11f0f38 | ext/boto/iam/connection.py | python | IAMConnection.get_group
https://github.com/pymedusa/Medusa/blob/1405fbb6eb8ef4d20fcca24c32ddca52b11f0f38/ext/boto/iam/connection.py#L136-L158

```python
def get_group(self, group_name, marker=None, max_items=None):
    """
    Return a list of users that are in the specified group.

    :type group_name: string
    :param group_name: The name of the group whose information should
        be returned.

    :type marker: string
    :param marker: Use this only when paginating results and only
        in follow-up request after you've received a response
        where the results are truncated.  Set this to the value of
        the Marker element in the response you just received.

    :type max_items: int
    :param max_items: Use this only when paginating results to indicate
        the maximum number of groups you want in the response.
    """
    params = {'GroupName': group_name}
    if marker:
        params['Marker'] = marker
    if max_items:
        params['MaxItems'] = max_items
    return self.get_response('GetGroup', params, list_marker='Users')
```
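The `Marker`/`MaxItems` parameters imply the usual follow-up-request pagination loop: keep calling with the marker from the previous response until the result is no longer truncated. A sketch against a stand-in client (the `FakeIAM` class and its flat response shape are invented for illustration; real boto responses nest these fields differently):

```python
def list_all_users(client, group_name, page_size=2):
    """Collect every user in a group by following the pagination marker."""
    users, marker = [], None
    while True:
        resp = client.get_group(group_name, marker=marker, max_items=page_size)
        users.extend(resp['Users'])
        if not resp.get('IsTruncated'):
            return users
        marker = resp['Marker']  # resume where the previous page stopped

class FakeIAM:
    """Stand-in that serves a fixed user list in pages (illustrative only)."""
    def __init__(self, users):
        self.users = users

    def get_group(self, group_name, marker=None, max_items=10):
        start = int(marker) if marker else 0
        page = self.users[start:start + max_items]
        truncated = start + max_items < len(self.users)
        resp = {'Users': page, 'IsTruncated': truncated}
        if truncated:
            resp['Marker'] = str(start + max_items)
        return resp

all_users = list_all_users(FakeIAM(['a', 'b', 'c', 'd', 'e']), 'admins')
```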
MVIG-SJTU/AlphaPose | bcfbc997526bcac464d116356ac2efea9483ff68 | trackers/utils/utils.py | python | jaccard
https://github.com/MVIG-SJTU/AlphaPose/blob/bcfbc997526bcac464d116356ac2efea9483ff68/trackers/utils/utils.py#L587-L613

```python
def jaccard(box_a, box_b, iscrowd: bool = False):
    """Compute the jaccard overlap of two sets of boxes.  The jaccard overlap
    is simply the intersection over union of two boxes.  Here we operate on
    ground truth boxes and default boxes.  If iscrowd=True, put the crowd in box_b.
    E.g.:
        A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
    Args:
        box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]
        box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]
    Return:
        jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]
    """
    use_batch = True
    if box_a.dim() == 2:
        use_batch = False
        box_a = box_a[None, ...]
        box_b = box_b[None, ...]
    inter = intersect(box_a, box_b)
    area_a = ((box_a[:, :, 2] - box_a[:, :, 0]) *
              (box_a[:, :, 3] - box_a[:, :, 1])).unsqueeze(2).expand_as(inter)  # [A,B]
    area_b = ((box_b[:, :, 2] - box_b[:, :, 0]) *
              (box_b[:, :, 3] - box_b[:, :, 1])).unsqueeze(1).expand_as(inter)  # [A,B]
    union = area_a + area_b - inter
    out = inter / area_a if iscrowd else inter / union
    return out if use_batch else out.squeeze(0)
```
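The implementation above is vectorized over batches of PyTorch tensors, but for two individual boxes the math reduces to exactly the formula in the docstring: intersection divided by (area A + area B − intersection). A dependency-free sketch for two `[x1, y1, x2, y2]` boxes:

```python
def iou(box_a, box_b):
    """Jaccard overlap of two [x1, y1, x2, y2] boxes: inter / (A + B - inter)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes contribute no intersection area.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

overlap = iou([0, 0, 2, 2], [1, 1, 3, 3])  # inter = 1, union = 4 + 4 - 1 = 7
```

The batched version simply computes this same quantity for every (ground-truth, prior) pair at once via broadcasting.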
hacktoolspack/hack-tools | c2b1e324f4b24a2c5b4f111e7ef84e9c547159c2 | fsociety/fsociety.py | python | Fscan.getUsers
https://github.com/hacktoolspack/hack-tools/blob/c2b1e324f4b24a2c5b4f111e7ef84e9c547159c2/fsociety/fsociety.py#L1323-L1359

```python
def getUsers(self):
    """
    Get server users using a method found by Iranian hackers;
    the attacker may then do a bruteforce attack on CPanel, ssh,
    ftp or even mysql if it supports remote login
    (you can use medusa or hydra).
    """
    clearScr()
    print "[~] Grabbing Users"
    userslist = []
    for site1 in self.sites:
        try:
            # Derive a candidate username from the domain name.
            site = site1
            site = site.replace('http://www.', '')
            site = site.replace('http://', '')
            site = site.replace('.', '')
            if '-' in site:
                site = site.replace('-', '')
            site = site.replace('/', '')
            # Trim one character at a time until the guestbook CGI
            # stops reporting an invalid username.
            while len(site) > 2:
                resp = urllib2.urlopen(
                    site1 + '/cgi-sys/guestbook.cgi?user=%s' % site).read()
                if 'invalid username' not in resp.lower():
                    print '\t [*] Found -> ', site
                    userslist.append(site)
                    break
                else:
                    print site
                    site = site[:-1]
        except:
            pass
    clearScr()
    for user in userslist:
        print user
```
dabeaz-course/practical-python | 92a8a73c078e7721d75fd0400e973b44bf7639b8 | Solutions/9_5/porty-app/porty/report.py | python | print_report
https://github.com/dabeaz-course/practical-python/blob/92a8a73c078e7721d75fd0400e973b44bf7639b8/Solutions/9_5/porty-app/porty/report.py#L36-L43

```python
def print_report(reportdata, formatter):
    '''
    Print a nicely formatted table from a list of (name, shares, price, change) tuples.
    '''
    formatter.headings(['Name', 'Shares', 'Price', 'Change'])
    for name, shares, price, change in reportdata:
        rowdata = [name, str(shares), f'{price:0.2f}', f'{change:0.2f}']
        formatter.row(rowdata)
```
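`print_report` delegates all layout to a formatter object exposing `headings()` and `row()`, so output format is swappable without touching the report logic. A minimal formatter sketch (the course defines its own formatter classes elsewhere; this `TextTableFormatter`, which collects lines instead of printing, is an illustrative stand-in):

```python
class TextTableFormatter:
    """Emit rows as right-aligned fixed-width columns, collected for inspection."""
    def __init__(self, width=10):
        self.width = width
        self.lines = []

    def headings(self, headers):
        self.lines.append(' '.join(f'{h:>{self.width}}' for h in headers))

    def row(self, rowdata):
        self.lines.append(' '.join(f'{d:>{self.width}}' for d in rowdata))

def print_report(reportdata, formatter):
    formatter.headings(['Name', 'Shares', 'Price', 'Change'])
    for name, shares, price, change in reportdata:
        formatter.row([name, str(shares), f'{price:0.2f}', f'{change:0.2f}'])

fmt = TextTableFormatter()
print_report([('AA', 100, 32.2, 8.02)], fmt)
```

A CSV or HTML formatter only needs to implement the same two methods, which is the design point of the exercise.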
statsmodels/statsmodels | debbe7ea6ba28fe5bdb78f09f8cac694bef98722 | statsmodels/sandbox/tsa/example_arma.py | python | pltxcorr
https://github.com/statsmodels/statsmodels/blob/debbe7ea6ba28fe5bdb78f09f8cac694bef98722/statsmodels/sandbox/tsa/example_arma.py#L270-L361

```python
def pltxcorr(self, x, y, normed=True, detrend=detrend_none,
             usevlines=True, maxlags=10, **kwargs):
    """
    call signature::

        def xcorr(self, x, y, normed=True, detrend=detrend_none,
                  usevlines=True, maxlags=10, **kwargs):

    Plot the cross correlation between *x* and *y*.  If *normed* =
    *True*, normalize the data by the cross correlation at 0-th
    lag.  *x* and y are detrended by the *detrend* callable
    (default no normalization).  *x* and *y* must be equal length.

    Data are plotted as ``plot(lags, c, **kwargs)``

    Return value is a tuple (*lags*, *c*, *line*) where:

      - *lags* are a length ``2*maxlags+1`` lag vector

      - *c* is the ``2*maxlags+1`` auto correlation vector

      - *line* is a :class:`~matplotlib.lines.Line2D` instance
        returned by :func:`~matplotlib.pyplot.plot`.

    The default *linestyle* is *None* and the default *marker* is
    'o', though these can be overridden with keyword args.  The
    cross correlation is performed with :func:`numpy.correlate`
    with *mode* = 2.

    If *usevlines* is *True*:

      :func:`~matplotlib.pyplot.vlines`
      rather than :func:`~matplotlib.pyplot.plot` is used to draw
      vertical lines from the origin to the xcorr.  Otherwise the
      plotstyle is determined by the kwargs, which are
      :class:`~matplotlib.lines.Line2D` properties.

      The return value is a tuple (*lags*, *c*, *linecol*, *b*)
      where *linecol* is the
      :class:`matplotlib.collections.LineCollection` instance and
      *b* is the *x*-axis.

    *maxlags* is a positive integer detailing the number of lags to show.
    The default value of *None* will return all ``(2*len(x)-1)`` lags.

    **Example:**

    :func:`~matplotlib.pyplot.xcorr` above, and
    :func:`~matplotlib.pyplot.acorr` below.

    .. plot:: mpl_examples/pylab_examples/xcorr_demo.py
    """
    Nx = len(x)
    if Nx != len(y):
        raise ValueError('x and y must be equal length')
    x = detrend(np.asarray(x))
    y = detrend(np.asarray(y))
    c = np.correlate(x, y, mode=2)
    if normed:
        c /= np.sqrt(np.dot(x, x) * np.dot(y, y))
    if maxlags is None:
        maxlags = Nx - 1
    if maxlags >= Nx or maxlags < 1:
        raise ValueError('maxlags must be None or strictly '
                         'positive < %d' % Nx)
    lags = np.arange(-maxlags, maxlags + 1)
    c = c[Nx - 1 - maxlags:Nx + maxlags]
    if usevlines:
        a = self.vlines(lags, [0], c, **kwargs)
        b = self.axhline(**kwargs)
        kwargs.setdefault('marker', 'o')
        kwargs.setdefault('linestyle', 'None')
        d = self.plot(lags, c, **kwargs)
    else:
        kwargs.setdefault('marker', 'o')
        kwargs.setdefault('linestyle', 'None')
        a, = self.plot(lags, c, **kwargs)
        b = None
    return lags, c, a, b
```
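Stripped of the plotting calls, the numeric core of `pltxcorr` is: compute the full cross-correlation, optionally normalize by the zero-lag autocorrelation product, and keep only the central `2*maxlags+1` window. A dependency-free sketch of just that core (pure Python in place of `np.correlate`):

```python
import math

def xcorr(x, y, maxlags=10, normed=True):
    """Cross-correlation values for lags -maxlags..maxlags (no plotting)."""
    n = len(x)
    if n != len(y):
        raise ValueError('x and y must be equal length')
    if maxlags >= n or maxlags < 1:
        raise ValueError('maxlags must be strictly positive and < len(x)')
    # Full correlation: one value per lag in -(n-1)..(n-1), pairing
    # x[i] with y[i - lag] wherever both indices are in range.
    c = [sum(x[i] * y[i - lag] for i in range(n) if 0 <= i - lag < n)
         for lag in range(-(n - 1), n)]
    if normed:
        norm = math.sqrt(sum(v * v for v in x) * sum(v * v for v in y))
        c = [v / norm for v in c]
    mid = n - 1  # index of lag 0 in the full correlation
    lags = list(range(-maxlags, maxlags + 1))
    return lags, c[mid - maxlags: mid + maxlags + 1]

lags, c = xcorr([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], maxlags=2)
```

For identical inputs the normalized value at lag 0 is exactly 1.0, which is what the `normed=True` path of the original guarantees.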
salabim/salabim | e0de846b042daf2dc71aaf43d8adc6486b57f376 | salabim.py | python | Monitor.xt
https://github.com/salabim/salabim/blob/e0de846b042daf2dc71aaf43d8adc6486b57f376/salabim.py#L2559-L2624

```python
def xt(self, ex0=False, exoff=False, force_numeric=True, add_now=True):
    """
    tuple of array/list with x-values and array with timestamp

    Parameters
    ----------
    ex0 : bool
        if False (default), include zeroes. if True, exclude zeroes

    exoff : bool
        if False (default), include self.off. if True, exclude self.off's |n|
        non level monitors will return all values, regardless of exoff

    force_numeric : bool
        if True (default), convert non numeric tallied values numeric if possible, otherwise assume 0 |n|
        if False, do not interpret x-values, return as list if type is list

    add_now : bool
        if True (default), the last tallied x-value and the current time is added to the result |n|
        if False, the result ends with the last tallied value and the time that was tallied |n|
        non level monitors will never add now |n|
        if now is <= last tallied value, nothing will be added, even if add_now is True

    Returns
    -------
    array/list with x-values and array with timestamps : tuple

    Note
    ----
    The value self.off is stored when monitoring is turned off |n|
    The timestamps are not corrected for any reset_now() adjustment.
    """
    self._block_stats_only()
    if not self._level:
        exoff = False
        add_now = False
    if self.xtypecode or (not force_numeric):
        x = self._x
        typecode = self.xtypecode
        off = self.off
    else:
        x = do_force_numeric(self._x)
        typecode = ""
        off = -inf  # float
    if typecode:
        xx = array.array(typecode)
    else:
        xx = []
    t = array.array("d")
    if add_now:
        addx = [x[-1]]
        addt = [self.env._now]
    else:
        addx = []
        addt = []
    for vx, vt in zip(itertools.chain(x, addx), itertools.chain(self._t, addt)):
        if not ex0 or (vx != 0):
            if not exoff or (vx != off):
                xx.append(vx)
                t.append(vt)
    return xx, t
```
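At its core, `xt()` walks two parallel sequences (tallied values and their timestamps) and drops pairs whose value is zero (`ex0`) or equals the "monitoring off" sentinel (`exoff`). That filtering core can be sketched standalone (the helper name and the choice of `-inf` as the off sentinel are illustrative assumptions):

```python
def filter_xt(values, times, ex0=False, exoff=False, off=float('-inf')):
    """Keep (value, timestamp) pairs, dropping zeroes and/or the 'off' marker."""
    xx, tt = [], []
    for vx, vt in zip(values, times):
        if ex0 and vx == 0:
            continue  # skip zero tallies
        if exoff and vx == off:
            continue  # skip the monitoring-off sentinel
        xx.append(vx)
        tt.append(vt)
    return xx, tt

OFF = float('-inf')
x, t = filter_xt([2, 0, OFF, 5], [0.0, 1.0, 2.0, 3.0], ex0=True, exoff=True, off=OFF)
```

Keeping values and timestamps as parallel arrays (rather than a list of pairs) matches the original's use of compact `array.array` storage.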
beeware/ouroboros | a29123c6fab6a807caffbb7587cf548e0c370296 | ouroboros/tkinter/__init__.py | python | Misc.winfo_fpixels
https://github.com/beeware/ouroboros/blob/a29123c6fab6a807caffbb7587cf548e0c370296/ouroboros/tkinter/__init__.py#L828-L832

```python
def winfo_fpixels(self, number):
    """Return the number of pixels for the given distance NUMBER
    (e.g. "3c") as float."""
    return getdouble(self.tk.call(
        'winfo', 'fpixels', self._w, number))
```
nsacyber/WALKOFF | 52d3311abe99d64cd2a902eb998c5e398efe0e07 | api_gateway/serverdb/tokens.py | python | approve_token
https://github.com/nsacyber/WALKOFF/blob/52d3311abe99d64cd2a902eb998c5e398efe0e07/api_gateway/serverdb/tokens.py#L62-L73

```python
def approve_token(token_id, user):
    """Approves the given token

    Args:
        token_id (int): The ID of the token
        user (User): The User
    """
    token = BlacklistedToken.query.filter_by(id=token_id, user_identity=user).first()
    if token is not None:
        # Delete the blacklist row so the token becomes valid again.
        db.session.delete(token)
        prune_if_necessary()
        db.session.commit()
```
leo-editor/leo-editor | 383d6776d135ef17d73d935a2f0ecb3ac0e99494 | leo/commands/editCommands.py | python | EditCommandsClass.insertHardTab | (self, event) | Insert one hard tab. | Insert one hard tab. | [
"Insert",
"one",
"hard",
"tab",
"."
] | def insertHardTab(self, event):
"""Insert one hard tab."""
c = self.c
w = self.editWidget(event)
if not w:
return
if not g.isTextWrapper(w):
return
name = c.widget_name(w)
if name.startswith('head'):
return
ins = w.getInsertPoint()
self.beginCommand(w, undoType='insert-hard-tab')
w.insert(ins, '\t')
ins += 1
w.setSelectionRange(ins, ins, insert=ins)
self.endCommand() | [
"def",
"insertHardTab",
"(",
"self",
",",
"event",
")",
":",
"c",
"=",
"self",
".",
"c",
"w",
"=",
"self",
".",
"editWidget",
"(",
"event",
")",
"if",
"not",
"w",
":",
"return",
"if",
"not",
"g",
".",
"isTextWrapper",
"(",
"w",
")",
":",
"return"... | https://github.com/leo-editor/leo-editor/blob/383d6776d135ef17d73d935a2f0ecb3ac0e99494/leo/commands/editCommands.py#L1600-L1616 | ||
codypierce/pyemu | f3fe115cc99e38c9d9c5e0d659d1fe5b8e19eac4 | lib/pefile.py | python | PE.get_string_at_rva | (self, rva) | return self.get_string_from_data(rva-s.VirtualAddress, s.data) | Get an ASCII string located at the given address. | Get an ASCII string located at the given address. | [
"Get",
"an",
"ASCII",
"string",
"located",
"at",
"the",
"given",
"address",
"."
] | def get_string_at_rva(self, rva):
"""Get an ASCII string located at the given address."""
s = self.get_section_by_rva(rva)
if not s:
if rva<len(self.header):
return self.get_string_from_data(rva, self.header)
return None
return self.get_string_from_data(rva-s.VirtualAddress, s.data) | [
"def",
"get_string_at_rva",
"(",
"self",
",",
"rva",
")",
":",
"s",
"=",
"self",
".",
"get_section_by_rva",
"(",
"rva",
")",
"if",
"not",
"s",
":",
"if",
"rva",
"<",
"len",
"(",
"self",
".",
"header",
")",
":",
"return",
"self",
".",
"get_string_from... | https://github.com/codypierce/pyemu/blob/f3fe115cc99e38c9d9c5e0d659d1fe5b8e19eac4/lib/pefile.py#L2528-L2537 | |
mongodb/pymodm | be1c7b079df4954ef7e79e46f1b4a9ac9510766c | ez_setup.py | python | _python_cmd | (*args) | return subprocess.call(args) == 0 | Execute a command.
Return True if the command succeeded. | Execute a command. | [
"Execute",
"a",
"command",
"."
] | def _python_cmd(*args):
"""
Execute a command.
Return True if the command succeeded.
"""
args = (sys.executable,) + args
return subprocess.call(args) == 0 | [
"def",
"_python_cmd",
"(",
"*",
"args",
")",
":",
"args",
"=",
"(",
"sys",
".",
"executable",
",",
")",
"+",
"args",
"return",
"subprocess",
".",
"call",
"(",
"args",
")",
"==",
"0"
] | https://github.com/mongodb/pymodm/blob/be1c7b079df4954ef7e79e46f1b4a9ac9510766c/ez_setup.py#L44-L51 | |
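The `_python_cmd` helper in the row above shells out to the running interpreter and maps the exit code to a boolean. A standalone sketch of the same pattern (the function name here is mine, not pymodm's):

```python
import subprocess
import sys

def python_cmd(*args):
    """Run the current interpreter with *args; True iff it exits with code 0."""
    cmd = (sys.executable,) + args
    return subprocess.call(cmd) == 0

# A zero exit code maps to True, any non-zero exit code to False:
python_cmd("-c", "pass")                      # → True
python_cmd("-c", "import sys; sys.exit(3)")   # → False
```

Prepending `sys.executable` (rather than hard-coding `python`) keeps the child process on the same interpreter and virtualenv as the caller.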
jinserk/pytorch-asr | 26b12981af942631b98afea83245448e5af55450 | asr/models/las/loss.py | python | EditDistanceLoss.forward | (self, input, target, input_seq_lens, target_seq_lens) | input: BxTxH, target: BxN, input_seq_lens: B, target_seq_lens: B | input: BxTxH, target: BxN, input_seq_lens: B, target_seq_lens: B | [
"input",
":",
"BxTxH",
"target",
":",
"BxN",
"input_seq_lens",
":",
"B",
"target_seq_lens",
":",
"B"
] | def forward(self, input, target, input_seq_lens, target_seq_lens):
"""
input: BxTxH, target: BxN, input_seq_lens: B, target_seq_lens: B
"""
batch_size = input.size(0)
eds = list()
for b in range(batch_size):
x = torch.argmax(input[b, :input_seq_lens[b]], dim=-1)
y = target[b, :target_seq_lens[b]]
d = self.calculate_levenshtein(x, y)
eds.append(d)
loss = torch.FloatTensor(eds)
if self.reduction == 'none':
return loss
elif self.reduction == 'mean':
return loss.mean() | [
"def",
"forward",
"(",
"self",
",",
"input",
",",
"target",
",",
"input_seq_lens",
",",
"target_seq_lens",
")",
":",
"batch_size",
"=",
"input",
".",
"size",
"(",
"0",
")",
"eds",
"=",
"list",
"(",
")",
"for",
"b",
"in",
"range",
"(",
"batch_size",
"... | https://github.com/jinserk/pytorch-asr/blob/26b12981af942631b98afea83245448e5af55450/asr/models/las/loss.py#L12-L28 | ||
robotlearn/pyrobolearn | 9cd7c060723fda7d2779fa255ac998c2c82b8436 | pyrobolearn/models/promp/promp.py | python | ProMP.power | (self, priority) | Set the priority (aka activation function) which represents the degree of activation of this ProMP.
Args:
priority (float, int, callable): priority function | Set the priority (aka activation function) which represents the degree of activation of this ProMP. | [
"Set",
"the",
"priority",
"(",
"aka",
"activation",
"function",
")",
"which",
"represents",
"the",
"degree",
"of",
"activation",
"of",
"this",
"ProMP",
"."
] | def power(self, priority):
"""
Set the priority (aka activation function) which represents the degree of activation of this ProMP.
Args:
priority (float, int, callable): priority function
"""
self.priority = priority | [
"def",
"power",
"(",
"self",
",",
"priority",
")",
":",
"self",
".",
"priority",
"=",
"priority"
] | https://github.com/robotlearn/pyrobolearn/blob/9cd7c060723fda7d2779fa255ac998c2c82b8436/pyrobolearn/models/promp/promp.py#L1364-L1371 | ||
openshift/openshift-tools | 1188778e728a6e4781acf728123e5b356380fe6f | openshift/installer/vendored/openshift-ansible-3.10.0-0.29.0/roles/lib_openshift/src/class/oc_user.py | python | OCUser.run_ansible | (params, check_mode=False) | return {'failed': True,
'changed': False,
'results': 'Unknown state passed. %s' % state,
'state': "unknown"} | run the idempotent ansible code
params comes from the ansible portion of this module
check_mode: does the module support check mode. (module.check_mode) | run the idempotent ansible code | [
"run",
"the",
"idempotent",
"ansible",
"code"
] | def run_ansible(params, check_mode=False):
''' run the idempotent ansible code
params comes from the ansible portion of this module
check_mode: does the module support check mode. (module.check_mode)
'''
uconfig = UserConfig(params['kubeconfig'],
params['username'],
params['full_name'],
)
oc_user = OCUser(uconfig, params['groups'],
verbose=params['debug'])
state = params['state']
api_rval = oc_user.get()
#####
# Get
#####
if state == 'list':
return {'changed': False, 'results': api_rval['results'], 'state': "list"}
########
# Delete
########
if state == 'absent':
if oc_user.exists():
if check_mode:
return {'changed': False, 'msg': 'Would have performed a delete.'}
api_rval = oc_user.delete()
return {'changed': True, 'results': api_rval, 'state': "absent"}
return {'changed': False, 'state': "absent"}
if state == 'present':
########
# Create
########
if not oc_user.exists():
if check_mode:
return {'changed': False, 'msg': 'Would have performed a create.'}
# Create it here
api_rval = oc_user.create()
if api_rval['returncode'] != 0:
return {'failed': True, 'msg': api_rval}
# return the created object
api_rval = oc_user.get()
if api_rval['returncode'] != 0:
return {'failed': True, 'msg': api_rval}
return {'changed': True, 'results': api_rval, 'state': "present"}
########
# Update
########
if oc_user.needs_update():
api_rval = oc_user.update()
if api_rval['returncode'] != 0:
return {'failed': True, 'msg': api_rval}
orig_cmd = api_rval['cmd']
# return the created object
api_rval = oc_user.get()
# overwrite the get/list cmd
api_rval['cmd'] = orig_cmd
if api_rval['returncode'] != 0:
return {'failed': True, 'msg': api_rval}
return {'changed': True, 'results': api_rval, 'state': "present"}
return {'changed': False, 'results': api_rval, 'state': "present"}
return {'failed': True,
'changed': False,
'results': 'Unknown state passed. %s' % state,
'state': "unknown"} | [
"def",
"run_ansible",
"(",
"params",
",",
"check_mode",
"=",
"False",
")",
":",
"uconfig",
"=",
"UserConfig",
"(",
"params",
"[",
"'kubeconfig'",
"]",
",",
"params",
"[",
"'username'",
"]",
",",
"params",
"[",
"'full_name'",
"]",
",",
")",
"oc_user",
"="... | https://github.com/openshift/openshift-tools/blob/1188778e728a6e4781acf728123e5b356380fe6f/openshift/installer/vendored/openshift-ansible-3.10.0-0.29.0/roles/lib_openshift/src/class/oc_user.py#L141-L227 | |
apache/incubator-spot | 2d60a2adae7608b43e90ce1b9ec0adf24f6cc8eb | spot-oa/api/resources/impala_engine.py | python | create_connection | () | return conn.cursor() | [] | def create_connection():
impala_host, impala_port = config.impala()
conf = {}
# TODO: if using hive, kerberos service name must be changed, impyla sets 'impala' as default
service_name = {'kerberos_service_name': 'impala'}
if config.kerberos_enabled():
principal, keytab, sasl_mech, security_proto = config.kerberos()
conf.update({'auth_mechanism': 'GSSAPI',
})
if config.ssl_enabled():
ssl_verify, ca_location, cert, key = config.ssl()
conf.update({'ca_cert': cert,
'use_ssl': ssl_verify
})
db = config.db()
conn = connect(host=impala_host, port=int(impala_port), database=db, **conf)
return conn.cursor() | [
"def",
"create_connection",
"(",
")",
":",
"impala_host",
",",
"impala_port",
"=",
"config",
".",
"impala",
"(",
")",
"conf",
"=",
"{",
"}",
"# TODO: if using hive, kerberos service name must be changed, impyla sets 'impala' as default",
"service_name",
"=",
"{",
"'kerber... | https://github.com/apache/incubator-spot/blob/2d60a2adae7608b43e90ce1b9ec0adf24f6cc8eb/spot-oa/api/resources/impala_engine.py#L21-L42 | |||
SUSE/DeepSea | 9c7fad93915ba1250c40d50c855011e9fe41ed21 | srv/modules/runners/validate.py | python | JsonPrinter.print_result | (self) | Dump results as json | Dump results as json | [
"Dump",
"results",
"as",
"json"
] | def print_result(self):
"""
Dump results as json
"""
json.dump(self.result, sys.stdout) | [
"def",
"print_result",
"(",
"self",
")",
":",
"json",
".",
"dump",
"(",
"self",
".",
"result",
",",
"sys",
".",
"stdout",
")"
] | https://github.com/SUSE/DeepSea/blob/9c7fad93915ba1250c40d50c855011e9fe41ed21/srv/modules/runners/validate.py#L126-L130 | ||
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/sqlalchemy/event/attr.py | python | _ListenerCollection.insert | (self, event_key, propagate) | [] | def insert(self, event_key, propagate):
if event_key.prepend_to_list(self, self.listeners):
if propagate:
self.propagate.add(event_key._listen_fn) | [
"def",
"insert",
"(",
"self",
",",
"event_key",
",",
"propagate",
")",
":",
"if",
"event_key",
".",
"prepend_to_list",
"(",
"self",
",",
"self",
".",
"listeners",
")",
":",
"if",
"propagate",
":",
"self",
".",
"propagate",
".",
"add",
"(",
"event_key",
... | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/sqlalchemy/event/attr.py#L349-L352 | ||||
Yelp/venv-update | 5fb5491bd421fdd8ef3cff3faa5d4846b5985ec8 | venv_update.py | python | ensure_virtualenv | (args, return_values) | Ensure we have a valid virtualenv. | Ensure we have a valid virtualenv. | [
"Ensure",
"we",
"have",
"a",
"valid",
"virtualenv",
"."
] | def ensure_virtualenv(args, return_values):
"""Ensure we have a valid virtualenv."""
from sys import argv
argv[:] = ('virtualenv',) + args
info(colorize(argv))
import virtualenv
run_virtualenv = True
filtered_args = [a for a in args if not a.startswith('-')]
if filtered_args:
venv_path = return_values.venv_path = filtered_args[0] if filtered_args else None
if venv_path == DEFAULT_VIRTUALENV_PATH:
from os.path import abspath, basename, dirname
args = ('--prompt=({})'.format(basename(dirname(abspath(venv_path)))),) + args
# Validate existing virtualenv if there is one
# there are two python interpreters involved here:
# 1) the interpreter we're instructing virtualenv to copy
python_options_arg = [a for a in args if a.startswith('-p') or a.startswith('--python')]
if not python_options_arg:
source_python = None
else:
virtualenv_session = virtualenv.session_via_cli(args)
source_python = virtualenv_session._interpreter.executable
# 2) the interpreter virtualenv will create
destination_python = venv_python(venv_path)
if exists(destination_python):
# Check if --system-site-packages is a passed-in option
dummy_session = virtualenv.session_via_cli(args)
virtualenv_system_site_packages = dummy_session.creator.enable_system_site_package
reason = invalid_virtualenv_reason(venv_path, source_python, destination_python, virtualenv_system_site_packages)
if reason:
info('Removing invalidated virtualenv. (%s)' % reason)
run(('rm', '-rf', venv_path))
else:
info('Keeping valid virtualenv from previous run.')
run_virtualenv = False # looks good! we're done here.
if run_virtualenv:
raise_on_failure(lambda: virtualenv.cli_run(args), ignore_return=True)
# There might not be a venv_path if doing something like "venv= --version"
# and not actually asking virtualenv to make a venv.
if return_values.venv_path is not None:
run(('rm', '-rf', join(return_values.venv_path, 'local'))) | [
"def",
"ensure_virtualenv",
"(",
"args",
",",
"return_values",
")",
":",
"from",
"sys",
"import",
"argv",
"argv",
"[",
":",
"]",
"=",
"(",
"'virtualenv'",
",",
")",
"+",
"args",
"info",
"(",
"colorize",
"(",
"argv",
")",
")",
"import",
"virtualenv",
"r... | https://github.com/Yelp/venv-update/blob/5fb5491bd421fdd8ef3cff3faa5d4846b5985ec8/venv_update.py#L280-L328 | ||
holoviz/panel | 5e25cb09447d8edf0b316f130ee1318a2aeb880f | panel/interact.py | python | interactive.widget_from_single_value | (o, name) | Make widgets from single values, which can be used as parameter defaults. | Make widgets from single values, which can be used as parameter defaults. | [
"Make",
"widgets",
"from",
"single",
"values",
"which",
"can",
"be",
"used",
"as",
"parameter",
"defaults",
"."
] | def widget_from_single_value(o, name):
"""Make widgets from single values, which can be used as parameter defaults."""
if isinstance(o, str):
return TextInput(value=str(o), name=name)
elif isinstance(o, bool):
return Checkbox(value=o, name=name)
elif isinstance(o, Integral):
min, max, value = _get_min_max_value(None, None, o)
return IntSlider(value=o, start=min, end=max, name=name)
elif isinstance(o, Real):
min, max, value = _get_min_max_value(None, None, o)
return FloatSlider(value=o, start=min, end=max, name=name)
else:
return None | [
"def",
"widget_from_single_value",
"(",
"o",
",",
"name",
")",
":",
"if",
"isinstance",
"(",
"o",
",",
"str",
")",
":",
"return",
"TextInput",
"(",
"value",
"=",
"str",
"(",
"o",
")",
",",
"name",
"=",
"name",
")",
"elif",
"isinstance",
"(",
"o",
"... | https://github.com/holoviz/panel/blob/5e25cb09447d8edf0b316f130ee1318a2aeb880f/panel/interact.py#L300-L313 | ||
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/schemes/projective/projective_space.py | python | ProjectiveSpace_ring.Lattes_map | (self, E, m) | return DynamicalSystem_projective(F, domain=self) | r"""
Given an elliptic curve ``E`` and an integer ``m`` return
the Lattes map associated to multiplication by `m`.
In other words, the rational map on the quotient
`E/\{\pm 1\} \cong \mathbb{P}^1` associated to `[m]:E \to E`.
INPUT:
- ``E`` -- an elliptic curve.
- ``m`` -- an integer.
OUTPUT: a dynamical system on this projective space.
EXAMPLES::
sage: P.<x,y> = ProjectiveSpace(QQ,1)
sage: E = EllipticCurve(QQ,[-1, 0])
sage: P.Lattes_map(E, 2)
Dynamical System of Projective Space of dimension 1 over Rational Field
Defn: Defined on coordinates by sending (x : y) to
(1/4*x^4 + 1/2*x^2*y^2 + 1/4*y^4 : x^3*y - x*y^3)
TESTS::
sage: P.<x,y> = ProjectiveSpace(GF(37), 1)
sage: E = EllipticCurve([1, 1])
sage: f = P.Lattes_map(E, 2); f
Dynamical System of Projective Space of dimension 1 over Finite Field of size 37
Defn: Defined on coordinates by sending (x : y) to
(-9*x^4 + 18*x^2*y^2 - 2*x*y^3 - 9*y^4 : x^3*y + x*y^3 + y^4) | r"""
Given an elliptic curve ``E`` and an integer ``m`` return
the Lattes map associated to multiplication by `m`. | [
"r",
"Given",
"an",
"elliptic",
"curve",
"E",
"and",
"an",
"integer",
"m",
"return",
"the",
"Lattes",
"map",
"associated",
"to",
"multiplication",
"by",
"m",
"."
] | def Lattes_map(self, E, m):
r"""
Given an elliptic curve ``E`` and an integer ``m`` return
the Lattes map associated to multiplication by `m`.
In other words, the rational map on the quotient
`E/\{\pm 1\} \cong \mathbb{P}^1` associated to `[m]:E \to E`.
INPUT:
- ``E`` -- an elliptic curve.
- ``m`` -- an integer.
OUTPUT: a dynamical system on this projective space.
EXAMPLES::
sage: P.<x,y> = ProjectiveSpace(QQ,1)
sage: E = EllipticCurve(QQ,[-1, 0])
sage: P.Lattes_map(E, 2)
Dynamical System of Projective Space of dimension 1 over Rational Field
Defn: Defined on coordinates by sending (x : y) to
(1/4*x^4 + 1/2*x^2*y^2 + 1/4*y^4 : x^3*y - x*y^3)
TESTS::
sage: P.<x,y> = ProjectiveSpace(GF(37), 1)
sage: E = EllipticCurve([1, 1])
sage: f = P.Lattes_map(E, 2); f
Dynamical System of Projective Space of dimension 1 over Finite Field of size 37
Defn: Defined on coordinates by sending (x : y) to
(-9*x^4 + 18*x^2*y^2 - 2*x*y^3 - 9*y^4 : x^3*y + x*y^3 + y^4)
"""
if self.dimension_relative() != 1:
raise TypeError("must be dimension 1")
if self.base_ring() != E.base_ring():
E = E.change_ring(self.base_ring())
L = E.multiplication_by_m(m, x_only=True)
F = [L.numerator(), L.denominator()]
R = self.coordinate_ring()
x, y = R.gens()
phi = F[0].parent().hom([x], R)
F = [phi(F[0]).homogenize(y), phi(F[1]).homogenize(y) * y]
from sage.dynamics.arithmetic_dynamics.projective_ds import DynamicalSystem_projective
return DynamicalSystem_projective(F, domain=self) | [
"def",
"Lattes_map",
"(",
"self",
",",
"E",
",",
"m",
")",
":",
"if",
"self",
".",
"dimension_relative",
"(",
")",
"!=",
"1",
":",
"raise",
"TypeError",
"(",
"\"must be dimension 1\"",
")",
"if",
"self",
".",
"base_ring",
"(",
")",
"!=",
"E",
".",
"b... | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/schemes/projective/projective_space.py#L1070-L1116 | |
qilingframework/qiling | 32cc674f2f6fa4b4c9d64a35a1a57853fe1e4142 | qiling/arch/evm/db/journal.py | python | Journal.is_flattened | (self) | return len(self._checkpoint_stack) < 2 | :return: whether there are any explicitly committed checkpoints | :return: whether there are any explicitly committed checkpoints | [
":",
"return",
":",
"whether",
"there",
"are",
"any",
"explicitly",
"committed",
"checkpoints"
] | def is_flattened(self) -> bool:
"""
:return: whether there are any explicitly committed checkpoints
"""
return len(self._checkpoint_stack) < 2 | [
"def",
"is_flattened",
"(",
"self",
")",
"->",
"bool",
":",
"return",
"len",
"(",
"self",
".",
"_checkpoint_stack",
")",
"<",
"2"
] | https://github.com/qilingframework/qiling/blob/32cc674f2f6fa4b4c9d64a35a1a57853fe1e4142/qiling/arch/evm/db/journal.py#L98-L102 | |
mcedit/mcedit2 | 4bb98da521447b6cf43d923cea9f00acf2f427e9 | src/mcedit2/rendering/loadablechunks.py | python | chunkMarkers | (chunkSet) | return sizedChunks | Returns a mapping { size: [position, ...] } for different powers of 2
as size. | Returns a mapping { size: [position, ...] } for different powers of 2
as size. | [
"Returns",
"a",
"mapping",
"{",
"size",
":",
"[",
"position",
"...",
"]",
"}",
"for",
"different",
"powers",
"of",
"2",
"as",
"size",
"."
] | def chunkMarkers(chunkSet):
""" Returns a mapping { size: [position, ...] } for different powers of 2
as size.
"""
sizedChunks = defaultdict(list)
size = 1
def all4(cx, cz):
cx &= ~size
cz &= ~size
return [(cx, cz), (cx + size, cz), (cx + size, cz + size), (cx, cz + size)]
# lastsize = 6
size = 1
while True:
nextsize = size << 1
chunkSet = set(chunkSet)
while len(chunkSet):
cx, cz = chunkSet.pop()
chunkSet.add((cx, cz))
o = all4(cx, cz)
others = set(o).intersection(chunkSet)
if len(others) == 4:
sizedChunks[nextsize].append(o[0])
for c in others:
chunkSet.discard(c)
else:
for c in others:
sizedChunks[size].append(c)
chunkSet.discard(c)
if len(sizedChunks[nextsize]):
chunkSet = set(sizedChunks[nextsize])
sizedChunks[nextsize] = []
size <<= 1
else:
break
return sizedChunks | [
"def",
"chunkMarkers",
"(",
"chunkSet",
")",
":",
"sizedChunks",
"=",
"defaultdict",
"(",
"list",
")",
"size",
"=",
"1",
"def",
"all4",
"(",
"cx",
",",
"cz",
")",
":",
"cx",
"&=",
"~",
"size",
"cz",
"&=",
"~",
"size",
"return",
"[",
"(",
"cx",
",... | https://github.com/mcedit/mcedit2/blob/4bb98da521447b6cf43d923cea9f00acf2f427e9/src/mcedit2/rendering/loadablechunks.py#L97-L135 | |
nerdvegas/rez | d392c65bf63b4bca8106f938cec49144ba54e770 | src/rez/vendor/pika/adapters/utils/io_services_utils.py | python | _AsyncStreamConnector._do_ssl_handshake | (self) | Perform asynchronous SSL handshake on the already wrapped socket | Perform asynchronous SSL handshake on the already wrapped socket | [
"Perform",
"asynchronous",
"SSL",
"handshake",
"on",
"the",
"already",
"wrapped",
"socket"
] | def _do_ssl_handshake(self):
"""Perform asynchronous SSL handshake on the already wrapped socket
"""
_LOGGER.debug('_AsyncStreamConnector._do_ssl_handshake()')
if self._state != self._STATE_ACTIVE:
_LOGGER.debug(
'_do_ssl_handshake: Abandoning streaming linkup due '
'to inactive state transition; state=%s; %s; .', self._state,
self._sock)
return
done = False
try:
try:
self._sock.do_handshake()
except ssl.SSLError as error:
if error.errno == ssl.SSL_ERROR_WANT_READ:
_LOGGER.debug('SSL handshake wants read; %s.', self._sock)
self._watching_socket = True
self._nbio.set_reader(self._sock.fileno(),
self._do_ssl_handshake)
self._nbio.remove_writer(self._sock.fileno())
elif error.errno == ssl.SSL_ERROR_WANT_WRITE:
_LOGGER.debug('SSL handshake wants write. %s', self._sock)
self._watching_socket = True
self._nbio.set_writer(self._sock.fileno(),
self._do_ssl_handshake)
self._nbio.remove_reader(self._sock.fileno())
else:
# Outer catch will report it
raise
else:
done = True
_LOGGER.info('SSL handshake completed successfully: %s',
self._sock)
except Exception as error: # pylint: disable=W0703
_LOGGER.exception('SSL do_handshake failed: error=%r; %s', error,
self._sock)
self._report_completion(error)
return
if done:
# Suspend I/O and link up transport with protocol
_LOGGER.debug(
'_do_ssl_handshake: removing watchers ahead of linkup: %s',
self._sock)
self._nbio.remove_reader(self._sock.fileno())
self._nbio.remove_writer(self._sock.fileno())
# So that our `_cleanup()` won't interfere with the transport's
# socket watcher configuration.
self._watching_socket = False
_LOGGER.debug(
'_do_ssl_handshake: pre-linkup removal of watchers is done; %s',
self._sock)
self._linkup() | [
"def",
"_do_ssl_handshake",
"(",
"self",
")",
":",
"_LOGGER",
".",
"debug",
"(",
"'_AsyncStreamConnector._do_ssl_handshake()'",
")",
"if",
"self",
".",
"_state",
"!=",
"self",
".",
"_STATE_ACTIVE",
":",
"_LOGGER",
".",
"debug",
"(",
"'_do_ssl_handshake: Abandoning s... | https://github.com/nerdvegas/rez/blob/d392c65bf63b4bca8106f938cec49144ba54e770/src/rez/vendor/pika/adapters/utils/io_services_utils.py#L619-L677 | ||
bnpy/bnpy | d5b311e8f58ccd98477f4a0c8a4d4982e3fca424 | bnpy/data/GroupXData.py | python | GroupXData.LoadFromFile | (cls, filepath, nDocTotal=None, **kwargs) | Constructor for loading data from disk into XData instance | Constructor for loading data from disk into XData instance | [
"Constructor",
"for",
"loading",
"data",
"from",
"disk",
"into",
"XData",
"instance"
] | def LoadFromFile(cls, filepath, nDocTotal=None, **kwargs):
''' Constructor for loading data from disk into XData instance
'''
if filepath.endswith('.mat'):
return cls.read_mat(filepath, nDocTotal=nDocTotal, **kwargs)
raise NotImplemented('Only .mat file supported.') | [
"def",
"LoadFromFile",
"(",
"cls",
",",
"filepath",
",",
"nDocTotal",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"filepath",
".",
"endswith",
"(",
"'.mat'",
")",
":",
"return",
"cls",
".",
"read_mat",
"(",
"filepath",
",",
"nDocTotal",
"=",
... | https://github.com/bnpy/bnpy/blob/d5b311e8f58ccd98477f4a0c8a4d4982e3fca424/bnpy/data/GroupXData.py#L61-L66 | ||
XuezheMax/flowseq | 8cb4ae00c26fbeb3e1459e3b3b90e7e9a84c3d2b | flownmt/modules/posteriors/transformer.py | python | TransformerCore.forward | (self, tgt_sents, tgt_masks, src_enc, src_masks) | return mu, logvar | [] | def forward(self, tgt_sents, tgt_masks, src_enc, src_masks):
x = self.embed_scale * self.tgt_embed(tgt_sents)
x = F.dropout2d(x, p=self.dropword, training=self.training)
x += self.pos_enc(tgt_sents)
x = F.dropout(x, p=0.2, training=self.training)
mask = tgt_masks.eq(0)
key_mask = src_masks.eq(0)
for layer in self.layers:
x = layer(x, mask, src_enc, key_mask)
mu = self.mu(x) * tgt_masks.unsqueeze(2)
logvar = self.logvar(x) * tgt_masks.unsqueeze(2)
return mu, logvar | [
"def",
"forward",
"(",
"self",
",",
"tgt_sents",
",",
"tgt_masks",
",",
"src_enc",
",",
"src_masks",
")",
":",
"x",
"=",
"self",
".",
"embed_scale",
"*",
"self",
".",
"tgt_embed",
"(",
"tgt_sents",
")",
"x",
"=",
"F",
".",
"dropout2d",
"(",
"x",
",",... | https://github.com/XuezheMax/flowseq/blob/8cb4ae00c26fbeb3e1459e3b3b90e7e9a84c3d2b/flownmt/modules/posteriors/transformer.py#L35-L48 | |||
colour-science/colour | 38782ac059e8ddd91939f3432bf06811c16667f0 | colour/models/rgb/transfer_functions/canon_log.py | python | log_encoding_CanonLog | (x,
bit_depth=10,
out_normalised_code_value=True,
in_reflection=True) | return as_float(from_range_1(clog)) | Defines the *Canon Log* log encoding curve / opto-electronic transfer
function.
Parameters
----------
x : numeric or array_like
Linear data :math:`x`.
bit_depth : int, optional
Bit depth used for conversion.
out_normalised_code_value : bool, optional
Whether the *Canon Log* non-linear data is encoded as normalised code
values.
in_reflection : bool, optional
Whether the light level :math:`x` to a camera is reflection.
Returns
-------
numeric or ndarray
*Canon Log* non-linear data.
References
----------
:cite:`Thorpe2012a`
Notes
-----
+------------+-----------------------+---------------+
| **Domain** | **Scale - Reference** | **Scale - 1** |
+============+=======================+===============+
| ``x`` | [0, 1] | [0, 1] |
+------------+-----------------------+---------------+
+------------+-----------------------+---------------+
| **Range** | **Scale - Reference** | **Scale - 1** |
+============+=======================+===============+
| ``clog`` | [0, 1] | [0, 1] |
+------------+-----------------------+---------------+
Examples
--------
>>> log_encoding_CanonLog(0.18) * 100 # doctest: +ELLIPSIS
34.3389651...
The values of *Table 2 Canon-Log Code Values* table in :cite:`Thorpe2012a`
are obtained as follows:
>>> x = np.array([0, 2, 18, 90, 720]) / 100
>>> np.around(log_encoding_CanonLog(x) * (2 ** 10 - 1)).astype(np.int)
array([ 128, 169, 351, 614, 1016])
>>> np.around(log_encoding_CanonLog(x, 10, False) * 100, 1)
array([ 7.3, 12. , 32.8, 62.7, 108.7]) | Defines the *Canon Log* log encoding curve / opto-electronic transfer
function. | [
"Defines",
"the",
"*",
"Canon",
"Log",
"*",
"log",
"encoding",
"curve",
"/",
"opto",
"-",
"electronic",
"transfer",
"function",
"."
] | def log_encoding_CanonLog(x,
bit_depth=10,
out_normalised_code_value=True,
in_reflection=True):
"""
Defines the *Canon Log* log encoding curve / opto-electronic transfer
function.
Parameters
----------
x : numeric or array_like
Linear data :math:`x`.
bit_depth : int, optional
Bit depth used for conversion.
out_normalised_code_value : bool, optional
Whether the *Canon Log* non-linear data is encoded as normalised code
values.
in_reflection : bool, optional
Whether the light level :math:`x` to a camera is reflection.
Returns
-------
numeric or ndarray
*Canon Log* non-linear data.
References
----------
:cite:`Thorpe2012a`
Notes
-----
+------------+-----------------------+---------------+
| **Domain** | **Scale - Reference** | **Scale - 1** |
+============+=======================+===============+
| ``x`` | [0, 1] | [0, 1] |
+------------+-----------------------+---------------+
+------------+-----------------------+---------------+
| **Range** | **Scale - Reference** | **Scale - 1** |
+============+=======================+===============+
| ``clog`` | [0, 1] | [0, 1] |
+------------+-----------------------+---------------+
Examples
--------
>>> log_encoding_CanonLog(0.18) * 100 # doctest: +ELLIPSIS
34.3389651...
The values of *Table 2 Canon-Log Code Values* table in :cite:`Thorpe2012a`
are obtained as follows:
>>> x = np.array([0, 2, 18, 90, 720]) / 100
>>> np.around(log_encoding_CanonLog(x) * (2 ** 10 - 1)).astype(np.int)
array([ 128, 169, 351, 614, 1016])
>>> np.around(log_encoding_CanonLog(x, 10, False) * 100, 1)
array([ 7.3, 12. , 32.8, 62.7, 108.7])
"""
x = to_domain_1(x)
if in_reflection:
x = x / 0.9
with domain_range_scale('ignore'):
clog = np.where(
x < log_decoding_CanonLog(0.0730597, bit_depth, False),
-(0.529136 * (np.log10(-x * 10.1596 + 1)) - 0.0730597),
0.529136 * np.log10(10.1596 * x + 1) + 0.0730597,
)
clog = (full_to_legal(clog, bit_depth)
if out_normalised_code_value else clog)
return as_float(from_range_1(clog)) | [
"def",
"log_encoding_CanonLog",
"(",
"x",
",",
"bit_depth",
"=",
"10",
",",
"out_normalised_code_value",
"=",
"True",
",",
"in_reflection",
"=",
"True",
")",
":",
"x",
"=",
"to_domain_1",
"(",
"x",
")",
"if",
"in_reflection",
":",
"x",
"=",
"x",
"/",
"0.... | https://github.com/colour-science/colour/blob/38782ac059e8ddd91939f3432bf06811c16667f0/colour/models/rgb/transfer_functions/canon_log.py#L59-L133 | |
ilastik/ilastik | 6acd2c554bc517e9c8ddad3623a7aaa2e6970c28 | lazyflow/operators/opDetectMissingData.py | python | OpDetectMissing.predict | (cls, X, method="classic") | return np.asarray(y) | predict if the histograms in X correspond to missing regions
do this for subsets of X in parallel | predict if the histograms in X correspond to missing regions
do this for subsets of X in parallel | [
"predict",
"if",
"the",
"histograms",
"in",
"X",
"correspond",
"to",
"missing",
"regions",
"do",
"this",
"for",
"subsets",
"of",
"X",
"in",
"parallel"
] | def predict(cls, X, method="classic"):
"""
predict if the histograms in X correspond to missing regions
do this for subsets of X in parallel
"""
if cls._manager is None:
cls._manager = SVMManager()
assert len(X.shape) == 2, "Prediction data must have shape (nSamples, nHistogramBins)."
nBins = X.shape[1]
if method == "classic":
svm = PseudoSVC()
else:
try:
svm = cls._manager.get(nBins)
except SVMManager.NotTrainedError:
# fail gracefully if not trained => responsibility of user!
svm = PseudoSVC()
y = np.zeros((len(X),)) * np.nan
pool = RequestPool()
chunkSize = 1000 # FIXME magic number??
nChunks = len(X) // chunkSize + (1 if len(X) % chunkSize > 0 else 0)
s = [slice(k * chunkSize, min((k + 1) * chunkSize, len(X))) for k in range(nChunks)]
def partFun(i):
y[s[i]] = svm.predict(X[s[i]])
for i in range(nChunks):
req = Request(partial(partFun, i))
pool.add(req)
pool.wait()
pool.clean()
# not neccessary
# assert not np.any(np.isnan(y))
return np.asarray(y) | [
"def",
"predict",
"(",
"cls",
",",
"X",
",",
"method",
"=",
"\"classic\"",
")",
":",
"if",
"cls",
".",
"_manager",
"is",
"None",
":",
"cls",
".",
"_manager",
"=",
"SVMManager",
"(",
")",
"assert",
"len",
"(",
"X",
".",
"shape",
")",
"==",
"2",
",... | https://github.com/ilastik/ilastik/blob/6acd2c554bc517e9c8ddad3623a7aaa2e6970c28/lazyflow/operators/opDetectMissingData.py#L393-L436 | |
makerbot/ReplicatorG | d6f2b07785a5a5f1e172fb87cb4303b17c575d5d | skein_engines/skeinforge-47/fabmetheus_utilities/geometry/creation/extrude.py | python | getNewDerivation | (elementNode) | return ExtrudeDerivation(elementNode) | Get new derivation. | Get new derivation. | [
"Get",
"new",
"derivation",
"."
] | def getNewDerivation(elementNode):
'Get new derivation.'
return ExtrudeDerivation(elementNode) | [
"def",
"getNewDerivation",
"(",
"elementNode",
")",
":",
"return",
"ExtrudeDerivation",
"(",
"elementNode",
")"
] | https://github.com/makerbot/ReplicatorG/blob/d6f2b07785a5a5f1e172fb87cb4303b17c575d5d/skein_engines/skeinforge-47/fabmetheus_utilities/geometry/creation/extrude.py#L215-L217 | |
avidLearnerInProgress/python-automation-scripts | 859cbbf72571673500cfc0fbcf493beaed48b7c5 | quora-image-scraper/driverScrape.py | python | createDirectory | (folderName) | return directory | [] | def createDirectory(folderName):
directory = folderName + "Images"
system("mkdir " + directory)
system("cd " + directory)
return directory | [
"def",
"createDirectory",
"(",
"folderName",
")",
":",
"directory",
"=",
"folderName",
"+",
"\"Images\"",
"system",
"(",
"\"mkdir \"",
"+",
"directory",
")",
"system",
"(",
"\"cd \"",
"+",
"directory",
")",
"return",
"directory"
] | https://github.com/avidLearnerInProgress/python-automation-scripts/blob/859cbbf72571673500cfc0fbcf493beaed48b7c5/quora-image-scraper/driverScrape.py#L23-L27 | |||
exaile/exaile | a7b58996c5c15b3aa7b9975ac13ee8f784ef4689 | xlgui/widgets/playback.py | python | Marker.do_set_property | (self, gproperty, value) | Sets a GObject property | Sets a GObject property | [
"Sets",
"a",
"GObject",
"property"
] | def do_set_property(self, gproperty, value):
"""
Sets a GObject property
"""
try:
self.__values[gproperty.name] = value
except KeyError:
raise AttributeError('unknown property %s' % gproperty.name)
"def",
"do_set_property",
"(",
"self",
",",
"gproperty",
",",
"value",
")",
":",
"try",
":",
"self",
".",
"__values",
"[",
"gproperty",
".",
"name",
"]",
"=",
"value",
"except",
"KeyError",
":",
"raise",
"AttributeError",
"(",
"'unknown property %s'",
"%",
... | https://github.com/exaile/exaile/blob/a7b58996c5c15b3aa7b9975ac13ee8f784ef4689/xlgui/widgets/playback.py#L283-L290 | ||
kubernetes-client/python | 47b9da9de2d02b2b7a34fbe05afb44afd130d73a | kubernetes/client/models/v1_volume.py | python | V1Volume.git_repo | (self, git_repo) | Sets the git_repo of this V1Volume.
:param git_repo: The git_repo of this V1Volume. # noqa: E501
:type: V1GitRepoVolumeSource | Sets the git_repo of this V1Volume. | [
"Sets",
"the",
"git_repo",
"of",
"this",
"V1Volume",
"."
] | def git_repo(self, git_repo):
"""Sets the git_repo of this V1Volume.
:param git_repo: The git_repo of this V1Volume. # noqa: E501
:type: V1GitRepoVolumeSource
"""
self._git_repo = git_repo | [
"def",
"git_repo",
"(",
"self",
",",
"git_repo",
")",
":",
"self",
".",
"_git_repo",
"=",
"git_repo"
] | https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/models/v1_volume.py#L504-L512 | ||
JiYou/openstack | 8607dd488bde0905044b303eb6e52bdea6806923 | packages/source/quantum/quantum/quantum_plugin_base_v2.py | python | QuantumPluginBaseV2.create_port | (self, context, port) | Create a port, which is a connection point of a device (e.g., a VM
NIC) to attach to a L2 Quantum network.
: param context: quantum api request context
: param port: dictionary describing the port, with keys
as listed in the RESOURCE_ATTRIBUTE_MAP object in
quantum/api/v2/attributes.py. All keys will be populated. | Create a port, which is a connection point of a device (e.g., a VM
NIC) to attach to a L2 Quantum network.
: param context: quantum api request context
: param port: dictionary describing the port, with keys
as listed in the RESOURCE_ATTRIBUTE_MAP object in
quantum/api/v2/attributes.py. All keys will be populated. | [
"Create",
"a",
"port",
"which",
"is",
"a",
"connection",
"point",
"of",
"a",
"device",
"(",
"e",
".",
"g",
".",
"a",
"VM",
"NIC",
")",
"to",
"attach",
"to",
"a",
"L2",
"Quantum",
"network",
".",
":",
"param",
"context",
":",
"quantum",
"api",
"requ... | def create_port(self, context, port):
"""
Create a port, which is a connection point of a device (e.g., a VM
NIC) to attach to a L2 Quantum network.
: param context: quantum api request context
: param port: dictionary describing the port, with keys
as listed in the RESOURCE_ATTRIBUTE_MAP object in
quantum/api/v2/attributes.py. All keys will be populated.
"""
pass | [
"def",
"create_port",
"(",
"self",
",",
"context",
",",
"port",
")",
":",
"pass"
] | https://github.com/JiYou/openstack/blob/8607dd488bde0905044b303eb6e52bdea6806923/packages/source/quantum/quantum/quantum_plugin_base_v2.py#L210-L219 | ||
replit-archive/empythoned | 977ec10ced29a3541a4973dc2b59910805695752 | dist/lib/python2.7/encodings/__init__.py | python | normalize_encoding | (encoding) | return '_'.join(encoding.translate(_norm_encoding_map).split()) | Normalize an encoding name.
Normalization works as follows: all non-alphanumeric
characters except the dot used for Python package names are
collapsed and replaced with a single underscore, e.g. ' -;#'
becomes '_'. Leading and trailing underscores are removed.
Note that encoding names should be ASCII only; if they do use
non-ASCII characters, these must be Latin-1 compatible. | Normalize an encoding name. | [
"Normalize",
"an",
"encoding",
"name",
"."
] | def normalize_encoding(encoding):
""" Normalize an encoding name.
Normalization works as follows: all non-alphanumeric
characters except the dot used for Python package names are
collapsed and replaced with a single underscore, e.g. ' -;#'
becomes '_'. Leading and trailing underscores are removed.
Note that encoding names should be ASCII only; if they do use
non-ASCII characters, these must be Latin-1 compatible.
"""
# Make sure we have an 8-bit string, because .translate() works
# differently for Unicode strings.
if hasattr(__builtin__, "unicode") and isinstance(encoding, unicode):
# Note that .encode('latin-1') does *not* use the codec
# registry, so this call doesn't recurse. (See unicodeobject.c
# PyUnicode_AsEncodedString() for details)
encoding = encoding.encode('latin-1')
return '_'.join(encoding.translate(_norm_encoding_map).split()) | [
"def",
"normalize_encoding",
"(",
"encoding",
")",
":",
"# Make sure we have an 8-bit string, because .translate() works",
"# differently for Unicode strings.",
"if",
"hasattr",
"(",
"__builtin__",
",",
"\"unicode\"",
")",
"and",
"isinstance",
"(",
"encoding",
",",
"unicode",... | https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/dist/lib/python2.7/encodings/__init__.py#L49-L69 | |
respeaker/get_started_with_respeaker | ec859759fcec7e683a5e09328a8ea307046f353d | files/usr/lib/python2.7/site-packages/tornado/curl_httpclient.py | python | CurlAsyncHTTPClient._handle_socket | (self, event, fd, multi, data) | Called by libcurl when it wants to change the file descriptors
it cares about. | Called by libcurl when it wants to change the file descriptors
it cares about. | [
"Called",
"by",
"libcurl",
"when",
"it",
"wants",
"to",
"change",
"the",
"file",
"descriptors",
"it",
"cares",
"about",
"."
] | def _handle_socket(self, event, fd, multi, data):
"""Called by libcurl when it wants to change the file descriptors
it cares about.
"""
event_map = {
pycurl.POLL_NONE: ioloop.IOLoop.NONE,
pycurl.POLL_IN: ioloop.IOLoop.READ,
pycurl.POLL_OUT: ioloop.IOLoop.WRITE,
pycurl.POLL_INOUT: ioloop.IOLoop.READ | ioloop.IOLoop.WRITE
}
if event == pycurl.POLL_REMOVE:
if fd in self._fds:
self.io_loop.remove_handler(fd)
del self._fds[fd]
else:
ioloop_event = event_map[event]
if fd not in self._fds:
self.io_loop.add_handler(fd, self._handle_events,
ioloop_event)
self._fds[fd] = ioloop_event
else:
self.io_loop.update_handler(fd, ioloop_event)
self._fds[fd] = ioloop_event | [
"def",
"_handle_socket",
"(",
"self",
",",
"event",
",",
"fd",
",",
"multi",
",",
"data",
")",
":",
"event_map",
"=",
"{",
"pycurl",
".",
"POLL_NONE",
":",
"ioloop",
".",
"IOLoop",
".",
"NONE",
",",
"pycurl",
".",
"POLL_IN",
":",
"ioloop",
".",
"IOLo... | https://github.com/respeaker/get_started_with_respeaker/blob/ec859759fcec7e683a5e09328a8ea307046f353d/files/usr/lib/python2.7/site-packages/tornado/curl_httpclient.py#L98-L120 | ||
getpelican/pelican | 0384c9bc071dd82b9dbe29fb73521587311bfc84 | pelican/readers.py | python | HTMLReader.read | (self, filename) | return parser.body, metadata | Parse content and metadata of HTML files | Parse content and metadata of HTML files | [
"Parse",
"content",
"and",
"metadata",
"of",
"HTML",
"files"
] | def read(self, filename):
"""Parse content and metadata of HTML files"""
with pelican_open(filename) as content:
parser = self._HTMLParser(self.settings, filename)
parser.feed(content)
parser.close()
metadata = {}
for k in parser.metadata:
metadata[k] = self.process_metadata(k, parser.metadata[k])
return parser.body, metadata | [
"def",
"read",
"(",
"self",
",",
"filename",
")",
":",
"with",
"pelican_open",
"(",
"filename",
")",
"as",
"content",
":",
"parser",
"=",
"self",
".",
"_HTMLParser",
"(",
"self",
".",
"settings",
",",
"filename",
")",
"parser",
".",
"feed",
"(",
"conte... | https://github.com/getpelican/pelican/blob/0384c9bc071dd82b9dbe29fb73521587311bfc84/pelican/readers.py#L481-L491 | |
wagtail/wagtail | ba8207a5d82c8a1de8f5f9693a7cd07421762999 | wagtail/core/blocks/base.py | python | Block.get_default | (self) | return self.meta.default | Return this block's default value (conventionally found in self.meta.default),
converted to the value type expected by this block. This caters for the case
where that value type is not something that can be expressed statically at
model definition type (e.g. something like StructValue which incorporates a
pointer back to the block definition object). | Return this block's default value (conventionally found in self.meta.default),
converted to the value type expected by this block. This caters for the case
where that value type is not something that can be expressed statically at
model definition type (e.g. something like StructValue which incorporates a
pointer back to the block definition object). | [
"Return",
"this",
"block",
"s",
"default",
"value",
"(",
"conventionally",
"found",
"in",
"self",
".",
"meta",
".",
"default",
")",
"converted",
"to",
"the",
"value",
"type",
"expected",
"by",
"this",
"block",
".",
"This",
"caters",
"for",
"the",
"case",
... | def get_default(self):
"""
Return this block's default value (conventionally found in self.meta.default),
converted to the value type expected by this block. This caters for the case
where that value type is not something that can be expressed statically at
model definition type (e.g. something like StructValue which incorporates a
pointer back to the block definition object).
"""
return self.meta.default | [
"def",
"get_default",
"(",
"self",
")",
":",
"return",
"self",
".",
"meta",
".",
"default"
] | https://github.com/wagtail/wagtail/blob/ba8207a5d82c8a1de8f5f9693a7cd07421762999/wagtail/core/blocks/base.py#L129-L137 | |
zzw922cn/Automatic_Speech_Recognition | 3fe0514d1377a16170923ff1ada6f808e907fcba | speechvalley/feature/madarian/preprocess.py | python | DigitPrecessor.processFile | (self, fileName) | return result | [] | def processFile(self, fileName):
result = []
assert os.path.isfile(fileName), "Wrong file path: %s" % str(fileName)
with codecs.open(fileName,'r','utf-8') as f:
content=f.readlines()
if self.mode == 'digit2char':
for string in content:
result.append(convertDigit2Character(string))
else:
for string in content:
result.append(convertCharacter2Digit(string))
return result | [
"def",
"processFile",
"(",
"self",
",",
"fileName",
")",
":",
"result",
"=",
"[",
"]",
"assert",
"os",
".",
"path",
".",
"isfile",
"(",
"fileName",
")",
",",
"\"Wrong file path: %s\"",
"%",
"str",
"(",
"fileName",
")",
"with",
"codecs",
".",
"open",
"(... | https://github.com/zzw922cn/Automatic_Speech_Recognition/blob/3fe0514d1377a16170923ff1ada6f808e907fcba/speechvalley/feature/madarian/preprocess.py#L25-L36 | |||
ilovin/lstm_ctc_ocr | 6c753df22e7c1bab40ce2170e9a11e7b3868cf80 | lib/lstm/config.py | python | cfg_from_file | (filename) | Load a config file and merge it into the default options. | Load a config file and merge it into the default options. | [
"Load",
"a",
"config",
"file",
"and",
"merge",
"it",
"into",
"the",
"default",
"options",
"."
] | def cfg_from_file(filename):
"""Load a config file and merge it into the default options."""
import yaml
with open(filename, 'r') as f:
yaml_cfg = edict(yaml.load(f))
_merge_a_into_b(yaml_cfg, __C) | [
"def",
"cfg_from_file",
"(",
"filename",
")",
":",
"import",
"yaml",
"with",
"open",
"(",
"filename",
",",
"'r'",
")",
"as",
"f",
":",
"yaml_cfg",
"=",
"edict",
"(",
"yaml",
".",
"load",
"(",
"f",
")",
")",
"_merge_a_into_b",
"(",
"yaml_cfg",
",",
"_... | https://github.com/ilovin/lstm_ctc_ocr/blob/6c753df22e7c1bab40ce2170e9a11e7b3868cf80/lib/lstm/config.py#L128-L134 | ||
asappresearch/flambe | 98f10f859fe9223fd2d1d76d430f77cdbddc0956 | flambe/metric/dev/binary.py | python | BinaryRecall.__str__ | (self) | return f"{invert_label}{self.__class__.__name__}" | Return the name of the Metric (for use in logging). | Return the name of the Metric (for use in logging). | [
"Return",
"the",
"name",
"of",
"the",
"Metric",
"(",
"for",
"use",
"in",
"logging",
")",
"."
] | def __str__(self) -> str:
"""Return the name of the Metric (for use in logging)."""
invert_label = "Negative" if self.positive_label == 0 else "Positive"
return f"{invert_label}{self.__class__.__name__}" | [
"def",
"__str__",
"(",
"self",
")",
"->",
"str",
":",
"invert_label",
"=",
"\"Negative\"",
"if",
"self",
".",
"positive_label",
"==",
"0",
"else",
"\"Positive\"",
"return",
"f\"{invert_label}{self.__class__.__name__}\""
] | https://github.com/asappresearch/flambe/blob/98f10f859fe9223fd2d1d76d430f77cdbddc0956/flambe/metric/dev/binary.py#L247-L250 | |
nopernik/mpDNS | b17dc39e7068406df82cb3431b3042e74e520cf9 | circuits/core/bridge.py | python | Bridge.__send | (self, eid, event) | [] | def __send(self, eid, event):
try:
if isinstance(event, exception):
Bridge.__adapt_exception(event)
self._values[eid] = event.value
self.__write(eid, event)
except:
pass | [
"def",
"__send",
"(",
"self",
",",
"eid",
",",
"event",
")",
":",
"try",
":",
"if",
"isinstance",
"(",
"event",
",",
"exception",
")",
":",
"Bridge",
".",
"__adapt_exception",
"(",
"event",
")",
"self",
".",
"_values",
"[",
"eid",
"]",
"=",
"event",
... | https://github.com/nopernik/mpDNS/blob/b17dc39e7068406df82cb3431b3042e74e520cf9/circuits/core/bridge.py#L98-L105 | ||||
deepgully/me | f7ad65edc2fe435310c6676bc2e322cfe5d4c8f0 | libs/werkzeug/wrappers.py | python | BaseRequest.want_form_data_parsed | (self) | return self.environ['REQUEST_METHOD'] in ('POST', 'PUT', 'PATCH') | Returns True if the request method is ``POST``, ``PUT`` or
``PATCH``. Can be overridden to support other HTTP methods that
should carry form data.
.. versionadded:: 0.8 | Returns True if the request method is ``POST``, ``PUT`` or
``PATCH``. Can be overridden to support other HTTP methods that
should carry form data. | [
"Returns",
"True",
"if",
"the",
"request",
"method",
"is",
"POST",
"PUT",
"or",
"PATCH",
".",
"Can",
"be",
"overridden",
"to",
"support",
"other",
"HTTP",
"methods",
"that",
"should",
"carry",
"form",
"data",
"."
] | def want_form_data_parsed(self):
"""Returns True if the request method is ``POST``, ``PUT`` or
``PATCH``. Can be overridden to support other HTTP methods that
should carry form data.
.. versionadded:: 0.8
"""
return self.environ['REQUEST_METHOD'] in ('POST', 'PUT', 'PATCH') | [
"def",
"want_form_data_parsed",
"(",
"self",
")",
":",
"return",
"self",
".",
"environ",
"[",
"'REQUEST_METHOD'",
"]",
"in",
"(",
"'POST'",
",",
"'PUT'",
",",
"'PATCH'",
")"
] | https://github.com/deepgully/me/blob/f7ad65edc2fe435310c6676bc2e322cfe5d4c8f0/libs/werkzeug/wrappers.py#L277-L284 | |
vrenkens/tfkaldi | 30e8f7a32582a82a58cea66c2c52bcb66c06c326 | processing/ark.py | python | ArkWriter.write_next_utt | (self, utt_id, utt_mat, ark_path=None) | | | write an utterance to the archive
Args:
ark_path: path to the .ark file that will be used for writing
utt_id: the utterance ID
utt_mat: a numpy array containing the utterance data | write an utterance to the archive | [
"write",
"an",
"utterance",
"to",
"the",
"archive"
] | def write_next_utt(self, utt_id, utt_mat, ark_path=None):
'''
write an utterance to the archive
Args:
ark_path: path to the .ark file that will be used for writing
utt_id: the utterance ID
utt_mat: a numpy array containing the utterance data
'''
ark = ark_path or self.default_ark
ark_file_write = open(ark, 'ab')
utt_mat = np.asarray(utt_mat, dtype=np.float32)
rows, cols = utt_mat.shape
ark_file_write.write(struct.pack('<%ds'%(len(utt_id)), utt_id))
pos = ark_file_write.tell()
ark_file_write.write(struct.pack('<xcccc', 'B', 'F', 'M', ' '))
ark_file_write.write(struct.pack('<bi', 4, rows))
ark_file_write.write(struct.pack('<bi', 4, cols))
ark_file_write.write(utt_mat)
self.scp_file_write.write('%s %s:%s\n' % (utt_id, ark, pos))
ark_file_write.close() | [
"def",
"write_next_utt",
"(",
"self",
",",
"utt_id",
",",
"utt_mat",
",",
"ark_path",
"=",
"None",
")",
":",
"ark",
"=",
"ark_path",
"or",
"self",
".",
"default_ark",
"ark_file_write",
"=",
"open",
"(",
"ark",
",",
"'ab'",
")",
"utt_mat",
"=",
"np",
".... | https://github.com/vrenkens/tfkaldi/blob/30e8f7a32582a82a58cea66c2c52bcb66c06c326/processing/ark.py#L190-L211 | ||
drj11/pypng | 1b5bc2d7e7a068a2f0b41bfce282349748bf133b | code/plan9topng.py | python | bitdepthof | (pixel) | return maxd | Return the bitdepth for a Plan9 pixel format string. | Return the bitdepth for a Plan9 pixel format string. | [
"Return",
"the",
"bitdepth",
"for",
"a",
"Plan9",
"pixel",
"format",
"string",
"."
] | def bitdepthof(pixel):
"""Return the bitdepth for a Plan9 pixel format string."""
maxd = 0
for c in re.findall(r'[a-z]\d*', pixel):
if c[0] != 'x':
maxd = max(maxd, int(c[1:]))
return maxd | [
"def",
"bitdepthof",
"(",
"pixel",
")",
":",
"maxd",
"=",
"0",
"for",
"c",
"in",
"re",
".",
"findall",
"(",
"r'[a-z]\\d*'",
",",
"pixel",
")",
":",
"if",
"c",
"[",
"0",
"]",
"!=",
"'x'",
":",
"maxd",
"=",
"max",
"(",
"maxd",
",",
"int",
"(",
... | https://github.com/drj11/pypng/blob/1b5bc2d7e7a068a2f0b41bfce282349748bf133b/code/plan9topng.py#L68-L75 | |
KalleHallden/AutoTimer | 2d954216700c4930baa154e28dbddc34609af7ce | env/lib/python2.7/site-packages/pip/_vendor/urllib3/response.py | python | _get_decoder | (mode) | return DeflateDecoder() | [] | def _get_decoder(mode):
if ',' in mode:
return MultiDecoder(mode)
if mode == 'gzip':
return GzipDecoder()
if brotli is not None and mode == 'br':
return BrotliDecoder()
return DeflateDecoder() | [
"def",
"_get_decoder",
"(",
"mode",
")",
":",
"if",
"','",
"in",
"mode",
":",
"return",
"MultiDecoder",
"(",
"mode",
")",
"if",
"mode",
"==",
"'gzip'",
":",
"return",
"GzipDecoder",
"(",
")",
"if",
"brotli",
"is",
"not",
"None",
"and",
"mode",
"==",
... | https://github.com/KalleHallden/AutoTimer/blob/2d954216700c4930baa154e28dbddc34609af7ce/env/lib/python2.7/site-packages/pip/_vendor/urllib3/response.py#L138-L148 | |||
shmilylty/OneForAll | 48591142a641e80f8a64ab215d11d06b696702d7 | common/utils.py | python | check_response | (method, resp) | | return False | Check the response; log the JSON info returned by abnormal responses
:param method: request method
:param resp: response body
:return: whether the response is normal | Check the response; log the JSON info returned by abnormal responses | [
"Check the response",
"log the JSON info returned by abnormal responses"
] | def check_response(method, resp):
"""
Check the response; log the JSON info returned by abnormal responses
:param method: request method
:param resp: response body
:return: whether the response is normal
"""
if resp.status_code == 200 and resp.content:
return True
logger.log('ALERT', f'{method} {resp.url} {resp.status_code} - '
f'{resp.reason} {len(resp.content)}')
content_type = resp.headers.get('Content-Type')
if content_type and 'json' in content_type and resp.content:
try:
msg = resp.json()
except Exception as e:
logger.log('DEBUG', e.args)
else:
logger.log('ALERT', msg)
return False | [
"def",
"check_response",
"(",
"method",
",",
"resp",
")",
":",
"if",
"resp",
".",
"status_code",
"==",
"200",
"and",
"resp",
".",
"content",
":",
"return",
"True",
"logger",
".",
"log",
"(",
"'ALERT'",
",",
"f'{method} {resp.url} {resp.status_code} - '",
"f'{r... | https://github.com/shmilylty/OneForAll/blob/48591142a641e80f8a64ab215d11d06b696702d7/common/utils.py#L263-L283 | |
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/tensor/modules/free_module_element.py | python | FiniteRankFreeModuleElement._repr_ | (self) | return description | r"""
Return a string representation of ``self``.
EXAMPLES::
sage: M = FiniteRankFreeModule(ZZ, 3, name='M')
sage: e = M.basis('e')
sage: M([1,-2,3], name='v')
Element v of the Rank-3 free module M over the Integer Ring | r"""
Return a string representation of ``self``. | [
"r",
"Return",
"a",
"string",
"representation",
"of",
"self",
"."
] | def _repr_(self):
r"""
Return a string representation of ``self``.
EXAMPLES::
sage: M = FiniteRankFreeModule(ZZ, 3, name='M')
sage: e = M.basis('e')
sage: M([1,-2,3], name='v')
Element v of the Rank-3 free module M over the Integer Ring
"""
description = "Element "
if self._name is not None:
description += self._name + " "
description += "of the {}".format(self._fmodule)
return description | [
"def",
"_repr_",
"(",
"self",
")",
":",
"description",
"=",
"\"Element \"",
"if",
"self",
".",
"_name",
"is",
"not",
"None",
":",
"description",
"+=",
"self",
".",
"_name",
"+",
"\" \"",
"description",
"+=",
"\"of the {}\"",
".",
"format",
"(",
"self",
"... | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/tensor/modules/free_module_element.py#L212-L228 | |
fonttools/fonttools | 892322aaff6a89bea5927379ec06bc0da3dfb7df | Lib/fontTools/ttLib/tables/D_S_I_G_.py | python | SignatureRecord.__repr__ | (self) | return "<%s: %s>" % (self.__class__.__name__, self.__dict__) | [] | def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self.__dict__) | [
"def",
"__repr__",
"(",
"self",
")",
":",
"return",
"\"<%s: %s>\"",
"%",
"(",
"self",
".",
"__class__",
".",
"__name__",
",",
"self",
".",
"__dict__",
")"
] | https://github.com/fonttools/fonttools/blob/892322aaff6a89bea5927379ec06bc0da3dfb7df/Lib/fontTools/ttLib/tables/D_S_I_G_.py#L114-L115 | |||
neurokernel/neurokernel | e21e4aece1e8551dfa206d96e8ffd113beb10a85 | neurokernel/pattern.py | python | Interface.which_int | (self, s) | Return the interface containing the identifiers comprised by a selector.
Parameters
----------
selector : str or unicode
Port selector.
Returns
-------
i : set
Set of identifiers for interfaces that contain ports comprised by
the selector. | Return the interface containing the identifiers comprised by a selector. | [
"Return",
"the",
"interface",
"containing",
"the",
"identifiers",
"comprised",
"by",
"a",
"selector",
"."
] | def which_int(self, s):
"""
Return the interface containing the identifiers comprised by a selector.
Parameters
----------
selector : str or unicode
Port selector.
Returns
-------
i : set
Set of identifiers for interfaces that contain ports comprised by
the selector.
"""
try:
idx = self.sel.expand(s, self.idx_levels)
if not isinstance(self.data.index, pd.MultiIndex):
idx = [x[0] for x in idx]
d = self.data['interface'].loc[idx]
s = set(d)
s.discard(np.nan)
return s
except:
try:
s = set(self[s, 'interface'].values.flatten())
# Ignore unset entries:
s.discard(np.nan)
return s
except KeyError:
return set() | [
"def",
"which_int",
"(",
"self",
",",
"s",
")",
":",
"try",
":",
"idx",
"=",
"self",
".",
"sel",
".",
"expand",
"(",
"s",
",",
"self",
".",
"idx_levels",
")",
"if",
"not",
"isinstance",
"(",
"self",
".",
"data",
".",
"index",
",",
"pd",
".",
"M... | https://github.com/neurokernel/neurokernel/blob/e21e4aece1e8551dfa206d96e8ffd113beb10a85/neurokernel/pattern.py#L933-L965 | ||
homles11/IGCV3 | 83aba2f34702836cc1b82350163909034cd9b553 | detection/symbol/resnet.py | python | resnet | (units, num_stages, filter_list, num_classes, image_shape, bottle_neck=True, bn_mom=0.9, workspace=256, memonger=False) | return mx.symbol.SoftmaxOutput(data=fc1, name='softmax') | Return ResNet symbol of
Parameters
----------
units : list
Number of units in each stage
num_stages : int
Number of stage
filter_list : list
Channel size of each stage
num_classes : int
Output size of symbol
dataset : str
Dataset type; only cifar10 and imagenet are supported
workspace : int
Workspace used in convolution operator | Return ResNet symbol of
Parameters
----------
units : list
Number of units in each stage
num_stages : int
Number of stage
filter_list : list
Channel size of each stage
num_classes : int
Output size of symbol
dataset : str
Dataset type; only cifar10 and imagenet are supported
workspace : int
Workspace used in convolution operator | [
"Return",
"ResNet",
"symbol",
"of",
"Parameters",
"----------",
"units",
":",
"list",
"Number",
"of",
"units",
"in",
"each",
"stage",
"num_stages",
":",
"int",
"Number",
"of",
"stage",
"filter_list",
":",
"list",
"Channel",
"size",
"of",
"each",
"stage",
"nu... | def resnet(units, num_stages, filter_list, num_classes, image_shape, bottle_neck=True, bn_mom=0.9, workspace=256, memonger=False):
"""Return ResNet symbol of
Parameters
----------
units : list
Number of units in each stage
num_stages : int
Number of stage
filter_list : list
Channel size of each stage
num_classes : int
Output size of symbol
dataset : str
Dataset type; only cifar10 and imagenet are supported
workspace : int
Workspace used in convolution operator
"""
num_unit = len(units)
assert(num_unit == num_stages)
data = mx.sym.Variable(name='data')
data = mx.sym.identity(data=data, name='id')
data = mx.sym.BatchNorm(data=data, fix_gamma=True, eps=2e-5, momentum=bn_mom, name='bn_data')
(nchannel, height, width) = image_shape
if height <= 32: # such as cifar10
body = mx.sym.Convolution(data=data, num_filter=filter_list[0], kernel=(3, 3), stride=(1,1), pad=(1, 1),
no_bias=True, name="conv0", workspace=workspace)
else: # often expected to be 224 such as imagenet
body = mx.sym.Convolution(data=data, num_filter=filter_list[0], kernel=(7, 7), stride=(2,2), pad=(3, 3),
no_bias=True, name="conv0", workspace=workspace)
body = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bn0')
body = mx.sym.Activation(data=body, act_type='relu', name='relu0')
body = mx.symbol.Pooling(data=body, kernel=(3, 3), stride=(2,2), pad=(1,1), pool_type='max')
for i in range(num_stages):
body = residual_unit(body, filter_list[i+1], (1 if i==0 else 2, 1 if i==0 else 2), False,
name='stage%d_unit%d' % (i + 1, 1), bottle_neck=bottle_neck, workspace=workspace,
memonger=memonger)
for j in range(units[i]-1):
body = residual_unit(body, filter_list[i+1], (1,1), True, name='stage%d_unit%d' % (i + 1, j + 2),
bottle_neck=bottle_neck, workspace=workspace, memonger=memonger)
bn1 = mx.sym.BatchNorm(data=body, fix_gamma=False, eps=2e-5, momentum=bn_mom, name='bn1')
relu1 = mx.sym.Activation(data=bn1, act_type='relu', name='relu1')
# Although kernel is not used here when global_pool=True, we should put one
pool1 = mx.symbol.Pooling(data=relu1, global_pool=True, kernel=(7, 7), pool_type='avg', name='pool1')
flat = mx.symbol.Flatten(data=pool1)
fc1 = mx.symbol.FullyConnected(data=flat, num_hidden=num_classes, name='fc1')
return mx.symbol.SoftmaxOutput(data=fc1, name='softmax') | [
"def",
"resnet",
"(",
"units",
",",
"num_stages",
",",
"filter_list",
",",
"num_classes",
",",
"image_shape",
",",
"bottle_neck",
"=",
"True",
",",
"bn_mom",
"=",
"0.9",
",",
"workspace",
"=",
"256",
",",
"memonger",
"=",
"False",
")",
":",
"num_unit",
"... | https://github.com/homles11/IGCV3/blob/83aba2f34702836cc1b82350163909034cd9b553/detection/symbol/resnet.py#L70-L116 | |
gcollazo/BrowserRefresh-Sublime | daee0eda6480c07f8636ed24e5c555d24e088886 | win/pywinauto/controls/Accessability win32_controls.py | python | ListBoxWrapper.SetItemFocus | (self, item) | return self | Set the ListBox focus to the item at index | Set the ListBox focus to the item at index | [
"Set",
"the",
"ListBox",
"focus",
"to",
"the",
"item",
"at",
"index"
] | def SetItemFocus(self, item):
"Set the ListBox focus to the item at index"
index = self._get_item_index(item)
# if it is a multiple selection dialog
if self.HasStyle(win32defines.LBS_EXTENDEDSEL) or \
self.HasStyle(win32defines.LBS_MULTIPLESEL):
self.SendMessageTimeout(win32defines.LB_SETCARETINDEX, index)
else:
self.SendMessageTimeout(win32defines.LB_SETCURSEL, index)
win32functions.WaitGuiThreadIdle(self)
time.sleep(Timings.after_listboxfocuschange_wait)
# return this control so that actions can be chained.
return self | [
"def",
"SetItemFocus",
"(",
"self",
",",
"item",
")",
":",
"index",
"=",
"self",
".",
"_get_item_index",
"(",
"item",
")",
"# if it is a multiple selection dialog",
"if",
"self",
".",
"HasStyle",
"(",
"win32defines",
".",
"LBS_EXTENDEDSEL",
")",
"or",
"self",
... | https://github.com/gcollazo/BrowserRefresh-Sublime/blob/daee0eda6480c07f8636ed24e5c555d24e088886/win/pywinauto/controls/Accessability win32_controls.py#L503-L519 | |
replit-archive/empythoned | 977ec10ced29a3541a4973dc2b59910805695752 | cpython/Lib/inspect.py | python | getfile | (object) | Work out which source or compiled file an object was defined in. | Work out which source or compiled file an object was defined in. | [
"Work",
"out",
"which",
"source",
"or",
"compiled",
"file",
"an",
"object",
"was",
"defined",
"in",
"."
] | def getfile(object):
"""Work out which source or compiled file an object was defined in."""
if ismodule(object):
if hasattr(object, '__file__'):
return object.__file__
raise TypeError('{!r} is a built-in module'.format(object))
if isclass(object):
object = sys.modules.get(object.__module__)
if hasattr(object, '__file__'):
return object.__file__
raise TypeError('{!r} is a built-in class'.format(object))
if ismethod(object):
object = object.im_func
if isfunction(object):
object = object.func_code
if istraceback(object):
object = object.tb_frame
if isframe(object):
object = object.f_code
if iscode(object):
return object.co_filename
raise TypeError('{!r} is not a module, class, method, '
'function, traceback, frame, or code object'.format(object)) | [
"def",
"getfile",
"(",
"object",
")",
":",
"if",
"ismodule",
"(",
"object",
")",
":",
"if",
"hasattr",
"(",
"object",
",",
"'__file__'",
")",
":",
"return",
"object",
".",
"__file__",
"raise",
"TypeError",
"(",
"'{!r} is a built-in module'",
".",
"format",
... | https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/cpython/Lib/inspect.py#L400-L422 | ||
ifwe/digsby | f5fe00244744aa131e07f09348d10563f3d8fa99 | digsby/src/mail/passport.py | python | _handle_token_request_error | (soapexc, args, callback) | <S:Envelope>
<S:Fault>
<faultcode>
psf:Redirect
</faultcode>
<psf:redirectUrl>
https://msnia.login.live.com/pp550/RST.srf
</psf:redirectUrl>
<faultstring>
Authentication Failure
</faultstring>
</S:Fault>
</S:Envelope> | <S:Envelope>
<S:Fault>
<faultcode>
psf:Redirect
</faultcode>
<psf:redirectUrl>
https://msnia.login.live.com/pp550/RST.srf
</psf:redirectUrl>
<faultstring>
Authentication Failure
</faultstring>
</S:Fault>
</S:Envelope> | [
"<S",
":",
"Envelope",
">",
"<S",
":",
"Fault",
">",
"<faultcode",
">",
"psf",
":",
"Redirect",
"<",
"/",
"faultcode",
">",
"<psf",
":",
"redirectUrl",
">",
"https",
":",
"//",
"msnia",
".",
"login",
".",
"live",
".",
"com",
"/",
"pp550",
"/",
"RST... | def _handle_token_request_error(soapexc, args, callback):
'''
<S:Envelope>
<S:Fault>
<faultcode>
psf:Redirect
</faultcode>
<psf:redirectUrl>
https://msnia.login.live.com/pp550/RST.srf
</psf:redirectUrl>
<faultstring>
Authentication Failure
</faultstring>
</S:Fault>
</S:Envelope>
'''
if not isinstance(soapexc, util.xml_tag.SOAPException):
import sys
print >>sys.stderr, soapexc
import traceback;traceback.print_exc()
callback.error(soapexc)
elif 'Redirect' in str(soapexc.fault.faultcode):
redirect_url = soapexc.fault.redirectUrl._cdata
request_tokens(callback=callback, url=redirect_url, *args)
else:
log.error('Exception when requesting tokens. here\'s the response XML: %r', soapexc.t._to_xml())
callback.error(soapexc) | [
"def",
"_handle_token_request_error",
"(",
"soapexc",
",",
"args",
",",
"callback",
")",
":",
"if",
"not",
"isinstance",
"(",
"soapexc",
",",
"util",
".",
"xml_tag",
".",
"SOAPException",
")",
":",
"import",
"sys",
"print",
">>",
"sys",
".",
"stderr",
",",... | https://github.com/ifwe/digsby/blob/f5fe00244744aa131e07f09348d10563f3d8fa99/digsby/src/mail/passport.py#L243-L271 | ||
repo: AppScale/gts
path: AppServer/google/appengine/api/datastore_entities.py
identifier: GdKind._LeftoverPropertiesToXml

```python
def _LeftoverPropertiesToXml(self):
    """ Convert all of this entity's properties that *aren't* part of this gd
    kind to XML.

    Returns:
      string  # the XML representation of the leftover properties
    """
    leftovers = set(self.keys())
    leftovers -= self._kind_properties
    leftovers -= self._contact_properties
    if leftovers:
        return u'\n ' + '\n '.join(self._PropertiesToXml(leftovers))
    else:
        return u''
```

url: https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/google/appengine/api/datastore_entities.py#L156-L169
repo: holzschu/Carnets
path: Library/lib/python3.7/site-packages/pandas-0.24.2-py3.7-macosx-10.9-x86_64.egg/pandas/core/arrays/categorical.py
identifier: Categorical.__contains__

```python
def __contains__(self, key):
    """
    Returns True if `key` is in this Categorical.
    """
    # if key is a NaN, check if any NaN is in self.
    if isna(key):
        return self.isna().any()
    return contains(self, key, container=self._codes)
```

url: https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/pandas-0.24.2-py3.7-macosx-10.9-x86_64.egg/pandas/core/arrays/categorical.py#L1932-L1940
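`Categorical.__contains__` above special-cases NaN because `NaN != NaN`, so a plain membership test can miss it. A minimal standalone sketch of the same pattern in plain Python, no pandas required (`nan_aware_contains` is an illustrative name, not a pandas API):

```python
import math

def nan_aware_contains(values, key):
    # A NaN key matches any NaN element; plain `in` would miss it because
    # NaN != NaN (unless the exact same object happens to be in the list).
    if isinstance(key, float) and math.isnan(key):
        return any(isinstance(v, float) and math.isnan(v) for v in values)
    return key in values

data = [1.0, float("nan"), 3.0]
print(nan_aware_contains(data, float("nan")))  # True
print(nan_aware_contains(data, 2.0))           # False
```

pandas does the equivalent with `isna(key)` and `self.isna().any()`, which also cover `None` and `NaT`; this sketch only handles float NaN.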
repo: omz/PythonistaAppTemplate
path: PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/sqlalchemy/sql/selectable.py
identifier: Join._create_join

```python
def _create_join(cls, left, right, onclause=None, isouter=False):
    """Produce a :class:`.Join` object, given two :class:`.FromClause`
    expressions.

    E.g.::

        j = join(user_table, address_table,
                 user_table.c.id == address_table.c.user_id)
        stmt = select([user_table]).select_from(j)

    would emit SQL along the lines of::

        SELECT user.id, user.name FROM user
        JOIN address ON user.id = address.user_id

    Similar functionality is available given any
    :class:`.FromClause` object (e.g. such as a :class:`.Table`) using
    the :meth:`.FromClause.join` method.

    :param left: The left side of the join.

    :param right: the right side of the join; this is any
     :class:`.FromClause` object such as a :class:`.Table` object, and
     may also be a selectable-compatible object such as an ORM-mapped
     class.

    :param onclause: a SQL expression representing the ON clause of the
     join.  If left at ``None``, :meth:`.FromClause.join` will attempt to
     join the two tables based on a foreign key relationship.

    :param isouter: if True, render a LEFT OUTER JOIN, instead of JOIN.

    .. seealso::

        :meth:`.FromClause.join` - method form, based on a given left side

        :class:`.Join` - the type of object produced
    """
    return cls(left, right, onclause, isouter)
```

url: https://github.com/omz/PythonistaAppTemplate/blob/f560f93f8876d82a21d108977f90583df08d55af/PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/sqlalchemy/sql/selectable.py#L573-L613
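The `_create_join` docstring above shows the SQL a `Join` emits. That same query can be run directly with the stdlib `sqlite3` module; the table layout and rows here are illustrative, not taken from SQLAlchemy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE address(id INTEGER PRIMARY KEY, user_id INTEGER, email TEXT);
    INSERT INTO user VALUES (1, 'ed'), (2, 'wendy');
    INSERT INTO address VALUES (10, 1, 'ed@example.com');
""")

# The shape of SQL that Join._create_join's docstring describes:
rows = conn.execute(
    "SELECT user.id, user.name FROM user "
    "JOIN address ON user.id = address.user_id").fetchall()
print(rows)  # [(1, 'ed')] — only users with a matching address survive the JOIN
```

With `isouter=True` the rendered SQL would use `LEFT OUTER JOIN` instead, keeping `wendy` in the result with NULLs for the address side.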
repo: replit-archive/empythoned
path: dist/lib/python2.7/wsgiref/headers.py
identifier: Headers.__delitem__

```python
def __delitem__(self, name):
    """Delete all occurrences of a header, if present.

    Does *not* raise an exception if the header is missing.
    """
    name = name.lower()
    self._headers[:] = [kv for kv in self._headers if kv[0].lower() != name]
```

url: https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/dist/lib/python2.7/wsgiref/headers.py#L48-L54
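`Headers.__delitem__` above deletes case-insensitively via slice assignment, which mutates the existing list object in place rather than rebinding a name. A standalone sketch of that pattern (`delete_header` is an illustrative helper, not part of `wsgiref`):

```python
def delete_header(headers, name):
    # Case-insensitive delete; slice assignment mutates the list in place,
    # and a missing header is simply a no-op (no KeyError).
    name = name.lower()
    headers[:] = [kv for kv in headers if kv[0].lower() != name]

h = [("Content-Type", "text/html"), ("X-Trace", "1"), ("content-type", "text/plain")]
delete_header(h, "CONTENT-TYPE")
print(h)  # [('X-Trace', '1')]
delete_header(h, "Missing")  # silently does nothing
```

The in-place mutation matters because other code may hold a reference to the same header list; `headers = [...]` would silently orphan those references.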
repo: intel/fMBT
path: utils3/fmbtwindows.py
identifier: Device.putFile

```python
def putFile(self, localFilename, remoteFilepath, data=None):
    """
    Send local file to the device.

    Parameters:

      localFilename (string):
              file to be sent.

      remoteFilepath (string):
              destination on the device. If destination is an
              existing directory, the file will be saved to the
              directory with its original local name. Otherwise the file
              will be saved with remoteFilepath as new name.

      data (string, optional):
              data to be stored to remoteFilepath. The default is
              the data in the local file.

    Example: Copy local /tmp/file.txt to c:/temp

      putFile("/tmp/file.txt", "c:/temp/")

    Example: Create new remote file

      putFile(None, "c:/temp/file.txt", "remote file contents")
    """
    return self._conn.sendFile(localFilename, remoteFilepath, data)
```

url: https://github.com/intel/fMBT/blob/a221c55cd7b6367aa458781b134ae155aa47a71f/utils3/fmbtwindows.py#L1005-L1030
repo: almarklein/visvis
path: core/axises.py
identifier: BaseAxis.showMinorGridY

```python
def showMinorGridY():
    """ Get/Set whether to show a minor grid for the y dimension. """
    def fget(self):
        return self._yminorgrid
    def fset(self, value):
        self._yminorgrid = bool(value)
    return locals()
```

url: https://github.com/almarklein/visvis/blob/766ed97767b44a55a6ff72c742d7385e074d3d55/core/axises.py#L569-L575
repo: replit-archive/empythoned
path: dist/lib/python2.7/multiprocessing/__init__.py
identifier: Lock

```python
def Lock():
    '''
    Returns a non-recursive lock object
    '''
    from multiprocessing.synchronize import Lock
    return Lock()
```

url: https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/dist/lib/python2.7/multiprocessing/__init__.py#L171-L176
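`multiprocessing.Lock` above is a factory that defers its import until the first call, which keeps importing the package cheap and avoids pulling in synchronization machinery that may not be needed. A sketch of the same deferred-import-factory pattern, with `threading` standing in for `multiprocessing.synchronize` so the example stays lightweight:

```python
def Lock():
    """Factory returning a non-recursive lock; the import happens at call
    time, mirroring the multiprocessing factory (threading stands in here)."""
    from threading import Lock as _Lock
    return _Lock()

lock = Lock()
print(lock.acquire(blocking=False))  # True — an uncontended acquire succeeds
lock.release()
```

The shadowing in the original (`from multiprocessing.synchronize import Lock` inside `def Lock()`) is deliberate: callers see one stable name while the heavy module loads lazily.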
repo: facebookresearch/pytorch3d
path: pytorch3d/transforms/transform3d.py
identifier: Rotate._get_matrix_inverse

```python
def _get_matrix_inverse(self):
    """
    Return the inverse of self._matrix.
    """
    return self._matrix.permute(0, 2, 1).contiguous()
```

url: https://github.com/facebookresearch/pytorch3d/blob/fddd6a700fa9685c1ce2d4b266c111d7db424ecc/pytorch3d/transforms/transform3d.py#L578-L582
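`Rotate._get_matrix_inverse` above exploits the fact that a rotation matrix is orthogonal, so its inverse is simply its transpose — `permute(0, 2, 1)` transposes each matrix in the batch. A pure-Python check of that identity, no PyTorch required (helper names are illustrative):

```python
import math

def transpose3(m):
    # Rows become columns: for a rotation matrix this *is* the inverse.
    return [[m[j][i] for j in range(3)] for i in range(3)]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = math.pi / 3  # 60-degree rotation about the z axis
rot_z = [[math.cos(t), -math.sin(t), 0.0],
         [math.sin(t),  math.cos(t), 0.0],
         [0.0,          0.0,         1.0]]

product = matmul3(rot_z, transpose3(rot_z))  # R @ R.T should be the identity
print(all(abs(product[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

Transposing is O(1) view manipulation on a tensor, versus a full `inverse()` solve, which is why orthogonality is worth exploiting here.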
repo: googleads/google-ads-python
path: google/ads/googleads/v7/services/services/ad_group_ad_service/client.py
identifier: AdGroupAdServiceClient.parse_common_location_path

```python
def parse_common_location_path(path: str) -> Dict[str, str]:
    """Parse a location path into its component segments."""
    m = re.match(
        r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path
    )
    return m.groupdict() if m else {}
```

url: https://github.com/googleads/google-ads-python/blob/2a1d6062221f6aad1992a6bcca0e7e4a93d2db86/google/ads/googleads/v7/services/services/ad_group_ad_service/client.py#L283-L288
repo: MyDuerOS/DuerOS-Python-Client
path: app/framework/player.py
identifier: Player.duration

```python
def duration(self):
    '''
    Playback duration.
    :return:
    '''
    success, duration = self.player.query_duration(Gst.Format.TIME)
    if success:
        return int(duration / Gst.MSECOND)
```

url: https://github.com/MyDuerOS/DuerOS-Python-Client/blob/71b3482f00cfb11b6d6d8a33065cb33e05ba339e/app/framework/player.py#L75-L82
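`Player.duration` above converts GStreamer's nanosecond-based clock into milliseconds; `Gst.MSECOND` is 1,000,000 nanoseconds per GStreamer's clock-time units. A dependency-free sketch of the same arithmetic (the constant value is assumed from GStreamer's documented units, not imported):

```python
GST_MSECOND = 1_000_000  # GStreamer clocks tick in nanoseconds; 1 ms = 1e6 ns

def ns_to_ms(duration_ns):
    # Same conversion as int(duration / Gst.MSECOND) in Player.duration.
    return int(duration_ns // GST_MSECOND)

print(ns_to_ms(2_500_000_000))  # 2500 — a 2.5 s stream reports 2500 ms
```

Note the original returns `None` implicitly when `query_duration` fails, so callers must handle a missing duration.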
repo: wxWidgets/Phoenix
path: wx/lib/agw/aui/framemanager.py
identifier: AuiManager.OnSetCursor

```python
def OnSetCursor(self, event):
    """
    Handles the ``wx.EVT_SET_CURSOR`` event for :class:`AuiManager`.

    :param `event`: a :class:`SetCursorEvent` to be processed.
    """
    # determine cursor
    part = self.HitTest(event.GetX(), event.GetY())
    cursor = wx.NullCursor

    if part:
        if part.type in [AuiDockUIPart.typeDockSizer, AuiDockUIPart.typePaneSizer]:
            if not self.CheckMovableSizer(part):
                return
            if part.orientation == wx.VERTICAL:
                cursor = wx.Cursor(wx.CURSOR_SIZEWE)
            else:
                cursor = wx.Cursor(wx.CURSOR_SIZENS)
        elif part.type == AuiDockUIPart.typeGripper:
            cursor = wx.Cursor(wx.CURSOR_SIZING)

    event.SetCursor(cursor)
```

url: https://github.com/wxWidgets/Phoenix/blob/b2199e299a6ca6d866aa6f3d0888499136ead9d6/wx/lib/agw/aui/framemanager.py#L8647-L8672
repo: holzschu/Carnets
path: Library/lib/python3.7/site-packages/matplotlib-3.0.3-py3.7-macosx-10.9-x86_64.egg/matplotlib/widgets.py
identifier: ToolHandles.set_data

```python
def set_data(self, pts, y=None):
    """Set x and y positions of handles"""
    if y is not None:
        x = pts
        pts = np.array([x, y])
    self._markers.set_data(pts)
```

url: https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/matplotlib-3.0.3-py3.7-macosx-10.9-x86_64.egg/matplotlib/widgets.py#L1923-L1928
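`ToolHandles.set_data` above accepts either a prepacked point array or separate x and y sequences, reinterpreting its first argument when `y` is given. A NumPy-free sketch of that flexible-signature pattern (`normalize_points` is an illustrative name, not matplotlib API):

```python
def normalize_points(pts, y=None):
    # If y is given, pts is actually the x sequence: pack the pair,
    # mirroring the pts/(x, y) dual signature of set_data.
    if y is not None:
        pts = (list(pts), list(y))
    return pts

print(normalize_points([1, 2, 3], [4, 5, 6]))  # ([1, 2, 3], [4, 5, 6])
print(normalize_points(([1, 2], [3, 4])))      # ([1, 2], [3, 4])
```

The cost of this convenience is that `pts` means two different things depending on the call, so the normalization should happen once, at the top of the function, as it does here.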
repo: taowen/es-monitor
path: es_sql/sqlparse/filters.py
identifier: OutputFilter.__init__

```python
def __init__(self, varname='sql'):
    self.varname = self.varname_prefix + varname
    self.count = 0
```

url: https://github.com/taowen/es-monitor/blob/c4deceb4964857f495d13bfaf2d92f36734c9e1c/es_sql/sqlparse/filters.py#L587-L589
repo: TencentCloud/tencentcloud-sdk-python
path: tencentcloud/fmu/v20191213/models.py
identifier: StyleImageProRequest.__init__

```python
def __init__(self):
    r"""
    :param FilterType: Filter type, one of:
    1.白茶;2 白皙;3.初夏;4.东京;5.告白;6.暖阳;7.蔷薇;8.清澄;9.清透;10.甜薄荷;11.默认;12.心动;13.哑灰;14.樱桃布丁;15.自然;16.清逸;17.黑白;18.水果;19.爱情;20.冬日;21.相片;22.夏日;23.香氛;24.魅惑;25.悸动;26.沙滩;27.街拍;28.甜美;29.初吻;30.午后;31.活力;32.朦胧;33.悦动;34.时尚;35.气泡;36.柠檬;37.棉花糖;38.小溪;39.丽人;40.咖啡;41.嫩芽;42.热情;43.渐暖;44.早餐;45.白茶;46.白嫩;47.圣代;48.森林;49.冲浪;50.奶咖;51.清澈;52.微风;53.日落;54.水光;55.日系;56.星光;57.阳光;58.落叶;59.生机;60.甜心;61.清逸;62.春意;63.罗马;64.青涩;65.清风;66.暖心;67.海水;68.神秘;69.旧调1;70.旧调2;71.雪顶;72.日光;73.浮云;74.流彩;75.胶片;76.回味;77.奶酪;78.蝴蝶。
    :type FilterType: int
    :param Image: Base64-encoded image data; must not exceed 5 MB after encoding.
    PNG, JPG, JPEG and BMP are supported; GIF is not.
    :type Image: str
    :param Url: URL of the image; the base64-encoded image it points to must not exceed 5 MB.
    One of Url and Image must be provided; if both are provided, only Url is used.
    A Url stored on Tencent Cloud guarantees higher download speed and stability, so storing
    the image on Tencent Cloud is recommended; Urls hosted elsewhere may be somewhat slower
    and less stable.
    PNG, JPG, JPEG and BMP are supported; GIF is not.
    :type Url: str
    :param FilterDegree: Filter strength in [0, 100]; 0 means no effect, 100 means full
    effect. Defaults to 80.
    :type FilterDegree: int
    :param RspImgType: How the image is returned (base64 or url), choose one; a url is
    valid for 1 day.
    :type RspImgType: str
    """
    self.FilterType = None
    self.Image = None
    self.Url = None
    self.FilterDegree = None
    self.RspImgType = None
```

url: https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/fmu/v20191213/models.py#L645-L668
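Request models like `StyleImageProRequest` above start with every field set to `None`; typically only the fields the caller assigns end up in the wire payload. A hypothetical sketch of that convention — `to_request_params` and `Req` are illustrative stand-ins, not the Tencent Cloud SDK's actual serializer:

```python
def to_request_params(obj):
    # Keep only the fields the caller actually set; None means "omitted".
    return {k: v for k, v in vars(obj).items() if v is not None}

class Req:
    def __init__(self):
        self.FilterType = 7  # e.g. filter 7 from the enumeration above
        self.Image = None
        self.Url = "https://example.com/pic.jpg"
        self.FilterDegree = None
        self.RspImgType = None

print(to_request_params(Req()))
# {'FilterType': 7, 'Url': 'https://example.com/pic.jpg'}
```

Treating `None` as "omitted" lets the server apply its documented defaults (here, `FilterDegree` falling back to 80) instead of receiving an explicit null.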
repo: kubernetes-client/python
path: kubernetes/client/models/v1_storage_class_list.py
identifier: V1StorageClassList.to_str

```python
def to_str(self):
    """Returns the string representation of the model"""
    return pprint.pformat(self.to_dict())
```

url: https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/models/v1_storage_class_list.py#L185-L187
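`to_str` above leans on `pprint.pformat`, which by default sorts dict keys, so the string form of a model is stable across runs regardless of insertion order. A quick standalone demonstration with an illustrative dict:

```python
import pprint

model = {"kind": "StorageClassList", "apiVersion": "storage.k8s.io/v1", "items": []}
text = pprint.pformat(model)
print(text)  # keys come out sorted: apiVersion, items, kind
```

That stability is what makes `pformat` output usable in diffs and test fixtures, where `repr` of an arbitrarily ordered dict would churn.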
repo: microsoft/ptvsd
path: src/ptvsd/_vendored/pydevd/pydevd_attach_to_process/winappdbg/system.py
identifier: System.get_postmortem_debugger

```python
def get_postmortem_debugger(cls, bits=None):
    """
    Returns the postmortem debugging settings from the Registry.

    @see: L{set_postmortem_debugger}

    @type  bits: int
    @param bits: Set to C{32} for the 32 bits debugger, or C{64} for the
        64 bits debugger. Set to {None} for the default (L{System.bits}.

    @rtype:  tuple( str, bool, int )
    @return: A tuple containing the command line string to the postmortem
        debugger, a boolean specifying if user interaction is allowed
        before attaching, and an integer specifying a user defined hotkey.
        Any member of the tuple may be C{None}.
        See L{set_postmortem_debugger} for more details.

    @raise WindowsError:
        Raises an exception on error.
    """
    if bits is None:
        bits = cls.bits
    elif bits not in (32, 64):
        raise NotImplementedError("Unknown architecture (%r bits)" % bits)

    if bits == 32 and cls.bits == 64:
        keyname = 'HKLM\\SOFTWARE\\Wow6432Node\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug'
    else:
        keyname = 'HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug'

    key = cls.registry[keyname]

    debugger = key.get('Debugger')
    auto = key.get('Auto')
    hotkey = key.get('UserDebuggerHotkey')

    if auto is not None:
        auto = bool(auto)

    return (debugger, auto, hotkey)
```

url: https://github.com/microsoft/ptvsd/blob/99c8513921021d2cc7cd82e132b65c644c256768/src/ptvsd/_vendored/pydevd/pydevd_attach_to_process/winappdbg/system.py#L892-L931
repo: pypa/pipenv
path: pipenv/patched/notpip/_internal/models/wheel.py
identifier: Wheel.support_index_min

```python
def support_index_min(self, tags: List[Tag]) -> int:
    """Return the lowest index that one of the wheel's file_tag combinations
    achieves in the given list of supported tags.

    For example, if there are 8 supported tags and one of the file tags
    is first in the list, then return 0.

    :param tags: the PEP 425 tags to check the wheel against, in order
        with most preferred first.

    :raises ValueError: If none of the wheel's file tags match one of
        the supported tags.
    """
    return min(tags.index(tag) for tag in self.file_tags if tag in tags)
```

url: https://github.com/pypa/pipenv/blob/b21baade71a86ab3ee1429f71fbc14d4f95fb75d/pipenv/patched/notpip/_internal/models/wheel.py#L51-L64
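`Wheel.support_index_min` above ranks a wheel by the best (lowest) position any of its tags reaches in the preference-ordered tag list; when nothing matches, `min()` over an empty generator raises the documented `ValueError`. A standalone sketch with plain strings instead of `packaging` `Tag` objects (tag values are illustrative):

```python
def support_index_min(file_tags, supported_tags):
    # Lowest index any wheel tag reaches in the preference-ordered list;
    # an empty generator makes min() raise ValueError, as documented.
    return min(supported_tags.index(tag) for tag in file_tags if tag in supported_tags)

supported = ["cp311-abi3-linux_x86_64", "cp310-abi3-linux_x86_64", "py3-none-any"]
wheel_tags = {"cp310-abi3-linux_x86_64", "py3-none-any"}
print(support_index_min(wheel_tags, supported))  # 1 — the best match sits at index 1
```

pip uses this index to sort candidate wheels, so a wheel matching an earlier (more specific) tag beats one that only matches `py3-none-any`.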
repo: freedombox/FreedomBox
path: plinth/setup.py
identifier: Helper.install

```python
def install(self, package_names, skip_recommends=False,
            force_configuration=None, reinstall=False,
            force_missing_configuration=False):
    """Install a set of packages marking progress."""
    if self.allow_install is False:
        # Raise error if packages are not already installed.
        cache = apt.Cache()
        for package_name in package_names:
            if not cache[package_name].is_installed:
                raise PackageNotInstalledError(package_name)

        return

    logger.info('Running install for module - %s, packages - %s',
                self.module_name, package_names)

    transaction = package.Transaction(self.module_name, package_names)
    self.current_operation = {
        'step': 'install',
        'transaction': transaction,
    }
    transaction.install(skip_recommends, force_configuration, reinstall,
                        force_missing_configuration)
```

url: https://github.com/freedombox/FreedomBox/blob/335a7f92cc08f27981f838a7cddfc67740598e54/plinth/setup.py#L95-L117
repo: DataIntegrationAlliance/data_integration_celery
path: tasks/wind/index_constituent.py
identifier: get_sectorconstituent

```python
def get_sectorconstituent(index_code, index_name, target_date) -> pd.DataFrame:
    """
    Fetch index constituents and their weights from Wind.
    :param index_code:
    :param index_name:
    :param target_date:
    :return:
    """
    target_date_str = date_2_str(target_date)
    logger.info('Fetching constituent info for %s %s %s', index_code, index_name, target_date)
    sec_df = invoker.wset("indexconstituent", "date=%s;windcode=%s" % (target_date_str, index_code))
    if sec_df is not None and sec_df.shape[0] > 0:
        # In some cases the date in the returned data does not match target_date
        sec_df = sec_df[sec_df['date'].apply(lambda x: str_2_date(x) == target_date)]
    if sec_df is None or sec_df.shape[0] == 0:
        return None
    sec_df["index_code"] = index_code
    sec_df["index_name"] = index_name
    sec_df.rename(columns={
        'date': 'trade_date',
        'sec_name': 'stock_name',
        'i_weight': 'weight',
    }, inplace=True)
    return sec_df
```

url: https://github.com/DataIntegrationAlliance/data_integration_celery/blob/6775292030213dd1fa33a1ec0f542d5d2d2e612a/tasks/wind/index_constituent.py#L64-L87
repo: enzienaudio/hvcc
path: interpreters/max2hv/max2hv.py
identifier: max2hv.compile

```python
def compile(clazz, max_path, hv_dir, search_paths=None, verbose=False):
    tick = time.time()

    max_graph = MaxParser.graph_from_file(max_path)

    if not os.path.exists(hv_dir):
        os.makedirs(hv_dir)

    hv_file = os.path.basename(max_path).split(".")[0] + ".hv.json"
    hv_path = os.path.join(hv_dir, hv_file)
    with open(hv_path, "w") as f:
        if verbose:
            f.write(json.dumps(
                max_graph.to_hv(),
                sort_keys=True,
                indent=2,
                separators=(",", ": ")))
        else:
            f.write(json.dumps(max_graph.to_hv()))

    return {
        "stage": "max2hv",
        "notifs": {
            "has_error": False,
            "exception": None,
            "errors": []
        },
        "in_dir": os.path.dirname(max_path),
        "in_file": os.path.basename(max_path),
        "out_dir": hv_dir,
        "out_file": hv_file,
        "compile_time": (time.time() - tick)
    }
```

url: https://github.com/enzienaudio/hvcc/blob/30e47328958d600c54889e2a254c3f17f2b2fd06/interpreters/max2hv/max2hv.py#L13-L45
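`max2hv.compile` above writes the graph either compactly or pretty-printed depending on `verbose`; both forms decode to identical data, the verbose one just adds sorted keys and indentation. A standalone demonstration with a stand-in graph dict (the structure is illustrative, not a real heavy graph):

```python
import json

graph = {"objects": {"osc~": {"args": [440]}}, "connections": []}

compact = json.dumps(graph)
pretty = json.dumps(graph, sort_keys=True, indent=2, separators=(",", ": "))

print(json.loads(compact) == json.loads(pretty))  # True — same data either way
print(len(compact) < len(pretty))                 # True — indentation costs bytes
```

The `separators=(",", ": ")` argument drops the space after commas that `indent` would otherwise leave at line ends, a common pairing when pretty-printing.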
repo: arsaboo/homeassistant-config
path: custom_components/alexa_media/notify.py
identifier: AlexaNotificationService.targets

```python
def targets(self):
    """Return a dictionary of Alexa devices."""
    devices = {}
    for _, account_dict in self.hass.data[DATA_ALEXAMEDIA]["accounts"].items():
        if "devices" not in account_dict:
            return devices
        for serial, alexa in account_dict["devices"]["media_player"].items():
            devices[alexa["accountName"]] = serial
    return devices
```

url: https://github.com/arsaboo/homeassistant-config/blob/53c998986fbe84d793a0b174757154ab30e676e4/custom_components/alexa_media/notify.py#L130-L138
repo: lohriialo/photoshop-scripting-python
path: api_reference/photoshop_CC_2019.py
identifier: Channel.SetColor

```python
def SetColor(self, arg0=defaultUnnamedArg):
    'color of the channel (not valid for component channels)'
    return self._oleobj_.InvokeTypes(1883456323, LCID, 8, (24, 0), ((9, 0),), arg0)
```

url: https://github.com/lohriialo/photoshop-scripting-python/blob/6b97da967a5d0a45e54f7c99631b29773b923f09/api_reference/photoshop_CC_2019.py#L1265-L1268
ronf/asyncssh | ee1714c598d8c2ea6f5484e465443f38b68714aa | asyncssh/agent.py | python | SSHAgentListener.get_path | (self) | return self._path | Return the path being listened on | Return the path being listened on | [
"Return",
"the",
"path",
"being",
"listened",
"on"
] | def get_path(self) -> str:
"""Return the path being listened on"""
return self._path | [
"def",
"get_path",
"(",
"self",
")",
"->",
"str",
":",
"return",
"self",
".",
"_path"
] | https://github.com/ronf/asyncssh/blob/ee1714c598d8c2ea6f5484e465443f38b68714aa/asyncssh/agent.py#L616-L619 | |
gramps-project/gramps | 04d4651a43eb210192f40a9f8c2bad8ee8fa3753 | gramps/plugins/lib/librecords.py | python | _get_styled | (name, callname, placeholder=False,
trans_text=glocale.translation.sgettext, name_format=None) | return StyledText(text, tags) | Return a StyledText object with the name formatted according to the
parameters:
@param callname: whether the callname should be used instead of the first
name (CALLNAME_REPLACE), underlined within the first name
(CALLNAME_UNDERLINE_ADD) or not used at all (CALLNAME_DONTUSE).
@param placeholder: whether a series of underscores should be inserted as a
placeholder if first name or surname are missing.
@param trans_text: allow deferred translation of strings
@type trans_text: a GrampsLocale sgettext instance
trans_text is a defined keyword (see po/update_po.py, po/genpot.sh)
:param name_format: optional format to control display of person's name
:type name_format: None or int | Return a StyledText object with the name formatted according to the
parameters: | [
"Return",
"a",
"StyledText",
"object",
"with",
"the",
"name",
"formatted",
"according",
"to",
"the",
"parameters",
":"
] | def _get_styled(name, callname, placeholder=False,
trans_text=glocale.translation.sgettext, name_format=None):
"""
Return a StyledText object with the name formatted according to the
parameters:
@param callname: whether the callname should be used instead of the first
name (CALLNAME_REPLACE), underlined within the first name
(CALLNAME_UNDERLINE_ADD) or not used at all (CALLNAME_DONTUSE).
@param placeholder: whether a series of underscores should be inserted as a
placeholder if first name or surname are missing.
@param trans_text: allow deferred translation of strings
@type trans_text: a GrampsLocale sgettext instance
trans_text is a defined keyword (see po/update_po.py, po/genpot.sh)
:param name_format: optional format to control display of person's name
:type name_format: None or int
"""
# Make a copy of the name object so we don't mess around with the real
# data.
n = Name(source=name)
# Insert placeholders.
if placeholder:
if not n.first_name:
n.first_name = "____________"
if not n.surname:
n.surname = "____________"
if n.call:
if callname == CALLNAME_REPLACE:
# Replace first name with call name.
n.first_name = n.call
elif callname == CALLNAME_UNDERLINE_ADD:
if n.call not in n.first_name:
# Add call name to first name.
# Translators: used in French+Russian, ignore otherwise
n.first_name = trans_text('"%(callname)s" (%(firstname)s)') % {
'callname': n.call,
'firstname': n.first_name }
real_format = name_displayer.get_default_format()
if name_format is not None:
name_displayer.set_default_format(name_format)
text = name_displayer.display_name(n)
name_displayer.set_default_format(real_format)
tags = []
if n.call:
if callname == CALLNAME_UNDERLINE_ADD:
# "name" in next line is on purpose: only underline the call name
# if it was a part of the *original* first name
if n.call in name.first_name:
# Underline call name
callpos = text.find(n.call)
tags = [StyledTextTag(StyledTextTagType.UNDERLINE, True,
[(callpos, callpos + len(n.call))])]
return StyledText(text, tags) | [
"def",
"_get_styled",
"(",
"name",
",",
"callname",
",",
"placeholder",
"=",
"False",
",",
"trans_text",
"=",
"glocale",
".",
"translation",
".",
"sgettext",
",",
"name_format",
"=",
"None",
")",
":",
"# Make a copy of the name object so we don't mess around with the ... | https://github.com/gramps-project/gramps/blob/04d4651a43eb210192f40a9f8c2bad8ee8fa3753/gramps/plugins/lib/librecords.py#L480-L538 | |
explosion/srsly | 8617ecc099d1f34a60117b5287bef5424ea2c837 | srsly/ruamel_yaml/nodes.py | python | CollectionNode.__init__ | (
self,
tag,
value,
start_mark=None,
end_mark=None,
flow_style=None,
comment=None,
anchor=None,
) | [] | def __init__(
self,
tag,
value,
start_mark=None,
end_mark=None,
flow_style=None,
comment=None,
anchor=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any) -> None
Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
self.flow_style = flow_style
self.anchor = anchor | [
"def",
"__init__",
"(",
"self",
",",
"tag",
",",
"value",
",",
"start_mark",
"=",
"None",
",",
"end_mark",
"=",
"None",
",",
"flow_style",
"=",
"None",
",",
"comment",
"=",
"None",
",",
"anchor",
"=",
"None",
",",
")",
":",
"# type: (Any, Any, Any, Any, ... | https://github.com/explosion/srsly/blob/8617ecc099d1f34a60117b5287bef5424ea2c837/srsly/ruamel_yaml/nodes.py#L92-L105 | ||||
tristandeleu/pytorch-meta | d55d89ebd47f340180267106bde3e4b723f23762 | torchmeta/utils/matching.py | python | matching_loss | (train_embeddings,
train_targets,
test_embeddings,
test_targets,
num_classes,
eps=1e-8,
**kwargs) | return F.nll_loss(logits, test_targets, **kwargs) | Compute the loss (i.e. negative log-likelihood) for the matching network
on the test/query samples [1].
Parameters
----------
train_embeddings : `torch.Tensor` instance
A tensor containing the embeddings of the train/support inputs. This
tensor has shape `(batch_size, num_train_samples, embedding_size)`.
train_targets : `torch.LongTensor` instance
A tensor containing the targets of the train/support dataset. This tensor
has shape `(batch_size, num_train_samples)`.
test_embeddings : `torch.Tensor` instance
A tensor containing the embeddings of the test/query inputs. This tensor
has shape `(batch_size, num_test_samples, embedding_size)`.
test_targets : `torch.LongTensor` instance
A tensor containing the targets of the test/query dataset. This tensor
has shape `(batch_size, num_test_samples)`.
num_classes : int
Number of classes (i.e. `N` in "N-way classification") in the
classification task.
eps : float (default: 1e-8)
Small value to avoid division by zero.
kwargs :
Additional keyword arguments to be forwarded to the loss function. See
`torch.nn.functional.cross_entropy` for details.
Returns
-------
loss : `torch.Tensor` instance
A tensor containing the loss for the matching network.
References
----------
.. [1] Vinyals, O., Blundell, C., Lillicrap, T. and Wierstra, D. (2016).
Matching Networks for One Shot Learning. In Advances in Neural
Information Processing Systems (pp. 3630-3638) (https://arxiv.org/abs/1606.04080) | Compute the loss (i.e. negative log-likelihood) for the matching network
on the test/query samples [1]. | [
"Compute",
"the",
"loss",
"(",
"i",
".",
"e",
".",
"negative",
"log",
"-",
"likelihood",
")",
"for",
"the",
"matching",
"network",
"on",
"the",
"test",
"/",
"query",
"samples",
"[",
"1",
"]",
"."
] | def matching_loss(train_embeddings,
train_targets,
test_embeddings,
test_targets,
num_classes,
eps=1e-8,
**kwargs):
"""Compute the loss (i.e. negative log-likelihood) for the matching network
on the test/query samples [1].
Parameters
----------
train_embeddings : `torch.Tensor` instance
A tensor containing the embeddings of the train/support inputs. This
tensor has shape `(batch_size, num_train_samples, embedding_size)`.
train_targets : `torch.LongTensor` instance
A tensor containing the targets of the train/support dataset. This tensor
has shape `(batch_size, num_train_samples)`.
test_embeddings : `torch.Tensor` instance
A tensor containing the embeddings of the test/query inputs. This tensor
has shape `(batch_size, num_test_samples, embedding_size)`.
test_targets : `torch.LongTensor` instance
A tensor containing the targets of the test/query dataset. This tensor
has shape `(batch_size, num_test_samples)`.
num_classes : int
Number of classes (i.e. `N` in "N-way classification") in the
classification task.
eps : float (default: 1e-8)
Small value to avoid division by zero.
kwargs :
Additional keyword arguments to be forwarded to the loss function. See
`torch.nn.functional.cross_entropy` for details.
Returns
-------
loss : `torch.Tensor` instance
A tensor containing the loss for the matching network.
References
----------
.. [1] Vinyals, O., Blundell, C., Lillicrap, T. and Wierstra, D. (2016).
Matching Networks for One Shot Learning. In Advances in Neural
Information Processing Systems (pp. 3630-3638) (https://arxiv.org/abs/1606.04080)
"""
logits = matching_log_probas(train_embeddings,
train_targets,
test_embeddings,
num_classes,
eps=eps)
return F.nll_loss(logits, test_targets, **kwargs) | [
"def",
"matching_loss",
"(",
"train_embeddings",
",",
"train_targets",
",",
"test_embeddings",
",",
"test_targets",
",",
"num_classes",
",",
"eps",
"=",
"1e-8",
",",
"*",
"*",
"kwargs",
")",
":",
"logits",
"=",
"matching_log_probas",
"(",
"train_embeddings",
","... | https://github.com/tristandeleu/pytorch-meta/blob/d55d89ebd47f340180267106bde3e4b723f23762/torchmeta/utils/matching.py#L147-L202 | |
mesalock-linux/mesapy | ed546d59a21b36feb93e2309d5c6b75aa0ad95c9 | py/_code/source.py | python | Source.strip | (self) | return source | return new source object with trailing
and leading blank lines removed. | return new source object with trailing
and leading blank lines removed. | [
"return",
"new",
"source",
"object",
"with",
"trailing",
"and",
"leading",
"blank",
"lines",
"removed",
"."
] | def strip(self):
""" return new source object with trailing
and leading blank lines removed.
"""
start, end = 0, len(self)
while start < end and not self.lines[start].strip():
start += 1
while end > start and not self.lines[end-1].strip():
end -= 1
source = Source()
source.lines[:] = self.lines[start:end]
return source | [
"def",
"strip",
"(",
"self",
")",
":",
"start",
",",
"end",
"=",
"0",
",",
"len",
"(",
"self",
")",
"while",
"start",
"<",
"end",
"and",
"not",
"self",
".",
"lines",
"[",
"start",
"]",
".",
"strip",
"(",
")",
":",
"start",
"+=",
"1",
"while",
... | https://github.com/mesalock-linux/mesapy/blob/ed546d59a21b36feb93e2309d5c6b75aa0ad95c9/py/_code/source.py#L69-L80 | |
Komodo/KomodoEdit | 61edab75dce2bdb03943b387b0608ea36f548e8e | src/udl/ludditelib/cmdln.py | python | RawCmdln.postoptparse | (self) | Hook method executed just after `.main()' parses top-level
options.
When called `self.options' holds the results of the option parse. | Hook method executed just after `.main()' parses top-level
options. | [
"Hook",
"method",
"executed",
"just",
"after",
".",
"main",
"()",
"parses",
"top",
"-",
"level",
"options",
"."
] | def postoptparse(self):
"""Hook method executed just after `.main()' parses top-level
options.
When called `self.options' holds the results of the option parse.
"""
pass | [
"def",
"postoptparse",
"(",
"self",
")",
":",
"pass"
] | https://github.com/Komodo/KomodoEdit/blob/61edab75dce2bdb03943b387b0608ea36f548e8e/src/udl/ludditelib/cmdln.py#L192-L198 | ||
crits/crits_services | c7abf91f1865d913cffad4b966599da204f8ae43 | zip_meta_service/zip_meta.py | python | ZipParser.getStartOfCDDisk | (self) | return struct.unpack("<H",self.endDirectory[6:8])[0] | [] | def getStartOfCDDisk(self):
return struct.unpack("<H",self.endDirectory[6:8])[0] | [
"def",
"getStartOfCDDisk",
"(",
"self",
")",
":",
"return",
"struct",
".",
"unpack",
"(",
"\"<H\"",
",",
"self",
".",
"endDirectory",
"[",
"6",
":",
"8",
"]",
")",
"[",
"0",
"]"
] | https://github.com/crits/crits_services/blob/c7abf91f1865d913cffad4b966599da204f8ae43/zip_meta_service/zip_meta.py#L315-L316 | |||
brad-sp/cuckoo-modified | 038cfbba66ef76557d255aa89f2d4205f376ca45 | lib/cuckoo/common/dns.py | python | with_timeout | (func, args=(), kwargs={}) | This function will spawn a thread and run the given function
using the args, kwargs and return the given default value if the
timeout_duration is exceeded. | This function will spawn a thread and run the given function
using the args, kwargs and return the given default value if the
timeout_duration is exceeded. | [
"This",
"function",
"will",
"spawn",
"a",
"thread",
"and",
"run",
"the",
"given",
"function",
"using",
"the",
"args",
"kwargs",
"and",
"return",
"the",
"given",
"default",
"value",
"if",
"the",
"timeout_duration",
"is",
"exceeded",
"."
] | def with_timeout(func, args=(), kwargs={}):
"""This function will spawn a thread and run the given function
using the args, kwargs and return the given default value if the
timeout_duration is exceeded.
"""
class ResultThread(threading.Thread):
daemon = True
def __init__(self):
threading.Thread.__init__(self)
self.result, self.error = None, None
def run(self):
try:
self.result = func(*args, **kwargs)
except Exception, e:
self.error = e
it = ResultThread()
it.start()
it.join(DNS_TIMEOUT)
if it.isAlive():
return DNS_TIMEOUT_VALUE
else:
if it.error:
raise it.error
return it.result | [
"def",
"with_timeout",
"(",
"func",
",",
"args",
"=",
"(",
")",
",",
"kwargs",
"=",
"{",
"}",
")",
":",
"class",
"ResultThread",
"(",
"threading",
".",
"Thread",
")",
":",
"daemon",
"=",
"True",
"def",
"__init__",
"(",
"self",
")",
":",
"threading",
... | https://github.com/brad-sp/cuckoo-modified/blob/038cfbba66ef76557d255aa89f2d4205f376ca45/lib/cuckoo/common/dns.py#L37-L61 | ||
omz/PythonistaAppTemplate | f560f93f8876d82a21d108977f90583df08d55af | PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/PIL/Jpeg2KImagePlugin.py | python | _parse_codestream | (fp) | return (size, mode) | Parse the JPEG 2000 codestream to extract the size and component
count from the SIZ marker segment, returning a PIL (size, mode) tuple. | Parse the JPEG 2000 codestream to extract the size and component
count from the SIZ marker segment, returning a PIL (size, mode) tuple. | [
"Parse",
"the",
"JPEG",
"2000",
"codestream",
"to",
"extract",
"the",
"size",
"and",
"component",
"count",
"from",
"the",
"SIZ",
"marker",
"segment",
"returning",
"a",
"PIL",
"(",
"size",
"mode",
")",
"tuple",
"."
] | def _parse_codestream(fp):
"""Parse the JPEG 2000 codestream to extract the size and component
count from the SIZ marker segment, returning a PIL (size, mode) tuple."""
hdr = fp.read(2)
lsiz = struct.unpack('>H', hdr)[0]
siz = hdr + fp.read(lsiz - 2)
lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, xtsiz, ytsiz, \
xtosiz, ytosiz, csiz \
= struct.unpack('>HHIIIIIIIIH', siz[:38])
ssiz = [None]*csiz
xrsiz = [None]*csiz
yrsiz = [None]*csiz
for i in range(csiz):
ssiz[i], xrsiz[i], yrsiz[i] \
= struct.unpack('>BBB', siz[36 + 3 * i:39 + 3 * i])
size = (xsiz - xosiz, ysiz - yosiz)
if csiz == 1:
if (yrsiz[0] & 0x7f) > 8:
mode = 'I;16'
else:
mode = 'L'
elif csiz == 2:
mode = 'LA'
elif csiz == 3:
mode = 'RGB'
elif csiz == 4:
mode = 'RGBA'
else:
mode = None
return (size, mode) | [
"def",
"_parse_codestream",
"(",
"fp",
")",
":",
"hdr",
"=",
"fp",
".",
"read",
"(",
"2",
")",
"lsiz",
"=",
"struct",
".",
"unpack",
"(",
"'>H'",
",",
"hdr",
")",
"[",
"0",
"]",
"siz",
"=",
"hdr",
"+",
"fp",
".",
"read",
"(",
"lsiz",
"-",
"2"... | https://github.com/omz/PythonistaAppTemplate/blob/f560f93f8876d82a21d108977f90583df08d55af/PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/PIL/Jpeg2KImagePlugin.py#L23-L55 | |
krintoxi/NoobSec-Toolkit | 38738541cbc03cedb9a3b3ed13b629f781ad64f6 | NoobSecToolkit /tools/sqli/thirdparty/oset/_abc.py | python | Set.__le__ | (self, other) | return True | [] | def __le__(self, other):
if not isinstance(other, Set):
return NotImplemented
if len(self) > len(other):
return False
for elem in self:
if elem not in other:
return False
return True | [
"def",
"__le__",
"(",
"self",
",",
"other",
")",
":",
"if",
"not",
"isinstance",
"(",
"other",
",",
"Set",
")",
":",
"return",
"NotImplemented",
"if",
"len",
"(",
"self",
")",
">",
"len",
"(",
"other",
")",
":",
"return",
"False",
"for",
"elem",
"i... | https://github.com/krintoxi/NoobSec-Toolkit/blob/38738541cbc03cedb9a3b3ed13b629f781ad64f6/NoobSecToolkit /tools/sqli/thirdparty/oset/_abc.py#L233-L241 | |||
oracle/graalpython | 577e02da9755d916056184ec441c26e00b70145c | graalpython/lib-python/3/distutils/util.py | python | convert_path | (pathname) | return os.path.join(*paths) | Return 'pathname' as a name that will work on the native filesystem,
i.e. split it on '/' and put it back together again using the current
directory separator. Needed because filenames in the setup script are
always supplied in Unix style, and have to be converted to the local
convention before we can actually use them in the filesystem. Raises
ValueError on non-Unix-ish systems if 'pathname' either starts or
ends with a slash. | Return 'pathname' as a name that will work on the native filesystem,
i.e. split it on '/' and put it back together again using the current
directory separator. Needed because filenames in the setup script are
always supplied in Unix style, and have to be converted to the local
convention before we can actually use them in the filesystem. Raises
ValueError on non-Unix-ish systems if 'pathname' either starts or
ends with a slash. | [
"Return",
"pathname",
"as",
"a",
"name",
"that",
"will",
"work",
"on",
"the",
"native",
"filesystem",
"i",
".",
"e",
".",
"split",
"it",
"on",
"/",
"and",
"put",
"it",
"back",
"together",
"again",
"using",
"the",
"current",
"directory",
"separator",
".",... | def convert_path (pathname):
"""Return 'pathname' as a name that will work on the native filesystem,
i.e. split it on '/' and put it back together again using the current
directory separator. Needed because filenames in the setup script are
always supplied in Unix style, and have to be converted to the local
convention before we can actually use them in the filesystem. Raises
ValueError on non-Unix-ish systems if 'pathname' either starts or
ends with a slash.
"""
if os.sep == '/':
return pathname
if not pathname:
return pathname
if pathname[0] == '/':
raise ValueError("path '%s' cannot be absolute" % pathname)
if pathname[-1] == '/':
raise ValueError("path '%s' cannot end with '/'" % pathname)
paths = pathname.split('/')
while '.' in paths:
paths.remove('.')
if not paths:
return os.curdir
return os.path.join(*paths) | [
"def",
"convert_path",
"(",
"pathname",
")",
":",
"if",
"os",
".",
"sep",
"==",
"'/'",
":",
"return",
"pathname",
"if",
"not",
"pathname",
":",
"return",
"pathname",
"if",
"pathname",
"[",
"0",
"]",
"==",
"'/'",
":",
"raise",
"ValueError",
"(",
"\"path... | https://github.com/oracle/graalpython/blob/577e02da9755d916056184ec441c26e00b70145c/graalpython/lib-python/3/distutils/util.py#L108-L131 | |
wepe/MachineLearning | e426ffdd126829a2b8c71f368fd33170f342f538 | Ridge/kernel_ridge/kernel_ridge.py | python | KernelRidge.compute_kernel_matrix | (self, X1, X2) | return K | compute kernel matrix (gram matrix) give two input matrix | compute kernel matrix (gram matrix) give two input matrix | [
"compute",
"kernel",
"matrix",
"(",
"gram",
"matrix",
")",
"give",
"two",
"input",
"matrix"
] | def compute_kernel_matrix(self, X1, X2):
"""
compute kernel matrix (gram matrix) give two input matrix
"""
# sample size
n1 = X1.shape[0]
n2 = X2.shape[0]
# Gram matrix
K = np.zeros((n1, n2))
for i in range(n1):
for j in range(n2):
K[i, j] = self.kernel(X1[i], X2[j])
return K | [
"def",
"compute_kernel_matrix",
"(",
"self",
",",
"X1",
",",
"X2",
")",
":",
"# sample size",
"n1",
"=",
"X1",
".",
"shape",
"[",
"0",
"]",
"n2",
"=",
"X2",
".",
"shape",
"[",
"0",
"]",
"# Gram matrix",
"K",
"=",
"np",
".",
"zeros",
"(",
"(",
"n1... | https://github.com/wepe/MachineLearning/blob/e426ffdd126829a2b8c71f368fd33170f342f538/Ridge/kernel_ridge/kernel_ridge.py#L44-L59 | |
donnemartin/gitsome | d7c57abc7cb66e9c910a844f15d4536866da3310 | xonsh/ply/example/ansic/cparse.py | python | p_declaration_2 | (t) | declaration : declaration_specifiers SEMI | declaration : declaration_specifiers SEMI | [
"declaration",
":",
"declaration_specifiers",
"SEMI"
] | def p_declaration_2(t):
'declaration : declaration_specifiers SEMI'
pass | [
"def",
"p_declaration_2",
"(",
"t",
")",
":",
"pass"
] | https://github.com/donnemartin/gitsome/blob/d7c57abc7cb66e9c910a844f15d4536866da3310/xonsh/ply/example/ansic/cparse.py#L68-L70 | ||
CastagnaIT/plugin.video.netflix | 5cf5fa436eb9956576c0f62aa31a4c7d6c5b8a4a | resources/lib/utils/logging.py | python | measure_exec_time_decorator | (is_immediate=False) | return exec_time_decorator | A decorator that wraps a function call and times its execution | A decorator that wraps a function call and times its execution | [
"A",
"decorator",
"that",
"wraps",
"a",
"function",
"call",
"and",
"times",
"its",
"execution"
] | def measure_exec_time_decorator(is_immediate=False):
"""A decorator that wraps a function call and times its execution"""
# pylint: disable=missing-docstring
def exec_time_decorator(func):
@wraps(func)
def timing_wrapper(*args, **kwargs):
if not LOG.is_time_trace_enabled:
return func(*args, **kwargs)
LOG.add_time_trace_level()
start = time.perf_counter()
try:
return func(*args, **kwargs)
finally:
execution_time = int((time.perf_counter() - start) * 1000)
if is_immediate:
LOG.debug('Call to {} took {}ms', func.__name__, execution_time)
else:
LOG.add_time_trace(func.__name__, execution_time)
LOG.remove_time_trace_level()
return timing_wrapper
return exec_time_decorator | [
"def",
"measure_exec_time_decorator",
"(",
"is_immediate",
"=",
"False",
")",
":",
"# pylint: disable=missing-docstring",
"def",
"exec_time_decorator",
"(",
"func",
")",
":",
"@",
"wraps",
"(",
"func",
")",
"def",
"timing_wrapper",
"(",
"*",
"args",
",",
"*",
"*... | https://github.com/CastagnaIT/plugin.video.netflix/blob/5cf5fa436eb9956576c0f62aa31a4c7d6c5b8a4a/resources/lib/utils/logging.py#L132-L152 | |
NervanaSystems/neon | 8c3fb8a93b4a89303467b25817c60536542d08bd | neon/models/model.py | python | Model.get_description | (self, get_weights=False, keep_states=False) | return pdict | Gets a description of the model required to reconstruct the model with
no weights like from a yaml file.
Arguments:
get_weights: (Default value = False)
keep_states: (Default value = False)
Returns:
dict: Description of each component of the model. | Gets a description of the model required to reconstruct the model with
no weights like from a yaml file. | [
"Gets",
"a",
"description",
"of",
"the",
"model",
"required",
"to",
"reconstruct",
"the",
"model",
"with",
"no",
"weights",
"like",
"from",
"a",
"yaml",
"file",
"."
] | def get_description(self, get_weights=False, keep_states=False):
"""
Gets a description of the model required to reconstruct the model with
no weights like from a yaml file.
Arguments:
get_weights: (Default value = False)
keep_states: (Default value = False)
Returns:
dict: Description of each component of the model.
"""
pdict = dict()
pdict['neon_version'] = __neon_version__
compat_mode = self.be.compat_mode if self.be.compat_mode is not None else 'neon'
pdict['backend'] = {'type': self.be.__class__.__name__,
'compat_mode': compat_mode,
'rng_seed': self.be.rng_seed,
'rng_state': self.be.rng_get_state()}
if self.cost:
pdict['cost'] = self.cost.get_description()
if self.optimizer:
pdict['optimizer'] = self.optimizer.get_description()
pdict['model'] = self.layers.get_description(get_weights=get_weights,
keep_states=keep_states)
return pdict | [
"def",
"get_description",
"(",
"self",
",",
"get_weights",
"=",
"False",
",",
"keep_states",
"=",
"False",
")",
":",
"pdict",
"=",
"dict",
"(",
")",
"pdict",
"[",
"'neon_version'",
"]",
"=",
"__neon_version__",
"compat_mode",
"=",
"self",
".",
"be",
".",
... | https://github.com/NervanaSystems/neon/blob/8c3fb8a93b4a89303467b25817c60536542d08bd/neon/models/model.py#L365-L392 | |
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/modular/cusps.py | python | Cusp.is_infinity | (self) | return not self.__b | Returns True if this is the cusp infinity.
EXAMPLES::
sage: Cusp(3/5).is_infinity()
False
sage: Cusp(1,0).is_infinity()
True
sage: Cusp(0,1).is_infinity()
False | Returns True if this is the cusp infinity. | [
"Returns",
"True",
"if",
"this",
"is",
"the",
"cusp",
"infinity",
"."
] | def is_infinity(self):
"""
Returns True if this is the cusp infinity.
EXAMPLES::
sage: Cusp(3/5).is_infinity()
False
sage: Cusp(1,0).is_infinity()
True
sage: Cusp(0,1).is_infinity()
False
"""
return not self.__b | [
"def",
"is_infinity",
"(",
"self",
")",
":",
"return",
"not",
"self",
".",
"__b"
] | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/modular/cusps.py#L314-L327 | |
numba/numba | bf480b9e0da858a65508c2b17759a72ee6a44c51 | numba/typed/listobject.py | python | impl_make_mutable | (l) | list._make_mutable() | list._make_mutable() | [
"list",
".",
"_make_mutable",
"()"
] | def impl_make_mutable(l):
"""list._make_mutable()"""
if isinstance(l, types.ListType):
def impl(l):
_list_set_is_mutable(l, 1)
return impl | [
"def",
"impl_make_mutable",
"(",
"l",
")",
":",
"if",
"isinstance",
"(",
"l",
",",
"types",
".",
"ListType",
")",
":",
"def",
"impl",
"(",
"l",
")",
":",
"_list_set_is_mutable",
"(",
"l",
",",
"1",
")",
"return",
"impl"
] | https://github.com/numba/numba/blob/bf480b9e0da858a65508c2b17759a72ee6a44c51/numba/typed/listobject.py#L513-L519 | ||
out0fmemory/GoAgent-Always-Available | c4254984fea633ce3d1893fe5901debd9f22c2a9 | server/lib/google/appengine/tools/cron_xml_parser.py | python | CronXmlParser.ProcessXml | (self, xml_str) | Parses XML string and returns object representation of relevant info.
Args:
xml_str: The XML string.
Returns:
A list of Cron objects containing information about cron jobs from the
XML.
Raises:
AppEngineConfigException: In case of malformed XML or illegal inputs. | Parses XML string and returns object representation of relevant info. | [
"Parses",
"XML",
"string",
"and",
"returns",
"object",
"representation",
"of",
"relevant",
"info",
"."
] | def ProcessXml(self, xml_str):
"""Parses XML string and returns object representation of relevant info.
Args:
xml_str: The XML string.
Returns:
A list of Cron objects containing information about cron jobs from the
XML.
Raises:
AppEngineConfigException: In case of malformed XML or illegal inputs.
"""
try:
self.crons = []
self.errors = []
xml_root = ElementTree.fromstring(xml_str)
if xml_root.tag != 'cronentries':
raise AppEngineConfigException('Root tag must be <cronentries>')
for child in xml_root.getchildren():
self.ProcessCronNode(child)
if self.errors:
raise AppEngineConfigException('\n'.join(self.errors))
return self.crons
except ElementTree.ParseError:
raise AppEngineConfigException('Bad input -- not valid XML') | [
"def",
"ProcessXml",
"(",
"self",
",",
"xml_str",
")",
":",
"try",
":",
"self",
".",
"crons",
"=",
"[",
"]",
"self",
".",
"errors",
"=",
"[",
"]",
"xml_root",
"=",
"ElementTree",
".",
"fromstring",
"(",
"xml_str",
")",
"if",
"xml_root",
".",
"tag",
... | https://github.com/out0fmemory/GoAgent-Always-Available/blob/c4254984fea633ce3d1893fe5901debd9f22c2a9/server/lib/google/appengine/tools/cron_xml_parser.py#L73-L100 | ||
yumath/bertNER | ee044eb1333b532bed2147610abb4d600b1cd8cf | data_utils.py | python | cut_to_sentence | (text) | return sentences | Cut text to sentences | Cut text to sentences | [
"Cut",
"text",
"to",
"sentences"
] | def cut_to_sentence(text):
"""
Cut text to sentences
"""
sentence = []
sentences = []
len_p = len(text)
pre_cut = False
for idx, word in enumerate(text):
sentence.append(word)
cut = False
if pre_cut:
cut=True
pre_cut=False
if word in u"。;!?\n":
cut = True
if len_p > idx+1:
if text[idx+1] in ".。”\"\'“”‘’?!":
cut = False
pre_cut=True
if cut:
sentences.append(sentence)
sentence = []
if sentence:
sentences.append("".join(list(sentence)))
return sentences | [
"def",
"cut_to_sentence",
"(",
"text",
")",
":",
"sentence",
"=",
"[",
"]",
"sentences",
"=",
"[",
"]",
"len_p",
"=",
"len",
"(",
"text",
")",
"pre_cut",
"=",
"False",
"for",
"idx",
",",
"word",
"in",
"enumerate",
"(",
"text",
")",
":",
"sentence",
... | https://github.com/yumath/bertNER/blob/ee044eb1333b532bed2147610abb4d600b1cd8cf/data_utils.py#L224-L250 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.