| nwo | sha | path | language | identifier | parameters | argument_list | return_statement | docstring | docstring_summary | docstring_tokens | function | function_tokens | url |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BigBrotherBot/big-brother-bot | 848823c71413c86e7f1ff9584f43e08d40a7f2c0 | b3/plugins/pluginmanager/__init__.py | python | PluginmanagerPlugin.onStartup | (self) | Initialize plugin settings. | Initialize plugin settings. | [
"Initialize",
"plugin",
"settings",
"."
] | def onStartup(self):
"""
Initialize plugin settings.
"""
self._adminPlugin = self.console.getPlugin('admin')
# register our commands
if 'commands' in self.config.sections():
for cmd in self.config.options('commands'):
level = self.config.get('commands', cmd)
sp = cmd.split('-')
alias = None
if len(sp) == 2:
cmd, alias = sp
func = getCmd(self, cmd)
if func:
self._adminPlugin.registerCommand(self, cmd, level, func, alias)
# notice plugin started
self.debug('plugin started') | [
"def",
"onStartup",
"(",
"self",
")",
":",
"self",
".",
"_adminPlugin",
"=",
"self",
".",
"console",
".",
"getPlugin",
"(",
"'admin'",
")",
"# register our commands",
"if",
"'commands'",
"in",
"self",
".",
"config",
".",
"sections",
"(",
")",
":",
"for",
... | https://github.com/BigBrotherBot/big-brother-bot/blob/848823c71413c86e7f1ff9584f43e08d40a7f2c0/b3/plugins/pluginmanager/__init__.py#L59-L79 | ||
aws/aws-parallelcluster | f1fe5679a01c524e7ea904c329bd6d17318c6cd9 | cli/src/pcluster/aws/s3_resource.py | python | S3Resource.delete_object_versions | (self, bucket_name, prefix=None) | Delete object versions by filter. | Delete object versions by filter. | [
"Delete",
"object",
"versions",
"by",
"filter",
"."
] | def delete_object_versions(self, bucket_name, prefix=None):
"""Delete object versions by filter."""
self.get_bucket(bucket_name).object_versions.filter(Prefix=prefix).delete() | [
"def",
"delete_object_versions",
"(",
"self",
",",
"bucket_name",
",",
"prefix",
"=",
"None",
")",
":",
"self",
".",
"get_bucket",
"(",
"bucket_name",
")",
".",
"object_versions",
".",
"filter",
"(",
"Prefix",
"=",
"prefix",
")",
".",
"delete",
"(",
")"
] | https://github.com/aws/aws-parallelcluster/blob/f1fe5679a01c524e7ea904c329bd6d17318c6cd9/cli/src/pcluster/aws/s3_resource.py#L50-L52 | ||
aws-samples/aws-kube-codesuite | ab4e5ce45416b83bffb947ab8d234df5437f4fca | src/kubernetes/client/models/v2alpha1_cron_job_list.py | python | V2alpha1CronJobList.metadata | (self) | return self._metadata | Gets the metadata of this V2alpha1CronJobList.
Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
:return: The metadata of this V2alpha1CronJobList.
:rtype: V1ListMeta | Gets the metadata of this V2alpha1CronJobList.
Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata | [
"Gets",
"the",
"metadata",
"of",
"this",
"V2alpha1CronJobList",
".",
"Standard",
"list",
"metadata",
".",
"More",
"info",
":",
"https",
":",
"//",
"git",
".",
"k8s",
".",
"io",
"/",
"community",
"/",
"contributors",
"/",
"devel",
"/",
"api",
"-",
"conven... | def metadata(self):
"""
Gets the metadata of this V2alpha1CronJobList.
Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
:return: The metadata of this V2alpha1CronJobList.
:rtype: V1ListMeta
"""
return self._metadata | [
"def",
"metadata",
"(",
"self",
")",
":",
"return",
"self",
".",
"_metadata"
] | https://github.com/aws-samples/aws-kube-codesuite/blob/ab4e5ce45416b83bffb947ab8d234df5437f4fca/src/kubernetes/client/models/v2alpha1_cron_job_list.py#L124-L132 | |
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/ecm/v20190719/models.py | python | DescribeDisksRequest.__init__ | (self) | r"""
:param Filters: 过滤条件。参数不支持同时指定`DiskIds`和`Filters`。<br><li>disk-usage - Array of String - 是否必填:否 -(过滤条件)按云盘类型过滤。 (SYSTEM_DISK:表示系统盘 | DATA_DISK:表示数据盘)<br><li>disk-charge-type - Array of String - 是否必填:否 -(过滤条件)按照云硬盘计费模式过滤。 (PREPAID:表示预付费,即包年包月 | POSTPAID_BY_HOUR:表示后付费,即按量计费。)<br><li>portable - Array of String - 是否必填:否 -(过滤条件)按是否为弹性云盘过滤。 (TRUE:表示弹性云盘 | FALSE:表示非弹性云盘。)<br><li>project-id - Array of Integer - 是否必填:否 -(过滤条件)按云硬盘所属项目ID过滤。<br><li>disk-id - Array of String - 是否必填:否 -(过滤条件)按照云硬盘ID过滤。云盘ID形如:`disk-11112222`。<br><li>disk-name - Array of String - 是否必填:否 -(过滤条件)按照云盘名称过滤。<br><li>disk-type - Array of String - 是否必填:否 -(过滤条件)按照云盘介质类型过滤。(CLOUD_BASIC:表示普通云硬盘 | CLOUD_PREMIUM:表示高性能云硬盘。| CLOUD_SSD:表示SSD云硬盘 | CLOUD_HSSD:表示增强型SSD云硬盘。| CLOUD_TSSD:表示极速型云硬盘。)<br><li>disk-state - Array of String - 是否必填:否 -(过滤条件)按照云盘状态过滤。(UNATTACHED:未挂载 | ATTACHING:挂载中 | ATTACHED:已挂载 | DETACHING:解挂中 | EXPANDING:扩容中 | ROLLBACKING:回滚中 | TORECYCLE:待回收。)<br><li>instance-id - Array of String - 是否必填:否 -(过滤条件)按照云盘挂载的云主机实例ID过滤。可根据此参数查询挂载在指定云主机下的云硬盘。<br><li>zone - Array of String - 是否必填:否 -(过滤条件)按照[可用区](/document/product/213/15753#ZoneInfo)过滤。<br><li>instance-ip-address - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载云主机的内网或外网IP过滤。<br><li>instance-name - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载的实例名称过滤。<br><li>tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键进行过滤。<br><li>tag-value - Array of String - 是否必填:否 -(过滤条件)照标签值进行过滤。<br><li>tag:tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键值对进行过滤。 tag-key使用具体的标签键进行替换。
:type Filters: list of Filter
:param Limit: 返回数量,默认为20,最大值为100。关于`Limit`的更进一步介绍请参考 API [简介](/document/product/362/15633)中的相关小节。
:type Limit: int
:param OrderField: 云盘列表排序的依据字段。取值范围:<br><li>CREATE_TIME:依据云盘的创建时间排序<br><li>DEADLINE:依据云盘的到期时间排序<br>默认按云盘创建时间排序。
:type OrderField: str
:param Offset: 偏移量,默认为0。关于`Offset`的更进一步介绍请参考API[简介](/document/product/362/15633)中的相关小节。
:type Offset: int
:param ReturnBindAutoSnapshotPolicy: 云盘详情中是否需要返回云盘绑定的定期快照策略ID,TRUE表示需要返回,FALSE表示不返回。
:type ReturnBindAutoSnapshotPolicy: bool
:param DiskIds: 按照一个或者多个云硬盘ID查询。云硬盘ID形如:`disk-11112222`,此参数的具体格式可参考API[简介](/document/product/362/15633)的ids.N一节)。参数不支持同时指定`DiskIds`和`Filters`。
:type DiskIds: list of str
:param Order: 输出云盘列表的排列顺序。取值范围:<br><li>ASC:升序排列<br><li>DESC:降序排列。
:type Order: str | r"""
:param Filters: 过滤条件。参数不支持同时指定`DiskIds`和`Filters`。<br><li>disk-usage - Array of String - 是否必填:否 -(过滤条件)按云盘类型过滤。 (SYSTEM_DISK:表示系统盘 | DATA_DISK:表示数据盘)<br><li>disk-charge-type - Array of String - 是否必填:否 -(过滤条件)按照云硬盘计费模式过滤。 (PREPAID:表示预付费,即包年包月 | POSTPAID_BY_HOUR:表示后付费,即按量计费。)<br><li>portable - Array of String - 是否必填:否 -(过滤条件)按是否为弹性云盘过滤。 (TRUE:表示弹性云盘 | FALSE:表示非弹性云盘。)<br><li>project-id - Array of Integer - 是否必填:否 -(过滤条件)按云硬盘所属项目ID过滤。<br><li>disk-id - Array of String - 是否必填:否 -(过滤条件)按照云硬盘ID过滤。云盘ID形如:`disk-11112222`。<br><li>disk-name - Array of String - 是否必填:否 -(过滤条件)按照云盘名称过滤。<br><li>disk-type - Array of String - 是否必填:否 -(过滤条件)按照云盘介质类型过滤。(CLOUD_BASIC:表示普通云硬盘 | CLOUD_PREMIUM:表示高性能云硬盘。| CLOUD_SSD:表示SSD云硬盘 | CLOUD_HSSD:表示增强型SSD云硬盘。| CLOUD_TSSD:表示极速型云硬盘。)<br><li>disk-state - Array of String - 是否必填:否 -(过滤条件)按照云盘状态过滤。(UNATTACHED:未挂载 | ATTACHING:挂载中 | ATTACHED:已挂载 | DETACHING:解挂中 | EXPANDING:扩容中 | ROLLBACKING:回滚中 | TORECYCLE:待回收。)<br><li>instance-id - Array of String - 是否必填:否 -(过滤条件)按照云盘挂载的云主机实例ID过滤。可根据此参数查询挂载在指定云主机下的云硬盘。<br><li>zone - Array of String - 是否必填:否 -(过滤条件)按照[可用区](/document/product/213/15753#ZoneInfo)过滤。<br><li>instance-ip-address - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载云主机的内网或外网IP过滤。<br><li>instance-name - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载的实例名称过滤。<br><li>tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键进行过滤。<br><li>tag-value - Array of String - 是否必填:否 -(过滤条件)照标签值进行过滤。<br><li>tag:tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键值对进行过滤。 tag-key使用具体的标签键进行替换。
:type Filters: list of Filter
:param Limit: 返回数量,默认为20,最大值为100。关于`Limit`的更进一步介绍请参考 API [简介](/document/product/362/15633)中的相关小节。
:type Limit: int
:param OrderField: 云盘列表排序的依据字段。取值范围:<br><li>CREATE_TIME:依据云盘的创建时间排序<br><li>DEADLINE:依据云盘的到期时间排序<br>默认按云盘创建时间排序。
:type OrderField: str
:param Offset: 偏移量,默认为0。关于`Offset`的更进一步介绍请参考API[简介](/document/product/362/15633)中的相关小节。
:type Offset: int
:param ReturnBindAutoSnapshotPolicy: 云盘详情中是否需要返回云盘绑定的定期快照策略ID,TRUE表示需要返回,FALSE表示不返回。
:type ReturnBindAutoSnapshotPolicy: bool
:param DiskIds: 按照一个或者多个云硬盘ID查询。云硬盘ID形如:`disk-11112222`,此参数的具体格式可参考API[简介](/document/product/362/15633)的ids.N一节)。参数不支持同时指定`DiskIds`和`Filters`。
:type DiskIds: list of str
:param Order: 输出云盘列表的排列顺序。取值范围:<br><li>ASC:升序排列<br><li>DESC:降序排列。
:type Order: str | [
"r",
":",
"param",
"Filters",
":",
"过滤条件。参数不支持同时指定",
"DiskIds",
"和",
"Filters",
"。<br",
">",
"<li",
">",
"disk",
"-",
"usage",
"-",
"Array",
"of",
"String",
"-",
"是否必填:否",
"-",
"(过滤条件)按云盘类型过滤。",
"(",
"SYSTEM_DISK:表示系统盘",
"|",
"DATA_DISK:表示数据盘",
")",
"<br",
... | def __init__(self):
r"""
:param Filters: 过滤条件。参数不支持同时指定`DiskIds`和`Filters`。<br><li>disk-usage - Array of String - 是否必填:否 -(过滤条件)按云盘类型过滤。 (SYSTEM_DISK:表示系统盘 | DATA_DISK:表示数据盘)<br><li>disk-charge-type - Array of String - 是否必填:否 -(过滤条件)按照云硬盘计费模式过滤。 (PREPAID:表示预付费,即包年包月 | POSTPAID_BY_HOUR:表示后付费,即按量计费。)<br><li>portable - Array of String - 是否必填:否 -(过滤条件)按是否为弹性云盘过滤。 (TRUE:表示弹性云盘 | FALSE:表示非弹性云盘。)<br><li>project-id - Array of Integer - 是否必填:否 -(过滤条件)按云硬盘所属项目ID过滤。<br><li>disk-id - Array of String - 是否必填:否 -(过滤条件)按照云硬盘ID过滤。云盘ID形如:`disk-11112222`。<br><li>disk-name - Array of String - 是否必填:否 -(过滤条件)按照云盘名称过滤。<br><li>disk-type - Array of String - 是否必填:否 -(过滤条件)按照云盘介质类型过滤。(CLOUD_BASIC:表示普通云硬盘 | CLOUD_PREMIUM:表示高性能云硬盘。| CLOUD_SSD:表示SSD云硬盘 | CLOUD_HSSD:表示增强型SSD云硬盘。| CLOUD_TSSD:表示极速型云硬盘。)<br><li>disk-state - Array of String - 是否必填:否 -(过滤条件)按照云盘状态过滤。(UNATTACHED:未挂载 | ATTACHING:挂载中 | ATTACHED:已挂载 | DETACHING:解挂中 | EXPANDING:扩容中 | ROLLBACKING:回滚中 | TORECYCLE:待回收。)<br><li>instance-id - Array of String - 是否必填:否 -(过滤条件)按照云盘挂载的云主机实例ID过滤。可根据此参数查询挂载在指定云主机下的云硬盘。<br><li>zone - Array of String - 是否必填:否 -(过滤条件)按照[可用区](/document/product/213/15753#ZoneInfo)过滤。<br><li>instance-ip-address - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载云主机的内网或外网IP过滤。<br><li>instance-name - Array of String - 是否必填:否 -(过滤条件)按云盘所挂载的实例名称过滤。<br><li>tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键进行过滤。<br><li>tag-value - Array of String - 是否必填:否 -(过滤条件)照标签值进行过滤。<br><li>tag:tag-key - Array of String - 是否必填:否 -(过滤条件)按照标签键值对进行过滤。 tag-key使用具体的标签键进行替换。
:type Filters: list of Filter
:param Limit: 返回数量,默认为20,最大值为100。关于`Limit`的更进一步介绍请参考 API [简介](/document/product/362/15633)中的相关小节。
:type Limit: int
:param OrderField: 云盘列表排序的依据字段。取值范围:<br><li>CREATE_TIME:依据云盘的创建时间排序<br><li>DEADLINE:依据云盘的到期时间排序<br>默认按云盘创建时间排序。
:type OrderField: str
:param Offset: 偏移量,默认为0。关于`Offset`的更进一步介绍请参考API[简介](/document/product/362/15633)中的相关小节。
:type Offset: int
:param ReturnBindAutoSnapshotPolicy: 云盘详情中是否需要返回云盘绑定的定期快照策略ID,TRUE表示需要返回,FALSE表示不返回。
:type ReturnBindAutoSnapshotPolicy: bool
:param DiskIds: 按照一个或者多个云硬盘ID查询。云硬盘ID形如:`disk-11112222`,此参数的具体格式可参考API[简介](/document/product/362/15633)的ids.N一节)。参数不支持同时指定`DiskIds`和`Filters`。
:type DiskIds: list of str
:param Order: 输出云盘列表的排列顺序。取值范围:<br><li>ASC:升序排列<br><li>DESC:降序排列。
:type Order: str
"""
self.Filters = None
self.Limit = None
self.OrderField = None
self.Offset = None
self.ReturnBindAutoSnapshotPolicy = None
self.DiskIds = None
self.Order = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"Filters",
"=",
"None",
"self",
".",
"Limit",
"=",
"None",
"self",
".",
"OrderField",
"=",
"None",
"self",
".",
"Offset",
"=",
"None",
"self",
".",
"ReturnBindAutoSnapshotPolicy",
"=",
"None",
"self"... | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/ecm/v20190719/models.py#L3028-L3051 | ||
cms-dev/cms | 0401c5336b34b1731736045da4877fef11889274 | cms/grading/Sandbox.py | python | wait_without_std | (procs) | return [process.wait() for process in procs] | Wait for the conclusion of the processes in the list, avoiding
starving for input and output.
procs (list): a list of processes as returned by Popen.
return (list): a list of return codes. | Wait for the conclusion of the processes in the list, avoiding
starving for input and output. | [
"Wait",
"for",
"the",
"conclusion",
"of",
"the",
"processes",
"in",
"the",
"list",
"avoiding",
"starving",
"for",
"input",
"and",
"output",
"."
] | def wait_without_std(procs):
"""Wait for the conclusion of the processes in the list, avoiding
starving for input and output.
procs (list): a list of processes as returned by Popen.
return (list): a list of return codes.
"""
def get_to_consume():
"""Amongst stdout and stderr of list of processes, find the
ones that are alive and not closed (i.e., that may still want
to write to).
return (list): a list of open streams.
"""
to_consume = []
for process in procs:
if process.poll() is None: # If the process is alive.
if process.stdout and not process.stdout.closed:
to_consume.append(process.stdout)
if process.stderr and not process.stderr.closed:
to_consume.append(process.stderr)
return to_consume
# Close stdin; just saying stdin=None isn't ok, because the
# standard input would be obtained from the application stdin,
# that could interfere with the child process behaviour
for process in procs:
if process.stdin:
process.stdin.close()
# Read stdout and stderr to the end without having to block
# because of insufficient buffering (and without allocating too
# much memory). Unix specific.
to_consume = get_to_consume()
while len(to_consume) > 0:
to_read = select.select(to_consume, [], [], 1.0)[0]
for file_ in to_read:
file_.read(8 * 1024)
to_consume = get_to_consume()
return [process.wait() for process in procs] | [
"def",
"wait_without_std",
"(",
"procs",
")",
":",
"def",
"get_to_consume",
"(",
")",
":",
"\"\"\"Amongst stdout and stderr of list of processes, find the\n ones that are alive and not closed (i.e., that may still want\n to write to).\n\n return (list): a list of open stre... | https://github.com/cms-dev/cms/blob/0401c5336b34b1731736045da4877fef11889274/cms/grading/Sandbox.py#L63-L106 | |
moinwiki/moin | 568f223231aadecbd3b21a701ec02271f8d8021d | src/moin/scripts/migration/moin19/_utils19.py | python | quoteWikinameFS | (wikiname, charset=CHARSET19) | return ''.join(quoted) | Return file system representation of a Unicode WikiName.
Warning: will raise UnicodeError if wikiname can not be encoded using
charset. The default value 'utf-8' can encode any character.
:param wikiname: wiki name [unicode]
:param charset: charset to encode string (before quoting)
:rtype: string
:returns: quoted name, safe for any file system | Return file system representation of a Unicode WikiName. | [
"Return",
"file",
"system",
"representation",
"of",
"a",
"Unicode",
"WikiName",
"."
] | def quoteWikinameFS(wikiname, charset=CHARSET19):
"""
Return file system representation of a Unicode WikiName.
Warning: will raise UnicodeError if wikiname can not be encoded using
charset. The default value 'utf-8' can encode any character.
:param wikiname: wiki name [unicode]
:param charset: charset to encode string (before quoting)
:rtype: string
:returns: quoted name, safe for any file system
"""
filename = wikiname.encode(charset)
quoted = []
location = 0
for needle in UNSAFE.finditer(filename):
# append leading safe stuff
quoted.append(filename[location:needle.start()])
location = needle.end()
# Quote and append unsafe stuff
quoted.append('(')
for character in needle.group():
quoted.append("{0:02x}".format(ord(character)))
quoted.append(')')
# append rest of string
quoted.append(filename[location:])
return ''.join(quoted) | [
"def",
"quoteWikinameFS",
"(",
"wikiname",
",",
"charset",
"=",
"CHARSET19",
")",
":",
"filename",
"=",
"wikiname",
".",
"encode",
"(",
"charset",
")",
"quoted",
"=",
"[",
"]",
"location",
"=",
"0",
"for",
"needle",
"in",
"UNSAFE",
".",
"finditer",
"(",
... | https://github.com/moinwiki/moin/blob/568f223231aadecbd3b21a701ec02271f8d8021d/src/moin/scripts/migration/moin19/_utils19.py#L82-L110 | |
KalleHallden/AutoTimer | 2d954216700c4930baa154e28dbddc34609af7ce | env/lib/python2.7/site-packages/setuptools/_vendor/packaging/markers.py | python | _eval_op | (lhs, op, rhs) | return oper(lhs, rhs) | [] | def _eval_op(lhs, op, rhs):
try:
spec = Specifier("".join([op.serialize(), rhs]))
except InvalidSpecifier:
pass
else:
return spec.contains(lhs)
oper = _operators.get(op.serialize())
if oper is None:
raise UndefinedComparison(
"Undefined {0!r} on {1!r} and {2!r}.".format(op, lhs, rhs)
)
return oper(lhs, rhs) | [
"def",
"_eval_op",
"(",
"lhs",
",",
"op",
",",
"rhs",
")",
":",
"try",
":",
"spec",
"=",
"Specifier",
"(",
"\"\"",
".",
"join",
"(",
"[",
"op",
".",
"serialize",
"(",
")",
",",
"rhs",
"]",
")",
")",
"except",
"InvalidSpecifier",
":",
"pass",
"els... | https://github.com/KalleHallden/AutoTimer/blob/2d954216700c4930baa154e28dbddc34609af7ce/env/lib/python2.7/site-packages/setuptools/_vendor/packaging/markers.py#L183-L197 | |||
umautobots/vod-converter | 29e16918145ebd97e1692ae8e7ef3dc4da242a88 | vod_converter/converter.py | python | Ingestor.ingest | (self, path) | | Read in data from the filesystem.
:param path: '/path/to/data/'
:return: an array of dicts conforming to `IMAGE_DETECTION_SCHEMA` | Read in data from the filesystem.
:param path: '/path/to/data/'
:return: an array of dicts conforming to `IMAGE_DETECTION_SCHEMA` | [
"Read",
"in",
"data",
"from",
"the",
"filesystem",
".",
":",
"param",
"path",
":",
"/",
"path",
"/",
"to",
"/",
"data",
"/",
":",
"return",
":",
"an",
"array",
"of",
"dicts",
"conforming",
"to",
"IMAGE_DETECTION_SCHEMA"
] | def ingest(self, path):
"""
Read in data from the filesystem.
:param path: '/path/to/data/'
:return: an array of dicts conforming to `IMAGE_DETECTION_SCHEMA`
"""
pass | [
"def",
"ingest",
"(",
"self",
",",
"path",
")",
":",
"pass"
] | https://github.com/umautobots/vod-converter/blob/29e16918145ebd97e1692ae8e7ef3dc4da242a88/vod_converter/converter.py#L81-L87 | ||
imagr/imagr | e54bcf3f0f951babcd2fa153de2dd8556aa3506d | Imagr/gmacpyutil/systemconfig.py | python | GetDot1xInterfaces | () | return interfaces | Retrieves attributes of all dot1x compatible interfaces.
Returns:
Array of dict or empty array | Retrieves attributes of all dot1x compatible interfaces. | [
"Retrieves",
"attributes",
"of",
"all",
"dot1x",
"compatible",
"interfaces",
"."
] | def GetDot1xInterfaces():
"""Retrieves attributes of all dot1x compatible interfaces.
Returns:
Array of dict or empty array
"""
interfaces = []
for interface in GetNetworkInterfaces():
if interface['type'] == 'IEEE80211' or interface['type'] == 'Ethernet':
if (interface['builtin'] and
'AppleThunderboltIPPort' not in interface['bus']):
interfaces.append(interface)
return interfaces | [
"def",
"GetDot1xInterfaces",
"(",
")",
":",
"interfaces",
"=",
"[",
"]",
"for",
"interface",
"in",
"GetNetworkInterfaces",
"(",
")",
":",
"if",
"interface",
"[",
"'type'",
"]",
"==",
"'IEEE80211'",
"or",
"interface",
"[",
"'type'",
"]",
"==",
"'Ethernet'",
... | https://github.com/imagr/imagr/blob/e54bcf3f0f951babcd2fa153de2dd8556aa3506d/Imagr/gmacpyutil/systemconfig.py#L381-L393 | |
cocagne/paxos | cf3b5a2bf6ece39d2432b7ebfe1efb2e232bc2df | paxos/practical.py | python | Messenger.send_prepare_nack | (self, to_uid, proposal_id, promised_id) | Sends a Prepare Nack message for the proposal to the specified node | Sends a Prepare Nack message for the proposal to the specified node | [
"Sends",
"a",
"Prepare",
"Nack",
"message",
"for",
"the",
"proposal",
"to",
"the",
"specified",
"node"
] | def send_prepare_nack(self, to_uid, proposal_id, promised_id):
'''
Sends a Prepare Nack message for the proposal to the specified node
''' | [
"def",
"send_prepare_nack",
"(",
"self",
",",
"to_uid",
",",
"proposal_id",
",",
"promised_id",
")",
":"
] | https://github.com/cocagne/paxos/blob/cf3b5a2bf6ece39d2432b7ebfe1efb2e232bc2df/paxos/practical.py#L12-L15 | ||
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/lib/python2.7/site-packages/redis/client.py | python | PubSub.unsubscribe | (self, *args) | return self.execute_command('UNSUBSCRIBE', *args) | Unsubscribe from the supplied channels. If empty, unsubscribe from
all channels | Unsubscribe from the supplied channels. If empty, unsubscribe from
all channels | [
"Unsubscribe",
"from",
"the",
"supplied",
"channels",
".",
"If",
"empty",
"unsubscribe",
"from",
"all",
"channels"
] | def unsubscribe(self, *args):
"""
Unsubscribe from the supplied channels. If empty, unsubscribe from
all channels
"""
if args:
args = list_or_args(args[0], args[1:])
return self.execute_command('UNSUBSCRIBE', *args) | [
"def",
"unsubscribe",
"(",
"self",
",",
"*",
"args",
")",
":",
"if",
"args",
":",
"args",
"=",
"list_or_args",
"(",
"args",
"[",
"0",
"]",
",",
"args",
"[",
"1",
":",
"]",
")",
"return",
"self",
".",
"execute_command",
"(",
"'UNSUBSCRIBE'",
",",
"*... | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/redis/client.py#L2489-L2496 | |
OpenMDAO/OpenMDAO-Framework | f2e37b7de3edeaaeb2d251b375917adec059db9b | openmdao.main/src/openmdao/main/plugin.py | python | _load_templates | () | return templates, class_templates, test_template | Reads templates from files in the plugin_templates directory.
conf.py:
This is the template for the file that Sphinx uses to configure itself.
It's intended to match the conf.py for the OpenMDAO docs, so if those
change, this may need to be updated.
index.rst
Template for the top level file in the Sphinx docs for the plugin.
usage.rst
Template for the file where the user may add specific usage documentation
for the plugin.
setup.py
Template for the file that packages and install the plugin using
setuptools.
MANIFEST.in
Template for the file that tells setuptools/distutils what extra data
files to include in the distribution for the plugin.
README.txt
Template for the README.txt file.
setup.cfg
Template for the setup configuration file, where all of the user
supplied metadata is located. This file may be hand edited by the
plugin developer. | Reads templates from files in the plugin_templates directory. | [
"Reads",
"templates",
"from",
"files",
"in",
"the",
"plugin_templates",
"directory",
"."
] | def _load_templates():
''' Reads templates from files in the plugin_templates directory.
conf.py:
This is the template for the file that Sphinx uses to configure itself.
It's intended to match the conf.py for the OpenMDAO docs, so if those
change, this may need to be updated.
index.rst
Template for the top level file in the Sphinx docs for the plugin.
usage.rst
Template for the file where the user may add specific usage documentation
for the plugin.
setup.py
Template for the file that packages and install the plugin using
setuptools.
MANIFEST.in
Template for the file that tells setuptools/distutils what extra data
files to include in the distribution for the plugin.
README.txt
Template for the README.txt file.
setup.cfg
Template for the setup configuration file, where all of the user
supplied metadata is located. This file may be hand edited by the
plugin developer.
'''
# There are a number of string templates that are used to produce various
# files within the plugin distribution. These templates are stored in the
# templates dict, with the key being the name of the file that the
# template corresponds to.
templates = {}
for item in ['index.rst', 'usage.rst', 'MANIFEST.in',
'README.txt', 'setup.cfg']:
infile = resource_stream(__name__,
os.path.join('plugin_templates', item))
templates[item] = infile.read()
infile.close()
infile = resource_stream(__name__,
os.path.join('plugin_templates', 'setup_py_template'))
templates['setup.py'] = infile.read()
infile.close()
infile = resource_stream(__name__,
os.path.join('plugin_templates', 'conf_py_template'))
templates['conf.py'] = infile.read()
infile.close()
# This dict contains string templates corresponding to skeleton python
# source files for each of the recognized plugin types.
# TODO: These should be updated to reflect best practices because most
# plugin developers will start with these when they create new plugins.
class_templates = {}
for item in ['openmdao.component', 'openmdao.driver', 'openmdao.variable',
'openmdao.surrogatemodel']:
infile = resource_stream(__name__,
os.path.join('plugin_templates', item))
class_templates[item] = infile.read()
infile.close()
infile = resource_stream(__name__,
os.path.join('plugin_templates', 'test_template'))
test_template = infile.read()
infile.close()
return templates, class_templates, test_template | [
"def",
"_load_templates",
"(",
")",
":",
"# There are a number of string templates that are used to produce various",
"# files within the plugin distribution. These templates are stored in the",
"# templates dict, with the key being the name of the file that the",
"# template corresponds to.",
"te... | https://github.com/OpenMDAO/OpenMDAO-Framework/blob/f2e37b7de3edeaaeb2d251b375917adec059db9b/openmdao.main/src/openmdao/main/plugin.py#L41-L123 | |
hubblestack/hubble | 763142474edcecdec5fd25591dc29c3536e8f969 | hubblestack/modules/conf_publisher.py | python | _filter_config | (opts_to_log, remove_dots=True) | return filtered_conf | Filters out keys containing certain patterns to avoid sensitive information being sent to splunk | Filters out keys containing certain patterns to avoid sensitive information being sent to splunk | [
"Filters",
"out",
"keys",
"containing",
"certain",
"patterns",
"to",
"avoid",
"sensitive",
"information",
"being",
"sent",
"to",
"splunk"
] | def _filter_config(opts_to_log, remove_dots=True):
"""
Filters out keys containing certain patterns to avoid sensitive information being sent to splunk
"""
patterns_to_filter = ["password", "token", "passphrase", "privkey", "keyid", "s3.key"]
filtered_conf = _remove_sensitive_info(opts_to_log, patterns_to_filter)
if remove_dots:
for key in filtered_conf.keys():
if '.' in key:
filtered_conf[key.replace('.', '_')] = filtered_conf.pop(key)
return filtered_conf | [
"def",
"_filter_config",
"(",
"opts_to_log",
",",
"remove_dots",
"=",
"True",
")",
":",
"patterns_to_filter",
"=",
"[",
"\"password\"",
",",
"\"token\"",
",",
"\"passphrase\"",
",",
"\"privkey\"",
",",
"\"keyid\"",
",",
"\"s3.key\"",
"]",
"filtered_conf",
"=",
"... | https://github.com/hubblestack/hubble/blob/763142474edcecdec5fd25591dc29c3536e8f969/hubblestack/modules/conf_publisher.py#L65-L75 | |
openstack/openstack-ansible | 954567346e24c46a07d1f6d018ffb9e80ea7960d | osa_toolkit/manage.py | python | print_containers_per_group | (inventory) | return table | Return a table of groups and the containers in each group.
Keyword arguments:
inventory -- inventory dictionary | Return a table of groups and the containers in each group. | [
"Return",
"a",
"table",
"of",
"groups",
"and",
"the",
"containers",
"in",
"each",
"group",
"."
] | def print_containers_per_group(inventory):
"""Return a table of groups and the containers in each group.
Keyword arguments:
inventory -- inventory dictionary
"""
required_list = [
'groups',
'container_name'
]
table = prettytable.PrettyTable(required_list)
for group_name in inventory.keys():
containers = get_containers_for_group(inventory, group_name)
# Don't show a group if it has no containers
if containers is None or len(containers) < 1:
continue
# Don't show default group
if len(containers) == 1 and '_' not in containers[0]:
continue
# Join with newlines here to avoid having a horrific table with tons
# of line wrapping.
row = [group_name, '\n'.join(containers)]
table.add_row(row)
for tbl in table.align.keys():
table.align[tbl] = 'l'
return table | [
"def",
"print_containers_per_group",
"(",
"inventory",
")",
":",
"required_list",
"=",
"[",
"'groups'",
",",
"'container_name'",
"]",
"table",
"=",
"prettytable",
".",
"PrettyTable",
"(",
"required_list",
")",
"for",
"group_name",
"in",
"inventory",
".",
"keys",
... | https://github.com/openstack/openstack-ansible/blob/954567346e24c46a07d1f6d018ffb9e80ea7960d/osa_toolkit/manage.py#L189-L220 | |
coderSkyChen/Action_Recognition_Zoo | 92ec5ec3efeee852aec5c057798298cd3a8e58ae | model_zoo/models/slim/nets/resnet_v1.py | python | resnet_v1_152 | (inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
reuse=None,
scope='resnet_v1_152') | return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, reuse=reuse, scope=scope) | ResNet-152 model of [1]. See resnet_v1() for arg and return description. | ResNet-152 model of [1]. See resnet_v1() for arg and return description. | [
"ResNet",
"-",
"152",
"model",
"of",
"[",
"1",
"]",
".",
"See",
"resnet_v1",
"()",
"for",
"arg",
"and",
"return",
"description",
"."
] | def resnet_v1_152(inputs,
num_classes=None,
is_training=True,
global_pool=True,
output_stride=None,
reuse=None,
scope='resnet_v1_152'):
"""ResNet-152 model of [1]. See resnet_v1() for arg and return description."""
blocks = [
resnet_utils.Block(
'block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
resnet_utils.Block(
'block2', bottleneck, [(512, 128, 1)] * 7 + [(512, 128, 2)]),
resnet_utils.Block(
'block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
resnet_utils.Block(
'block4', bottleneck, [(2048, 512, 1)] * 3)]
return resnet_v1(inputs, blocks, num_classes, is_training,
global_pool=global_pool, output_stride=output_stride,
include_root_block=True, reuse=reuse, scope=scope) | [
"def",
"resnet_v1_152",
"(",
"inputs",
",",
"num_classes",
"=",
"None",
",",
"is_training",
"=",
"True",
",",
"global_pool",
"=",
"True",
",",
"output_stride",
"=",
"None",
",",
"reuse",
"=",
"None",
",",
"scope",
"=",
"'resnet_v1_152'",
")",
":",
"blocks"... | https://github.com/coderSkyChen/Action_Recognition_Zoo/blob/92ec5ec3efeee852aec5c057798298cd3a8e58ae/model_zoo/models/slim/nets/resnet_v1.py#L254-L273 | |
kubernetes-client/python | 47b9da9de2d02b2b7a34fbe05afb44afd130d73a | kubernetes/client/api/core_v1_api.py | python | CoreV1Api.replace_namespace_status | (self, name, body, **kwargs) | return self.replace_namespace_status_with_http_info(name, body, **kwargs) | replace_namespace_status # noqa: E501
replace status of the specified Namespace # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_namespace_status(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the Namespace (required)
:param V1Namespace body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1Namespace
If the method is called asynchronously,
returns the request thread. | replace_namespace_status # noqa: E501 | [
"replace_namespace_status",
"#",
"noqa",
":",
"E501"
] | def replace_namespace_status(self, name, body, **kwargs): # noqa: E501
"""replace_namespace_status # noqa: E501
replace status of the specified Namespace # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_namespace_status(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the Namespace (required)
:param V1Namespace body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1Namespace
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.replace_namespace_status_with_http_info(name, body, **kwargs) | [
"def",
"replace_namespace_status",
"(",
"self",
",",
"name",
",",
"body",
",",
"*",
"*",
"kwargs",
")",
":",
"# noqa: E501",
"kwargs",
"[",
"'_return_http_data_only'",
"]",
"=",
"True",
"return",
"self",
".",
"replace_namespace_status_with_http_info",
"(",
"name",... | https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/api/core_v1_api.py#L25643-L25670 | |
bachya/smart-home | 536b989e0d7057c7a8a65b2ac9bbffd4b826cce7 | hass/settings/custom_components/hacs/tasks/base.py | python | HacsTask.slug | (self) | return self.__class__.__module__.rsplit(".", maxsplit=1)[-1] | Return the check slug. | Return the check slug. | [
"Return",
"the",
"check",
"slug",
"."
] | def slug(self) -> str:
"""Return the check slug."""
return self.__class__.__module__.rsplit(".", maxsplit=1)[-1] | [
"def",
"slug",
"(",
"self",
")",
"->",
"str",
":",
"return",
"self",
".",
"__class__",
".",
"__module__",
".",
"rsplit",
"(",
"\".\"",
",",
"maxsplit",
"=",
"1",
")",
"[",
"-",
"1",
"]"
] | https://github.com/bachya/smart-home/blob/536b989e0d7057c7a8a65b2ac9bbffd4b826cce7/hass/settings/custom_components/hacs/tasks/base.py#L29-L31 | |
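The `slug` row above derives a task's slug from the last dotted component of its defining module. A stand-alone sketch of that lookup (the module path below is illustrative, not taken from HACS):

```python
class ExampleTask:
    pass

# Pretend the class was defined in a HACS-style task module (hypothetical path).
ExampleTask.__module__ = "custom_components.hacs.tasks.example_task"

def slug(obj):
    # Last dotted component of the defining module, exactly as the row computes it.
    return obj.__class__.__module__.rsplit(".", maxsplit=1)[-1]

print(slug(ExampleTask()))  # example_task
```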
bendmorris/static-python | 2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473 | Lib/numbers.py | python | Integral.__rlshift__ | (self, other) | other << self | other << self | [
"other",
"<<",
"self"
] | def __rlshift__(self, other):
"""other << self"""
raise NotImplementedError | [
"def",
"__rlshift__",
"(",
"self",
",",
"other",
")",
":",
"raise",
"NotImplementedError"
] | https://github.com/bendmorris/static-python/blob/2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473/Lib/numbers.py#L330-L332 | ||
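`Integral.__rlshift__` above is the reflected form of `<<`: Python calls it when the left operand's own `__lshift__` returns `NotImplemented`. A concrete sketch with a hypothetical toy class:

```python
class BitShift:
    """Toy integral-like type whose reflected shift delegates to its value."""
    def __init__(self, value):
        self.value = value

    def __rlshift__(self, other):
        # other << self -- invoked because int.__lshift__ cannot handle BitShift.
        return other << self.value

print(5 << BitShift(2))  # 20
```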
tensorflow/models | 6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3 | research/object_detection/utils/json_utils.py | python | Dumps | (obj, float_digits=-1, **params) | return json_str | Wrapper of json.dumps that allows specifying the float precision used.
Args:
obj: The object to dump.
float_digits: The number of digits of precision when writing floats out.
**params: Additional parameters to pass to json.dumps.
Returns:
output: JSON string representation of obj. | Wrapper of json.dumps that allows specifying the float precision used. | [
"Wrapper",
"of",
"json",
".",
"dumps",
"that",
"allows",
"specifying",
"the",
"float",
"precision",
"used",
"."
] | def Dumps(obj, float_digits=-1, **params):
"""Wrapper of json.dumps that allows specifying the float precision used.
Args:
obj: The object to dump.
float_digits: The number of digits of precision when writing floats out.
**params: Additional parameters to pass to json.dumps.
Returns:
output: JSON string representation of obj.
"""
json_str = json.dumps(obj, **params)
if float_digits > -1:
json_str = FormatFloat(json_str, float_digits)
return json_str | [
"def",
"Dumps",
"(",
"obj",
",",
"float_digits",
"=",
"-",
"1",
",",
"*",
"*",
"params",
")",
":",
"json_str",
"=",
"json",
".",
"dumps",
"(",
"obj",
",",
"*",
"*",
"params",
")",
"if",
"float_digits",
">",
"-",
"1",
":",
"json_str",
"=",
"Format... | https://github.com/tensorflow/models/blob/6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3/research/object_detection/utils/json_utils.py#L45-L59 | |
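The `Dumps` row wraps `json.dumps` and trims float precision through a `FormatFloat` helper whose body is not included in this record. A self-contained sketch of the same idea, with the helper reimplemented as a regex substitution (an assumption about its behaviour, not the original implementation):

```python
import json
import re

def format_float(json_str, float_digits):
    # Rewrite each decimal float literal in the serialized string to the
    # requested number of digits (a guess at what FormatFloat does).
    fmt = '%.{}f'.format(float_digits)
    return re.sub(r'-?\d+\.\d+', lambda m: fmt % float(m.group()), json_str)

def dumps(obj, float_digits=-1, **params):
    # Mirror of the row's Dumps: serialize, then optionally trim float precision.
    json_str = json.dumps(obj, **params)
    if float_digits > -1:
        json_str = format_float(json_str, float_digits)
    return json_str

print(dumps({"score": 0.123456789}, float_digits=3))  # {"score": 0.123}
```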
taigaio/taiga-ncurses | 65312098f2d167762e0dbd1c16019754ab64d068 | taiga_ncurses/api/client.py | python | TaigaClient.get_issue | (self, id, params={}) | return self._get(url, params) | [] | def get_issue(self, id, params={}):
url = urljoin(self._host, self.URLS.get("issue").format(id))
return self._get(url, params) | [
"def",
"get_issue",
"(",
"self",
",",
"id",
",",
"params",
"=",
"{",
"}",
")",
":",
"url",
"=",
"urljoin",
"(",
"self",
".",
"_host",
",",
"self",
".",
"URLS",
".",
"get",
"(",
"\"issue\"",
")",
".",
"format",
"(",
"id",
")",
")",
"return",
"se... | https://github.com/taigaio/taiga-ncurses/blob/65312098f2d167762e0dbd1c16019754ab64d068/taiga_ncurses/api/client.py#L297-L299 | |||
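`get_issue` composes its endpoint with `urljoin` and a `URLS` template table kept on the client class. Just the URL-building step, with a hypothetical template (the real paths live in the taiga-ncurses client):

```python
from urllib.parse import urljoin

URLS = {"issue": "/api/v1/issues/{}"}  # illustrative template, not the real one

def issue_url(host, issue_id):
    # Absolute path templates replace the host's path component under urljoin.
    return urljoin(host, URLS["issue"].format(issue_id))

print(issue_url("https://taiga.example.com", 42))  # https://taiga.example.com/api/v1/issues/42
```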
spesmilo/electrum | bdbd59300fbd35b01605e66145458e5f396108e8 | electrum/util.py | python | JsonRPCClient.request | (self, endpoint, *args) | [] | async def request(self, endpoint, *args):
self._id += 1
data = ('{"jsonrpc": "2.0", "id":"%d", "method": "%s", "params": %s }'
% (self._id, endpoint, json.dumps(args)))
async with self.session.post(self.url, data=data) as resp:
if resp.status == 200:
r = await resp.json()
result = r.get('result')
error = r.get('error')
if error:
return 'Error: ' + str(error)
else:
return result
else:
text = await resp.text()
return 'Error: ' + str(text) | [
"async",
"def",
"request",
"(",
"self",
",",
"endpoint",
",",
"*",
"args",
")",
":",
"self",
".",
"_id",
"+=",
"1",
"data",
"=",
"(",
"'{\"jsonrpc\": \"2.0\", \"id\":\"%d\", \"method\": \"%s\", \"params\": %s }'",
"%",
"(",
"self",
".",
"_id",
",",
"endpoint",
... | https://github.com/spesmilo/electrum/blob/bdbd59300fbd35b01605e66145458e5f396108e8/electrum/util.py#L1598-L1613 | ||||
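The electrum `request` coroutine above builds its JSON-RPC 2.0 payload by string formatting before POSTing it. The equivalent payload can be built with `json.dumps`, which also escapes argument values; a synchronous sketch of just the payload step:

```python
import json

def build_jsonrpc_request(req_id, method, *args):
    # Same fields as the row's '{"jsonrpc": "2.0", ...}' template string.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(req_id),
        "method": method,
        "params": list(args),
    })

payload = build_jsonrpc_request(1, "getbalance")
print(payload)
```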
dimagi/commcare-hq | d67ff1d3b4c51fa050c19e60c3253a79d3452a39 | corehq/messaging/scheduling/views.py | python | BroadcastListView._format_time | (self, time) | return user_time.ui_string(SERVER_DATETIME_FORMAT) | [] | def _format_time(self, time):
if not time:
return ''
user_time = ServerTime(time).user_time(self.project_timezone)
return user_time.ui_string(SERVER_DATETIME_FORMAT) | [
"def",
"_format_time",
"(",
"self",
",",
"time",
")",
":",
"if",
"not",
"time",
":",
"return",
"''",
"user_time",
"=",
"ServerTime",
"(",
"time",
")",
".",
"user_time",
"(",
"self",
".",
"project_timezone",
")",
"return",
"user_time",
".",
"ui_string",
"... | https://github.com/dimagi/commcare-hq/blob/d67ff1d3b4c51fa050c19e60c3253a79d3452a39/corehq/messaging/scheduling/views.py#L302-L307 | |||
garethdmm/gryphon | 73e19fa2d0b64c3fc7dac9e0036fc92e25e5b694 | gryphon/execution/live_runner.py | python | get_strategy_class_from_module | (module) | Get the strategy class from a module according to a simple ruleset.
This ruleset is: return the first class defined in the module that is a subclass of
the Strategy base class, and which case-insensitive matches the module name
(filename) when any underscores are removed from the module name. This allows us to
preserve the python convention of using CamelCase in class names and lower_case for
module names.
e.g.
'simple_market_making.pyx' will match class 'SimpleMarketMaking'.
'geminicoinbasearb.pyx' will match 'GeminCoinbaseArb'. | Get the strategy class from a module according to a simple ruleset. | [
"Get",
"the",
"strategy",
"class",
"from",
"a",
"module",
"according",
"to",
"a",
"simple",
"ruleset",
"."
] | def get_strategy_class_from_module(module):
"""
Get the strategy class from a module according to a simple ruleset.
This ruleset is: return the first class defined in the module that is a subclass of
the Strategy base class, and which case-insensitive matches the module name
(filename) when any underscores are removed from the module name. This allows us to
preserve the python convention of using CamelCase in class names and lower_case for
module names.
e.g.
'simple_market_making.pyx' will match class 'SimpleMarketMaking'.
'geminicoinbasearb.pyx' will match 'GeminCoinbaseArb'.
"""
expected_strat_name = module.__name__.replace('_', '')
if '.' in expected_strat_name: # Probably a module path.
expected_strat_name = expected_strat_name[expected_strat_name.rfind('.') + 1:]
for x in dir(module):
if x.lower() == expected_strat_name.lower():
cls = getattr(module, x)
if inspect.isclass(cls) and cls != Strategy and issubclass(cls, Strategy):
return cls
raise NoStrategyClassFoundInModuleException() | [
"def",
"get_strategy_class_from_module",
"(",
"module",
")",
":",
"expected_strat_name",
"=",
"module",
".",
"__name__",
".",
"replace",
"(",
"'_'",
",",
"''",
")",
"if",
"'.'",
"in",
"expected_strat_name",
":",
"# Probably a module path.",
"expected_strat_name",
"=... | https://github.com/garethdmm/gryphon/blob/73e19fa2d0b64c3fc7dac9e0036fc92e25e5b694/gryphon/execution/live_runner.py#L102-L129 | ||
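The ruleset in the `get_strategy_class_from_module` docstring can be exercised without gryphon by substituting a stand-in `Strategy` base and a throwaway module object (both are assumptions for illustration):

```python
import inspect
import types

class Strategy:
    """Stand-in for gryphon's Strategy base class."""

def get_strategy_class_from_module(module):
    # First class whose name case-insensitively matches the module name with
    # underscores removed, per the ruleset in the row's docstring.
    expected = module.__name__.rsplit('.', 1)[-1].replace('_', '')
    for name in dir(module):
        if name.lower() == expected.lower():
            cls = getattr(module, name)
            if inspect.isclass(cls) and cls is not Strategy and issubclass(cls, Strategy):
                return cls
    raise LookupError("no strategy class found in module")

# Exercise the lookup against a synthetic module.
mod = types.ModuleType("simple_market_making")
class SimpleMarketMaking(Strategy):
    pass
mod.SimpleMarketMaking = SimpleMarketMaking
print(get_strategy_class_from_module(mod).__name__)  # SimpleMarketMaking
```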
OpenMDAO/OpenMDAO | f47eb5485a0bb5ea5d2ae5bd6da4b94dc6b296bd | openmdao/solvers/solver.py | python | SolverInfo.append_subsolver | (self) | Add a new level for any sub-solver for your solver. | Add a new level for any sub-solver for your solver. | [
"Add",
"a",
"new",
"level",
"for",
"any",
"sub",
"-",
"solver",
"for",
"your",
"solver",
"."
] | def append_subsolver(self):
"""
Add a new level for any sub-solver for your solver.
"""
new_str = '| '
self.prefix += new_str
self.stack.append(new_str) | [
"def",
"append_subsolver",
"(",
"self",
")",
":",
"new_str",
"=",
"'| '",
"self",
".",
"prefix",
"+=",
"new_str",
"self",
".",
"stack",
".",
"append",
"(",
"new_str",
")"
] | https://github.com/OpenMDAO/OpenMDAO/blob/f47eb5485a0bb5ea5d2ae5bd6da4b94dc6b296bd/openmdao/solvers/solver.py#L66-L72 | ||
PaddlePaddle/PaddleHub | 107ee7e1a49d15e9c94da3956475d88a53fc165f | modules/text/language_model/slda_novel/sampler.py | python | MHSampler.__construct_alias_table | (self) | Construct alias table for all words. | Construct alias table for all words. | [
"Construct",
"alias",
"table",
"for",
"all",
"words",
"."
] | def __construct_alias_table(self):
"""Construct alias table for all words.
"""
logger.info("Construct alias table for alias sampling method.")
vocab_size = self.__model.vocab_size()
self.__topic_indexes = [[] for _ in range(vocab_size)]
self.__alias_tables = [VoseAlias() for _ in range(vocab_size)]
self.__prob_sum = np.zeros(vocab_size)
# Construct each word's alias table (prior is not included).
for i in tqdm(range(vocab_size)):
dist = []
prob_sum = 0
for key in self.__model.word_topic(i):
topic_id = key
word_topic_count = self.__model.word_topic(i)[key]
topic_sum = self.__model.topic_sum_value(topic_id)
self.__topic_indexes[i].append(topic_id)
q = word_topic_count / (topic_sum + self.__model.beta_sum())
dist.append(q)
prob_sum += q
self.__prob_sum[i] = prob_sum
if len(dist) > 0:
dist = np.array(dist, dtype=np.float)
self.__alias_tables[i].initialize(dist)
# Build prior parameter beta's alias table.
beta_dist = self.__model.beta() / (self.__model.topic_sum() + self.__model.beta_sum())
self.__beta_prior_sum = np.sum(beta_dist)
self.__beta_alias.initialize(beta_dist) | [
"def",
"__construct_alias_table",
"(",
"self",
")",
":",
"logger",
".",
"info",
"(",
"\"Construct alias table for alias sampling method.\"",
")",
"vocab_size",
"=",
"self",
".",
"__model",
".",
"vocab_size",
"(",
")",
"self",
".",
"__topic_indexes",
"=",
"[",
"[",... | https://github.com/PaddlePaddle/PaddleHub/blob/107ee7e1a49d15e9c94da3956475d88a53fc165f/modules/text/language_model/slda_novel/sampler.py#L34-L64 | ||
inducer/pycuda | 9f3b898ec0846e2a4dff5077d4403ea03b1fccf9 | pycuda/gpuarray.py | python | if_positive | (criterion, then_, else_, out=None, stream=None) | return out | [] | def if_positive(criterion, then_, else_, out=None, stream=None):
if not (criterion.shape == then_.shape == else_.shape):
raise ValueError("shapes do not match")
if not (then_.dtype == else_.dtype):
raise ValueError("dtypes do not match")
func = elementwise.get_if_positive_kernel(criterion.dtype, then_.dtype)
if out is None:
out = empty_like(then_)
func.prepared_async_call(
criterion._grid,
criterion._block,
stream,
criterion.gpudata,
then_.gpudata,
else_.gpudata,
out.gpudata,
criterion.size,
)
return out | [
"def",
"if_positive",
"(",
"criterion",
",",
"then_",
",",
"else_",
",",
"out",
"=",
"None",
",",
"stream",
"=",
"None",
")",
":",
"if",
"not",
"(",
"criterion",
".",
"shape",
"==",
"then_",
".",
"shape",
"==",
"else_",
".",
"shape",
")",
":",
"rai... | https://github.com/inducer/pycuda/blob/9f3b898ec0846e2a4dff5077d4403ea03b1fccf9/pycuda/gpuarray.py#L1890-L1913 | |||
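`if_positive` launches an elementwise-select kernel on the GPU. Its semantics on plain Python lists look like this (a CPU stand-in for clarity, not the pycuda code path):

```python
def if_positive(criterion, then_, else_):
    # out[i] = then_[i] where criterion[i] > 0, else else_[i].
    if not (len(criterion) == len(then_) == len(else_)):
        raise ValueError("shapes do not match")
    return [t if c > 0 else e for c, t, e in zip(criterion, then_, else_)]

print(if_positive([1, -2, 3], [10, 20, 30], [0, 0, 0]))  # [10, 0, 30]
```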
abhinavsingh/proxy.py | c6fceb639a5925994f04b15b333413693eb434eb | proxy/core/base/tcp_server.py | python | BaseTcpServerHandler._optionally_wrap_socket | (
self, conn: socket.socket,
) | return conn | Attempts to wrap accepted client connection using provided certificates.
Shutdown and closes client connection upon error. | Attempts to wrap accepted client connection using provided certificates. | [
"Attempts",
"to",
"wrap",
"accepted",
"client",
"connection",
"using",
"provided",
"certificates",
"."
] | def _optionally_wrap_socket(
self, conn: socket.socket,
) -> Union[ssl.SSLSocket, socket.socket]:
"""Attempts to wrap accepted client connection using provided certificates.
Shutdown and closes client connection upon error.
"""
if self._encryption_enabled():
assert self.flags.keyfile and self.flags.certfile
# TODO(abhinavsingh): Insecure TLS versions must not be accepted by default
conn = wrap_socket(conn, self.flags.keyfile, self.flags.certfile)
self.work._conn = conn
return conn | [
"def",
"_optionally_wrap_socket",
"(",
"self",
",",
"conn",
":",
"socket",
".",
"socket",
",",
")",
"->",
"Union",
"[",
"ssl",
".",
"SSLSocket",
",",
"socket",
".",
"socket",
"]",
":",
"if",
"self",
".",
"_encryption_enabled",
"(",
")",
":",
"assert",
... | https://github.com/abhinavsingh/proxy.py/blob/c6fceb639a5925994f04b15b333413693eb434eb/proxy/core/base/tcp_server.py#L210-L222 | |
shmilylty/OneForAll | 48591142a641e80f8a64ab215d11d06b696702d7 | common/utils.py | python | gen_random_ip | () | Generate random decimal IP string | Generate random decimal IP string | [
"Generate",
"random",
"decimal",
"IP",
"string"
] | def gen_random_ip():
"""
Generate random decimal IP string
"""
while True:
ip = IPv4Address(random.randint(0, 2 ** 32 - 1))
if ip.is_global:
return ip.exploded | [
"def",
"gen_random_ip",
"(",
")",
":",
"while",
"True",
":",
"ip",
"=",
"IPv4Address",
"(",
"random",
".",
"randint",
"(",
"0",
",",
"2",
"**",
"32",
"-",
"1",
")",
")",
"if",
"ip",
".",
"is_global",
":",
"return",
"ip",
".",
"exploded"
] | https://github.com/shmilylty/OneForAll/blob/48591142a641e80f8a64ab215d11d06b696702d7/common/utils.py#L43-L50 | ||
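`gen_random_ip` above is pure stdlib and can be run directly; it rejection-samples 32-bit values until one decodes to a globally-routable address:

```python
import random
from ipaddress import IPv4Address

def gen_random_ip():
    # Resample until the 32-bit value decodes to a globally-routable address.
    while True:
        ip = IPv4Address(random.randint(0, 2 ** 32 - 1))
        if ip.is_global:
            return ip.exploded

addr = gen_random_ip()
print(addr)
```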
sixty-north/cosmic-ray | cf7fb7c3cc564db1e8d53b8e8848c9f46e61a879 | src/cosmic_ray/cli.py | python | baseline | (config_file, session_file) | Runs a baseline execution that executes the test suite over unmutated code.
If ``--session-file`` is provided, the session used for baselining is stored in that file. Otherwise,
the session is stored in a temporary file which is deleted after the baselining.
Exits with 0 if the job has exited normally, otherwise 1. | Runs a baseline execution that executes the test suite over unmutated code. | [
"Runs",
"a",
"baseline",
"execution",
"that",
"executes",
"the",
"test",
"suite",
"over",
"unmutated",
"code",
"."
] | def baseline(config_file, session_file):
"""Runs a baseline execution that executes the test suite over unmutated code.
If ``--session-file`` is provided, the session used for baselining is stored in that file. Otherwise,
the session is stored in a temporary file which is deleted after the baselining.
Exits with 0 if the job has exited normally, otherwise 1.
"""
cfg = load_config(config_file)
@contextmanager
def path_or_temp(path):
if path is None:
with tempfile.TemporaryDirectory() as tmpdir:
yield Path(tmpdir) / "session.sqlite"
else:
yield path
with path_or_temp(session_file) as session_path:
with use_db(session_path, mode=WorkDB.Mode.create) as db:
db.clear()
db.add_work_item(
WorkItem(
mutations=[],
job_id="baseline",
)
)
# Run the single-entry session.
cosmic_ray.commands.execute(db, cfg)
result = next(db.results)[1]
if result.test_outcome == TestOutcome.KILLED:
message = ["Baseline failed. Execution with no mutation gives those following errors:"]
for line in result.output.split("\n"):
message.append(" >>> {}".format(line))
log.error("\n".join(message))
sys.exit(1)
else:
log.info("Baseline passed. Execution with no mutation works fine.")
sys.exit(ExitCode.OK) | [
"def",
"baseline",
"(",
"config_file",
",",
"session_file",
")",
":",
"cfg",
"=",
"load_config",
"(",
"config_file",
")",
"@",
"contextmanager",
"def",
"path_or_temp",
"(",
"path",
")",
":",
"if",
"path",
"is",
"None",
":",
"with",
"tempfile",
".",
"Tempor... | https://github.com/sixty-north/cosmic-ray/blob/cf7fb7c3cc564db1e8d53b8e8848c9f46e61a879/src/cosmic_ray/cli.py#L127-L167 | ||
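`baseline` defines a small `path_or_temp` context manager so the session file is temporary unless the caller asked to keep it. That pattern in isolation:

```python
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def path_or_temp(path):
    # Caller-supplied path wins; otherwise hand out a session file inside a
    # temporary directory that is removed when the block exits.
    if path is None:
        with tempfile.TemporaryDirectory() as tmpdir:
            yield Path(tmpdir) / "session.sqlite"
    else:
        yield path

with path_or_temp(None) as session_path:
    print(session_path.name)  # session.sqlite
```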
latchset/jwcrypto | 48f6234713bc0da46a947c543c02a86adda080aa | jwcrypto/jwk.py | python | JWK.export_public | (self, as_dict=False) | return json_encode(pub) | Exports the public key in the standard JSON format.
It fails if one is not available like when this function
is called on a symmetric key.
:param as_dict(bool): If set to True export as python dict not JSON | Exports the public key in the standard JSON format.
It fails if one is not available like when this function
is called on a symmetric key. | [
"Exports",
"the",
"public",
"key",
"in",
"the",
"standard",
"JSON",
"format",
".",
"It",
"fails",
"if",
"one",
"is",
"not",
"available",
"like",
"when",
"this",
"function",
"is",
"called",
"on",
"a",
"symmetric",
"key",
"."
] | def export_public(self, as_dict=False):
"""Exports the public key in the standard JSON format.
It fails if one is not available like when this function
is called on a symmetric key.
:param as_dict(bool): If set to True export as python dict not JSON
"""
pub = self._public_params()
if as_dict is True:
return pub
return json_encode(pub) | [
"def",
"export_public",
"(",
"self",
",",
"as_dict",
"=",
"False",
")",
":",
"pub",
"=",
"self",
".",
"_public_params",
"(",
")",
"if",
"as_dict",
"is",
"True",
":",
"return",
"pub",
"return",
"json_encode",
"(",
"pub",
")"
] | https://github.com/latchset/jwcrypto/blob/48f6234713bc0da46a947c543c02a86adda080aa/jwcrypto/jwk.py#L629-L639 | |
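`export_public` drops the private members of the key before serializing. A stdlib sketch over a plain JWK dict; the private-member names are the RFC 7518 RSA parameters, an assumption since the row does not show `_public_params`, and the key values are hypothetical placeholders:

```python
import json

RSA_PRIVATE_MEMBERS = {'d', 'p', 'q', 'dp', 'dq', 'qi'}  # per RFC 7518

def export_public(jwk, as_dict=False):
    # Keep only the public members, mirroring the row's as_dict switch.
    pub = {k: v for k, v in jwk.items() if k not in RSA_PRIVATE_MEMBERS}
    return pub if as_dict else json.dumps(pub)

key = {'kty': 'RSA', 'n': 'placeholder-modulus', 'e': 'AQAB', 'd': 'placeholder-private'}
print(export_public(key, as_dict=True))
```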
WenmuZhou/PytorchOCR | 0b2b3a67814ae40b20f3814d6793f5d75d644e38 | tools/det_train.py | python | get_fine_tune_params | (net, finetune_stage) | return to_return_parameters | 获取需要优化的参数
Args:
net:
Returns: 需要优化的参数 | 获取需要优化的参数
Args:
net:
Returns: 需要优化的参数 | [
"获取需要优化的参数",
"Args",
":",
"net",
":",
"Returns",
":",
"需要优化的参数"
] | def get_fine_tune_params(net, finetune_stage):
"""
获取需要优化的参数
Args:
net:
Returns: 需要优化的参数
"""
to_return_parameters = []
for stage in finetune_stage:
attr = getattr(net.module, stage, None)
for element in attr.parameters():
to_return_parameters.append(element)
return to_return_parameters | [
"def",
"get_fine_tune_params",
"(",
"net",
",",
"finetune_stage",
")",
":",
"to_return_parameters",
"=",
"[",
"]",
"for",
"stage",
"in",
"finetune_stage",
":",
"attr",
"=",
"getattr",
"(",
"net",
".",
"module",
",",
"stage",
",",
"None",
")",
"for",
"eleme... | https://github.com/WenmuZhou/PytorchOCR/blob/0b2b3a67814ae40b20f3814d6793f5d75d644e38/tools/det_train.py#L112-L124 | |
florath/rmtoo | 6ffe08703451358dca24b232ee4380b1da23bcad | rmtoo/lib/vcs/Git.py | python | Git.__get_blob | (self, commit, base_dir, sub_path) | return self.__get_blob_direct(ltree, sub_path_split[-1]) | Returns the blob from the give base directory and path.
If the file (blob) is not available, a None is returned.
If the directory is not available / accessable an exception
is thrown. | Returns the blob from the give base directory and path.
If the file (blob) is not available, a None is returned.
If the directory is not available / accessable an exception
is thrown. | [
"Returns",
"the",
"blob",
"from",
"the",
"give",
"base",
"directory",
"and",
"path",
".",
"If",
"the",
"file",
"(",
"blob",
")",
"is",
"not",
"available",
"a",
"None",
"is",
"returned",
".",
"If",
"the",
"directory",
"is",
"not",
"available",
"/",
"acc... | def __get_blob(self, commit, base_dir, sub_path):
'''Returns the blob from the give base directory and path.
If the file (blob) is not available, a None is returned.
If the directory is not available / accessable an exception
is thrown.'''
assert sub_path
full_path = base_dir.split("/")
sub_path_split = sub_path.split("/")
if len(sub_path_split) > 1:
full_path.extend(sub_path_split[:-1])
ltree = self.__get_tree(commit.tree, full_path)
return self.__get_blob_direct(ltree, sub_path_split[-1]) | [
"def",
"__get_blob",
"(",
"self",
",",
"commit",
",",
"base_dir",
",",
"sub_path",
")",
":",
"assert",
"sub_path",
"full_path",
"=",
"base_dir",
".",
"split",
"(",
"\"/\"",
")",
"sub_path_split",
"=",
"sub_path",
".",
"split",
"(",
"\"/\"",
")",
"if",
"l... | https://github.com/florath/rmtoo/blob/6ffe08703451358dca24b232ee4380b1da23bcad/rmtoo/lib/vcs/Git.py#L228-L239 | |
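`__get_blob` splits the path, walks the tree one directory at a time, and returns `None` for a missing file while letting a missing directory raise. The same shape with nested dicts standing in for git tree objects (a simplification of the real GitPython types):

```python
def get_tree(tree, parts):
    # Each missing directory component raises KeyError, mirroring the row's
    # contract that inaccessible directories are an error.
    for part in parts:
        tree = tree[part]
    return tree

def get_blob(root, base_dir, sub_path):
    parts = base_dir.split('/') + sub_path.split('/')
    tree = get_tree(root, parts[:-1])
    return tree.get(parts[-1])  # None when the blob is absent

root = {'docs': {'reqs': {'spec.txt': b'requirement text'}}}
print(get_blob(root, 'docs', 'reqs/spec.txt'))    # b'requirement text'
print(get_blob(root, 'docs', 'reqs/missing.txt'))  # None
```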
qfpl/hpython | 4c608ebbcfaee56ad386666d27e67cd4c77b0b4f | benchmarks/pypy.py | python | Decimal._round_half_down | (self, prec) | Round 5 down | Round 5 down | [
"Round",
"5",
"down"
] | def _round_half_down(self, prec):
"""Round 5 down"""
if _exact_half(self._int, prec):
return -1
else:
return self._round_half_up(prec) | [
"def",
"_round_half_down",
"(",
"self",
",",
"prec",
")",
":",
"if",
"_exact_half",
"(",
"self",
".",
"_int",
",",
"prec",
")",
":",
"return",
"-",
"1",
"else",
":",
"return",
"self",
".",
"_round_half_up",
"(",
"prec",
")"
] | https://github.com/qfpl/hpython/blob/4c608ebbcfaee56ad386666d27e67cd4c77b0b4f/benchmarks/pypy.py#L1770-L1775 | ||
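`_round_half_down` rounds an exact half toward zero inside this bundled `Decimal` implementation; the public `decimal` API exposes the same behaviour as the `ROUND_HALF_DOWN` mode:

```python
from decimal import Decimal, ROUND_HALF_DOWN

# Exact halves go toward zero; everything else rounds to nearest as usual.
print(Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_DOWN))   # 2
print(Decimal('2.6').quantize(Decimal('1'), rounding=ROUND_HALF_DOWN))   # 3
print(Decimal('-2.5').quantize(Decimal('1'), rounding=ROUND_HALF_DOWN))  # -2
```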
psychopy/psychopy | 01b674094f38d0e0bd51c45a6f66f671d7041696 | psychopy/experiment/routines/_base.py | python | Routine.writeMainCode | (self, buff) | This defines the code for the frames of a single routine | This defines the code for the frames of a single routine | [
"This",
"defines",
"the",
"code",
"for",
"the",
"frames",
"of",
"a",
"single",
"routine"
] | def writeMainCode(self, buff):
"""This defines the code for the frames of a single routine
"""
# create the frame loop for this routine
code = ('\n# ------Prepare to start Routine "%s"-------\n')
buff.writeIndentedLines(code % (self.name))
code = 'continueRoutine = True\n'
buff.writeIndentedLines(code)
# can we use non-slip timing?
maxTime, useNonSlip = self.getMaxTime()
if useNonSlip:
buff.writeIndented('routineTimer.add(%f)\n' % (maxTime))
code = "# update component parameters for each repeat\n"
buff.writeIndentedLines(code)
# This is the beginning of the routine, before the loop starts
for event in self:
event.writeRoutineStartCode(buff)
code = '# keep track of which components have finished\n'
buff.writeIndentedLines(code)
# Get list of components, but leave out Variable components, which may not support attributes
compStr = ', '.join([c.params['name'].val for c in self
if 'startType' in c.params and c.type != 'Variable'])
buff.writeIndented('%sComponents = [%s]\n' % (self.name, compStr))
code = ("for thisComponent in {name}Components:\n"
" thisComponent.tStart = None\n"
" thisComponent.tStop = None\n"
" thisComponent.tStartRefresh = None\n"
" thisComponent.tStopRefresh = None\n"
" if hasattr(thisComponent, 'status'):\n"
" thisComponent.status = NOT_STARTED\n"
"# reset timers\n"
't = 0\n'
'_timeToFirstFrame = win.getFutureFlipTime(clock="now")\n'
'{clockName}.reset(-_timeToFirstFrame) # t0 is time of first possible flip\n'
'frameN = -1\n'
'\n# -------Run Routine "{name}"-------\n')
buff.writeIndentedLines(code.format(name=self.name,
clockName=self._clockName))
if useNonSlip:
code = 'while continueRoutine and routineTimer.getTime() > 0:\n'
else:
code = 'while continueRoutine:\n'
buff.writeIndented(code)
buff.setIndentLevel(1, True)
# on each frame
code = ('# get current time\n'
't = {clockName}.getTime()\n'
'tThisFlip = win.getFutureFlipTime(clock={clockName})\n'
'tThisFlipGlobal = win.getFutureFlipTime(clock=None)\n'
'frameN = frameN + 1 # number of completed frames '
'(so 0 is the first frame)\n')
buff.writeIndentedLines(code.format(clockName=self._clockName))
# write the code for each component during frame
buff.writeIndentedLines('# update/draw components on each frame\n')
# just 'normal' components
for event in self:
if event.type == 'Static':
continue # we'll do those later
event.writeFrameCode(buff)
# update static component code last
for event in self.getStatics():
event.writeFrameCode(buff)
# allow subject to quit via Esc key?
if self.exp.settings.params['Enable Escape'].val:
code = ('\n# check for quit (typically the Esc key)\n'
'if endExpNow or defaultKeyboard.getKeys(keyList=["escape"]):\n'
' core.quit()\n')
buff.writeIndentedLines(code)
# are we done yet?
code = (
'\n# check if all components have finished\n'
'if not continueRoutine: # a component has requested a '
'forced-end of Routine\n'
' break\n'
'continueRoutine = False # will revert to True if at least '
'one component still running\n'
'for thisComponent in %sComponents:\n'
' if hasattr(thisComponent, "status") and '
'thisComponent.status != FINISHED:\n'
' continueRoutine = True\n'
' break # at least one component has not yet finished\n')
buff.writeIndentedLines(code % self.name)
# update screen
code = ('\n# refresh the screen\n'
"if continueRoutine: # don't flip if this routine is over "
"or we'll get a blank screen\n"
' win.flip()\n')
buff.writeIndentedLines(code)
# that's done decrement indent to end loop
buff.setIndentLevel(-1, True)
# write the code for each component for the end of the routine
code = ('\n# -------Ending Routine "%s"-------\n'
'for thisComponent in %sComponents:\n'
' if hasattr(thisComponent, "setAutoDraw"):\n'
' thisComponent.setAutoDraw(False)\n')
buff.writeIndentedLines(code % (self.name, self.name))
for event in self:
event.writeRoutineEndCode(buff) | [
"def",
"writeMainCode",
"(",
"self",
",",
"buff",
")",
":",
"# create the frame loop for this routine",
"code",
"=",
"(",
"'\\n# ------Prepare to start Routine \"%s\"-------\\n'",
")",
"buff",
".",
"writeIndentedLines",
"(",
"code",
"%",
"(",
"self",
".",
"name",
")",... | https://github.com/psychopy/psychopy/blob/01b674094f38d0e0bd51c45a6f66f671d7041696/psychopy/experiment/routines/_base.py#L340-L448 | ||
PokemonGoF/PokemonGo-Bot-Desktop | 4bfa94f0183406c6a86f93645eff7abd3ad4ced8 | build/pywin/Lib/collections.py | python | OrderedDict.__ne__ | (self, other) | return not self == other | od.__ne__(y) <==> od!=y | od.__ne__(y) <==> od!=y | [
"od",
".",
"__ne__",
"(",
"y",
")",
"<",
"==",
">",
"od!",
"=",
"y"
] | def __ne__(self, other):
'od.__ne__(y) <==> od!=y'
return not self == other | [
"def",
"__ne__",
"(",
"self",
",",
"other",
")",
":",
"return",
"not",
"self",
"==",
"other"
] | https://github.com/PokemonGoF/PokemonGo-Bot-Desktop/blob/4bfa94f0183406c6a86f93645eff7abd3ad4ced8/build/pywin/Lib/collections.py#L228-L230 | |
NiaOrg/NiaPy | 08f24ffc79fe324bc9c66ee7186ef98633026005 | niapy/problems/zakharov.py | python | Zakharov.__init__ | (self, dimension=4, lower=-5.0, upper=10.0, *args, **kwargs) | r"""Initialize Zakharov problem..
Args:
dimension (Optional[int]): Dimension of the problem.
lower (Optional[Union[float, Iterable[float]]]): Lower bounds of the problem.
upper (Optional[Union[float, Iterable[float]]]): Upper bounds of the problem.
See Also:
:func:`niapy.problems.Problem.__init__` | r"""Initialize Zakharov problem.. | [
"r",
"Initialize",
"Zakharov",
"problem",
".."
] | def __init__(self, dimension=4, lower=-5.0, upper=10.0, *args, **kwargs):
r"""Initialize Zakharov problem..
Args:
dimension (Optional[int]): Dimension of the problem.
lower (Optional[Union[float, Iterable[float]]]): Lower bounds of the problem.
upper (Optional[Union[float, Iterable[float]]]): Upper bounds of the problem.
See Also:
:func:`niapy.problems.Problem.__init__`
"""
super().__init__(dimension, lower, upper, *args, **kwargs) | [
"def",
"__init__",
"(",
"self",
",",
"dimension",
"=",
"4",
",",
"lower",
"=",
"-",
"5.0",
",",
"upper",
"=",
"10.0",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"super",
"(",
")",
".",
"__init__",
"(",
"dimension",
",",
"lower",
",",
... | https://github.com/NiaOrg/NiaPy/blob/08f24ffc79fe324bc9c66ee7186ef98633026005/niapy/problems/zakharov.py#L46-L58 | ||
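The row records only the problem's constructor and its [-5, 10] bounds; the Zakharov objective itself is not shown. The standard definition (assumed from the class name, not taken from the row) is easy to state and check at the known optimum:

```python
def zakharov(x):
    # Standard Zakharov benchmark; f(0, ..., 0) = 0 is the global minimum.
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * (i + 1) * xi for i, xi in enumerate(x))
    return s1 + s2 ** 2 + s2 ** 4

print(zakharov([0.0] * 4))  # 0.0
```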
KoalixSwitzerland/koalixcrm | 87d125379845d6ab990c19500d63cbed4051040a | koalixcrm/crm/product/currency.py | python | Currency.get_rounding | (self) | Returns either the stored rounding value for a currency or a default rounding value of 0.05
Args: no arguments
Returns: Decimal value
Raises: should not return exceptions | Returns either the stored rounding value for a currency or a default rounding value of 0.05 | [
"Returns",
"either",
"the",
"stored",
"rounding",
"value",
"for",
"a",
"currency",
"or",
"a",
"default",
"rounding",
"value",
"of",
"0",
".",
"05"
] | def get_rounding(self):
"""Returns either the stored rounding value for a currency or a default rounding value of 0.05
Args: no arguments
Returns: Decimal value
Raises: should not return exceptions"""
if self.rounding is None:
return Decimal('0.05')
else:
return self.rounding | [
"def",
"get_rounding",
"(",
"self",
")",
":",
"if",
"self",
".",
"rounding",
"is",
"None",
":",
"return",
"Decimal",
"(",
"'0.05'",
")",
"else",
":",
"return",
"self",
".",
"rounding"
] | https://github.com/KoalixSwitzerland/koalixcrm/blob/87d125379845d6ab990c19500d63cbed4051040a/koalixcrm/crm/product/currency.py#L20-L31 | ||
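`get_rounding` only returns the step (stored value or a 0.05 default). One plausible way such a step gets applied, snapping an amount to the nearest multiple, is sketched below; the `round_amount` helper is an assumption, since the row shows only the getter:

```python
from decimal import Decimal, ROUND_HALF_UP

def get_rounding(stored=None):
    # Stored per-currency value wins; otherwise fall back to the 0.05 default.
    return Decimal('0.05') if stored is None else stored

def round_amount(amount, stored=None):
    # Hypothetical use of the step: snap to the nearest multiple of it.
    step = get_rounding(stored)
    return (amount / step).quantize(Decimal('1'), rounding=ROUND_HALF_UP) * step

print(round_amount(Decimal('7.23')))  # 7.25
```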
yidao620c/core-algorithm | 73f249e00d0e57eb2cea5a4c527ad7ae89ae7c08 | algorithms/ch02sort/m03_merge_sort.py | python | MergeSort.merge_ordered_seq | (self, left, middle, right) | seq: 待排序序列
left <= middle <= right
子数组seq[left..middle]和seq[middle+1..right]都是排好序的
该排序的时间复杂度为O(n) | seq: 待排序序列
left <= middle <= right
子数组seq[left..middle]和seq[middle+1..right]都是排好序的
该排序的时间复杂度为O(n) | [
"seq",
":",
"待排序序列",
"left",
"<",
"=",
"middle",
"<",
"=",
"right",
"子数组seq",
"[",
"left",
"..",
"middle",
"]",
"和seq",
"[",
"middle",
"+",
"1",
"..",
"right",
"]",
"都是排好序的",
"该排序的时间复杂度为O",
"(",
"n",
")"
] | def merge_ordered_seq(self, left, middle, right):
"""
seq: 待排序序列
left <= middle <= right
子数组seq[left..middle]和seq[middle+1..right]都是排好序的
该排序的时间复杂度为O(n)
"""
temp_seq = []
i = left
j = middle + 1
while i <= middle and j <= right:
if self.seq[i] <= self.seq[j]:
temp_seq.append(self.seq[i])
i += 1
else:
temp_seq.append(self.seq[j])
j += 1
if i <= middle:
temp_seq.extend(self.seq[i:middle + 1])
else:
temp_seq.extend(self.seq[j:right + 1])
self.seq[left:right + 1] = temp_seq[:] | [
"def",
"merge_ordered_seq",
"(",
"self",
",",
"left",
",",
"middle",
",",
"right",
")",
":",
"temp_seq",
"=",
"[",
"]",
"i",
"=",
"left",
"j",
"=",
"middle",
"+",
"1",
"while",
"i",
"<=",
"middle",
"and",
"j",
"<=",
"right",
":",
"if",
"self",
".... | https://github.com/yidao620c/core-algorithm/blob/73f249e00d0e57eb2cea5a4c527ad7ae89ae7c08/algorithms/ch02sort/m03_merge_sort.py#L31-L52 | ||
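The merge step above combines two adjacent sorted runs in O(n); an English-commented mirror of the row's Chinese-commented method, written as a free function:

```python
def merge_ordered(seq, left, middle, right):
    # Merge sorted runs seq[left..middle] and seq[middle+1..right] in place,
    # O(n) time with an O(n) temporary buffer.
    temp = []
    i, j = left, middle + 1
    while i <= middle and j <= right:
        if seq[i] <= seq[j]:
            temp.append(seq[i])
            i += 1
        else:
            temp.append(seq[j])
            j += 1
    # Exactly one run has leftovers; copy whichever remains.
    temp.extend(seq[i:middle + 1] if i <= middle else seq[j:right + 1])
    seq[left:right + 1] = temp
    return seq

print(merge_ordered([1, 4, 7, 2, 5, 6], 0, 2, 5))  # [1, 2, 4, 5, 6, 7]
```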
saltstack/salt | fae5bc757ad0f1716483ce7ae180b451545c2058 | salt/sdb/vault.py | python | get | (key, profile=None) | Get a value from the vault service | Get a value from the vault service | [
"Get",
"a",
"value",
"from",
"the",
"vault",
"service"
] | def get(key, profile=None):
"""
Get a value from the vault service
"""
if "?" in key:
path, key = key.split("?")
else:
path, key = key.rsplit("/", 1)
version2 = __utils__["vault.is_v2"](path)
if version2["v2"]:
path = version2["data"]
try:
url = "v1/{}".format(path)
response = __utils__["vault.make_request"]("GET", url)
if response.status_code == 404:
return None
if response.status_code != 200:
response.raise_for_status()
data = response.json()["data"]
if version2["v2"]:
if key in data["data"]:
return data["data"][key]
else:
if key in data:
return data[key]
return None
except Exception as e: # pylint: disable=broad-except
log.error("Failed to read secret! %s: %s", type(e).__name__, e)
raise salt.exceptions.CommandExecutionError(e) | [
"def",
"get",
"(",
"key",
",",
"profile",
"=",
"None",
")",
":",
"if",
"\"?\"",
"in",
"key",
":",
"path",
",",
"key",
"=",
"key",
".",
"split",
"(",
"\"?\"",
")",
"else",
":",
"path",
",",
"key",
"=",
"key",
".",
"rsplit",
"(",
"\"/\"",
",",
... | https://github.com/saltstack/salt/blob/fae5bc757ad0f1716483ce7ae180b451545c2058/salt/sdb/vault.py#L79-L110 | ||
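The key-splitting convention used by `get` above (a `?` separator takes precedence; otherwise the last `/` divides path from field) can be sketched on its own — the helper name here is hypothetical:

```python
def split_secret_key(key):
    """Split an sdb key into (vault path, field name)."""
    if "?" in key:
        path, field = key.split("?", 1)
    else:
        path, field = key.rsplit("/", 1)
    return path, field

print(split_secret_key("secret/app?token"))  # ('secret/app', 'token')
print(split_secret_key("secret/app/token"))  # ('secret/app', 'token')
```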
interpretml/interpret-community | 84d86b7514fd9812f1497329bf1c4c9fc864370e | python/interpret_community/common/explanation_utils.py | python | _sparse_order_imp | (local_importance_values, values_type=_RANKING, features=None, top_k=None) | Compute the ranking for sparse feature importance values.
:param local_importance_values: The local importance values to compute the ranking for.
:type local_importance_values: scipy.sparse.csr_matrix or list[scipy.sparse.csr_matrix]
:param values_type: The type of values, can be 'ranking', which returns the sorted
indices, 'values', which returns the sorted values, or 'features', which returns
the feature names.
:type values_type: str
:param features: The feature names.
:type features: list[str]
:param top_k: If specified, only the top k values will be returned.
:type top_k: int
:return: The rank of the non-zero sparse feature importance values.
:rtype: list | Compute the ranking for sparse feature importance values. | [
"Compute",
"the",
"ranking",
"for",
"sparse",
"feature",
"importance",
"values",
"."
] | def _sparse_order_imp(local_importance_values, values_type=_RANKING, features=None, top_k=None):
"""Compute the ranking for sparse feature importance values.
:param local_importance_values: The local importance values to compute the ranking for.
:type local_importance_values: scipy.sparse.csr_matrix or list[scipy.sparse.csr_matrix]
:param values_type: The type of values, can be 'ranking', which returns the sorted
indices, 'values', which returns the sorted values, or 'features', which returns
the feature names.
:type values_type: str
:param features: The feature names.
:type features: list[str]
:param top_k: If specified, only the top k values will be returned.
:type top_k: int
:return: The rank of the non-zero sparse feature importance values.
:rtype: list
"""
if isinstance(local_importance_values, list):
per_class_sparse_ranking = []
for class_importance_values in local_importance_values:
per_class_sparse_ranking.append(_sparse_order_imp_csr_matrix(class_importance_values,
values_type=values_type,
features=features,
top_k=top_k))
return per_class_sparse_ranking
else:
return _sparse_order_imp_csr_matrix(local_importance_values,
values_type=values_type,
features=features,
top_k=top_k) | [
"def",
"_sparse_order_imp",
"(",
"local_importance_values",
",",
"values_type",
"=",
"_RANKING",
",",
"features",
"=",
"None",
",",
"top_k",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"local_importance_values",
",",
"list",
")",
":",
"per_class_sparse_ranking... | https://github.com/interpretml/interpret-community/blob/84d86b7514fd9812f1497329bf1c4c9fc864370e/python/interpret_community/common/explanation_utils.py#L452-L480 | ||
PaddlePaddle/PaddleHub | 107ee7e1a49d15e9c94da3956475d88a53fc165f | modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/processor.py | python | postprocess | (data_out, org_im, org_im_shape, org_im_path, output_dir, visualization, thresh=120) | return result | Postprocess output of network. one image at a time.
Args:
data_out (numpy.ndarray): output of network.
org_im (numpy.ndarray): original image.
org_im_shape (list): shape pf original image.
org_im_path (list): path of riginal image.
output_dir (str): output directory to store image.
visualization (bool): whether to save image or not.
thresh (float): threshold.
Returns:
result (dict): The data of processed image. | Postprocess output of network. one image at a time. | [
"Postprocess",
"output",
"of",
"network",
".",
"one",
"image",
"at",
"a",
"time",
"."
] | def postprocess(data_out, org_im, org_im_shape, org_im_path, output_dir, visualization, thresh=120):
"""
Postprocess output of network. one image at a time.
Args:
data_out (numpy.ndarray): output of network.
org_im (numpy.ndarray): original image.
org_im_shape (list): shape of original image.
org_im_path (list): path of original image.
output_dir (str): output directory to store image.
visualization (bool): whether to save image or not.
thresh (float): threshold.
Returns:
result (dict): The data of processed image.
"""
result = dict()
for logit in data_out:
logit = logit[1] * 255
logit = cv2.resize(logit, (org_im_shape[1], org_im_shape[0]))
logit -= thresh
logit[logit < 0] = 0
logit = 255 * logit / (255 - thresh)
rgba = np.concatenate((org_im, np.expand_dims(logit, axis=2)), axis=2)
if visualization:
check_dir(output_dir)
save_im_path = get_save_image_name(org_im, org_im_path, output_dir)
cv2.imwrite(save_im_path, rgba)
result['save_path'] = save_im_path
result['data'] = rgba[:, :, 3]
else:
result['data'] = rgba[:, :, 3]
return result | [
"def",
"postprocess",
"(",
"data_out",
",",
"org_im",
",",
"org_im_shape",
",",
"org_im_path",
",",
"output_dir",
",",
"visualization",
",",
"thresh",
"=",
"120",
")",
":",
"result",
"=",
"dict",
"(",
")",
"for",
"logit",
"in",
"data_out",
":",
"logit",
... | https://github.com/PaddlePaddle/PaddleHub/blob/107ee7e1a49d15e9c94da3956475d88a53fc165f/modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/processor.py#L29-L62 | |
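The alpha-channel thresholding in `postprocess` (subtract `thresh`, clamp at zero, stretch the remainder back to 0-255) can be checked with plain lists; the helper name is an assumption:

```python
def rescale_alpha(values, thresh=120):
    """Map 0-255 confidences so anything <= thresh becomes fully
    transparent and the surviving range is stretched back to 0-255."""
    out = []
    for v in values:
        v = max(v - thresh, 0)  # logit -= thresh; logit[logit < 0] = 0
        out.append(255.0 * v / (255 - thresh))
    return out

print(rescale_alpha([0, 120, 255]))  # [0.0, 0.0, 255.0]
```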
facebook/FAI-PEP | 632918e8b4025044b67eb24aff57027e84836995 | benchmarking/run_remote.py | python | RunRemote._downloadRepoFile | (self, location, tgt_dir, commit_hash) | return tgt_file | location: //repo/fbsource/fbcode/aibench/...../a.py | location: //repo/fbsource/fbcode/aibench/...../a.py | [
"location",
":",
"//",
"repo",
"/",
"fbsource",
"/",
"fbcode",
"/",
"aibench",
"/",
".....",
"/",
"a",
".",
"py"
] | def _downloadRepoFile(self, location, tgt_dir, commit_hash):
"""
location: //repo/fbsource/fbcode/aibench/...../a.py
"""
raw_scm_query = pkg_resources.resource_string(
"aibench", "benchmarking/bin/scm_query.par"
)
query_exe = os.path.join(tgt_dir, "scm_query.par")
with open(query_exe, "wb") as f:
f.write(raw_scm_query)
cmd = ["chmod", "+x", os.path.join(tgt_dir, "scm_query.par")]
subprocess.check_output(cmd)
dirs = location[2:].split("/")
tgt_file = os.path.join(tgt_dir, dirs[-1])
cmd = [
query_exe,
"--repo",
dirs[1],
"--file_path",
"/".join(dirs[2:]),
"--target_file",
tgt_file,
"--commit_hash",
commit_hash,
]
getLogger().info("Downloading {}".format(location))
subprocess.check_output(cmd)
os.remove(query_exe)
return tgt_file | [
"def",
"_downloadRepoFile",
"(",
"self",
",",
"location",
",",
"tgt_dir",
",",
"commit_hash",
")",
":",
"raw_scm_query",
"=",
"pkg_resources",
".",
"resource_string",
"(",
"\"aibench\"",
",",
"\"benchmarking/bin/scm_query.par\"",
")",
"query_exe",
"=",
"os",
".",
... | https://github.com/facebook/FAI-PEP/blob/632918e8b4025044b67eb24aff57027e84836995/benchmarking/run_remote.py#L571-L599 | |
Tautulli/Tautulli | 2410eb33805aaac4bd1c5dad0f71e4f15afaf742 | lib/dateutil/relativedelta.py | python | relativedelta.__add__ | (self, other) | return ret | [] | def __add__(self, other):
if isinstance(other, relativedelta):
return self.__class__(years=other.years + self.years,
months=other.months + self.months,
days=other.days + self.days,
hours=other.hours + self.hours,
minutes=other.minutes + self.minutes,
seconds=other.seconds + self.seconds,
microseconds=(other.microseconds +
self.microseconds),
leapdays=other.leapdays or self.leapdays,
year=(other.year if other.year is not None
else self.year),
month=(other.month if other.month is not None
else self.month),
day=(other.day if other.day is not None
else self.day),
weekday=(other.weekday if other.weekday is not None
else self.weekday),
hour=(other.hour if other.hour is not None
else self.hour),
minute=(other.minute if other.minute is not None
else self.minute),
second=(other.second if other.second is not None
else self.second),
microsecond=(other.microsecond if other.microsecond
is not None else
self.microsecond))
if isinstance(other, datetime.timedelta):
return self.__class__(years=self.years,
months=self.months,
days=self.days + other.days,
hours=self.hours,
minutes=self.minutes,
seconds=self.seconds + other.seconds,
microseconds=self.microseconds + other.microseconds,
leapdays=self.leapdays,
year=self.year,
month=self.month,
day=self.day,
weekday=self.weekday,
hour=self.hour,
minute=self.minute,
second=self.second,
microsecond=self.microsecond)
if not isinstance(other, datetime.date):
return NotImplemented
elif self._has_time and not isinstance(other, datetime.datetime):
other = datetime.datetime.fromordinal(other.toordinal())
year = (self.year or other.year)+self.years
month = self.month or other.month
if self.months:
assert 1 <= abs(self.months) <= 12
month += self.months
if month > 12:
year += 1
month -= 12
elif month < 1:
year -= 1
month += 12
day = min(calendar.monthrange(year, month)[1],
self.day or other.day)
repl = {"year": year, "month": month, "day": day}
for attr in ["hour", "minute", "second", "microsecond"]:
value = getattr(self, attr)
if value is not None:
repl[attr] = value
days = self.days
if self.leapdays and month > 2 and calendar.isleap(year):
days += self.leapdays
ret = (other.replace(**repl)
+ datetime.timedelta(days=days,
hours=self.hours,
minutes=self.minutes,
seconds=self.seconds,
microseconds=self.microseconds))
if self.weekday:
weekday, nth = self.weekday.weekday, self.weekday.n or 1
jumpdays = (abs(nth) - 1) * 7
if nth > 0:
jumpdays += (7 - ret.weekday() + weekday) % 7
else:
jumpdays += (ret.weekday() - weekday) % 7
jumpdays *= -1
ret += datetime.timedelta(days=jumpdays)
return ret | [
"def",
"__add__",
"(",
"self",
",",
"other",
")",
":",
"if",
"isinstance",
"(",
"other",
",",
"relativedelta",
")",
":",
"return",
"self",
".",
"__class__",
"(",
"years",
"=",
"other",
".",
"years",
"+",
"self",
".",
"years",
",",
"months",
"=",
"oth... | https://github.com/Tautulli/Tautulli/blob/2410eb33805aaac4bd1c5dad0f71e4f15afaf742/lib/dateutil/relativedelta.py#L317-L402 | |||
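The year/month rollover at the heart of `relativedelta.__add__` (the `month += self.months` branch) follows the normalization below, shown as a hypothetical standalone helper:

```python
def normalize_year_month(year, month, months_delta):
    """Apply a signed month offset with |offset| <= 12, rolling the
    year over when the month leaves the 1..12 range."""
    assert 1 <= abs(months_delta) <= 12
    month += months_delta
    if month > 12:
        year += 1
        month -= 12
    elif month < 1:
        year -= 1
        month += 12
    return year, month

print(normalize_year_month(2023, 11, 3))   # (2024, 2)
print(normalize_year_month(2023, 2, -4))   # (2022, 10)
```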
omz/PythonistaAppTemplate | f560f93f8876d82a21d108977f90583df08d55af | PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/reportlab/pdfbase/cidfonts.py | python | CIDTypeFace.__init__ | (self, name) | Initialised from one of the canned dictionaries in allowedEncodings
Or rather, it will be shortly... | Initialised from one of the canned dictionaries in allowedEncodings | [
"Initialised",
"from",
"one",
"of",
"the",
"canned",
"dictionaries",
"in",
"allowedEncodings"
] | def __init__(self, name):
"""Initialised from one of the canned dictionaries in allowedEncodings
Or rather, it will be shortly..."""
pdfmetrics.TypeFace.__init__(self, name)
self._extractDictInfo(name) | [
"def",
"__init__",
"(",
"self",
",",
"name",
")",
":",
"pdfmetrics",
".",
"TypeFace",
".",
"__init__",
"(",
"self",
",",
"name",
")",
"self",
".",
"_extractDictInfo",
"(",
"name",
")"
] | https://github.com/omz/PythonistaAppTemplate/blob/f560f93f8876d82a21d108977f90583df08d55af/PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/reportlab/pdfbase/cidfonts.py#L232-L237 | ||
bhoov/exbert | d27b6236aa51b185f7d3fed904f25cabe3baeb1a | server/transformers/examples/summarization/bertabs/utils_summarization.py | python | encode_for_summarization | (story_lines, summary_lines, tokenizer) | return story_token_ids, summary_token_ids | Encode the story and summary lines, and join them
as specified in [1] by using `[SEP] [CLS]` tokens to separate
sentences. | Encode the story and summary lines, and join them
as specified in [1] by using `[SEP] [CLS]` tokens to separate
sentences. | [
"Encode",
"the",
"story",
"and",
"summary",
"lines",
"and",
"join",
"them",
"as",
"specified",
"in",
"[",
"1",
"]",
"by",
"using",
"[",
"SEP",
"]",
"[",
"CLS",
"]",
"tokens",
"to",
"separate",
"sentences",
"."
] | def encode_for_summarization(story_lines, summary_lines, tokenizer):
""" Encode the story and summary lines, and join them
as specified in [1] by using `[SEP] [CLS]` tokens to separate
sentences.
"""
story_lines_token_ids = [tokenizer.encode(line) for line in story_lines]
story_token_ids = [token for sentence in story_lines_token_ids for token in sentence]
summary_lines_token_ids = [tokenizer.encode(line) for line in summary_lines]
summary_token_ids = [token for sentence in summary_lines_token_ids for token in sentence]
return story_token_ids, summary_token_ids | [
"def",
"encode_for_summarization",
"(",
"story_lines",
",",
"summary_lines",
",",
"tokenizer",
")",
":",
"story_lines_token_ids",
"=",
"[",
"tokenizer",
".",
"encode",
"(",
"line",
")",
"for",
"line",
"in",
"story_lines",
"]",
"story_token_ids",
"=",
"[",
"token... | https://github.com/bhoov/exbert/blob/d27b6236aa51b185f7d3fed904f25cabe3baeb1a/server/transformers/examples/summarization/bertabs/utils_summarization.py#L130-L140 | |
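The encode-then-flatten pattern in `encode_for_summarization` reduces to a two-step comprehension; the stand-in tokenizer below is purely illustrative:

```python
def encode_lines(lines, encode):
    """Encode each line separately, then flatten into one id sequence."""
    per_line = [encode(line) for line in lines]
    return [tok for sentence in per_line for tok in sentence]

def toy_encode(line):
    # Stand-in for tokenizer.encode; real ids would come from a vocab.
    return [ord(c) for c in line]

print(encode_lines(["ab", "c"], toy_encode))  # [97, 98, 99]
```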
ghostop14/sparrow-wifi | 4b8289773ea4304872062f65a6ffc9352612b08e | wirelessengine.py | python | WirelessEngine.parseIWoutput | (iwOutput) | return retVal | [] | def parseIWoutput(iwOutput):
# Define search regexes once:
p_bss = re.compile('^BSS (.*?)\(')
p_ssid = re.compile('^.+?SSID: +(.*)')
p_ess = re.compile('^ capability:.*(ESS)')
p_ess_privacy = re.compile('^ capability:.*(ESS Privacy)')
p_ibss = re.compile('^ capability:.*(IBSS)')
p_ibss_privacy = re.compile('^ capability:.*(IBSS Privacy)')
p_auth_suites = re.compile('.*?Authentication suites: *(.*)')
p_pw_ciphers = re.compile('.*?Pairwise ciphers: *(.*)')
p_param_channel = re.compile('^.*?DS Parameter set: channel +([0-9]+).*')
p_primary_channel = re.compile('^.*?primary channel: +([0-9]+).*')
p_freq = re.compile('^.*?freq:.*?([0-9]+).*')
p_signal = re.compile('^.*?signal:.*?([\-0-9]+).*?dBm')
p_ht = re.compile('.*?HT20/HT40.*')
p_bw = re.compile('.*?\\* channel width:.*?([0-9]+) MHz.*')
p_secondary = re.compile('^.*?secondary channel offset: *([^ \\t]+).*')
p_thirdfreq = re.compile('^.*?center freq segment 1: *([^ \\t]+).*')
p_stationcount = re.compile('.*station count: ([0-9]+)')
p_utilization = re.compile('.*channel utilisation: ([0-9]+)/255')
# start
retVal = {}
curNetwork = None
now=datetime.datetime.now()
# This now supports direct from STDOUT via scanForNetworks,
# and input from a file as f.readlines() which returns a list
if type(iwOutput) == str:
inputLines = iwOutput.splitlines()
else:
inputLines = iwOutput
for curLine in inputLines:
fieldValue = WirelessEngine.getFieldValue(p_bss, curLine)
if (len(fieldValue) > 0):
# New object
if curNetwork is not None:
# Store first
if curNetwork.channel > 0:
# I did see incomplete output from iw where not all the data was there
retVal[curNetwork.getKey()] = curNetwork
# Create a new network. BSSID will be the header for each network
curNetwork = WirelessNetwork()
curNetwork.lastSeen = now
curNetwork.firstSeen = now
curNetwork.macAddr = fieldValue
continue
if curNetwork is None:
# If we don't have a network object yet, then we haven't
# seen a BSSID so just keep going through the lines.
continue
fieldValue = WirelessEngine.getFieldValue(p_ssid, curLine)
if (len(fieldValue) > 0):
curNetwork.ssid = WirelessEngine.convertUnknownToString(fieldValue)
fieldValue = WirelessEngine.getFieldValue(p_ess, curLine)
if (len(fieldValue) > 0):
curNetwork.mode = "AP"
# Had issue with WEP not showing up.
# If capability has "ESS Privacy" there's something there.
# If it's PSK, etc. there will be other RSN fields, etc.
# So for now start by assuming WEP
# See: https://wiki.archlinux.org/index.php/Wireless_network_configuration
fieldValue = WirelessEngine.getFieldValue(p_ess_privacy, curLine)
if (len(fieldValue) > 0):
curNetwork.security = "WEP"
curNetwork.privacy = "WEP"
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_ibss, curLine)
if (len(fieldValue) > 0):
curNetwork.mode = "Ad Hoc"
curNetwork.security = "[Ad-Hoc] Open"
fieldValue = WirelessEngine.getFieldValue(p_ibss_privacy, curLine)
if (len(fieldValue) > 0):
curNetwork.security = "[Ad-Hoc] WEP"
curNetwork.privacy = "WEP"
continue #Found the item
# Station count
fieldValue = WirelessEngine.getFieldValue(p_stationcount, curLine)
if (len(fieldValue) > 0):
curNetwork.stationcount = int(fieldValue)
continue #Found the item
# Utilization
fieldValue = WirelessEngine.getFieldValue(p_utilization, curLine)
if (len(fieldValue) > 0):
utilization = round(float(fieldValue) / 255.0 * 100.0 * 100.0) / 100.0
curNetwork.utilization = utilization
continue #Found the item
# Auth suites
fieldValue = WirelessEngine.getFieldValue(p_auth_suites, curLine)
if (len(fieldValue) > 0):
curNetwork.security = fieldValue
continue #Found the item
# p = re.compile('.*?Group cipher: *(.*)')
fieldValue = WirelessEngine.getFieldValue(p_pw_ciphers, curLine)
fieldValue = fieldValue.replace(' ', '/')
if (len(fieldValue) > 0):
curNetwork.privacy = fieldValue
curNetwork.cipher = fieldValue
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_param_channel, curLine)
if (len(fieldValue) > 0):
curNetwork.channel = int(fieldValue)
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_primary_channel, curLine)
if (len(fieldValue) > 0):
curNetwork.channel = int(fieldValue)
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_freq, curLine)
if (len(fieldValue) > 0):
curNetwork.frequency = int(fieldValue)
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_signal, curLine)
# This test is different. dBm is negative so can't test > 0. 10dBm is really high so lets use that
if (len(fieldValue) > 0):
curNetwork.signal = int(fieldValue)
curNetwork.strongestsignal = curNetwork.signal
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_ht, curLine)
if (len(fieldValue) > 0):
if (curNetwork.bandwidth == 20):
curNetwork.bandwidth = 40
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_bw, curLine)
if (len(fieldValue) > 0):
curNetwork.bandwidth = int(fieldValue)
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_secondary, curLine)
if (len(fieldValue) > 0):
curNetwork.secondaryChannelLocation = fieldValue
if (fieldValue == 'above'):
curNetwork.secondaryChannel = curNetwork.channel + 4
elif (fieldValue == 'below'):
curNetwork.secondaryChannel = curNetwork.channel - 4
# else it'll say 'no secondary'
continue #Found the item
fieldValue = WirelessEngine.getFieldValue(p_thirdfreq, curLine)
if (len(fieldValue) > 0):
curNetwork.thirdChannel = int(fieldValue)
continue #Found the item
# #### End loop ######
# Add the last network
if curNetwork is not None:
if curNetwork.channel > 0:
# I did see incomplete output from iw where not all the data was there
retVal[curNetwork.getKey()] = curNetwork
return retVal | [
"def",
"parseIWoutput",
"(",
"iwOutput",
")",
":",
"# Define search regexes once:",
"p_bss",
"=",
"re",
".",
"compile",
"(",
"'^BSS (.*?)\\('",
")",
"p_ssid",
"=",
"re",
".",
"compile",
"(",
"'^.+?SSID: +(.*)'",
")",
"p_ess",
"=",
"re",
".",
"compile",
"(",
... | https://github.com/ghostop14/sparrow-wifi/blob/4b8289773ea4304872062f65a6ffc9352612b08e/wirelessengine.py#L667-L855 | |||
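One field extraction from `parseIWoutput` — the channel-utilisation conversion from `N/255` to a two-decimal percentage — can be isolated as a sketch:

```python
import re

P_UTILIZATION = re.compile(r'.*channel utilisation: ([0-9]+)/255')

def parse_utilization(line):
    """Convert an `iw` 'channel utilisation: N/255' line to a percentage
    rounded to two decimals, as the scanner above does."""
    m = P_UTILIZATION.match(line)
    if m is None:
        return None
    return round(float(m.group(1)) / 255.0 * 100.0 * 100.0) / 100.0

print(parse_utilization("\t* channel utilisation: 51/255"))  # 20.0
```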
mozilla/pontoon | d26999eea57902a30b5c15e9b77277fe7e76a60f | pontoon/base/templatetags/helpers.py | python | provider_login_url | (request, provider_id=settings.AUTHENTICATION_METHOD, **query) | return provider.get_login_url(request, **query) | This function adapts the django-allauth templatetags that don't support jinja2.
@TODO: land support for the jinja2 tags in the django-allauth. | This function adapts the django-allauth templatetags that don't support jinja2. | [
"This",
"function",
"adapts",
"the",
"django",
"-",
"allauth",
"templatetags",
"that",
"don",
"t",
"support",
"jinja2",
"."
] | def provider_login_url(request, provider_id=settings.AUTHENTICATION_METHOD, **query):
"""
This function adapts the django-allauth templatetags that don't support jinja2.
@TODO: land support for the jinja2 tags in the django-allauth.
"""
provider = providers.registry.by_id(provider_id)
auth_params = query.get("auth_params", None)
process = query.get("process", None)
if auth_params == "":
del query["auth_params"]
if "next" not in query:
next_ = get_request_param(request, "next")
if next_:
query["next"] = next_
elif process == "redirect":
query["next"] = request.get_full_path()
else:
if not query["next"]:
del query["next"]
return provider.get_login_url(request, **query) | [
"def",
"provider_login_url",
"(",
"request",
",",
"provider_id",
"=",
"settings",
".",
"AUTHENTICATION_METHOD",
",",
"*",
"*",
"query",
")",
":",
"provider",
"=",
"providers",
".",
"registry",
".",
"by_id",
"(",
"provider_id",
")",
"auth_params",
"=",
"query",... | https://github.com/mozilla/pontoon/blob/d26999eea57902a30b5c15e9b77277fe7e76a60f/pontoon/base/templatetags/helpers.py#L181-L203 | |
flow-project/flow | a511c41c48e6b928bb2060de8ad1ef3c3e3d9554 | flow/networks/highway.py | python | HighwayNetwork.specify_routes | (self, net_params) | return rts | See parent class. | See parent class. | [
"See",
"parent",
"class",
"."
] | def specify_routes(self, net_params):
"""See parent class."""
num_edges = net_params.additional_params.get("num_edges", 1)
rts = {}
for i in range(num_edges):
rts["highway_{}".format(i)] = ["highway_{}".format(j) for
j in range(i, num_edges)]
if self.net_params.additional_params["use_ghost_edge"]:
rts["highway_{}".format(i)].append("highway_end")
return rts | [
"def",
"specify_routes",
"(",
"self",
",",
"net_params",
")",
":",
"num_edges",
"=",
"net_params",
".",
"additional_params",
".",
"get",
"(",
"\"num_edges\"",
",",
"1",
")",
"rts",
"=",
"{",
"}",
"for",
"i",
"in",
"range",
"(",
"num_edges",
")",
":",
"... | https://github.com/flow-project/flow/blob/a511c41c48e6b928bb2060de8ad1ef3c3e3d9554/flow/networks/highway.py#L153-L163 | |
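The route table built by `specify_routes` can be sketched free of the network class (the standalone function name is an assumption):

```python
def highway_routes(num_edges, use_ghost_edge=False):
    """Route table: from edge i a vehicle continues through every
    downstream highway edge, optionally ending on a ghost edge."""
    rts = {}
    for i in range(num_edges):
        route = ["highway_{}".format(j) for j in range(i, num_edges)]
        if use_ghost_edge:
            route.append("highway_end")
        rts["highway_{}".format(i)] = route
    return rts

print(highway_routes(3)["highway_1"])  # ['highway_1', 'highway_2']
```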
google/coursebuilder-core | 08f809db3226d9269e30d5edd0edd33bd22041f4 | coursebuilder/common/safe_dom.py | python | ScriptElement.add_text | (self, text) | Add the script body. | Add the script body. | [
"Add",
"the",
"script",
"body",
"."
] | def add_text(self, text):
"""Add the script body."""
class Script(Text):
def __init__(self, script):
# Pylint is just plain wrong about warning here; suppressing.
# pylint: disable=bad-super-call
super(Script, self).__init__(None)
self._script = script
@property
def sanitized(self):
if '</script>' in self._script:
raise ValueError('End script tag forbidden')
return self._script
self._children.append(Script(text)) | [
"def",
"add_text",
"(",
"self",
",",
"text",
")",
":",
"class",
"Script",
"(",
"Text",
")",
":",
"def",
"__init__",
"(",
"self",
",",
"script",
")",
":",
"# Pylint is just plain wrong about warning here; suppressing.",
"# pylint: disable=bad-super-call",
"super",
"(... | https://github.com/google/coursebuilder-core/blob/08f809db3226d9269e30d5edd0edd33bd22041f4/coursebuilder/common/safe_dom.py#L280-L297 | ||
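The sanitization rule enforced by the `sanitized` property above — reject any script body that would prematurely close the `<script>` tag — amounts to this check (the helper name is an assumption):

```python
def checked_script_body(script):
    """Return the script body unchanged, or raise if it contains a
    closing </script> tag that would break out of the element."""
    if '</script>' in script:
        raise ValueError('End script tag forbidden')
    return script

print(checked_script_body("console.log('ok');"))  # console.log('ok');
```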
keon/algorithms | 23d4e85a506eaeaff315e855be12f8dbe47a7ec3 | algorithms/graph/find_path.py | python | find_path | (graph, start, end, path=[]) | return None | [] | def find_path(graph, start, end, path=[]):
path = path + [start]
if (start == end):
return path
if not start in graph:
return None
for node in graph[start]:
if node not in path:
newpath = find_path(graph, node, end, path)
if newpath:
return newpath
return None | [
"def",
"find_path",
"(",
"graph",
",",
"start",
",",
"end",
",",
"path",
"=",
"[",
"]",
")",
":",
"path",
"=",
"path",
"+",
"[",
"start",
"]",
"if",
"(",
"start",
"==",
"end",
")",
":",
"return",
"path",
"if",
"not",
"start",
"in",
"graph",
":"... | https://github.com/keon/algorithms/blob/23d4e85a506eaeaff315e855be12f8dbe47a7ec3/algorithms/graph/find_path.py#L9-L19 | |||
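A self-contained usage sketch of depth-first path search with backtracking (graph and names here are illustrative, not from the repository):

```python
def find_path_backtracking(graph, start, end, path=None):
    """Depth-first path search that abandons dead-end branches and
    keeps trying sibling nodes."""
    path = (path or []) + [start]
    if start == end:
        return path
    if start not in graph:
        return None
    for node in graph[start]:
        if node not in path:
            newpath = find_path_backtracking(graph, node, end, path)
            if newpath:  # only stop once a branch actually reaches `end`
                return newpath
    return None

graph = {'A': ['B', 'C'], 'B': [], 'C': ['D'], 'D': []}
print(find_path_backtracking(graph, 'A', 'D'))  # ['A', 'C', 'D']
```

The `B` branch dead-ends, so the search backtracks to `A` and succeeds through `C`.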
XKNX/xknx | 1deeeb3dc0978aebacf14492a84e1f1eaf0970ed | xknx/remote_value/remote_value_climate_mode.py | python | RemoteValueOperationMode.to_knx | (self, value: Any) | return DPTArray(self._climate_mode_transcoder.to_knx(value)) | Convert value to payload. | Convert value to payload. | [
"Convert",
"value",
"to",
"payload",
"."
] | def to_knx(self, value: Any) -> DPTArray:
"""Convert value to payload."""
return DPTArray(self._climate_mode_transcoder.to_knx(value)) | [
"def",
"to_knx",
"(",
"self",
",",
"value",
":",
"Any",
")",
"->",
"DPTArray",
":",
"return",
"DPTArray",
"(",
"self",
".",
"_climate_mode_transcoder",
".",
"to_knx",
"(",
"value",
")",
")"
] | https://github.com/XKNX/xknx/blob/1deeeb3dc0978aebacf14492a84e1f1eaf0970ed/xknx/remote_value/remote_value_climate_mode.py#L96-L98 | |
LinkedInAttic/indextank-service | 880c6295ce8e7a3a55bf9b3777cc35c7680e0d7e | storefront/boto/ec2/connection.py | python | EC2Connection.get_spot_price_history | (self, start_time=None, end_time=None,
instance_type=None, product_description=None) | return self.get_list('DescribeSpotPriceHistory', params, [('item', SpotPriceHistory)]) | Retrieve the recent history of spot instances pricing.
@type start_time: str
@param start_time: An indication of how far back to provide price
changes for. An ISO8601 DateTime string.
@type end_time: str
@param end_time: An indication of how far forward to provide price
changes for. An ISO8601 DateTime string.
@type instance_type: str
@param instance_type: Filter responses to a particular instance type.
@type product_description: str
@param product_description: Filter responses to a particular platform.
Valid values are currently: Linux
@rtype: list
@return: A list tuples containing price and timestamp. | Retrieve the recent history of spot instances pricing. | [
"Retrieve",
"the",
"recent",
"history",
"of",
"spot",
"instances",
"pricing",
"."
] | def get_spot_price_history(self, start_time=None, end_time=None,
instance_type=None, product_description=None):
"""
Retrieve the recent history of spot instances pricing.
@type start_time: str
@param start_time: An indication of how far back to provide price
changes for. An ISO8601 DateTime string.
@type end_time: str
@param end_time: An indication of how far forward to provide price
changes for. An ISO8601 DateTime string.
@type instance_type: str
@param instance_type: Filter responses to a particular instance type.
@type product_description: str
@param product_descripton: Filter responses to a particular platform.
Valid values are currently: Linux
@rtype: list
@return: A list tuples containing price and timestamp.
"""
params = {}
if start_time:
params['StartTime'] = start_time
if end_time:
params['EndTime'] = end_time
if instance_type:
params['InstanceType'] = instance_type
if product_description:
params['ProductDescription'] = product_description
return self.get_list('DescribeSpotPriceHistory', params, [('item', SpotPriceHistory)]) | [
"def",
"get_spot_price_history",
"(",
"self",
",",
"start_time",
"=",
"None",
",",
"end_time",
"=",
"None",
",",
"instance_type",
"=",
"None",
",",
"product_description",
"=",
"None",
")",
":",
"params",
"=",
"{",
"}",
"if",
"start_time",
":",
"params",
"[... | https://github.com/LinkedInAttic/indextank-service/blob/880c6295ce8e7a3a55bf9b3777cc35c7680e0d7e/storefront/boto/ec2/connection.py#L643-L675 | |
cltk/cltk | 1a8c2f5ef72389e2579dfce1fa5af8e59ebc9ec1 | src/cltk/prosody/non.py | python | LongLine.syllabify | (self, syllabifier) | >>> raw_long_line = "Deyr fé,\\ndeyja frændr"
>>> short_line = ShortLine(raw_long_line)
>>> syl = Syllabifier(language="non", break_geminants=True)
>>> syl.set_invalid_onsets(old_norse_syllabifier.invalid_onsets)
>>> short_line.syllabify(syl)
:param syllabifier: Old Norse syllabifier
:return: | >>> raw_long_line = "Deyr fé,\\ndeyja frændr"
>>> short_line = ShortLine(raw_long_line)
>>> syl = Syllabifier(language="non", break_geminants=True)
>>> syl.set_invalid_onsets(old_norse_syllabifier.invalid_onsets)
>>> short_line.syllabify(syl) | [
">>>",
"raw_long_line",
"=",
"Deyr",
"fé",
"\\\\",
"ndeyja",
"frændr",
">>>",
"short_line",
"=",
"ShortLine",
"(",
"raw_long_line",
")",
">>>",
"syl",
"=",
"Syllabifier",
"(",
"language",
"=",
"non",
"break_geminants",
"=",
"True",
")",
">>>",
"syl",
".",
... | def syllabify(self, syllabifier):
"""
>>> raw_long_line = "Deyr fé,\\ndeyja frændr"
>>> short_line = ShortLine(raw_long_line)
>>> syl = Syllabifier(language="non", break_geminants=True)
>>> syl.set_invalid_onsets(old_norse_syllabifier.invalid_onsets)
>>> short_line.syllabify(syl)
:param syllabifier: Old Norse syllabifier
:return:
"""
for viisuordh in self.tokenized_text:
word = old_norse_normalize(viisuordh)
if word:
self.syllabified.append(syllabifier.syllabify(word)) | [
"def",
"syllabify",
"(",
"self",
",",
"syllabifier",
")",
":",
"for",
"viisuordh",
"in",
"self",
".",
"tokenized_text",
":",
"word",
"=",
"old_norse_normalize",
"(",
"viisuordh",
")",
"if",
"word",
":",
"self",
".",
"syllabified",
".",
"append",
"(",
"syll... | https://github.com/cltk/cltk/blob/1a8c2f5ef72389e2579dfce1fa5af8e59ebc9ec1/src/cltk/prosody/non.py#L154-L168 | ||
aws-samples/aws-kube-codesuite | ab4e5ce45416b83bffb947ab8d234df5437f4fca | src/networkx/algorithms/components/connected.py | python | is_connected | (G) | return len(set(_plain_bfs(G, arbitrary_element(G)))) == len(G) | Return True if the graph is connected, false otherwise.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Returns
-------
connected : bool
True if the graph is connected, false otherwise.
Raises
------
NetworkXNotImplemented:
If G is directed.
Examples
--------
>>> G = nx.path_graph(4)
>>> print(nx.is_connected(G))
True
See Also
--------
is_strongly_connected
is_weakly_connected
is_semiconnected
is_biconnected
connected_components
Notes
-----
For undirected graphs only. | Return True if the graph is connected, false otherwise. | [
"Return",
"True",
"if",
"the",
"graph",
"is",
"connected",
"false",
"otherwise",
"."
] | def is_connected(G):
"""Return True if the graph is connected, false otherwise.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Returns
-------
connected : bool
True if the graph is connected, false otherwise.
Raises
------
NetworkXNotImplemented:
If G is undirected.
Examples
--------
>>> G = nx.path_graph(4)
>>> print(nx.is_connected(G))
True
See Also
--------
is_strongly_connected
is_weakly_connected
is_semiconnected
is_biconnected
connected_components
Notes
-----
For undirected graphs only.
"""
if len(G) == 0:
raise nx.NetworkXPointlessConcept('Connectivity is undefined ',
'for the null graph.')
return len(set(_plain_bfs(G, arbitrary_element(G)))) == len(G) | [
"def",
"is_connected",
"(",
"G",
")",
":",
"if",
"len",
"(",
"G",
")",
"==",
"0",
":",
"raise",
"nx",
".",
"NetworkXPointlessConcept",
"(",
"'Connectivity is undefined '",
",",
"'for the null graph.'",
")",
"return",
"len",
"(",
"set",
"(",
"_plain_bfs",
"("... | https://github.com/aws-samples/aws-kube-codesuite/blob/ab4e5ce45416b83bffb947ab8d234df5437f4fca/src/networkx/algorithms/components/connected.py#L157-L197 | |
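The connectivity test above is a BFS reachability count compared against the graph size. A minimal adjacency-dict sketch of the `_plain_bfs` role:

```python
from collections import deque

def bfs_component_size(adj, source):
    """Number of nodes reachable from `source` via plain BFS."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Connected iff every node is reachable from an arbitrary start node.
print(bfs_component_size(path4, 0) == len(path4))  # True
```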
kubernetes-client/python | 47b9da9de2d02b2b7a34fbe05afb44afd130d73a | kubernetes/client/api/rbac_authorization_v1_api.py | python | RbacAuthorizationV1Api.replace_namespaced_role | (self, name, namespace, body, **kwargs) | return self.replace_namespaced_role_with_http_info(name, namespace, body, **kwargs) | replace_namespaced_role # noqa: E501
replace the specified Role # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_namespaced_role(name, namespace, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the Role (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1Role body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1Role
If the method is called asynchronously,
returns the request thread. | replace_namespaced_role # noqa: E501 | [
"replace_namespaced_role",
"#",
"noqa",
":",
"E501"
] | def replace_namespaced_role(self, name, namespace, body, **kwargs): # noqa: E501
"""replace_namespaced_role # noqa: E501
replace the specified Role # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_namespaced_role(name, namespace, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the Role (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1Role body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1Role
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.replace_namespaced_role_with_http_info(name, namespace, body, **kwargs) | [
"def",
"replace_namespaced_role",
"(",
"self",
",",
"name",
",",
"namespace",
",",
"body",
",",
"*",
"*",
"kwargs",
")",
":",
"# noqa: E501",
"kwargs",
"[",
"'_return_http_data_only'",
"]",
"=",
"True",
"return",
"self",
".",
"replace_namespaced_role_with_http_inf... | https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/api/rbac_authorization_v1_api.py#L4294-L4322 | |
bruderstein/PythonScript | df9f7071ddf3a079e3a301b9b53a6dc78cf1208f | PythonLib/min/inspect.py | python | Signature.replace | (self, *, parameters=_void, return_annotation=_void) | return type(self)(parameters,
return_annotation=return_annotation) | Creates a customized copy of the Signature.
Pass 'parameters' and/or 'return_annotation' arguments
to override them in the new copy. | Creates a customized copy of the Signature.
Pass 'parameters' and/or 'return_annotation' arguments
to override them in the new copy. | [
"Creates",
"a",
"customized",
"copy",
"of",
"the",
"Signature",
".",
"Pass",
"parameters",
"and",
"/",
"or",
"return_annotation",
"arguments",
"to",
"override",
"them",
"in",
"the",
"new",
"copy",
"."
] | def replace(self, *, parameters=_void, return_annotation=_void):
"""Creates a customized copy of the Signature.
Pass 'parameters' and/or 'return_annotation' arguments
to override them in the new copy.
"""
if parameters is _void:
parameters = self.parameters.values()
if return_annotation is _void:
return_annotation = self._return_annotation
return type(self)(parameters,
return_annotation=return_annotation) | [
"def",
"replace",
"(",
"self",
",",
"*",
",",
"parameters",
"=",
"_void",
",",
"return_annotation",
"=",
"_void",
")",
":",
"if",
"parameters",
"is",
"_void",
":",
"parameters",
"=",
"self",
".",
"parameters",
".",
"values",
"(",
")",
"if",
"return_annot... | https://github.com/bruderstein/PythonScript/blob/df9f7071ddf3a079e3a301b9b53a6dc78cf1208f/PythonLib/min/inspect.py#L3007-L3020 | |
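`Signature.replace` in the row above is part of Python's public `inspect` API, so a small stdlib usage sketch can be checked directly; the sample function `f` is invented for illustration:

```python
import inspect

def f(a, b=2):
    return a + b

sig = inspect.signature(f)

# replace() returns a customized copy; the original Signature is untouched.
new_sig = sig.replace(return_annotation=int)
```

Here `str(sig)` stays `'(a, b=2)'` while `new_sig` carries the overridden return annotation.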
oaubert/python-vlc | 908ffdbd0844dc1849728c456e147788798c99da | generated/dev/vlc.py | python | libvlc_media_new_path | (p_instance, path) | return f(p_instance, path) | Create a media for a certain file path.
See L{libvlc_media_release}.
@param p_instance: the instance.
@param path: local filesystem path.
@return: the newly created media or None on error. | Create a media for a certain file path.
See L{libvlc_media_release}. | [
"Create",
"a",
"media",
"for",
"a",
"certain",
"file",
"path",
".",
"See",
"L",
"{",
"libvlc_media_release",
"}",
"."
] | def libvlc_media_new_path(p_instance, path):
'''Create a media for a certain file path.
See L{libvlc_media_release}.
@param p_instance: the instance.
@param path: local filesystem path.
@return: the newly created media or None on error.
'''
f = _Cfunctions.get('libvlc_media_new_path', None) or \
_Cfunction('libvlc_media_new_path', ((1,), (1,),), class_result(Media),
ctypes.c_void_p, Instance, ctypes.c_char_p)
return f(p_instance, path) | [
"def",
"libvlc_media_new_path",
"(",
"p_instance",
",",
"path",
")",
":",
"f",
"=",
"_Cfunctions",
".",
"get",
"(",
"'libvlc_media_new_path'",
",",
"None",
")",
"or",
"_Cfunction",
"(",
"'libvlc_media_new_path'",
",",
"(",
"(",
"1",
",",
")",
",",
"(",
"1"... | https://github.com/oaubert/python-vlc/blob/908ffdbd0844dc1849728c456e147788798c99da/generated/dev/vlc.py#L5256-L5266 | |
pypa/pipenv | b21baade71a86ab3ee1429f71fbc14d4f95fb75d | pipenv/patched/notpip/_vendor/distlib/util.py | python | Sequencer.add | (self, pred, succ) | [] | def add(self, pred, succ):
assert pred != succ
self._preds.setdefault(succ, set()).add(pred)
self._succs.setdefault(pred, set()).add(succ) | [
"def",
"add",
"(",
"self",
",",
"pred",
",",
"succ",
")",
":",
"assert",
"pred",
"!=",
"succ",
"self",
".",
"_preds",
".",
"setdefault",
"(",
"succ",
",",
"set",
"(",
")",
")",
".",
"add",
"(",
"pred",
")",
"self",
".",
"_succs",
".",
"setdefault... | https://github.com/pypa/pipenv/blob/b21baade71a86ab3ee1429f71fbc14d4f95fb75d/pipenv/patched/notpip/_vendor/distlib/util.py#L1084-L1087 | ||||
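The `Sequencer.add` row keeps two dict-of-set indexes (`_preds`, `_succs`). A minimal sketch of how such indexes drive a topological ordering is below; the `topo_order` helper is my own addition using Kahn's algorithm, not part of distlib's richer `Sequencer` API:

```python
class Sequencer:
    def __init__(self):
        self._preds = {}   # succ -> set of predecessors
        self._succs = {}   # pred -> set of successors

    def add(self, pred, succ):
        assert pred != succ
        self._preds.setdefault(succ, set()).add(pred)
        self._succs.setdefault(pred, set()).add(succ)

    def topo_order(self):
        """Kahn's algorithm over the two indexes (illustrative helper)."""
        nodes = set(self._preds) | set(self._succs)
        pending = {n: set(self._preds.get(n, ())) for n in nodes}
        order = []
        while pending:
            ready = [n for n, preds in pending.items() if not preds]
            if not ready:
                raise ValueError("cycle detected")
            for n in sorted(ready, key=str):   # deterministic tie-break
                order.append(n)
                del pending[n]
                for succ in self._succs.get(n, ()):
                    if succ in pending:
                        pending[succ].discard(n)
        return order
```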
kblomqvist/yasha | 56bb1f69077957954e1ebeb77f7273e8dc6a891b | yasha/cli.py | python | print_version | (ctx, param, value) | [] | def print_version(ctx, param, value):
if not value or ctx.resilient_parsing:
return
click.echo(yasha.__version__)
ctx.exit() | [
"def",
"print_version",
"(",
"ctx",
",",
"param",
",",
"value",
")",
":",
"if",
"not",
"value",
"or",
"ctx",
".",
"resilient_parsing",
":",
"return",
"click",
".",
"echo",
"(",
"yasha",
".",
"__version__",
")",
"ctx",
".",
"exit",
"(",
")"
] | https://github.com/kblomqvist/yasha/blob/56bb1f69077957954e1ebeb77f7273e8dc6a891b/yasha/cli.py#L39-L43 | ||||
zzzeek/sqlalchemy | fc5c54fcd4d868c2a4c7ac19668d72f506fe821e | lib/sqlalchemy/engine/row.py | python | Row._special_name_accessor | (name) | return go | Handle ambiguous names such as "count" and "index" | Handle ambiguous names such as "count" and "index" | [
"Handle",
"ambiguous",
"names",
"such",
"as",
"count",
"and",
"index"
] | def _special_name_accessor(name):
"""Handle ambiguous names such as "count" and "index" """
@property
def go(self):
if self._parent._has_key(name):
return self.__getattr__(name)
else:
def meth(*arg, **kw):
return getattr(collections_abc.Sequence, name)(
self, *arg, **kw
)
return meth
return go | [
"def",
"_special_name_accessor",
"(",
"name",
")",
":",
"@",
"property",
"def",
"go",
"(",
"self",
")",
":",
"if",
"self",
".",
"_parent",
".",
"_has_key",
"(",
"name",
")",
":",
"return",
"self",
".",
"__getattr__",
"(",
"name",
")",
"else",
":",
"d... | https://github.com/zzzeek/sqlalchemy/blob/fc5c54fcd4d868c2a4c7ac19668d72f506fe821e/lib/sqlalchemy/engine/row.py#L204-L220 | |
binaryage/firelogger.py | 23980e7964fd330aa3ce48d3cc0911fe3dafbfba | gprof2dot.py | python | Profile._tarjan | (self, function, order, stack, orders, lowlinks, visited) | return order | Tarjan's strongly connected components algorithm.
See also:
- http://en.wikipedia.org/wiki/Tarjan's_strongly_connected_components_algorithm | Tarjan's strongly connected components algorithm. | [
"Tarjan",
"s",
"strongly",
"connected",
"components",
"algorithm",
"."
] | def _tarjan(self, function, order, stack, orders, lowlinks, visited):
"""Tarjan's strongly connected components algorithm.
See also:
- http://en.wikipedia.org/wiki/Tarjan's_strongly_connected_components_algorithm
"""
visited.add(function)
orders[function] = order
lowlinks[function] = order
order += 1
pos = len(stack)
stack.append(function)
for call in function.calls.itervalues():
callee = self.functions[call.callee_id]
# TODO: use a set to optimize lookup
if callee not in orders:
order = self._tarjan(callee, order, stack, orders, lowlinks, visited)
lowlinks[function] = min(lowlinks[function], lowlinks[callee])
elif callee in stack:
lowlinks[function] = min(lowlinks[function], orders[callee])
if lowlinks[function] == orders[function]:
# Strongly connected component found
members = stack[pos:]
del stack[pos:]
if len(members) > 1:
cycle = Cycle()
for member in members:
cycle.add_function(member)
return order | [
"def",
"_tarjan",
"(",
"self",
",",
"function",
",",
"order",
",",
"stack",
",",
"orders",
",",
"lowlinks",
",",
"visited",
")",
":",
"visited",
".",
"add",
"(",
"function",
")",
"orders",
"[",
"function",
"]",
"=",
"order",
"lowlinks",
"[",
"function"... | https://github.com/binaryage/firelogger.py/blob/23980e7964fd330aa3ce48d3cc0911fe3dafbfba/gprof2dot.py#L263-L292 | |
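The `_tarjan` row follows the textbook recursion for strongly connected components. A self-contained sketch over a plain adjacency dict is below, simplified to return component node lists rather than building gprof2dot's `Cycle` objects:

```python
def tarjan_scc(graph):
    """Return the strongly connected components of `graph`
    (node -> iterable of successors) as a list of lists."""
    orders, lowlinks = {}, {}
    stack, on_stack, components = [], set(), []
    counter = [0]

    def strongconnect(v):
        orders[v] = lowlinks[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in orders:
                strongconnect(w)
                lowlinks[v] = min(lowlinks[v], lowlinks[w])
            elif w in on_stack:
                lowlinks[v] = min(lowlinks[v], orders[w])
        if lowlinks[v] == orders[v]:
            # Root of an SCC: pop the whole component off the stack.
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            components.append(component)

    for node in graph:
        if node not in orders:
            strongconnect(node)
    return components
```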
abr/abr_control | a248ec56166f01791857a766ac58ee0920c0861c | abr_control/interfaces/coppeliasim_files/sim.py | python | simxGetUIEventButton | (clientID, uiHandle, operationMode) | return ret, uiEventButtonID.value, arr | Please have a look at the function description/documentation in the CoppeliaSim user manual | Please have a look at the function description/documentation in the CoppeliaSim user manual | [
"Please",
"have",
"a",
"look",
"at",
"the",
"function",
"description",
"/",
"documentation",
"in",
"the",
"CoppeliaSim",
"user",
"manual"
] | def simxGetUIEventButton(clientID, uiHandle, operationMode):
"""
Please have a look at the function description/documentation in the CoppeliaSim user manual
"""
uiEventButtonID = ct.c_int()
auxValues = (ct.c_int * 2)()
ret = c_GetUIEventButton(
clientID, uiHandle, ct.byref(uiEventButtonID), auxValues, operationMode
)
arr = []
for i in range(2):
arr.append(auxValues[i])
return ret, uiEventButtonID.value, arr | [
"def",
"simxGetUIEventButton",
"(",
"clientID",
",",
"uiHandle",
",",
"operationMode",
")",
":",
"uiEventButtonID",
"=",
"ct",
".",
"c_int",
"(",
")",
"auxValues",
"=",
"(",
"ct",
".",
"c_int",
"*",
"2",
")",
"(",
")",
"ret",
"=",
"c_GetUIEventButton",
"... | https://github.com/abr/abr_control/blob/a248ec56166f01791857a766ac58ee0920c0861c/abr_control/interfaces/coppeliasim_files/sim.py#L935-L948 | |
facebookresearch/habitat-lab | c6b96ac061f238f18ad5ca2c08f7f46819d30bd0 | habitat/tasks/rearrange/rearrange_grasp_manager.py | python | RearrangeGraspManager.is_violating_hold_constraint | (self) | return False | Returns true if the object is too far away from the gripper, meaning
the agent violated the hold constraint. | Returns true if the object is too far away from the gripper, meaning
the agent violated the hold constraint. | [
"Returns",
"true",
"if",
"the",
"object",
"is",
"too",
"far",
"away",
"from",
"the",
"gripper",
"meaning",
"the",
"agent",
"violated",
"the",
"hold",
"constraint",
"."
] | def is_violating_hold_constraint(self) -> bool:
"""
Returns true if the object is too far away from the gripper, meaning
the agent violated the hold constraint.
"""
ee_pos = self._sim.robot.ee_transform.translation
if self._snapped_obj_id is not None and (
np.linalg.norm(ee_pos - self.snap_rigid_obj.translation)
>= self._config.HOLD_THRESH
):
return True
if self._snapped_marker_id is not None:
marker = self._sim.get_marker(self._snapped_marker_id)
if (
np.linalg.norm(ee_pos - marker.get_current_position())
>= self._config.HOLD_THRESH
):
return True
return False | [
"def",
"is_violating_hold_constraint",
"(",
"self",
")",
"->",
"bool",
":",
"ee_pos",
"=",
"self",
".",
"_sim",
".",
"robot",
".",
"ee_transform",
".",
"translation",
"if",
"self",
".",
"_snapped_obj_id",
"is",
"not",
"None",
"and",
"(",
"np",
".",
"linalg... | https://github.com/facebookresearch/habitat-lab/blob/c6b96ac061f238f18ad5ca2c08f7f46819d30bd0/habitat/tasks/rearrange/rearrange_grasp_manager.py#L64-L83 | |
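The hold-constraint check in the row above is a Euclidean-distance threshold between the end-effector and the grasped object. A dependency-free sketch of the same test; the 0.15 m threshold is an invented placeholder, not habitat's configured `HOLD_THRESH`:

```python
import math

HOLD_THRESH = 0.15  # metres; placeholder value for illustration only

def is_violating_hold(ee_pos, obj_pos, hold_thresh=HOLD_THRESH):
    """True when the held object has drifted at least `hold_thresh`
    away from the end-effector position (both are 3-tuples)."""
    return math.dist(ee_pos, obj_pos) >= hold_thresh
```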
securesystemslab/zippy | ff0e84ac99442c2c55fe1d285332cfd4e185e089 | zippy/benchmarks/src/benchmarks/sympy/sympy/galgebra/printing.py | python | GA_Printer.__exit__ | (self, type, value, traceback) | [] | def __exit__(self, type, value, traceback):
GA_Printer._off() | [
"def",
"__exit__",
"(",
"self",
",",
"type",
",",
"value",
",",
"traceback",
")",
":",
"GA_Printer",
".",
"_off",
"(",
")"
] | https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/benchmarks/src/benchmarks/sympy/sympy/galgebra/printing.py#L229-L230 | ||||
exaile/exaile | a7b58996c5c15b3aa7b9975ac13ee8f784ef4689 | xlgui/widgets/rating.py | python | RatingWidget.destroy | (self) | Cleanups | Cleanups | [
"Cleanups"
] | def destroy(self):
"""
Cleanups
"""
if self._player is not None:
event.remove_callback(
self.on_rating_update, 'playback_track_start', self._player
)
event.remove_callback(
self.on_rating_update, 'playback_track_end', self._player
)
event.remove_callback(self.on_rating_update, 'rating_changed') | [
"def",
"destroy",
"(",
"self",
")",
":",
"if",
"self",
".",
"_player",
"is",
"not",
"None",
":",
"event",
".",
"remove_callback",
"(",
"self",
".",
"on_rating_update",
",",
"'playback_track_start'",
",",
"self",
".",
"_player",
")",
"event",
".",
"remove_c... | https://github.com/exaile/exaile/blob/a7b58996c5c15b3aa7b9975ac13ee8f784ef4689/xlgui/widgets/rating.py#L93-L104 | ||
Ecogenomics/GTDBTk | 1e10c56530b4a15eadce519619a62584a490632d | gtdbtk/markers.py | python | Markers.identify | (self, genomes, tln_tables, out_dir, prefix, force, write_single_copy_genes) | Identify marker genes in genomes.
Parameters
----------
genomes : dict
Genome IDs as the key, path to genome file as value.
tln_tables: Dict[str, int]
Genome ID -> translation table mapping for those user-specified.
out_dir : str
Path to the output directory.
prefix : str
Prefix to append to generated files.
force : bool
Overwrite any existing files.
write_single_copy_genes : bool
Write unique AR122/BAC120 marker files to disk.
Raises
------
GTDBTkException
If an exception is encountered during the identify step. | Identify marker genes in genomes. | [
"Identify",
"marker",
"genes",
"in",
"genomes",
"."
] | def identify(self, genomes, tln_tables, out_dir, prefix, force, write_single_copy_genes):
"""Identify marker genes in genomes.
Parameters
----------
genomes : dict
Genome IDs as the key, path to genome file as value.
tln_tables: Dict[str, int]
Genome ID -> translation table mapping for those user-specified.
out_dir : str
Path to the output directory.
prefix : str
Prefix to append to generated files.
force : bool
Overwrite any existing files.
write_single_copy_genes : bool
Write unique AR122/BAC120 marker files to disk.
Raises
------
GTDBTkException
If an exception is encountered during the identify step.
"""
check_dependencies(['prodigal', 'hmmsearch'])
self.logger.info(f'Identifying markers in {len(genomes):,} genomes with '
f'{self.cpus} threads.')
self.marker_gene_dir = os.path.join(out_dir, DIR_MARKER_GENE)
self.failed_genomes = os.path.join(out_dir, PATH_FAILS.format(prefix=prefix))
prodigal = Prodigal(self.cpus,
self.failed_genomes,
self.marker_gene_dir,
self.protein_file_suffix,
self.nt_gene_file_suffix,
self.gff_file_suffix,
force)
self.logger.log(Config.LOG_TASK, f'Running Prodigal {prodigal.version} to identify genes.')
genome_dictionary = prodigal.run(genomes, tln_tables)
# annotated genes against TIGRFAM and Pfam databases
self.logger.log(Config.LOG_TASK, 'Identifying TIGRFAM protein families.')
gene_files = [genome_dictionary[db_genome_id]['aa_gene_path']
for db_genome_id in genome_dictionary.keys()]
tigr_search = TigrfamSearch(self.cpus,
self.tigrfam_hmms,
self.protein_file_suffix,
self.tigrfam_suffix,
self.tigrfam_top_hit_suffix,
self.checksum_suffix,
self.marker_gene_dir)
tigr_search.run(gene_files)
self.logger.log(Config.LOG_TASK, 'Identifying Pfam protein families.')
pfam_search = PfamSearch(self.cpus,
self.pfam_hmm_dir,
self.protein_file_suffix,
self.pfam_suffix,
self.pfam_top_hit_suffix,
self.checksum_suffix,
self.marker_gene_dir)
pfam_search.run(gene_files)
self.logger.info(f'Annotations done using HMMER {tigr_search.version}.')
self.logger.log(Config.LOG_TASK, 'Summarising identified marker genes.')
self._report_identified_marker_genes(genome_dictionary, out_dir, prefix,
write_single_copy_genes) | [
"def",
"identify",
"(",
"self",
",",
"genomes",
",",
"tln_tables",
",",
"out_dir",
",",
"prefix",
",",
"force",
",",
"write_single_copy_genes",
")",
":",
"check_dependencies",
"(",
"[",
"'prodigal'",
",",
"'hmmsearch'",
"]",
")",
"self",
".",
"logger",
".",
... | https://github.com/Ecogenomics/GTDBTk/blob/1e10c56530b4a15eadce519619a62584a490632d/gtdbtk/markers.py#L153-L220 | ||
ricequant/rqalpha | d8b345ca3fde299e061c6a89c1f2c362c3584c96 | rqalpha/portfolio/position.py | python | PositionProxy.position_pnl | (self) | return self._long.position_pnl + self._short.position_pnl | [float] 昨仓盈亏,当前交易日盈亏中来源于昨仓的部分
多方向昨仓盈亏 = 昨日收盘时的持仓 * 合约乘数 * (最新价 - 昨收价)
空方向昨仓盈亏 = 昨日收盘时的持仓 * 合约乘数 * (昨收价 - 最新价) | [float] 昨仓盈亏,当前交易日盈亏中来源于昨仓的部分 | [
"[",
"float",
"]",
"昨仓盈亏,当前交易日盈亏中来源于昨仓的部分"
] | def position_pnl(self):
"""
[float] 昨仓盈亏,当前交易日盈亏中来源于昨仓的部分
多方向昨仓盈亏 = 昨日收盘时的持仓 * 合约乘数 * (最新价 - 昨收价)
空方向昨仓盈亏 = 昨日收盘时的持仓 * 合约乘数 * (昨收价 - 最新价)
"""
return self._long.position_pnl + self._short.position_pnl | [
"def",
"position_pnl",
"(",
"self",
")",
":",
"return",
"self",
".",
"_long",
".",
"position_pnl",
"+",
"self",
".",
"_short",
".",
"position_pnl"
] | https://github.com/ricequant/rqalpha/blob/d8b345ca3fde299e061c6a89c1f2c362c3584c96/rqalpha/portfolio/position.py#L294-L302 | |
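The docstring above states the yesterday-position PnL formulas directly (long: qty × multiplier × (last − prev_close); short: the negation). A one-function sketch with invented example numbers; rqalpha's real property sums separate long and short position objects:

```python
def position_pnl(quantity, multiplier, prev_close, last, direction="long"):
    """Yesterday-position PnL:
    long  = quantity * multiplier * (last - prev_close)
    short = quantity * multiplier * (prev_close - last)
    """
    sign = 1 if direction == "long" else -1
    return sign * quantity * multiplier * (last - prev_close)
```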
shiweibsw/Translation-Tools | 2fbbf902364e557fa7017f9a74a8797b7440c077 | venv/Lib/site-packages/pip-9.0.3-py3.6.egg/pip/utils/hashes.py | python | Hashes.check_against_file | (self, file) | return self.check_against_chunks(read_chunks(file)) | Check good hashes against a file-like object
Raise HashMismatch if none match. | Check good hashes against a file-like object | [
"Check",
"good",
"hashes",
"against",
"a",
"file",
"-",
"like",
"object"
] | def check_against_file(self, file):
"""Check good hashes against a file-like object
Raise HashMismatch if none match.
"""
return self.check_against_chunks(read_chunks(file)) | [
"def",
"check_against_file",
"(",
"self",
",",
"file",
")",
":",
"return",
"self",
".",
"check_against_chunks",
"(",
"read_chunks",
"(",
"file",
")",
")"
] | https://github.com/shiweibsw/Translation-Tools/blob/2fbbf902364e557fa7017f9a74a8797b7440c077/venv/Lib/site-packages/pip-9.0.3-py3.6.egg/pip/utils/hashes.py#L58-L64 | |
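`check_against_file` above delegates to a chunked digest comparison. A stdlib-only sketch of the underlying idea with `hashlib`; the helper name is mine, not pip's internals, and it returns a bool instead of raising pip's `HashMismatch`:

```python
import hashlib
import io

def file_hash_ok(fileobj, allowed_sha256, chunk_size=8192):
    """Hash `fileobj` in fixed-size chunks and check the hex digest
    against an allowed set, without loading the file into memory."""
    hasher = hashlib.sha256()
    for chunk in iter(lambda: fileobj.read(chunk_size), b""):
        hasher.update(chunk)
    return hasher.hexdigest() in allowed_sha256

good = hashlib.sha256(b"payload").hexdigest()
```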
divio/django-mailchimp | a55c6fadcc295bdeb4514eab30f61b74caf63eaf | mailchimp/chimpy/chimpy.py | python | Connection.templates | (self, template_type='user', start=0, limit=50) | return self.make_request('GET', 'templates', queries=queries) | Retrieve various templates available in the system, allowing something
similar to our template gallery to be created. | Retrieve various templates available in the system, allowing something
similar to our template gallery to be created. | [
"Retrieve",
"various",
"templates",
"available",
"in",
"the",
"system",
"allowing",
"something",
"similar",
"to",
"our",
"template",
"gallery",
"to",
"be",
"created",
"."
] | def templates(self, template_type='user', start=0, limit=50):
"""
Retrieve various templates available in the system, allowing something
similar to our template gallery to be created.
"""
queries = {
'count': limit,
'offset': start,
"type" : template_type,
}
return self.make_request('GET', 'templates', queries=queries) | [
"def",
"templates",
"(",
"self",
",",
"template_type",
"=",
"'user'",
",",
"start",
"=",
"0",
",",
"limit",
"=",
"50",
")",
":",
"queries",
"=",
"{",
"'count'",
":",
"limit",
",",
"'offset'",
":",
"start",
",",
"\"type\"",
":",
"template_type",
",",
... | https://github.com/divio/django-mailchimp/blob/a55c6fadcc295bdeb4514eab30f61b74caf63eaf/mailchimp/chimpy/chimpy.py#L363-L373 | |
SUSE/DeepSea | 9c7fad93915ba1250c40d50c855011e9fe41ed21 | srv/modules/runners/net.py | python | _address | (addresses, network) | return matched | Return all addresses in the given network
Note: list comprehension vs. netaddr vs. simple | Return all addresses in the given network | [
"Return",
"all",
"addresses",
"in",
"the",
"given",
"network"
] | def _address(addresses, network):
"""
Return all addresses in the given network
Note: list comprehension vs. netaddr vs. simple
"""
matched = []
for address in addresses:
log.debug("_address: ip {} in network {} ".format(address, network))
if IPAddress(address) in IPNetwork(network):
matched.append(address)
return matched | [
"def",
"_address",
"(",
"addresses",
",",
"network",
")",
":",
"matched",
"=",
"[",
"]",
"for",
"address",
"in",
"addresses",
":",
"log",
".",
"debug",
"(",
"\"_address: ip {} in network {} \"",
".",
"format",
"(",
"address",
",",
"network",
")",
")",
"if"... | https://github.com/SUSE/DeepSea/blob/9c7fad93915ba1250c40d50c855011e9fe41ed21/srv/modules/runners/net.py#L431-L442 | |
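The `_address` helper above uses the third-party `netaddr` types (`IPAddress in IPNetwork`). The stdlib `ipaddress` module supports the same membership test, sketched here as an alternative:

```python
import ipaddress

def addresses_in_network(addresses, network):
    """Return the subset of `addresses` that fall inside `network`."""
    net = ipaddress.ip_network(network)
    return [a for a in addresses if ipaddress.ip_address(a) in net]
```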
PaddlePaddle/PaddleDetection | 635e3e0a80f3d05751cdcfca8af04ee17c601a92 | deploy/python/mot_sde_infer.py | python | SDE_DetectorPicoDet.predict | (self, image, scaled, threshold=0.5, repeats=1, add_timer=True) | return pred_dets, pred_xyxys | Args:
image (np.ndarray): image numpy data
scaled (bool): whether the coords after detector outputs are scaled,
default False in jde yolov3, set True in general detector.
threshold (float): threshold of predicted box' score
repeats (int): repeat number for prediction
add_timer (bool): whether add timer during prediction
Returns:
pred_dets (np.ndarray, [N, 6]) | Args:
image (np.ndarray): image numpy data
scaled (bool): whether the coords after detector outputs are scaled,
default False in jde yolov3, set True in general detector.
threshold (float): threshold of predicted box' score
repeats (int): repeat number for prediction
add_timer (bool): whether add timer during prediction
Returns:
pred_dets (np.ndarray, [N, 6]) | [
"Args",
":",
"image",
"(",
"np",
".",
"ndarray",
")",
":",
"image",
"numpy",
"data",
"scaled",
"(",
"bool",
")",
":",
"whether",
"the",
"coords",
"after",
"detector",
"outputs",
"are",
"scaled",
"default",
"False",
"in",
"jde",
"yolov3",
"set",
"True",
... | def predict(self, image, scaled, threshold=0.5, repeats=1, add_timer=True):
'''
Args:
image (np.ndarray): image numpy data
scaled (bool): whether the coords after detector outputs are scaled,
default False in jde yolov3, set True in general detector.
threshold (float): threshold of predicted box' score
repeats (int): repeat number for prediction
add_timer (bool): whether add timer during prediction
Returns:
pred_dets (np.ndarray, [N, 6])
'''
# preprocess
if add_timer:
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(image)
input_names = self.predictor.get_input_names()
for i in range(len(input_names)):
input_tensor = self.predictor.get_input_handle(input_names[i])
input_tensor.copy_from_cpu(inputs[input_names[i]])
if add_timer:
self.det_times.preprocess_time_s.end()
self.det_times.inference_time_s.start()
# model prediction
np_score_list, np_boxes_list = [], []
for i in range(repeats):
self.predictor.run()
np_score_list.clear()
np_boxes_list.clear()
output_names = self.predictor.get_output_names()
num_outs = int(len(output_names) / 2)
for out_idx in range(num_outs):
np_score_list.append(
self.predictor.get_output_handle(output_names[out_idx])
.copy_to_cpu())
np_boxes_list.append(
self.predictor.get_output_handle(output_names[
out_idx + num_outs]).copy_to_cpu())
if add_timer:
self.det_times.inference_time_s.end(repeats=repeats)
self.det_times.img_num += 1
self.det_times.postprocess_time_s.start()
# postprocess
self.postprocess = PicoDetPostProcess(
inputs['image'].shape[2:],
inputs['im_shape'],
inputs['scale_factor'],
strides=self.pred_config.fpn_stride,
nms_threshold=self.pred_config.nms['nms_threshold'])
boxes, boxes_num = self.postprocess(np_score_list, np_boxes_list)
if len(boxes) == 0:
pred_dets = np.zeros((1, 6), dtype=np.float32)
pred_xyxys = np.zeros((1, 4), dtype=np.float32)
else:
input_shape = inputs['image'].shape[2:]
im_shape = inputs['im_shape']
scale_factor = inputs['scale_factor']
pred_dets, pred_xyxys = self.postprocess_bboxes(
boxes, input_shape, im_shape, scale_factor, threshold)
if add_timer:
self.det_times.postprocess_time_s.end()
return pred_dets, pred_xyxys | [
"def",
"predict",
"(",
"self",
",",
"image",
",",
"scaled",
",",
"threshold",
"=",
"0.5",
",",
"repeats",
"=",
"1",
",",
"add_timer",
"=",
"True",
")",
":",
"# preprocess",
"if",
"add_timer",
":",
"self",
".",
"det_times",
".",
"preprocess_time_s",
".",
... | https://github.com/PaddlePaddle/PaddleDetection/blob/635e3e0a80f3d05751cdcfca8af04ee17c601a92/deploy/python/mot_sde_infer.py#L307-L375 | |
holzschu/Carnets | 44effb10ddfc6aa5c8b0687582a724ba82c6b547 | Library/lib/python3.7/site-packages/sympy/vector/coordsysrect.py | python | CoordSys3D._get_lame_coeff | (curv_coord_name) | return CoordSys3D._calculate_lame_coefficients(curv_coord_name) | Store information about Lame coefficients for pre-defined
coordinate systems.
Parameters
==========
curv_coord_name : str
Name of coordinate system | Store information about Lame coefficients for pre-defined
coordinate systems. | [
"Store",
"information",
"about",
"Lame",
"coefficients",
"for",
"pre",
"-",
"defined",
"coordinate",
"systems",
"."
] | def _get_lame_coeff(curv_coord_name):
"""
Store information about Lame coefficients for pre-defined
coordinate systems.
Parameters
==========
curv_coord_name : str
Name of coordinate system
"""
if isinstance(curv_coord_name, string_types):
if curv_coord_name == 'cartesian':
return lambda x, y, z: (S.One, S.One, S.One)
if curv_coord_name == 'spherical':
return lambda r, theta, phi: (S.One, r, r*sin(theta))
if curv_coord_name == 'cylindrical':
return lambda r, theta, h: (S.One, r, S.One)
raise ValueError('Wrong set of parameters.'
' Type of coordinate system is not defined')
return CoordSys3D._calculate_lame_coefficients(curv_coord_name) | [
"def",
"_get_lame_coeff",
"(",
"curv_coord_name",
")",
":",
"if",
"isinstance",
"(",
"curv_coord_name",
",",
"string_types",
")",
":",
"if",
"curv_coord_name",
"==",
"'cartesian'",
":",
"return",
"lambda",
"x",
",",
"y",
",",
"z",
":",
"(",
"S",
".",
"One"... | https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/sympy/vector/coordsysrect.py#L353-L374 | |
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/Python-2.7.9/Lib/pickle.py | python | Unpickler.load_binput | (self) | [] | def load_binput(self):
i = ord(self.read(1))
self.memo[repr(i)] = self.stack[-1] | [
"def",
"load_binput",
"(",
"self",
")",
":",
"i",
"=",
"ord",
"(",
"self",
".",
"read",
"(",
"1",
")",
")",
"self",
".",
"memo",
"[",
"repr",
"(",
"i",
")",
"]",
"=",
"self",
".",
"stack",
"[",
"-",
"1",
"]"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Lib/pickle.py#L1168-L1170 | ||||
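`load_binput` in the row above stores the top of the unpickler's stack into the memo. Through the public API alone, the observable effect of the memo opcodes is that shared references survive a dump/load round-trip:

```python
import pickle

shared = [1, 2, 3]
obj = [shared, shared]          # one list referenced twice

# BINPUT/BINGET memo opcodes preserve the sharing across the round-trip.
restored = pickle.loads(pickle.dumps(obj, protocol=2))
```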
dluvizon/deephar | fbebb148a3b7153f911b86e1a7300aa32c336f31 | deephar/utils/plot.py | python | data_to_image | (x, gray_scale=False) | return Image.fromarray(buf.astype(np.uint8), 'RGB') | Convert 'x' to a RGB Image object.
# Arguments
x: image in the format (num_cols, num_rows, 3) for RGB images or
(num_cols, num_rows) for gray scale images. If None, return a
light gray image with size 100x100.
gray_scale: convert the RGB color space to a RGB gray scale space. | Convert 'x' to a RGB Image object. | [
"Convert",
"x",
"to",
"a",
"RGB",
"Image",
"object",
"."
] | def data_to_image(x, gray_scale=False):
""" Convert 'x' to a RGB Image object.
# Arguments
x: image in the format (num_cols, num_rows, 3) for RGB images or
(num_cols, num_rows) for gray scale images. If None, return a
light gray image with size 100x100.
gray_scale: convert the RGB color space to a RGB gray scale space.
"""
if x is None:
x = 224 * np.ones((100, 100, 3), dtype=np.uint8)
if x.max() - x.min() > 0.:
buf = 255. * (x - x.min()) / (x.max() - x.min())
else:
buf = x.copy()
if len(buf.shape) == 3:
(w, h) = buf.shape[0:2]
num_ch = buf.shape[2]
else:
(h, w) = buf.shape
num_ch = 1
if ((num_ch is 3) and gray_scale):
g = 0.2989*buf[:,:,0] + 0.5870*buf[:,:,1] + 0.1140*buf[:,:,2]
buf[:,:,0] = g
buf[:,:,1] = g
buf[:,:,2] = g
elif num_ch is 1:
aux = np.zeros((h, w, 3), dtype=buf.dtype)
aux[:,:,0] = buf
aux[:,:,1] = buf
aux[:,:,2] = buf
buf = aux
return Image.fromarray(buf.astype(np.uint8), 'RGB') | [
"def",
"data_to_image",
"(",
"x",
",",
"gray_scale",
"=",
"False",
")",
":",
"if",
"x",
"is",
"None",
":",
"x",
"=",
"224",
"*",
"np",
".",
"ones",
"(",
"(",
"100",
",",
"100",
",",
"3",
")",
",",
"dtype",
"=",
"np",
".",
"uint8",
")",
"if",
... | https://github.com/dluvizon/deephar/blob/fbebb148a3b7153f911b86e1a7300aa32c336f31/deephar/utils/plot.py#L21-L58 | |
angr/angr | 4b04d56ace135018083d36d9083805be8146688b | angr/storage/file.py | python | SimFileStream.set_state | (self, state) | [] | def set_state(self, state):
super().set_state(state)
if type(self.pos) is int:
self.pos = state.solver.BVV(self.pos, state.arch.bits)
elif len(self.pos) != state.arch.bits:
raise TypeError("SimFileStream position must be a bitvector of size %d (arch.bits)" % state.arch.bits) | [
"def",
"set_state",
"(",
"self",
",",
"state",
")",
":",
"super",
"(",
")",
".",
"set_state",
"(",
"state",
")",
"if",
"type",
"(",
"self",
".",
"pos",
")",
"is",
"int",
":",
"self",
".",
"pos",
"=",
"state",
".",
"solver",
".",
"BVV",
"(",
"se... | https://github.com/angr/angr/blob/4b04d56ace135018083d36d9083805be8146688b/angr/storage/file.py#L350-L355 | ||||
lesscpy/lesscpy | 1172a1693df2f4bc929a88b1bebb920e666c0c9f | lesscpy/plib/call.py | python | Call.isurl | (self, string, *args) | return regex.match(arg) | Is url
args:
string (str): match
returns:
bool | Is url
args:
string (str): match
returns:
bool | [
"Is",
"url",
"args",
":",
"string",
"(",
"str",
")",
":",
"match",
"returns",
":",
"bool"
] | def isurl(self, string, *args):
"""Is url
args:
string (str): match
returns:
bool
"""
arg = utility.destring(string)
regex = re.compile(
r'^(?:http|ftp)s?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+'
r'(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # domain...
# localhost...
r'localhost|'
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
# optional port
r'(?::\d+)?'
r'(?:/?|[/?]\S+)$',
re.IGNORECASE)
return regex.match(arg) | [
"def",
"isurl",
"(",
"self",
",",
"string",
",",
"*",
"args",
")",
":",
"arg",
"=",
"utility",
".",
"destring",
"(",
"string",
")",
"regex",
"=",
"re",
".",
"compile",
"(",
"r'^(?:http|ftp)s?://'",
"# http:// or https://",
"r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z... | https://github.com/lesscpy/lesscpy/blob/1172a1693df2f4bc929a88b1bebb920e666c0c9f/lesscpy/plib/call.py#L124-L143 | |
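The `isurl` row carries its URL regex in full, so it can be extracted into a standalone checker and exercised directly; this sketch drops the `utility.destring` quote-stripping step, which needs the rest of lesscpy:

```python
import re

URL_RE = re.compile(
    r'^(?:http|ftp)s?://'                      # http:// or https://
    r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+'
    r'(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'     # domain...
    r'localhost|'                              # localhost...
    r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'     # ...or ip
    r'(?::\d+)?'                               # optional port
    r'(?:/?|[/?]\S+)$',
    re.IGNORECASE)

def isurl(string):
    return URL_RE.match(string) is not None
```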
WebwareForPython/DBUtils | 798d5ad3bdfccdebc22fd3d7b16c8b9e86bdcaf7 | dbutils/pooled_db.py | python | SharedDBConnection.__init__ | (self, con)

def __init__(self, con):
    """Create a shared connection.

    con: the underlying SteadyDB connection
    """
    self.con = con
    self.shared = 1

https://github.com/WebwareForPython/DBUtils/blob/798d5ad3bdfccdebc22fd3d7b16c8b9e86bdcaf7/dbutils/pooled_db.py#L448-L454
MarioVilas/winappdbg | 975a088ac54253d0bdef39fe831e82f24b4c11f6 | winappdbg/win32/advapi32.py | python | ServiceStatusEntry.__init__ | (self, raw)

def __init__(self, raw):
    """
    @type  raw: L{ENUM_SERVICE_STATUSA} or L{ENUM_SERVICE_STATUSW}
    @param raw: Raw structure for this service status entry.
    """
    self.ServiceName = raw.lpServiceName
    self.DisplayName = raw.lpDisplayName
    self.ServiceType = raw.ServiceStatus.dwServiceType
    self.CurrentState = raw.ServiceStatus.dwCurrentState
    self.ControlsAccepted = raw.ServiceStatus.dwControlsAccepted
    self.Win32ExitCode = raw.ServiceStatus.dwWin32ExitCode
    self.ServiceSpecificExitCode = raw.ServiceStatus.dwServiceSpecificExitCode
    self.CheckPoint = raw.ServiceStatus.dwCheckPoint
    self.WaitHint = raw.ServiceStatus.dwWaitHint

https://github.com/MarioVilas/winappdbg/blob/975a088ac54253d0bdef39fe831e82f24b4c11f6/winappdbg/win32/advapi32.py#L972-L985
replit-archive/empythoned | 977ec10ced29a3541a4973dc2b59910805695752 | cpython/Lib/cookielib.py | python | CookieJar.extract_cookies | (self, response, request)

def extract_cookies(self, response, request):
    """Extract cookies from response, where allowable given the request."""
    _debug("extract_cookies: %s", response.info())
    self._cookies_lock.acquire()
    try:
        self._policy._now = self._now = int(time.time())
        for cookie in self.make_cookies(response, request):
            if self._policy.set_ok(cookie, request):
                _debug(" setting cookie: %s", cookie)
                self.set_cookie(cookie)
    finally:
        self._cookies_lock.release()

https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/cpython/Lib/cookielib.py#L1635-L1647
toxinu/pyhn | c36090e33ae730daa16c276e5f54c49baef2d8fe | pyhn/hnapi.py | python | HackerNewsAPI.get_best_stories | (self, extra_page=1) | return stories

def get_best_stories(self, extra_page=1):
    """
    Gets the "best" stories from Hacker News.
    """
    stories = []
    for i in range(1, extra_page + 2):
        source_latest = self.get_source(
            "https://news.ycombinator.com/best?p=%s" % i)
        stories += self.get_stories(source_latest)
    return stories

https://github.com/toxinu/pyhn/blob/c36090e33ae730daa16c276e5f54c49baef2d8fe/pyhn/hnapi.py#L389-L398
benoitc/couchdbkit | 6be148640c00b54ee87a2f2d502e9d67fa5b45a8 | couchdbkit/client.py | python | Server.delete_db | (self, dbname)

def delete_db(self, dbname):
    """
    Delete database
    """
    del self[dbname]

https://github.com/benoitc/couchdbkit/blob/6be148640c00b54ee87a2f2d502e9d67fa5b45a8/couchdbkit/client.py#L155-L159
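`delete_db` above simply delegates to `del self[dbname]`, i.e. the server object implements `__delitem__`. A toy sketch of that pattern (the `ToyServer` class is hypothetical, not couchdbkit's real `Server`, which would issue an HTTP DELETE here):

```python
class ToyServer:
    """Hypothetical stand-in showing the `del server[name]` pattern."""
    def __init__(self):
        self._dbs = {"default": object()}

    def __delitem__(self, dbname):
        # couchdbkit's real Server would send DELETE /{dbname} to CouchDB.
        del self._dbs[dbname]

    def delete_db(self, dbname):
        # Same shape as the delete_db in the row above.
        del self[dbname]

s = ToyServer()
s.delete_db("default")
print("default" in s._dbs)  # False
```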
tdamdouni/Pythonista | 3e082d53b6b9b501a3c8cf3251a8ad4c8be9c2ad | weather/weatherdata.py | python | get_current_weather_in | (data)

def get_current_weather_in(data):
    '''
    Get current weather data.
    '''
    #api_url_base = 'http://api.openweathermap.org/data/2.5/weather?q={data}{unit}'
    api_url_base = 'http://api.openweathermap.org/data/2.5/weather?q={data}{unit}&APPID=7531e2794b160112a5202dcf3e454c8e'
    try:
        response = urlopen(api_url_base.format(data=data,
                                               unit=UNITS))
    except IOError:
        error_dialog('Connection Error', 'Unable to perform request.')
    if response.getcode() == 200:
        weather_data = filter_data(response.read())
        webbrowser.open('drafts4://x-callback-url/create?text={0}'.format(quote(weather_data)))
    else:
        error_dialog('Error', 'Status code: {0} - Message: {1}'.format(response.getcode(), response.read()))

https://github.com/tdamdouni/Pythonista/blob/3e082d53b6b9b501a3c8cf3251a8ad4c8be9c2ad/weather/weatherdata.py#L42-L57
morganstanley/treadmill | f18267c665baf6def4374d21170198f63ff1cde4 | lib/python/treadmill/scheduler/backend.py | python | Backend.__init__ | (self)

def __init__(self):
    """Backend constructor.
    """

https://github.com/morganstanley/treadmill/blob/f18267c665baf6def4374d21170198f63ff1cde4/lib/python/treadmill/scheduler/backend.py#L23-L25
ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework | cb692f527e4e819b6c228187c5702d990a180043 | external/Scripting Engine/Xenotix Python Scripting Engine/Lib/wsgiref/util.py | python | guess_scheme | (environ)

def guess_scheme(environ):
    """Return a guess for whether 'wsgi.url_scheme' should be 'http' or 'https'
    """
    if environ.get("HTTPS") in ('yes', 'on', '1'):
        return 'https'
    else:
        return 'http'

https://github.com/ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework/blob/cb692f527e4e819b6c228187c5702d990a180043/external/Scripting Engine/Xenotix Python Scripting Engine/Lib/wsgiref/util.py#L35-L41
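`guess_scheme` above is a pure function of the WSGI environ dict, so it is easy to demonstrate in isolation (the logic is copied from the row; the sample environ dicts are my own):

```python
def guess_scheme(environ):
    # Same logic as wsgiref.util.guess_scheme in the row above:
    # any of the common truthy HTTPS values means the request came over TLS.
    if environ.get("HTTPS") in ('yes', 'on', '1'):
        return 'https'
    else:
        return 'http'

print(guess_scheme({"HTTPS": "on"}))    # https
print(guess_scheme({"HTTPS": "off"}))   # http  ("off" is not in the truthy set)
print(guess_scheme({}))                 # http  (header absent)
```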
10XGenomics/cellranger | a83c753ce641db6409a59ad817328354fbe7187e | lib/python/cellranger/matrix.py | python | load_matrix_h5_custom_attrs | (filename)

def load_matrix_h5_custom_attrs(filename):
    '''Get matrix metadata attributes from an HDF5 file'''
    h5_version = CountMatrix.get_format_version_from_h5(filename)
    if h5_version == 1:
        # no support for custom attrs in older versions
        return {}
    attrs = {}
    with h5.File(filename, 'r') as f:
        for key, val in f.attrs.items():
            if key not in MATRIX_H5_BUILTIN_ATTRS:
                attrs[key] = val
    return attrs

https://github.com/10XGenomics/cellranger/blob/a83c753ce641db6409a59ad817328354fbe7187e/lib/python/cellranger/matrix.py#L957-L969
shuyo/iir | a9b133f27e8ab5b8ef6f528c1f212717399d852f | sequence/hmm.py | python | HMM.inference | (self) | return log_likelihood

def inference(self):
    """
    @brief one step of EM algorithm
    @return log likelihood
    """
    pi_new = numpy.zeros(self.K)
    A_new = numpy.zeros((self.K, self.K))
    B_new = numpy.zeros((self.V, self.K))
    log_likelihood = 0
    for x in self.x_ji:
        gamma, xi_sum, likelihood = self.Estep(x)
        log_likelihood += likelihood
        # M-step
        pi_new += gamma[0]
        A_new += xi_sum
        for v, g_n in zip(x, gamma):
            B_new[v] += g_n
    self.pi = pi_new / pi_new.sum()
    self.A = A_new / (A_new.sum(1)[:, numpy.newaxis])
    self.B = B_new / B_new.sum(0)
    return log_likelihood

https://github.com/shuyo/iir/blob/a9b133f27e8ab5b8ef6f528c1f212717399d852f/sequence/hmm.py#L124-L147
jamiemcg/Remarkable | 7b0b3dacef270a00c28e8852a88d74f72a3544d7 | pdfkit/api.py | python | from_file | (input, output_path, options=None, toc=None, cover=None, css=None, configuration=None) | return r.to_pdf(output_path)

def from_file(input, output_path, options=None, toc=None, cover=None, css=None,
              configuration=None):
    """
    Convert HTML file or files to PDF document

    :param input: path to HTML file or list with paths or file-like object
    :param output_path: path to output PDF file. False means file will be returned as string.
    :param options: (optional) dict with wkhtmltopdf options, with or w/o '--'
    :param toc: (optional) dict with toc-specific wkhtmltopdf options, with or w/o '--'
    :param cover: (optional) string with url/filename with a cover html page
    :param css: (optional) string with path to css file which will be added to a single input file
    :param configuration: (optional) instance of pdfkit.configuration.Configuration()

    Returns: True on success
    """
    r = PDFKit(input, 'file', options=options, toc=toc, cover=cover, css=css,
               configuration=configuration)
    return r.to_pdf(output_path)

https://github.com/jamiemcg/Remarkable/blob/7b0b3dacef270a00c28e8852a88d74f72a3544d7/pdfkit/api.py#L27-L46
jansel/opentuner | 070c5cef6d933eb760a2f9cd5cd08c95f27aee75 | opentuner/tuningrunmain.py | python | TuningRunMain.results_wait | (self, generation)

def results_wait(self, generation):
    """called by search_driver to wait for results"""
    # single process version:
    self.measurement_interface.pre_process()
    self.measurement_driver.process_all()
    self.measurement_interface.post_process()

https://github.com/jansel/opentuner/blob/070c5cef6d933eb760a2f9cd5cd08c95f27aee75/opentuner/tuningrunmain.py#L215-L220
SheffieldML/GPy | bb1bc5088671f9316bc92a46d356734e34c2d5c0 | GPy/kern/src/kernel_slice_operations.py | python | _slice_psi | (f) | return wrap

def _slice_psi(f):
    @wraps(f)
    def wrap(self, Z, variational_posterior):
        with _Slice_wrap(self, Z, variational_posterior) as s:
            ret = f(self, s.X, s.X2)
        return ret
    return wrap

https://github.com/SheffieldML/GPy/blob/bb1bc5088671f9316bc92a46d356734e34c2d5c0/GPy/kern/src/kernel_slice_operations.py#L271-L277
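The `_slice_psi` decorator above follows a common pattern: pre-process arguments through a context manager, call the wrapped function, and preserve its metadata with `functools.wraps`. A generic sketch under my own names (`slice_args`, `_slice_wrap`, and the doubling "slice" are all hypothetical, not GPy's real slicing logic):

```python
from functools import wraps
from contextlib import contextmanager

@contextmanager
def _slice_wrap(x):
    # Hypothetical stand-in for GPy's _Slice_wrap: pre-process the input
    # and yield a handle exposing the processed value.
    class Handle:
        pass
    s = Handle()
    s.X = [v * 2 for v in x]  # pretend "slicing" doubles each value
    yield s

def slice_args(f):
    """Same shape as _slice_psi above: f receives pre-processed arguments."""
    @wraps(f)
    def wrap(x):
        with _slice_wrap(x) as s:
            ret = f(s.X)
        return ret
    return wrap

@slice_args
def total(values):
    "Sum the (pre-sliced) values."
    return sum(values)

print(total([1, 2, 3]))  # 12, i.e. sum([2, 4, 6])
print(total.__name__)    # 'total' — preserved by @wraps
```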
scrapy/scrapy | b04cfa48328d5d5749dca6f50fa34e0cfc664c89 | scrapy/extensions/httpcache.py | python | RFC2616Policy._compute_current_age | (self, response, request, now) | return currentage

def _compute_current_age(self, response, request, now):
    # Reference nsHttpResponseHead::ComputeCurrentAge
    # https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpResponseHead.cpp#658
    currentage = 0
    # If Date header is not set we assume it is a fast connection, and
    # clock is in sync with the server
    date = rfc1123_to_epoch(response.headers.get(b'Date')) or now
    if now > date:
        currentage = now - date
    if b'Age' in response.headers:
        try:
            age = int(response.headers[b'Age'])
            currentage = max(currentage, age)
        except ValueError:
            pass
    return currentage

https://github.com/scrapy/scrapy/blob/b04cfa48328d5d5749dca6f50fa34e0cfc664c89/scrapy/extensions/httpcache.py#L197-L214
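The age computation above depends only on the `Date`/`Age` headers and the current time, so it can be restated as a pure function. A hedged sketch (`compute_current_age` and its plain-dict header argument are my own simplification of Scrapy's method, which works on a Response object):

```python
def compute_current_age(headers, date_epoch, now):
    """Simplified restatement of _compute_current_age above.

    headers: dict mapping header name -> value
    date_epoch: parsed Date header as epoch seconds, or None if absent
    now: current epoch time
    """
    currentage = 0
    # No Date header: assume a fast connection with clocks in sync.
    date = date_epoch or now
    if now > date:
        currentage = now - date
    if 'Age' in headers:
        try:
            # Age header can only increase the apparent age.
            currentage = max(currentage, int(headers['Age']))
        except ValueError:
            pass
    return currentage

print(compute_current_age({'Age': '120'}, 1000, 1060))  # 120 (Age beats the 60s clock delta)
print(compute_current_age({}, None, 1060))              # 0 (no Date, no Age)
```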
misterch0c/shadowbroker | e3a069bea47a2c1009697941ac214adc6f90aa8d | windows/Resources/Python/Core/Lib/logging/config.py | python | BaseConfigurator.as_tuple | (self, value) | return value

def as_tuple(self, value):
    """Utility function which converts lists to tuples."""
    if isinstance(value, list):
        value = tuple(value)
    return value

https://github.com/misterch0c/shadowbroker/blob/e3a069bea47a2c1009697941ac214adc6f90aa8d/windows/Resources/Python/Core/Lib/logging/config.py#L442-L446
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/scf/v20180416/models.py | python | GetProvisionedConcurrencyConfigRequest.__init__ | (self)

def __init__(self):
    r"""
    :param FunctionName: Name of the function whose provisioned concurrency details are requested.
    :type FunctionName: str
    :param Namespace: Namespace the function belongs to; defaults to "default".
    :type Namespace: str
    :param Qualifier: Function version number; if omitted, provisioned concurrency info for all versions of the function is returned.
    :type Qualifier: str
    """
    self.FunctionName = None
    self.Namespace = None
    self.Qualifier = None

https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/scf/v20180416/models.py#L2276-L2287
EventGhost/EventGhost | 177be516849e74970d2e13cda82244be09f277ce | lib27/site-packages/tornado/tcpserver.py | python | TCPServer.start | (self, num_processes=1)

def start(self, num_processes=1):
    """Starts this server in the `.IOLoop`.

    By default, we run the server in this process and do not fork any
    additional child process.

    If num_processes is ``None`` or <= 0, we detect the number of cores
    available on this machine and fork that number of child
    processes. If num_processes is given and > 1, we fork that
    specific number of sub-processes.

    Since we use processes and not threads, there is no shared memory
    between any server code.

    Note that multiple processes are not compatible with the autoreload
    module (or the ``autoreload=True`` option to `tornado.web.Application`
    which defaults to True when ``debug=True``).
    When using multiple processes, no IOLoops can be created or
    referenced until after the call to ``TCPServer.start(n)``.
    """
    assert not self._started
    self._started = True
    if num_processes != 1:
        process.fork_processes(num_processes)
    sockets = self._pending_sockets
    self._pending_sockets = []
    self.add_sockets(sockets)

https://github.com/EventGhost/EventGhost/blob/177be516849e74970d2e13cda82244be09f277ce/lib27/site-packages/tornado/tcpserver.py#L177-L203
Gandi/gandi.cli | 5de0605126247e986f8288b467a52710a78e1794 | gandi/cli/commands/vlan.py | python | delete | (gandi, background, force, resource) | return opers

def delete(gandi, background, force, resource):
    """Delete a vlan.

    Resource can be a vlan name or an ID
    """
    output_keys = ['id', 'type', 'step']

    possible_resources = gandi.vlan.resource_list()
    for item in resource:
        if item not in possible_resources:
            gandi.echo('Sorry vlan %s does not exist' % item)
            gandi.echo('Please use one of the following: %s' %
                       possible_resources)
            return

    if not force:
        vlan_info = "'%s'" % ', '.join(resource)
        proceed = click.confirm('Are you sure to delete vlan %s?' %
                                vlan_info)
        if not proceed:
            return

    opers = gandi.vlan.delete(resource, background)
    if background:
        for oper in opers:
            output_generic(gandi, oper, output_keys)
    return opers

https://github.com/Gandi/gandi.cli/blob/5de0605126247e986f8288b467a52710a78e1794/gandi/cli/commands/vlan.py#L104-L132
abbat/ydcmd | 12f0e855fd4b7fdc7bf2ffe9c19beee439880763 | ydcmd.py | python | yd_put_retry | (options, source, target)

def yd_put_retry(options, source, target):
    """
    One attempt at putting a file into the storage

    Arguments:
        options (ydOptions) -- Application options
        source  (str)       -- Local file name
        target  (str)       -- File name in the storage
    """
    args = {
        "path":      target,
        "overwrite": "true"
    }
    method = "GET"
    url = options.baseurl + "/resources/upload"
    result = yd_query_retry(options, method, url, args)
    if "href" in result and "method" in result:
        url = result["href"]
        method = result["method"]
        headers = yd_headers(options.token)
        headers["Content-Type"] = "application/octet-stream"
        headers["Content-Length"] = os.path.getsize(source)
        yd_query_retry(options, method, url, None, headers, source)
    else:
        raise RuntimeError("Incomplete response")

https://github.com/abbat/ydcmd/blob/12f0e855fd4b7fdc7bf2ffe9c19beee439880763/ydcmd.py#L1349-L1378
sarnthil/unify-emotion-datasets | aabc2bbb9794e51097e1778028fc66389e42f0c9 | download_datasets.py | python | download | (_, target, droot, __)

def download(_, target, droot, __):
    url = target["url"]
    fname = target.get("target", url.split("/")[-1])
    r = requests.get(
        url,
        stream=True,
        headers={
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15"
        },
    )
    chars = "-\\|/"
    with open(f"{droot}/{fname}", "wb") as f:
        for i, chunk in enumerate(r.iter_content(chunk_size=1024)):
            arrow(f"Downloading... {chars[i%len(chars)]}", end="\r")
            if chunk:
                f.write(chunk)
    if fname.endswith(".zip") or fname.endswith(".tar.gz"):
        arrow(f"Unpacking {fname}...")
        shutil.unpack_archive(f"{droot}/{fname}", droot)

https://github.com/sarnthil/unify-emotion-datasets/blob/aabc2bbb9794e51097e1778028fc66389e42f0c9/download_datasets.py#L28-L48
SciTools/iris | a12d0b15bab3377b23a148e891270b13a0419c38 | lib/iris/analysis/__init__.py | python | Nearest.interpolator | (self, cube, coords) | return RectilinearInterpolator(cube, coords, "nearest", self.extrapolation_mode)

def interpolator(self, cube, coords):
    """
    Creates a nearest-neighbour interpolator to perform
    interpolation over the given :class:`~iris.cube.Cube` specified
    by the dimensions of the specified coordinates.

    Typically you should use :meth:`iris.cube.Cube.interpolate` for
    interpolating a cube. There are, however, some situations when
    constructing your own interpolator is preferable. These are detailed
    in the :ref:`user guide <caching_an_interpolator>`.

    Args:

    * cube:
        The source :class:`iris.cube.Cube` to be interpolated.
    * coords:
        The names or coordinate instances that are to be
        interpolated over.

    Returns:
        A callable with the interface:

            `callable(sample_points, collapse_scalar=True)`

        where `sample_points` is a sequence containing an array of values
        for each of the coordinates passed to this method, and
        `collapse_scalar` determines whether to remove length one
        dimensions in the result cube caused by scalar values in
        `sample_points`.

        The values for coordinates that correspond to date/times
        may optionally be supplied as datetime.datetime or
        cftime.datetime instances.

        For example, for the callable returned by:
        `Nearest().interpolator(cube, ['latitude', 'longitude'])`,
        sample_points must have the form
        `[new_lat_values, new_lon_values]`.
    """
    return RectilinearInterpolator(
        cube, coords, "nearest", self.extrapolation_mode
    )

https://github.com/SciTools/iris/blob/a12d0b15bab3377b23a148e891270b13a0419c38/lib/iris/analysis/__init__.py#L2592-L2634
hyperledger/aries-cloudagent-python | 2f36776e99f6053ae92eed8123b5b1b2e891c02a | aries_cloudagent/protocols/actionmenu/v1_0/driver_service.py | python | DriverMenuService.perform_menu_action | (self, profile: Profile, action_name: str, action_params: dict, connection: ConnRecord = None, thread_id: str = None) | return None

async def perform_menu_action(
    self,
    profile: Profile,
    action_name: str,
    action_params: dict,
    connection: ConnRecord = None,
    thread_id: str = None,
) -> AgentMessage:
    """
    Perform an action defined by the active menu.

    Args:
        profile: The profile
        action_name: The unique name of the action being performed
        action_params: A collection of parameters for the action
        connection: The active connection record
        thread_id: The thread identifier from the requesting message.
    """
    await profile.notify(
        "acapy::actionmenu::perform-menu-action",
        {
            "connection_id": connection and connection.connection_id,
            "thread_id": thread_id,
            "action_name": action_name,
            "action_params": action_params,
        },
    )
    return None

https://github.com/hyperledger/aries-cloudagent-python/blob/2f36776e99f6053ae92eed8123b5b1b2e891c02a/aries_cloudagent/protocols/actionmenu/v1_0/driver_service.py#L41-L68
rollbar/pyrollbar | 77cbffaa7447f04f653135e1d7f615ce41fcc4e9 | rollbar/__init__.py | python | _report_exc_info | (exc_info, request, extra_data, payload_data, level=None) | return data['uuid']

def _report_exc_info(exc_info, request, extra_data, payload_data, level=None):
    """
    Called by report_exc_info() wrapper
    """
    if not _check_config():
        return

    filtered_level = _filtered_level(exc_info[1])
    if level is None:
        level = filtered_level

    filtered_exc_info = events.on_exception_info(exc_info,
                                                 request=request,
                                                 extra_data=extra_data,
                                                 payload_data=payload_data,
                                                 level=level)
    if filtered_exc_info is False:
        return

    cls, exc, trace = filtered_exc_info

    data = _build_base_data(request)
    if level is not None:
        data['level'] = level

    # walk the trace chain to collect cause and context exceptions
    trace_chain = _walk_trace_chain(cls, exc, trace)

    extra_trace_data = None
    if len(trace_chain) > 1:
        data['body'] = {
            'trace_chain': trace_chain
        }
        if payload_data and ('body' in payload_data) and ('trace' in payload_data['body']):
            extra_trace_data = payload_data['body']['trace']
            del payload_data['body']['trace']
    else:
        data['body'] = {
            'trace': trace_chain[0]
        }

    if extra_data:
        extra_data = extra_data
        if not isinstance(extra_data, dict):
            extra_data = {'value': extra_data}
        if extra_trace_data:
            extra_data = dict_merge(extra_data, extra_trace_data, silence_errors=True)
        data['custom'] = extra_data
    if extra_trace_data and not extra_data:
        data['custom'] = extra_trace_data

    request = _get_actual_request(request)
    _add_request_data(data, request)
    _add_person_data(data, request)
    _add_lambda_context_data(data)
    data['server'] = _build_server_data()

    if payload_data:
        data = dict_merge(data, payload_data, silence_errors=True)

    payload = _build_payload(data)
    send_payload(payload, payload.get('access_token'))

    return data['uuid']

https://github.com/rollbar/pyrollbar/blob/77cbffaa7447f04f653135e1d7f615ce41fcc4e9/rollbar/__init__.py#L723-L788
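The "trace chain" that `_report_exc_info` builds follows Python's exception chaining (`__cause__` from `raise ... from`, then implicit `__context__`). A minimal sketch of that walk (`walk_trace_chain` is my own simplified function, not pyrollbar's `_walk_trace_chain`, which also formats frames):

```python
def walk_trace_chain(exc):
    """Follow __cause__ then __context__, collecting exception type names."""
    chain = []
    seen = set()  # guard against cycles in the chain
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        chain.append(type(exc).__name__)
        exc = exc.__cause__ or exc.__context__
    return chain

try:
    try:
        raise KeyError("inner")
    except KeyError as e:
        raise ValueError("outer") from e  # sets __cause__
except ValueError as err:
    print(walk_trace_chain(err))  # ['ValueError', 'KeyError']
```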
oracle/graalpython | 577e02da9755d916056184ec441c26e00b70145c | graalpython/lib-python/3/getopt.py | python | do_longs | (opts, opt, longopts, args) | return opts, args

def do_longs(opts, opt, longopts, args):
    try:
        i = opt.index('=')
    except ValueError:
        optarg = None
    else:
        opt, optarg = opt[:i], opt[i+1:]

    has_arg, opt = long_has_args(opt, longopts)
    if has_arg:
        if optarg is None:
            if not args:
                raise GetoptError(_('option --%s requires argument') % opt, opt)
            optarg, args = args[0], args[1:]
    elif optarg is not None:
        raise GetoptError(_('option --%s must not have an argument') % opt, opt)
    opts.append(('--' + opt, optarg or ''))
    return opts, args

https://github.com/oracle/graalpython/blob/577e02da9755d916056184ec441c26e00b70145c/graalpython/lib-python/3/getopt.py#L149-L166
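`do_longs` is the helper behind long-option parsing in the standard `getopt` module, so its behavior is easiest to see through `getopt.getopt` itself (the option names below are my own examples):

```python
import getopt

# '--output=' in longopts means --output requires an argument (handled by
# do_longs above); '--verbose' takes none.
opts, args = getopt.getopt(
    ['--output=out.txt', '--verbose', 'input.txt'],
    '',                       # no short options
    ['output=', 'verbose'])   # long options

print(opts)  # [('--output', 'out.txt'), ('--verbose', '')]
print(args)  # ['input.txt']
```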