| nwo (string, 5-106 chars) | sha (string, 40 chars) | path (string, 4-174 chars) | language (string, 1 class: "python") | identifier (string, 1-140 chars) | parameters (string, 0-87.7k chars) | argument_list (string, 1 class) | return_statement (string, 0-426k chars) | docstring (string, 0-64.3k chars) | docstring_summary (string, 0-26.3k chars) | docstring_tokens (list) | function (string, 18-4.83M chars) | function_tokens (list) | url (string, 83-304 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
franciscod/telegram-twitter-forwarder-bot | 2a8ce7c579d8de8df28df3deced190484ffe6ec4 | util.py | python | escape_markdown | (text) | return re.sub(r'([%s])' % escape_chars, r'\\\1', text) | Helper function to escape telegram markup symbols | Helper function to escape telegram markup symbols | [
"Helper",
"function",
"to",
"escape",
"telegram",
"markup",
"symbols"
] | def escape_markdown(text):
"""Helper function to escape telegram markup symbols"""
escape_chars = '\*_`\['
return re.sub(r'([%s])' % escape_chars, r'\\\1', text) | [
"def",
"escape_markdown",
"(",
"text",
")",
":",
"escape_chars",
"=",
"'\\*_`\\['",
"return",
"re",
".",
"sub",
"(",
"r'([%s])'",
"%",
"escape_chars",
",",
"r'\\\\\\1'",
",",
"text",
")"
] | https://github.com/franciscod/telegram-twitter-forwarder-bot/blob/2a8ce7c579d8de8df28df3deced190484ffe6ec4/util.py#L19-L22 | |
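The `escape_markdown` record above is self-contained apart from its `re` import, so it can be reproduced and run standalone. A minimal sketch of the function column (using a raw-string literal equivalent to the original's `'\*_`\['`), with a sample call:

```python
import re

def escape_markdown(text):
    """Helper function to escape telegram markup symbols."""
    # Raw-string form of the original escape set: *, _, ` and [
    escape_chars = r'\*_`\['
    return re.sub(r'([%s])' % escape_chars, r'\\\1', text)

print(escape_markdown('hello_world [link]'))  # -> hello\_world \[link]
```

The character class covers exactly the four symbols the record's docstring calls "telegram markup symbols"; any other character passes through unchanged.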
IntelLabs/nlp-architect | 60afd0dd1bfd74f01b4ac8f613cb484777b80284 | examples/word_language_model_with_tcn/adding_problem/adding_with_tcn.py | python | main | (args) | Main function
Args:
args: output of argparse with all input arguments
Returns:
None | Main function
Args:
args: output of argparse with all input arguments | [
"Main",
"function",
"Args",
":",
"args",
":",
"output",
"of",
"argparse",
"with",
"all",
"input",
"arguments"
] | def main(args):
"""
Main function
Args:
args: output of argparse with all input arguments
Returns:
None
"""
n_features = 2
hidden_sizes = [args.nhid] * args.levels
kernel_size = args.ksize
dropout = args.dropout
seq_len = args.seq_len
n_train = 50000
n_val = 1000
batch_size = args.batch_size
n_epochs = args.epochs
num_iterations = int(n_train * n_epochs * 1.0 / batch_size)
results_dir = os.path.abspath(args.results_dir)
adding_dataset = Adding(seq_len=seq_len, n_train=n_train, n_test=n_val)
model = TCNForAdding(
seq_len, n_features, hidden_sizes, kernel_size=kernel_size, dropout=dropout
)
model.build_train_graph(args.lr, max_gradient_norm=args.grad_clip_value)
model.run(
adding_dataset,
num_iterations=num_iterations,
log_interval=args.log_interval,
result_dir=results_dir,
) | [
"def",
"main",
"(",
"args",
")",
":",
"n_features",
"=",
"2",
"hidden_sizes",
"=",
"[",
"args",
".",
"nhid",
"]",
"*",
"args",
".",
"levels",
"kernel_size",
"=",
"args",
".",
"ksize",
"dropout",
"=",
"args",
".",
"dropout",
"seq_len",
"=",
"args",
".... | https://github.com/IntelLabs/nlp-architect/blob/60afd0dd1bfd74f01b4ac8f613cb484777b80284/examples/word_language_model_with_tcn/adding_problem/adding_with_tcn.py#L39-L73 | ||
amundsen-io/amundsendatabuilder | a0af611350fde12438450d4bfd83b226ef220c3f | databuilder/transformer/bigquery_usage_transformer.py | python | BigqueryUsageTransformer.init | (self, conf: ConfigTree) | Transformer to convert TableColumnUsageTuple data to bigquery usage data
which can be uploaded to Neo4j | Transformer to convert TableColumnUsageTuple data to bigquery usage data
which can be uploaded to Neo4j | [
"Transformer",
"to",
"convert",
"TableColumnUsageTuple",
"data",
"to",
"bigquery",
"usage",
"data",
"which",
"can",
"be",
"uploaded",
"to",
"Neo4j"
] | def init(self, conf: ConfigTree) -> None:
"""
Transformer to convert TableColumnUsageTuple data to bigquery usage data
which can be uploaded to Neo4j
"""
self.conf = conf | [
"def",
"init",
"(",
"self",
",",
"conf",
":",
"ConfigTree",
")",
"->",
"None",
":",
"self",
".",
"conf",
"=",
"conf"
] | https://github.com/amundsen-io/amundsendatabuilder/blob/a0af611350fde12438450d4bfd83b226ef220c3f/databuilder/transformer/bigquery_usage_transformer.py#L15-L20 | ||
OCA/account-financial-reporting | 5de317737763a6da8f2b13e32921e860ad7898d4 | account_financial_report/report/abstract_report_xlsx.py | python | AbstractReportXslx._generate_report_content | (self, workbook, report, data) | Allow to fetch report content to be displayed. | Allow to fetch report content to be displayed. | [
"Allow",
"to",
"fetch",
"report",
"content",
"to",
"be",
"displayed",
"."
] | def _generate_report_content(self, workbook, report, data):
"""
Allow to fetch report content to be displayed.
"""
raise NotImplementedError() | [
"def",
"_generate_report_content",
"(",
"self",
",",
"workbook",
",",
"report",
",",
"data",
")",
":",
"raise",
"NotImplementedError",
"(",
")"
] | https://github.com/OCA/account-financial-reporting/blob/5de317737763a6da8f2b13e32921e860ad7898d4/account_financial_report/report/abstract_report_xlsx.py#L532-L536 | ||
jwkvam/bowtie | 220cd41367a70f2e206db846278cb7b6fd3649eb | bowtie/control.py | python | MonthPicker.__init__ | (self) | Create month picker. | Create month picker. | [
"Create",
"month",
"picker",
"."
] | def __init__(self) -> None:
"""Create month picker."""
super().__init__(month_type=True) | [
"def",
"__init__",
"(",
"self",
")",
"->",
"None",
":",
"super",
"(",
")",
".",
"__init__",
"(",
"month_type",
"=",
"True",
")"
] | https://github.com/jwkvam/bowtie/blob/220cd41367a70f2e206db846278cb7b6fd3649eb/bowtie/control.py#L292-L294 | ||
IronLanguages/main | a949455434b1fda8c783289e897e78a9a0caabb5 | External.LCA_RESTRICTED/Languages/IronPython/27/Lib/site-packages/win32com/makegw/makegwparse.py | python | ArgFormatter.GetParsePostCode | (self) | Get a string of C++ code to be executed after (ie, to finalise) the PyArg_ParseTuple conversion | Get a string of C++ code to be executed after (ie, to finalise) the PyArg_ParseTuple conversion | [
"Get",
"a",
"string",
"of",
"C",
"++",
"code",
"to",
"be",
"executed",
"after",
"(",
"ie",
"to",
"finalise",
")",
"the",
"PyArg_ParseTuple",
"conversion"
] | def GetParsePostCode(self):
"Get a string of C++ code to be executed after (ie, to finalise) the PyArg_ParseTuple conversion"
if DEBUG:
return "/* GetParsePostCode code goes here: %s */\n" % self.arg.name
else:
return "" | [
"def",
"GetParsePostCode",
"(",
"self",
")",
":",
"if",
"DEBUG",
":",
"return",
"\"/* GetParsePostCode code goes here: %s */\\n\"",
"%",
"self",
".",
"arg",
".",
"name",
"else",
":",
"return",
"\"\""
] | https://github.com/IronLanguages/main/blob/a949455434b1fda8c783289e897e78a9a0caabb5/External.LCA_RESTRICTED/Languages/IronPython/27/Lib/site-packages/win32com/makegw/makegwparse.py#L142-L147 | ||
vslavik/bakefile | 0757295c3e4ac23cd1e0767c77c14c2256ed16e1 | src/bkl/interpreter/passes.py | python | remove_disabled_model_parts | (model, toolset) | Removes disabled targets, source files etc. from the model. Disabled parts
are those with ``condition`` variable evaluating to false. | Removes disabled targets, source files etc. from the model. Disabled parts
are those with ``condition`` variable evaluating to false. | [
"Removes",
"disabled",
"targets",
"source",
"files",
"etc",
".",
"from",
"the",
"model",
".",
"Disabled",
"parts",
"are",
"those",
"with",
"condition",
"variable",
"evaluating",
"to",
"false",
"."
] | def remove_disabled_model_parts(model, toolset):
"""
Removes disabled targets, source files etc. from the model. Disabled parts
are those with ``condition`` variable evaluating to false.
"""
def _should_remove(part, allow_dynamic):
try:
return not part.should_build()
except NonConstError:
if allow_dynamic:
return False
else:
raise
def _remove_from_list(parts, allow_dynamic):
to_del = []
for p in parts:
if _should_remove(p, allow_dynamic):
to_del.append(p)
for p in to_del:
logger.debug("removing disabled %s from %s", p, p.parent)
parts.remove(p)
for module in model.modules:
targets_to_del = []
for target in module.targets.itervalues():
if _should_remove(target, allow_dynamic=True):
targets_to_del.append(target)
continue
_remove_from_list(target.sources, allow_dynamic=True)
_remove_from_list(target.headers, allow_dynamic=True)
for target in targets_to_del:
logger.debug("removing disabled %s", target)
del module.targets[target.name]
# remove any empty submodules:
mods_to_del = []
for module in model.modules:
if module is model.top_module:
continue
if not list(module.submodules) and not module.targets:
logger.debug("removing empty %s", module)
mods_to_del.append(module)
continue
mod_toolsets = module.get_variable_value("toolsets")
if toolset not in mod_toolsets.as_py():
logger.debug("removing %s, because it isn't for toolset %s (is for: %s)",
module, toolset, mod_toolsets.as_py())
mods_to_del.append(module)
for module in mods_to_del:
model.modules.remove(module)
# and remove unused settings too:
settings_to_del = []
for sname, setting in model.settings.iteritems():
if _should_remove(setting, allow_dynamic=False):
settings_to_del.append(sname)
for sname in settings_to_del:
logger.debug("removing setting %s", sname)
del model.settings[sname] | [
"def",
"remove_disabled_model_parts",
"(",
"model",
",",
"toolset",
")",
":",
"def",
"_should_remove",
"(",
"part",
",",
"allow_dynamic",
")",
":",
"try",
":",
"return",
"not",
"part",
".",
"should_build",
"(",
")",
"except",
"NonConstError",
":",
"if",
"all... | https://github.com/vslavik/bakefile/blob/0757295c3e4ac23cd1e0767c77c14c2256ed16e1/src/bkl/interpreter/passes.py#L94-L154 | ||
Kinto/kinto | a9e46e57de8f33c7be098c6f583de18df03b2824 | kinto/core/errors.py | python | raise_invalid | (request, location="body", name=None, description=None, **kwargs) | Helper to raise a validation error.
:param location: location in request (e.g. ``'querystring'``)
:param name: field name
:param description: detailed description of validation error
:raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest` | Helper to raise a validation error. | [
"Helper",
"to",
"raise",
"a",
"validation",
"error",
"."
] | def raise_invalid(request, location="body", name=None, description=None, **kwargs):
"""Helper to raise a validation error.
:param location: location in request (e.g. ``'querystring'``)
:param name: field name
:param description: detailed description of validation error
:raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`
"""
request.errors.add(location, name, description, **kwargs)
response = json_error_handler(request)
raise response | [
"def",
"raise_invalid",
"(",
"request",
",",
"location",
"=",
"\"body\"",
",",
"name",
"=",
"None",
",",
"description",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"request",
".",
"errors",
".",
"add",
"(",
"location",
",",
"name",
",",
"descriptio... | https://github.com/Kinto/kinto/blob/a9e46e57de8f33c7be098c6f583de18df03b2824/kinto/core/errors.py#L172-L183 | ||
HonglinChu/SiamTrackers | 8471660b14f970578a43f077b28207d44a27e867 | SiamBAN/SiamBAN/toolkit/utils/misc.py | python | determine_thresholds | (confidence, resolution=100) | return thresholds | choose threshold according to confidence
Args:
confidence: list or numpy array
resolution: number of thresholds to choose
Returns:
threshold: numpy array | choose threshold according to confidence | [
"choose",
"threshold",
"according",
"to",
"confidence"
] | def determine_thresholds(confidence, resolution=100):
"""choose threshold according to confidence
Args:
confidence: list or numpy array
resolution: number of thresholds to choose
Returns:
threshold: numpy array
"""
if isinstance(confidence, list):
confidence = np.array(confidence)
confidence = confidence.flatten()
confidence = confidence[~np.isnan(confidence)]
confidence.sort()
assert len(confidence) > resolution and resolution > 2
thresholds = np.ones((resolution))
thresholds[0] = - np.inf
thresholds[-1] = np.inf
delta = np.floor(len(confidence) / (resolution - 2))
idxs = np.linspace(delta, len(confidence)-delta, resolution-2, dtype=np.int32)
thresholds[1:-1] = confidence[idxs]
return thresholds | [
"def",
"determine_thresholds",
"(",
"confidence",
",",
"resolution",
"=",
"100",
")",
":",
"if",
"isinstance",
"(",
"confidence",
",",
"list",
")",
":",
"confidence",
"=",
"np",
".",
"array",
"(",
"confidence",
")",
"confidence",
"=",
"confidence",
".",
"f... | https://github.com/HonglinChu/SiamTrackers/blob/8471660b14f970578a43f077b28207d44a27e867/SiamBAN/SiamBAN/toolkit/utils/misc.py#L6-L30 | |
kvazis/homeassistant | aca227a780f806d861342e3611025a52a3bb4366 | custom_components/xiaomi_miot_raw/deps/xiaomi_cloud_new.py | python | gen_signature | (url: str, signed_nonce: str, nonce: str, data: str) | return base64.b64encode(signature).decode() | Request signature based on url, signed_nonce, nonce and data. | Request signature based on url, signed_nonce, nonce and data. | [
"Request",
"signature",
"based",
"on",
"url",
"signed_nonce",
"nonce",
"and",
"data",
"."
] | def gen_signature(url: str, signed_nonce: str, nonce: str, data: str) -> str:
"""Request signature based on url, signed_nonce, nonce and data."""
sign = '&'.join([url, signed_nonce, nonce, 'data=' + data])
signature = hmac.new(key=base64.b64decode(signed_nonce),
msg=sign.encode(),
digestmod=hashlib.sha256).digest()
return base64.b64encode(signature).decode() | [
"def",
"gen_signature",
"(",
"url",
":",
"str",
",",
"signed_nonce",
":",
"str",
",",
"nonce",
":",
"str",
",",
"data",
":",
"str",
")",
"->",
"str",
":",
"sign",
"=",
"'&'",
".",
"join",
"(",
"[",
"url",
",",
"signed_nonce",
",",
"nonce",
",",
"... | https://github.com/kvazis/homeassistant/blob/aca227a780f806d861342e3611025a52a3bb4366/custom_components/xiaomi_miot_raw/deps/xiaomi_cloud_new.py#L248-L254 | |
misterch0c/shadowbroker | e3a069bea47a2c1009697941ac214adc6f90aa8d | windows/Resources/Python/Core/Lib/SocketServer.py | python | BaseServer.close_request | (self, request) | Called to clean up an individual request. | Called to clean up an individual request. | [
"Called",
"to",
"clean",
"up",
"an",
"individual",
"request",
"."
] | def close_request(self, request):
"""Called to clean up an individual request."""
pass | [
"def",
"close_request",
"(",
"self",
",",
"request",
")",
":",
"pass"
] | https://github.com/misterch0c/shadowbroker/blob/e3a069bea47a2c1009697941ac214adc6f90aa8d/windows/Resources/Python/Core/Lib/SocketServer.py#L310-L312 | ||
rq/rq | c5a1ef17345e17269085e7f72858ac9bd6faf1dd | rq/local.py | python | Local.__iter__ | (self) | return iter(self.__storage__.items()) | [] | def __iter__(self):
return iter(self.__storage__.items()) | [
"def",
"__iter__",
"(",
"self",
")",
":",
"return",
"iter",
"(",
"self",
".",
"__storage__",
".",
"items",
"(",
")",
")"
] | https://github.com/rq/rq/blob/c5a1ef17345e17269085e7f72858ac9bd6faf1dd/rq/local.py#L57-L58 | |||
Tautulli/Tautulli | 2410eb33805aaac4bd1c5dad0f71e4f15afaf742 | lib/packaging/_manylinux.py | python | _glibc_version_string | () | return _glibc_version_string_confstr() or _glibc_version_string_ctypes() | Returns glibc version string, or None if not using glibc. | Returns glibc version string, or None if not using glibc. | [
"Returns",
"glibc",
"version",
"string",
"or",
"None",
"if",
"not",
"using",
"glibc",
"."
] | def _glibc_version_string() -> Optional[str]:
"""Returns glibc version string, or None if not using glibc."""
return _glibc_version_string_confstr() or _glibc_version_string_ctypes() | [
"def",
"_glibc_version_string",
"(",
")",
"->",
"Optional",
"[",
"str",
"]",
":",
"return",
"_glibc_version_string_confstr",
"(",
")",
"or",
"_glibc_version_string_ctypes",
"(",
")"
] | https://github.com/Tautulli/Tautulli/blob/2410eb33805aaac4bd1c5dad0f71e4f15afaf742/lib/packaging/_manylinux.py#L198-L200 | |
lovelylain/pyctp | fd304de4b50c4ddc31a4190b1caaeb5dec66bc5d | futures/ctp/ApiStruct.py | python | UserRightsAssign.__init__ | (self, BrokerID='', UserID='', DRIdentityID=0) | [] | def __init__(self, BrokerID='', UserID='', DRIdentityID=0):
self.BrokerID = BrokerID #application unit (broker) ID, char[11]
self.UserID = UserID #user ID, char[16]
self.DRIdentityID = DRIdentityID #DR identity ID, int
"def",
"__init__",
"(",
"self",
",",
"BrokerID",
"=",
"''",
",",
"UserID",
"=",
"''",
",",
"DRIdentityID",
"=",
"0",
")",
":",
"self",
".",
"BrokerID",
"=",
"''",
"#应用单元代码, char[11]",
"self",
".",
"UserID",
"=",
"''",
"#用户代码, char[16]",
"self",
".",
"D... | https://github.com/lovelylain/pyctp/blob/fd304de4b50c4ddc31a4190b1caaeb5dec66bc5d/futures/ctp/ApiStruct.py#L5801-L5804 | ||||
biolab/orange3 | 41685e1c7b1d1babe680113685a2d44bcc9fec0b | Orange/projection/manifold.py | python | TSNE.compute_initialization | (self, X) | return initialization | [] | def compute_initialization(self, X):
# Compute the initial positions of individual points
if isinstance(self.initialization, np.ndarray):
initialization = self.initialization
elif self.initialization == "pca":
initialization = openTSNE.initialization.pca(
X, self.n_components, random_state=self.random_state
)
elif self.initialization == "random":
initialization = openTSNE.initialization.random(
X, self.n_components, random_state=self.random_state
)
else:
raise ValueError(
"Invalid initialization `%s`. Please use either `pca` or "
"`random` or provide a numpy array." % self.initialization
)
return initialization | [
"def",
"compute_initialization",
"(",
"self",
",",
"X",
")",
":",
"# Compute the initial positions of individual points",
"if",
"isinstance",
"(",
"self",
".",
"initialization",
",",
"np",
".",
"ndarray",
")",
":",
"initialization",
"=",
"self",
".",
"initialization... | https://github.com/biolab/orange3/blob/41685e1c7b1d1babe680113685a2d44bcc9fec0b/Orange/projection/manifold.py#L456-L474 | |||
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/ssa/v20180608/models.py | python | DescribeCheckConfigDetailRequest.__init__ | (self) | r"""
:param Id: Check item ID
:type Id: str | r"""
:param Id: Check item ID
:type Id: str | [
"r",
":",
"param",
"Id",
":",
"检查项ID",
":",
"type",
"Id",
":",
"str"
] | def __init__(self):
r"""
:param Id: Check item ID
:type Id: str
"""
self.Id = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"Id",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/ssa/v20180608/models.py#L1766-L1771 | ||
linxid/Machine_Learning_Study_Path | 558e82d13237114bbb8152483977806fc0c222af | Machine Learning In Action/Chapter4-NaiveBayes/venv/Lib/site-packages/pip/_vendor/distlib/_backport/shutil.py | python | ignore_patterns | (*patterns) | return _ignore_patterns | Function that can be used as copytree() ignore parameter.
Patterns is a sequence of glob-style patterns
that are used to exclude files | Function that can be used as copytree() ignore parameter. | [
"Function",
"that",
"can",
"be",
"used",
"as",
"copytree",
"()",
"ignore",
"parameter",
"."
] | def ignore_patterns(*patterns):
"""Function that can be used as copytree() ignore parameter.
Patterns is a sequence of glob-style patterns
that are used to exclude files"""
def _ignore_patterns(path, names):
ignored_names = []
for pattern in patterns:
ignored_names.extend(fnmatch.filter(names, pattern))
return set(ignored_names)
return _ignore_patterns | [
"def",
"ignore_patterns",
"(",
"*",
"patterns",
")",
":",
"def",
"_ignore_patterns",
"(",
"path",
",",
"names",
")",
":",
"ignored_names",
"=",
"[",
"]",
"for",
"pattern",
"in",
"patterns",
":",
"ignored_names",
".",
"extend",
"(",
"fnmatch",
".",
"filter"... | https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter4-NaiveBayes/venv/Lib/site-packages/pip/_vendor/distlib/_backport/shutil.py#L152-L162 | |
cea-sec/miasm | 09376c524aedc7920a7eda304d6095e12f6958f4 | miasm/jitter/llvmconvert.py | python | LLVMFunction.expr2cases | (self, expr) | return case2dst, evaluated | Evaluate @expr and return:
- switch value -> dst
- evaluation of the switch value (if any) | Evaluate | [
"Evaluate"
] | def expr2cases(self, expr):
"""
Evaluate @expr and return:
- switch value -> dst
- evaluation of the switch value (if any)
"""
to_eval = expr
dst2case = {}
case2dst = {}
for i, solution in enumerate(possible_values(expr)):
value = solution.value
index = dst2case.get(value, i)
to_eval = to_eval.replace_expr({value: ExprInt(index, value.size)})
dst2case[value] = index
if value.is_int() or value.is_loc():
case2dst[i] = value
else:
case2dst[i] = self.add_ir(value)
evaluated = self.add_ir(to_eval)
return case2dst, evaluated | [
"def",
"expr2cases",
"(",
"self",
",",
"expr",
")",
":",
"to_eval",
"=",
"expr",
"dst2case",
"=",
"{",
"}",
"case2dst",
"=",
"{",
"}",
"for",
"i",
",",
"solution",
"in",
"enumerate",
"(",
"possible_values",
"(",
"expr",
")",
")",
":",
"value",
"=",
... | https://github.com/cea-sec/miasm/blob/09376c524aedc7920a7eda304d6095e12f6958f4/miasm/jitter/llvmconvert.py#L1454-L1476 | |
napari/napari | dbf4158e801fa7a429de8ef1cdee73bf6d64c61e | napari/layers/shapes/_shapes_key_bindings.py | python | activate_direct_mode | (layer: Shapes) | Activate vertex selection tool. | Activate vertex selection tool. | [
"Activate",
"vertex",
"selection",
"tool",
"."
] | def activate_direct_mode(layer: Shapes):
"""Activate vertex selection tool."""
layer.mode = Mode.DIRECT | [
"def",
"activate_direct_mode",
"(",
"layer",
":",
"Shapes",
")",
":",
"layer",
".",
"mode",
"=",
"Mode",
".",
"DIRECT"
] | https://github.com/napari/napari/blob/dbf4158e801fa7a429de8ef1cdee73bf6d64c61e/napari/layers/shapes/_shapes_key_bindings.py#L88-L90 | ||
wuub/SublimeREPL | d17e8649c0d0008a364158d671ac0c7d33d0c896 | sublimerepl.py | python | ReplView.push_history | (self, command) | [] | def push_history(self, command):
self._history.push(command)
self._history_match = None | [
"def",
"push_history",
"(",
"self",
",",
"command",
")",
":",
"self",
".",
"_history",
".",
"push",
"(",
"command",
")",
"self",
".",
"_history_match",
"=",
"None"
] | https://github.com/wuub/SublimeREPL/blob/d17e8649c0d0008a364158d671ac0c7d33d0c896/sublimerepl.py#L390-L392 | ||||
Azure/azure-devops-cli-extension | 11334cd55806bef0b99c3bee5a438eed71e44037 | azure-devops/azext_devops/devops_sdk/v5_1/graph/graph_client.py | python | GraphClient.lookup_subjects | (self, subject_lookup) | return self._deserialize('{GraphSubject}', self._unwrap_collection(response)) | LookupSubjects.
[Preview API] Resolve descriptors to users, groups or scopes (Subjects) in a batch.
:param :class:`<GraphSubjectLookup> <azure.devops.v5_1.graph.models.GraphSubjectLookup>` subject_lookup: A list of descriptors that specifies a subset of subjects to retrieve. Each descriptor uniquely identifies the subject across all instance scopes, but only at a single point in time.
:rtype: {GraphSubject} | LookupSubjects.
[Preview API] Resolve descriptors to users, groups or scopes (Subjects) in a batch.
:param :class:`<GraphSubjectLookup> <azure.devops.v5_1.graph.models.GraphSubjectLookup>` subject_lookup: A list of descriptors that specifies a subset of subjects to retrieve. Each descriptor uniquely identifies the subject across all instance scopes, but only at a single point in time.
:rtype: {GraphSubject} | [
"LookupSubjects",
".",
"[",
"Preview",
"API",
"]",
"Resolve",
"descriptors",
"to",
"users",
"groups",
"or",
"scopes",
"(",
"Subjects",
")",
"in",
"a",
"batch",
".",
":",
"param",
":",
"class",
":",
"<GraphSubjectLookup",
">",
"<azure",
".",
"devops",
".",
... | def lookup_subjects(self, subject_lookup):
"""LookupSubjects.
[Preview API] Resolve descriptors to users, groups or scopes (Subjects) in a batch.
:param :class:`<GraphSubjectLookup> <azure.devops.v5_1.graph.models.GraphSubjectLookup>` subject_lookup: A list of descriptors that specifies a subset of subjects to retrieve. Each descriptor uniquely identifies the subject across all instance scopes, but only at a single point in time.
:rtype: {GraphSubject}
"""
content = self._serialize.body(subject_lookup, 'GraphSubjectLookup')
response = self._send(http_method='POST',
location_id='4dd4d168-11f2-48c4-83e8-756fa0de027c',
version='5.1-preview.1',
content=content)
return self._deserialize('{GraphSubject}', self._unwrap_collection(response)) | [
"def",
"lookup_subjects",
"(",
"self",
",",
"subject_lookup",
")",
":",
"content",
"=",
"self",
".",
"_serialize",
".",
"body",
"(",
"subject_lookup",
",",
"'GraphSubjectLookup'",
")",
"response",
"=",
"self",
".",
"_send",
"(",
"http_method",
"=",
"'POST'",
... | https://github.com/Azure/azure-devops-cli-extension/blob/11334cd55806bef0b99c3bee5a438eed71e44037/azure-devops/azext_devops/devops_sdk/v5_1/graph/graph_client.py#L336-L347 | |
pymedusa/Medusa | 1405fbb6eb8ef4d20fcca24c32ddca52b11f0f38 | ext/boto/vendored/six.py | python | with_metaclass | (meta, *bases) | return type.__new__(metaclass, 'temporary_class', (), {}) | Create a base class with a metaclass. | Create a base class with a metaclass. | [
"Create",
"a",
"base",
"class",
"with",
"a",
"metaclass",
"."
] | def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(meta):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
return type.__new__(metaclass, 'temporary_class', (), {}) | [
"def",
"with_metaclass",
"(",
"meta",
",",
"*",
"bases",
")",
":",
"# This requires a bit of explanation: the basic idea is to make a dummy",
"# metaclass for one level of class instantiation that replaces itself with",
"# the actual metaclass.",
"class",
"metaclass",
"(",
"meta",
")... | https://github.com/pymedusa/Medusa/blob/1405fbb6eb8ef4d20fcca24c32ddca52b11f0f38/ext/boto/vendored/six.py#L800-L809 | |
lad1337/XDM | 0c1b7009fe00f06f102a6f67c793478f515e7efe | site-packages/logilab/astng/builder.py | python | ASTNGBuilder.file_build | (self, path, modname=None) | return node | build astng from a source code file (i.e. from an ast)
path is expected to be a python source file | build astng from a source code file (i.e. from an ast) | [
"build",
"astng",
"from",
"a",
"source",
"code",
"file",
"(",
"i",
".",
"e",
".",
"from",
"an",
"ast",
")"
] | def file_build(self, path, modname=None):
"""build astng from a source code file (i.e. from an ast)
path is expected to be a python source file
"""
try:
stream, encoding, data = open_source_file(path)
except IOError, exc:
msg = 'Unable to load file %r (%s)' % (path, exc)
raise ASTNGBuildingException(msg)
except SyntaxError, exc: # py3k encoding specification error
raise ASTNGBuildingException(exc)
except LookupError, exc: # unknown encoding
raise ASTNGBuildingException(exc)
# get module name if necessary
if modname is None:
try:
modname = '.'.join(modpath_from_file(path))
except ImportError:
modname = splitext(basename(path))[0]
# build astng representation
node = self.string_build(data, modname, path)
node.file_encoding = encoding
return node | [
"def",
"file_build",
"(",
"self",
",",
"path",
",",
"modname",
"=",
"None",
")",
":",
"try",
":",
"stream",
",",
"encoding",
",",
"data",
"=",
"open_source_file",
"(",
"path",
")",
"except",
"IOError",
",",
"exc",
":",
"msg",
"=",
"'Unable to load file %... | https://github.com/lad1337/XDM/blob/0c1b7009fe00f06f102a6f67c793478f515e7efe/site-packages/logilab/astng/builder.py#L108-L131 | |
numba/numba | bf480b9e0da858a65508c2b17759a72ee6a44c51 | numba/typed/typeddict.py | python | typeddict_call | (context) | return typer | Defines typing logic for ``Dict()``.
Produces Dict[undefined, undefined] | Defines typing logic for ``Dict()``.
Produces Dict[undefined, undefined] | [
"Defines",
"typing",
"logic",
"for",
"Dict",
"()",
".",
"Produces",
"Dict",
"[",
"undefined",
"undefined",
"]"
] | def typeddict_call(context):
"""
Defines typing logic for ``Dict()``.
Produces Dict[undefined, undefined]
"""
def typer():
return types.DictType(types.undefined, types.undefined)
return typer | [
"def",
"typeddict_call",
"(",
"context",
")",
":",
"def",
"typer",
"(",
")",
":",
"return",
"types",
".",
"DictType",
"(",
"types",
".",
"undefined",
",",
"types",
".",
"undefined",
")",
"return",
"typer"
] | https://github.com/numba/numba/blob/bf480b9e0da858a65508c2b17759a72ee6a44c51/numba/typed/typeddict.py#L317-L324 | |
zsdlove/Hades | f3d8c43a40ccd7a1bca2a855d8cccc110c34448a | miniDVM/miniDVM.py | python | miniDVM.analysis | (self,ins) | | By default only the instructions in the goto table are parsed; everything else is handled by the default function | By default only the instructions in the goto table are parsed; everything else is handled by the default function | [
"默认支持goto表中的指令解析,其他的一律调用default函数处理"
] | def analysis(self,ins):
'''
By default only the instructions in the goto table are parsed;
everything else is handled by the default function.
'''
#try:
'''
Look up function references
'''
def funcfind(ins):
p1 = re.compile(r'^L(.*) \d+').findall(ins)# match the block name
if p1 != [] and p1 != "" and p1 != '[]':
return True
else:
return False
return False
'''
Method hash verification
'''
def compareMethodHash(ins):
p1 = re.compile(r'^L(.*) \d+').findall(ins)[0]
this_md5=hashlib.md5()
this_md5.update(p1)
method_hash=this_md5.hexdigest()
if method_hash==self.methodHash:
return True,method_hash,p1
else:
return False,method_hash,p1
return False
'''
Initialize a new stack frame and push it. We keep a "currently analysed
method" variable and match on its computed hash. Note that the first
line of a block is the block name, e.g.:
Lfridatest/test/com/myapplication/MainActivity; send(Ljava/lang/String;)V 1
'''
if funcfind(ins)==True:# matched a block name; if its method name differs from the current one, allocate a new stack frame
isCurrentMethod,method_hash,mn=compareMethodHash(ins)# compare method-name hashes
logging.info("[DVM] - method hash check %s"%ins)
logging.info("[DVM] - isCurrentMethod:%s"%isCurrentMethod)
if isCurrentMethod==True:# this is the method already being analysed
pass
else:
self.methodHash=method_hash
self.methodName=mn
self.push()
else:
ins1=ins.split(" ")# field 0 is the index
#12: if-ge v0, v2, :cond_2e
ins2=re.compile(r'^\d+: (.*)').findall(ins)
if "invoke-virtual" in ins:
logging.info("lalalal--->%s"%ins)
if ins1!=None and ins1!=[''] and len(ins1)>1 and ins2!=[] and ins2!=None:
self.goto_table.get(ins1[1],self.default)(ins2[0]) | [
"def",
"analysis",
"(",
"self",
",",
"ins",
")",
":",
"#try:",
"'''\n 寻找函数引用\n '''",
"def",
"funcfind",
"(",
"ins",
")",
":",
"p1",
"=",
"re",
".",
"compile",
"(",
"r'^L(.*) \\d+'",
")",
".",
"findall",
"(",
"ins",
")",
"#匹配块名",
"if",
"p1",... | https://github.com/zsdlove/Hades/blob/f3d8c43a40ccd7a1bca2a855d8cccc110c34448a/miniDVM/miniDVM.py#L644-L696 | ||
typemytype/drawbot | b64569bfb352acf3ac54d2a91f0a987985685466 | drawBot/drawBotDrawingTools.py | python | DrawBotDrawingTool.lineJoin | (self, value) | Set a line join.
Possible values are `miter`, `round` and `bevel`.
.. downloadcode:: lineJoin.py
# set the stroke color to black
stroke(0)
# set no fill
fill(None)
# set a stroke width
strokeWidth(30)
# set a miter limit
miterLimit(30)
# create a bezier path
path = BezierPath()
# move to a point
path.moveTo((100, 100))
# line to a point
path.lineTo((100, 600))
path.lineTo((160, 100))
# set a line join style
lineJoin("miter")
# draw the path
drawPath(path)
# translate the canvas
translate(300, 0)
# set a line join style
lineJoin("round")
# draw the path
drawPath(path)
# translate the canvas
translate(300, 0)
# set a line join style
lineJoin("bevel")
# draw the path
drawPath(path) | Set a line join. | [
"Set",
"a",
"line",
"join",
"."
] | def lineJoin(self, value):
"""
Set a line join.
Possible values are `miter`, `round` and `bevel`.
.. downloadcode:: lineJoin.py
# set the stroke color to black
stroke(0)
# set no fill
fill(None)
# set a stroke width
strokeWidth(30)
# set a miter limit
miterLimit(30)
# create a bezier path
path = BezierPath()
# move to a point
path.moveTo((100, 100))
# line to a point
path.lineTo((100, 600))
path.lineTo((160, 100))
# set a line join style
lineJoin("miter")
# draw the path
drawPath(path)
# translate the canvas
translate(300, 0)
# set a line join style
lineJoin("round")
# draw the path
drawPath(path)
# translate the canvas
translate(300, 0)
# set a line join style
lineJoin("bevel")
# draw the path
drawPath(path)
"""
self._requiresNewFirstPage = True
self._addInstruction("lineJoin", value) | [
"def",
"lineJoin",
"(",
"self",
",",
"value",
")",
":",
"self",
".",
"_requiresNewFirstPage",
"=",
"True",
"self",
".",
"_addInstruction",
"(",
"\"lineJoin\"",
",",
"value",
")"
] | https://github.com/typemytype/drawbot/blob/b64569bfb352acf3ac54d2a91f0a987985685466/drawBot/drawBotDrawingTools.py#L1131-L1172 | ||
theotherp/nzbhydra | 4b03d7f769384b97dfc60dade4806c0fc987514e | libs/furl/furl.py | python | furl.netloc | (self, netloc) | Params:
netloc: Network location string, like 'google.com' or
'google.com:99'.
Raises: ValueError on invalid port or malformed IPv6 address. | Params:
netloc: Network location string, like 'google.com' or
'google.com:99'.
Raises: ValueError on invalid port or malformed IPv6 address. | [
"Params",
":",
"netloc",
":",
"Network",
"location",
"string",
"like",
"google",
".",
"com",
"or",
"google",
".",
"com",
":",
"99",
".",
"Raises",
":",
"ValueError",
"on",
"invalid",
"port",
"or",
"malformed",
"IPv6",
"address",
"."
] | def netloc(self, netloc):
"""
Params:
netloc: Network location string, like 'google.com' or
'google.com:99'.
Raises: ValueError on invalid port or malformed IPv6 address.
"""
# Raises ValueError on malformed IPv6 addresses.
urllib.parse.urlsplit('http://%s/' % netloc)
username = password = host = port = None
if '@' in netloc:
userpass, netloc = netloc.split('@', 1)
if ':' in userpass:
username, password = userpass.split(':', 1)
else:
username = userpass
if ':' in netloc:
# IPv6 address literal.
if ']' in netloc:
colonpos, bracketpos = netloc.rfind(':'), netloc.rfind(']')
if colonpos > bracketpos and colonpos != bracketpos + 1:
raise ValueError("Invalid netloc: '%s'" % netloc)
elif colonpos > bracketpos and colonpos == bracketpos + 1:
host, port = netloc.rsplit(':', 1)
else:
host = netloc.lower()
else:
host, port = netloc.rsplit(':', 1)
host = host.lower()
else:
host = netloc.lower()
# Avoid side effects by assigning self.port before self.host so
# that if an exception is raised when assigning self.port,
# self.host isn't updated.
self.port = port # Raises ValueError on invalid port.
self.host = host or None
self.username = username or None
self.password = password or None | [
"def",
"netloc",
"(",
"self",
",",
"netloc",
")",
":",
"# Raises ValueError on malformed IPv6 addresses.",
"urllib",
".",
"parse",
".",
"urlsplit",
"(",
"'http://%s/'",
"%",
"netloc",
")",
"username",
"=",
"password",
"=",
"host",
"=",
"port",
"=",
"None",
"if... | https://github.com/theotherp/nzbhydra/blob/4b03d7f769384b97dfc60dade4806c0fc987514e/libs/furl/furl.py#L961-L1002 | ||
bcbio/bcbio-nextgen | c80f9b6b1be3267d1f981b7035e3b72441d258f2 | bcbio/bam/readstats.py | python | _simple_lock | (f) | | Simple file lock, times out after 20 seconds assuming lock is stale | Simple file lock, times out after 20 seconds assuming lock is stale | [
"Simple",
"file",
"lock",
"times",
"out",
"after",
"20",
"second",
"assuming",
"lock",
"is",
"stale"
] | def _simple_lock(f):
"""Simple file lock, times out after 20 second assuming lock is stale
"""
lock_file = f + ".lock"
timeout = 20
curtime = 0
interval = 2
while os.path.exists(lock_file):
time.sleep(interval)
curtime += interval
if curtime > timeout:
os.remove(lock_file)
with open(lock_file, "w") as out_handle:
out_handle.write("locked")
yield
if os.path.exists(lock_file):
os.remove(lock_file) | [
"def",
"_simple_lock",
"(",
"f",
")",
":",
"lock_file",
"=",
"f",
"+",
"\".lock\"",
"timeout",
"=",
"20",
"curtime",
"=",
"0",
"interval",
"=",
"2",
"while",
"os",
".",
"path",
".",
"exists",
"(",
"lock_file",
")",
":",
"time",
".",
"sleep",
"(",
"... | https://github.com/bcbio/bcbio-nextgen/blob/c80f9b6b1be3267d1f981b7035e3b72441d258f2/bcbio/bam/readstats.py#L105-L121 | ||
WenRichard/Customer-Chatbot | 48508c40574ffac8ced414a5bea799e2c85341ca | smart-chatbot-zero/Rerank/metrics.py | python | multi_recall | (pred_y, true_y, labels) | | return rec | Multi-class recall
:param pred_y: prediction results
:param true_y: ground-truth results
:param labels: list of labels
:return: | Multi-class recall
:param pred_y: prediction results
:param true_y: ground-truth results
:param labels: list of labels
:return: | [
"Multi-class recall",
":",
"param",
"pred_y",
":",
"prediction results",
":",
"param",
"true_y",
":",
"ground-truth results",
":",
"param",
"labels",
":",
"list of labels",
":",
"return",
":"
] | def multi_recall(pred_y, true_y, labels):
"""
Multi-class recall
:param pred_y: prediction results
:param true_y: ground-truth results
:param labels: list of labels
:return:
"""
if isinstance(pred_y[0], list):
pred_y = [item[0] for item in pred_y]
recalls = [binary_recall(pred_y, true_y, label) for label in labels]
rec = mean(recalls)
return rec | [
"def",
"multi_recall",
"(",
"pred_y",
",",
"true_y",
",",
"labels",
")",
":",
"if",
"isinstance",
"(",
"pred_y",
"[",
"0",
"]",
",",
"list",
")",
":",
"pred_y",
"=",
"[",
"item",
"[",
"0",
"]",
"for",
"item",
"in",
"pred_y",
"]",
"recalls",
"=",
... | https://github.com/WenRichard/Customer-Chatbot/blob/48508c40574ffac8ced414a5bea799e2c85341ca/smart-chatbot-zero/Rerank/metrics.py#L116-L129 | |
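`multi_recall` above macro-averages per-label recall via helpers (`binary_recall`, `mean`) defined elsewhere in that module. A self-contained sketch of the same computation in plain Python, assuming the usual definition recall = TP / (TP + FN) per label:

```python
def recall_per_label(pred_y, true_y, label):
    # recall for one label: correct predictions among all true instances of that label
    tp = sum(1 for p, t in zip(pred_y, true_y) if t == label and p == label)
    fn = sum(1 for p, t in zip(pred_y, true_y) if t == label and p != label)
    return tp / (tp + fn) if (tp + fn) else 0.0

def macro_recall(pred_y, true_y, labels):
    # unweighted mean over labels ("macro" averaging), as in multi_recall
    recalls = [recall_per_label(pred_y, true_y, label) for label in labels]
    return sum(recalls) / len(recalls)
```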
pwnieexpress/pwn_plug_sources | 1a23324f5dc2c3de20f9c810269b6a29b2758cad | src/metagoofil/hachoir_parser/video/asf.py | python | Codec.createFields | (self) | [] | def createFields(self):
yield Enum(UInt16(self, "type"), self.type_name)
yield UInt16(self, "name_len", "Name length in character (byte=len*2)")
if self["name_len"].value:
yield String(self, "name", self["name_len"].value*2, "Name", charset="UTF-16-LE", strip=" \0")
yield UInt16(self, "desc_len", "Description length in character (byte=len*2)")
if self["desc_len"].value:
yield String(self, "desc", self["desc_len"].value*2, "Description", charset="UTF-16-LE", strip=" \0")
yield UInt16(self, "info_len")
if self["info_len"].value:
yield RawBytes(self, "info", self["info_len"].value) | [
"def",
"createFields",
"(",
"self",
")",
":",
"yield",
"Enum",
"(",
"UInt16",
"(",
"self",
",",
"\"type\"",
")",
",",
"self",
".",
"type_name",
")",
"yield",
"UInt16",
"(",
"self",
",",
"\"name_len\"",
",",
"\"Name length in character (byte=len*2)\"",
")",
"... | https://github.com/pwnieexpress/pwn_plug_sources/blob/1a23324f5dc2c3de20f9c810269b6a29b2758cad/src/metagoofil/hachoir_parser/video/asf.py#L168-L178 | ||||
ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework | cb692f527e4e819b6c228187c5702d990a180043 | external/Scripting Engine/packages/IronPython.StdLib.2.7.4/content/Lib/smtplib.py | python | LMTP.connect | (self, host='localhost', port=0) | return (code, msg) | Connect to the LMTP daemon, on either a Unix or a TCP socket. | Connect to the LMTP daemon, on either a Unix or a TCP socket. | [
"Connect",
"to",
"the",
"LMTP",
"daemon",
"on",
"either",
"a",
"Unix",
"or",
"a",
"TCP",
"socket",
"."
] | def connect(self, host='localhost', port=0):
"""Connect to the LMTP daemon, on either a Unix or a TCP socket."""
if host[0] != '/':
return SMTP.connect(self, host, port)
# Handle Unix-domain sockets.
try:
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.connect(host)
except socket.error, msg:
if self.debuglevel > 0:
print>>stderr, 'connect fail:', host
if self.sock:
self.sock.close()
self.sock = None
raise socket.error, msg
(code, msg) = self.getreply()
if self.debuglevel > 0:
print>>stderr, "connect:", msg
return (code, msg) | [
"def",
"connect",
"(",
"self",
",",
"host",
"=",
"'localhost'",
",",
"port",
"=",
"0",
")",
":",
"if",
"host",
"[",
"0",
"]",
"!=",
"'/'",
":",
"return",
"SMTP",
".",
"connect",
"(",
"self",
",",
"host",
",",
"port",
")",
"# Handle Unix-domain socket... | https://github.com/ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework/blob/cb692f527e4e819b6c228187c5702d990a180043/external/Scripting Engine/packages/IronPython.StdLib.2.7.4/content/Lib/smtplib.py#L803-L822 | |
fonttools/fonttools | 892322aaff6a89bea5927379ec06bc0da3dfb7df | Lib/fontTools/mtiLib/__init__.py | python | parseChainedSubst | (lines, font, lookupMap=None) | return parseContext(lines, font, "ChainContextSubst", lookupMap=lookupMap) | [] | def parseChainedSubst(lines, font, lookupMap=None):
return parseContext(lines, font, "ChainContextSubst", lookupMap=lookupMap) | [
"def",
"parseChainedSubst",
"(",
"lines",
",",
"font",
",",
"lookupMap",
"=",
"None",
")",
":",
"return",
"parseContext",
"(",
"lines",
",",
"font",
",",
"\"ChainContextSubst\"",
",",
"lookupMap",
"=",
"lookupMap",
")"
] | https://github.com/fonttools/fonttools/blob/892322aaff6a89bea5927379ec06bc0da3dfb7df/Lib/fontTools/mtiLib/__init__.py#L780-L781 | |||
coreemu/core | 7e18a7a72023a69a92ad61d87461bd659ba27f7c | daemon/core/emane/nodes.py | python | EmaneNet.setnemposition | (self, iface: CoreInterface) | Publish a NEM location change event using the EMANE event service.
:param iface: interface to set nem position for | Publish a NEM location change event using the EMANE event service. | [
"Publish",
"a",
"NEM",
"location",
"change",
"event",
"using",
"the",
"EMANE",
"event",
"service",
"."
] | def setnemposition(self, iface: CoreInterface) -> None:
"""
Publish a NEM location change event using the EMANE event service.
:param iface: interface to set nem position for
"""
if self.session.emane.service is None:
logging.info("position service not available")
return
position = self._nem_position(iface)
if position:
nemid, lon, lat, alt = position
event = LocationEvent()
event.append(nemid, latitude=lat, longitude=lon, altitude=alt)
self.session.emane.service.publish(0, event) | [
"def",
"setnemposition",
"(",
"self",
",",
"iface",
":",
"CoreInterface",
")",
"->",
"None",
":",
"if",
"self",
".",
"session",
".",
"emane",
".",
"service",
"is",
"None",
":",
"logging",
".",
"info",
"(",
"\"position service not available\"",
")",
"return",... | https://github.com/coreemu/core/blob/7e18a7a72023a69a92ad61d87461bd659ba27f7c/daemon/core/emane/nodes.py#L137-L151 | ||
jgagneastro/coffeegrindsize | 22661ebd21831dba4cf32bfc6ba59fe3d49f879c | App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/font_manager.py | python | OSXInstalledFonts | (directories=None, fontext='ttf') | return [path
for directory in directories
for path in list_fonts(directory, get_fontext_synonyms(fontext))] | Get list of font files on OS X. | Get list of font files on OS X. | [
"Get",
"list",
"of",
"font",
"files",
"on",
"OS",
"X",
"."
] | def OSXInstalledFonts(directories=None, fontext='ttf'):
"""Get list of font files on OS X."""
if directories is None:
directories = OSXFontDirectories
return [path
for directory in directories
for path in list_fonts(directory, get_fontext_synonyms(fontext))] | [
"def",
"OSXInstalledFonts",
"(",
"directories",
"=",
"None",
",",
"fontext",
"=",
"'ttf'",
")",
":",
"if",
"directories",
"is",
"None",
":",
"directories",
"=",
"OSXFontDirectories",
"return",
"[",
"path",
"for",
"directory",
"in",
"directories",
"for",
"path"... | https://github.com/jgagneastro/coffeegrindsize/blob/22661ebd21831dba4cf32bfc6ba59fe3d49f879c/App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/font_manager.py#L217-L223 | |
DjangoPeng/tensorflow-in-depth | 2138e549dacead769be523fea245941b118f0308 | code/11_rnn_models/11.2_seq2seq.py | python | _extract_argmax_and_embed | (embedding,
output_projection=None,
update_embedding=True) | return loop_function | Get a loop_function that extracts the previous symbol and embeds it.
Args:
embedding: embedding tensor for symbols.
output_projection: None or a pair (W, B). If provided, each fed previous
output will first be multiplied by W and added B.
update_embedding: Boolean; if False, the gradients will not propagate
through the embeddings.
Returns:
A loop function. | Get a loop_function that extracts the previous symbol and embeds it.
Args:
embedding: embedding tensor for symbols.
output_projection: None or a pair (W, B). If provided, each fed previous
output will first be multiplied by W and added B.
update_embedding: Boolean; if False, the gradients will not propagate
through the embeddings.
Returns:
A loop function. | [
"Get",
"a",
"loop_function",
"that",
"extracts",
"the",
"previous",
"symbol",
"and",
"embeds",
"it",
".",
"Args",
":",
"embedding",
":",
"embedding",
"tensor",
"for",
"symbols",
".",
"output_projection",
":",
"None",
"or",
"a",
"pair",
"(",
"W",
"B",
")",
... | def _extract_argmax_and_embed(embedding,
output_projection=None,
update_embedding=True):
"""Get a loop_function that extracts the previous symbol and embeds it.
Args:
embedding: embedding tensor for symbols.
output_projection: None or a pair (W, B). If provided, each fed previous
output will first be multiplied by W and added B.
update_embedding: Boolean; if False, the gradients will not propagate
through the embeddings.
Returns:
A loop function.
"""
def loop_function(prev, _):
if output_projection is not None:
prev = nn_ops.xw_plus_b(prev, output_projection[0], output_projection[1])
prev_symbol = math_ops.argmax(prev, 1)
# Note that gradients will not propagate through the second parameter of
# embedding_lookup.
emb_prev = embedding_ops.embedding_lookup(embedding, prev_symbol)
if not update_embedding:
emb_prev = array_ops.stop_gradient(emb_prev)
return emb_prev
return loop_function | [
"def",
"_extract_argmax_and_embed",
"(",
"embedding",
",",
"output_projection",
"=",
"None",
",",
"update_embedding",
"=",
"True",
")",
":",
"def",
"loop_function",
"(",
"prev",
",",
"_",
")",
":",
"if",
"output_projection",
"is",
"not",
"None",
":",
"prev",
... | https://github.com/DjangoPeng/tensorflow-in-depth/blob/2138e549dacead769be523fea245941b118f0308/code/11_rnn_models/11.2_seq2seq.py#L74-L99 | |
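The loop function above implements greedy decoding: take the argmax of the previous step's (optionally projected) logits, then look up that symbol's embedding to feed the next step. A pure-Python sketch of the same idea, outside TensorFlow (the projection step is omitted for brevity; the embedding is just a list of vectors indexed by symbol id):

```python
def make_loop_function(embedding):
    # embedding: list of vectors, one per vocabulary symbol
    def loop_function(prev_logits, _step):
        # greedy pick per batch element, then embedding lookup
        symbols = [max(range(len(row)), key=row.__getitem__) for row in prev_logits]
        return [embedding[s] for s in symbols]
    return loop_function
```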
mediadrop/mediadrop | 59b477398fe306932cd93fc6bddb37fd45327373 | mediadrop/lib/helpers.py | python | store_transient_message | (cookie_name, text, time=None, path='/', **kwargs) | return msg | Store a JSON message dict in the named cookie.
The cookie will expire at the end of the session, but should be
explicitly deleted by whoever reads it.
:param cookie_name: The cookie name for this message.
:param text: Message text
:param time: Optional time to report. Defaults to now.
:param path: Optional cookie path
:param kwargs: Passed into the JSON dict
:returns: The message python dict
:rtype: dict | Store a JSON message dict in the named cookie. | [
"Store",
"a",
"JSON",
"message",
"dict",
"in",
"the",
"named",
"cookie",
"."
] | def store_transient_message(cookie_name, text, time=None, path='/', **kwargs):
"""Store a JSON message dict in the named cookie.
The cookie will expire at the end of the session, but should be
explicitly deleted by whoever reads it.
:param cookie_name: The cookie name for this message.
:param text: Message text
:param time: Optional time to report. Defaults to now.
:param path: Optional cookie path
:param kwargs: Passed into the JSON dict
:returns: The message python dict
:rtype: dict
"""
time = datetime.now().strftime('%H:%M, %B %d, %Y')
msg = kwargs
msg['text'] = text
msg['time'] = time or datetime.now().strftime('%H:%M, %B %d, %Y')
new_data = quote(simplejson.dumps(msg))
response.set_cookie(cookie_name, new_data, path=path)
return msg | [
"def",
"store_transient_message",
"(",
"cookie_name",
",",
"text",
",",
"time",
"=",
"None",
",",
"path",
"=",
"'/'",
",",
"*",
"*",
"kwargs",
")",
":",
"time",
"=",
"datetime",
".",
"now",
"(",
")",
".",
"strftime",
"(",
"'%H:%M, %B %d, %Y'",
")",
"ms... | https://github.com/mediadrop/mediadrop/blob/59b477398fe306932cd93fc6bddb37fd45327373/mediadrop/lib/helpers.py#L358-L379 | |
tensorflow/graphics | 86997957324bfbdd85848daae989b4c02588faa0 | tensorflow_graphics/projects/points_to_3Dobjects/transforms/transforms.py | python | color_augmentations | (image, variance=0.4) | return image | Color augmentations. | Color augmentations. | [
"Color",
"augmentations",
"."
] | def color_augmentations(image, variance=0.4):
"""Color augmentations."""
if variance:
print(variance)
image_grayscale = tf.image.rgb_to_grayscale(bgr_to_rgb(image))
image_grayscale_mean = tf.math.reduce_mean(
image_grayscale, axis=[-3, -2, -1], keepdims=True)
brightness_fn = functools.partial(brightness, variance=variance)
contrast_fn = functools.partial(
contrast, image_grayscale_mean=image_grayscale_mean, variance=variance)
saturation_fn = functools.partial(
saturation, image_grayscale=image_grayscale, variance=variance)
function_order = tf.random.shuffle([0, 1, 2])
ii = tf.constant(0)
def _apply_fn(image, ii):
tmp_ii = function_order[ii]
image = tf.switch_case(
tmp_ii, {
0: lambda: brightness_fn(image),
1: lambda: contrast_fn(image),
2: lambda: saturation_fn(image)
})
ii = ii + 1
return image, ii
(image, _) = tf.while_loop(lambda image, ii: tf.less(ii, 3),
_apply_fn(image, ii),
[image, ii])
image = lighting(image)
return image | [
"def",
"color_augmentations",
"(",
"image",
",",
"variance",
"=",
"0.4",
")",
":",
"if",
"variance",
":",
"print",
"(",
"variance",
")",
"image_grayscale",
"=",
"tf",
".",
"image",
".",
"rgb_to_grayscale",
"(",
"bgr_to_rgb",
"(",
"image",
")",
")",
"image_... | https://github.com/tensorflow/graphics/blob/86997957324bfbdd85848daae989b4c02588faa0/tensorflow_graphics/projects/points_to_3Dobjects/transforms/transforms.py#L67-L98 | |
mrkipling/maraschino | c6be9286937783ae01df2d6d8cebfc8b2734a7d7 | lib/rtorrent/rpc/__init__.py | python | call_method | (class_obj, method, *args) | return(ret_value) | Handles single RPC calls
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
@type class_obj: object
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str | Handles single RPC calls | [
"Handles",
"single",
"RPC",
"calls"
] | def call_method(class_obj, method, *args):
"""Handles single RPC calls
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
@type class_obj: object
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str
"""
if method.is_retriever():
args = args[:-1]
else:
assert args[-1] is not None, "No argument given."
if class_obj.__class__.__name__ == "RTorrent":
rt_obj = class_obj
else:
rt_obj = class_obj._rt_obj
# check if rpc method is even available
if not method.is_available(rt_obj):
_handle_unavailable_rpc_method(method, rt_obj)
m = Multicall(class_obj)
m.add(method, *args)
# only added one method, only getting one result back
ret_value = m.call()[0]
####### OBSOLETE ##########################################################
# if method.is_retriever():
# #value = process_result(method, ret_value)
# value = ret_value #MultiCall already processed the result
# else:
# # we're setting the user's input to method.varname
# # but we'll return the value that xmlrpc gives us
# value = process_result(method, args[-1])
##########################################################################
return(ret_value) | [
"def",
"call_method",
"(",
"class_obj",
",",
"method",
",",
"*",
"args",
")",
":",
"if",
"method",
".",
"is_retriever",
"(",
")",
":",
"args",
"=",
"args",
"[",
":",
"-",
"1",
"]",
"else",
":",
"assert",
"args",
"[",
"-",
"1",
"]",
"is",
"not",
... | https://github.com/mrkipling/maraschino/blob/c6be9286937783ae01df2d6d8cebfc8b2734a7d7/lib/rtorrent/rpc/__init__.py#L184-L222 | |
dbcli/mycli | 2af459dc410c5527322e294127f3367532f387a6 | mycli/sqlcompleter.py | python | SQLCompleter.unescape_name | (self, name) | return name | Unquote a string. | Unquote a string. | [
"Unquote",
"a",
"string",
"."
] | def unescape_name(self, name):
"""Unquote a string."""
if name and name[0] == '"' and name[-1] == '"':
name = name[1:-1]
return name | [
"def",
"unescape_name",
"(",
"self",
",",
"name",
")",
":",
"if",
"name",
"and",
"name",
"[",
"0",
"]",
"==",
"'\"'",
"and",
"name",
"[",
"-",
"1",
"]",
"==",
"'\"'",
":",
"name",
"=",
"name",
"[",
"1",
":",
"-",
"1",
"]",
"return",
"name"
] | https://github.com/dbcli/mycli/blob/2af459dc410c5527322e294127f3367532f387a6/mycli/sqlcompleter.py#L79-L84 | |
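`unescape_name` above simply strips one surrounding pair of double quotes. Round-tripping it with a matching escape function (a hypothetical counterpart, not shown in the source; the method is rewritten here as a free function) looks like:

```python
def escape_name(name):
    # quote an identifier so it can safely contain special characters
    return '"%s"' % name

def unescape_name(name):
    # strip one surrounding pair of double quotes, if present
    if name and name[0] == '"' and name[-1] == '"':
        name = name[1:-1]
    return name
```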
JaniceWuo/MovieRecommend | 4c86db64ca45598917d304f535413df3bc9fea65 | movierecommend/venv1/Lib/site-packages/pip-9.0.1-py3.6.egg/pip/_vendor/requests/models.py | python | Response.__bool__ | (self) | return self.ok | Returns true if :attr:`status_code` is 'OK'. | Returns true if :attr:`status_code` is 'OK'. | [
"Returns",
"true",
"if",
":",
"attr",
":",
"status_code",
"is",
"OK",
"."
] | def __bool__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok | [
"def",
"__bool__",
"(",
"self",
")",
":",
"return",
"self",
".",
"ok"
] | https://github.com/JaniceWuo/MovieRecommend/blob/4c86db64ca45598917d304f535413df3bc9fea65/movierecommend/venv1/Lib/site-packages/pip-9.0.1-py3.6.egg/pip/_vendor/requests/models.py#L618-L620 | |
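Defining `__bool__` as above lets a response object be used directly in a truth test (`if response: ...`). A minimal sketch with a hypothetical class (the simplifying assumption here is that any status below 400 counts as ok):

```python
class Response:
    def __init__(self, status_code):
        self.status_code = status_code

    @property
    def ok(self):
        # simplified: 2xx/3xx statuses count as success
        return self.status_code < 400

    def __bool__(self):
        """Returns true if :attr:`status_code` is 'OK'."""
        return self.ok
```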
petercerno/good-morning | bcc4355df2d3fcc2f7fe97f5c7fda6eb35811815 | good_morning/good_morning.py | python | KeyRatiosDownloader._upload_frames_to_db | (self, ticker, frames,
conn) | u"""Uploads the given array of pandas.DataFrames to the MySQL database.
:param ticker: Morningstar ticker.
:param frames: Array of pandas.DataFrames to be uploaded.
:param conn: MySQL connection. | u"""Uploads the given array of pandas.DataFrames to the MySQL database. | [
"u",
"Uploads",
"the",
"given",
"array",
"of",
"pandas",
".",
"DataFrames",
"to",
"the",
"MySQL",
"database",
"."
] | def _upload_frames_to_db(self, ticker, frames,
conn):
u"""Uploads the given array of pandas.DataFrames to the MySQL database.
:param ticker: Morningstar ticker.
:param frames: Array of pandas.DataFrames to be uploaded.
:param conn: MySQL connection.
"""
for frame in frames:
table_name = self._get_db_table_name(frame)
if not _db_table_exists(table_name, conn):
_db_execute(self._get_db_create_table(frame), conn)
_db_execute(self._get_db_replace_values(ticker, frame), conn) | [
"def",
"_upload_frames_to_db",
"(",
"self",
",",
"ticker",
",",
"frames",
",",
"conn",
")",
":",
"for",
"frame",
"in",
"frames",
":",
"table_name",
"=",
"self",
".",
"_get_db_table_name",
"(",
"frame",
")",
"if",
"not",
"_db_table_exists",
"(",
"table_name",... | https://github.com/petercerno/good-morning/blob/bcc4355df2d3fcc2f7fe97f5c7fda6eb35811815/good_morning/good_morning.py#L189-L201 | ||
localstack/localstack | ec8b72d5c926ae8495ca50ce168494247aef54be | localstack/services/s3/s3_utils.py | python | is_static_website | (headers) | return bool(re.match(S3_STATIC_WEBSITE_HOST_REGEX, headers.get("host", ""))) | Determine if the incoming request is for s3 static website hosting
returns True if the host matches website regex
returns False if the host does not match website regex | Determine if the incoming request is for s3 static website hosting
returns True if the host matches website regex
returns False if the host does not match website regex | [
"Determine",
"if",
"the",
"incoming",
"request",
"is",
"for",
"s3",
"static",
"website",
"hosting",
"returns",
"True",
"if",
"the",
"host",
"matches",
"website",
"regex",
"returns",
"False",
"if",
"the",
"host",
"does",
"not",
"matches",
"website",
"regex"
] | def is_static_website(headers):
"""
Determine if the incoming request is for s3 static website hosting
returns True if the host matches website regex
returns False if the host does not match website regex
"""
return bool(re.match(S3_STATIC_WEBSITE_HOST_REGEX, headers.get("host", ""))) | [
"def",
"is_static_website",
"(",
"headers",
")",
":",
"return",
"bool",
"(",
"re",
".",
"match",
"(",
"S3_STATIC_WEBSITE_HOST_REGEX",
",",
"headers",
".",
"get",
"(",
"\"host\"",
",",
"\"\"",
")",
")",
")"
] | https://github.com/localstack/localstack/blob/ec8b72d5c926ae8495ca50ce168494247aef54be/localstack/services/s3/s3_utils.py#L80-L86 | |
home-assistant/core | 265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1 | homeassistant/components/alarm_control_panel/group.py | python | async_describe_on_off_states | (
hass: HomeAssistant, registry: GroupIntegrationRegistry
) | Describe group on off states. | Describe group on off states. | [
"Describe",
"group",
"on",
"off",
"states",
"."
] | def async_describe_on_off_states(
hass: HomeAssistant, registry: GroupIntegrationRegistry
) -> None:
"""Describe group on off states."""
registry.on_off_states(
{
STATE_ALARM_ARMED_AWAY,
STATE_ALARM_ARMED_CUSTOM_BYPASS,
STATE_ALARM_ARMED_HOME,
STATE_ALARM_ARMED_NIGHT,
STATE_ALARM_ARMED_VACATION,
STATE_ALARM_TRIGGERED,
},
STATE_OFF,
) | [
"def",
"async_describe_on_off_states",
"(",
"hass",
":",
"HomeAssistant",
",",
"registry",
":",
"GroupIntegrationRegistry",
")",
"->",
"None",
":",
"registry",
".",
"on_off_states",
"(",
"{",
"STATE_ALARM_ARMED_AWAY",
",",
"STATE_ALARM_ARMED_CUSTOM_BYPASS",
",",
"STATE_... | https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/alarm_control_panel/group.py#L18-L32 | ||
sagemath/sage | f9b2db94f675ff16963ccdefba4f1a3393b3fe0d | src/sage/sets/condition_set.py | python | ConditionSet.ambient | (self) | return self._universe | r"""
Return the universe of ``self``.
EXAMPLES::
sage: Evens = ConditionSet(ZZ, is_even); Evens
{ x ∈ Integer Ring : <function is_even at 0x...>(x) }
sage: Evens.ambient()
Integer Ring | r"""
Return the universe of ``self``. | [
"r",
"Return",
"the",
"universe",
"of",
"self",
"."
] | def ambient(self):
r"""
Return the universe of ``self``.
EXAMPLES::
sage: Evens = ConditionSet(ZZ, is_even); Evens
{ x ∈ Integer Ring : <function is_even at 0x...>(x) }
sage: Evens.ambient()
Integer Ring
"""
return self._universe | [
"def",
"ambient",
"(",
"self",
")",
":",
"return",
"self",
".",
"_universe"
] | https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/sets/condition_set.py#L389-L400 | |
rlworkgroup/garage | b4abe07f0fa9bac2cb70e4a3e315c2e7e5b08507 | src/garage/tf/optimizers/conjugate_gradient_optimizer.py | python | HessianVectorProduct.__getstate__ | (self) | return new_dict | Object.__getstate__.
Returns:
dict: the state to be pickled for the instance. | Object.__getstate__. | [
"Object",
".",
"__getstate__",
"."
] | def __getstate__(self):
"""Object.__getstate__.
Returns:
dict: the state to be pickled for the instance.
"""
new_dict = self.__dict__.copy()
del new_dict['_hvp_fun']
return new_dict | [
"def",
"__getstate__",
"(",
"self",
")",
":",
"new_dict",
"=",
"self",
".",
"__dict__",
".",
"copy",
"(",
")",
"del",
"new_dict",
"[",
"'_hvp_fun'",
"]",
"return",
"new_dict"
] | https://github.com/rlworkgroup/garage/blob/b4abe07f0fa9bac2cb70e4a3e315c2e7e5b08507/src/garage/tf/optimizers/conjugate_gradient_optimizer.py#L82-L91 | |
coffeehb/Some-PoC-oR-ExP | eb757a6255c37cf7a2269482aa3d750a9a80ded1 | 验证Joomla是否存在反序列化漏洞的脚本/批量/hackUtils-master/bs4/element.py | python | Tag.decompose | (self) | Recursively destroys the contents of this tree. | Recursively destroys the contents of this tree. | [
"Recursively",
"destroys",
"the",
"contents",
"of",
"this",
"tree",
"."
] | def decompose(self):
"""Recursively destroys the contents of this tree."""
self.extract()
i = self
while i is not None:
next = i.next_element
i.__dict__.clear()
i.contents = []
i = next | [
"def",
"decompose",
"(",
"self",
")",
":",
"self",
".",
"extract",
"(",
")",
"i",
"=",
"self",
"while",
"i",
"is",
"not",
"None",
":",
"next",
"=",
"i",
".",
"next_element",
"i",
".",
"__dict__",
".",
"clear",
"(",
")",
"i",
".",
"contents",
"=",... | https://github.com/coffeehb/Some-PoC-oR-ExP/blob/eb757a6255c37cf7a2269482aa3d750a9a80ded1/验证Joomla是否存在反序列化漏洞的脚本/批量/hackUtils-master/bs4/element.py#L856-L864 | ||
seemethere/nba_py | f1cd2b0f2702601accf21fef4b721a1564ef4705 | nba_py/game.py | python | BoxscoreUsage.sql_team_usage | (self) | return _api_scrape(self.json, 1) | [] | def sql_team_usage(self):
return _api_scrape(self.json, 1) | [
"def",
"sql_team_usage",
"(",
"self",
")",
":",
"return",
"_api_scrape",
"(",
"self",
".",
"json",
",",
"1",
")"
] | https://github.com/seemethere/nba_py/blob/f1cd2b0f2702601accf21fef4b721a1564ef4705/nba_py/game.py#L106-L107 | |||
etingof/pysnmp | becd15c79c9a6b5696928ecd50bf5cca8b1770a1 | pysnmp/proto/rfc1902.py | python | Bits.withNamedBits | (cls, **values) | | return X | Creates a subclass with discrete named bits constraint.
Reduce fully duplicate enumerations along the way. | Creates a subclass with discrete named bits constraint. | [
"Creates",
"a",
"subclass",
"with",
"discreet",
"named",
"bits",
"constraint",
"."
] | def withNamedBits(cls, **values):
"""Creates a subclass with discreet named bits constraint.
Reduce fully duplicate enumerations along the way.
"""
enums = set(cls.namedValues.items())
enums.update(values.items())
class X(cls):
namedValues = namedval.NamedValues(*enums)
X.__name__ = cls.__name__
return X | [
"def",
"withNamedBits",
"(",
"cls",
",",
"*",
"*",
"values",
")",
":",
"enums",
"=",
"set",
"(",
"cls",
".",
"namedValues",
".",
"items",
"(",
")",
")",
"enums",
".",
"update",
"(",
"values",
".",
"items",
"(",
")",
")",
"class",
"X",
"(",
"cls",... | https://github.com/etingof/pysnmp/blob/becd15c79c9a6b5696928ecd50bf5cca8b1770a1/pysnmp/proto/rfc1902.py#L698-L710 | |
scikit-hep/scikit-hep | 506149b352eeb2291f24aef3f40691b5f6be2da7 | skhep/math/vectors.py | python | LorentzVector.tolist | (self) | return list(self.__vector3d) + [self.__t] | Return the LorentzVector as a list. | Return the LorentzVector as a list. | [
"Return",
"the",
"LorentzVector",
"as",
"a",
"list",
"."
] | def tolist(self):
"""Return the LorentzVector as a list."""
return list(self.__vector3d) + [self.__t] | [
"def",
"tolist",
"(",
"self",
")",
":",
"return",
"list",
"(",
"self",
".",
"__vector3d",
")",
"+",
"[",
"self",
".",
"__t",
"]"
] | https://github.com/scikit-hep/scikit-hep/blob/506149b352eeb2291f24aef3f40691b5f6be2da7/skhep/math/vectors.py#L758-L760 | |
vlachoudis/bCNC | 67126b4894dabf6579baf47af8d0f9b7de35e6e3 | bCNC/lib/tkExtra.py | python | TreeSplitter.isempty | (self) | return self.tree is None | [] | def isempty(self): return self.tree is None | [
"def",
"isempty",
"(",
"self",
")",
":",
"return",
"self",
".",
"tree",
"is",
"None"
] | https://github.com/vlachoudis/bCNC/blob/67126b4894dabf6579baf47af8d0f9b7de35e6e3/bCNC/lib/tkExtra.py#L3524-L3524 | |||
brightmart/roberta_zh | 469246096b0c3f43e4de395ad3c09dacee16d591 | tokenization.py | python | WordpieceTokenizer.tokenize | (self, text) | return output_tokens | Tokenizes a piece of text into its word pieces.
This uses a greedy longest-match-first algorithm to perform tokenization
using the given vocabulary.
For example:
input = "unaffable"
output = ["un", "##aff", "##able"]
Args:
text: A single token or whitespace separated tokens. This should have
already been passed through `BasicTokenizer`.
Returns:
A list of wordpiece tokens. | Tokenizes a piece of text into its word pieces. | [
"Tokenizes",
"a",
"piece",
"of",
"text",
"into",
"its",
"word",
"pieces",
"."
] | def tokenize(self, text):
"""Tokenizes a piece of text into its word pieces.
This uses a greedy longest-match-first algorithm to perform tokenization
using the given vocabulary.
For example:
input = "unaffable"
output = ["un", "##aff", "##able"]
Args:
text: A single token or whitespace separated tokens. This should have
already been passed through `BasicTokenizer`.
Returns:
A list of wordpiece tokens.
"""
text = convert_to_unicode(text)
output_tokens = []
for token in whitespace_tokenize(text):
chars = list(token)
if len(chars) > self.max_input_chars_per_word:
output_tokens.append(self.unk_token)
continue
is_bad = False
start = 0
sub_tokens = []
while start < len(chars):
end = len(chars)
cur_substr = None
while start < end:
substr = "".join(chars[start:end])
if start > 0:
substr = "##" + substr
if substr in self.vocab:
cur_substr = substr
break
end -= 1
if cur_substr is None:
is_bad = True
break
sub_tokens.append(cur_substr)
start = end
if is_bad:
output_tokens.append(self.unk_token)
else:
output_tokens.extend(sub_tokens)
return output_tokens | [
"def",
"tokenize",
"(",
"self",
",",
"text",
")",
":",
"text",
"=",
"convert_to_unicode",
"(",
"text",
")",
"output_tokens",
"=",
"[",
"]",
"for",
"token",
"in",
"whitespace_tokenize",
"(",
"text",
")",
":",
"chars",
"=",
"list",
"(",
"token",
")",
"if... | https://github.com/brightmart/roberta_zh/blob/469246096b0c3f43e4de395ad3c09dacee16d591/tokenization.py#L310-L361 | |
fogleman/pg | 124ea3803c788b2c98c4f3a428e5d26842a67b58 | pg/glfw.py | python | swap_interval | (interval) | Sets the swap interval for the current context.
Wrapper for:
void glfwSwapInterval(int interval); | Sets the swap interval for the current context. | [
"Sets",
"the",
"swap",
"interval",
"for",
"the",
"current",
"context",
"."
] | def swap_interval(interval):
'''
Sets the swap interval for the current context.
Wrapper for:
void glfwSwapInterval(int interval);
'''
_glfw.glfwSwapInterval(interval) | [
"def",
"swap_interval",
"(",
"interval",
")",
":",
"_glfw",
".",
"glfwSwapInterval",
"(",
"interval",
")"
] | https://github.com/fogleman/pg/blob/124ea3803c788b2c98c4f3a428e5d26842a67b58/pg/glfw.py#L1610-L1617 | ||
OpenMined/PySyft | f181ca02d307d57bfff9477610358df1a12e3ac9 | packages/syft/src/syft/lib/util.py | python | copy_static_methods | (from_class: type, to_class: type) | Copies all static methods from one class to another class
This utility was initialized during the creation of the Constructor for PyTorch's "th.Tensor" class. Since we
replace each original constructor (th.Tensor) with one we implement (torch.UppercaseTensorConstructor), we also
need to make sure that our new constructor has any static methods which were previously stored on th.Tensor.
Otherwise, the library might look for them there, not find them, and then trigger an error.
Args:
from_class (Type): the class on which we look for static methods to copy
to_class (Type): the class onto which we copy all static methods found in <from_class> | Copies all static methods from one class to another class | [
"Copies",
"all",
"static",
"methods",
"from",
"one",
"class",
"to",
"another",
"class"
] | def copy_static_methods(from_class: type, to_class: type) -> None:
"""Copies all static methods from one class to another class
This utility was initialized during the creation of the Constructor for PyTorch's "th.Tensor" class. Since we
replace each original constructor (th.Tensor) with one we implement (torch.UppercaseTensorConstructor), we also
need to make sure that our new constructor has any static methods which were previously stored on th.Tensor.
Otherwise, the library might look for them there, not find them, and then trigger an error.
Args:
from_class (Type): the class on which we look for static methods to copy
to_class (Type): the class onto which we copy all static methods found in <from_class>
"""
# there are no static methods if from_class itself is not a type (sometimes funcs get passed in)
for attr in dir(from_class):
if is_static_method(klass=from_class, attr=attr):
setattr(to_class, attr, getattr(from_class, attr)) | [
"def",
"copy_static_methods",
"(",
"from_class",
":",
"type",
",",
"to_class",
":",
"type",
")",
"->",
"None",
":",
"# there are no static methods if from_class itself is not a type (sometimes funcs get passed in)",
"for",
"attr",
"in",
"dir",
"(",
"from_class",
")",
":",... | https://github.com/OpenMined/PySyft/blob/f181ca02d307d57bfff9477610358df1a12e3ac9/packages/syft/src/syft/lib/util.py#L77-L94 | ||
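A self-contained illustration of the pattern described above; the `Original`/`Replacement` class names and the `is_static_method` helper body are invented here for the sketch (the real helper lives elsewhere in PySyft):

```python
import inspect

def is_static_method(klass, attr):
    """True if `attr` is defined on `klass` (or a base) as a staticmethod."""
    for cls in inspect.getmro(klass):
        if attr in cls.__dict__:
            return isinstance(cls.__dict__[attr], staticmethod)
    return False

def copy_static_methods(from_class, to_class):
    """Copy every staticmethod found on from_class onto to_class."""
    for attr in dir(from_class):
        if is_static_method(from_class, attr):
            setattr(to_class, attr, getattr(from_class, attr))

class Original:
    @staticmethod
    def zeros(n):
        return [0] * n

class Replacement:
    pass

copy_static_methods(Original, Replacement)
print(Replacement.zeros(3))  # [0, 0, 0]
```

After the copy, code that looks for the static method on the replacement class keeps working, which is exactly the failure mode the docstring warns about.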
nilearn/nilearn | 9edba4471747efacf21260bf470a346307f52706 | nilearn/externals/tempita/__init__.py | python | get_file_template | (name, from_template) | return from_template.__class__.from_filename(
path, namespace=from_template.namespace,
get_template=from_template.get_template) | [] | def get_file_template(name, from_template):
path = os.path.join(os.path.dirname(from_template.name), name)
return from_template.__class__.from_filename(
path, namespace=from_template.namespace,
get_template=from_template.get_template) | [
"def",
"get_file_template",
"(",
"name",
",",
"from_template",
")",
":",
"path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"from_template",
".",
"name",
")",
",",
"name",
")",
"return",
"from_template",
".",
"__c... | https://github.com/nilearn/nilearn/blob/9edba4471747efacf21260bf470a346307f52706/nilearn/externals/tempita/__init__.py#L82-L86 | |||
khanhnamle1994/natural-language-processing | 01d450d5ac002b0156ef4cf93a07cb508c1bcdc5 | assignment1/.env/lib/python2.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py | python | TarFile._getmember | (self, name, tarinfo=None, normalize=False) | Find an archive member by name from bottom to top.
If tarinfo is given, it is used as the starting point. | Find an archive member by name from bottom to top.
If tarinfo is given, it is used as the starting point. | [
"Find",
"an",
"archive",
"member",
"by",
"name",
"from",
"bottom",
"to",
"top",
".",
"If",
"tarinfo",
"is",
"given",
"it",
"is",
"used",
"as",
"the",
"starting",
"point",
"."
] | def _getmember(self, name, tarinfo=None, normalize=False):
"""Find an archive member by name from bottom to top.
If tarinfo is given, it is used as the starting point.
"""
# Ensure that all members have been loaded.
members = self.getmembers()
# Limit the member search list up to tarinfo.
if tarinfo is not None:
members = members[:members.index(tarinfo)]
if normalize:
name = os.path.normpath(name)
for member in reversed(members):
if normalize:
member_name = os.path.normpath(member.name)
else:
member_name = member.name
if name == member_name:
return member | [
"def",
"_getmember",
"(",
"self",
",",
"name",
",",
"tarinfo",
"=",
"None",
",",
"normalize",
"=",
"False",
")",
":",
"# Ensure that all members have been loaded.",
"members",
"=",
"self",
".",
"getmembers",
"(",
")",
"# Limit the member search list up to tarinfo.",
... | https://github.com/khanhnamle1994/natural-language-processing/blob/01d450d5ac002b0156ef4cf93a07cb508c1bcdc5/assignment1/.env/lib/python2.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py#L2463-L2484 | ||
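The bottom-to-top search above matters because a tar archive may contain the same path more than once, and the last entry wins on extraction. A stripped-down model of that lookup, with plain strings standing in for `TarInfo` objects (an assumption to keep it testable):

```python
import os

def getmember_last(members, name, normalize=False):
    """Return the last matching member, mirroring tarfile's bottom-to-top scan."""
    if normalize:
        name = os.path.normpath(name)
    for member in reversed(members):
        member_name = os.path.normpath(member) if normalize else member
        if name == member_name:
            return member
    return None

# Two entries that normalize to the same path: the later one is found first.
members = ["etc/passwd", "etc/./passwd"]
print(getmember_last(members, "etc/passwd", normalize=True))  # 'etc/./passwd'
```

The `normalize` flag reproduces why the real method normalizes both sides: differently spelled paths can still refer to the same file.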
andresriancho/w3af | cd22e5252243a87aaa6d0ddea47cf58dacfe00a9 | w3af/plugins/crawl/dot_ds_store.py | python | dot_ds_store._check_and_analyze | (self, domain_path) | Check if a .DS_Store filename exists in the domain_path.
:return: None, everything is saved to the self.out_queue. | Check if a .DS_Store filename exists in the domain_path. | [
"Check",
"if",
"a",
".",
"DS_Store",
"filename",
"exists",
"in",
"the",
"domain_path",
"."
] | def _check_and_analyze(self, domain_path):
"""
Check if a .DS_Store filename exists in the domain_path.
:return: None, everything is saved to the self.out_queue.
"""
# Request the file
url = domain_path.url_join(self.DS_STORE)
try:
response = self.http_get_and_parse(url, binary_response=True)
except BaseFrameworkException, w3:
msg = 'Failed to GET .DS_Store file: %s. Exception: %s.'
om.out.debug(msg, (url, w3))
return
# Check if it's a .DS_Store file
if is_404(response):
return
try:
store = DsStore(response.get_raw_body())
entries = store.get_file_entries()
except Exception, e:
om.out.debug('Unexpected error while parsing DS_Store file: "%s"' % e)
return
parsed_url_list = []
for filename in entries:
parsed_url_list.append(domain_path.url_join(filename))
self.worker_pool.map(self.http_get_and_parse, parsed_url_list)
desc = ('A .DS_Store file was found at: %s. The contents of this file'
' disclose filenames')
desc %= (response.get_url())
v = Vuln('.DS_Store file found', desc, severity.LOW, response.id, self.get_name())
v.set_url(response.get_url())
kb.kb.append(self, 'dot_ds_store', v)
om.out.vulnerability(v.get_desc(), severity=v.get_severity()) | [
"def",
"_check_and_analyze",
"(",
"self",
",",
"domain_path",
")",
":",
"# Request the file",
"url",
"=",
"domain_path",
".",
"url_join",
"(",
"self",
".",
"DS_STORE",
")",
"try",
":",
"response",
"=",
"self",
".",
"http_get_and_parse",
"(",
"url",
",",
"bin... | https://github.com/andresriancho/w3af/blob/cd22e5252243a87aaa6d0ddea47cf58dacfe00a9/w3af/plugins/crawl/dot_ds_store.py#L74-L116 | ||
wxWidgets/Phoenix | b2199e299a6ca6d866aa6f3d0888499136ead9d6 | buildtools/backports/six.py | python | with_metaclass | (meta, *bases) | return type.__new__(metaclass, 'temporary_class', (), {}) | Create a base class with a metaclass. | Create a base class with a metaclass. | [
"Create",
"a",
"base",
"class",
"with",
"a",
"metaclass",
"."
] | def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(type):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
@classmethod
def __prepare__(cls, name, this_bases):
return meta.__prepare__(name, bases)
return type.__new__(metaclass, 'temporary_class', (), {}) | [
"def",
"with_metaclass",
"(",
"meta",
",",
"*",
"bases",
")",
":",
"# This requires a bit of explanation: the basic idea is to make a dummy",
"# metaclass for one level of class instantiation that replaces itself with",
"# the actual metaclass.",
"class",
"metaclass",
"(",
"type",
")... | https://github.com/wxWidgets/Phoenix/blob/b2199e299a6ca6d866aa6f3d0888499136ead9d6/buildtools/backports/six.py#L819-L832 | |
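A minimal usage sketch of the temporary-class trick above (the `__prepare__` hook is dropped for brevity); the `Registry` metaclass and `Plugin` class are invented names for the demonstration:

```python
def with_metaclass(meta, *bases):
    """Create a base class with a metaclass (six-style temporary-class trick)."""
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            # The dummy class replaces itself with the real metaclass here.
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})

class Registry(type):
    classes = []
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        Registry.classes.append(name)  # record every class built by this metaclass
        return cls

class Plugin(with_metaclass(Registry, object)):
    pass

print(type(Plugin) is Registry)  # True
print(Registry.classes)          # ['Plugin']
```

Note that `type.__new__` is used directly for the temporary class, so `Registry.__new__` only ever sees the final class, never `'temporary_class'`.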
lukelbd/proplot | d0bc9c0857d9295b380b8613ef9aba81d50a067c | proplot/config.py | python | Configurator._load_file | (self, path) | return kw_proplot, kw_matplotlib | Return dictionaries of proplot and matplotlib settings loaded from the file. | Return dictionaries of proplot and matplotlib settings loaded from the file. | [
"Return",
"dictionaries",
"of",
"proplot",
"and",
"matplotlib",
"settings",
"loaded",
"from",
"the",
"file",
"."
] | def _load_file(self, path):
"""
Return dictionaries of proplot and matplotlib settings loaded from the file.
"""
added = set()
path = os.path.expanduser(path)
kw_proplot = {}
kw_matplotlib = {}
with open(path, 'r') as fh:
for idx, line in enumerate(fh):
# Strip comments
message = f'line #{idx + 1} in file {path!r}'
stripped = line.split('#', 1)[0].strip()
if not stripped:
pass # no warning
continue
# Parse the pair
pair = stripped.split(':', 1)
if len(pair) != 2:
warnings._warn_proplot(f'Illegal {message}:\n{line}"')
continue
# Detect duplicates
key, val = map(str.strip, pair)
if key in added:
warnings._warn_proplot(f'Duplicate rc key {key!r} on {message}.')
added.add(key)
# Get child dictionaries. Careful to have informative messages
with warnings.catch_warnings():
warnings.simplefilter('error', warnings.ProplotWarning)
try:
ikw_proplot, ikw_matplotlib = self._get_params(key, val)
except KeyError:
warnings._warn_proplot(
f'Invalid rc key {key!r} on {message}.', 'default'
)
continue
except ValueError as err:
warnings._warn_proplot(
f'Invalid rc val {val!r} for key {key!r} on {message}: {err}', 'default' # noqa: E501
)
continue
except warnings.ProplotWarning as err:
warnings._warn_proplot(
f'Outdated rc key {key!r} on {message}: {err}', 'default'
)
warnings.simplefilter('ignore', warnings.ProplotWarning)
ikw_proplot, ikw_matplotlib = self._get_params(key, val)
# Update the settings
kw_proplot.update(ikw_proplot)
kw_matplotlib.update(ikw_matplotlib)
return kw_proplot, kw_matplotlib | [
"def",
"_load_file",
"(",
"self",
",",
"path",
")",
":",
"added",
"=",
"set",
"(",
")",
"path",
"=",
"os",
".",
"path",
".",
"expanduser",
"(",
"path",
")",
"kw_proplot",
"=",
"{",
"}",
"kw_matplotlib",
"=",
"{",
"}",
"with",
"open",
"(",
"path",
... | https://github.com/lukelbd/proplot/blob/d0bc9c0857d9295b380b8613ef9aba81d50a067c/proplot/config.py#L1376-L1427 | |
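The parser above boils down to: strip `#` comments, split on the first `:`, warn on illegal lines and duplicate keys. A minimal stand-alone version of that core loop, with the warning machinery replaced by collected error strings (an assumption to keep it testable):

```python
def parse_rc(lines):
    """Parse 'key: value' lines, ignoring '#' comments; report bad/duplicate lines."""
    params, errors, seen = {}, [], set()
    for idx, line in enumerate(lines):
        stripped = line.split('#', 1)[0].strip()  # drop trailing comments
        if not stripped:
            continue
        pair = stripped.split(':', 1)             # split on the FIRST colon only
        if len(pair) != 2:
            errors.append(f'illegal line #{idx + 1}')
            continue
        key, val = map(str.strip, pair)
        if key in seen:
            errors.append(f'duplicate key {key!r}')
        seen.add(key)
        params[key] = val                         # last duplicate wins
    return params, errors

params, errors = parse_rc(['cycle: colorblind  # palette', 'bad line', 'cycle: deep'])
print(params)  # {'cycle': 'deep'}
print(errors)  # ['illegal line #2', "duplicate key 'cycle'"]
```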
oxuva/long-term-tracking-benchmark | fd49bed27af85bb78120598ce65397470a387520 | python/oxuva/assess.py | python | QuantizedAssessment.get | (self, min_time=None, max_time=None) | return assessment_sum(subset) | Get cumulative assessment of interval [min_time, max_time]. | Get cumulative assessment of interval [min_time, max_time]. | [
"Get",
"cumulative",
"assessment",
"of",
"interval",
"[",
"min_time",
"max_time",
"]",
"."
] | def get(self, min_time=None, max_time=None):
'''Get cumulative assessment of interval [min_time, max_time].'''
# Include all bins within [min_time, max_time].
subset = []
for interval, value in self.elems:
u, v = interval
# if min_time <= u <= v <= max_time:
if (min_time is None or min_time <= u) and (max_time is None or v <= max_time):
subset.append(value)
elif (min_time < u < max_time) or (min_time < v < max_time):
# If interval is not within [min_time, max_time],
# then require that it is entirely outside [min_time, max_time].
raise ValueError('interval {} straddles requested {}'.format(
str((u, v)), str((min_time, max_time))))
return assessment_sum(subset) | [
"def",
"get",
"(",
"self",
",",
"min_time",
"=",
"None",
",",
"max_time",
"=",
"None",
")",
":",
"# Include all bins within [min_time, max_time].",
"subset",
"=",
"[",
"]",
"for",
"interval",
",",
"value",
"in",
"self",
".",
"elems",
":",
"u",
",",
"v",
... | https://github.com/oxuva/long-term-tracking-benchmark/blob/fd49bed27af85bb78120598ce65397470a387520/python/oxuva/assess.py#L411-L425 | |
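The interval filter above accepts a bin only when it lies entirely inside `[min_time, max_time]` and raises if a bin straddles a boundary. Stripped of the assessment plumbing (plain values stand in for assessment objects, and the bounds are required rather than optional, both assumptions for the sketch):

```python
def select_bins(elems, min_time, max_time):
    """Return values whose (u, v) bin lies wholly inside [min_time, max_time]."""
    subset = []
    for (u, v), value in elems:
        if min_time <= u and v <= max_time:
            subset.append(value)
        elif min_time < u < max_time or min_time < v < max_time:
            # A bin poking partly into the window cannot be summed cleanly.
            raise ValueError(f'interval {(u, v)} straddles {(min_time, max_time)}')
    return subset

elems = [((0, 10), 'a'), ((10, 20), 'b'), ((20, 30), 'c')]
print(select_bins(elems, 0, 20))  # ['a', 'b']
```

Bins fully outside the window are silently skipped; only partially overlapping bins are treated as an error.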
mahmoud/glom | 67cd5a4ed7b21607dfefbafb86d9f93314afd6e1 | glom/core.py | python | format_target_spec_trace | (scope, root_error, width=TRACE_WIDTH, depth=0, prev_target=_MISSING, last_branch=True) | return "\n".join(segments) | unpack a scope into a multi-line but short summary | unpack a scope into a multi-line but short summary | [
"unpack",
"a",
"scope",
"into",
"a",
"multi",
"-",
"line",
"but",
"short",
"summary"
] | def format_target_spec_trace(scope, root_error, width=TRACE_WIDTH, depth=0, prev_target=_MISSING, last_branch=True):
"""
unpack a scope into a multi-line but short summary
"""
segments = []
indent = " " + "|" * depth
tick = "| " if depth else "- "
def mk_fmt(label, t=None):
pre = indent + (t or tick) + label + ": "
fmt_width = width - len(pre)
return lambda v: pre + _format_trace_value(v, fmt_width)
fmt_t = mk_fmt("Target")
fmt_s = mk_fmt("Spec")
fmt_b = mk_fmt("Spec", "+ ")
recurse = lambda s, last=False: format_target_spec_trace(s, root_error, width, depth + 1, prev_target, last)
tb_exc_line = lambda e: "".join(traceback.format_exception_only(type(e), e))[:-1]
fmt_e = lambda e: indent + tick + tb_exc_line(e)
for scope, spec, target, error, branches in _unpack_stack(scope):
if target is not prev_target:
segments.append(fmt_t(target))
prev_target = target
if branches:
segments.append(fmt_b(spec))
segments.extend([recurse(s) for s in branches[:-1]])
segments.append(recurse(branches[-1], last_branch))
else:
segments.append(fmt_s(spec))
if error is not None and error is not root_error:
last_line_error = True
segments.append(fmt_e(error))
else:
last_line_error = False
if depth: # \ on first line, X on last line
remark = lambda s, m: s[:depth + 1] + m + s[depth + 2:]
segments[0] = remark(segments[0], "\\")
if not last_branch or last_line_error:
segments[-1] = remark(segments[-1], "X")
return "\n".join(segments) | [
"def",
"format_target_spec_trace",
"(",
"scope",
",",
"root_error",
",",
"width",
"=",
"TRACE_WIDTH",
",",
"depth",
"=",
"0",
",",
"prev_target",
"=",
"_MISSING",
",",
"last_branch",
"=",
"True",
")",
":",
"segments",
"=",
"[",
"]",
"indent",
"=",
"\" \"",... | https://github.com/mahmoud/glom/blob/67cd5a4ed7b21607dfefbafb86d9f93314afd6e1/glom/core.py#L227-L264 | |
aws-samples/aws-kube-codesuite | ab4e5ce45416b83bffb947ab8d234df5437f4fca | src/kubernetes/client/models/v1_component_condition.py | python | V1ComponentCondition.__repr__ | (self) | return self.to_str() | For `print` and `pprint` | For `print` and `pprint` | [
"For",
"print",
"and",
"pprint"
] | def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str() | [
"def",
"__repr__",
"(",
"self",
")",
":",
"return",
"self",
".",
"to_str",
"(",
")"
] | https://github.com/aws-samples/aws-kube-codesuite/blob/ab4e5ce45416b83bffb947ab8d234df5437f4fca/src/kubernetes/client/models/v1_component_condition.py#L180-L184 | |
jimmysong/pb-exercises | c5e64075c47503a40063aa836c06a452af14246d | session5/complete/helper.py | python | encode_varint | (i) | encodes an integer as a varint | encodes an integer as a varint | [
"encodes",
"an",
"integer",
"as",
"a",
"varint"
] | def encode_varint(i):
'''encodes an integer as a varint'''
if i < 0xfd:
return bytes([i])
elif i < 0x10000:
return b'\xfd' + int_to_little_endian(i, 2)
elif i < 0x100000000:
return b'\xfe' + int_to_little_endian(i, 4)
elif i < 0x10000000000000000:
return b'\xff' + int_to_little_endian(i, 8)
else:
raise RuntimeError(f'integer too large: {i}') | [
"def",
"encode_varint",
"(",
"i",
")",
":",
"if",
"i",
"<",
"0xfd",
":",
"return",
"bytes",
"(",
"[",
"i",
"]",
")",
"elif",
"i",
"<",
"0x10000",
":",
"return",
"b'\\xfd'",
"+",
"int_to_little_endian",
"(",
"i",
",",
"2",
")",
"elif",
"i",
"<",
"... | https://github.com/jimmysong/pb-exercises/blob/c5e64075c47503a40063aa836c06a452af14246d/session5/complete/helper.py#L107-L118 | ||
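The helper above follows Bitcoin's variable-length integer (CompactSize) encoding. A self-contained sketch with its `int_to_little_endian` dependency filled in (the real helper lives elsewhere in the same file) and the boundary values spelled out:

```python
def int_to_little_endian(n, length):
    """Serialize the integer n as `length` little-endian bytes."""
    return n.to_bytes(length, 'little')

def encode_varint(i):
    """Encode an integer as a Bitcoin-style varint."""
    if i < 0xfd:
        return bytes([i])                              # 1 raw byte
    elif i < 0x10000:
        return b'\xfd' + int_to_little_endian(i, 2)    # marker + 2 bytes LE
    elif i < 0x100000000:
        return b'\xfe' + int_to_little_endian(i, 4)    # marker + 4 bytes LE
    elif i < 0x10000000000000000:
        return b'\xff' + int_to_little_endian(i, 8)    # marker + 8 bytes LE
    else:
        raise RuntimeError(f'integer too large: {i}')

print(encode_varint(0xfc).hex())     # 'fc'
print(encode_varint(0xfd).hex())     # 'fdfd00'
print(encode_varint(0x10000).hex())  # 'fe00000100'
```

0xfc is the last value that fits in a single byte; 0xfd is both a marker byte and the first value that needs the two-byte form.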
kanzure/nanoengineer | 874e4c9f8a9190f093625b267f9767e19f82e6c4 | cad/src/files/mmp/mmp_dispnames.py | python | get_dispName_for_writemmp | (display) | return dispNames[display] | Turn a display-style code integer (e.g. diDEFAULT; as stored in
Atom.display or Chunk.display) into a display-style code string
as used in the current writing format for mmp files. | Turn a display-style code integer (e.g. diDEFAULT; as stored in
Atom.display or Chunk.display) into a display-style code string
as used in the current writing format for mmp files. | [
"Turn",
"a",
"display",
"-",
"style",
"code",
"integer",
"(",
"e",
".",
"g",
".",
"diDEFAULT",
";",
"as",
"stored",
"in",
"Atom",
".",
"display",
"or",
"Chunk",
".",
"display",
")",
"into",
"a",
"display",
"-",
"style",
"code",
"string",
"as",
"used"... | def get_dispName_for_writemmp(display): #bruce 080324, revised 080328
"""
Turn a display-style code integer (e.g. diDEFAULT; as stored in
Atom.display or Chunk.display) into a display-style code string
as used in the current writing format for mmp files.
"""
if debug_pref_write_new_display_names():
return new_dispNames[display]
return dispNames[display] | [
"def",
"get_dispName_for_writemmp",
"(",
"display",
")",
":",
"#bruce 080324, revised 080328",
"if",
"debug_pref_write_new_display_names",
"(",
")",
":",
"return",
"new_dispNames",
"[",
"display",
"]",
"return",
"dispNames",
"[",
"display",
"]"
] | https://github.com/kanzure/nanoengineer/blob/874e4c9f8a9190f093625b267f9767e19f82e6c4/cad/src/files/mmp/mmp_dispnames.py#L23-L31 | |
TengXiaoDai/DistributedCrawling | f5c2439e6ce68dd9b49bde084d76473ff9ed4963 | Lib/importlib/_bootstrap.py | python | FrozenImporter.get_code | (cls, fullname) | return _imp.get_frozen_object(fullname) | Return the code object for the frozen module. | Return the code object for the frozen module. | [
"Return",
"the",
"code",
"object",
"for",
"the",
"frozen",
"module",
"."
] | def get_code(cls, fullname):
"""Return the code object for the frozen module."""
return _imp.get_frozen_object(fullname) | [
"def",
"get_code",
"(",
"cls",
",",
"fullname",
")",
":",
"return",
"_imp",
".",
"get_frozen_object",
"(",
"fullname",
")"
] | https://github.com/TengXiaoDai/DistributedCrawling/blob/f5c2439e6ce68dd9b49bde084d76473ff9ed4963/Lib/importlib/_bootstrap.py#L829-L831 | |
F8LEFT/DecLLVM | d38e45e3d0dd35634adae1d0cf7f96f3bd96e74c | python/idaapi.py | python | get_output_cursor | (*args) | return _idaapi.get_output_cursor(*args) | get_output_cursor() -> bool | get_output_cursor() -> bool | [
"get_output_cursor",
"()",
"-",
">",
"bool"
] | def get_output_cursor(*args):
"""
get_output_cursor() -> bool
"""
return _idaapi.get_output_cursor(*args) | [
"def",
"get_output_cursor",
"(",
"*",
"args",
")",
":",
"return",
"_idaapi",
".",
"get_output_cursor",
"(",
"*",
"args",
")"
] | https://github.com/F8LEFT/DecLLVM/blob/d38e45e3d0dd35634adae1d0cf7f96f3bd96e74c/python/idaapi.py#L42952-L42956 | |
dimagi/commcare-hq | d67ff1d3b4c51fa050c19e60c3253a79d3452a39 | corehq/motech/openmrs/serializers.py | python | omrs_boolean_to_text | (value) | return 'true' if value else 'false' | [] | def omrs_boolean_to_text(value):
return 'true' if value else 'false' | [
"def",
"omrs_boolean_to_text",
"(",
"value",
")",
":",
"return",
"'true'",
"if",
"value",
"else",
"'false'"
] | https://github.com/dimagi/commcare-hq/blob/d67ff1d3b4c51fa050c19e60c3253a79d3452a39/corehq/motech/openmrs/serializers.py#L67-L68 | |||
pgq/skytools-legacy | 8b7e6c118572a605d28b7a3403c96aeecfd0d272 | python/walmgr.py | python | PostgresConfiguration.__init__ | (self, walmgr, cf_file) | load the configuration from master_config | load the configuration from master_config | [
"load",
"the",
"configuration",
"from",
"master_config"
] | def __init__(self, walmgr, cf_file):
"""load the configuration from master_config"""
self.walmgr = walmgr
self.log = walmgr.log
self.cf_file = cf_file
self.cf_buf = open(self.cf_file, "r").read() | [
"def",
"__init__",
"(",
"self",
",",
"walmgr",
",",
"cf_file",
")",
":",
"self",
".",
"walmgr",
"=",
"walmgr",
"self",
".",
"log",
"=",
"walmgr",
".",
"log",
"self",
".",
"cf_file",
"=",
"cf_file",
"self",
".",
"cf_buf",
"=",
"open",
"(",
"self",
"... | https://github.com/pgq/skytools-legacy/blob/8b7e6c118572a605d28b7a3403c96aeecfd0d272/python/walmgr.py#L239-L244 | ||
KalleHallden/AutoTimer | 2d954216700c4930baa154e28dbddc34609af7ce | env/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py | python | Distribution.insert_on | (self, path, loc=None, replace=False) | return | Ensure self.location is on path
If replace=False (default):
- If location is already in path anywhere, do nothing.
- Else:
- If it's an egg and its parent directory is on path,
insert just ahead of the parent.
- Else: add to the end of path.
If replace=True:
- If location is already on path anywhere (not eggs)
or higher priority than its parent (eggs)
do nothing.
- Else:
- If it's an egg and its parent directory is on path,
insert just ahead of the parent,
removing any lower-priority entries.
- Else: add it to the front of path. | Ensure self.location is on path | [
"Ensure",
"self",
".",
"location",
"is",
"on",
"path"
] | def insert_on(self, path, loc=None, replace=False):
"""Ensure self.location is on path
If replace=False (default):
- If location is already in path anywhere, do nothing.
- Else:
- If it's an egg and its parent directory is on path,
insert just ahead of the parent.
- Else: add to the end of path.
If replace=True:
- If location is already on path anywhere (not eggs)
or higher priority than its parent (eggs)
do nothing.
- Else:
- If it's an egg and its parent directory is on path,
insert just ahead of the parent,
removing any lower-priority entries.
- Else: add it to the front of path.
"""
loc = loc or self.location
if not loc:
return
nloc = _normalize_cached(loc)
bdir = os.path.dirname(nloc)
npath = [(p and _normalize_cached(p) or p) for p in path]
for p, item in enumerate(npath):
if item == nloc:
if replace:
break
else:
# don't modify path (even removing duplicates) if
# found and not replace
return
elif item == bdir and self.precedence == EGG_DIST:
# if it's an .egg, give it precedence over its directory
# UNLESS it's already been added to sys.path and replace=False
if (not replace) and nloc in npath[p:]:
return
if path is sys.path:
self.check_version_conflict()
path.insert(p, loc)
npath.insert(p, nloc)
break
else:
if path is sys.path:
self.check_version_conflict()
if replace:
path.insert(0, loc)
else:
path.append(loc)
return
# p is the spot where we found or inserted loc; now remove duplicates
while True:
try:
np = npath.index(nloc, p + 1)
except ValueError:
break
else:
del npath[np], path[np]
# ha!
p = np
return | [
"def",
"insert_on",
"(",
"self",
",",
"path",
",",
"loc",
"=",
"None",
",",
"replace",
"=",
"False",
")",
":",
"loc",
"=",
"loc",
"or",
"self",
".",
"location",
"if",
"not",
"loc",
":",
"return",
"nloc",
"=",
"_normalize_cached",
"(",
"loc",
")",
"... | https://github.com/KalleHallden/AutoTimer/blob/2d954216700c4930baa154e28dbddc34609af7ce/env/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py#L2861-L2927 | |
ReFirmLabs/binwalk | fa0c0bd59b8588814756942fe4cb5452e76c1dcd | src/binwalk/core/common.py | python | unique_file_name | (base_name, extension='') | return fname | Creates a unique file name based on the specified base name.
@base_name - The base name to use for the unique file name.
@extension - The file extension to use for the unique file name.
Returns a unique file string. | Creates a unique file name based on the specified base name. | [
"Creates",
"a",
"unique",
"file",
"name",
"based",
"on",
"the",
"specified",
"base",
"name",
"."
] | def unique_file_name(base_name, extension=''):
'''
Creates a unique file name based on the specified base name.
@base_name - The base name to use for the unique file name.
@extension - The file extension to use for the unique file name.
Returns a unique file string.
'''
idcount = 0
if extension and not extension.startswith('.'):
extension = '.%s' % extension
fname = base_name + extension
while os.path.exists(fname):
fname = "%s-%d%s" % (base_name, idcount, extension)
idcount += 1
return fname | [
"def",
"unique_file_name",
"(",
"base_name",
",",
"extension",
"=",
"''",
")",
":",
"idcount",
"=",
"0",
"if",
"extension",
"and",
"not",
"extension",
".",
"startswith",
"(",
"'.'",
")",
":",
"extension",
"=",
"'.%s'",
"%",
"extension",
"fname",
"=",
"ba... | https://github.com/ReFirmLabs/binwalk/blob/fa0c0bd59b8588814756942fe4cb5452e76c1dcd/src/binwalk/core/common.py#L149-L169 | |
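The collision-avoidance loop above can be exercised end-to-end against a temporary directory. Note that the suffix counter starts at 0 and the existence check is not atomic, so the real function can race with concurrent creators:

```python
import os
import tempfile

def unique_file_name(base_name, extension=''):
    """Append '-<n>' to base_name until the resulting path does not exist."""
    idcount = 0
    if extension and not extension.startswith('.'):
        extension = '.%s' % extension
    fname = base_name + extension
    while os.path.exists(fname):
        fname = "%s-%d%s" % (base_name, idcount, extension)
        idcount += 1
    return fname

with tempfile.TemporaryDirectory() as d:
    base = os.path.join(d, 'extract')
    print(os.path.basename(unique_file_name(base, 'bin')))  # 'extract.bin'
    open(base + '.bin', 'w').close()                        # occupy the name
    print(os.path.basename(unique_file_name(base, 'bin')))  # 'extract-0.bin'
```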
datalad/datalad | d8c8383d878a207bb586415314219a60c345f732 | datalad/interface/results.py | python | YieldField.__init__ | (self, field) | Parameters
----------
field : str
Key of the field to return. | Parameters
----------
field : str
Key of the field to return. | [
"Parameters",
"----------",
"field",
":",
"str",
"Key",
"of",
"the",
"field",
"to",
"return",
"."
] | def __init__(self, field):
"""
Parameters
----------
field : str
Key of the field to return.
"""
self.field = field | [
"def",
"__init__",
"(",
"self",
",",
"field",
")",
":",
"self",
".",
"field",
"=",
"field"
] | https://github.com/datalad/datalad/blob/d8c8383d878a207bb586415314219a60c345f732/datalad/interface/results.py#L187-L194 | ||
elastic/elasticsearch-py | 6ef1adfa3c840a84afda7369cd8e43ae7dc45cdb | elasticsearch/_sync/client/__init__.py | python | Elasticsearch.index | (
self,
*,
index: Any,
document: Any,
id: Optional[Any] = None,
error_trace: Optional[bool] = None,
filter_path: Optional[Union[List[str], str]] = None,
human: Optional[bool] = None,
if_primary_term: Optional[int] = None,
if_seq_no: Optional[Any] = None,
op_type: Optional[Any] = None,
pipeline: Optional[str] = None,
pretty: Optional[bool] = None,
refresh: Optional[Any] = None,
require_alias: Optional[bool] = None,
routing: Optional[Any] = None,
timeout: Optional[Any] = None,
version: Optional[Any] = None,
version_type: Optional[Any] = None,
wait_for_active_shards: Optional[Any] = None,
) | return self._perform_request(__method, __target, headers=__headers, body=__body) | Creates or updates a document in an index.
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/docs-index_.html>`_
:param index: The name of the index
:param document:
:param id: Document ID
:param if_primary_term: only perform the index operation if the last operation
that has changed the document has the specified primary term
:param if_seq_no: only perform the index operation if the last operation that
has changed the document has the specified sequence number
:param op_type: Explicit operation type. Defaults to `index` for requests with
an explicit document ID, and to `create` for requests without an explicit
document ID
:param pipeline: The pipeline id to preprocess incoming documents with
:param refresh: If `true` then refresh the affected shards to make this operation
visible to search, if `wait_for` then wait for a refresh to make this operation
visible to search, if `false` (the default) then do nothing with refreshes.
:param require_alias: When true, requires destination to be an alias. Default
is false
:param routing: Specific routing value
:param timeout: Explicit operation timeout
:param version: Explicit version number for concurrency control
:param version_type: Specific version type
:param wait_for_active_shards: Sets the number of shard copies that must be active
before proceeding with the index operation. Defaults to 1, meaning the primary
shard only. Set to `all` for all shard copies, otherwise set to any non-negative
value less than or equal to the total number of copies for the shard (number
of replicas + 1) | Creates or updates a document in an index. | [
"Creates",
"or",
"updates",
"a",
"document",
"in",
"an",
"index",
"."
] | def index(
self,
*,
index: Any,
document: Any,
id: Optional[Any] = None,
error_trace: Optional[bool] = None,
filter_path: Optional[Union[List[str], str]] = None,
human: Optional[bool] = None,
if_primary_term: Optional[int] = None,
if_seq_no: Optional[Any] = None,
op_type: Optional[Any] = None,
pipeline: Optional[str] = None,
pretty: Optional[bool] = None,
refresh: Optional[Any] = None,
require_alias: Optional[bool] = None,
routing: Optional[Any] = None,
timeout: Optional[Any] = None,
version: Optional[Any] = None,
version_type: Optional[Any] = None,
wait_for_active_shards: Optional[Any] = None,
) -> ObjectApiResponse[Any]:
"""
Creates or updates a document in an index.
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/docs-index_.html>`_
:param index: The name of the index
:param document:
:param id: Document ID
:param if_primary_term: only perform the index operation if the last operation
that has changed the document has the specified primary term
:param if_seq_no: only perform the index operation if the last operation that
has changed the document has the specified sequence number
:param op_type: Explicit operation type. Defaults to `index` for requests with
an explicit document ID, and to `create` for requests without an explicit
document ID
:param pipeline: The pipeline id to preprocess incoming documents with
:param refresh: If `true` then refresh the affected shards to make this operation
visible to search, if `wait_for` then wait for a refresh to make this operation
visible to search, if `false` (the default) then do nothing with refreshes.
:param require_alias: When true, requires destination to be an alias. Default
is false
:param routing: Specific routing value
:param timeout: Explicit operation timeout
:param version: Explicit version number for concurrency control
:param version_type: Specific version type
:param wait_for_active_shards: Sets the number of shard copies that must be active
before proceeding with the index operation. Defaults to 1, meaning the primary
shard only. Set to `all` for all shard copies, otherwise set to any non-negative
value less than or equal to the total number of copies for the shard (number
of replicas + 1)
"""
if index in SKIP_IN_PATH:
raise ValueError("Empty value passed for parameter 'index'")
if document is None:
raise ValueError("Empty value passed for parameter 'document'")
if index not in SKIP_IN_PATH and id not in SKIP_IN_PATH:
__path = f"/{_quote(index)}/_doc/{_quote(id)}"
__method = "PUT"
elif index not in SKIP_IN_PATH:
__path = f"/{_quote(index)}/_doc"
__method = "POST"
else:
raise ValueError("Couldn't find a path for the given parameters")
__query: Dict[str, Any] = {}
if error_trace is not None:
__query["error_trace"] = error_trace
if filter_path is not None:
__query["filter_path"] = filter_path
if human is not None:
__query["human"] = human
if if_primary_term is not None:
__query["if_primary_term"] = if_primary_term
if if_seq_no is not None:
__query["if_seq_no"] = if_seq_no
if op_type is not None:
__query["op_type"] = op_type
if pipeline is not None:
__query["pipeline"] = pipeline
if pretty is not None:
__query["pretty"] = pretty
if refresh is not None:
__query["refresh"] = refresh
if require_alias is not None:
__query["require_alias"] = require_alias
if routing is not None:
__query["routing"] = routing
if timeout is not None:
__query["timeout"] = timeout
if version is not None:
__query["version"] = version
if version_type is not None:
__query["version_type"] = version_type
if wait_for_active_shards is not None:
__query["wait_for_active_shards"] = wait_for_active_shards
__body = document
if __query:
__target = f"{__path}?{_quote_query(__query)}"
else:
__target = __path
__headers = {"accept": "application/json", "content-type": "application/json"}
return self._perform_request(__method, __target, headers=__headers, body=__body) | [
"def",
"index",
"(",
"self",
",",
"*",
",",
"index",
":",
"Any",
",",
"document",
":",
"Any",
",",
"id",
":",
"Optional",
"[",
"Any",
"]",
"=",
"None",
",",
"error_trace",
":",
"Optional",
"[",
"bool",
"]",
"=",
"None",
",",
"filter_path",
":",
"... | https://github.com/elastic/elasticsearch-py/blob/6ef1adfa3c840a84afda7369cd8e43ae7dc45cdb/elasticsearch/_sync/client/__init__.py#L1952-L2054 | |
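One detail worth pulling out of the method above: the HTTP verb depends on whether a document ID was supplied (`PUT /{index}/_doc/{id}` with an ID, `POST /{index}/_doc` without). In isolation, as a hedged sketch of just that branch:

```python
from urllib.parse import quote

SKIP_IN_PATH = (None, '', b'')

def doc_path(index, doc_id=None):
    """Mirror the verb/path choice: PUT with an explicit id, POST without."""
    if index in SKIP_IN_PATH:
        raise ValueError("Empty value passed for parameter 'index'")
    if doc_id not in SKIP_IN_PATH:
        return 'PUT', f'/{quote(index)}/_doc/{quote(str(doc_id))}'
    return 'POST', f'/{quote(index)}/_doc'

print(doc_path('logs-2024', 'abc'))  # ('PUT', '/logs-2024/_doc/abc')
print(doc_path('logs-2024'))         # ('POST', '/logs-2024/_doc')
```

The POST form lets Elasticsearch generate the document ID, which is why `create`-without-ID is a distinct operation type.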
svpcom/wifibroadcast | 51251b8c484b8c4f548aa3bbb1633e0edbb605dc | telemetry/mavlink.py | python | MAVLink_message.to_string | (self, s) | return r + '_XXX' | desperate attempt to convert a string regardless of what garbage we get | desperate attempt to convert a string regardless of what garbage we get | [
"desperate",
"attempt",
"to",
"convert",
"a",
"string",
"regardless",
"of",
"what",
"garbage",
"we",
"get"
] | def to_string(self, s):
    '''desperate attempt to convert a string regardless of what garbage we get'''
    try:
        return s.decode("utf-8")
    except Exception as e:
        pass
    try:
        s2 = s.encode('utf-8', 'ignore')
        x = u"%s" % s2
        return s2
    except Exception:
        pass
    # so its a nasty one. Let's grab as many characters as we can
    r = ''
    while s != '':
        try:
            r2 = r + s[0]
            s = s[1:]
            r2 = r2.encode('ascii', 'ignore')
            x = u"%s" % r2
            r = r2
        except Exception:
            break
    return r + '_XXX' | [
"def",
"to_string",
"(",
"self",
",",
"s",
")",
":",
"try",
":",
"return",
"s",
".",
"decode",
"(",
"\"utf-8\"",
")",
"except",
"Exception",
"as",
"e",
":",
"pass",
"try",
":",
"s2",
"=",
"s",
".",
"encode",
"(",
"'utf-8'",
",",
"'ignore'",
")",
... | https://github.com/svpcom/wifibroadcast/blob/51251b8c484b8c4f548aa3bbb1633e0edbb605dc/telemetry/mavlink.py#L128-L151 | |
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/tat/v20201028/models.py | python | CancelInvocationRequest.__init__ | (self) | r"""
:param InvocationId: 执行活动ID
:type InvocationId: str
:param InstanceIds: 实例ID列表,上限100。支持实例类型:
<li> CVM
<li> LIGHTHOUSE
:type InstanceIds: list of str | r"""
:param InvocationId: 执行活动ID
:type InvocationId: str
:param InstanceIds: 实例ID列表,上限100。支持实例类型:
<li> CVM
<li> LIGHTHOUSE
:type InstanceIds: list of str | [
"r",
":",
"param",
"InvocationId",
":",
"执行活动ID",
":",
"type",
"InvocationId",
":",
"str",
":",
"param",
"InstanceIds",
":",
"实例ID列表,上限100。支持实例类型:",
"<li",
">",
"CVM",
"<li",
">",
"LIGHTHOUSE",
":",
"type",
"InstanceIds",
":",
"list",
"of",
"str"
] | def __init__(self):
    r"""
    :param InvocationId: 执行活动ID
    :type InvocationId: str
    :param InstanceIds: 实例ID列表,上限100。支持实例类型:
    <li> CVM
    <li> LIGHTHOUSE
    :type InstanceIds: list of str
    """
    self.InvocationId = None
    self.InstanceIds = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"InvocationId",
"=",
"None",
"self",
".",
"InstanceIds",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/tat/v20201028/models.py#L70-L80 | ||
colinsongf/keyword_spotting | 6f93c5d6356932dc34d9956100e869a4f430a386 | normalize.py | python | FFmpegNormalize.create_input_files | (self, input_files) | Remove nonexisting input files | Remove nonexisting input files | [
"Remove",
"nonexisting",
"input",
"files"
] | def create_input_files(self, input_files):
    """
    Remove nonexisting input files
    """
    to_remove = []
    for input_file in input_files:
        if not os.path.exists(input_file):
            logger.error(
                "file " + input_file + " does not exist, will skip")
            to_remove.append(input_file)
    for input_file in to_remove:
        input_files = [f for f in self.input_files if f != input_file]
    self.file_count = len(input_files)
    for input_file in input_files:
        self.input_files.append(InputFile(input_file, self.args)) | [
"def",
"create_input_files",
"(",
"self",
",",
"input_files",
")",
":",
"to_remove",
"=",
"[",
"]",
"for",
"input_file",
"in",
"input_files",
":",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"input_file",
")",
":",
"logger",
".",
"error",
"(",
... | https://github.com/colinsongf/keyword_spotting/blob/6f93c5d6356932dc34d9956100e869a4f430a386/normalize.py#L366-L384 | ||
opensistemas-hub/osbrain | a9abc82fb194348cceaabb897b394821fee2f135 | osbrain/agent.py | python | Agent.safe_call | (self, method, *args, **kwargs) | return self._loopback_reqrep('inproc://_loopback_safe', data) | A safe call to a method.
A safe call is simply sent to be executed by the main thread.
Parameters
----------
method : str
Method name to be executed by the main thread.
*args : arguments
Method arguments.
*kwargs : keyword arguments
Method keyword arguments. | A safe call to a method. | [
"A",
"safe",
"call",
"to",
"a",
"method",
"."
] | def safe_call(self, method, *args, **kwargs):
    """
    A safe call to a method.
    A safe call is simply sent to be executed by the main thread.
    Parameters
    ----------
    method : str
        Method name to be executed by the main thread.
    *args : arguments
        Method arguments.
    *kwargs : keyword arguments
        Method keyword arguments.
    """
    if not self._running:
        raise RuntimeError(
            'Agent must be running to safely execute methods!'
        )
    data = cloudpickle.dumps((method, args, kwargs))
    return self._loopback_reqrep('inproc://_loopback_safe', data) | [
"def",
"safe_call",
"(",
"self",
",",
"method",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"not",
"self",
".",
"_running",
":",
"raise",
"RuntimeError",
"(",
"'Agent must be running to safely execute methods!'",
")",
"data",
"=",
"cloudpickle",... | https://github.com/opensistemas-hub/osbrain/blob/a9abc82fb194348cceaabb897b394821fee2f135/osbrain/agent.py#L351-L371 | |
XX-net/XX-Net | a9898cfcf0084195fb7e69b6bc834e59aecdf14f | python3.8.2/Lib/asyncio/transports.py | python | WriteTransport.write_eof | (self) | Close the write end after flushing buffered data.
(This is like typing ^D into a UNIX program reading from stdin.)
Data may still be received. | Close the write end after flushing buffered data. | [
"Close",
"the",
"write",
"end",
"after",
"flushing",
"buffered",
"data",
"."
] | def write_eof(self):
    """Close the write end after flushing buffered data.
    (This is like typing ^D into a UNIX program reading from stdin.)
    Data may still be received.
    """
    raise NotImplementedError | [
"def",
"write_eof",
"(",
"self",
")",
":",
"raise",
"NotImplementedError"
] | https://github.com/XX-net/XX-Net/blob/a9898cfcf0084195fb7e69b6bc834e59aecdf14f/python3.8.2/Lib/asyncio/transports.py#L119-L126 | ||
mathics/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | mathics/builtin/numbers/calculus.py | python | FindRoot.apply_with_x_tuple | (self, f, xtuple, evaluation, options) | return | FindRoot[f_, xtuple_, OptionsPattern[]] | FindRoot[f_, xtuple_, OptionsPattern[]] | [
"FindRoot",
"[",
"f_",
"xtuple_",
"OptionsPattern",
"[]",
"]"
] | def apply_with_x_tuple(self, f, xtuple, evaluation, options):
    "FindRoot[f_, xtuple_, OptionsPattern[]]"
    f_val = f.evaluate(evaluation)
    if f_val.has_form("Equal", 2):
        f = Expression("Plus", f_val.leaves[0], f_val.leaves[1])
    xtuple_value = xtuple.evaluate(evaluation)
    if xtuple_value.has_form("List", None):
        nleaves = len(xtuple_value.leaves)
        if nleaves == 2:
            x, x0 = xtuple.evaluate(evaluation).leaves
        elif nleaves == 3:
            x, x0, x1 = xtuple.evaluate(evaluation).leaves
            options["$$Region"] = (x0, x1)
        else:
            return
        return self.apply(f, x, x0, evaluation, options)
    return | [
"def",
"apply_with_x_tuple",
"(",
"self",
",",
"f",
",",
"xtuple",
",",
"evaluation",
",",
"options",
")",
":",
"f_val",
"=",
"f",
".",
"evaluate",
"(",
"evaluation",
")",
"if",
"f_val",
".",
"has_form",
"(",
"\"Equal\"",
",",
"2",
")",
":",
"f",
"="... | https://github.com/mathics/Mathics/blob/318e06dea8f1c70758a50cb2f95c9900150e3a68/mathics/builtin/numbers/calculus.py#L1438-L1456 | |
aaronportnoy/toolbag | 2d39457a7617b2f334d203d8c8cf88a5a25ef1fa | toolbag/agent/dbg/vtrace/tools/win32heap.py | python | getHeapSegChunk | (trace, address) | Find and return the heap, segment, and chunk for the given addres
(or exception). | Find and return the heap, segment, and chunk for the given addres
(or exception). | [
"Find",
"and",
"return",
"the",
"heap",
"segment",
"and",
"chunk",
"for",
"the",
"given",
"addres",
"(",
"or",
"exception",
")",
"."
] | def getHeapSegChunk(trace, address):
    """
    Find and return the heap, segment, and chunk for the given addres
    (or exception).
    """
    for heap in getHeaps(trace):
        for seg in heap.getSegments():
            segstart = seg.address
            segend = seg.getSegmentEnd()
            if address < segstart or address > segend:
                continue
            for chunk in seg.getChunks():
                a = chunk.address
                b = chunk.address + len(chunk)
                if address >= a and address < b:
                    return heap,seg,chunk
    raise ChunkNotFound("No Chunk Found for 0x%.8x" % address) | [
"def",
"getHeapSegChunk",
"(",
"trace",
",",
"address",
")",
":",
"for",
"heap",
"in",
"getHeaps",
"(",
"trace",
")",
":",
"for",
"seg",
"in",
"heap",
".",
"getSegments",
"(",
")",
":",
"segstart",
"=",
"seg",
".",
"address",
"segend",
"=",
"seg",
".... | https://github.com/aaronportnoy/toolbag/blob/2d39457a7617b2f334d203d8c8cf88a5a25ef1fa/toolbag/agent/dbg/vtrace/tools/win32heap.py#L84-L105 | ||
leo-editor/leo-editor | 383d6776d135ef17d73d935a2f0ecb3ac0e99494 | leo/commands/spellCommands.py | python | EnchantWrapper.__init__ | (self, c) | Ctor for EnchantWrapper class. | Ctor for EnchantWrapper class. | [
"Ctor",
"for",
"EnchantWrapper",
"class",
"."
] | def __init__(self, c):
    """Ctor for EnchantWrapper class."""
    # pylint: disable=super-init-not-called
    self.c = c
    self.init_language()
    fn = self.find_user_dict()
    g.app.spellDict = self.d = self.open_dict_file(fn) | [
"def",
"__init__",
"(",
"self",
",",
"c",
")",
":",
"# pylint: disable=super-init-not-called",
"self",
".",
"c",
"=",
"c",
"self",
".",
"init_language",
"(",
")",
"fn",
"=",
"self",
".",
"find_user_dict",
"(",
")",
"g",
".",
"app",
".",
"spellDict",
"=",... | https://github.com/leo-editor/leo-editor/blob/383d6776d135ef17d73d935a2f0ecb3ac0e99494/leo/commands/spellCommands.py#L307-L313 | ||
blackye/lalascan | e35726e6648525eb47493e39ee63a2a906dbb4b2 | thirdparty_libs/requests_ntlm/ntlm/ntlm.py | python | create_LM_hashed_password_v1 | (passwd) | return res | setup LanManager password | setup LanManager password | [
"setup",
"LanManager",
"password"
] | def create_LM_hashed_password_v1(passwd):
    "setup LanManager password"
    "create LanManager hashed password"
    # if the passwd provided is already a hash, we just return the first half
    if re.match(r'^[\w]{32}:[\w]{32}$',passwd):
        return binascii.unhexlify(passwd.split(':')[0])
    # fix the password length to 14 bytes
    passwd = string.upper(passwd)
    lm_pw = passwd + '\0' * (14 - len(passwd))
    lm_pw = passwd[0:14]
    # do hash
    magic_str = "KGS!@#$%" # page 57 in [MS-NLMP]
    res = ''
    dobj = des.DES(lm_pw[0:7])
    res = res + dobj.encrypt(magic_str)
    dobj = des.DES(lm_pw[7:14])
    res = res + dobj.encrypt(magic_str)
    return res | [
"def",
"create_LM_hashed_password_v1",
"(",
"passwd",
")",
":",
"\"create LanManager hashed password\"",
"# if the passwd provided is already a hash, we just return the first half",
"if",
"re",
".",
"match",
"(",
"r'^[\\w]{32}:[\\w]{32}$'",
",",
"passwd",
")",
":",
"return",
"b... | https://github.com/blackye/lalascan/blob/e35726e6648525eb47493e39ee63a2a906dbb4b2/thirdparty_libs/requests_ntlm/ntlm/ntlm.py#L378-L400 | |
home-assistant/core | 265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1 | homeassistant/components/cloud/client.py | python | CloudClient.cloud_stopped | (self) | When the cloud is stopped. | When the cloud is stopped. | [
"When",
"the",
"cloud",
"is",
"stopped",
"."
] | async def cloud_stopped(self) -> None:
    """When the cloud is stopped.""" | [
"async",
"def",
"cloud_stopped",
"(",
"self",
")",
"->",
"None",
":"
] | https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/cloud/client.py#L155-L156 | ||
realpython/book2-exercises | cde325eac8e6d8cff2316601c2e5b36bb46af7d0 | web2py/gluon/contrib/shell.py | python | History.remove_unpicklable_name | (self, name) | Removes a name from the list of unpicklable names, if it exists.
Args:
name: string, the name of the unpicklable global to remove | Removes a name from the list of unpicklable names, if it exists. | [
"Removes",
"a",
"name",
"from",
"the",
"list",
"of",
"unpicklable",
"names",
"if",
"it",
"exists",
"."
] | def remove_unpicklable_name(self, name):
    """Removes a name from the list of unpicklable names, if it exists.
    Args:
        name: string, the name of the unpicklable global to remove
    """
    if name in self.unpicklable_names:
        self.unpicklable_names.remove(name) | [
"def",
"remove_unpicklable_name",
"(",
"self",
",",
"name",
")",
":",
"if",
"name",
"in",
"self",
".",
"unpicklable_names",
":",
"self",
".",
"unpicklable_names",
".",
"remove",
"(",
"name",
")"
] | https://github.com/realpython/book2-exercises/blob/cde325eac8e6d8cff2316601c2e5b36bb46af7d0/web2py/gluon/contrib/shell.py#L144-L151 | ||
misterch0c/shadowbroker | e3a069bea47a2c1009697941ac214adc6f90aa8d | windows/Resources/Python/Core/Lib/fractions.py | python | Fraction._add | (a, b) | return Fraction(a.numerator * b.denominator + b.numerator * a.denominator, a.denominator * b.denominator) | a + b | a + b | [
"a",
"+",
"b"
] | def _add(a, b):
    """a + b"""
    return Fraction(a.numerator * b.denominator + b.numerator * a.denominator, a.denominator * b.denominator) | [
"def",
"_add",
"(",
"a",
",",
"b",
")",
":",
"return",
"Fraction",
"(",
"a",
".",
"numerator",
"*",
"b",
".",
"denominator",
"+",
"b",
".",
"numerator",
"*",
"a",
".",
"denominator",
",",
"a",
".",
"denominator",
"*",
"b",
".",
"denominator",
")"
] | https://github.com/misterch0c/shadowbroker/blob/e3a069bea47a2c1009697941ac214adc6f90aa8d/windows/Resources/Python/Core/Lib/fractions.py#L335-L337 | |
sxjscience/HKO-7 | adeb05a366d4b57f94a5ddb814af57cc62ffe3c5 | nowcasting/operators/common.py | python | group_add | (lhs, rhs) | return ret | Parameters
----------
lhs : list of mx.sym.Symbol
rhs : list of mx.sym.Symbol
Returns
-------
ret : list of mx.sym.Symbol | [] | def group_add(lhs, rhs):
    """
    Parameters
    ----------
    lhs : list of mx.sym.Symbol
    rhs : list of mx.sym.Symbol
    Returns
    -------
    ret : list of mx.sym.Symbol
    """
    if isinstance(lhs, mx.sym.Symbol):
        return lhs + rhs
    assert len(lhs) == len(rhs)
    ret = []
    for i in range(len(lhs)):
        if isinstance(lhs[i], list):
            ret.append(group_add(lhs[i], rhs[i]))
        else:
            ret.append(lhs[i] + rhs[i])
    return ret | [
"def",
"group_add",
"(",
"lhs",
",",
"rhs",
")",
":",
"if",
"isinstance",
"(",
"lhs",
",",
"mx",
".",
"sym",
".",
"Symbol",
")",
":",
"return",
"lhs",
"+",
"rhs",
"assert",
"len",
"(",
"lhs",
")",
"==",
"len",
"(",
"rhs",
")",
"ret",
"=",
"[",
... | https://github.com/sxjscience/HKO-7/blob/adeb05a366d4b57f94a5ddb814af57cc62ffe3c5/nowcasting/operators/common.py#L313-L334 | ||
fonttools/fonttools | 892322aaff6a89bea5927379ec06bc0da3dfb7df | Lib/fontTools/ttLib/tables/otConverters.py | python | MorxSubtableConverter.write | (self, writer, font, tableDict, value, repeatIndex=None) | [] | def write(self, writer, font, tableDict, value, repeatIndex=None):
    covFlags = (value.Reserved & 0x000F0000) >> 16
    reverseOrder, logicalOrder = self._PROCESSING_ORDERS_REVERSED[
        value.ProcessingOrder]
    covFlags |= 0x80 if value.TextDirection == "Vertical" else 0
    covFlags |= 0x40 if reverseOrder else 0
    covFlags |= 0x20 if value.TextDirection == "Any" else 0
    covFlags |= 0x10 if logicalOrder else 0
    value.CoverageFlags = covFlags
    lengthIndex = len(writer.items)
    before = writer.getDataLength()
    value.StructLength = 0xdeadbeef
    # The high nibble of value.Reserved is actuallly encoded
    # into coverageFlags, so we need to clear it here.
    origReserved = value.Reserved # including high nibble
    value.Reserved = value.Reserved & 0xFFFF # without high nibble
    value.compile(writer, font)
    value.Reserved = origReserved # restore original value
    assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"
    length = writer.getDataLength() - before
    writer.items[lengthIndex] = struct.pack(">L", length) | [
"def",
"write",
"(",
"self",
",",
"writer",
",",
"font",
",",
"tableDict",
",",
"value",
",",
"repeatIndex",
"=",
"None",
")",
":",
"covFlags",
"=",
"(",
"value",
".",
"Reserved",
"&",
"0x000F0000",
")",
">>",
"16",
"reverseOrder",
",",
"logicalOrder",
... | https://github.com/fonttools/fonttools/blob/892322aaff6a89bea5927379ec06bc0da3dfb7df/Lib/fontTools/ttLib/tables/otConverters.py#L1117-L1137 | ||||
microsoft/nni | 31f11f51249660930824e888af0d4e022823285c | nni/algorithms/hpo/curvefitting_assessor/model_factory.py | python | CurveModel.likelihood | (self, samples) | return ret | likelihood
Parameters
----------
sample : list
sample is a (1 * NUM_OF_FUNCTIONS) matrix, representing{w1, w2, ... wk}
Returns
-------
float
likelihood | likelihood | [
"likelihood"
] | def likelihood(self, samples):
    """likelihood
    Parameters
    ----------
    sample : list
        sample is a (1 * NUM_OF_FUNCTIONS) matrix, representing{w1, w2, ... wk}
    Returns
    -------
    float
        likelihood
    """
    ret = np.ones(NUM_OF_INSTANCE)
    for i in range(NUM_OF_INSTANCE):
        for j in range(1, self.point_num + 1):
            ret[i] *= self.normal_distribution(j, samples[i])
    return ret | [
"def",
"likelihood",
"(",
"self",
",",
"samples",
")",
":",
"ret",
"=",
"np",
".",
"ones",
"(",
"NUM_OF_INSTANCE",
")",
"for",
"i",
"in",
"range",
"(",
"NUM_OF_INSTANCE",
")",
":",
"for",
"j",
"in",
"range",
"(",
"1",
",",
"self",
".",
"point_num",
... | https://github.com/microsoft/nni/blob/31f11f51249660930824e888af0d4e022823285c/nni/algorithms/hpo/curvefitting_assessor/model_factory.py#L209-L226 | |
skylander86/lambda-text-extractor | 6da52d077a2fc571e38bfe29c33ae68f6443cd5a | lib-linux_x64/pptx/oxml/chart/chart.py | python | CT_PlotArea.last_ser | (self) | return sers[-1] | Return the last `<c:ser>` element in the last xChart element, based
on series order (not necessarily the same element as document order). | Return the last `<c:ser>` element in the last xChart element, based
on series order (not necessarily the same element as document order). | [
"Return",
"the",
"last",
"<c",
":",
"ser",
">",
"element",
"in",
"the",
"last",
"xChart",
"element",
"based",
"on",
"series",
"order",
"(",
"not",
"necessarily",
"the",
"same",
"element",
"as",
"document",
"order",
")",
"."
] | def last_ser(self):
    """
    Return the last `<c:ser>` element in the last xChart element, based
    on series order (not necessarily the same element as document order).
    """
    last_xChart = self.xCharts[-1]
    sers = last_xChart.sers
    if not sers:
        return None
    return sers[-1] | [
"def",
"last_ser",
"(",
"self",
")",
":",
"last_xChart",
"=",
"self",
".",
"xCharts",
"[",
"-",
"1",
"]",
"sers",
"=",
"last_xChart",
".",
"sers",
"if",
"not",
"sers",
":",
"return",
"None",
"return",
"sers",
"[",
"-",
"1",
"]"
] | https://github.com/skylander86/lambda-text-extractor/blob/6da52d077a2fc571e38bfe29c33ae68f6443cd5a/lib-linux_x64/pptx/oxml/chart/chart.py#L188-L197 | |
aceisace/Inkycal | 552744bc5d80769c1015d48fd8b13201683ee679 | inkycal/display/drivers/epd_5_in_83_colour.py | python | EPD.init | (self) | return 0 | [] | def init(self):
    if (epdconfig.module_init() != 0):
        return -1
    self.reset()
    self.send_command(0x01) # POWER_SETTING
    self.send_data(0x37)
    self.send_data(0x00)
    self.send_command(0x00) # PANEL_SETTING
    self.send_data(0xCF)
    self.send_data(0x08)
    self.send_command(0x30) # PLL_CONTROL
    self.send_data(0x3A) # PLL: 0-15:0x3C, 15+:0x3A
    self.send_command(0X82) # VCOM VOLTAGE SETTING
    self.send_data(0x28) # all temperature range
    self.send_command(0x06) # boost
    self.send_data(0xc7)
    self.send_data(0xcc)
    self.send_data(0x15)
    self.send_command(0X50) # VCOM AND DATA INTERVAL SETTING
    self.send_data(0x77)
    self.send_command(0X60) # TCON SETTING
    self.send_data(0x22)
    self.send_command(0X65) # FLASH CONTROL
    self.send_data(0x00)
    self.send_command(0x61) # tres
    self.send_data(0x02) # source 600
    self.send_data(0x58)
    self.send_data(0x01) # gate 448
    self.send_data(0xc0)
    self.send_command(0xe5) # FLASH MODE
    self.send_data(0x03)
    self.send_data(0x03)
    return 0 | [
"def",
"init",
"(",
"self",
")",
":",
"if",
"(",
"epdconfig",
".",
"module_init",
"(",
")",
"!=",
"0",
")",
":",
"return",
"-",
"1",
"self",
".",
"reset",
"(",
")",
"self",
".",
"send_command",
"(",
"0x01",
")",
"# POWER_SETTING",
"self",
".",
"sen... | https://github.com/aceisace/Inkycal/blob/552744bc5d80769c1015d48fd8b13201683ee679/inkycal/display/drivers/epd_5_in_83_colour.py#L74-L117 | |||
git-cola/git-cola | b48b8028e0c3baf47faf7b074b9773737358163d | cola/qtutils.py | python | add_action | (widget, text, fn, *shortcuts) | return _add_action(widget, text, tip, fn, connect_action, *shortcuts) | [] | def add_action(widget, text, fn, *shortcuts):
    tip = text
    return _add_action(widget, text, tip, fn, connect_action, *shortcuts) | [
"def",
"add_action",
"(",
"widget",
",",
"text",
",",
"fn",
",",
"*",
"shortcuts",
")",
":",
"tip",
"=",
"text",
"return",
"_add_action",
"(",
"widget",
",",
"text",
",",
"tip",
",",
"fn",
",",
"connect_action",
",",
"*",
"shortcuts",
")"
] | https://github.com/git-cola/git-cola/blob/b48b8028e0c3baf47faf7b074b9773737358163d/cola/qtutils.py#L507-L509 | |||
huggingface/transformers | 623b4f7c63f60cce917677ee704d6c93ee960b4b | examples/pytorch/benchmarking/run_benchmark.py | python | main | () | [] | def main():
    parser = HfArgumentParser(PyTorchBenchmarkArguments)
    try:
        benchmark_args = parser.parse_args_into_dataclasses()[0]
    except ValueError as e:
        arg_error_msg = "Arg --no_{0} is no longer used, please use --no-{0} instead."
        begin_error_msg = " ".join(str(e).split(" ")[:-1])
        full_error_msg = ""
        depreciated_args = eval(str(e).split(" ")[-1])
        wrong_args = []
        for arg in depreciated_args:
            # arg[2:] removes '--'
            if arg[2:] in PyTorchBenchmarkArguments.deprecated_args:
                # arg[5:] removes '--no_'
                full_error_msg += arg_error_msg.format(arg[5:])
            else:
                wrong_args.append(arg)
        if len(wrong_args) > 0:
            full_error_msg = full_error_msg + begin_error_msg + str(wrong_args)
        raise ValueError(full_error_msg)
    benchmark = PyTorchBenchmark(args=benchmark_args)
    benchmark.run() | [
"def",
"main",
"(",
")",
":",
"parser",
"=",
"HfArgumentParser",
"(",
"PyTorchBenchmarkArguments",
")",
"try",
":",
"benchmark_args",
"=",
"parser",
".",
"parse_args_into_dataclasses",
"(",
")",
"[",
"0",
"]",
"except",
"ValueError",
"as",
"e",
":",
"arg_error... | https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/examples/pytorch/benchmarking/run_benchmark.py#L22-L44 | ||||
pculture/mirovideoconverter3 | 27efad91845c8ae544dc27034adb0d3e18ca8f1f | helperscripts/windows-virtualenv/__main__.py | python | movetree | (source_dir, dest_dir) | Move the contents of source_dir into dest_dir
For each file/directory in source dir, copy it to dest_dir. If this would
overwrite a file/directory, then an IOError will be raised | Move the contents of source_dir into dest_dir | [
"Move",
"the",
"contents",
"of",
"source_dir",
"into",
"dest_dir"
] | def movetree(source_dir, dest_dir):
    """Move the contents of source_dir into dest_dir
    For each file/directory in source dir, copy it to dest_dir. If this would
    overwrite a file/directory, then an IOError will be raised
    """
    for name in os.listdir(source_dir):
        source_child = os.path.join(source_dir, name)
        writeout("* moving %s to %s", name, dest_dir)
        shutil.move(source_child, os.path.join(dest_dir, name)) | [
"def",
"movetree",
"(",
"source_dir",
",",
"dest_dir",
")",
":",
"for",
"name",
"in",
"os",
".",
"listdir",
"(",
"source_dir",
")",
":",
"source_child",
"=",
"os",
".",
"path",
".",
"join",
"(",
"source_dir",
",",
"name",
")",
"writeout",
"(",
"\"* mov... | https://github.com/pculture/mirovideoconverter3/blob/27efad91845c8ae544dc27034adb0d3e18ca8f1f/helperscripts/windows-virtualenv/__main__.py#L186-L195 | ||
aiidateam/aiida-core | c743a335480f8bb3a5e4ebd2463a31f9f3b9f9b2 | aiida/backends/sqlalchemy/migrations/versions/a6048f0ffca8_update_linktypes.py | python | upgrade | () | Migrations for the upgrade. | Migrations for the upgrade. | [
"Migrations",
"for",
"the",
"upgrade",
"."
] | def upgrade():
    """Migrations for the upgrade."""
    conn = op.get_bind()
    # I am first migrating the wrongly declared returnlinks out of
    # the InlineCalculations.
    # This bug is reported #628 https://github.com/aiidateam/aiida-core/issues/628
    # There is an explicit check in the code of the inline calculation
    # ensuring that the calculation returns UNSTORED nodes.
    # Therefore, no cycle can be created with that migration!
    #
    # this command:
    # 1) selects all links that
    # - joins an InlineCalculation (or subclass) as input
    # - joins a Data (or subclass) as output
    # - is marked as a returnlink.
    # 2) set for these links the type to 'createlink'
    stmt1 = text(
        """
        UPDATE db_dblink set type='createlink' WHERE db_dblink.id IN (
            SELECT db_dblink_1.id
            FROM db_dbnode AS db_dbnode_1
            JOIN db_dblink AS db_dblink_1 ON db_dblink_1.input_id = db_dbnode_1.id
            JOIN db_dbnode AS db_dbnode_2 ON db_dblink_1.output_id = db_dbnode_2.id
            WHERE db_dbnode_1.type LIKE 'calculation.inline.%'
            AND db_dbnode_2.type LIKE 'data.%'
            AND db_dblink_1.type = 'returnlink'
        )
        """
    )
    conn.execute(stmt1)
    # Now I am updating the link-types that are null because of either an export and subsequent import
    # https://github.com/aiidateam/aiida-core/issues/685
    # or because the link types don't exist because the links were added before the introduction of link types.
    # This is reported here: https://github.com/aiidateam/aiida-core/issues/687
    #
    # The following sql statement:
    # 1) selects all links that
    # - joins Data (or subclass) or Code as input
    # - joins Calculation (or subclass) as output. This includes WorkCalculation, InlineCalcuation, JobCalculations...
    # - has no type (null)
    # 2) set for these links the type to 'inputlink'
    stmt2 = text(
        """
        UPDATE db_dblink set type='inputlink' where id in (
            SELECT db_dblink_1.id
            FROM db_dbnode AS db_dbnode_1
            JOIN db_dblink AS db_dblink_1 ON db_dblink_1.input_id = db_dbnode_1.id
            JOIN db_dbnode AS db_dbnode_2 ON db_dblink_1.output_id = db_dbnode_2.id
            WHERE ( db_dbnode_1.type LIKE 'data.%' or db_dbnode_1.type = 'code.Code.' )
            AND db_dbnode_2.type LIKE 'calculation.%'
            AND ( db_dblink_1.type = null OR db_dblink_1.type = '')
        );
        """
    )
    conn.execute(stmt2)
    #
    # The following sql statement:
    # 1) selects all links that
    # - join JobCalculation (or subclass) or InlineCalculation as input
    # - joins Data (or subclass) as output.
    # - has no type (null)
    # 2) set for these links the type to 'createlink'
    stmt3 = text(
        """
        UPDATE db_dblink set type='createlink' where id in (
            SELECT db_dblink_1.id
            FROM db_dbnode AS db_dbnode_1
            JOIN db_dblink AS db_dblink_1 ON db_dblink_1.input_id = db_dbnode_1.id
            JOIN db_dbnode AS db_dbnode_2 ON db_dblink_1.output_id = db_dbnode_2.id
            WHERE db_dbnode_2.type LIKE 'data.%'
            AND (
                db_dbnode_1.type LIKE 'calculation.job.%'
                OR
                db_dbnode_1.type = 'calculation.inline.InlineCalculation.'
            )
            AND ( db_dblink_1.type = null OR db_dblink_1.type = '')
        )
        """
    )
    conn.execute(stmt3)
    # The following sql statement:
    # 1) selects all links that
    # - join WorkCalculation as input. No subclassing was introduced so far, so only one type string is checked for.
    # - join Data (or subclass) as output.
    # - has no type (null)
    # 2) set for these links the type to 'returnlink'
    stmt4 = text(
        """
        UPDATE db_dblink set type='returnlink' where id in (
            SELECT db_dblink_1.id
            FROM db_dbnode AS db_dbnode_1
            JOIN db_dblink AS db_dblink_1 ON db_dblink_1.input_id = db_dbnode_1.id
            JOIN db_dbnode AS db_dbnode_2 ON db_dblink_1.output_id = db_dbnode_2.id
            WHERE db_dbnode_2.type LIKE 'data.%'
            AND db_dbnode_1.type = 'calculation.work.WorkCalculation.'
            AND ( db_dblink_1.type = null OR db_dblink_1.type = '')
        )
        """
    )
    conn.execute(stmt4)
    # Now I update links that are CALLS:
    # The following sql statement:
    # 1) selects all links that
    # - join WorkCalculation as input. No subclassing was introduced so far, so only one type string is checked for.
    # - join Calculation (or subclass) as output. Includes JobCalculation and WorkCalculations and all subclasses.
    # - has no type (null)
    # 2) set for these links the type to 'calllink'
    stmt5 = text(
        """
        UPDATE db_dblink set type='calllink' where id in (
            SELECT db_dblink_1.id
            FROM db_dbnode AS db_dbnode_1
            JOIN db_dblink AS db_dblink_1 ON db_dblink_1.input_id = db_dbnode_1.id
            JOIN db_dbnode AS db_dbnode_2 ON db_dblink_1.output_id = db_dbnode_2.id
            WHERE db_dbnode_1.type = 'calculation.work.WorkCalculation.'
            AND db_dbnode_2.type LIKE 'calculation.%'
            AND ( db_dblink_1.type = null OR db_dblink_1.type = '')
        )
        """
    )
    conn.execute(stmt5) | [
"def",
"upgrade",
"(",
")",
":",
"conn",
"=",
"op",
".",
"get_bind",
"(",
")",
"# I am first migrating the wrongly declared returnlinks out of",
"# the InlineCalculations.",
"# This bug is reported #628 https://github.com/aiidateam/aiida-core/issues/628",
"# There is an explicit check ... | https://github.com/aiidateam/aiida-core/blob/c743a335480f8bb3a5e4ebd2463a31f9f3b9f9b2/aiida/backends/sqlalchemy/migrations/versions/a6048f0ffca8_update_linktypes.py#L28-L149 | ||
wakatime/legacy-python-cli | 9b64548b16ab5ef16603d9a6c2620a16d0df8d46 | wakatime/packages/py27/OpenSSL/crypto.py | python | X509Req.add_extensions | (self, extensions) | Add extensions to the certificate signing request.
:param extensions: The X.509 extensions to add.
:type extensions: iterable of :py:class:`X509Extension`
:return: ``None`` | Add extensions to the certificate signing request. | [
"Add",
"extensions",
"to",
"the",
"certificate",
"signing",
"request",
"."
] | def add_extensions(self, extensions):
    """
    Add extensions to the certificate signing request.
    :param extensions: The X.509 extensions to add.
    :type extensions: iterable of :py:class:`X509Extension`
    :return: ``None``
    """
    stack = _lib.sk_X509_EXTENSION_new_null()
    _openssl_assert(stack != _ffi.NULL)
    stack = _ffi.gc(stack, _lib.sk_X509_EXTENSION_free)
    for ext in extensions:
        if not isinstance(ext, X509Extension):
            raise ValueError("One of the elements is not an X509Extension")
        # TODO push can fail (here and elsewhere)
        _lib.sk_X509_EXTENSION_push(stack, ext._extension)
    add_result = _lib.X509_REQ_add_extensions(self._req, stack)
    _openssl_assert(add_result == 1) | [
"def",
"add_extensions",
"(",
"self",
",",
"extensions",
")",
":",
"stack",
"=",
"_lib",
".",
"sk_X509_EXTENSION_new_null",
"(",
")",
"_openssl_assert",
"(",
"stack",
"!=",
"_ffi",
".",
"NULL",
")",
"stack",
"=",
"_ffi",
".",
"gc",
"(",
"stack",
",",
"_l... | https://github.com/wakatime/legacy-python-cli/blob/9b64548b16ab5ef16603d9a6c2620a16d0df8d46/wakatime/packages/py27/OpenSSL/crypto.py#L965-L986 | ||
sidewalklabs/s2sphere | d1d067e8c06e5fbaf0cc0158bade947b4a03a438 | s2sphere/sphere.py | python | Interval.__repr__ | (self) | return '{}: ({}, {})'.format(
self.__class__.__name__, self.__bounds[0], self.__bounds[1]) | [] | def __repr__(self):
    return '{}: ({}, {})'.format(
        self.__class__.__name__, self.__bounds[0], self.__bounds[1]) | [
"def",
"__repr__",
"(",
"self",
")",
":",
"return",
"'{}: ({}, {})'",
".",
"format",
"(",
"self",
".",
"__class__",
".",
"__name__",
",",
"self",
".",
"__bounds",
"[",
"0",
"]",
",",
"self",
".",
"__bounds",
"[",
"1",
"]",
")"
] | https://github.com/sidewalklabs/s2sphere/blob/d1d067e8c06e5fbaf0cc0158bade947b4a03a438/s2sphere/sphere.py#L1966-L1968 | |||
holzschu/Carnets | 44effb10ddfc6aa5c8b0687582a724ba82c6b547 | Library/lib/python3.7/site-packages/sympy/physics/wigner.py | python | gaunt | (l_1, l_2, l_3, m_1, m_2, m_3, prec=None) | return res | r"""
Calculate the Gaunt coefficient.
The Gaunt coefficient is defined as the integral over three
spherical harmonics:
.. math::
\begin{aligned}
\operatorname{Gaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Y_{l_1,m_1}(\Omega)
Y_{l_2,m_2}(\Omega) Y_{l_3,m_3}(\Omega) \,d\Omega \\
&=\sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}}
\operatorname{Wigner3j}(l_1,l_2,l_3,0,0,0)
\operatorname{Wigner3j}(l_1,l_2,l_3,m_1,m_2,m_3)
\end{aligned}
INPUT:
- ``l_1``, ``l_2``, ``l_3``, ``m_1``, ``m_2``, ``m_3`` - integer
- ``prec`` - precision, default: ``None``. Providing a precision can
drastically speed up the calculation.
OUTPUT:
Rational number times the square root of a rational number
(if ``prec=None``), or real number if a precision is given.
Examples
========
>>> from sympy.physics.wigner import gaunt
>>> gaunt(1,0,1,1,0,-1)
-1/(2*sqrt(pi))
>>> gaunt(1000,1000,1200,9,3,-12).n(64)
0.00689500421922113448...
It is an error to use non-integer values for `l` and `m`::
sage: gaunt(1.2,0,1.2,0,0,0)
Traceback (most recent call last):
...
ValueError: l values must be integer
sage: gaunt(1,0,1,1.1,0,-1.1)
Traceback (most recent call last):
...
ValueError: m values must be integer
NOTES:
The Gaunt coefficient obeys the following symmetry rules:
- invariant under any permutation of the columns
.. math::
\begin{aligned}
Y(l_1,l_2,l_3,m_1,m_2,m_3)
&=Y(l_3,l_1,l_2,m_3,m_1,m_2) \\
&=Y(l_2,l_3,l_1,m_2,m_3,m_1) \\
&=Y(l_3,l_2,l_1,m_3,m_2,m_1) \\
&=Y(l_1,l_3,l_2,m_1,m_3,m_2) \\
&=Y(l_2,l_1,l_3,m_2,m_1,m_3)
\end{aligned}
- invariant under space inflection, i.e.
.. math::
Y(l_1,l_2,l_3,m_1,m_2,m_3)
=Y(l_1,l_2,l_3,-m_1,-m_2,-m_3)
- symmetric with respect to the 72 Regge symmetries as inherited
for the `3j` symbols [Regge58]_
- zero for `l_1`, `l_2`, `l_3` not fulfilling triangle relation
- zero for violating any one of the conditions: `l_1 \ge |m_1|`,
`l_2 \ge |m_2|`, `l_3 \ge |m_3|`
- non-zero only for an even sum of the `l_i`, i.e.
`L = l_1 + l_2 + l_3 = 2n` for `n` in `\mathbb{N}`
ALGORITHM:
This function uses the algorithm of [Liberatodebrito82]_ to
calculate the value of the Gaunt coefficient exactly. Note that
the formula contains alternating sums over large factorials and is
therefore unsuitable for finite precision arithmetic and only
useful for a computer algebra system [Rasch03]_.
AUTHORS:
- Jens Rasch (2009-03-24): initial version for Sage | r"""
Calculate the Gaunt coefficient. | [
"r",
"Calculate",
"the",
"Gaunt",
"coefficient",
"."
] | def gaunt(l_1, l_2, l_3, m_1, m_2, m_3, prec=None):
r"""
Calculate the Gaunt coefficient.
The Gaunt coefficient is defined as the integral over three
spherical harmonics:
.. math::
\begin{aligned}
\operatorname{Gaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Y_{l_1,m_1}(\Omega)
Y_{l_2,m_2}(\Omega) Y_{l_3,m_3}(\Omega) \,d\Omega \\
&=\sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}}
\operatorname{Wigner3j}(l_1,l_2,l_3,0,0,0)
\operatorname{Wigner3j}(l_1,l_2,l_3,m_1,m_2,m_3)
\end{aligned}
INPUT:
- ``l_1``, ``l_2``, ``l_3``, ``m_1``, ``m_2``, ``m_3`` - integer
- ``prec`` - precision, default: ``None``. Providing a precision can
drastically speed up the calculation.
OUTPUT:
Rational number times the square root of a rational number
(if ``prec=None``), or real number if a precision is given.
Examples
========
>>> from sympy.physics.wigner import gaunt
>>> gaunt(1,0,1,1,0,-1)
-1/(2*sqrt(pi))
>>> gaunt(1000,1000,1200,9,3,-12).n(64)
0.00689500421922113448...
It is an error to use non-integer values for `l` and `m`::
sage: gaunt(1.2,0,1.2,0,0,0)
Traceback (most recent call last):
...
ValueError: l values must be integer
sage: gaunt(1,0,1,1.1,0,-1.1)
Traceback (most recent call last):
...
ValueError: m values must be integer
NOTES:
The Gaunt coefficient obeys the following symmetry rules:
- invariant under any permutation of the columns
.. math::
\begin{aligned}
Y(l_1,l_2,l_3,m_1,m_2,m_3)
&=Y(l_3,l_1,l_2,m_3,m_1,m_2) \\
&=Y(l_2,l_3,l_1,m_2,m_3,m_1) \\
&=Y(l_3,l_2,l_1,m_3,m_2,m_1) \\
&=Y(l_1,l_3,l_2,m_1,m_3,m_2) \\
&=Y(l_2,l_1,l_3,m_2,m_1,m_3)
\end{aligned}
- invariant under space inflection, i.e.
.. math::
Y(l_1,l_2,l_3,m_1,m_2,m_3)
=Y(l_1,l_2,l_3,-m_1,-m_2,-m_3)
- symmetric with respect to the 72 Regge symmetries as inherited
for the `3j` symbols [Regge58]_
- zero for `l_1`, `l_2`, `l_3` not fulfilling triangle relation
- zero for violating any one of the conditions: `l_1 \ge |m_1|`,
`l_2 \ge |m_2|`, `l_3 \ge |m_3|`
- non-zero only for an even sum of the `l_i`, i.e.
`L = l_1 + l_2 + l_3 = 2n` for `n` in `\mathbb{N}`
ALGORITHM:
This function uses the algorithm of [Liberatodebrito82]_ to
calculate the value of the Gaunt coefficient exactly. Note that
the formula contains alternating sums over large factorials and is
therefore unsuitable for finite precision arithmetic and only
useful for a computer algebra system [Rasch03]_.
AUTHORS:
- Jens Rasch (2009-03-24): initial version for Sage
"""
if int(l_1) != l_1 or int(l_2) != l_2 or int(l_3) != l_3:
raise ValueError("l values must be integer")
if int(m_1) != m_1 or int(m_2) != m_2 or int(m_3) != m_3:
raise ValueError("m values must be integer")
sumL = l_1 + l_2 + l_3
bigL = sumL // 2
a1 = l_1 + l_2 - l_3
if a1 < 0:
return 0
a2 = l_1 - l_2 + l_3
if a2 < 0:
return 0
a3 = -l_1 + l_2 + l_3
if a3 < 0:
return 0
if sumL % 2:
return 0
if (m_1 + m_2 + m_3) != 0:
return 0
if (abs(m_1) > l_1) or (abs(m_2) > l_2) or (abs(m_3) > l_3):
return 0
imin = max(-l_3 + l_1 + m_2, -l_3 + l_2 - m_1, 0)
imax = min(l_2 + m_2, l_1 - m_1, l_1 + l_2 - l_3)
maxfact = max(l_1 + l_2 + l_3 + 1, imax + 1)
_calc_factlist(maxfact)
argsqrt = (2 * l_1 + 1) * (2 * l_2 + 1) * (2 * l_3 + 1) * \
_Factlist[l_1 - m_1] * _Factlist[l_1 + m_1] * _Factlist[l_2 - m_2] * \
_Factlist[l_2 + m_2] * _Factlist[l_3 - m_3] * _Factlist[l_3 + m_3] / \
(4*pi)
ressqrt = sqrt(argsqrt)
prefac = Integer(_Factlist[bigL] * _Factlist[l_2 - l_1 + l_3] *
_Factlist[l_1 - l_2 + l_3] * _Factlist[l_1 + l_2 - l_3])/ \
_Factlist[2 * bigL + 1]/ \
(_Factlist[bigL - l_1] *
_Factlist[bigL - l_2] * _Factlist[bigL - l_3])
sumres = 0
for ii in range(int(imin), int(imax) + 1):
den = _Factlist[ii] * _Factlist[ii + l_3 - l_1 - m_2] * \
_Factlist[l_2 + m_2 - ii] * _Factlist[l_1 - ii - m_1] * \
_Factlist[ii + l_3 - l_2 + m_1] * _Factlist[l_1 + l_2 - l_3 - ii]
sumres = sumres + Integer((-1) ** ii) / den
res = ressqrt * prefac * sumres * Integer((-1) ** (bigL + l_3 + m_1 - m_2))
if prec is not None:
res = res.n(prec)
return res | [
"def",
"gaunt",
"(",
"l_1",
",",
"l_2",
",",
"l_3",
",",
"m_1",
",",
"m_2",
",",
"m_3",
",",
"prec",
"=",
"None",
")",
":",
"if",
"int",
"(",
"l_1",
")",
"!=",
"l_1",
"or",
"int",
"(",
"l_2",
")",
"!=",
"l_2",
"or",
"int",
"(",
"l_3",
")",
... | https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/sympy/physics/wigner.py#L553-L699 | |
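The `gaunt` record above lists the conditions under which the Gaunt coefficient vanishes (triangle relation, even `L = l_1 + l_2 + l_3`, zero `m`-sum, `l_i >= |m_i|`). As a minimal illustration of just those selection rules — the function name `gaunt_is_nonzero` is ours, not sympy's, and it checks only whether the zero conditions apply, not the coefficient's value:

```python
def gaunt_is_nonzero(l1, l2, l3, m1, m2, m3):
    """Return False when one of the docstring's zero conditions applies."""
    # Triangle relation: each pairwise sum must reach the third l.
    if (l1 + l2 - l3) < 0 or (l1 - l2 + l3) < 0 or (-l1 + l2 + l3) < 0:
        return False
    # Non-zero only for an even total angular momentum L.
    if (l1 + l2 + l3) % 2:
        return False
    # The magnetic quantum numbers must sum to zero.
    if m1 + m2 + m3 != 0:
        return False
    # Each l_i must bound the magnitude of its m_i.
    if abs(m1) > l1 or abs(m2) > l2 or abs(m3) > l3:
        return False
    return True

# The docstring's own example, gaunt(1,0,1,1,0,-1), passes every rule:
print(gaunt_is_nonzero(1, 0, 1, 1, 0, -1))  # True
print(gaunt_is_nonzero(1, 1, 1, 0, 0, 0))   # False (odd L = 3)
```

These are exactly the early-return checks in the record's function body, before any factorial sums are computed.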
larryhastings/gilectomy | 4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a | Lib/tkinter/__init__.py | python | Misc._getboolean | (self, string) | Internal function. | Internal function. | [
"Internal",
"function",
"."
] | def _getboolean(self, string):
"""Internal function."""
if string:
return self.tk.getboolean(string) | [
"def",
"_getboolean",
"(",
"self",
",",
"string",
")",
":",
"if",
"string",
":",
"return",
"self",
".",
"tk",
".",
"getboolean",
"(",
"string",
")"
] | https://github.com/larryhastings/gilectomy/blob/4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a/Lib/tkinter/__init__.py#L1136-L1139 | ||
huggingface/transformers | 623b4f7c63f60cce917677ee704d6c93ee960b4b | src/transformers/utils/dummy_vision_objects.py | python | SegformerFeatureExtractor.__init__ | (self, *args, **kwargs) | [] | def __init__(self, *args, **kwargs):
requires_backends(self, ["vision"]) | [
"def",
"__init__",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"requires_backends",
"(",
"self",
",",
"[",
"\"vision\"",
"]",
")"
] | https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/src/transformers/utils/dummy_vision_objects.py#L73-L74 | ||||
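The transformers record above shows the dummy-object pattern: a placeholder class whose `__init__` calls `requires_backends`, so importing always succeeds but instantiation fails with a helpful error when an optional backend is missing. A simplified sketch — this `requires_backends` is our own stand-in (the real helper checks backend availability differently), and `some_missing_vision_lib` is a deliberately unavailable placeholder name:

```python
import importlib.util

def requires_backends(obj, backends):
    """Stand-in helper: raise if any named backend module is not importable."""
    missing = [b for b in backends if importlib.util.find_spec(b) is None]
    if missing:
        name = getattr(obj, "__name__", type(obj).__name__)
        raise ImportError(
            "%s requires the following backends: %s" % (name, ", ".join(missing))
        )

class SegformerFeatureExtractor:
    # Dummy placeholder: defining/importing it is free, using it is guarded.
    def __init__(self, *args, **kwargs):
        requires_backends(self, ["some_missing_vision_lib"])
```

The design choice is that import-time errors are deferred to use time, so `from transformers import X` never fails merely because an optional dependency is absent.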
zalando/spilo | 72b447fac1fc9b9e6e2b519dc520a50a5e0fdd25 | postgres-appliance/major_upgrade/inplace_upgrade.py | python | InplaceUpgrade.reset_custom_statistics_target | (self) | [] | def reset_custom_statistics_target(self):
from patroni.postgresql.connection import get_connection_cursor
logger.info('Resetting non-default statistics target before analyze')
self._statistics = defaultdict(lambda: defaultdict(dict))
conn_kwargs = self.postgresql.local_conn_kwargs
for d in self.postgresql.query('SELECT datname FROM pg_catalog.pg_database WHERE datallowconn'):
conn_kwargs['dbname'] = d[0]
with get_connection_cursor(**conn_kwargs) as cur:
cur.execute('SELECT attrelid::regclass, quote_ident(attname), attstattarget '
'FROM pg_catalog.pg_attribute WHERE attnum > 0 AND NOT attisdropped AND attstattarget > 0')
for table, column, target in cur.fetchall():
query = 'ALTER TABLE {0} ALTER COLUMN {1} SET STATISTICS -1'.format(table, column)
logger.info("Executing '%s' in the database=%s. Old value=%s", query, d[0], target)
cur.execute(query)
self._statistics[d[0]][table][column] = target | [
"def",
"reset_custom_statistics_target",
"(",
"self",
")",
":",
"from",
"patroni",
".",
"postgresql",
".",
"connection",
"import",
"get_connection_cursor",
"logger",
".",
"info",
"(",
"'Resetting non-default statistics target before analyze'",
")",
"self",
".",
"_statisti... | https://github.com/zalando/spilo/blob/72b447fac1fc9b9e6e2b519dc520a50a5e0fdd25/postgres-appliance/major_upgrade/inplace_upgrade.py#L411-L428 | ||||
ronreiter/interactive-tutorials | d026d1ae58941863d60eb30a8a94a8650d2bd4bf | suds/properties.py | python | Definition.validate | (self, value) | Validate the I{value} is of the correct class.
@param value: The value to validate.
@type value: any
@raise AttributeError: When I{value} is invalid. | Validate the I{value} is of the correct class. | [
"Validate",
"the",
"I",
"{",
"value",
"}",
"is",
"of",
"the",
"correct",
"class",
"."
] | def validate(self, value):
"""
Validate the I{value} is of the correct class.
@param value: The value to validate.
@type value: any
@raise AttributeError: When I{value} is invalid.
"""
if value is None:
return
if len(self.classes) and not isinstance(value, self.classes):
msg = '"%s" must be: %s' % (self.name, self.classes)
raise AttributeError(msg) | [
"def",
"validate",
"(",
"self",
",",
"value",
")",
":",
"if",
"value",
"is",
"None",
":",
"return",
"if",
"len",
"(",
"self",
".",
"classes",
")",
"and",
"not",
"isinstance",
"(",
"value",
",",
"self",
".",
"classes",
")",
":",
"msg",
"=",
"'\"%s\"... | https://github.com/ronreiter/interactive-tutorials/blob/d026d1ae58941863d60eb30a8a94a8650d2bd4bf/suds/properties.py#L173-L184 | ||
google/timesketch | 1ce6b60e125d104e6644947c6f1dbe1b82ac76b6 | timesketch/lib/analyzers/geoip.py | python | GeoIpClientAdapter.ip2geo | (self, ip_address: str) | Perform a IP to geolocation lookup.
Args:
ip_address - the IPv4 or IPv6 address to geolocate
Returns:
Either:
A tuple comprising of the following in order
- iso_code (str) - the ISO 31661 alpha-2 code of the country
- latitude (str) - the north-south coordinate
- longitude (str) - the east-west coordinate
- country_name (str) - the full country name
- city (str) - the city name that approximates the location
Or None:
- when the IP address does not have a resolvable location | Perform a IP to geolocation lookup. | [
"Perform",
"a",
"IP",
"to",
"geolocation",
"lookup",
"."
] | def ip2geo(self, ip_address: str) -> Union[Tuple[
str, str, str, str, str], None]:
"""Perform a IP to geolocation lookup.
Args:
ip_address - the IPv4 or IPv6 address to geolocate
Returns:
Either:
A tuple comprising of the following in order
- iso_code (str) - the ISO 31661 alpha-2 code of the country
- latitude (str) - the north-south coordinate
- longitude (str) - the east-west coordinate
- country_name (str) - the full country name
- city (str) - the city name that approximates the location
Or None:
- when the IP address does not have a resolvable location
"""
raise NotImplementedError | [
"def",
"ip2geo",
"(",
"self",
",",
"ip_address",
":",
"str",
")",
"->",
"Union",
"[",
"Tuple",
"[",
"str",
",",
"str",
",",
"str",
",",
"str",
",",
"str",
"]",
",",
"None",
"]",
":",
"raise",
"NotImplementedError"
] | https://github.com/google/timesketch/blob/1ce6b60e125d104e6644947c6f1dbe1b82ac76b6/timesketch/lib/analyzers/geoip.py#L56-L74 |