| nwo | sha | path | language | identifier | parameters | argument_list | return_statement | docstring | docstring_summary | docstring_tokens | function | function_tokens | url |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GiulioRossetti/cdlib | b2c6311b99725bb2b029556f531d244a2af14a2a | cdlib/algorithms/crisp_partition.py | python | rb_pots | (
g_original: object,
initial_membership: list = None,
weights: list = None,
resolution_parameter: float = 1,
) | return NodeClustering(
coms,
g_original,
"RB Pots",
method_parameters={
"initial_membership": initial_membership,
"weights": weights,
"resolution_parameter": resolution_parameter,
},
) | Rb_pots is a model where the quality function to optimize is:
.. math:: Q = \\sum_{ij} \\left(A_{ij} - \\gamma \\frac{k_i k_j}{2m} \\right)\\delta(\\sigma_i, \\sigma_j)
where :math:`A` is the adjacency matrix, :math:`k_i` is the (weighted) degree of node :math:`i`, :math:`m` is the total number of edges (or total edge weight), :math:`\\sigma_i` denotes the community of node :math:`i` and :math:`\\delta(\\sigma_i, \\sigma_j) = 1` if :math:`\\sigma_i = \\sigma_j` and `0` otherwise.
For directed graphs a slightly different formulation is used, as proposed by Leicht and Newman:
.. math:: Q = \\sum_{ij} \\left(A_{ij} - \\gamma \\frac{k_i^\\mathrm{out} k_j^\\mathrm{in}}{m} \\right)\\delta(\\sigma_i, \\sigma_j),
where :math:`k_i^\\mathrm{out}` and :math:`k_i^\\mathrm{in}` refer to the out-degree and in-degree of node :math:`i`, respectively, and :math:`A_{ij}` refers to an edge from :math:`i` to :math:`j`.
Note that this is the same as the Leiden algorithm when setting :math:`\\gamma=1` and normalising by :math:`2m`, or by :math:`m` for directed graphs.
**Supported Graph Types**
========== ======== ========
Undirected Directed Weighted
========== ======== ========
Yes Yes Yes
========== ======== ========
:param g_original: a networkx/igraph object
:param initial_membership: list of int. Initial membership for the partition. If :obj:`None` then defaults to a singleton partition. Default: None
:param weights: list of double, or edge attribute. Weights of edges. Can be either an iterable or an edge attribute. Default: None
:param resolution_parameter: double >0 A parameter value controlling the coarseness of the clustering. Higher resolutions lead to more communities, while lower resolutions lead to fewer communities. Default 1
:return: NodeClustering object
:Example:
>>> from cdlib import algorithms
>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> coms = algorithms.rb_pots(G)
:References:
Reichardt, J., & Bornholdt, S. (2006). `Statistical mechanics of community detection. <https://journals.aps.org/pre/abstract/10.1103/PhysRevE.74.016110/>`_ Physical Review E, 74(1), 016110. 10.1103/PhysRevE.74.016110
Leicht, E. A., & Newman, M. E. J. (2008). `Community Structure in Directed Networks. <https://www.ncbi.nlm.nih.gov/pubmed/18517839/>`_ Physical Review Letters, 100(11), 118703. 10.1103/PhysRevLett.100.118703 | Rb_pots is a model where the quality function to optimize is: | [
"Rb_pots",
"is",
"a",
"model",
"where",
"the",
"quality",
"function",
"to",
"optimize",
"is",
":"
] | def rb_pots(
g_original: object,
initial_membership: list = None,
weights: list = None,
resolution_parameter: float = 1,
) -> NodeClustering:
"""
Rb_pots is a model where the quality function to optimize is:
.. math:: Q = \\sum_{ij} \\left(A_{ij} - \\gamma \\frac{k_i k_j}{2m} \\right)\\delta(\\sigma_i, \\sigma_j)
where :math:`A` is the adjacency matrix, :math:`k_i` is the (weighted) degree of node :math:`i`, :math:`m` is the total number of edges (or total edge weight), :math:`\\sigma_i` denotes the community of node :math:`i` and :math:`\\delta(\\sigma_i, \\sigma_j) = 1` if :math:`\\sigma_i = \\sigma_j` and `0` otherwise.
For directed graphs a slightly different formulation is used, as proposed by Leicht and Newman:
.. math:: Q = \\sum_{ij} \\left(A_{ij} - \\gamma \\frac{k_i^\\mathrm{out} k_j^\\mathrm{in}}{m} \\right)\\delta(\\sigma_i, \\sigma_j),
where :math:`k_i^\\mathrm{out}` and :math:`k_i^\\mathrm{in}` refer to the out-degree and in-degree of node :math:`i`, respectively, and :math:`A_{ij}` refers to an edge from :math:`i` to :math:`j`.
Note that this is the same as the Leiden algorithm when setting :math:`\\gamma=1` and normalising by :math:`2m`, or by :math:`m` for directed graphs.
**Supported Graph Types**
========== ======== ========
Undirected Directed Weighted
========== ======== ========
Yes Yes Yes
========== ======== ========
:param g_original: a networkx/igraph object
:param initial_membership: list of int. Initial membership for the partition. If :obj:`None` then defaults to a singleton partition. Default: None
:param weights: list of double, or edge attribute. Weights of edges. Can be either an iterable or an edge attribute. Default: None
:param resolution_parameter: double >0 A parameter value controlling the coarseness of the clustering. Higher resolutions lead to more communities, while lower resolutions lead to fewer communities. Default 1
:return: NodeClustering object
:Example:
>>> from cdlib import algorithms
>>> import networkx as nx
>>> G = nx.karate_club_graph()
>>> coms = algorithms.rb_pots(G)
:References:
Reichardt, J., & Bornholdt, S. (2006). `Statistical mechanics of community detection. <https://journals.aps.org/pre/abstract/10.1103/PhysRevE.74.016110/>`_ Physical Review E, 74(1), 016110. 10.1103/PhysRevE.74.016110
Leicht, E. A., & Newman, M. E. J. (2008). `Community Structure in Directed Networks. <https://www.ncbi.nlm.nih.gov/pubmed/18517839/>`_ Physical Review Letters, 100(11), 118703. 10.1103/PhysRevLett.100.118703
"""
if ig is None:
raise ModuleNotFoundError(
"Optional dependency not satisfied: install igraph to use the selected feature."
)
g = convert_graph_formats(g_original, ig.Graph)
part = leidenalg.find_partition(
g,
leidenalg.RBConfigurationVertexPartition,
resolution_parameter=resolution_parameter,
initial_membership=initial_membership,
weights=weights,
)
coms = [g.vs[x]["name"] for x in part]
return NodeClustering(
coms,
g_original,
"RB Pots",
method_parameters={
"initial_membership": initial_membership,
"weights": weights,
"resolution_parameter": resolution_parameter,
},
) | [
"def",
"rb_pots",
"(",
"g_original",
":",
"object",
",",
"initial_membership",
":",
"list",
"=",
"None",
",",
"weights",
":",
"list",
"=",
"None",
",",
"resolution_parameter",
":",
"float",
"=",
"1",
",",
")",
"->",
"NodeClustering",
":",
"if",
"ig",
"is... | https://github.com/GiulioRossetti/cdlib/blob/b2c6311b99725bb2b029556f531d244a2af14a2a/cdlib/algorithms/crisp_partition.py#L598-L672 | |
CoinAlpha/hummingbot | 36f6149c1644c07cd36795b915f38b8f49b798e7 | hummingbot/connector/exchange/okex/okex_api_user_stream_data_source.py | python | OkexAPIUserStreamDataSource._authenticate_client | (self) | Sends an Authentication request to OKEx's WebSocket API Server | Sends an Authentication request to OKEx's WebSocket API Server | [
"Sends",
"an",
"Authentication",
"request",
"to",
"OKEx",
"s",
"WebSocket",
"API",
"Server"
] | async def _authenticate_client(self):
"""
Sends an Authentication request to OKEx's WebSocket API Server
"""
await self._websocket_connection.send(json.dumps(self._auth.generate_ws_auth()))
resp = await self._websocket_connection.recv()
msg = json.loads(resp)
if msg["event"] != 'login':
self.logger().error(f"Error occurred authenticating to websocket API server. {msg}")
self.logger().info("Successfully authenticated") | [
"async",
"def",
"_authenticate_client",
"(",
"self",
")",
":",
"await",
"self",
".",
"_websocket_connection",
".",
"send",
"(",
"json",
".",
"dumps",
"(",
"self",
".",
"_auth",
".",
"generate_ws_auth",
"(",
")",
")",
")",
"resp",
"=",
"await",
"self",
".... | https://github.com/CoinAlpha/hummingbot/blob/36f6149c1644c07cd36795b915f38b8f49b798e7/hummingbot/connector/exchange/okex/okex_api_user_stream_data_source.py#L50-L62 | ||
tomplus/kubernetes_asyncio | f028cc793e3a2c519be6a52a49fb77ff0b014c9b | kubernetes_asyncio/client/models/v2beta1_resource_metric_status.py | python | V2beta1ResourceMetricStatus.current_average_utilization | (self) | return self._current_average_utilization | Gets the current_average_utilization of this V2beta1ResourceMetricStatus. # noqa: E501
currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. It will only be present if `targetAverageValue` was set in the corresponding metric specification. # noqa: E501
:return: The current_average_utilization of this V2beta1ResourceMetricStatus. # noqa: E501
:rtype: int | Gets the current_average_utilization of this V2beta1ResourceMetricStatus. # noqa: E501 | [
"Gets",
"the",
"current_average_utilization",
"of",
"this",
"V2beta1ResourceMetricStatus",
".",
"#",
"noqa",
":",
"E501"
] | def current_average_utilization(self):
"""Gets the current_average_utilization of this V2beta1ResourceMetricStatus. # noqa: E501
currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. It will only be present if `targetAverageValue` was set in the corresponding metric specification. # noqa: E501
:return: The current_average_utilization of this V2beta1ResourceMetricStatus. # noqa: E501
:rtype: int
"""
return self._current_average_utilization | [
"def",
"current_average_utilization",
"(",
"self",
")",
":",
"return",
"self",
".",
"_current_average_utilization"
] | https://github.com/tomplus/kubernetes_asyncio/blob/f028cc793e3a2c519be6a52a49fb77ff0b014c9b/kubernetes_asyncio/client/models/v2beta1_resource_metric_status.py#L64-L72 | |
alanhamlett/pip-update-requirements | ce875601ef278c8ce00ad586434a978731525561 | pur/packages/pip/_vendor/ipaddress.py | python | _split_optional_netmask | (address) | return addr | Helper to split the netmask and raise AddressValueError if needed | Helper to split the netmask and raise AddressValueError if needed | [
"Helper",
"to",
"split",
"the",
"netmask",
"and",
"raise",
"AddressValueError",
"if",
"needed"
] | def _split_optional_netmask(address):
"""Helper to split the netmask and raise AddressValueError if needed"""
addr = _compat_str(address).split('/')
if len(addr) > 2:
raise AddressValueError("Only one '/' permitted in %r" % address)
return addr | [
"def",
"_split_optional_netmask",
"(",
"address",
")",
":",
"addr",
"=",
"_compat_str",
"(",
"address",
")",
".",
"split",
"(",
"'/'",
")",
"if",
"len",
"(",
"addr",
")",
">",
"2",
":",
"raise",
"AddressValueError",
"(",
"\"Only one '/' permitted in %r\"",
"... | https://github.com/alanhamlett/pip-update-requirements/blob/ce875601ef278c8ce00ad586434a978731525561/pur/packages/pip/_vendor/ipaddress.py#L278-L283 | |
bernwang/latte | b30ea4ee95efdbf52a274f504cb9920c5695acf9 | app/Mask_RCNN/model.py | python | generate_random_rois | (image_shape, count, gt_class_ids, gt_boxes) | return rois | Generates ROI proposals similar to what a region proposal network
would generate.
image_shape: [Height, Width, Depth]
count: Number of ROIs to generate
gt_class_ids: [N] Integer ground truth class IDs
gt_boxes: [N, (y1, x1, y2, x2)] Ground truth boxes in pixels.
Returns: [count, (y1, x1, y2, x2)] ROI boxes in pixels. | Generates ROI proposals similar to what a region proposal network
would generate. | [
"Generates",
"ROI",
"proposals",
"similar",
"to",
"what",
"a",
"region",
"proposal",
"network",
"would",
"generate",
"."
] | def generate_random_rois(image_shape, count, gt_class_ids, gt_boxes):
"""Generates ROI proposals similar to what a region proposal network
would generate.
image_shape: [Height, Width, Depth]
count: Number of ROIs to generate
gt_class_ids: [N] Integer ground truth class IDs
gt_boxes: [N, (y1, x1, y2, x2)] Ground truth boxes in pixels.
Returns: [count, (y1, x1, y2, x2)] ROI boxes in pixels.
"""
# placeholder
rois = np.zeros((count, 4), dtype=np.int32)
# Generate random ROIs around GT boxes (90% of count)
rois_per_box = int(0.9 * count / gt_boxes.shape[0])
for i in range(gt_boxes.shape[0]):
gt_y1, gt_x1, gt_y2, gt_x2 = gt_boxes[i]
h = gt_y2 - gt_y1
w = gt_x2 - gt_x1
# random boundaries
r_y1 = max(gt_y1 - h, 0)
r_y2 = min(gt_y2 + h, image_shape[0])
r_x1 = max(gt_x1 - w, 0)
r_x2 = min(gt_x2 + w, image_shape[1])
# To avoid generating boxes with zero area, we generate double what
# we need and filter out the extra. If we get fewer valid boxes
# than we need, we loop and try again.
while True:
y1y2 = np.random.randint(r_y1, r_y2, (rois_per_box * 2, 2))
x1x2 = np.random.randint(r_x1, r_x2, (rois_per_box * 2, 2))
# Filter out zero area boxes
threshold = 1
y1y2 = y1y2[np.abs(y1y2[:, 0] - y1y2[:, 1]) >=
threshold][:rois_per_box]
x1x2 = x1x2[np.abs(x1x2[:, 0] - x1x2[:, 1]) >=
threshold][:rois_per_box]
if y1y2.shape[0] == rois_per_box and x1x2.shape[0] == rois_per_box:
break
# Sort on axis 1 to ensure x1 <= x2 and y1 <= y2 and then reshape
# into x1, y1, x2, y2 order
x1, x2 = np.split(np.sort(x1x2, axis=1), 2, axis=1)
y1, y2 = np.split(np.sort(y1y2, axis=1), 2, axis=1)
box_rois = np.hstack([y1, x1, y2, x2])
rois[rois_per_box * i:rois_per_box * (i + 1)] = box_rois
# Generate random ROIs anywhere in the image (10% of count)
remaining_count = count - (rois_per_box * gt_boxes.shape[0])
# To avoid generating boxes with zero area, we generate double what
# we need and filter out the extra. If we get fewer valid boxes
# than we need, we loop and try again.
while True:
y1y2 = np.random.randint(0, image_shape[0], (remaining_count * 2, 2))
x1x2 = np.random.randint(0, image_shape[1], (remaining_count * 2, 2))
# Filter out zero area boxes
threshold = 1
y1y2 = y1y2[np.abs(y1y2[:, 0] - y1y2[:, 1]) >=
threshold][:remaining_count]
x1x2 = x1x2[np.abs(x1x2[:, 0] - x1x2[:, 1]) >=
threshold][:remaining_count]
if y1y2.shape[0] == remaining_count and x1x2.shape[0] == remaining_count:
break
# Sort on axis 1 to ensure x1 <= x2 and y1 <= y2 and then reshape
# into x1, y1, x2, y2 order
x1, x2 = np.split(np.sort(x1x2, axis=1), 2, axis=1)
y1, y2 = np.split(np.sort(y1y2, axis=1), 2, axis=1)
global_rois = np.hstack([y1, x1, y2, x2])
rois[-remaining_count:] = global_rois
return rois | [
"def",
"generate_random_rois",
"(",
"image_shape",
",",
"count",
",",
"gt_class_ids",
",",
"gt_boxes",
")",
":",
"# placeholder",
"rois",
"=",
"np",
".",
"zeros",
"(",
"(",
"count",
",",
"4",
")",
",",
"dtype",
"=",
"np",
".",
"int32",
")",
"# Generate r... | https://github.com/bernwang/latte/blob/b30ea4ee95efdbf52a274f504cb9920c5695acf9/app/Mask_RCNN/model.py#L1469-L1540 | |
akaraspt/deepsleepnet | d4906b4875547a45175eaba8bdde280b7b1496f1 | tensorlayer/files.py | python | maybe_download_and_extract | (filename, working_directory, url_source, extract=False, expected_bytes=None) | return filepath | Checks if the file exists in working_directory, otherwise tries to download the file,
and optionally also tries to extract the file if the format is ".zip" or ".tar"
Parameters
----------
filename : string
The name of the (to be) dowloaded file.
working_directory : string
A folder path to search for the file in and download the file to
url : string
The URL to download the file from
extract : bool, defaults to False
If True, tries to uncompress the downloaded file if it is a ".tar.gz/.tar.bz2" or ".zip" file
expected_bytes : int/None
If set tries to verify that the downloaded file is of the specified size, otherwise raises an Exception,
defaults to None which corresponds to no check being performed
Returns
----------
filepath to downloaded (uncompressed) file
Examples
--------
>>> down_file = tl.files.maybe_download_and_extract(filename = 'train-images-idx3-ubyte.gz',
working_directory = 'data/',
url_source = 'http://yann.lecun.com/exdb/mnist/')
>>> tl.files.maybe_download_and_extract(filename = 'ADEChallengeData2016.zip',
working_directory = 'data/',
url_source = 'http://sceneparsing.csail.mit.edu/data/',
extract=True) | Checks if the file exists in working_directory, otherwise tries to download the file,
and optionally also tries to extract the file if the format is ".zip" or ".tar" | [
"Checks",
"if",
"file",
"exists",
"in",
"working_directory",
"otherwise",
"tries",
"to",
"dowload",
"the",
"file",
"and",
"optionally",
"also",
"tries",
"to",
"extract",
"the",
"file",
"if",
"format",
"is",
".",
"zip",
"or",
".",
"tar"
] | def maybe_download_and_extract(filename, working_directory, url_source, extract=False, expected_bytes=None):
"""Checks if file exists in working_directory otherwise tries to dowload the file,
and optionally also tries to extract the file if format is ".zip" or ".tar"
Parameters
----------
filename : string
The name of the (to be) dowloaded file.
working_directory : string
A folder path to search for the file in and download the file to
url : string
The URL to download the file from
extract : bool, defaults to False
If True, tries to uncompress the downloaded file if it is a ".tar.gz/.tar.bz2" or ".zip" file
expected_bytes : int/None
If set tries to verify that the downloaded file is of the specified size, otherwise raises an Exception,
defaults to None which corresponds to no check being performed
Returns
----------
filepath to downloaded (uncompressed) file
Examples
--------
>>> down_file = tl.files.maybe_download_and_extract(filename = 'train-images-idx3-ubyte.gz',
working_directory = 'data/',
url_source = 'http://yann.lecun.com/exdb/mnist/')
>>> tl.files.maybe_download_and_extract(filename = 'ADEChallengeData2016.zip',
working_directory = 'data/',
url_source = 'http://sceneparsing.csail.mit.edu/data/',
extract=True)
"""
# We first define a download function, supporting both Python 2 and 3.
def _download(filename, working_directory, url_source):
def _dlProgress(count, blockSize, totalSize):
if(totalSize != 0):
percent = float(count * blockSize) / float(totalSize) * 100.0
sys.stdout.write("\r" "Downloading " + filename + "...%d%%" % percent)
sys.stdout.flush()
if sys.version_info[0] == 2:
from urllib import urlretrieve  # Python 2 keeps urlretrieve in urllib
else:
from urllib.request import urlretrieve
filepath = os.path.join(working_directory, filename)
urlretrieve(url_source+filename, filepath, reporthook=_dlProgress)
exists_or_mkdir(working_directory, verbose=False)
filepath = os.path.join(working_directory, filename)
if not os.path.exists(filepath):
_download(filename, working_directory, url_source)
print()
statinfo = os.stat(filepath)
print(('Successfully downloaded', filename, statinfo.st_size, 'bytes.'))
if(not(expected_bytes is None) and (expected_bytes != statinfo.st_size)):
raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
if(extract):
if tarfile.is_tarfile(filepath):
print('Trying to extract tar file')
tarfile.open(filepath, 'r').extractall(working_directory)
print('... Success!')
elif zipfile.is_zipfile(filepath):
print('Trying to extract zip file')
with zipfile.ZipFile(filepath) as zf:
zf.extractall(working_directory)
print('... Success!')
else:
print("Unknown compression_format only .tar.gz/.tar.bz2/.tar and .zip supported")
return filepath | [
"def",
"maybe_download_and_extract",
"(",
"filename",
",",
"working_directory",
",",
"url_source",
",",
"extract",
"=",
"False",
",",
"expected_bytes",
"=",
"None",
")",
":",
"# We first define a download function, supporting both Python 2 and 3.",
"def",
"_download",
"(",
... | https://github.com/akaraspt/deepsleepnet/blob/d4906b4875547a45175eaba8bdde280b7b1496f1/tensorlayer/files.py#L791-L858 | |
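The extract branch of the function above boils down to a dispatch on the archive type. A minimal standalone sketch of that dispatch (function name and error type are illustrative, not part of TensorLayer), exercised on a throwaway zip file:

```python
import os
import tarfile
import tempfile
import zipfile

def extract_archive(filepath, working_directory):
    # Dispatch on archive type the same way maybe_download_and_extract does.
    if tarfile.is_tarfile(filepath):
        with tarfile.open(filepath, 'r') as tf:
            tf.extractall(working_directory)
    elif zipfile.is_zipfile(filepath):
        with zipfile.ZipFile(filepath) as zf:
            zf.extractall(working_directory)
    else:
        raise ValueError('unsupported archive format: %s' % filepath)

# Demonstrate with a zip file created on the fly.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, 'demo.zip')
with zipfile.ZipFile(archive, 'w') as zf:
    zf.writestr('hello.txt', 'hello')
extract_archive(archive, workdir)
print(open(os.path.join(workdir, 'hello.txt')).read())  # hello
```

`tarfile.is_tarfile` is checked first, mirroring the original ordering; a `.zip` file fails that check and falls through to the `zipfile` branch.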
emesene/emesene | 4548a4098310e21b16437bb36223a7f632a4f7bc | emesene/gui/qt4ui/widgets/NickEdit.py | python | QLabelEmph.text | (self) | return self._text | Returns the text | Returns the text | [
"Returns",
"the",
"text"
] | def text(self):
'''Returns the text'''
return self._text | [
"def",
"text",
"(",
"self",
")",
":",
"return",
"self",
".",
"_text"
] | https://github.com/emesene/emesene/blob/4548a4098310e21b16437bb36223a7f632a4f7bc/emesene/gui/qt4ui/widgets/NickEdit.py#L130-L132 | |
threat9/routersploit | 3fd394637f5566c4cf6369eecae08c4d27f93cda | routersploit/core/exploit/exploit.py | python | BaseExploit.options | (self) | return list(self.exploit_attributes.keys()) | Returns list of options that user can set.
Returns list of options aggregated by
ExploitionOptionsAggegator metaclass that user can set.
:return: list of options that user can set | Returns list of options that user can set. | [
"Returns",
"list",
"of",
"options",
"that",
"user",
"can",
"set",
"."
] | def options(self):
""" Returns list of options that user can set.
Returns list of options aggregated by
ExploitionOptionsAggegator metaclass that user can set.
:return: list of options that user can set
"""
return list(self.exploit_attributes.keys()) | [
"def",
"options",
"(",
"self",
")",
":",
"return",
"list",
"(",
"self",
".",
"exploit_attributes",
".",
"keys",
"(",
")",
")"
] | https://github.com/threat9/routersploit/blob/3fd394637f5566c4cf6369eecae08c4d27f93cda/routersploit/core/exploit/exploit.py#L60-L69 | |
kra3/py-ga-mob | 523de72dbf4070d9de3e9b78aee8c9777bc53107 | pyga/entities.py | python | CustomVariable.validate | (self) | According to the GA documentation, there is a limit to the combined size of
name and value of 64 bytes after URL encoding,
see http://code.google.com/apis/analytics/docs/tracking/gaTrackingCustomVariables.html#varTypes
and http://xahlee.org/js/google_analytics_tracker_2010-07-01_expanded.js line 563
This limit was increased to 128 bytes BEFORE encoding with the 2012-01 release of ga.js however,
see http://code.google.com/apis/analytics/community/gajs_changelog.html | According to the GA documentation, there is a limit to the combined size of
name and value of 64 bytes after URL encoding,
see http://code.google.com/apis/analytics/docs/tracking/gaTrackingCustomVariables.html#varTypes
and http://xahlee.org/js/google_analytics_tracker_2010-07-01_expanded.js line 563
This limit was increased to 128 bytes BEFORE encoding with the 2012-01 release of ga.js however,
see http://code.google.com/apis/analytics/community/gajs_changelog.html | [
"According",
"to",
"the",
"GA",
"documentation",
"there",
"is",
"a",
"limit",
"to",
"the",
"combined",
"size",
"of",
"name",
"and",
"value",
"of",
"64",
"bytes",
"after",
"URL",
"encoding",
"see",
"http",
":",
"//",
"code",
".",
"google",
".",
"com",
"... | def validate(self):
'''
According to the GA documentation, there is a limit to the combined size of
name and value of 64 bytes after URL encoding,
see http://code.google.com/apis/analytics/docs/tracking/gaTrackingCustomVariables.html#varTypes
and http://xahlee.org/js/google_analytics_tracker_2010-07-01_expanded.js line 563
This limit was increased to 128 bytes BEFORE encoding with the 2012-01 release of ga.js however,
see http://code.google.com/apis/analytics/community/gajs_changelog.html
'''
if len('%s%s' % (self.name, self.value)) > 128:
raise exceptions.ValidationError('Custom Variable combined name and value length must not be larger than 128 bytes.') | [
"def",
"validate",
"(",
"self",
")",
":",
"if",
"len",
"(",
"'%s%s'",
"%",
"(",
"self",
".",
"name",
",",
"self",
".",
"value",
")",
")",
">",
"128",
":",
"raise",
"exceptions",
".",
"ValidationError",
"(",
"'Custom Variable combined name and value length mu... | https://github.com/kra3/py-ga-mob/blob/523de72dbf4070d9de3e9b78aee8c9777bc53107/pyga/entities.py#L173-L183 | ||
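The 128-byte check in the validate method above can be sketched on its own. Note one subtlety: `len` on a Python unicode string counts characters, not URL-encoded bytes, so like the original this is an approximation of the limit described in the docstring. Names here are illustrative, not pyga's API:

```python
class ValidationError(Exception):
    pass

def validate_custom_variable(name, value):
    # Reject name+value pairs whose combined length exceeds GA's 128 limit.
    if len('%s%s' % (name, value)) > 128:
        raise ValidationError(
            'Custom Variable combined name and value length '
            'must not be larger than 128 bytes.')

validate_custom_variable('plan', 'premium')       # fine: 11 characters combined
try:
    validate_custom_variable('x' * 64, 'y' * 65)  # 129 characters combined
except ValidationError as e:
    print(e)
```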
googleads/google-ads-python | 2a1d6062221f6aad1992a6bcca0e7e4a93d2db86 | google/ads/googleads/v9/services/services/ad_group_criterion_customizer_service/client.py | python | AdGroupCriterionCustomizerServiceClient.parse_customizer_attribute_path | (path: str) | return m.groupdict() if m else {} | Parse a customizer_attribute path into its component segments. | Parse a customizer_attribute path into its component segments. | [
"Parse",
"a",
"customizer_attribute",
"path",
"into",
"its",
"component",
"segments",
"."
] | def parse_customizer_attribute_path(path: str) -> Dict[str, str]:
"""Parse a customizer_attribute path into its component segments."""
m = re.match(
r"^customers/(?P<customer_id>.+?)/customizerAttributes/(?P<customizer_attribute_id>.+?)$",
path,
)
return m.groupdict() if m else {} | [
"def",
"parse_customizer_attribute_path",
"(",
"path",
":",
"str",
")",
"->",
"Dict",
"[",
"str",
",",
"str",
"]",
":",
"m",
"=",
"re",
".",
"match",
"(",
"r\"^customers/(?P<customer_id>.+?)/customizerAttributes/(?P<customizer_attribute_id>.+?)$\"",
",",
"path",
",",
... | https://github.com/googleads/google-ads-python/blob/2a1d6062221f6aad1992a6bcca0e7e4a93d2db86/google/ads/googleads/v9/services/services/ad_group_criterion_customizer_service/client.py#L236-L242 | |
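The path parser above is a thin wrapper around a named-group regex. A standalone sketch of the same pattern (helper name is illustrative, not the google-ads client API) shows the two outcomes — a populated dict on match, an empty dict otherwise:

```python
import re

_PATTERN = re.compile(
    r"^customers/(?P<customer_id>.+?)/customizerAttributes/"
    r"(?P<customizer_attribute_id>.+?)$")

def parse_path(path):
    # groupdict() maps each named group to its captured segment.
    m = _PATTERN.match(path)
    return m.groupdict() if m else {}

print(parse_path('customers/123/customizerAttributes/456'))
# {'customer_id': '123', 'customizer_attribute_id': '456'}
print(parse_path('not/a/valid/path'))  # {}
```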
kanzure/nanoengineer | 874e4c9f8a9190f093625b267f9767e19f82e6c4 | cad/src/graphics/drawing/ColorSortedDisplayList.py | python | ColorSortedDisplayList.__del__ | (self) | return | Called by Python when an object is being freed. | Called by Python when an object is being freed. | [
"Called",
"by",
"Python",
"when",
"an",
"object",
"is",
"being",
"freed",
"."
] | def __del__(self): # Russ 080915
"""
Called by Python when an object is being freed.
"""
self.destroy()
return | [
"def",
"__del__",
"(",
"self",
")",
":",
"# Russ 080915",
"self",
".",
"destroy",
"(",
")",
"return"
] | https://github.com/kanzure/nanoengineer/blob/874e4c9f8a9190f093625b267f9767e19f82e6c4/cad/src/graphics/drawing/ColorSortedDisplayList.py#L975-L980 | |
letmaik/rawpy | 183b36ee49268d2f9097ceb88d573b552e4f5919 | rawpy/__init__.py | python | imread | (pathOrFile) | return d | Convenience function that creates a :class:`rawpy.RawPy` instance, opens the given file,
and returns the :class:`rawpy.RawPy` instance for further processing.
:param str|file pathOrFile: path or file object of RAW image that will be read
:rtype: :class:`rawpy.RawPy` | Convenience function that creates a :class:`rawpy.RawPy` instance, opens the given file,
and returns the :class:`rawpy.RawPy` instance for further processing.
:param str|file pathOrFile: path or file object of RAW image that will be read
:rtype: :class:`rawpy.RawPy` | [
"Convenience",
"function",
"that",
"creates",
"a",
":",
"class",
":",
"rawpy",
".",
"RawPy",
"instance",
"opens",
"the",
"given",
"file",
"and",
"returns",
"the",
":",
"class",
":",
"rawpy",
".",
"RawPy",
"instance",
"for",
"further",
"processing",
".",
":... | def imread(pathOrFile):
"""
Convenience function that creates a :class:`rawpy.RawPy` instance, opens the given file,
and returns the :class:`rawpy.RawPy` instance for further processing.
:param str|file pathOrFile: path or file object of RAW image that will be read
:rtype: :class:`rawpy.RawPy`
"""
d = RawPy()
if hasattr(pathOrFile, 'read'):
d.open_buffer(pathOrFile)
else:
d.open_file(pathOrFile)
return d | [
"def",
"imread",
"(",
"pathOrFile",
")",
":",
"d",
"=",
"RawPy",
"(",
")",
"if",
"hasattr",
"(",
"pathOrFile",
",",
"'read'",
")",
":",
"d",
".",
"open_buffer",
"(",
"pathOrFile",
")",
"else",
":",
"d",
".",
"open_file",
"(",
"pathOrFile",
")",
"retu... | https://github.com/letmaik/rawpy/blob/183b36ee49268d2f9097ceb88d573b552e4f5919/rawpy/__init__.py#L8-L21 | |
tensorflow/privacy | 867f3d4c5566b21433a6a1bed998094d1479b4d5 | tensorflow_privacy/privacy/dp_query/quantile_adaptive_clip_sum_query.py | python | QuantileAdaptiveClipSumQuery.initial_sample_state | (self, template) | return self._SampleState(
self._sum_query.initial_sample_state(template),
self._quantile_estimator_query.initial_sample_state()) | Implements `tensorflow_privacy.DPQuery.initial_sample_state`. | Implements `tensorflow_privacy.DPQuery.initial_sample_state`. | [
"Implements",
"tensorflow_privacy",
".",
"DPQuery",
".",
"initial_sample_state",
"."
] | def initial_sample_state(self, template):
"""Implements `tensorflow_privacy.DPQuery.initial_sample_state`."""
return self._SampleState(
self._sum_query.initial_sample_state(template),
self._quantile_estimator_query.initial_sample_state()) | [
"def",
"initial_sample_state",
"(",
"self",
",",
"template",
")",
":",
"return",
"self",
".",
"_SampleState",
"(",
"self",
".",
"_sum_query",
".",
"initial_sample_state",
"(",
"template",
")",
",",
"self",
".",
"_quantile_estimator_query",
".",
"initial_sample_sta... | https://github.com/tensorflow/privacy/blob/867f3d4c5566b21433a6a1bed998094d1479b4d5/tensorflow_privacy/privacy/dp_query/quantile_adaptive_clip_sum_query.py#L109-L113 | |
Jenyay/outwiker | 50530cf7b3f71480bb075b2829bc0669773b835b | plugins/updatenotifier/updatenotifier/libs/jinja2/bccache.py | python | BytecodeCache.get_cache_key | (self, name, filename=None) | return hash.hexdigest() | Returns the unique hash key for this template name. | Returns the unique hash key for this template name. | [
"Returns",
"the",
"unique",
"hash",
"key",
"for",
"this",
"template",
"name",
"."
] | def get_cache_key(self, name, filename=None):
"""Returns the unique hash key for this template name."""
hash = sha1(name.encode('utf-8'))
if filename is not None:
filename = '|' + filename
if isinstance(filename, text_type):
filename = filename.encode('utf-8')
hash.update(filename)
return hash.hexdigest() | [
"def",
"get_cache_key",
"(",
"self",
",",
"name",
",",
"filename",
"=",
"None",
")",
":",
"hash",
"=",
"sha1",
"(",
"name",
".",
"encode",
"(",
"'utf-8'",
")",
")",
"if",
"filename",
"is",
"not",
"None",
":",
"filename",
"=",
"'|'",
"+",
"filename",
... | https://github.com/Jenyay/outwiker/blob/50530cf7b3f71480bb075b2829bc0669773b835b/plugins/updatenotifier/updatenotifier/libs/jinja2/bccache.py#L166-L174 | |
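The cache-key scheme above — SHA-1 of the template name, with an optional `'|' + filename` mixed in — is easy to restate standalone. This sketch folds the bytes/unicode handling into one `encode` call rather than Jinja2's Python-2-compatible branches:

```python
from hashlib import sha1

def cache_key(name, filename=None):
    # Hash the template name (and optional filename) into a hex digest.
    h = sha1(name.encode('utf-8'))
    if filename is not None:
        h.update(('|' + filename).encode('utf-8'))
    return h.hexdigest()

print(cache_key('index.html'))
print(cache_key('index.html', '/templates/index.html'))
```

The `'|'` separator keeps `('ab', 'c')` and `('a', 'bc')` from colliding, since the filename is appended to the running hash rather than concatenated bare.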
1040003585/WebScrapingWithPython | a770fa5b03894076c8c9539b1ffff34424ffc016 | ResourceCode/wswp-places-c573d29efa3a/web2py/scripts/sync_languages.py | python | sync_language | (d, data) | return d | This function makes sure a translated string will be preferred over an untranslated
string when syncing languages between apps. When both are translated, it prefers the
latter app, as did the original script | This function makes sure a translated string will be preferred over an untranslated
string when syncing languages between apps. When both are translated, it prefers the
latter app, as did the original script | [
"this",
"function",
"makes",
"sure",
"a",
"translated",
"string",
"will",
"be",
"prefered",
"over",
"an",
"untranslated",
"string",
"when",
"syncing",
"languages",
"between",
"apps",
".",
"when",
"both",
"are",
"translated",
"it",
"prefers",
"the",
"latter",
"... | def sync_language(d, data):
''' this function makes sure a translated string will be prefered over an untranslated
string when syncing languages between apps. when both are translated, it prefers the
latter app, as did the original script
'''
for key in data:
# if this string is not in the already translated data, add it
if key not in d:
d[key] = data[key]
# keep the existing translation when the original list has a translated
# string but the new list does not
elif (
((d[key] != '') and (d[key] != key)) and
((data[key] == '') or (data[key] == key))
):
d[key] = d[key]  # no-op: keep the existing (translated) value
# any other case (whether there is or there isn't a translated string)
else:
d[key] = data[key]
return d | [
"def",
"sync_language",
"(",
"d",
",",
"data",
")",
":",
"for",
"key",
"in",
"data",
":",
"# if this string is not in the allready translated data, add it",
"if",
"key",
"not",
"in",
"d",
":",
"d",
"[",
"key",
"]",
"=",
"data",
"[",
"key",
"]",
"# see if the... | https://github.com/1040003585/WebScrapingWithPython/blob/a770fa5b03894076c8c9539b1ffff34424ffc016/ResourceCode/wswp-places-c573d29efa3a/web2py/scripts/sync_languages.py#L14-L34 | |
pythondigest/pythondigest | 38acb6a04cfaf16fad48b31d0f7f4602a641165f | jobs/management/commands/import_jobs.py | python | is_not_excl | (words: list, item: dict) | return not bool([i for _, x in item.items() for i in words if i in str(x)]) | Returns True if none of the fields of item
contain a word from the exclusion list
:param words:
:param item:
:return: | Returns True if no element of item
contains words from the exclusion list
:param words:
:param item:
:return: | [
"Возвращает",
"True",
"если",
"ни",
"один",
"из",
"элементов",
"item",
"не",
"содержит",
"слов",
"из",
"списка",
"на",
"исключения",
":",
"param",
"words",
":",
":",
"param",
"item",
":",
":",
"return",
":"
] | def is_not_excl(words: list, item: dict) -> bool:
"""
Returns True if no element of item
contains words from the exclusion list
:param words:
:param item:
:return:
"""
return not bool([i for _, x in item.items() for i in words if i in str(x)]) | [
"def",
"is_not_excl",
"(",
"words",
":",
"list",
",",
"item",
":",
"dict",
")",
"->",
"bool",
":",
"return",
"not",
"bool",
"(",
"[",
"i",
"for",
"_",
",",
"x",
"in",
"item",
".",
"items",
"(",
")",
"for",
"i",
"in",
"words",
"if",
"i",
"in",
... | https://github.com/pythondigest/pythondigest/blob/38acb6a04cfaf16fad48b31d0f7f4602a641165f/jobs/management/commands/import_jobs.py#L99-L107 | |
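`is_not_excl` above is a one-line filter; a reproduction with invented sample data shows the matching is case-sensitive and applied to the `str()` of every value:

```python
# Reproduction of `is_not_excl` from the row above: returns True when no
# value in `item` contains any of the exclusion `words`.

def is_not_excl(words: list, item: dict) -> bool:
    return not bool([i for _, x in item.items() for i in words if i in str(x)])


ok = is_not_excl(['senior'], {'title': 'junior python developer'})
bad = is_not_excl(['senior'], {'title': 'senior python developer'})
print(ok, bad)  # True False
```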
JaniceWuo/MovieRecommend | 4c86db64ca45598917d304f535413df3bc9fea65 | movierecommend/venv1/Lib/site-packages/django/db/models/functions/base.py | python | Substr.as_sqlite | (self, compiler, connection) | return super(Substr, self).as_sql(compiler, connection, function='SUBSTR') | [] | def as_sqlite(self, compiler, connection):
return super(Substr, self).as_sql(compiler, connection, function='SUBSTR') | [
"def",
"as_sqlite",
"(",
"self",
",",
"compiler",
",",
"connection",
")",
":",
"return",
"super",
"(",
"Substr",
",",
"self",
")",
".",
"as_sql",
"(",
"compiler",
",",
"connection",
",",
"function",
"=",
"'SUBSTR'",
")"
] | https://github.com/JaniceWuo/MovieRecommend/blob/4c86db64ca45598917d304f535413df3bc9fea65/movierecommend/venv1/Lib/site-packages/django/db/models/functions/base.py#L216-L217 | |||
iGio90/Dwarf | bb3011cdffd209c7e3f5febe558053bf649ca69c | dwarf_debugger/ui/panels/panel_objc_inspector.py | python | ObjCInspector._on_enumerate_objc_modules | (self, modules) | Fills the ModulesList with data | Fills the ModulesList with data | [
"Fills",
"the",
"ModulesList",
"with",
"data"
] | def _on_enumerate_objc_modules(self, modules):
""" Fills the ModulesList with data
"""
if self._ObjC_modules is None:
return
self._ObjC_modules.clear()
for module in modules:
self.add_module(module) | [
"def",
"_on_enumerate_objc_modules",
"(",
"self",
",",
"modules",
")",
":",
"if",
"self",
".",
"_ObjC_modules",
"is",
"None",
":",
"return",
"self",
".",
"_ObjC_modules",
".",
"clear",
"(",
")",
"for",
"module",
"in",
"modules",
":",
"self",
".",
"add_modu... | https://github.com/iGio90/Dwarf/blob/bb3011cdffd209c7e3f5febe558053bf649ca69c/dwarf_debugger/ui/panels/panel_objc_inspector.py#L267-L275 | ||
demisto/content | 5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07 | Packs/BmcHelixRemedyForce/Integrations/BmcHelixRemedyForce/BMCHelixRemedyforce.py | python | remove_extra_space_from_args | (args: Dict[str, str]) | return {key: value.strip() for (key, value) in args.items() if value and len(value.strip()) > 0} | Remove leading and trailing spaces from all the arguments and remove empty arguments
:param args: Dictionary of arguments
:return: Dictionary of arguments | Remove leading and trailing spaces from all the arguments and remove empty arguments
:param args: Dictionary of arguments
:return: Dictionary of arguments | [
"Remove",
"leading",
"and",
"trailing",
"spaces",
"from",
"all",
"the",
"arguments",
"and",
"remove",
"empty",
"arguments",
":",
"param",
"args",
":",
"Dictionary",
"of",
"arguments",
":",
"return",
":",
"Dictionary",
"of",
"arguments"
] | def remove_extra_space_from_args(args: Dict[str, str]) -> Dict[str, str]:
"""
Remove leading and trailing spaces from all the arguments and remove empty arguments
:param args: Dictionary of arguments
:return: Dictionary of arguments
"""
return {key: value.strip() for (key, value) in args.items() if value and len(value.strip()) > 0} | [
"def",
"remove_extra_space_from_args",
"(",
"args",
":",
"Dict",
"[",
"str",
",",
"str",
"]",
")",
"->",
"Dict",
"[",
"str",
",",
"str",
"]",
":",
"return",
"{",
"key",
":",
"value",
".",
"strip",
"(",
")",
"for",
"(",
"key",
",",
"value",
")",
"... | https://github.com/demisto/content/blob/5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07/Packs/BmcHelixRemedyForce/Integrations/BmcHelixRemedyForce/BMCHelixRemedyforce.py#L1499-L1505 | |
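The `remove_extra_space_from_args` row above drops empty or whitespace-only arguments and strips the rest; a standalone reproduction (values are assumed to be strings, as the type hints state):

```python
# Reproduction of `remove_extra_space_from_args` from the row above.
# Empty or whitespace-only values are dropped; others keep stripped values.
from typing import Dict


def remove_extra_space_from_args(args: Dict[str, str]) -> Dict[str, str]:
    return {key: value.strip() for (key, value) in args.items() if value and len(value.strip()) > 0}


cleaned = remove_extra_space_from_args({'a': '  x ', 'b': '', 'c': '   '})
print(cleaned)  # {'a': 'x'}
```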
securesystemslab/zippy | ff0e84ac99442c2c55fe1d285332cfd4e185e089 | zippy/benchmarks/src/benchmarks/sympy/sympy/utilities/autowrap.py | python | binary_function | (symfunc, expr, **kwargs) | return implemented_function(symfunc, binary) | Returns a sympy function with expr as binary implementation
This is a convenience function that automates the steps needed to
autowrap the SymPy expression and attaching it to a Function object
with implemented_function().
>>> from sympy.abc import x, y
>>> from sympy.utilities.autowrap import binary_function
>>> expr = ((x - y)**(25)).expand()
>>> f = binary_function('f', expr)
>>> type(f)
<class 'sympy.core.function.UndefinedFunction'>
>>> 2*f(x, y)
2*f(x, y)
>>> f(x, y).evalf(2, subs={x: 1, y: 2})
-1.0 | Returns a sympy function with expr as binary implementation | [
"Returns",
"a",
"sympy",
"function",
"with",
"expr",
"as",
"binary",
"implementation"
] | def binary_function(symfunc, expr, **kwargs):
"""Returns a sympy function with expr as binary implementation
This is a convenience function that automates the steps needed to
autowrap the SymPy expression and attaching it to a Function object
with implemented_function().
>>> from sympy.abc import x, y
>>> from sympy.utilities.autowrap import binary_function
>>> expr = ((x - y)**(25)).expand()
>>> f = binary_function('f', expr)
>>> type(f)
<class 'sympy.core.function.UndefinedFunction'>
>>> 2*f(x, y)
2*f(x, y)
>>> f(x, y).evalf(2, subs={x: 1, y: 2})
-1.0
"""
binary = autowrap(expr, **kwargs)
return implemented_function(symfunc, binary) | [
"def",
"binary_function",
"(",
"symfunc",
",",
"expr",
",",
"*",
"*",
"kwargs",
")",
":",
"binary",
"=",
"autowrap",
"(",
"expr",
",",
"*",
"*",
"kwargs",
")",
"return",
"implemented_function",
"(",
"symfunc",
",",
"binary",
")"
] | https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/benchmarks/src/benchmarks/sympy/sympy/utilities/autowrap.py#L409-L428 | |
Q2h1Cg/CMS-Exploit-Framework | 6bc54e33f316c81f97e16e10b12c7da589efbbd4 | lib/requests/sessions.py | python | Session.head | (self, url, **kwargs) | return self.request('HEAD', url, **kwargs) | Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes. | Sends a HEAD request. Returns :class:`Response` object. | [
"Sends",
"a",
"HEAD",
"request",
".",
"Returns",
":",
"class",
":",
"Response",
"object",
"."
] | def head(self, url, **kwargs):
"""Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', False)
return self.request('HEAD', url, **kwargs) | [
"def",
"head",
"(",
"self",
",",
"url",
",",
"*",
"*",
"kwargs",
")",
":",
"kwargs",
".",
"setdefault",
"(",
"'allow_redirects'",
",",
"False",
")",
"return",
"self",
".",
"request",
"(",
"'HEAD'",
",",
"url",
",",
"*",
"*",
"kwargs",
")"
] | https://github.com/Q2h1Cg/CMS-Exploit-Framework/blob/6bc54e33f316c81f97e16e10b12c7da589efbbd4/lib/requests/sessions.py#L407-L415 | |
arizvisa/ida-minsc | 8627a60f047b5e55d3efeecde332039cd1a16eea | base/instruction.py | python | size | () | return size(ui.current.address()) | Returns the length of the instruction at the current address. | Returns the length of the instruction at the current address. | [
"Returns",
"the",
"length",
"of",
"the",
"instruction",
"at",
"the",
"current",
"address",
"."
] | def size():
'''Returns the length of the instruction at the current address.'''
return size(ui.current.address()) | [
"def",
"size",
"(",
")",
":",
"return",
"size",
"(",
"ui",
".",
"current",
".",
"address",
"(",
")",
")"
] | https://github.com/arizvisa/ida-minsc/blob/8627a60f047b5e55d3efeecde332039cd1a16eea/base/instruction.py#L123-L125 | |
RasaHQ/rasa | 54823b68c1297849ba7ae841a4246193cd1223a1 | rasa/engine/recipes/recipe.py | python | Recipe.graph_config_for_recipe | (
self,
config: Dict,
cli_parameters: Dict[Text, Any],
training_type: TrainingType = TrainingType.BOTH,
is_finetuning: bool = False,
) | Converts a config to a graph compatible model configuration.
Args:
config: The config which the `Recipe` is supposed to convert.
cli_parameters: Potential CLI params which should be interpolated into the
components configs.
training_type: The current training type. Can be used to omit / add certain
parts of the graphs.
is_finetuning: If `True` then the components should load themselves from
trained version of themselves instead of using `create` to start from
scratch.
Returns:
The model configuration which enables to run the model as a graph for
training and prediction. | Converts a config to a graph compatible model configuration. | [
"Converts",
"a",
"config",
"to",
"a",
"graph",
"compatible",
"model",
"configuration",
"."
] | def graph_config_for_recipe(
self,
config: Dict,
cli_parameters: Dict[Text, Any],
training_type: TrainingType = TrainingType.BOTH,
is_finetuning: bool = False,
) -> GraphModelConfiguration:
"""Converts a config to a graph compatible model configuration.
Args:
config: The config which the `Recipe` is supposed to convert.
cli_parameters: Potential CLI params which should be interpolated into the
components configs.
training_type: The current training type. Can be used to omit / add certain
parts of the graphs.
is_finetuning: If `True` then the components should load themselves from
trained version of themselves instead of using `create` to start from
scratch.
Returns:
The model configuration which enables to run the model as a graph for
training and prediction.
"""
... | [
"def",
"graph_config_for_recipe",
"(",
"self",
",",
"config",
":",
"Dict",
",",
"cli_parameters",
":",
"Dict",
"[",
"Text",
",",
"Any",
"]",
",",
"training_type",
":",
"TrainingType",
"=",
"TrainingType",
".",
"BOTH",
",",
"is_finetuning",
":",
"bool",
"=",
... | https://github.com/RasaHQ/rasa/blob/54823b68c1297849ba7ae841a4246193cd1223a1/rasa/engine/recipes/recipe.py#L53-L76 | ||
thmoa/octopus | cb9e6b68b9d995241c3d30538d4f33740a446353 | smpl/batch_smpl.py | python | undo_chumpy | (x) | return x if isinstance(x, np.ndarray) else x.r | [] | def undo_chumpy(x):
return x if isinstance(x, np.ndarray) else x.r | [
"def",
"undo_chumpy",
"(",
"x",
")",
":",
"return",
"x",
"if",
"isinstance",
"(",
"x",
",",
"np",
".",
"ndarray",
")",
"else",
"x",
".",
"r"
] | https://github.com/thmoa/octopus/blob/cb9e6b68b9d995241c3d30538d4f33740a446353/smpl/batch_smpl.py#L56-L57 | |||
scour-project/scour | 0609c596766ec98e4e2092b49bd03b802702ba1a | scour/svg_regex.py | python | SVGPathParser.rule_coordinate | (self, next_val_fn, token) | return x, token | [] | def rule_coordinate(self, next_val_fn, token):
if token[0] not in self.number_tokens:
raise SyntaxError("expecting a number; got %r" % (token,))
x = getcontext().create_decimal(token[1])
token = next_val_fn()
return x, token | [
"def",
"rule_coordinate",
"(",
"self",
",",
"next_val_fn",
",",
"token",
")",
":",
"if",
"token",
"[",
"0",
"]",
"not",
"in",
"self",
".",
"number_tokens",
":",
"raise",
"SyntaxError",
"(",
"\"expecting a number; got %r\"",
"%",
"(",
"token",
",",
")",
")"... | https://github.com/scour-project/scour/blob/0609c596766ec98e4e2092b49bd03b802702ba1a/scour/svg_regex.py#L282-L287 | |||
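`rule_coordinate` above expects a `(type, text)` token pair and a `next_val_fn` callable that yields the next token. A minimal sketch of that protocol — the token type names and the module-level `number_tokens` set are illustrative stand-ins for the parser's real `self.number_tokens`:

```python
# Sketch of the token protocol rule_coordinate expects. Numbers are parsed
# with the current decimal context, as in the original.
from decimal import Decimal, getcontext

number_tokens = {'int', 'float'}  # stand-in for SVGPathParser.number_tokens


def rule_coordinate(next_val_fn, token):
    if token[0] not in number_tokens:
        raise SyntaxError("expecting a number; got %r" % (token,))
    x = getcontext().create_decimal(token[1])
    token = next_val_fn()  # advance to the next token
    return x, token


tokens = iter([('float', '3.5'), ('int', '7'), ('end', '')])
x, nxt = rule_coordinate(lambda: next(tokens), next(tokens))
print(x, nxt)  # 3.5 ('int', '7')
```

The rule consumes one coordinate and hands the lookahead token back to the caller, which is how the surrounding grammar rules chain together.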
minerllabs/minerl | 0123527c334c96ebb3f0cf313df1552fa4302691 | minerl/herobraine/hero/handlers/agent/action.py | python | Action.__init__ | (self, command: str, space: spaces.MineRLSpace) | Initializes the space of the handler with a gym.spaces.Dict
of all of the spaces for each individual command. | Initializes the space of the handler with a gym.spaces.Dict
of all of the spaces for each individual command. | [
"Initializes",
"the",
"space",
"of",
"the",
"handler",
"with",
"a",
"gym",
".",
"spaces",
".",
"Dict",
"of",
"all",
"of",
"the",
"spaces",
"for",
"each",
"individual",
"command",
"."
] | def __init__(self, command: str, space: spaces.MineRLSpace):
"""
Initializes the space of the handler with a gym.spaces.Dict
of all of the spaces for each individual command.
"""
self._command = command
super().__init__(space) | [
"def",
"__init__",
"(",
"self",
",",
"command",
":",
"str",
",",
"space",
":",
"spaces",
".",
"MineRLSpace",
")",
":",
"self",
".",
"_command",
"=",
"command",
"super",
"(",
")",
".",
"__init__",
"(",
"space",
")"
] | https://github.com/minerllabs/minerl/blob/0123527c334c96ebb3f0cf313df1552fa4302691/minerl/herobraine/hero/handlers/agent/action.py#L23-L29 | ||
xonsh/xonsh | b76d6f994f22a4078f602f8b386f4ec280c8461f | xonsh/parsers/base.py | python | BaseParser.p_subproc_atoms_single | (self, p) | subproc_atoms : subproc_atom | subproc_atoms : subproc_atom | [
"subproc_atoms",
":",
"subproc_atom"
] | def p_subproc_atoms_single(self, p):
"""subproc_atoms : subproc_atom"""
p[0] = [p[1]] | [
"def",
"p_subproc_atoms_single",
"(",
"self",
",",
"p",
")",
":",
"p",
"[",
"0",
"]",
"=",
"[",
"p",
"[",
"1",
"]",
"]"
] | https://github.com/xonsh/xonsh/blob/b76d6f994f22a4078f602f8b386f4ec280c8461f/xonsh/parsers/base.py#L3130-L3132 | ||
WenmuZhou/DBNet.pytorch | 678b2ae55e018c6c16d5ac182558517a154a91ed | data_loader/modules/make_border_map.py | python | MakeBorderMap.draw_border_map | (self, polygon, canvas, mask) | [] | def draw_border_map(self, polygon, canvas, mask):
polygon = np.array(polygon)
assert polygon.ndim == 2
assert polygon.shape[1] == 2
polygon_shape = Polygon(polygon)
if polygon_shape.area <= 0:
return
distance = polygon_shape.area * (1 - np.power(self.shrink_ratio, 2)) / polygon_shape.length
subject = [tuple(l) for l in polygon]
padding = pyclipper.PyclipperOffset()
padding.AddPath(subject, pyclipper.JT_ROUND,
pyclipper.ET_CLOSEDPOLYGON)
padded_polygon = np.array(padding.Execute(distance)[0])
cv2.fillPoly(mask, [padded_polygon.astype(np.int32)], 1.0)
xmin = padded_polygon[:, 0].min()
xmax = padded_polygon[:, 0].max()
ymin = padded_polygon[:, 1].min()
ymax = padded_polygon[:, 1].max()
width = xmax - xmin + 1
height = ymax - ymin + 1
polygon[:, 0] = polygon[:, 0] - xmin
polygon[:, 1] = polygon[:, 1] - ymin
xs = np.broadcast_to(
np.linspace(0, width - 1, num=width).reshape(1, width), (height, width))
ys = np.broadcast_to(
np.linspace(0, height - 1, num=height).reshape(height, 1), (height, width))
distance_map = np.zeros(
(polygon.shape[0], height, width), dtype=np.float32)
for i in range(polygon.shape[0]):
j = (i + 1) % polygon.shape[0]
absolute_distance = self.distance(xs, ys, polygon[i], polygon[j])
distance_map[i] = np.clip(absolute_distance / distance, 0, 1)
distance_map = distance_map.min(axis=0)
xmin_valid = min(max(0, xmin), canvas.shape[1] - 1)
xmax_valid = min(max(0, xmax), canvas.shape[1] - 1)
ymin_valid = min(max(0, ymin), canvas.shape[0] - 1)
ymax_valid = min(max(0, ymax), canvas.shape[0] - 1)
canvas[ymin_valid:ymax_valid + 1, xmin_valid:xmax_valid + 1] = np.fmax(
1 - distance_map[
ymin_valid - ymin:ymax_valid - ymax + height,
xmin_valid - xmin:xmax_valid - xmax + width],
canvas[ymin_valid:ymax_valid + 1, xmin_valid:xmax_valid + 1]) | [
"def",
"draw_border_map",
"(",
"self",
",",
"polygon",
",",
"canvas",
",",
"mask",
")",
":",
"polygon",
"=",
"np",
".",
"array",
"(",
"polygon",
")",
"assert",
"polygon",
".",
"ndim",
"==",
"2",
"assert",
"polygon",
".",
"shape",
"[",
"1",
"]",
"==",... | https://github.com/WenmuZhou/DBNet.pytorch/blob/678b2ae55e018c6c16d5ac182558517a154a91ed/data_loader/modules/make_border_map.py#L37-L85 | ||||
ywangd/stash | 773d15b8fb3853a65c15fe160bf5584c99437170 | system/shhistory.py | python | ShHistory.add | (self, line, always=False) | Add a line to the history.
:param line: line to add to history
:type line: str
:param always: always add this line, regardless of config
:type always: bool | Add a line to the history.
:param line: line to add to history
:type line: str
:param always: always add this line, regardless of config
:type always: bool | [
"Add",
"a",
"line",
"to",
"the",
"history",
".",
":",
"param",
"line",
":",
"line",
"to",
"add",
"to",
"history",
":",
"type",
"line",
":",
"str",
":",
"param",
"always",
":",
"always",
"add",
"this",
"line",
"regardless",
"of",
"config",
":",
"type"... | def add(self, line, always=False):
"""
Add a line to the history.
:param line: line to add to history
:type line: str
:param always: always add this line, regardless of config
:type always: bool
"""
if self._current not in self._histories:
self._histories[self._current] = []
stripped = line.strip()
last_line = (self._histories[self._current][-1] if len(self._histories[self._current]) > 0 else None)
if not always:
# check if this line should be added
if stripped == last_line and not self.allow_double:
# prevent double lines
return
if line.startswith(" ") and self.hide_whitespace:
# hide lines starting with a whitespace
return
self._histories[self._current].append(stripped)
# ensure maxsize
while len(self._histories[self._current]) > max(0, self.maxsize):
self._histories[self._current].pop(0)
# reset index
self.reset_idx() | [
"def",
"add",
"(",
"self",
",",
"line",
",",
"always",
"=",
"False",
")",
":",
"if",
"self",
".",
"_current",
"not",
"in",
"self",
".",
"_histories",
":",
"self",
".",
"_histories",
"[",
"self",
".",
"_current",
"]",
"=",
"[",
"]",
"stripped",
"=",... | https://github.com/ywangd/stash/blob/773d15b8fb3853a65c15fe160bf5584c99437170/system/shhistory.py#L100-L126 | ||
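The gating in `ShHistory.add` reads several instance attributes; a stripped-down stand-in (attribute names kept, persistence and `reset_idx` stubbed out — `MiniHistory` is not part of the original codebase) demonstrates the dedup, whitespace-hiding, and maxsize behaviour:

```python
# Stripped-down stand-in for ShHistory: only the attributes add() reads
# are kept; the add() body mirrors the row above.

class MiniHistory:
    def __init__(self, maxsize=3, allow_double=False, hide_whitespace=True):
        self.maxsize = maxsize
        self.allow_double = allow_double
        self.hide_whitespace = hide_whitespace
        self._histories = {}
        self._current = 'default'

    def reset_idx(self):
        pass  # index handling elided in this sketch

    def add(self, line, always=False):
        if self._current not in self._histories:
            self._histories[self._current] = []
        stripped = line.strip()
        hist = self._histories[self._current]
        last_line = hist[-1] if hist else None
        if not always:
            if stripped == last_line and not self.allow_double:
                return  # prevent double lines
            if line.startswith(" ") and self.hide_whitespace:
                return  # hide lines starting with a whitespace
        hist.append(stripped)
        while len(hist) > max(0, self.maxsize):
            hist.pop(0)  # ensure maxsize
        self.reset_idx()


h = MiniHistory(maxsize=3)
for line in ['ls', 'ls', ' secret', 'pwd', 'cd /', 'echo hi']:
    h.add(line)
print(h._histories['default'])  # ['pwd', 'cd /', 'echo hi']
```

The duplicate `'ls'` and the leading-space `' secret'` are skipped, and the oldest entry is evicted once the cap of three is exceeded.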
sfu-db/dataprep | 6dfb9c659e8bf73f07978ae195d0372495c6f118 | dataprep/clean/clean_date.py | python | _transform_timezone | (
result_str: str,
timezone_token: str,
timezone: str,
utc_add: str,
utc_offset_hours: int,
utc_offset_minutes: int,
) | return result | This function transforms the parsed timezone into the target format
Parameters
----------
result_str
result string
timezone_token
token of timezone in target format
timezone
value of timezone string
tz_info
information of the timezone, including offset hours and minutes relative to UTC | This function transforms the parsed timezone into the target format
Parameters
----------
result_str
result string
timezone_token
token of timezone in target format
timezone
value of timezone string
tz_info
information of the timezone, including offset hours and minutes relative to UTC | [
"This",
"function",
"transform",
"parsed",
"month",
"into",
"target",
"format",
"Parameters",
"----------",
"result_str",
"result",
"string",
"timezone_token",
"token",
"of",
"timezone",
"in",
"target",
"format",
"timezone",
"value",
"of",
"timezone",
"string",
"tz_... | def _transform_timezone(
result_str: str,
timezone_token: str,
timezone: str,
utc_add: str,
utc_offset_hours: int,
utc_offset_minutes: int,
) -> str:
"""
This function transform parsed month into target format
Parameters
----------
result_str
result string
timezone_token
token of timezone in target format
timezone
value of timezone string
tz_info
information of the timezone, including offset hours and minutes relative to UTC
"""
# pylint: disable=too-many-arguments
result = deepcopy(result_str)
if timezone_token != "":
if timezone_token == "z":
result = result.replace(timezone_token, timezone)
elif timezone_token == "Z":
offset_hours_str = str(int(utc_offset_hours))
if len(offset_hours_str) == 1:
offset_hours_str = f"{0}{offset_hours_str}"
offset_minutes_str = str(int(utc_offset_minutes))
if len(offset_minutes_str) == 1:
offset_minutes_str = f"{0}{offset_minutes_str}"
result = result.replace(
timezone_token, f"UTC{utc_add}{offset_hours_str}:{offset_minutes_str}"
)
return result | [
"def",
"_transform_timezone",
"(",
"result_str",
":",
"str",
",",
"timezone_token",
":",
"str",
",",
"timezone",
":",
"str",
",",
"utc_add",
":",
"str",
",",
"utc_offset_hours",
":",
"int",
",",
"utc_offset_minutes",
":",
"int",
",",
")",
"->",
"str",
":",... | https://github.com/sfu-db/dataprep/blob/6dfb9c659e8bf73f07978ae195d0372495c6f118/dataprep/clean/clean_date.py#L1046-L1082 | |
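`_transform_timezone` above is stdlib-only, so it can be reproduced and run directly; the sample timestamps are invented:

```python
# Reproduction of _transform_timezone from the row above. A "z" token is
# substituted with the timezone name; a "Z" token with a zero-padded
# UTC offset built from the hour/minute components.
from copy import deepcopy


def _transform_timezone(result_str, timezone_token, timezone, utc_add,
                        utc_offset_hours, utc_offset_minutes):
    result = deepcopy(result_str)
    if timezone_token != "":
        if timezone_token == "z":
            result = result.replace(timezone_token, timezone)
        elif timezone_token == "Z":
            offset_hours_str = str(int(utc_offset_hours))
            if len(offset_hours_str) == 1:
                offset_hours_str = f"{0}{offset_hours_str}"  # zero-pad hours
            offset_minutes_str = str(int(utc_offset_minutes))
            if len(offset_minutes_str) == 1:
                offset_minutes_str = f"{0}{offset_minutes_str}"  # zero-pad minutes
            result = result.replace(
                timezone_token, f"UTC{utc_add}{offset_hours_str}:{offset_minutes_str}"
            )
    return result


stamped = _transform_timezone("10:00 Z", "Z", "", "+", 5, 30)
named = _transform_timezone("10:00 z", "z", "EST", "+", 0, 0)
print(stamped, "|", named)  # 10:00 UTC+05:30 | 10:00 EST
```

Note that the substitution uses plain `str.replace`, so a literal `z` or `Z` elsewhere in the formatted string would also be replaced.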
rhinstaller/anaconda | 63edc8680f1b05cbfe11bef28703acba808c5174 | pyanaconda/modules/boss/install_manager/install_manager.py | python | InstallManager.collect_requirements | (self) | return requirements | Collect requirements of the modules.
:return: a list of requirements | Collect requirements of the modules. | [
"Collect",
"requirements",
"of",
"the",
"modules",
"."
] | def collect_requirements(self):
"""Collect requirements of the modules.
:return: a list of requirements
"""
requirements = []
for observer in self._module_observers:
if not observer.is_service_available:
log.warning("Module %s not available!", observer.service_name)
continue
module_name = observer.service_name
module_requirements = Requirement.from_structure_list(
observer.proxy.CollectRequirements()
)
log.debug("Module %s requires: %s", module_name, module_requirements)
requirements.extend(module_requirements)
return requirements | [
"def",
"collect_requirements",
"(",
"self",
")",
":",
"requirements",
"=",
"[",
"]",
"for",
"observer",
"in",
"self",
".",
"_module_observers",
":",
"if",
"not",
"observer",
".",
"is_service_available",
":",
"log",
".",
"warning",
"(",
"\"Module %s not available... | https://github.com/rhinstaller/anaconda/blob/63edc8680f1b05cbfe11bef28703acba808c5174/pyanaconda/modules/boss/install_manager/install_manager.py#L42-L62 | |
pytransitions/transitions | 9663094f4566c016b11563e7a7d6d3802593845c | transitions/extensions/locking.py | python | LockedMachine.remove_model | (self, model) | return _super(LockedMachine, self).remove_model(models) | Extends `transitions.core.Machine.remove_model` by removing model specific context maps
from the machine when the model itself is removed. | Extends `transitions.core.Machine.remove_model` by removing model specific context maps
from the machine when the model itself is removed. | [
"Extends",
"transitions",
".",
"core",
".",
"Machine",
".",
"remove_model",
"by",
"removing",
"model",
"specific",
"context",
"maps",
"from",
"the",
"machine",
"when",
"the",
"model",
"itself",
"is",
"removed",
"."
] | def remove_model(self, model):
""" Extends `transitions.core.Machine.remove_model` by removing model specific context maps
from the machine when the model itself is removed. """
models = listify(model)
for mod in models:
del self.model_context_map[id(mod)]
return _super(LockedMachine, self).remove_model(models) | [
"def",
"remove_model",
"(",
"self",
",",
"model",
")",
":",
"models",
"=",
"listify",
"(",
"model",
")",
"for",
"mod",
"in",
"models",
":",
"del",
"self",
".",
"model_context_map",
"[",
"id",
"(",
"mod",
")",
"]",
"return",
"_super",
"(",
"LockedMachin... | https://github.com/pytransitions/transitions/blob/9663094f4566c016b11563e7a7d6d3802593845c/transitions/extensions/locking.py#L154-L162 | |
user-cont/conu | 0d8962560f6f7f17fe1be0d434a4809e2a0ea51d | conu/backend/origin/backend.py | python | OpenshiftBackend.import_image | (self, imported_image_name, image_name) | return imported_image_name | Import image using `oc import-image` command.
:param imported_image_name: str, short name of an image in internal registry, example:
- hello-openshift:latest
:param image_name: full repository name, example:
- docker.io/openshift/hello-openshift:latest
:return: str, short name in internal registry | Import image using `oc import-image` command. | [
"Import",
"image",
"using",
"oc",
"import",
"-",
"image",
"command",
"."
] | def import_image(self, imported_image_name, image_name):
"""Import image using `oc import-image` command.
:param imported_image_name: str, short name of an image in internal registry, example:
- hello-openshift:latest
:param image_name: full repository name, example:
- docker.io/openshift/hello-openshift:latest
:return: str, short name in internal registry
"""
c = self._oc_command(["import-image", imported_image_name,
"--from=%s" % image_name, "--confirm"])
logger.info("Importing image from: %s, as: %s", image_name, imported_image_name)
try:
o = run_cmd(c, return_output=True, ignore_status=True)
logger.debug(o)
except subprocess.CalledProcessError as ex:
raise ConuException("oc import-image failed: %s" % ex)
return imported_image_name | [
"def",
"import_image",
"(",
"self",
",",
"imported_image_name",
",",
"image_name",
")",
":",
"c",
"=",
"self",
".",
"_oc_command",
"(",
"[",
"\"import-image\"",
",",
"imported_image_name",
",",
"\"--from=%s\"",
"%",
"image_name",
",",
"\"--confirm\"",
"]",
")",
... | https://github.com/user-cont/conu/blob/0d8962560f6f7f17fe1be0d434a4809e2a0ea51d/conu/backend/origin/backend.py#L233-L254 | |
TencentCloud/tencentcloud-sdk-python | 3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2 | tencentcloud/cls/v20201016/models.py | python | DescribeExportsRequest.__init__ | (self) | r"""
:param TopicId: Log topic ID
:type TopicId: str
:param Offset: Pagination offset; defaults to 0
:type Offset: int
:param Limit: Page size limit; defaults to 20, maximum 100
:type Limit: int | r"""
:param TopicId: Log topic ID
:type TopicId: str
:param Offset: Pagination offset; defaults to 0
:type Offset: int
:param Limit: Page size limit; defaults to 20, maximum 100
:type Limit: int | [
"r",
":",
"param",
"TopicId",
":",
"日志主题ID",
":",
"type",
"TopicId",
":",
"str",
":",
"param",
"Offset",
":",
"分页的偏移量,默认值为0",
":",
"type",
"Offset",
":",
"int",
":",
"param",
"Limit",
":",
"分页单页限制数目,默认值为20,最大值100",
":",
"type",
"Limit",
":",
"int"
] | def __init__(self):
r"""
:param TopicId: Log topic ID
:type TopicId: str
:param Offset: Pagination offset; defaults to 0
:type Offset: int
:param Limit: Page size limit; defaults to 20, maximum 100
:type Limit: int
"""
self.TopicId = None
self.Offset = None
self.Limit = None | [
"def",
"__init__",
"(",
"self",
")",
":",
"self",
".",
"TopicId",
"=",
"None",
"self",
".",
"Offset",
"=",
"None",
"self",
".",
"Limit",
"=",
"None"
] | https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/cls/v20201016/models.py#L2241-L2252 | ||
IronLanguages/ironpython2 | 51fdedeeda15727717fb8268a805f71b06c0b9f1 | Src/StdLib/Lib/Cookie.py | python | BaseCookie.js_output | (self, attrs=None) | return _nulljoin(result) | Return a string suitable for JavaScript. | Return a string suitable for JavaScript. | [
"Return",
"a",
"string",
"suitable",
"for",
"JavaScript",
"."
] | def js_output(self, attrs=None):
"""Return a string suitable for JavaScript."""
result = []
items = self.items()
items.sort()
for K,V in items:
result.append( V.js_output(attrs) )
return _nulljoin(result) | [
"def",
"js_output",
"(",
"self",
",",
"attrs",
"=",
"None",
")",
":",
"result",
"=",
"[",
"]",
"items",
"=",
"self",
".",
"items",
"(",
")",
"items",
".",
"sort",
"(",
")",
"for",
"K",
",",
"V",
"in",
"items",
":",
"result",
".",
"append",
"(",... | https://github.com/IronLanguages/ironpython2/blob/51fdedeeda15727717fb8268a805f71b06c0b9f1/Src/StdLib/Lib/Cookie.py#L623-L630 | |
replit-archive/empythoned | 977ec10ced29a3541a4973dc2b59910805695752 | cpython/Lib/idlelib/configHandler.py | python | IdleConf.GetHighlight | (self, theme, element, fgBg=None) | return individual highlighting theme elements.
fgBg - string ('fg'or'bg') or None, if None return a dictionary
containing fg and bg colours (appropriate for passing to Tkinter in,
e.g., a tag_config call), otherwise fg or bg colour only as specified. | return individual highlighting theme elements.
fgBg - string ('fg'or'bg') or None, if None return a dictionary
containing fg and bg colours (appropriate for passing to Tkinter in,
e.g., a tag_config call), otherwise fg or bg colour only as specified. | [
"return",
"individual",
"highlighting",
"theme",
"elements",
".",
"fgBg",
"-",
"string",
"(",
"fg",
"or",
"bg",
")",
"or",
"None",
"if",
"None",
"return",
"a",
"dictionary",
"containing",
"fg",
"and",
"bg",
"colours",
"(",
"appropriate",
"for",
"passing",
... | def GetHighlight(self, theme, element, fgBg=None):
"""
return individual highlighting theme elements.
fgBg - string ('fg'or'bg') or None, if None return a dictionary
containing fg and bg colours (appropriate for passing to Tkinter in,
e.g., a tag_config call), otherwise fg or bg colour only as specified.
"""
if self.defaultCfg['highlight'].has_section(theme):
themeDict=self.GetThemeDict('default',theme)
else:
themeDict=self.GetThemeDict('user',theme)
fore=themeDict[element+'-foreground']
if element=='cursor': #there is no config value for cursor bg
back=themeDict['normal-background']
else:
back=themeDict[element+'-background']
highlight={"foreground": fore,"background": back}
if not fgBg: #return dict of both colours
return highlight
else: #return specified colour only
if fgBg == 'fg':
return highlight["foreground"]
if fgBg == 'bg':
return highlight["background"]
else:
raise InvalidFgBg, 'Invalid fgBg specified' | [
"def",
"GetHighlight",
"(",
"self",
",",
"theme",
",",
"element",
",",
"fgBg",
"=",
"None",
")",
":",
"if",
"self",
".",
"defaultCfg",
"[",
"'highlight'",
"]",
".",
"has_section",
"(",
"theme",
")",
":",
"themeDict",
"=",
"self",
".",
"GetThemeDict",
"... | https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/cpython/Lib/idlelib/configHandler.py#L282-L307 | ||
dimagi/commcare-hq | d67ff1d3b4c51fa050c19e60c3253a79d3452a39 | corehq/apps/app_manager/models.py | python | CachedStringProperty.__init__ | (self, key) | [] | def __init__(self, key):
self.get_key = key | [
"def",
"__init__",
"(",
"self",
",",
"key",
")",
":",
"self",
".",
"get_key",
"=",
"key"
] | https://github.com/dimagi/commcare-hq/blob/d67ff1d3b4c51fa050c19e60c3253a79d3452a39/corehq/apps/app_manager/models.py#L745-L746 | ||||
facebookresearch/pyrobot | 27ffd64bbb7ce3ff6ec4b2122d84b438d5641d0f | src/pyrobot/sawyer/arm.py | python | SawyerArm._pub_joint_torques | (self, torques) | [] | def _pub_joint_torques(self, torques):
command_msg = JointCommand()
command_msg.names = self.arm_joint_names
command_msg.effort = torques
command_msg.mode = JointCommand.TORQUE_MODE
command_msg.header.stamp = rospy.Time.now()
self.joint_pub.publish(command_msg) | [
"def",
"_pub_joint_torques",
"(",
"self",
",",
"torques",
")",
":",
"command_msg",
"=",
"JointCommand",
"(",
")",
"command_msg",
".",
"names",
"=",
"self",
".",
"arm_joint_names",
"command_msg",
".",
"effort",
"=",
"torques",
"command_msg",
".",
"mode",
"=",
... | https://github.com/facebookresearch/pyrobot/blob/27ffd64bbb7ce3ff6ec4b2122d84b438d5641d0f/src/pyrobot/sawyer/arm.py#L89-L95 | ||||
jchanvfx/NodeGraphQt | 8b810ef469f839176f9c26bdd6496ff34d9b64a2 | NodeGraphQt/widgets/properties.py | python | NodePropWidget._on_close | (self) | called by the close button. | called by the close button. | [
"called",
"by",
"the",
"close",
"button",
"."
] | def _on_close(self):
"""
called by the close button.
"""
self.property_closed.emit(self.__node_id) | [
"def",
"_on_close",
"(",
"self",
")",
":",
"self",
".",
"property_closed",
".",
"emit",
"(",
"self",
".",
"__node_id",
")"
] | https://github.com/jchanvfx/NodeGraphQt/blob/8b810ef469f839176f9c26bdd6496ff34d9b64a2/NodeGraphQt/widgets/properties.py#L861-L865 | ||
iclavera/learning_to_adapt | bd7d99ba402521c96631e7d09714128f549db0f1 | learning_to_adapt/mujoco_py/mjtypes.py | python | MjDataWrapper.efc_margin | (self) | return arr | [] | def efc_margin(self):
arr = np.reshape(np.fromiter(self._wrapped.contents.efc_margin, dtype=np.double, count=(self._size_src.njmax*1)), (self._size_src.njmax, 1, ))
arr.setflags(write=False)
return arr | [
"def",
"efc_margin",
"(",
"self",
")",
":",
"arr",
"=",
"np",
".",
"reshape",
"(",
"np",
".",
"fromiter",
"(",
"self",
".",
"_wrapped",
".",
"contents",
".",
"efc_margin",
",",
"dtype",
"=",
"np",
".",
"double",
",",
"count",
"=",
"(",
"self",
".",... | https://github.com/iclavera/learning_to_adapt/blob/bd7d99ba402521c96631e7d09714128f549db0f1/learning_to_adapt/mujoco_py/mjtypes.py#L2932-L2935 | |||
tensorflow/neural-structured-learning | a43fcfca1f97ecc0ee99e688e5c8bf16c8fb6629 | research/kg_hyp_emb/datasets/process.py | python | to_np_array | (dataset_file, ent2idx, rel2idx) | return np.array(examples).astype('int64') | Map raw dataset file to numpy array with unique ids.
Args:
dataset_file: Path to file containing raw triples in a split.
ent2idx: Dictionary mapping raw entities to unique ids.
rel2idx: Dictionary mapping raw relations to unique ids.
Returns:
Numpy array of size n_examples x 3 mapping the raw dataset file to ids. | Map raw dataset file to numpy array with unique ids. | [
"Map",
"raw",
"dataset",
"file",
"to",
"numpy",
"array",
"with",
"unique",
"ids",
"."
] | def to_np_array(dataset_file, ent2idx, rel2idx):
"""Map raw dataset file to numpy array with unique ids.
Args:
dataset_file: Path to file containing raw triples in a split.
ent2idx: Dictionary mapping raw entities to unique ids.
rel2idx: Dictionary mapping raw relations to unique ids.
Returns:
Numpy array of size n_examples x 3 mapping the raw dataset file to ids.
"""
examples = []
with open(dataset_file, 'r') as lines:
for line in lines:
lhs, rel, rhs = line.strip().split('\t')
try:
examples.append([ent2idx[lhs], rel2idx[rel], ent2idx[rhs]])
except ValueError:
continue
return np.array(examples).astype('int64') | [
"def",
"to_np_array",
"(",
"dataset_file",
",",
"ent2idx",
",",
"rel2idx",
")",
":",
"examples",
"=",
"[",
"]",
"with",
"open",
"(",
"dataset_file",
",",
"'r'",
")",
"as",
"lines",
":",
"for",
"line",
"in",
"lines",
":",
"lhs",
",",
"rel",
",",
"rhs"... | https://github.com/tensorflow/neural-structured-learning/blob/a43fcfca1f97ecc0ee99e688e5c8bf16c8fb6629/research/kg_hyp_emb/datasets/process.py#L51-L70 | |
tp4a/teleport | 1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad | server/www/packages/packages-windows/x86/mako/cache.py | python | Cache._get_cache_kw | (self, kw, context) | return tmpl_kw | [] | def _get_cache_kw(self, kw, context):
defname = kw.pop("__M_defname", None)
if not defname:
tmpl_kw = self.template.cache_args.copy()
tmpl_kw.update(kw)
elif defname in self._def_regions:
tmpl_kw = self._def_regions[defname]
else:
tmpl_kw = self.template.cache_args.copy()
tmpl_kw.update(kw)
self._def_regions[defname] = tmpl_kw
if context and self.impl.pass_context:
tmpl_kw = tmpl_kw.copy()
tmpl_kw.setdefault("context", context)
return tmpl_kw | [
"def",
"_get_cache_kw",
"(",
"self",
",",
"kw",
",",
"context",
")",
":",
"defname",
"=",
"kw",
".",
"pop",
"(",
"\"__M_defname\"",
",",
"None",
")",
"if",
"not",
"defname",
":",
"tmpl_kw",
"=",
"self",
".",
"template",
".",
"cache_args",
".",
"copy",
... | https://github.com/tp4a/teleport/blob/1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad/server/www/packages/packages-windows/x86/mako/cache.py#L167-L181 | |||
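`_get_cache_kw` above merges template-level cache defaults with per-call overrides and memoizes the merged dict per `defname` (the `pass_context` branch is omitted here). The memoization pattern can be sketched generically; all names below are invented for illustration:

```python
def make_kw_resolver(defaults):
    regions = {}  # defname -> merged kwargs, computed once

    def resolve(kw, defname=None):
        if not defname:
            merged = defaults.copy()
            merged.update(kw)          # anonymous call: merge every time
        elif defname in regions:
            merged = regions[defname]  # named region: reuse the cached merge
        else:
            merged = defaults.copy()
            merged.update(kw)
            regions[defname] = merged
        return merged

    return resolve

resolve = make_kw_resolver({'timeout': 60, 'type': 'memory'})
first = resolve({'timeout': 10}, defname='body')
second = resolve({}, defname='body')   # overrides ignored: cached merge wins
print(first is second, first['timeout'])  # True 10
```

As in the original, the first call for a named region freezes its arguments; later calls with the same `defname` get the cached dict back.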
kubeflow-kale/kale | bda9d296822e56ba8fe76b0072e656005da04905 | backend/kale/common/utils.py | python | comment_magic_commands | (code) | return re.sub(magic_pattern, r'#\1', code.strip()) | Comment the magic commands in a code block. | Comment the magic commands in a code block. | [
"Comment",
"the",
"magic",
"commands",
"in",
"a",
"code",
"block",
"."
] | def comment_magic_commands(code):
"""Comment the magic commands in a code block."""
magic_pattern = re.compile(r'^(\s*%%?.*)$', re.MULTILINE)
return re.sub(magic_pattern, r'#\1', code.strip()) | [
"def",
"comment_magic_commands",
"(",
"code",
")",
":",
"magic_pattern",
"=",
"re",
".",
"compile",
"(",
"r'^(\\s*%%?.*)$'",
",",
"re",
".",
"MULTILINE",
")",
"return",
"re",
".",
"sub",
"(",
"magic_pattern",
",",
"r'#\\1'",
",",
"code",
".",
"strip",
"(",... | https://github.com/kubeflow-kale/kale/blob/bda9d296822e56ba8fe76b0072e656005da04905/backend/kale/common/utils.py#L101-L104 | |
dictation-toolbox/dragonfly | a2b8f8e8ed1182465b831205b9804323beea112a | dragonfly/engines/backend_natlink/__init__.py | python | get_engine | (**kwargs) | return _engine | Retrieve the Natlink back-end engine object.
:param \\**kwargs: optional keyword arguments passed through to the
engine for engine-specific configuration. | Retrieve the Natlink back-end engine object. | [
"Retrieve",
"the",
"Natlink",
"back",
"-",
"end",
"engine",
"object",
"."
] | def get_engine(**kwargs):
"""
Retrieve the Natlink back-end engine object.
:param \\**kwargs: optional keyword arguments passed through to the
engine for engine-specific configuration.
"""
global _engine
if not _engine:
from .engine import NatlinkEngine
_engine = NatlinkEngine(**kwargs)
return _engine | [
"def",
"get_engine",
"(",
"*",
"*",
"kwargs",
")",
":",
"global",
"_engine",
"if",
"not",
"_engine",
":",
"from",
".",
"engine",
"import",
"NatlinkEngine",
"_engine",
"=",
"NatlinkEngine",
"(",
"*",
"*",
"kwargs",
")",
"return",
"_engine"
] | https://github.com/dictation-toolbox/dragonfly/blob/a2b8f8e8ed1182465b831205b9804323beea112a/dragonfly/engines/backend_natlink/__init__.py#L111-L122 | |
cogitas3d/OrtogOnBlender | 881e93f5beb2263e44c270974dd0e81deca44762 | BooleanaOsteo.py | python | BooleanaOsteoInter.execute | (self, context) | return {'FINISHED'} | [] | def execute(self, context):
BooleanaOsteoInterDef(self, context)
# bpy.ops.object.collection_link(collection='Collection')
return {'FINISHED'} | [
"def",
"execute",
"(",
"self",
",",
"context",
")",
":",
"BooleanaOsteoInterDef",
"(",
"self",
",",
"context",
")",
"# bpy.ops.object.collection_link(collection='Collection')",
"return",
"{",
"'FINISHED'",
"}"
] | https://github.com/cogitas3d/OrtogOnBlender/blob/881e93f5beb2263e44c270974dd0e81deca44762/BooleanaOsteo.py#L946-L949 | |||
Khan/gae_mini_profiler | 275e6e67c751b621f1e65c24c9a8a15631799fa4 | profiler.py | python | RequestProfiler.appstats_results | (self) | return results | Return the RPC profiler (appstats) results for this request, if any.
This will return a dictionary containing results from appstats or an
empty result set if appstats profiling is disabled. | Return the RPC profiler (appstats) results for this request, if any. | [
"Return",
"the",
"RPC",
"profiler",
"(",
"appstats",
")",
"results",
"for",
"this",
"request",
"if",
"any",
"."
] | def appstats_results(self):
"""Return the RPC profiler (appstats) results for this request, if any.
This will return a dictionary containing results from appstats or an
empty result set if appstats profiling is disabled."""
results = {
"calls": [],
"total_time": 0,
}
if self.appstats_prof:
results.update(self.appstats_prof.results())
return results | [
"def",
"appstats_results",
"(",
"self",
")",
":",
"results",
"=",
"{",
"\"calls\"",
":",
"[",
"]",
",",
"\"total_time\"",
":",
"0",
",",
"}",
"if",
"self",
".",
"appstats_prof",
":",
"results",
".",
"update",
"(",
"self",
".",
"appstats_prof",
".",
"re... | https://github.com/Khan/gae_mini_profiler/blob/275e6e67c751b621f1e65c24c9a8a15631799fa4/profiler.py#L441-L455 | |
softlayer/softlayer-python | cdef7d63c66413197a9a97b0414de9f95887a82a | SoftLayer/CLI/firewall/cancel.py | python | cli | (env, identifier) | Cancels a firewall. | Cancels a firewall. | [
"Cancels",
"a",
"firewall",
"."
] | def cli(env, identifier):
"""Cancels a firewall."""
mgr = SoftLayer.FirewallManager(env.client)
firewall_type, firewall_id = firewall.parse_id(identifier)
if not (env.skip_confirmations or
formatting.confirm("This action will cancel a firewall from your "
"account. Continue?")):
raise exceptions.CLIAbort('Aborted.')
if firewall_type in ['vs', 'server']:
mgr.cancel_firewall(firewall_id, dedicated=False)
elif firewall_type == 'vlan':
mgr.cancel_firewall(firewall_id, dedicated=True)
else:
raise exceptions.CLIAbort('Unknown firewall type: %s' % firewall_type)
env.fout('Firewall with id %s is being cancelled!' % identifier) | [
"def",
"cli",
"(",
"env",
",",
"identifier",
")",
":",
"mgr",
"=",
"SoftLayer",
".",
"FirewallManager",
"(",
"env",
".",
"client",
")",
"firewall_type",
",",
"firewall_id",
"=",
"firewall",
".",
"parse_id",
"(",
"identifier",
")",
"if",
"not",
"(",
"env"... | https://github.com/softlayer/softlayer-python/blob/cdef7d63c66413197a9a97b0414de9f95887a82a/SoftLayer/CLI/firewall/cancel.py#L16-L34 | ||
chengzhengxin/groupsoftmax-simpledet | 3f63a00998c57fee25241cf43a2e8600893ea462 | config/resnet_v1b/tridentnet_r152v1bc4_c5_1x.py | python | get_config | (is_train) | return General, KvstoreParam, RpnParam, RoiParam, BboxParam, DatasetParam, \
ModelParam, OptimizeParam, TestParam, \
transform, data_name, label_name, metric_list | [] | def get_config(is_train):
class General:
log_frequency = 10
name = __name__.rsplit("/")[-1].rsplit(".")[-1]
batch_image = 1 if is_train else 1
fp16 = False
class Trident:
num_branch = 3
train_scaleaware = True
test_scaleaware = True
branch_ids = range(num_branch)
branch_dilates = [1, 2, 3]
valid_ranges = [(0, 90), (30, 160), (90, -1)]
valid_ranges_on_origin = True
branch_bn_shared = True
branch_conv_shared = True
branch_deform = False
assert num_branch == len(branch_ids)
assert num_branch == len(valid_ranges)
class KvstoreParam:
kvstore = "local"
batch_image = General.batch_image
gpus = [0, 1, 2, 3, 4, 5, 6, 7]
fp16 = General.fp16
class NormalizeParam:
# normalizer = normalizer_factory(type="syncbn", ndev=len(KvstoreParam.gpus))
normalizer = normalizer_factory(type="fixbn")
class BackboneParam:
fp16 = General.fp16
normalizer = NormalizeParam.normalizer
depth = 152
num_branch = Trident.num_branch
branch_ids = Trident.branch_ids
branch_dilates = Trident.branch_dilates
branch_bn_shared = Trident.branch_bn_shared
branch_conv_shared = Trident.branch_conv_shared
branch_deform = Trident.branch_deform
class NeckParam:
fp16 = General.fp16
normalizer = NormalizeParam.normalizer
class RpnParam:
fp16 = General.fp16
normalizer = NormalizeParam.normalizer
batch_image = General.batch_image * Trident.num_branch
class anchor_generate:
scale = (2, 4, 8, 16, 32)
ratio = (0.5, 1.0, 2.0)
stride = 16
image_anchor = 256
class head:
conv_channel = 512
mean = (0, 0, 0, 0)
std = (1, 1, 1, 1)
class proposal:
pre_nms_top_n = 12000 if is_train else 6000
post_nms_top_n = 500 if is_train else 300
nms_thr = 0.7
min_bbox_side = 0
class subsample_proposal:
proposal_wo_gt = True
image_roi = 128
fg_fraction = 0.5
fg_thr = 0.5
bg_thr_hi = 0.5
bg_thr_lo = 0.0
class bbox_target:
num_reg_class = 2
class_agnostic = True
weight = (1.0, 1.0, 1.0, 1.0)
mean = (0.0, 0.0, 0.0, 0.0)
std = (0.1, 0.1, 0.2, 0.2)
class BboxParam:
fp16 = General.fp16
normalizer = NormalizeParam.normalizer
num_class = 1 + 80
image_roi = 128
batch_image = General.batch_image * Trident.num_branch
class regress_target:
class_agnostic = True
mean = (0.0, 0.0, 0.0, 0.0)
std = (0.1, 0.1, 0.2, 0.2)
class RoiParam:
fp16 = General.fp16
normalizer = NormalizeParam.normalizer
out_size = 7
stride = 16
class DatasetParam:
if is_train:
image_set = ("coco_train2014", "coco_valminusminival2014")
else:
image_set = ("coco_minival2014", )
backbone = Backbone(BackboneParam)
neck = Neck(NeckParam)
rpn_head = RpnHead(RpnParam)
roi_extractor = RoiExtractor(RoiParam)
bbox_head = BboxHead(BboxParam)
detector = Detector()
if is_train:
train_sym = detector.get_train_symbol(
backbone, neck, rpn_head, roi_extractor, bbox_head,
num_branch=Trident.num_branch, scaleaware=Trident.train_scaleaware)
rpn_test_sym = None
test_sym = None
else:
train_sym = None
rpn_test_sym = detector.get_rpn_test_symbol(backbone, neck, rpn_head, Trident.num_branch)
test_sym = detector.get_test_symbol(
backbone, neck, rpn_head, roi_extractor, bbox_head, num_branch=Trident.num_branch)
class ModelParam:
train_symbol = train_sym
test_symbol = test_sym
rpn_test_symbol = rpn_test_sym
from_scratch = False
random = True
memonger = False
memonger_until = "stage3_unit21_plus"
class pretrain:
prefix = "pretrain_model/resnet%s_v1b" % BackboneParam.depth
epoch = 0
fixed_param = ["conv0", "stage1", "gamma", "beta"]
class OptimizeParam:
class optimizer:
type = "sgd"
lr = 0.01 / 8 * len(KvstoreParam.gpus) * KvstoreParam.batch_image
momentum = 0.9
wd = 0.0001
clip_gradient = 5
class schedule:
begin_epoch = 0
end_epoch = 6
lr_iter = [60000 * 16 // (len(KvstoreParam.gpus) * KvstoreParam.batch_image),
80000 * 16 // (len(KvstoreParam.gpus) * KvstoreParam.batch_image)]
class warmup:
type = "gradual"
lr = 0.0
iter = 3000 * 16 // (len(KvstoreParam.gpus) * KvstoreParam.batch_image)
class TestParam:
min_det_score = 0.001
max_det_per_image = 100
process_roidb = lambda x: x
if Trident.test_scaleaware:
process_output = lambda x, y: process_branch_outputs(
x, Trident.num_branch, Trident.valid_ranges, Trident.valid_ranges_on_origin)
else:
process_output = lambda x, y: x
process_rpn_output = lambda x, y: process_branch_rpn_outputs(x, Trident.num_branch)
class model:
prefix = "experiments/{}/checkpoint".format(General.name)
epoch = OptimizeParam.schedule.end_epoch
class nms:
type = "nms"
thr = 0.5
class coco:
annotation = "data/coco/annotations/instances_minival2014.json"
# data processing
class NormParam:
mean = tuple(i * 255 for i in (0.485, 0.456, 0.406)) # RGB order
std = tuple(i * 255 for i in (0.229, 0.224, 0.225))
class ResizeParam:
short = 800
long = 1200 if is_train else 2000
class PadParam:
short = 800
long = 1200 if is_train else 2000
max_num_gt = 100
class ScaleRange:
valid_ranges = Trident.valid_ranges
cal_on_origin = Trident.valid_ranges_on_origin # True: valid_ranges on origin image scale / valid_ranges on resized image scale
class AnchorTarget2DParam:
class generate:
short = 800 // 16
long = 1200 // 16
stride = 16
scales = (2, 4, 8, 16, 32)
aspects = (0.5, 1.0, 2.0)
class assign:
allowed_border = 0
pos_thr = 0.7
neg_thr = 0.3
min_pos_thr = 0.0
class sample:
image_anchor = 256
pos_fraction = 0.5
class trident:
invalid_anchor_threshd = 0.3
class RenameParam:
mapping = dict(image="data")
from core.detection_input import ReadRoiRecord, Resize2DImageBbox, \
ConvertImageFromHwcToChw, Flip2DImageBbox, Pad2DImageBbox, \
RenameRecord, Norm2DImage
from models.tridentnet.input import ScaleAwareRange, TridentAnchorTarget2D
if is_train:
transform = [
ReadRoiRecord(None),
Norm2DImage(NormParam),
Resize2DImageBbox(ResizeParam),
Flip2DImageBbox(),
Pad2DImageBbox(PadParam),
ConvertImageFromHwcToChw(),
ScaleAwareRange(ScaleRange),
TridentAnchorTarget2D(AnchorTarget2DParam),
RenameRecord(RenameParam.mapping)
]
data_name = ["data", "im_info", "gt_bbox"]
if Trident.train_scaleaware:
data_name.append("valid_ranges")
label_name = ["rpn_cls_label", "rpn_reg_target", "rpn_reg_weight"]
else:
transform = [
ReadRoiRecord(None),
Norm2DImage(NormParam),
Resize2DImageBbox(ResizeParam),
ConvertImageFromHwcToChw(),
RenameRecord(RenameParam.mapping)
]
data_name = ["data", "im_info", "im_id", "rec_id"]
label_name = []
import core.detection_metric as metric
rpn_acc_metric = metric.AccWithIgnore(
"RpnAcc",
["rpn_cls_loss_output"],
["rpn_cls_label"]
)
rpn_l1_metric = metric.L1(
"RpnL1",
["rpn_reg_loss_output"],
["rpn_cls_label"]
)
# for bbox, the label is generated in network so it is an output
box_acc_metric = metric.AccWithIgnore(
"RcnnAcc",
["bbox_cls_loss_output", "bbox_label_blockgrad_output"],
[]
)
box_l1_metric = metric.L1(
"RcnnL1",
["bbox_reg_loss_output", "bbox_label_blockgrad_output"],
[]
)
metric_list = [rpn_acc_metric, rpn_l1_metric, box_acc_metric, box_l1_metric]
return General, KvstoreParam, RpnParam, RoiParam, BboxParam, DatasetParam, \
ModelParam, OptimizeParam, TestParam, \
transform, data_name, label_name, metric_list | [
"def",
"get_config",
"(",
"is_train",
")",
":",
"class",
"General",
":",
"log_frequency",
"=",
"10",
"name",
"=",
"__name__",
".",
"rsplit",
"(",
"\"/\"",
")",
"[",
"-",
"1",
"]",
".",
"rsplit",
"(",
"\".\"",
")",
"[",
"-",
"1",
"]",
"batch_image",
... | https://github.com/chengzhengxin/groupsoftmax-simpledet/blob/3f63a00998c57fee25241cf43a2e8600893ea462/config/resnet_v1b/tridentnet_r152v1bc4_c5_1x.py#L11-L307 | |||
Kozea/WeasyPrint | 6cce2978165134e37683cb5b3d156cac6a11a7f9 | weasyprint/svg/utils.py | python | point | (svg, string, font_size) | Pop first two size values from a string. | Pop first two size values from a string. | [
"Pop",
"first",
"two",
"size",
"values",
"from",
"a",
"string",
"."
] | def point(svg, string, font_size):
"""Pop first two size values from a string."""
match = re.match('(.*?) (.*?)(?: |$)', string)
if match:
x, y = match.group(1, 2)
string = string[match.end():]
return (*svg.point(x, y, font_size), string)
else:
raise PointError | [
"def",
"point",
"(",
"svg",
",",
"string",
",",
"font_size",
")",
":",
"match",
"=",
"re",
".",
"match",
"(",
"'(.*?) (.*?)(?: |$)'",
",",
"string",
")",
"if",
"match",
":",
"x",
",",
"y",
"=",
"match",
".",
"group",
"(",
"1",
",",
"2",
")",
"str... | https://github.com/Kozea/WeasyPrint/blob/6cce2978165134e37683cb5b3d156cac6a11a7f9/weasyprint/svg/utils.py#L62-L70 | ||
tribe29/checkmk | 6260f2512e159e311f426e16b84b19d0b8e9ad0c | cmk/gui/plugins/userdb/htpasswd.py | python | Htpasswd.load | (self) | return entries | Loads the contents of a valid htpasswd file into a dictionary and returns the dictionary | Loads the contents of a valid htpasswd file into a dictionary and returns the dictionary | [
"Loads",
"the",
"contents",
"of",
"a",
"valid",
"htpasswd",
"file",
"into",
"a",
"dictionary",
"and",
"returns",
"the",
"dictionary"
] | def load(self) -> dict[UserId, str]:
"""Loads the contents of a valid htpasswd file into a dictionary and returns the dictionary"""
entries = {}
with self._path.open(encoding="utf-8") as f:
for l in f:
if ":" not in l:
continue
user_id, pw_hash = l.split(":", 1)
entries[UserId(user_id)] = pw_hash.rstrip("\n")
return entries | [
"def",
"load",
"(",
"self",
")",
"->",
"dict",
"[",
"UserId",
",",
"str",
"]",
":",
"entries",
"=",
"{",
"}",
"with",
"self",
".",
"_path",
".",
"open",
"(",
"encoding",
"=",
"\"utf-8\"",
")",
"as",
"f",
":",
"for",
"l",
"in",
"f",
":",
"if",
... | https://github.com/tribe29/checkmk/blob/6260f2512e159e311f426e16b84b19d0b8e9ad0c/cmk/gui/plugins/userdb/htpasswd.py#L48-L60 | |
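The parsing in `Htpasswd.load` splits each line on the first colon only, so a hash that itself contains colons survives intact. A file-free sketch of that logic (the sample entries are made up):

```python
def parse_htpasswd(text):
    # user:hash per line; split on the FIRST colon only so that a hash
    # containing ':' is kept whole.
    entries = {}
    for line in text.splitlines():
        if ':' not in line:
            continue  # skip blank or malformed lines
        user_id, pw_hash = line.split(':', 1)
        entries[user_id] = pw_hash.rstrip('\n')
    return entries

sample = "alice:$apr1$abc$hash\n\nbob:plain:with:colons\n"
print(parse_htpasswd(sample))
# {'alice': '$apr1$abc$hash', 'bob': 'plain:with:colons'}
```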
andresriancho/w3af | cd22e5252243a87aaa6d0ddea47cf58dacfe00a9 | w3af/core/data/kb/info.py | python | Info.add_to_highlight | (self, *str_match) | [] | def add_to_highlight(self, *str_match):
for s in str_match:
if not isinstance(s, basestring):
raise TypeError('Only able to highlight strings.')
self._string_matches.add(s) | [
"def",
"add_to_highlight",
"(",
"self",
",",
"*",
"str_match",
")",
":",
"for",
"s",
"in",
"str_match",
":",
"if",
"not",
"isinstance",
"(",
"s",
",",
"basestring",
")",
":",
"raise",
"TypeError",
"(",
"'Only able to highlight strings.'",
")",
"self",
".",
... | https://github.com/andresriancho/w3af/blob/cd22e5252243a87aaa6d0ddea47cf58dacfe00a9/w3af/core/data/kb/info.py#L627-L632 | ||||
mu-editor/mu | 5a5d7723405db588f67718a63a0ec0ecabebae33 | mu/__main__.py | python | main | () | [] | def main():
run() | [
"def",
"main",
"(",
")",
":",
"run",
"(",
")"
] | https://github.com/mu-editor/mu/blob/5a5d7723405db588f67718a63a0ec0ecabebae33/mu/__main__.py#L4-L5 | ||||
realpython/book2-exercises | cde325eac8e6d8cff2316601c2e5b36bb46af7d0 | web2py/gluon/contrib/pypyodbc.py | python | Cursor.columns | (self, table=None, catalog=None, schema=None, column=None) | return self | Return a list with all columns | Return a list with all columns | [
"Return",
"a",
"list",
"with",
"all",
"columns"
] | def columns(self, table=None, catalog=None, schema=None, column=None):
"""Return a list with all columns"""
if not self.connection:
self.close()
l_catalog = l_schema = l_table = l_column = 0
if unicode in [type(x) for x in (table, catalog, schema,column)]:
string_p = lambda x:wchar_pointer(UCS_buf(x))
API_f = ODBC_API.SQLColumnsW
else:
string_p = ctypes.c_char_p
API_f = ODBC_API.SQLColumns
if catalog is not None:
l_catalog = len(catalog)
catalog = string_p(catalog)
if schema is not None:
l_schema = len(schema)
schema = string_p(schema)
if table is not None:
l_table = len(table)
table = string_p(table)
if column is not None:
l_column = len(column)
column = string_p(column)
self._free_stmt()
self._last_param_types = None
self.statement = None
ret = API_f(self.stmt_h,
catalog, l_catalog,
schema, l_schema,
table, l_table,
column, l_column)
check_success(self, ret)
self._NumOfRows()
self._UpdateDesc()
#self._BindCols()
return self | [
"def",
"columns",
"(",
"self",
",",
"table",
"=",
"None",
",",
"catalog",
"=",
"None",
",",
"schema",
"=",
"None",
",",
"column",
"=",
"None",
")",
":",
"if",
"not",
"self",
".",
"connection",
":",
"self",
".",
"close",
"(",
")",
"l_catalog",
"=",
... | https://github.com/realpython/book2-exercises/blob/cde325eac8e6d8cff2316601c2e5b36bb46af7d0/web2py/gluon/contrib/pypyodbc.py#L2047-L2090 | |
kupferlauncher/kupfer | 1c1e9bcbce05a82f503f68f8b3955c20b02639b3 | kupfer/pretty.py | python | OutputMixin.output_exc | (self, exc_info=None) | Output current exception, or use @exc_info if given | Output current exception, or use | [
"Output",
"current",
"exception",
"or",
"use"
] | def output_exc(self, exc_info=None):
"""Output current exception, or use @exc_info if given"""
etype, value, tb = (exc_info or sys.exc_info())
if debug:
self._output_core("Exception in ", "", "\n", sys.stderr)
traceback.print_exception(etype, value, tb, file=sys.stderr)
else:
msg = "%s: %s" % (etype.__name__, value)
self._output_core("Exception in ", " ", "\n", sys.stderr, msg) | [
"def",
"output_exc",
"(",
"self",
",",
"exc_info",
"=",
"None",
")",
":",
"etype",
",",
"value",
",",
"tb",
"=",
"(",
"exc_info",
"or",
"sys",
".",
"exc_info",
"(",
")",
")",
"if",
"debug",
":",
"self",
".",
"_output_core",
"(",
"\"Exception in \"",
... | https://github.com/kupferlauncher/kupfer/blob/1c1e9bcbce05a82f503f68f8b3955c20b02639b3/kupfer/pretty.py#L30-L38 | ||
braincorp/PVM | 3de2683634f372d2ac5aaa8b19e8ff23420d94d1 | PVM_framework/CoreUtils.py | python | run_model | (prop_dict, manager, port=9000) | Simplified way of running a model.
:param prop_dict: model dictionary
:param manager: execution manager
:param port: port for setting up a debugging server
:return: | Simplified way of running a model.
:param prop_dict: model dictionary
:param manager: execution manager
:param port: port for setting up a debugging server
:return: | [
"Simplified",
"way",
"of",
"running",
"a",
"model",
".",
":",
"param",
"prop_dict",
":",
"model",
"dictionary",
":",
"param",
"manager",
":",
"execution",
"manager",
":",
"param",
"port",
":",
"port",
"for",
"setting",
"up",
"a",
"debugging",
"server",
":"... | def run_model(prop_dict, manager, port=9000):
"""
Simplified way of running a model.
:param prop_dict: model dictionary
:param manager: execution manager
:param port: port for setting up a debugging server
:return:
"""
executor = ModelExecution(prop_dict=prop_dict, manager=manager, port=port)
executor.start(blocking=True)
executor.finish() | [
"def",
"run_model",
"(",
"prop_dict",
",",
"manager",
",",
"port",
"=",
"9000",
")",
":",
"executor",
"=",
"ModelExecution",
"(",
"prop_dict",
"=",
"prop_dict",
",",
"manager",
"=",
"manager",
",",
"port",
"=",
"port",
")",
"executor",
".",
"start",
"(",... | https://github.com/braincorp/PVM/blob/3de2683634f372d2ac5aaa8b19e8ff23420d94d1/PVM_framework/CoreUtils.py#L452-L462 | ||
awslabs/autogluon | 7309118f2ab1c9519f25acf61a283a95af95842b | core/src/autogluon/core/models/ensemble/stacker_ensemble_model.py | python | StackerEnsembleModel._set_default_params | (self) | [] | def _set_default_params(self):
default_params = {'use_orig_features': True, 'max_base_models': 25, 'max_base_models_per_type': 5}
for param, val in default_params.items():
self._set_default_param_value(param, val)
super()._set_default_params() | [
"def",
"_set_default_params",
"(",
"self",
")",
":",
"default_params",
"=",
"{",
"'use_orig_features'",
":",
"True",
",",
"'max_base_models'",
":",
"25",
",",
"'max_base_models_per_type'",
":",
"5",
"}",
"for",
"param",
",",
"val",
"in",
"default_params",
".",
... | https://github.com/awslabs/autogluon/blob/7309118f2ab1c9519f25acf61a283a95af95842b/core/src/autogluon/core/models/ensemble/stacker_ensemble_model.py#L97-L101 | ||||
CGATOxford/cgat | 326aad4694bdfae8ddc194171bb5d73911243947 | obsolete/pipeline_proj012_chipseq.py | python | full | () | run the full pipeline. | run the full pipeline. | [
"run",
"the",
"full",
"pipeline",
"."
] | def full():
'''run the full pipeline.'''
pass | [
"def",
"full",
"(",
")",
":",
"pass"
] | https://github.com/CGATOxford/cgat/blob/326aad4694bdfae8ddc194171bb5d73911243947/obsolete/pipeline_proj012_chipseq.py#L1155-L1157 | ||
wucng/TensorExpand | 4ea58f64f5c5082b278229b799c9f679536510b7 | TensorExpand/Object detection/Mask RCNN/CharlesShang_FastMaskRCNN/libs/boxes/gprof2dot.py | python | DotWriter.end_graph | (self) | [] | def end_graph(self):
self.write('}\n') | [
"def",
"end_graph",
"(",
"self",
")",
":",
"self",
".",
"write",
"(",
"'}\\n'",
")"
] | https://github.com/wucng/TensorExpand/blob/4ea58f64f5c5082b278229b799c9f679536510b7/TensorExpand/Object detection/Mask RCNN/CharlesShang_FastMaskRCNN/libs/boxes/gprof2dot.py#L3034-L3035 | ||||
EPFL-LCN/neuronaldynamics-exercises | 18c0d573c943eeff1bfa496f3dcbbf358aed5b62 | neurodynex3/tools/spike_tools.py | python | get_averaged_single_neuron_power_spectrum | (spike_monitor, sampling_frequency,
window_t_min, window_t_max,
nr_neurons_average=100, subtract_mean=False) | return freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron | averaged power-spectrum of spike trains in the time window [window_t_min, window_t_max).
The power spectrum of every single neuron's spike train is computed. Then the average
across all single-neuron powers is computed. In order to limit the computation time, the
number of neurons taken to compute the average is limited to nr_neurons_average which defaults to 100
Args:
spike_monitor (SpikeMonitor) : Brian2 SpikeMonitor
sampling_frequency (Quantity): sampling frequency used to discretize the spike trains.
window_t_min (Quantity): Lower bound of the time window: t>=window_t_min. Spikes
before window_t_min are not taken into account (set a lower bound if you want to exclude an initial
transient in the population activity)
window_t_max (Quantity): Upper bound of the time window: t<window_t_max.
nr_neurons_average (int): Number of neurons over which the average is taken.
subtract_mean (bool): If true, the mean value of the signal is subtracted before FFT. Default is False
Returns:
freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron | averaged power-spectrum of spike trains in the time window [window_t_min, window_t_max).
The power spectrum of every single neuron's spike train is computed. Then the average
across all single-neuron powers is computed. In order to limit the computation time, the
number of neurons taken to compute the average is limited to nr_neurons_average which defaults to 100 | [
"averaged",
"power",
"-",
"spectrum",
"of",
"spike",
"trains",
"in",
"the",
"time",
"window",
"[",
"window_t_min",
"window_t_max",
")",
".",
"The",
"power",
"spectrum",
"of",
"every",
"single",
"neuron",
"s",
"spike",
"train",
"is",
"computed",
".",
"Then",
... | def get_averaged_single_neuron_power_spectrum(spike_monitor, sampling_frequency,
window_t_min, window_t_max,
nr_neurons_average=100, subtract_mean=False):
"""
averaged power-spectrum of spike trains in the time window [window_t_min, window_t_max).
The power spectrum of every single neuron's spike train is computed. Then the average
across all single-neuron powers is computed. In order to limit the computation time, the
number of neurons taken to compute the average is limited to nr_neurons_average which defaults to 100
Args:
spike_monitor (SpikeMonitor) : Brian2 SpikeMonitor
sampling_frequency (Quantity): sampling frequency used to discretize the spike trains.
window_t_min (Quantity): Lower bound of the time window: t>=window_t_min. Spikes
before window_t_min are not taken into account (set a lower bound if you want to exclude an initial
transient in the population activity)
window_t_max (Quantity): Upper bound of the time window: t<window_t_max.
nr_neurons_average (int): Number of neurons over which the average is taken.
subtract_mean (bool): If true, the mean value of the signal is subtracted before FFT. Default is False
Returns:
freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron
"""
assert isinstance(spike_monitor, b2.SpikeMonitor), \
"spike_monitor is not of type SpikeMonitor"
spiketrains = spike_monitor.spike_trains()
nr_neurons = len(spiketrains)
sample_neurons = []
nr_samples = 0
if nr_neurons <= nr_neurons_average:
sample_neurons = range(nr_neurons)
nr_samples = nr_neurons
else:
idxs = np.arange(nr_neurons)
np.random.shuffle(idxs)
sample_neurons = idxs[:(nr_neurons_average)]
nr_samples = nr_neurons_average
sptrs = filter_spike_trains(spike_monitor.spike_trains(), window_t_min, window_t_max, sample_neurons)
time_window_size = window_t_max - window_t_min
discretization_dt = 1./sampling_frequency
if window_t_max is None:
window_t_max = max(spike_monitor.t)
vector_length = 1+int(math.ceil(time_window_size/discretization_dt)) # +1: space for rounding issues
freq = 0
spike_count = 0
all_ps = np.zeros([nr_samples, vector_length/2], float)
all_ps_dict = dict()
mean_firing_freqs_per_neuron = dict()
for i in range(nr_samples):
idx = sample_neurons[i]
vec = _spike_train_2_binary_vector(
sptrs[idx]-window_t_min, vector_length, discretization_dt=discretization_dt)
ps, freq = _get_spike_train_power_spectrum(vec, discretization_dt, subtract_mean)
all_ps[i, :] = ps
all_ps_dict[idx] = ps
nr_spikes = len(sptrs[idx])
nu_avg = nr_spikes / time_window_size
# print(nu_avg)
mean_firing_freqs_per_neuron[idx] = nu_avg
spike_count += nr_spikes # count in the subsample which is filtered to [window_t_min, window_t_max]
mean_ps = np.mean(all_ps, 0)
mean_firing_rate = spike_count / nr_samples / time_window_size
print("mean_firing_rate:{}".format(mean_firing_rate))
return freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron | [
"def",
"get_averaged_single_neuron_power_spectrum",
"(",
"spike_monitor",
",",
"sampling_frequency",
",",
"window_t_min",
",",
"window_t_max",
",",
"nr_neurons_average",
"=",
"100",
",",
"subtract_mean",
"=",
"False",
")",
":",
"assert",
"isinstance",
"(",
"spike_monito... | https://github.com/EPFL-LCN/neuronaldynamics-exercises/blob/18c0d573c943eeff1bfa496f3dcbbf358aed5b62/neurodynex3/tools/spike_tools.py#L307-L374 | |
Xavier-Lam/wechat-django | 258e193e9ec9558709e889fd105c9bf474b013e6 | wechat_django/oauth/authentication.py | python | WeChatOAuthSessionAuthentication.authenticate | (self, request) | return rv | [] | def authenticate(self, request):
session_key = request.wechat.session_key
request.wechat._openid = request.session.get(session_key)
rv = super(WeChatOAuthSessionAuthentication, self).authenticate(
request)
if rv is not None:
request.session[session_key] = rv[1]
return rv | [
"def",
"authenticate",
"(",
"self",
",",
"request",
")",
":",
"session_key",
"=",
"request",
".",
"wechat",
".",
"session_key",
"request",
".",
"wechat",
".",
"_openid",
"=",
"request",
".",
"session",
".",
"get",
"(",
"session_key",
")",
"rv",
"=",
"sup... | https://github.com/Xavier-Lam/wechat-django/blob/258e193e9ec9558709e889fd105c9bf474b013e6/wechat_django/oauth/authentication.py#L66-L73 | |||
nschloe/pygmsh | 3e7eea6fae3b4cd5e9f2c2d52b3686d3e5a1a725 | src/pygmsh/common/geometry.py | python | CommonGeometry.add_physical | (self, entities, label: str | None = None) | [] | def add_physical(self, entities, label: str | None = None):
if label in [label for _, label in self._PHYSICAL_QUEUE]:
raise ValueError(f'Label "{label}" already exists.')
if not isinstance(entities, list):
entities = [entities]
# make sure the dimensionality is the same for all entities
dim = entities[0].dim
for e in entities:
assert e.dim == dim
if label is None:
# 2021-02-18
warnings.warn(
"Physical groups without label are deprecated. "
'Use add_physical(entities, "dummy").'
)
else:
if not isinstance(label, str):
raise ValueError(f"Physical label must be string, not {type(label)}.")
self._PHYSICAL_QUEUE.append((entities, label)) | [
"def",
"add_physical",
"(",
"self",
",",
"entities",
",",
"label",
":",
"str",
"|",
"None",
"=",
"None",
")",
":",
"if",
"label",
"in",
"[",
"label",
"for",
"_",
",",
"label",
"in",
"self",
".",
"_PHYSICAL_QUEUE",
"]",
":",
"raise",
"ValueError",
"("... | https://github.com/nschloe/pygmsh/blob/3e7eea6fae3b4cd5e9f2c2d52b3686d3e5a1a725/src/pygmsh/common/geometry.py#L103-L125 | ||||
robinhood/faust | 01b4c0ad8390221db71751d80001b0fd879291e2 | faust/transport/consumer.py | python | ThreadDelegateConsumer.close | (self) | Close consumer for graceful shutdown. | Close consumer for graceful shutdown. | [
"Close",
"consumer",
"for",
"graceful",
"shutdown",
"."
] | def close(self) -> None:
"""Close consumer for graceful shutdown."""
self._thread.close() | [
"def",
"close",
"(",
"self",
")",
"->",
"None",
":",
"self",
".",
"_thread",
".",
"close",
"(",
")"
] | https://github.com/robinhood/faust/blob/01b4c0ad8390221db71751d80001b0fd879291e2/faust/transport/consumer.py#L1328-L1330 | ||
bcbio/bcbio-nextgen | c80f9b6b1be3267d1f981b7035e3b72441d258f2 | bcbio/pipeline/genome.py | python | _get_galaxy_loc_file | (name, galaxy_dt, ref_dir, galaxy_base) | return loc_file, need_remap | Retrieve Galaxy *.loc file for the given reference/aligner name.
First tries to find an aligner specific *.loc file. If not defined
or does not exist, then we need to try and remap it from the
default reference file | Retrieve Galaxy *.loc file for the given reference/aligner name. | [
"Retrieve",
"Galaxy",
"*",
".",
"loc",
"file",
"for",
"the",
"given",
"reference",
"/",
"aligner",
"name",
"."
] | def _get_galaxy_loc_file(name, galaxy_dt, ref_dir, galaxy_base):
"""Retrieve Galaxy *.loc file for the given reference/aligner name.
First tries to find an aligner specific *.loc file. If not defined
or does not exist, then we need to try and remap it from the
default reference file
"""
if "file" in galaxy_dt and os.path.exists(os.path.join(galaxy_base, galaxy_dt["file"])):
loc_file = os.path.join(galaxy_base, galaxy_dt["file"])
need_remap = False
elif alignment.TOOLS[name].galaxy_loc_file is None:
loc_file = os.path.join(ref_dir, alignment.BASE_LOCATION_FILE)
need_remap = True
else:
loc_file = os.path.join(ref_dir, alignment.TOOLS[name].galaxy_loc_file)
need_remap = False
if not os.path.exists(loc_file):
loc_file = os.path.join(ref_dir, alignment.BASE_LOCATION_FILE)
need_remap = True
return loc_file, need_remap | [
"def",
"_get_galaxy_loc_file",
"(",
"name",
",",
"galaxy_dt",
",",
"ref_dir",
",",
"galaxy_base",
")",
":",
"if",
"\"file\"",
"in",
"galaxy_dt",
"and",
"os",
".",
"path",
".",
"exists",
"(",
"os",
".",
"path",
".",
"join",
"(",
"galaxy_base",
",",
"galax... | https://github.com/bcbio/bcbio-nextgen/blob/c80f9b6b1be3267d1f981b7035e3b72441d258f2/bcbio/pipeline/genome.py#L117-L136 | |
LinkedInAttic/indextank-service | 880c6295ce8e7a3a55bf9b3777cc35c7680e0d7e | storefront/boto/rds/__init__.py | python | RDSConnection.get_all_events | (self, source_identifier=None, source_type=None,
start_time=None, end_time=None,
max_records=None, marker=None) | return self.get_list('DescribeEvents', params, [('Event', Event)]) | Get information about events related to your DBInstances,
DBSecurityGroups and DBParameterGroups.
:type source_identifier: str
:param source_identifier: If supplied, the events returned will be
limited to those that apply to the identified
source. The value of this parameter depends
on the value of source_type. If neither
parameter is specified, all events in the time
span will be returned.
:type source_type: str
:param source_type: Specifies how the source_identifier should
be interpreted. Valid values are:
                            db-instance | db-security-group |
db-parameter-group | db-snapshot
:type start_time: datetime
:param start_time: The beginning of the time interval for events.
If not supplied, all available events will
be returned.
:type end_time: datetime
:param end_time: The ending of the time interval for events.
If not supplied, all available events will
be returned.
:type max_records: int
:param max_records: The maximum number of records to be returned.
If more results are available, a MoreToken will
be returned in the response that can be used to
retrieve additional records. Default is 100.
:type marker: str
:param marker: The marker provided by a previous request.
:rtype: list
:return: A list of class:`boto.rds.event.Event` | Get information about events related to your DBInstances,
DBSecurityGroups and DBParameterGroups.
:type source_identifier: str
:param source_identifier: If supplied, the events returned will be
limited to those that apply to the identified
source. The value of this parameter depends
on the value of source_type. If neither
parameter is specified, all events in the time
span will be returned. | [
"Get",
"information",
"about",
"events",
"related",
"to",
"your",
"DBInstances",
"DBSecurityGroups",
"and",
"DBParameterGroups",
".",
":",
"type",
"source_identifier",
":",
"str",
":",
"param",
"source_identifier",
":",
"If",
"supplied",
"the",
"events",
"returned",... | def get_all_events(self, source_identifier=None, source_type=None,
start_time=None, end_time=None,
max_records=None, marker=None):
"""
Get information about events related to your DBInstances,
DBSecurityGroups and DBParameterGroups.
:type source_identifier: str
:param source_identifier: If supplied, the events returned will be
limited to those that apply to the identified
source. The value of this parameter depends
on the value of source_type. If neither
parameter is specified, all events in the time
span will be returned.
:type source_type: str
:param source_type: Specifies how the source_identifier should
be interpreted. Valid values are:
                            db-instance | db-security-group |
db-parameter-group | db-snapshot
:type start_time: datetime
:param start_time: The beginning of the time interval for events.
If not supplied, all available events will
be returned.
:type end_time: datetime
:param end_time: The ending of the time interval for events.
If not supplied, all available events will
be returned.
:type max_records: int
:param max_records: The maximum number of records to be returned.
If more results are available, a MoreToken will
be returned in the response that can be used to
retrieve additional records. Default is 100.
:type marker: str
:param marker: The marker provided by a previous request.
:rtype: list
:return: A list of class:`boto.rds.event.Event`
"""
params = {}
if source_identifier and source_type:
params['SourceIdentifier'] = source_identifier
params['SourceType'] = source_type
if start_time:
params['StartTime'] = start_time.isoformat()
if end_time:
params['EndTime'] = end_time.isoformat()
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
return self.get_list('DescribeEvents', params, [('Event', Event)]) | [
"def",
"get_all_events",
"(",
"self",
",",
"source_identifier",
"=",
"None",
",",
"source_type",
"=",
"None",
",",
"start_time",
"=",
"None",
",",
"end_time",
"=",
"None",
",",
"max_records",
"=",
"None",
",",
"marker",
"=",
"None",
")",
":",
"params",
"... | https://github.com/LinkedInAttic/indextank-service/blob/880c6295ce8e7a3a55bf9b3777cc35c7680e0d7e/storefront/boto/rds/__init__.py#L756-L811 | |
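The body of `get_all_events` is mostly conditional request-parameter assembly: each optional argument is added to the dict only when supplied, and datetimes are serialized to ISO 8601. A standalone sketch of that pattern (hypothetical helper name; the real method hands the dict to `get_list`):

```python
from datetime import datetime

def build_describe_events_params(source_identifier=None, source_type=None,
                                 start_time=None, end_time=None,
                                 max_records=None, marker=None):
    # Include a parameter only when the caller supplied it; the API treats
    # absent keys as "no filter". Times go out as ISO 8601 strings.
    params = {}
    if source_identifier and source_type:
        params["SourceIdentifier"] = source_identifier
        params["SourceType"] = source_type
    if start_time:
        params["StartTime"] = start_time.isoformat()
    if end_time:
        params["EndTime"] = end_time.isoformat()
    if max_records:
        params["MaxRecords"] = max_records
    if marker:
        params["Marker"] = marker
    return params

p = build_describe_events_params(source_identifier="mydb",
                                 source_type="db-instance",
                                 start_time=datetime(2020, 1, 1))
```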
benadida/helios-server | 19555c3e5e86b4301264ccb26587c51aec8cbbd2 | helios/crypto/elgamal.py | python | SecretKey.prove_sk | (self, challenge_generator) | return DLogProof(commitment, challenge, response) | Generate a PoK of the secret key
Prover generates w, a random integer modulo q, and computes commitment = g^w mod p.
Verifier provides challenge modulo q.
Prover computes response = w + x*challenge mod q, where x is the secret key. | Generate a PoK of the secret key
Prover generates w, a random integer modulo q, and computes commitment = g^w mod p.
Verifier provides challenge modulo q.
Prover computes response = w + x*challenge mod q, where x is the secret key. | [
"Generate",
"a",
"PoK",
"of",
"the",
"secret",
"key",
"Prover",
"generates",
"w",
"a",
"random",
"integer",
"modulo",
"q",
"and",
"computes",
"commitment",
"=",
"g^w",
"mod",
"p",
".",
"Verifier",
"provides",
"challenge",
"modulo",
"q",
".",
"Prover",
"com... | def prove_sk(self, challenge_generator):
"""
Generate a PoK of the secret key
Prover generates w, a random integer modulo q, and computes commitment = g^w mod p.
Verifier provides challenge modulo q.
Prover computes response = w + x*challenge mod q, where x is the secret key.
"""
w = random.mpz_lt(self.pk.q)
commitment = pow(self.pk.g, w, self.pk.p)
challenge = challenge_generator(commitment) % self.pk.q
response = (w + (self.x * challenge)) % self.pk.q
return DLogProof(commitment, challenge, response) | [
"def",
"prove_sk",
"(",
"self",
",",
"challenge_generator",
")",
":",
"w",
"=",
"random",
".",
"mpz_lt",
"(",
"self",
".",
"pk",
".",
"q",
")",
"commitment",
"=",
"pow",
"(",
"self",
".",
"pk",
".",
"g",
",",
"w",
",",
"self",
".",
"pk",
".",
"... | https://github.com/benadida/helios-server/blob/19555c3e5e86b4301264ccb26587c51aec8cbbd2/helios/crypto/elgamal.py#L205-L217 | |
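The docstring describes a Schnorr-style proof of knowledge of the secret exponent. A self-contained toy version, with deliberately tiny parameters (never use such a group in practice) and SHA-256 standing in for helios's `challenge_generator`:

```python
import hashlib
import random

# Toy Schnorr proof of knowledge of x such that h = g^x (mod p).
# Parameters are tiny for illustration only -- not secure.
p, q, g = 23, 11, 4          # g = 4 has prime order q = 11 modulo p = 23
x = random.randrange(1, q)   # the secret key
h = pow(g, x, p)             # the public key

def challenge(commitment):
    # stand-in for the challenge generator: hash the commitment, reduce mod q
    digest = hashlib.sha256(str(commitment).encode()).hexdigest()
    return int(digest, 16) % q

# Prover: commit to a random w, answer the challenge with w + x*c mod q
w = random.randrange(1, q)
commitment = pow(g, w, p)
c = challenge(commitment)
response = (w + x * c) % q

# Verifier: accept iff g^response == commitment * h^c (mod p),
# since g^(w + x*c) = g^w * (g^x)^c
valid = pow(g, response, p) == (commitment * pow(h, c, p)) % p
```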
adulau/Forban | 4b06c8a2e2f18ff872ca20a534587a5f15a692fa | lib/ext/cherrypy/lib/httpauth.py | python | md5SessionKey | (params, password) | return _A1 (params_copy, password) | If the "algorithm" directive's value is "MD5-sess", then A1
[the session key] is calculated only once - on the first request by the
client following receipt of a WWW-Authenticate challenge from the server.
This creates a 'session key' for the authentication of subsequent
requests and responses which is different for each "authentication
session", thus limiting the amount of material hashed with any one
key.
Because the server need only use the hash of the user
credentials in order to create the A1 value, this construction could
be used in conjunction with a third party authentication service so
that the web server would not need the actual password value. The
specification of such a protocol is beyond the scope of this
specification. | If the "algorithm" directive's value is "MD5-sess", then A1
[the session key] is calculated only once - on the first request by the
client following receipt of a WWW-Authenticate challenge from the server. | [
"If",
"the",
"algorithm",
"directive",
"s",
"value",
"is",
"MD5",
"-",
"sess",
"then",
"A1",
"[",
"the",
"session",
"key",
"]",
"is",
"calculated",
"only",
"once",
"-",
"on",
"the",
"first",
"request",
"by",
"the",
"client",
"following",
"receipt",
"of",... | def md5SessionKey (params, password):
"""
If the "algorithm" directive's value is "MD5-sess", then A1
[the session key] is calculated only once - on the first request by the
client following receipt of a WWW-Authenticate challenge from the server.
This creates a 'session key' for the authentication of subsequent
requests and responses which is different for each "authentication
session", thus limiting the amount of material hashed with any one
key.
Because the server need only use the hash of the user
credentials in order to create the A1 value, this construction could
be used in conjunction with a third party authentication service so
that the web server would not need the actual password value. The
specification of such a protocol is beyond the scope of this
specification.
"""
keys = ("username", "realm", "nonce", "cnonce")
params_copy = {}
for key in keys:
params_copy[key] = params[key]
params_copy["algorithm"] = MD5_SESS
return _A1 (params_copy, password) | [
"def",
"md5SessionKey",
"(",
"params",
",",
"password",
")",
":",
"keys",
"=",
"(",
"\"username\"",
",",
"\"realm\"",
",",
"\"nonce\"",
",",
"\"cnonce\"",
")",
"params_copy",
"=",
"{",
"}",
"for",
"key",
"in",
"keys",
":",
"params_copy",
"[",
"key",
"]",... | https://github.com/adulau/Forban/blob/4b06c8a2e2f18ff872ca20a534587a5f15a692fa/lib/ext/cherrypy/lib/httpauth.py#L188-L213 | |
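Per RFC 2617, the MD5-sess session key hashes the user credentials once and then mixes in the server nonce and client cnonce, so the server never needs the raw password again. A minimal sketch of that construction (helper names are ours, not cherrypy's):

```python
import hashlib

def H(s):
    # MD5 hex digest, as used throughout HTTP Digest auth
    return hashlib.md5(s.encode()).hexdigest()

def a1_md5_sess(username, realm, password, nonce, cnonce):
    # RFC 2617: A1 = H(username ":" realm ":" password) ":" nonce ":" cnonce
    credential_hash = H("%s:%s:%s" % (username, realm, password))
    return "%s:%s:%s" % (credential_hash, nonce, cnonce)

a1 = a1_md5_sess("alice", "example", "s3cret", "abc", "xyz")
```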
JasperSnoek/spearmint | b37a541be1ea035f82c7c82bbd93f5b4320e7d91 | spearmint/spearmint/chooser/cma.py | python | FitnessFunctions.rosen_nesterov | (self, x, rho=100) | return f | needs exponential number of steps in a non-increasing f-sequence.
x_0 = (-1,1,...,1)
See Jarre (2011) "On Nesterov's Smooth Chebyshev-Rosenbrock Function" | needs exponential number of steps in a non-increasing f-sequence. | [
"needs",
"exponential",
"number",
"of",
"steps",
"in",
"a",
"non",
"-",
"increasing",
"f",
"-",
"sequence",
"."
] | def rosen_nesterov(self, x, rho=100):
"""needs exponential number of steps in a non-increasing f-sequence.
x_0 = (-1,1,...,1)
See Jarre (2011) "On Nesterov's Smooth Chebyshev-Rosenbrock Function"
"""
f = 0.25 * (x[0] - 1)**2
f += rho * sum((x[1:] - 2 * x[:-1]**2 + 1)**2)
return f | [
"def",
"rosen_nesterov",
"(",
"self",
",",
"x",
",",
"rho",
"=",
"100",
")",
":",
"f",
"=",
"0.25",
"*",
"(",
"x",
"[",
"0",
"]",
"-",
"1",
")",
"**",
"2",
"f",
"+=",
"rho",
"*",
"sum",
"(",
"(",
"x",
"[",
"1",
":",
"]",
"-",
"2",
"*",
... | https://github.com/JasperSnoek/spearmint/blob/b37a541be1ea035f82c7c82bbd93f5b4320e7d91/spearmint/spearmint/chooser/cma.py#L6697-L6706 | |
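A standalone NumPy version of the function above, evaluated at its global minimum x = (1, ..., 1), where f = 0, and at the suggested start x0 = (-1, 1, ..., 1):

```python
import numpy as np

def rosen_nesterov(x, rho=100):
    # f(x) = 0.25 (x_0 - 1)^2 + rho * sum_i (x_{i+1} - 2 x_i^2 + 1)^2
    x = np.asarray(x, dtype=float)
    return 0.25 * (x[0] - 1) ** 2 + rho * np.sum((x[1:] - 2 * x[:-1] ** 2 + 1) ** 2)

f_min = rosen_nesterov(np.ones(5))               # at the global minimum
f_start = rosen_nesterov([-1.0, 1.0, 1.0, 1.0])  # at the suggested x_0
```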
emmetio/livestyle-sublime-old | c42833c046e9b2f53ebce3df3aa926528f5a33b5 | lsutils/diff.py | python | _on_diff_editor_sources | (data, sender) | [] | def _on_diff_editor_sources(data, sender):
logger.debug('Received diff sources response: %s' % ws.format_message(data))
if not data['success']:
logger.error('[ws] %s' % data.get('result', ''))
_on_diff_complete(data.get('file'), None, None)
else:
r = data.get('result', {})
_on_diff_complete(data.get('file'), r.get('patches'), r.get('source')) | [
"def",
"_on_diff_editor_sources",
"(",
"data",
",",
"sender",
")",
":",
"logger",
".",
"debug",
"(",
"'Received diff sources response: %s'",
"%",
"ws",
".",
"format_message",
"(",
"data",
")",
")",
"if",
"not",
"data",
"[",
"'success'",
"]",
":",
"logger",
"... | https://github.com/emmetio/livestyle-sublime-old/blob/c42833c046e9b2f53ebce3df3aa926528f5a33b5/lsutils/diff.py#L241-L248 | ||||
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_flaskbb/Python-2.7.9/Lib/lib2to3/fixes/fix_import.py | python | FixImport.start_tree | (self, tree, name) | [] | def start_tree(self, tree, name):
super(FixImport, self).start_tree(tree, name)
self.skip = "absolute_import" in tree.future_features | [
"def",
"start_tree",
"(",
"self",
",",
"tree",
",",
"name",
")",
":",
"super",
"(",
"FixImport",
",",
"self",
")",
".",
"start_tree",
"(",
"tree",
",",
"name",
")",
"self",
".",
"skip",
"=",
"\"absolute_import\"",
"in",
"tree",
".",
"future_features"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Lib/lib2to3/fixes/fix_import.py#L47-L49 | ||||
jgagneastro/coffeegrindsize | 22661ebd21831dba4cf32bfc6ba59fe3d49f879c | App/venv/lib/python3.7/site-packages/setuptools/msvc.py | python | SystemInfo.WindowsSdkDir | (self) | return sdkdir | Microsoft Windows SDK directory. | Microsoft Windows SDK directory. | [
"Microsoft",
"Windows",
"SDK",
"directory",
"."
] | def WindowsSdkDir(self):
"""
Microsoft Windows SDK directory.
"""
sdkdir = ''
for ver in self.WindowsSdkVersion:
# Try to get it from registry
loc = os.path.join(self.ri.windows_sdk, 'v%s' % ver)
sdkdir = self.ri.lookup(loc, 'installationfolder')
if sdkdir:
break
if not sdkdir or not os.path.isdir(sdkdir):
# Try to get "VC++ for Python" version from registry
path = os.path.join(self.ri.vc_for_python, '%0.1f' % self.vc_ver)
install_base = self.ri.lookup(path, 'installdir')
if install_base:
sdkdir = os.path.join(install_base, 'WinSDK')
if not sdkdir or not os.path.isdir(sdkdir):
# If fail, use default new path
for ver in self.WindowsSdkVersion:
intver = ver[:ver.rfind('.')]
path = r'Microsoft SDKs\Windows Kits\%s' % (intver)
d = os.path.join(self.ProgramFiles, path)
if os.path.isdir(d):
sdkdir = d
if not sdkdir or not os.path.isdir(sdkdir):
# If fail, use default old path
for ver in self.WindowsSdkVersion:
path = r'Microsoft SDKs\Windows\v%s' % ver
d = os.path.join(self.ProgramFiles, path)
if os.path.isdir(d):
sdkdir = d
if not sdkdir:
# If fail, use Platform SDK
sdkdir = os.path.join(self.VCInstallDir, 'PlatformSDK')
return sdkdir | [
"def",
"WindowsSdkDir",
"(",
"self",
")",
":",
"sdkdir",
"=",
"''",
"for",
"ver",
"in",
"self",
".",
"WindowsSdkVersion",
":",
"# Try to get it from registry",
"loc",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"ri",
".",
"windows_sdk",
",",
"... | https://github.com/jgagneastro/coffeegrindsize/blob/22661ebd21831dba4cf32bfc6ba59fe3d49f879c/App/venv/lib/python3.7/site-packages/setuptools/msvc.py#L607-L642 | |
mihaip/mail-trends | 312b5be7f8f0a5933b05c75e229eda8e44c3c920 | stats/group.py | python | StatColumnGroup.__init__ | (self, *args) | [] | def __init__(self, *args):
StatGroup.__init__(self)
for stat in args:
self._AddStat(stat) | [
"def",
"__init__",
"(",
"self",
",",
"*",
"args",
")",
":",
"StatGroup",
".",
"__init__",
"(",
"self",
")",
"for",
"stat",
"in",
"args",
":",
"self",
".",
"_AddStat",
"(",
"stat",
")"
] | https://github.com/mihaip/mail-trends/blob/312b5be7f8f0a5933b05c75e229eda8e44c3c920/stats/group.py#L115-L118 | ||||
WikidPad/WikidPad | 558109638807bc76b4672922686e416ab2d5f79c | WikidPad/lib/aui/tabart.py | python | ChromeTabArt.DrawTab | (self, dc, wnd, page, in_rect, close_button_state, paint_control=False) | return out_tab_rect, out_button_rect, x_extent | Draws a single tab.
:param `dc`: a :class:`wx.DC` device context;
:param `wnd`: a :class:`wx.Window` instance object;
:param `page`: the tab control page associated with the tab;
:param wx.Rect `in_rect`: rectangle the tab should be confined to;
:param integer `close_button_state`: the state of the close button on the tab;
:param bool `paint_control`: whether to draw the control inside a tab (if any) on a :class:`MemoryDC`. | Draws a single tab. | [
"Draws",
"a",
"single",
"tab",
"."
] | def DrawTab(self, dc, wnd, page, in_rect, close_button_state, paint_control=False):
"""
Draws a single tab.
:param `dc`: a :class:`wx.DC` device context;
:param `wnd`: a :class:`wx.Window` instance object;
:param `page`: the tab control page associated with the tab;
:param wx.Rect `in_rect`: rectangle the tab should be confined to;
:param integer `close_button_state`: the state of the close button on the tab;
:param bool `paint_control`: whether to draw the control inside a tab (if any) on a :class:`MemoryDC`.
"""
# Chrome tab style
control = page.control
# figure out the size of the tab
tab_size, x_extent = self.GetTabSize(dc, wnd, page.caption, page.bitmap, page.active,
close_button_state, control)
agwFlags = self.GetAGWFlags()
tab_height = self._tab_ctrl_height - 1
tab_width = tab_size[0]
tab_x = in_rect.x
tab_y = in_rect.y + in_rect.height - tab_height
clip_width = tab_width
if tab_x + clip_width > in_rect.x + in_rect.width - 4:
clip_width = (in_rect.x + in_rect.width) - tab_x - 4
dc.SetClippingRegion(tab_x, tab_y, clip_width + 1, tab_height - 3)
drawn_tab_yoff = 1
if page.active:
left = self._leftActiveBmp
center = self._centerActiveBmp
right = self._rightActiveBmp
else:
left = self._leftInactiveBmp
center = self._centerInactiveBmp
right = self._rightInactiveBmp
dc.DrawBitmap(left, tab_x, tab_y)
leftw = left.GetWidth()
centerw = center.GetWidth()
rightw = right.GetWidth()
available = tab_x + tab_width - rightw
posx = tab_x + leftw
while 1:
if posx >= available:
break
dc.DrawBitmap(center, posx, tab_y)
posx += centerw
dc.DrawBitmap(right, posx, tab_y)
drawn_tab_height = center.GetHeight()
text_offset = tab_x + leftw
close_button_width = 0
if close_button_state != AUI_BUTTON_STATE_HIDDEN:
close_button_width = self._active_close_bmp.GetWidth()
if agwFlags & AUI_NB_CLOSE_ON_TAB_LEFT:
text_offset += close_button_width
if not page.enabled:
dc.SetTextForeground(wx.SystemSettings.GetColour(wx.SYS_COLOUR_GRAYTEXT))
pagebitmap = page.dis_bitmap
else:
dc.SetTextForeground(page.text_colour)
pagebitmap = page.bitmap
bitmap_offset = 0
if pagebitmap.IsOk():
bitmap_offset = tab_x + leftw
if agwFlags & AUI_NB_CLOSE_ON_TAB_LEFT and close_button_width:
bitmap_offset += close_button_width
# draw bitmap
dc.DrawBitmap(pagebitmap, bitmap_offset,
drawn_tab_yoff + (drawn_tab_height/2) - (pagebitmap.GetHeight()/2),
True)
text_offset = bitmap_offset + pagebitmap.GetWidth()
text_offset += 3 # bitmap padding
else:
if agwFlags & AUI_NB_CLOSE_ON_TAB_LEFT == 0 or not close_button_width:
text_offset = tab_x + leftw
# if the caption is empty, measure some temporary text
caption = page.caption
if caption == "":
caption = "Xj"
if page.active:
dc.SetFont(self._selected_font)
textx, texty, dummy = dc.GetFullMultiLineTextExtent(caption)
else:
dc.SetFont(self._normal_font)
textx, texty, dummy = dc.GetFullMultiLineTextExtent(caption)
if agwFlags & AUI_NB_CLOSE_ON_TAB_LEFT:
draw_text = ChopText(dc, caption, tab_width - (text_offset-tab_x) - leftw)
else:
draw_text = ChopText(dc, caption, tab_width - (text_offset-tab_x) - close_button_width - leftw)
ypos = drawn_tab_yoff + drawn_tab_height/2 - texty/2 - 1
if control:
if control.GetPosition() != wx.Point(text_offset+1, ypos):
control.SetPosition(wx.Point(text_offset+1, ypos))
if not control.IsShown():
control.Show()
if paint_control:
bmp = TakeScreenShot(control.GetScreenRect())
dc.DrawBitmap(bmp, text_offset+1, ypos, True)
controlW, controlH = control.GetSize()
text_offset += controlW + 4
# draw tab text
rectx, recty, dummy = dc.GetFullMultiLineTextExtent(draw_text)
dc.DrawLabel(draw_text, wx.Rect(text_offset, ypos, rectx, recty))
out_button_rect = wx.Rect()
# draw 'x' on tab (if enabled)
if close_button_state != AUI_BUTTON_STATE_HIDDEN:
close_button_width = self._active_close_bmp.GetWidth()
bmp = self._disabled_close_bmp
if close_button_state == AUI_BUTTON_STATE_HOVER:
bmp = self._hover_close_bmp
elif close_button_state == AUI_BUTTON_STATE_PRESSED:
bmp = self._pressed_close_bmp
if agwFlags & AUI_NB_CLOSE_ON_TAB_LEFT:
rect = wx.Rect(tab_x + leftw - 2,
drawn_tab_yoff + (drawn_tab_height / 2) - (bmp.GetHeight() / 2) + 1,
close_button_width, tab_height)
else:
rect = wx.Rect(tab_x + tab_width - close_button_width - rightw + 2,
drawn_tab_yoff + (drawn_tab_height / 2) - (bmp.GetHeight() / 2) + 1,
close_button_width, tab_height)
if agwFlags & AUI_NB_BOTTOM:
rect.y -= 1
# Indent the button if it is pressed down:
rect = IndentPressedBitmap(rect, close_button_state)
dc.DrawBitmap(bmp, rect.x, rect.y, True)
out_button_rect = rect
out_tab_rect = wx.Rect(tab_x, tab_y, tab_width, tab_height)
dc.DestroyClippingRegion()
return out_tab_rect, out_button_rect, x_extent | [
"def",
"DrawTab",
"(",
"self",
",",
"dc",
",",
"wnd",
",",
"page",
",",
"in_rect",
",",
"close_button_state",
",",
"paint_control",
"=",
"False",
")",
":",
"# Chrome tab style",
"control",
"=",
"page",
".",
"control",
"# figure out the size of the tab",
"tab_siz... | https://github.com/WikidPad/WikidPad/blob/558109638807bc76b4672922686e416ab2d5f79c/WikidPad/lib/aui/tabart.py#L2639-L2801 | |
YouChouNoBB/data-mining-introduction | 8cbbeccaa7200d5e6a7f0d259393794feae3151e | randomForest-base.py | python | DecisionTree.predict | (self,X) | return X.apply(lambda x:self.id3c45_predict(self.tree,x),axis=1).values | [] | def predict(self,X):
if self.method=='gini':
return X.apply(lambda x:self.cart_predict(self.tree,x),axis=1).values
return X.apply(lambda x:self.id3c45_predict(self.tree,x),axis=1).values | [
"def",
"predict",
"(",
"self",
",",
"X",
")",
":",
"if",
"self",
".",
"method",
"==",
"'gini'",
":",
"return",
"X",
".",
"apply",
"(",
"lambda",
"x",
":",
"self",
".",
"cart_predict",
"(",
"self",
".",
"tree",
",",
"x",
")",
",",
"axis",
"=",
"... | https://github.com/YouChouNoBB/data-mining-introduction/blob/8cbbeccaa7200d5e6a7f0d259393794feae3151e/randomForest-base.py#L205-L208 | |||
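The `predict` method dispatches row-wise via `DataFrame.apply(..., axis=1)`: each row reaches the traversal function as a Series, and `.values` converts the resulting Series of predictions to a plain array. A minimal illustration, with a stub standing in for the fitted tree traversal:

```python
import pandas as pd

def tree_predict(row):
    # stand-in for walking a fitted decision tree down to a leaf label
    return 1 if row["x1"] > 0.5 else 0

X = pd.DataFrame({"x1": [0.2, 0.9], "x2": [1.0, 3.0]})
# axis=1 passes one row (a Series) at a time to tree_predict
preds = X.apply(tree_predict, axis=1).values
```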
sentinel-hub/eo-learn | cf964eaf173668d6a374675dbd7c1d244264c11d | mask/eolearn/mask/cloud_mask.py | python | CloudMaskTask._map_sequence | (data, func2d) | return output | Iterate over time and band dimensions and apply a function to each slice.
Returns a new array with the combined results.
:param data: input array
:type data: array of shape (timestamps, rows, columns, channels)
:param func2d: Mapping function that is applied on each 2d image slice. All outputs must have the same shape.
:type func2d: function (rows, columns) -> (new_rows, new_columns) | Iterate over time and band dimensions and apply a function to each slice.
Returns a new array with the combined results. | [
"Iterate",
"over",
"time",
"and",
"band",
"dimensions",
"and",
"apply",
"a",
"function",
"to",
"each",
"slice",
".",
"Returns",
"a",
"new",
"array",
"with",
"the",
"combined",
"results",
"."
] | def _map_sequence(data, func2d):
""" Iterate over time and band dimensions and apply a function to each slice.
Returns a new array with the combined results.
:param data: input array
:type data: array of shape (timestamps, rows, columns, channels)
:param func2d: Mapping function that is applied on each 2d image slice. All outputs must have the same shape.
:type func2d: function (rows, columns) -> (new_rows, new_columns)
"""
# Map over channel dimension on 3d tensor
def func3d(dim):
return map_over_axis(dim, func2d, axis=2)
# Map over time dimension on 4d tensor
def func4d(dim):
return map_over_axis(dim, func3d, axis=0)
output = func4d(data)
return output | [
"def",
"_map_sequence",
"(",
"data",
",",
"func2d",
")",
":",
"# Map over channel dimension on 3d tensor",
"def",
"func3d",
"(",
"dim",
")",
":",
"return",
"map_over_axis",
"(",
"dim",
",",
"func2d",
",",
"axis",
"=",
"2",
")",
"# Map over time dimension on 4d ten... | https://github.com/sentinel-hub/eo-learn/blob/cf964eaf173668d6a374675dbd7c1d244264c11d/mask/eolearn/mask/cloud_mask.py#L322-L341 | |
e2nIEE/pandapower | 12bd83d7c4e1bf3fa338dab2db649c3cd3db0cfb | pandapower/pypower/fdpf.py | python | fdpf | (Ybus, Sbus, V0, Bp, Bpp, ref, pv, pq, ppopt=None) | return V, converged, i | Solves the power flow using a fast decoupled method.
Solves for bus voltages given the full system admittance matrix (for
all buses), the complex bus power injection vector (for all buses),
the initial vector of complex bus voltages, the FDPF matrices B prime
and B double prime, and column vectors with the lists of bus indices
for the swing bus, PV buses, and PQ buses, respectively. The bus voltage
vector contains the set point for generator (including ref bus)
buses, and the reference angle of the swing bus, as well as an initial
guess for remaining magnitudes and angles. C{ppopt} is a PYPOWER options
vector which can be used to set the termination tolerance, maximum
number of iterations, and output options (see L{ppoption} for details).
Uses default options if this parameter is not given. Returns the
final complex voltages, a flag which indicates whether it converged
or not, and the number of iterations performed.
@see: L{runpf}
@author: Ray Zimmerman (PSERC Cornell) | Solves the power flow using a fast decoupled method. | [
"Solves",
"the",
"power",
"flow",
"using",
"a",
"fast",
"decoupled",
"method",
"."
] | def fdpf(Ybus, Sbus, V0, Bp, Bpp, ref, pv, pq, ppopt=None):
"""Solves the power flow using a fast decoupled method.
Solves for bus voltages given the full system admittance matrix (for
all buses), the complex bus power injection vector (for all buses),
the initial vector of complex bus voltages, the FDPF matrices B prime
and B double prime, and column vectors with the lists of bus indices
for the swing bus, PV buses, and PQ buses, respectively. The bus voltage
vector contains the set point for generator (including ref bus)
buses, and the reference angle of the swing bus, as well as an initial
guess for remaining magnitudes and angles. C{ppopt} is a PYPOWER options
vector which can be used to set the termination tolerance, maximum
number of iterations, and output options (see L{ppoption} for details).
Uses default options if this parameter is not given. Returns the
final complex voltages, a flag which indicates whether it converged
or not, and the number of iterations performed.
@see: L{runpf}
@author: Ray Zimmerman (PSERC Cornell)
"""
if ppopt is None:
ppopt = ppoption()
## options
tol = ppopt['PF_TOL']
max_it = ppopt['PF_MAX_IT_FD']
verbose = ppopt['VERBOSE']
## initialize
converged = 0
i = 0
V = V0
Va = angle(V)
Vm = abs(V)
## set up indexing for updating V
#npv = len(pv)
#npq = len(pq)
pvpq = r_[pv, pq]
## evaluate initial mismatch
mis = (V * conj(Ybus * V) - Sbus) / Vm
P = mis[pvpq].real
Q = mis[pq].imag
## check tolerance
normP = linalg.norm(P, Inf)
normQ = linalg.norm(Q, Inf)
if verbose > 1:
sys.stdout.write('\niteration max mismatch (p.u.) ')
sys.stdout.write('\ntype # P Q ')
sys.stdout.write('\n---- ---- ----------- -----------')
sys.stdout.write('\n - %3d %10.3e %10.3e' % (i, normP, normQ))
if normP < tol and normQ < tol:
converged = 1
if verbose > 1:
sys.stdout.write('\nConverged!\n')
## reduce B matrices
Bp = Bp[array([pvpq]).T, pvpq].tocsc() # splu requires a CSC matrix
Bpp = Bpp[array([pq]).T, pq].tocsc()
## factor B matrices
Bp_solver = splu(Bp)
Bpp_solver = splu(Bpp)
## do P and Q iterations
while (not converged and i < max_it):
## update iteration counter
i = i + 1
##----- do P iteration, update Va -----
dVa = -Bp_solver.solve(P)
## update voltage
Va[pvpq] = Va[pvpq] + dVa
V = Vm * exp(1j * Va)
## evalute mismatch
mis = (V * conj(Ybus * V) - Sbus) / Vm
P = mis[pvpq].real
Q = mis[pq].imag
## check tolerance
normP = linalg.norm(P, Inf)
normQ = linalg.norm(Q, Inf)
if verbose > 1:
sys.stdout.write("\n %s %3d %10.3e %10.3e" %
(type,i, normP, normQ))
if normP < tol and normQ < tol:
converged = 1
if verbose:
sys.stdout.write('\nFast-decoupled power flow converged in %d '
'P-iterations and %d Q-iterations.\n' % (i, i - 1))
break
##----- do Q iteration, update Vm -----
dVm = -Bpp_solver.solve(Q)
## update voltage
Vm[pq] = Vm[pq] + dVm
V = Vm * exp(1j * Va)
        ## evaluate mismatch
mis = (V * conj(Ybus * V) - Sbus) / Vm
P = mis[pvpq].real
Q = mis[pq].imag
## check tolerance
normP = linalg.norm(P, Inf)
normQ = linalg.norm(Q, Inf)
if verbose > 1:
sys.stdout.write('\n Q %3d %10.3e %10.3e' % (i, normP, normQ))
if normP < tol and normQ < tol:
converged = 1
if verbose:
sys.stdout.write('\nFast-decoupled power flow converged in %d '
'P-iterations and %d Q-iterations.\n' % (i, i))
break
if verbose:
if not converged:
sys.stdout.write('\nFast-decoupled power flow did not converge in '
'%d iterations.' % i)
return V, converged, i | [
"def",
"fdpf",
"(",
"Ybus",
",",
"Sbus",
",",
"V0",
",",
"Bp",
",",
"Bpp",
",",
"ref",
",",
"pv",
",",
"pq",
",",
"ppopt",
"=",
"None",
")",
":",
"if",
"ppopt",
"is",
"None",
":",
"ppopt",
"=",
"ppoption",
"(",
")",
"## options",
"tol",
"=",
... | https://github.com/e2nIEE/pandapower/blob/12bd83d7c4e1bf3fa338dab2db649c3cd3db0cfb/pandapower/pypower/fdpf.py#L16-L142 | |
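The key performance trick in `fdpf` is that B' and B'' are constant across iterations, so each is converted to CSC (which `splu` requires) and LU-factored once; every subsequent P- or Q-iteration is just a cheap back-substitution. The same factor-once/solve-many pattern in miniature, on a toy 2x2 system:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Factor the constant matrix once ...
Bp = csc_matrix(np.array([[4.0, -1.0],
                          [-1.0, 3.0]]))
Bp_solver = splu(Bp)

# ... then reuse the factorization for each iteration's right-hand side.
P = np.array([1.0, 2.0])
dVa = -Bp_solver.solve(P)   # mirrors dVa = -Bp_solver.solve(P) in fdpf
```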
pydata/xarray | 9226c7ac87b3eb246f7a7e49f8f0f23d68951624 | xarray/util/generate_ops.py | python | render | (ops_info, is_module) | Render the module or stub file. | Render the module or stub file. | [
"Render",
"the",
"module",
"or",
"stub",
"file",
"."
] | def render(ops_info, is_module):
"""Render the module or stub file."""
yield MODULE_PREAMBLE if is_module else STUBFILE_PREAMBLE
for cls_name, method_blocks in ops_info.items():
yield CLASS_PREAMBLE.format(cls_name=cls_name, newline="\n" * is_module)
yield from _render_classbody(method_blocks, is_module) | [
"def",
"render",
"(",
"ops_info",
",",
"is_module",
")",
":",
"yield",
"MODULE_PREAMBLE",
"if",
"is_module",
"else",
"STUBFILE_PREAMBLE",
"for",
"cls_name",
",",
"method_blocks",
"in",
"ops_info",
".",
"items",
"(",
")",
":",
"yield",
"CLASS_PREAMBLE",
".",
"f... | https://github.com/pydata/xarray/blob/9226c7ac87b3eb246f7a7e49f8f0f23d68951624/xarray/util/generate_ops.py#L232-L238 | ||
nwo: peterdsharpe/AeroSandbox
path: aerosandbox/aerodynamics/aero_3D/aero_buildup_submodels/fuselage_aerodynamics.py
identifier: fuselage_aerodynamics(fuselage, op_point)
docstring_summary: Estimates the aerodynamic forces, moments, and derivatives on a fuselage in isolation.

```python
def fuselage_aerodynamics(
        fuselage: Fuselage,
        op_point: OperatingPoint,
):
    """
    Estimates the aerodynamic forces, moments, and derivatives on a fuselage in isolation.

    Assumes:
    * The fuselage is a body of revolution aligned with the x_b axis.
    * The angle between the nose and the freestream is less than 90 degrees.

    Moments are given with the reference at Fuselage [0, 0, 0].

    Uses methods from Jorgensen, Leland Howard. "Prediction of Static Aerodynamic Characteristics for Slender Bodies
    Alone and with Lifting Surfaces to Very High Angles of Attack". NASA TR R-474. 1977.

    Args:
        fuselage: A Fuselage object that you wish to analyze.
        op_point: The OperatingPoint that you wish to analyze the fuselage at.

    Returns:
    """
    fuselage.Re = op_point.reynolds(reference_length=fuselage.length())

    ####### Reference quantities (Set these 1 here, just so we can follow Jorgensen syntax.)
    # Outputs of this function should be invariant of these quantities, if normalization has been done correctly.
    S_ref = 1  # m^2
    c_ref = 1  # m

    ####### Fuselage zero-lift drag estimation
    ### Forebody drag
    C_f_forebody = aerolib.Cf_flat_plate(
        Re_L=fuselage.Re
    )

    ### Base drag
    C_D_base = 0.029 / np.sqrt(C_f_forebody) * fuselage.area_base() / S_ref

    ### Skin friction drag
    C_D_skin = C_f_forebody * fuselage.area_wetted() / S_ref

    ### Total zero-lift drag
    C_D_zero_lift = C_D_skin + C_D_base

    ####### Jorgensen model
    ### First, merge the alpha and beta into a single "generalized alpha", which represents the degrees between the fuselage axis and the freestream.
    x_w, y_w, z_w = op_point.convert_axes(
        1, 0, 0, from_axes="body", to_axes="wind"
    )
    generalized_alpha = np.arccosd(x_w / (1 + 1e-14))
    sin_generalized_alpha = np.sind(generalized_alpha)
    cos_generalized_alpha = x_w

    # ### Limit generalized alpha to -90 < alpha < 90, for now.
    # generalized_alpha = np.clip(generalized_alpha, -90, 90)
    # # TODO make the drag/moment functions not give negative results for alpha > 90.

    alpha_fractional_component = -z_w / np.sqrt(
        y_w ** 2 + z_w ** 2 + 1e-16)  # The fraction of any "generalized lift" to be in the direction of alpha
    beta_fractional_component = y_w / np.sqrt(
        y_w ** 2 + z_w ** 2 + 1e-16)  # The fraction of any "generalized lift" to be in the direction of beta

    ### Compute normal quantities
    ### Note the (N)ormal, (A)ligned coordinate system. (See Jorgensen for definitions.)
    # M_n = sin_generalized_alpha * op_point.mach()
    Re_n = sin_generalized_alpha * fuselage.Re
    # V_n = sin_generalized_alpha * op_point.velocity
    q = op_point.dynamic_pressure()
    x_nose = fuselage.xsecs[0].xyz_c[0]
    x_m = 0 - x_nose
    x_c = fuselage.x_centroid_projected() - x_nose

    ##### Potential flow crossflow model
    C_N_p = (  # Normal force coefficient due to potential flow. (Jorgensen Eq. 2.12, part 1)
        fuselage.area_base() / S_ref * np.sind(2 * generalized_alpha) * np.cosd(generalized_alpha / 2)
    )
    C_m_p = (
        (
            fuselage.volume() - fuselage.area_base() * (fuselage.length() - x_m)
        ) / (
            S_ref * c_ref
        ) * np.sind(2 * generalized_alpha) * np.cosd(generalized_alpha / 2)
    )

    ##### Viscous crossflow model
    C_d_n = np.where(
        Re_n != 0,
        aerolib.Cd_cylinder(Re_D=Re_n),  # Replace with 1.20 from Jorgensen Table 1 if not working well
        0
    )
    eta = jorgensen_eta(fuselage.fineness_ratio())
    C_N_v = (  # Normal force coefficient due to viscous crossflow. (Jorgensen Eq. 2.12, part 2)
        eta * C_d_n * fuselage.area_projected() / S_ref * sin_generalized_alpha ** 2
    )
    C_m_v = (
        eta * C_d_n * fuselage.area_projected() / S_ref * (x_m - x_c) / c_ref * sin_generalized_alpha ** 2
    )

    ##### Total C_N model
    C_N = C_N_p + C_N_v
    C_m_generalized = C_m_p + C_m_v

    ##### Total C_A model
    C_A = C_D_zero_lift * cos_generalized_alpha * np.abs(cos_generalized_alpha)

    ##### Convert to lift, drag
    C_L_generalized = C_N * cos_generalized_alpha - C_A * sin_generalized_alpha
    C_D = C_N * sin_generalized_alpha + C_A * cos_generalized_alpha

    ### Set proper directions
    C_L = C_L_generalized * alpha_fractional_component
    C_Y = -C_L_generalized * beta_fractional_component
    C_l = 0
    C_m = C_m_generalized * alpha_fractional_component
    C_n = -C_m_generalized * beta_fractional_component

    ### Un-normalize
    L = C_L * q * S_ref
    Y = C_Y * q * S_ref
    D = C_D * q * S_ref
    l_w = C_l * q * S_ref * c_ref
    m_w = C_m * q * S_ref * c_ref
    n_w = C_n * q * S_ref * c_ref

    ### Convert to axes coordinates for reporting
    F_w = (
        -D,
        Y,
        -L
    )
    F_b = op_point.convert_axes(*F_w, from_axes="wind", to_axes="body")
    F_g = op_point.convert_axes(*F_b, from_axes="body", to_axes="geometry")
    M_w = (
        l_w,
        m_w,
        n_w,
    )
    M_b = op_point.convert_axes(*M_w, from_axes="wind", to_axes="body")
    M_g = op_point.convert_axes(*M_b, from_axes="body", to_axes="geometry")

    return {
        "F_g": F_g,
        "F_b": F_b,
        "F_w": F_w,
        "M_g": M_g,
        "M_b": M_b,
        "M_w": M_w,
        "L": -F_w[2],
        "Y": F_w[1],
        "D": -F_w[0],
        "l_b": M_b[0],
        "m_b": M_b[1],
        "n_b": M_b[2],
    }
```

url: https://github.com/peterdsharpe/AeroSandbox/blob/ded68b0465f2bfdcaf4bc90abd8c91be0addcaba/aerosandbox/aerodynamics/aero_3D/aero_buildup_submodels/fuselage_aerodynamics.py#L32-L193
nwo: ucsb-seclab/karonte
path: tool/bdg/binary_dependency_graph.py
identifier: BinaryDependencyGraph.run(self)
docstring_summary: Run the Binary Dependency Graph analysis.

```python
def run(self):
    """
    Run the Binary Dependency Graph analysis

    :return: the binary dependency graph
    """
    self._start_time = time.time()
    self._build_dependency_graph()
    self._end_time = time.time()
```

url: https://github.com/ucsb-seclab/karonte/blob/427ac313e596f723e40768b95d13bd7a9fc92fd8/tool/bdg/binary_dependency_graph.py#L887-L895
nwo: HymanLiuTS/flaskTs
path: flasky/lib/python2.7/site-packages/pkg_resources/__init__.py
identifier: Distribution.egg_name(self)
docstring_summary: Return what this distribution's standard .egg filename should be.

```python
def egg_name(self):
    """Return what this distribution's standard .egg filename should be"""
    filename = "%s-%s-py%s" % (
        to_filename(self.project_name), to_filename(self.version),
        self.py_version or PY_MAJOR
    )
    if self.platform:
        filename += '-' + self.platform
    return filename
```

url: https://github.com/HymanLiuTS/flaskTs/blob/286648286976e85d9b9a5873632331efcafe0b21/flasky/lib/python2.7/site-packages/pkg_resources/__init__.py#L2560-L2569
nwo: open-mmlab/mmdetection3d
path: tools/create_data.py
identifier: nuscenes_data_prep(root_path, info_prefix, version, dataset_name, out_dir, max_sweeps=10)
docstring_summary: Prepare data related to nuScenes dataset.

```python
def nuscenes_data_prep(root_path,
                       info_prefix,
                       version,
                       dataset_name,
                       out_dir,
                       max_sweeps=10):
    """Prepare data related to nuScenes dataset.

    Related data consists of '.pkl' files recording basic infos,
    2D annotations and groundtruth database.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        version (str): Dataset version.
        dataset_name (str): The dataset class name.
        out_dir (str): Output directory of the groundtruth database info.
        max_sweeps (int): Number of input consecutive frames. Default: 10
    """
    nuscenes_converter.create_nuscenes_infos(
        root_path, info_prefix, version=version, max_sweeps=max_sweeps)

    if version == 'v1.0-test':
        info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl')
        nuscenes_converter.export_2d_annotation(
            root_path, info_test_path, version=version)
        return

    info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl')
    info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl')
    nuscenes_converter.export_2d_annotation(
        root_path, info_train_path, version=version)
    nuscenes_converter.export_2d_annotation(
        root_path, info_val_path, version=version)
    create_groundtruth_database(dataset_name, root_path, info_prefix,
                                f'{out_dir}/{info_prefix}_infos_train.pkl')
```

url: https://github.com/open-mmlab/mmdetection3d/blob/c7272063e818bcf33aebc498a017a95c8d065143/tools/create_data.py#L47-L82
nwo: PINTO0309/PINTO_model_zoo
path: 201_CityscapesSOTA/demo/demo_CityscapesSOTA_onnx.py
identifier: main()

```python
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", type=int, default=0)
    parser.add_argument("--movie", type=str, default=None)
    parser.add_argument(
        "--model",
        type=str,
        default='saved_model_180x320/model_float32.onnx',
    )
    parser.add_argument(
        "--input_size",
        type=str,
        default='180,320',
    )

    args = parser.parse_args()
    model_path = args.model
    input_size = [int(i) for i in args.input_size.split(',')]

    cap_device = args.device
    if args.movie is not None:
        cap_device = args.movie

    # Initialize video capture
    cap = cv.VideoCapture(cap_device)

    # Load model
    onnx_session = onnxruntime.InferenceSession(model_path)

    while True:
        start_time = time.time()

        # Capture read
        ret, frame = cap.read()
        if not ret:
            break
        debug_image = copy.deepcopy(frame)

        # Inference execution
        segmentation_map = run_inference(
            onnx_session,
            input_size,
            frame,
        )

        elapsed_time = time.time() - start_time

        # Draw
        debug_image = draw_debug(
            debug_image,
            elapsed_time,
            segmentation_map,
        )

        key = cv.waitKey(1)
        if key == 27:  # ESC
            break
        cv.imshow('CityscapesSOTA Demo', debug_image)

    cap.release()
    cv.destroyAllWindows()
```

url: https://github.com/PINTO0309/PINTO_model_zoo/blob/2924acda7a7d541d8712efd7cc4fd1c61ef5bddd/201_CityscapesSOTA/demo/demo_CityscapesSOTA_onnx.py#L42-L104
nwo: Sprytile/Sprytile
path: rx/linq/observable/reduce.py
identifier: reduce(self, accumulator, seed=None)
docstring_summary: Applies an accumulator function over an observable sequence, returning the result of the aggregation as a single element in the result sequence.

```python
def reduce(self, accumulator, seed=None):
    """Applies an accumulator function over an observable sequence,
    returning the result of the aggregation as a single element in the
    result sequence. The specified seed value is used as the initial
    accumulator value.

    For aggregation behavior with incremental intermediate results, see
    Observable.scan.

    Example:
    1 - res = source.reduce(lambda acc, x: acc + x)
    2 - res = source.reduce(lambda acc, x: acc + x, 0)

    Keyword arguments:
    :param types.FunctionType accumulator: An accumulator function to be
        invoked on each element.
    :param T seed: Optional initial accumulator value.

    :returns: An observable sequence containing a single element with the
        final accumulator value.
    :rtype: Observable
    """
    if seed is not None:
        return self.scan(accumulator, seed=seed).start_with(seed).last()
    else:
        return self.scan(accumulator).last()
```

url: https://github.com/Sprytile/Sprytile/blob/6b68d0069aef5bfed6ab40d1d5a94a3382b41619/rx/linq/observable/reduce.py#L6-L32
nwo: tensorflow/graphics
path: tensorflow_graphics/projects/points_to_3Dobjects/utils/tf_utils.py
identifier: get_next_sample_dataset(dataset_iter)
docstring_summary: Get next sample.

```python
def get_next_sample_dataset(dataset_iter):
    """Get next sample."""
    try:
        sample = next(dataset_iter)
    except (StopIteration, RuntimeError) as e:
        if "Can't copy Tensor with type" in str(e):
            sample = None
        elif isinstance(e, StopIteration):
            sample = None
        else:
            raise e
    return sample
```

url: https://github.com/tensorflow/graphics/blob/86997957324bfbdd85848daae989b4c02588faa0/tensorflow_graphics/projects/points_to_3Dobjects/utils/tf_utils.py#L115-L126
nwo: sympy/sympy
path: sympy/logic/boolalg.py
identifier: anf_coeffs(truthvalues)
docstring_summary: Convert a list of truth values of some boolean expression to the list of coefficients of the polynomial mod 2 representing the boolean expression in ANF (the "Zhegalkin polynomial").

```python
def anf_coeffs(truthvalues):
    """
    Convert a list of truth values of some boolean expression
    to the list of coefficients of the polynomial mod 2 (exclusive
    disjunction) representing the boolean expression in ANF
    (i.e., the "Zhegalkin polynomial").

    There are `2^n` possible Zhegalkin monomials in `n` variables, since
    each monomial is fully specified by the presence or absence of
    each variable.

    We can enumerate all the monomials. For example, boolean
    function with four variables ``(a, b, c, d)`` can contain
    up to `2^4 = 16` monomials. The 13-th monomial is the
    product ``a & b & d``, because 13 in binary is 1, 1, 0, 1.

    A given monomial's presence or absence in a polynomial corresponds
    to that monomial's coefficient being 1 or 0 respectively.

    Examples
    ========

    >>> from sympy.logic.boolalg import anf_coeffs, bool_monomial, Xor
    >>> from sympy.abc import a, b, c
    >>> truthvalues = [0, 1, 1, 0, 0, 1, 0, 1]
    >>> coeffs = anf_coeffs(truthvalues)
    >>> coeffs
    [0, 1, 1, 0, 0, 0, 1, 0]
    >>> polynomial = Xor(*[
    ...     bool_monomial(k, [a, b, c])
    ...     for k, coeff in enumerate(coeffs) if coeff == 1
    ... ])
    >>> polynomial
    b ^ c ^ (a & b)
    """
    s = '{:b}'.format(len(truthvalues))
    n = len(s) - 1

    if len(truthvalues) != 2**n:
        raise ValueError("The number of truth values must be a power of two, "
                         "got %d" % len(truthvalues))

    coeffs = [[v] for v in truthvalues]

    for i in range(n):
        tmp = []
        for j in range(2 ** (n-i-1)):
            tmp.append(coeffs[2*j] +
                       list(map(lambda x, y: x ^ y, coeffs[2*j], coeffs[2*j+1])))
        coeffs = tmp

    return coeffs[0]
```

url: https://github.com/sympy/sympy/blob/d822fcba181155b85ff2b29fe525adbafb22b448/sympy/logic/boolalg.py#L2543-L2595
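The pairwise-XOR merge above is the "butterfly" (fast Möbius/XOR) transform: each pass halves the number of blocks and doubles their length, and after `n` passes the single remaining block holds the Zhegalkin coefficients. The same algorithm can be checked standalone, without SymPy, using the doctest values from the record:

```python
def anf_coeffs(truthvalues):
    """XOR (fast Möbius) transform: truth table -> Zhegalkin coefficients."""
    n = len(truthvalues).bit_length() - 1
    if len(truthvalues) != 2 ** n:
        raise ValueError("length must be a power of two")
    coeffs = [[v] for v in truthvalues]
    for i in range(n):
        # Merge adjacent blocks: left half unchanged, right half = left XOR right.
        coeffs = [
            coeffs[2 * j] + [x ^ y for x, y in zip(coeffs[2 * j], coeffs[2 * j + 1])]
            for j in range(2 ** (n - i - 1))
        ]
    return coeffs[0]
```

Each output position `k` is then the coefficient of the monomial whose variables are the 1-bits of `k`, exactly as the docstring's `bool_monomial` enumeration describes.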
nwo: LGE-ARC-AdvancedAI/auptimizer
path: src/aup/Proposer/hyperopt/fmin.py
identifier: space_eval(space, hp_assignment)
docstring_summary: Compute a point in a search space from a hyperparameter assignment.

```python
def space_eval(space, hp_assignment):
    """Compute a point in a search space from a hyperparameter assignment.

    Parameters:
    -----------
    space - a pyll graph involving hp nodes (see `pyll_utils`).

    hp_assignment - a dictionary mapping hp node labels to values.
    """
    space = pyll.as_apply(space)
    nodes = pyll.toposort(space)
    memo = {}
    for node in nodes:
        if node.name == 'hyperopt_param':
            label = node.arg['label'].eval()
            if label in hp_assignment:
                memo[node] = hp_assignment[label]
    rval = pyll.rec_eval(space, memo=memo)
    return rval
```

url: https://github.com/LGE-ARC-AdvancedAI/auptimizer/blob/50f6e3b4e0cb9146ca90fd74b9b24ca97ae22617/src/aup/Proposer/hyperopt/fmin.py#L383-L401
nwo: MishaLaskin/curl
path: curl_sac.py
identifier: CURL.encode(self, x, detach=False, ema=False)
docstring_summary: Encoder: z_t = e(x_t)

```python
def encode(self, x, detach=False, ema=False):
    """
    Encoder: z_t = e(x_t)
    :param x: x_t, x y coordinates
    :return: z_t, value in r2
    """
    if ema:
        with torch.no_grad():
            z_out = self.encoder_target(x)
    else:
        z_out = self.encoder(x)

    if detach:
        z_out = z_out.detach()
    return z_out
```

url: https://github.com/MishaLaskin/curl/blob/8416d6e3869e38ca0e46fcbc54a2f784dc09d7fc/curl_sac.py#L201-L215
nwo: sagemath/sage
path: src/sage/arith/misc.py
identifier: random_prime(n, proof=None, lbound=2)
docstring_summary: Return a random prime `p` between ``lbound`` and `n`.

```python
def random_prime(n, proof=None, lbound=2):
    r"""
    Return a random prime `p` between ``lbound`` and `n`.

    The returned prime `p` satisfies ``lbound`` `\leq p \leq n`.

    The returned prime `p` is chosen uniformly at random from the set
    of prime numbers less than or equal to `n`.

    INPUT:

    - ``n`` - an integer >= 2.

    - ``proof`` - bool or None (default: None) If False, the function uses a
      pseudo-primality test, which is much faster for really big numbers but
      does not provide a proof of primality. If None, uses the global default
      (see :mod:`sage.structure.proof.proof`)

    - ``lbound`` - an integer >= 2, lower bound for the chosen primes

    EXAMPLES::

        sage: p = random_prime(100000)
        sage: p.is_prime()
        True
        sage: p <= 100000
        True
        sage: random_prime(2)
        2

    Here we generate a random prime between 100 and 200::

        sage: p = random_prime(200, lbound=100)
        sage: p.is_prime()
        True
        sage: 100 <= p <= 200
        True

    If all we care about is finding a pseudo prime, then we can pass
    in ``proof=False`` ::

        sage: p = random_prime(200, proof=False, lbound=100)
        sage: p.is_pseudoprime()
        True
        sage: 100 <= p <= 200
        True

    TESTS::

        sage: type(random_prime(2))
        <class 'sage.rings.integer.Integer'>
        sage: type(random_prime(100))
        <class 'sage.rings.integer.Integer'>
        sage: random_prime(1, lbound=-2)  # caused Sage hang #10112
        Traceback (most recent call last):
        ...
        ValueError: n must be greater than or equal to 2
        sage: random_prime(126, lbound=114)
        Traceback (most recent call last):
        ...
        ValueError: There are no primes between 114 and 126 (inclusive)

    AUTHORS:

    - Jon Hanke (2006-08-08): with standard Stein cleanup

    - Jonathan Bober (2007-03-17)
    """
    # since we do not want current_randstate to get
    # pulled when you say "from sage.arith.misc import *".
    from sage.structure.proof.proof import get_flag
    proof = get_flag(proof, "arithmetic")
    n = ZZ(n)

    if n < 2:
        raise ValueError("n must be greater than or equal to 2")
    if n < lbound:
        raise ValueError("n must be at least lbound: %s" % (lbound))
    elif n == 2:
        return n
    lbound = max(2, lbound)
    if lbound > 2:
        if lbound == 3 or n <= 2*lbound - 2:
            # check for Betrand's postulate (proved by Chebyshev)
            if lbound < 25 or n <= 6*lbound/5:
                # see J. Nagura, Proc. Japan Acad. 28, (1952). 177-181.
                if lbound < 2010760 or n <= 16598*lbound/16597:
                    # see L. Schoenfeld, Math. Comp. 30 (1976), no. 134, 337-360.
                    if proof:
                        smallest_prime = ZZ(lbound-1).next_prime()
                    else:
                        smallest_prime = ZZ(lbound-1).next_probable_prime()
                    if smallest_prime > n:
                        raise ValueError(
                            "There are no primes between %s and %s (inclusive)" % (lbound, n))

    if proof:
        prime_test = is_prime
    else:
        prime_test = is_pseudoprime
    randint = ZZ.random_element
    while True:
        # In order to ensure that the returned prime is chosen
        # uniformly from the set of primes it is necessary to
        # choose a random number and then test for primality.
        # The method of choosing a random number and then returning
        # the closest prime smaller than it would typically not,
        # for example, return the first of a pair of twin primes.
        p = randint(lbound, n)
        if prime_test(p):
            return p
```

url: https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/arith/misc.py#L1317-L1426
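The key idea in the sampling loop above is rejection sampling: draw a uniform integer and test it, rather than snapping to the nearest prime, so every prime in the interval is equally likely. A standalone sketch of just that idea, using a naive trial-division test and a brute-force emptiness check in place of Sage's arithmetic and analytic bounds (these substitutions are only practical for small `n`):

```python
import random

def is_prime(m):
    """Trial-division primality check; fine for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def random_prime(n, lbound=2, rng=random):
    """Uniformly random prime p with lbound <= p <= n, by rejection sampling."""
    if not any(is_prime(k) for k in range(lbound, n + 1)):
        raise ValueError(
            "There are no primes between %s and %s (inclusive)" % (lbound, n))
    while True:
        # Drawing a number and testing it keeps the distribution uniform over
        # primes; returning the closest prime below a draw would not (it would
        # under-weight the first of each twin-prime pair, as noted above).
        p = rng.randint(lbound, n)
        if is_prime(p):
            return p

random.seed(0)
p = random_prime(200, lbound=100)
```

The Bertrand/Nagura/Schoenfeld bounds in the original exist so the emptiness check is only performed when the interval is narrow enough that a prime is not already guaranteed.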
nwo: ChineseGLUE/ChineseGLUE
path: baselines/models_pytorch/classifier_pytorch/transformers/tokenization_bert.py
identifier: BertTokenizer.convert_tokens_to_string(self, tokens)
docstring_summary: Converts a sequence of tokens (string) in a single string.

```python
def convert_tokens_to_string(self, tokens):
    """ Converts a sequence of tokens (string) in a single string. """
    out_string = ' '.join(tokens).replace(' ##', '').strip()
    return out_string
```

url: https://github.com/ChineseGLUE/ChineseGLUE/blob/1591b85cf5427c2ff60f718d359ecb71d2b44879/baselines/models_pytorch/classifier_pytorch/transformers/tokenization_bert.py#L191-L194
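The one-liner above undoes WordPiece segmentation: pieces prefixed with `##` are continuations of the previous token, so joining on spaces and deleting every `" ##"` glues them back together. A standalone sketch:

```python
def convert_tokens_to_string(tokens):
    """Join WordPiece tokens, merging '##' continuation pieces back on."""
    # 'un ##aff ##able' -> 'unaffable': each ' ##' marks a glued continuation.
    return ' '.join(tokens).replace(' ##', '').strip()

text = convert_tokens_to_string(['un', '##aff', '##able', 'weather'])
```

Note the string trick assumes no real token ever begins with a literal `##`; the vocabulary guarantees that by construction.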
nwo: ctypesgen/ctypesgen
path: ctypesgen/parser/cgrammar.py
identifier: p_member_declarator_list(p)
docstring_summary: PLY grammar rule for `member_declarator_list`.

```python
def p_member_declarator_list(p):
    """ member_declarator_list : member_declarator
                               | member_declarator_list COMMA member_declarator
    """
    if len(p) == 2:
        p[0] = (p[1],)
    else:
        p[0] = p[1] + (p[3],)
```

url: https://github.com/ctypesgen/ctypesgen/blob/cef9a7ac58a50d0ae4f260abdeb75e0a71398187/ctypesgen/parser/cgrammar.py#L839-L846
nwo: zaxlct/imooc-django
path: extra_apps/xadmin/plugins/actions.py
identifier: ActionPlugin.get_list_display(self, list_display)

```python
def get_list_display(self, list_display):
    if self.actions:
        list_display.insert(0, 'action_checkbox')
        self.admin_view.action_checkbox = action_checkbox
    return list_display
```

url: https://github.com/zaxlct/imooc-django/blob/daf1ced745d3d21989e8191b658c293a511b37fd/extra_apps/xadmin/plugins/actions.py#L144-L148
nwo: realpython/book2-exercises
path: web2py-rest/gluon/contrib/aes.py
identifier: AES.mix_columns_inv(self, block)
docstring_summary: Similar to mix_columns above, but performed in inverse for decryption.

```python
def mix_columns_inv(self, block):
    """Similar to mix_columns above, but performed in inverse for decryption."""
    # Cache global multiplication tables (see below)
    mul_9 = gf_mul_by_9
    mul_11 = gf_mul_by_11
    mul_13 = gf_mul_by_13
    mul_14 = gf_mul_by_14

    # Since we're dealing with a transposed matrix, columns are already
    # sequential
    for i in xrange(4):
        col = i * 4

        v0, v1, v2, v3 = (block[col], block[col + 1], block[col + 2],
                          block[col + 3])
        #v0, v1, v2, v3 = block[col:col+4]

        block[col]     = mul_14[v0] ^ mul_9[v3] ^ mul_13[v2] ^ mul_11[v1]
        block[col + 1] = mul_14[v1] ^ mul_9[v0] ^ mul_13[v3] ^ mul_11[v2]
        block[col + 2] = mul_14[v2] ^ mul_9[v1] ^ mul_13[v0] ^ mul_11[v3]
        block[col + 3] = mul_14[v3] ^ mul_9[v2] ^ mul_13[v1] ^ mul_11[v0]
```

url: https://github.com/realpython/book2-exercises/blob/cde325eac8e6d8cff2316601c2e5b36bb46af7d0/web2py-rest/gluon/contrib/aes.py#L236-L257
androguard/androguard | 8d091cbb309c0c50bf239f805cc1e0931b8dcddc | androguard/decompiler/dad/util.py | python | create_png | (cls_name, meth_name, graph, dir_name='graphs2') | Creates a PNG from a given :class:`~androguard.decompiler.dad.graph.Graph`.
:param str cls_name: name of the class
:param str meth_name: name of the method
:param androguard.decompiler.dad.graph.Graph graph:
:param str dir_name: output directory | Creates a PNG from a given :class:`~androguard.decompiler.dad.graph.Graph`. | [
"Creates",
"a",
"PNG",
"from",
"a",
"given",
":",
"class",
":",
"~androguard",
".",
"decompiler",
".",
"dad",
".",
"graph",
".",
"Graph",
"."
] | def create_png(cls_name, meth_name, graph, dir_name='graphs2'):
"""
Creates a PNG from a given :class:`~androguard.decompiler.dad.graph.Graph`.
:param str cls_name: name of the class
:param str meth_name: name of the method
:param androguard.decompiler.dad.graph.Graph graph:
:param str dir_name: output directory
"""
m_name = ''.join(x for x in meth_name if x.isalnum())
name = ''.join((cls_name.split('/')[-1][:-1], '#', m_name))
graph.draw(name, dir_name) | [
"def",
"create_png",
"(",
"cls_name",
",",
"meth_name",
",",
"graph",
",",
"dir_name",
"=",
"'graphs2'",
")",
":",
"m_name",
"=",
"''",
".",
"join",
"(",
"x",
"for",
"x",
"in",
"meth_name",
"if",
"x",
".",
"isalnum",
"(",
")",
")",
"name",
"=",
"''... | https://github.com/androguard/androguard/blob/8d091cbb309c0c50bf239f805cc1e0931b8dcddc/androguard/decompiler/dad/util.py#L202-L213 | ||
DIYer22/boxx | d271bc375a33e01e616a0f74ce028e6d77d1820e | boxx/tool/toolLog.py | python | getNameFromCodeObj | (code, pretty=True) | return name | [] | def getNameFromCodeObj(code, pretty=True):
name = code.co_name
filee = code.co_filename
if pretty:
if name == '<module>':
if filee.startswith('<ipython-input-'):
name = 'ipython-input'
else:
name = '%s'%os.path.basename(filee)
name = '\x1b[36m%s\x1b[0m'%name
if name == '<lambda>':
return 'lambda'
return name | [
"def",
"getNameFromCodeObj",
"(",
"code",
",",
"pretty",
"=",
"True",
")",
":",
"name",
"=",
"code",
".",
"co_name",
"filee",
"=",
"code",
".",
"co_filename",
"if",
"pretty",
":",
"if",
"name",
"==",
"'<module>'",
":",
"if",
"filee",
".",
"startswith",
... | https://github.com/DIYer22/boxx/blob/d271bc375a33e01e616a0f74ce028e6d77d1820e/boxx/tool/toolLog.py#L1016-L1028 | |||
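The boxx helper above derives a display name from a code object, wrapping module names in an ANSI cyan escape. A condensed Python 3 rendering of the same logic (`name_from_code` is my own name for it):

```python
import os

def name_from_code(code, pretty=True):
    # Derive a human-friendly name from a code object, as in the record:
    # modules become their (colored) file basename, lambdas become "lambda".
    name = code.co_name
    if pretty and name == '<module>':
        filename = code.co_filename
        if filename.startswith('<ipython-input-'):
            name = 'ipython-input'
        else:
            name = os.path.basename(filename)
        name = '\x1b[36m%s\x1b[0m' % name  # cyan, matching the original escape
    if name == '<lambda>':
        return 'lambda'
    return name
```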
buke/GreenOdoo | 3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df | runtime/python/lib/python2.7/distutils/core.py | python | run_setup | (script_name, script_args=None, stop_after="run") | return _setup_distribution | Run a setup script in a somewhat controlled environment, and
return the Distribution instance that drives things. This is useful
if you need to find out the distribution meta-data (passed as
keyword args from 'script' to 'setup()', or the contents of the
config files or command-line.
'script_name' is a file that will be run with 'execfile()';
'sys.argv[0]' will be replaced with 'script' for the duration of the
call. 'script_args' is a list of strings; if supplied,
'sys.argv[1:]' will be replaced by 'script_args' for the duration of
the call.
'stop_after' tells 'setup()' when to stop processing; possible
values:
init
stop after the Distribution instance has been created and
populated with the keyword arguments to 'setup()'
config
stop after config files have been parsed (and their data
stored in the Distribution instance)
commandline
stop after the command-line ('sys.argv[1:]' or 'script_args')
have been parsed (and the data stored in the Distribution)
run [default]
stop after all commands have been run (the same as if 'setup()'
had been called in the usual way
Returns the Distribution instance, which provides all information
used to drive the Distutils. | Run a setup script in a somewhat controlled environment, and
return the Distribution instance that drives things. This is useful
if you need to find out the distribution meta-data (passed as
keyword args from 'script' to 'setup()', or the contents of the
config files or command-line. | [
"Run",
"a",
"setup",
"script",
"in",
"a",
"somewhat",
"controlled",
"environment",
"and",
"return",
"the",
"Distribution",
"instance",
"that",
"drives",
"things",
".",
"This",
"is",
"useful",
"if",
"you",
"need",
"to",
"find",
"out",
"the",
"distribution",
"... | def run_setup(script_name, script_args=None, stop_after="run"):
"""Run a setup script in a somewhat controlled environment, and
return the Distribution instance that drives things. This is useful
if you need to find out the distribution meta-data (passed as
keyword args from 'script' to 'setup()', or the contents of the
config files or command-line.
'script_name' is a file that will be run with 'execfile()';
'sys.argv[0]' will be replaced with 'script' for the duration of the
call. 'script_args' is a list of strings; if supplied,
'sys.argv[1:]' will be replaced by 'script_args' for the duration of
the call.
'stop_after' tells 'setup()' when to stop processing; possible
values:
init
stop after the Distribution instance has been created and
populated with the keyword arguments to 'setup()'
config
stop after config files have been parsed (and their data
stored in the Distribution instance)
commandline
stop after the command-line ('sys.argv[1:]' or 'script_args')
have been parsed (and the data stored in the Distribution)
run [default]
stop after all commands have been run (the same as if 'setup()'
had been called in the usual way
Returns the Distribution instance, which provides all information
used to drive the Distutils.
"""
if stop_after not in ('init', 'config', 'commandline', 'run'):
raise ValueError, "invalid value for 'stop_after': %r" % (stop_after,)
global _setup_stop_after, _setup_distribution
_setup_stop_after = stop_after
save_argv = sys.argv
g = {'__file__': script_name}
l = {}
try:
try:
sys.argv[0] = script_name
if script_args is not None:
sys.argv[1:] = script_args
f = open(script_name)
try:
exec f.read() in g, l
finally:
f.close()
finally:
sys.argv = save_argv
_setup_stop_after = None
except SystemExit:
# Hmm, should we do something if exiting with a non-zero code
# (ie. error)?
pass
except:
raise
if _setup_distribution is None:
raise RuntimeError, \
("'distutils.core.setup()' was never called -- "
"perhaps '%s' is not a Distutils setup script?") % \
script_name
# I wonder if the setup script's namespace -- g and l -- would be of
# any interest to callers?
return _setup_distribution | [
"def",
"run_setup",
"(",
"script_name",
",",
"script_args",
"=",
"None",
",",
"stop_after",
"=",
"\"run\"",
")",
":",
"if",
"stop_after",
"not",
"in",
"(",
"'init'",
",",
"'config'",
",",
"'commandline'",
",",
"'run'",
")",
":",
"raise",
"ValueError",
",",... | https://github.com/buke/GreenOdoo/blob/3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df/runtime/python/lib/python2.7/distutils/core.py#L174-L242 | |
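The core trick in `run_setup` above is executing a script with a patched `sys.argv` inside a controlled namespace, restoring global state afterwards. A stripped-down, distutils-free sketch of that pattern (`run_script` is a hypothetical name; unlike `run_setup` it simply returns the script's namespace rather than a Distribution):

```python
import sys

def run_script(script_path, script_args=None):
    # Execute a script in a fresh namespace with a temporary sys.argv,
    # restoring the real argv even if the script raises.
    namespace = {'__file__': script_path, '__name__': '__main__'}
    saved_argv = sys.argv
    try:
        sys.argv = [script_path] + list(script_args or [])
        with open(script_path) as fh:
            code = compile(fh.read(), script_path, 'exec')
        exec(code, namespace)
    finally:
        sys.argv = saved_argv
    return namespace
```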
smart-mobile-software/gitstack | d9fee8f414f202143eb6e620529e8e5539a2af56 | python/Lib/site-packages/django/core/mail/message.py | python | EmailMessage.attach_file | (self, path, mimetype=None) | Attaches a file from the filesystem. | Attaches a file from the filesystem. | [
"Attaches",
"a",
"file",
"from",
"the",
"filesystem",
"."
] | def attach_file(self, path, mimetype=None):
"""Attaches a file from the filesystem."""
filename = os.path.basename(path)
content = open(path, 'rb').read()
self.attach(filename, content, mimetype) | [
"def",
"attach_file",
"(",
"self",
",",
"path",
",",
"mimetype",
"=",
"None",
")",
":",
"filename",
"=",
"os",
".",
"path",
".",
"basename",
"(",
"path",
")",
"content",
"=",
"open",
"(",
"path",
",",
"'rb'",
")",
".",
"read",
"(",
")",
"self",
"... | https://github.com/smart-mobile-software/gitstack/blob/d9fee8f414f202143eb6e620529e8e5539a2af56/python/Lib/site-packages/django/core/mail/message.py#L265-L269 | ||
googleads/google-ads-python | 2a1d6062221f6aad1992a6bcca0e7e4a93d2db86 | google/ads/googleads/v9/services/services/campaign_service/transports/grpc.py | python | CampaignServiceGrpcTransport.get_campaign | (
self,
) | return self._stubs["get_campaign"] | r"""Return a callable for the get campaign method over gRPC.
Returns the requested campaign in full detail.
List of thrown errors: `AuthenticationError <>`__
`AuthorizationError <>`__ `HeaderError <>`__
`InternalError <>`__ `QuotaError <>`__ `RequestError <>`__
Returns:
Callable[[~.GetCampaignRequest],
~.Campaign]:
A function that, when called, will call the underlying RPC
on the server. | r"""Return a callable for the get campaign method over gRPC. | [
"r",
"Return",
"a",
"callable",
"for",
"the",
"get",
"campaign",
"method",
"over",
"gRPC",
"."
] | def get_campaign(
self,
) -> Callable[[campaign_service.GetCampaignRequest], campaign.Campaign]:
r"""Return a callable for the get campaign method over gRPC.
Returns the requested campaign in full detail.
List of thrown errors: `AuthenticationError <>`__
`AuthorizationError <>`__ `HeaderError <>`__
`InternalError <>`__ `QuotaError <>`__ `RequestError <>`__
Returns:
Callable[[~.GetCampaignRequest],
~.Campaign]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_campaign" not in self._stubs:
self._stubs["get_campaign"] = self.grpc_channel.unary_unary(
"/google.ads.googleads.v9.services.CampaignService/GetCampaign",
request_serializer=campaign_service.GetCampaignRequest.serialize,
response_deserializer=campaign.Campaign.deserialize,
)
return self._stubs["get_campaign"] | [
"def",
"get_campaign",
"(",
"self",
",",
")",
"->",
"Callable",
"[",
"[",
"campaign_service",
".",
"GetCampaignRequest",
"]",
",",
"campaign",
".",
"Campaign",
"]",
":",
"# Generate a \"stub function\" on-the-fly which will actually make",
"# the request.",
"# gRPC handle... | https://github.com/googleads/google-ads-python/blob/2a1d6062221f6aad1992a6bcca0e7e4a93d2db86/google/ads/googleads/v9/services/services/campaign_service/transports/grpc.py#L216-L243 | |
mdiazcl/fuzzbunch-debian | 2b76c2249ade83a389ae3badb12a1bd09901fd2c | windows/Resources/Python/Core/Lib/lib-tk/tkFont.py | python | Font.actual | (self, option=None) | Return actual font attributes | Return actual font attributes | [
"Return",
"actual",
"font",
"attributes"
] | def actual(self, option=None):
"""Return actual font attributes"""
if option:
return self._call('font', 'actual', self.name, '-' + option)
else:
return self._mkdict(self._split(self._call('font', 'actual', self.name))) | [
"def",
"actual",
"(",
"self",
",",
"option",
"=",
"None",
")",
":",
"if",
"option",
":",
"return",
"self",
".",
"_call",
"(",
"'font'",
",",
"'actual'",
",",
"self",
".",
"name",
",",
"'-'",
"+",
"option",
")",
"else",
":",
"return",
"self",
".",
... | https://github.com/mdiazcl/fuzzbunch-debian/blob/2b76c2249ade83a389ae3badb12a1bd09901fd2c/windows/Resources/Python/Core/Lib/lib-tk/tkFont.py#L111-L116 | ||
ricequant/rqalpha-mod-ctp | bfd40801f9a182226a911cac74660f62993eb6db | rqalpha_mod_ctp/ctp/pyctp/linux64_36/__init__.py | python | TraderApi.ReqOrderAction | (self, pInputOrderAction, nRequestID) | return 0 | 报单操作请求 | 报单操作请求 | [
"报单操作请求"
] | def ReqOrderAction(self, pInputOrderAction, nRequestID):
"""报单操作请求"""
return 0 | [
"def",
"ReqOrderAction",
"(",
"self",
",",
"pInputOrderAction",
",",
"nRequestID",
")",
":",
"return",
"0"
] | https://github.com/ricequant/rqalpha-mod-ctp/blob/bfd40801f9a182226a911cac74660f62993eb6db/rqalpha_mod_ctp/ctp/pyctp/linux64_36/__init__.py#L265-L267 | |
JacquesLucke/animation_nodes | b1e3ace8dcb0a771fd882fc3ac4e490b009fa0d1 | animation_nodes/nodes/mesh/get_linked_vertices.py | python | GetLinkedVerticesNode.create | (self) | [] | def create(self):
self.newInput("Mesh", "Mesh", "mesh")
self.newInput("Integer", "Vextex Index", "vertexIndex")
self.newOutput("Integer List", "Vertices", "vertexIndices")
self.newOutput("Integer List", "Edges", "edgeIndices")
self.newOutput("Integer", "Amount", "amount") | [
"def",
"create",
"(",
"self",
")",
":",
"self",
".",
"newInput",
"(",
"\"Mesh\"",
",",
"\"Mesh\"",
",",
"\"mesh\"",
")",
"self",
".",
"newInput",
"(",
"\"Integer\"",
",",
"\"Vextex Index\"",
",",
"\"vertexIndex\"",
")",
"self",
".",
"newOutput",
"(",
"\"In... | https://github.com/JacquesLucke/animation_nodes/blob/b1e3ace8dcb0a771fd882fc3ac4e490b009fa0d1/animation_nodes/nodes/mesh/get_linked_vertices.py#L10-L16 | ||||
tdamdouni/Pythonista | 3e082d53b6b9b501a3c8cf3251a8ad4c8be9c2ad | _2016/mysql-connector-pythonista/conversion.py | python | MySQLConverter._DATETIME_to_python | (self, v, dsc=None) | return pv | Returns DATETIME column type as datetime.datetime type. | Returns DATETIME column type as datetime.datetime type. | [
"Returns",
"DATETIME",
"column",
"type",
"as",
"datetime",
".",
"datetime",
"type",
"."
] | def _DATETIME_to_python(self, v, dsc=None):
"""
Returns DATETIME column type as datetime.datetime type.
"""
pv = None
try:
(sd, st) = v.split(' ')
if len(st) > 8:
(hms, fs) = st.split('.')
fs = int(fs.ljust(6, '0'))
else:
hms = st
fs = 0
dt = [ int(v) for v in sd.split('-') ] +\
[ int(v) for v in hms.split(':') ] + [fs,]
pv = datetime.datetime(*dt)
except ValueError:
pv = None
return pv | [
"def",
"_DATETIME_to_python",
"(",
"self",
",",
"v",
",",
"dsc",
"=",
"None",
")",
":",
"pv",
"=",
"None",
"try",
":",
"(",
"sd",
",",
"st",
")",
"=",
"v",
".",
"split",
"(",
"' '",
")",
"if",
"len",
"(",
"st",
")",
">",
"8",
":",
"(",
"hms... | https://github.com/tdamdouni/Pythonista/blob/3e082d53b6b9b501a3c8cf3251a8ad4c8be9c2ad/_2016/mysql-connector-pythonista/conversion.py#L344-L363 |