Dataset columns:

nwo: stringlengths (5 to 106)
sha: stringlengths (40 to 40)
path: stringlengths (4 to 174)
language: stringclasses (1 value)
identifier: stringlengths (1 to 140)
parameters: stringlengths (0 to 87.7k)
argument_list: stringclasses (1 value)
return_statement: stringlengths (0 to 426k)
docstring: stringlengths (0 to 64.3k)
docstring_summary: stringlengths (0 to 26.3k)
docstring_tokens: list
function: stringlengths (18 to 4.83M)
function_tokens: list
url: stringlengths (83 to 304)
oracle/oci-python-sdk
3c1604e4e212008fb6718e2f68cdb5ef71fd5793
src/oci/vulnerability_scanning/vulnerability_scanning_client.py
python
VulnerabilityScanningClient.list_host_vulnerabilities
(self, compartment_id, **kwargs)
Retrieves a list of HostVulnerabilitySummary objects in a compartment. You can filter and sort the vulnerabilities by problem severity and time. A host vulnerability describes a security issue that was detected in scans of one or more compute instances.

:param str compartment_id: (required) The ID of the compartment in which to list resources.
:param int limit: (optional) The maximum number of items to return.
:param str page: (optional) The page token representing the page at which to start retrieving results. This is usually retrieved from a previous list call.
:param str severity: (optional) A filter to return only resources that have a severity that matches the given severity. Allowed values are: "NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL"
:param str name: (optional) A filter to return only resources that match the entire name given.
:param str cve_reference: (optional) Parameter to filter by CVE reference number for vulnerabilities
:param str vulnerability_type: (optional) The field to filter vulnerabilities based on its type. Only one value can be provided. Allowed values are: "CVE", "PROBLEM"
:param str sort_order: (optional) The sort order to use, either 'ASC' or 'DESC'. Allowed values are: "ASC", "DESC"
:param str sort_by: (optional) The field to sort by. Only one sort order may be provided. Default order for 'name' is ascending. Default order for other values is descending. If no value is specified, 'name' is the default. Allowed values are: "name", "severity", "impactedHosts", "firstDetected", "lastDetected"
:param str opc_request_id: (optional) The client request ID for tracing.
:param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.vulnerability_scanning.models.HostVulnerabilitySummaryCollection`
:rtype: :class:`~oci.response.Response`
:example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/vulnerabilityscanning/list_host_vulnerabilities.py.html>`__ to see an example of how to use list_host_vulnerabilities API.
Retrieves a list of HostVulnerabilitySummary objects in a compartment. You can filter and sort the vulnerabilities by problem severity and time. A host vulnerability describes a security issue that was detected in scans of one or more compute instances.
[ "Retrieves", "a", "list", "of", "HostVulnerabilitySummary", "objects", "in", "a", "compartment", ".", "You", "can", "filter", "and", "sort", "the", "vulnerabilities", "by", "problem", "severity", "and", "time", ".", "A", "host", "vulnerability", "describes", "a"...
def list_host_vulnerabilities(self, compartment_id, **kwargs):
    """
    Retrieves a list of HostVulnerabilitySummary objects in a compartment. You can
    filter and sort the vulnerabilities by problem severity and time. A host
    vulnerability describes a security issue that was detected in scans of one or
    more compute instances.

    :param str compartment_id: (required)
        The ID of the compartment in which to list resources.

    :param int limit: (optional)
        The maximum number of items to return.

    :param str page: (optional)
        The page token representing the page at which to start retrieving results.
        This is usually retrieved from a previous list call.

    :param str severity: (optional)
        A filter to return only resources that have a severity that matches the
        given severity. Allowed values are: "NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL"

    :param str name: (optional)
        A filter to return only resources that match the entire name given.

    :param str cve_reference: (optional)
        Parameter to filter by CVE reference number for vulnerabilities

    :param str vulnerability_type: (optional)
        The field to filter vulnerabilities based on its type. Only one value can
        be provided. Allowed values are: "CVE", "PROBLEM"

    :param str sort_order: (optional)
        The sort order to use, either 'ASC' or 'DESC'.
        Allowed values are: "ASC", "DESC"

    :param str sort_by: (optional)
        The field to sort by. Only one sort order may be provided. Default order
        for 'name' is ascending. Default order for other values is descending. If
        no value is specified, 'name' is the default. Allowed values are: "name",
        "severity", "impactedHosts", "firstDetected", "lastDetected"

    :param str opc_request_id: (optional)
        The client request ID for tracing.

    :param obj retry_strategy: (optional)
        A retry strategy to apply to this specific operation/call. This will
        override any retry strategy set at the client level. This should be one of
        the strategies available in the :py:mod:`~oci.retry` module. This operation
        will not retry by default; users can also use the convenient
        :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable
        retries for it. The specifics of the default retry strategy are described
        `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
        To have this operation explicitly not perform any retries, pass an instance
        of :py:class:`~oci.retry.NoneRetryStrategy`.

    :return: A :class:`~oci.response.Response` object with data of type
        :class:`~oci.vulnerability_scanning.models.HostVulnerabilitySummaryCollection`
    :rtype: :class:`~oci.response.Response`

    :example:
        Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/vulnerabilityscanning/list_host_vulnerabilities.py.html>`__
        to see an example of how to use list_host_vulnerabilities API.
    """
    resource_path = "/hostVulnerabilities"
    method = "GET"

    # Don't accept unknown kwargs
    expected_kwargs = [
        "retry_strategy",
        "limit",
        "page",
        "severity",
        "name",
        "cve_reference",
        "vulnerability_type",
        "sort_order",
        "sort_by",
        "opc_request_id"
    ]
    extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
    if extra_kwargs:
        raise ValueError(
            "list_host_vulnerabilities got unknown kwargs: {!r}".format(extra_kwargs))

    if 'severity' in kwargs:
        severity_allowed_values = ["NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL"]
        if kwargs['severity'] not in severity_allowed_values:
            raise ValueError(
                "Invalid value for `severity`, must be one of {0}".format(severity_allowed_values)
            )

    if 'vulnerability_type' in kwargs:
        vulnerability_type_allowed_values = ["CVE", "PROBLEM"]
        if kwargs['vulnerability_type'] not in vulnerability_type_allowed_values:
            raise ValueError(
                "Invalid value for `vulnerability_type`, must be one of {0}".format(vulnerability_type_allowed_values)
            )

    if 'sort_order' in kwargs:
        sort_order_allowed_values = ["ASC", "DESC"]
        if kwargs['sort_order'] not in sort_order_allowed_values:
            raise ValueError(
                "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
            )

    if 'sort_by' in kwargs:
        sort_by_allowed_values = ["name", "severity", "impactedHosts", "firstDetected", "lastDetected"]
        if kwargs['sort_by'] not in sort_by_allowed_values:
            raise ValueError(
                "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
            )

    query_params = {
        "compartmentId": compartment_id,
        "limit": kwargs.get("limit", missing),
        "page": kwargs.get("page", missing),
        "severity": kwargs.get("severity", missing),
        "name": kwargs.get("name", missing),
        "cveReference": kwargs.get("cve_reference", missing),
        "vulnerabilityType": kwargs.get("vulnerability_type", missing),
        "sortOrder": kwargs.get("sort_order", missing),
        "sortBy": kwargs.get("sort_by", missing)
    }
    query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

    header_params = {
        "accept": "application/json",
        "content-type": "application/json",
        "opc-request-id": kwargs.get("opc_request_id", missing)
    }
    header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

    retry_strategy = self.base_client.get_preferred_retry_strategy(
        operation_retry_strategy=kwargs.get('retry_strategy'),
        client_retry_strategy=self.retry_strategy
    )

    if retry_strategy:
        if not isinstance(retry_strategy, retry.NoneRetryStrategy):
            self.base_client.add_opc_client_retries_header(header_params)
            retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
        return retry_strategy.make_retrying_call(
            self.base_client.call_api,
            resource_path=resource_path,
            method=method,
            query_params=query_params,
            header_params=header_params,
            response_type="HostVulnerabilitySummaryCollection")
    else:
        return self.base_client.call_api(
            resource_path=resource_path,
            method=method,
            query_params=query_params,
            header_params=header_params,
            response_type="HostVulnerabilitySummaryCollection")
[ "def", "list_host_vulnerabilities", "(", "self", ",", "compartment_id", ",", "*", "*", "kwargs", ")", ":", "resource_path", "=", "\"/hostVulnerabilities\"", "method", "=", "\"GET\"", "# Don't accept unknown kwargs", "expected_kwargs", "=", "[", "\"retry_strategy\"", ","...
https://github.com/oracle/oci-python-sdk/blob/3c1604e4e212008fb6718e2f68cdb5ef71fd5793/src/oci/vulnerability_scanning/vulnerability_scanning_client.py#L4391-L4539
karanchahal/distiller
a17ec06cbeafcdd2aea19d7c7663033c951392f5
models/vision/mnasnet.py
python
MNASNet._initialize_weights
(self)
[]
def _initialize_weights(self):
    for m in self.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, nn.BatchNorm2d):
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)
        elif isinstance(m, nn.Linear):
            nn.init.kaiming_uniform_(m.weight, mode="fan_out", nonlinearity="sigmoid")
            nn.init.zeros_(m.bias)
[ "def", "_initialize_weights", "(", "self", ")", ":", "for", "m", "in", "self", ".", "modules", "(", ")", ":", "if", "isinstance", "(", "m", ",", "nn", ".", "Conv2d", ")", ":", "nn", ".", "init", ".", "kaiming_normal_", "(", "m", ".", "weight", ",",...
https://github.com/karanchahal/distiller/blob/a17ec06cbeafcdd2aea19d7c7663033c951392f5/models/vision/mnasnet.py#L140-L153
rogerbinns/apsw
00a7eb5138f2f00976f0c76cb51906afecbe298f
tools/example2rst.py
python
rstout
(filename)
return op
[]
def rstout(filename):
    op = []
    op.extend("""
.. Automatically generated by example2rst.py. Edit that file
   not this one!

Example
=======

This code demonstrates usage of the APSW api. It gives you a good
overview of all the things that can be done. Also included is output
so you can see what gets printed when you run the code.

.. code-block:: python
""".split("\n"))

    counter = 0
    prefix = " "

    for line in open(filename, "rt"):
        line = line.rstrip()
        if "@@CAPTURE" in line:
            continue
        if "@@ENDCAPTURE" not in line:
            if line.startswith("#") and "@@" in line:
                p = line.index("@@")
                name = line[p + 2:].strip()
                line = line[:p].rstrip()
                # look backwards for first non-comment line
                for i in range(-1, -99, -1):
                    if not op[i].strip().startswith("#"):
                        op.insert(i, "")
                        op.insert(i, ".. _" + name + ":")
                        op.insert(i, "")
                        op.insert(i, ".. code-block:: python")
                        op.insert(i, "")
                        break
            op.append(prefix + line)
            continue

        op.append("")
        op.append(".. code-block:: text")
        op.append("")
        for line in open(".tmpop-%s-%d" % (filename, counter), "rt"):
            line = line.rstrip()
            op.append(" | " + re.sub("u'([^']*)'", r"'\1'", line))
        op.append("")
        op.append(".. code-block:: python")
        op.append("")
        os.remove(".tmpop-%s-%d" % (filename, counter))
        counter += 1

    ### Peephole optimizations
    while True:
        b4 = op
        # get rid of double blank lines
        op2 = []
        for i in range(len(op)):
            if i + 1 < len(op) and len(op[i].strip()) == 0 and len(op[i + 1].strip()) == 0:
                continue
            if len(op[i].strip()) == 0:
                # make whitespace only lines be zero length
                op2.append("")
            else:
                op2.append(op[i])
        op = op2
        # if there is a code block followed by a label then drop the code block
        op2 = []
        for i in range(len(op)):
            if i + 2 < len(op) and op[i].startswith(".. code-block::") and op[i + 2].startswith(".. _"):
                continue
            op2.append(op[i])
        op = op2
        if op == b4:
            break

    return op
[ "def", "rstout", "(", "filename", ")", ":", "op", "=", "[", "]", "op", ".", "extend", "(", "\"\"\"\n.. Automatically generated by example2rst.py. Edit that file\n not this one!\n\nExample\n=======\n\nThis code demonstrates usage of the APSW api. It gives you a good\noverview of all t...
https://github.com/rogerbinns/apsw/blob/00a7eb5138f2f00976f0c76cb51906afecbe298f/tools/example2rst.py#L50-L124
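The peephole pass in rstout above runs cleanup transforms repeatedly until the output stops changing. A minimal standalone sketch of that fixed-point idiom, with just the double-blank-line pass (function names here are illustrative, not from the APSW tool):

```python
def collapse_blank_runs(lines):
    # Drop a blank line when the next line is also blank,
    # and normalize whitespace-only lines to "".
    out = []
    for i, line in enumerate(lines):
        if i + 1 < len(lines) and not line.strip() and not lines[i + 1].strip():
            continue
        out.append("" if not line.strip() else line)
    return out

def peephole(lines):
    # Iterate to a fixed point, like the `while True: ... if op == b4: break`
    # loop in rstout().
    while True:
        before = lines
        lines = collapse_blank_runs(lines)
        if lines == before:
            break
    return lines

print(peephole(["a", "", "", "", "b", "   ", "c"]))  # → ['a', '', 'b', '', 'c']
```

Running each pass to a fixed point lets later passes see the output of earlier ones, so the passes stay simple and order-independent.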
sqlalchemy/sqlalchemy
eb716884a4abcabae84a6aaba105568e925b7d27
lib/sqlalchemy/dialects/mysql/base.py
python
MySQLTypeCompiler._extend_string
(self, type_, defaults, spec)
return " ".join( [c for c in (spec, charset, collation) if c is not None] )
Extend a string-type declaration with standard SQL CHARACTER SET / COLLATE annotations and MySQL specific extensions.
Extend a string-type declaration with standard SQL CHARACTER SET / COLLATE annotations and MySQL specific extensions.
[ "Extend", "a", "string", "-", "type", "declaration", "with", "standard", "SQL", "CHARACTER", "SET", "/", "COLLATE", "annotations", "and", "MySQL", "specific", "extensions", "." ]
def _extend_string(self, type_, defaults, spec):
    """Extend a string-type declaration with standard SQL
    CHARACTER SET / COLLATE annotations and MySQL specific
    extensions.

    """

    def attr(name):
        return getattr(type_, name, defaults.get(name))

    if attr("charset"):
        charset = "CHARACTER SET %s" % attr("charset")
    elif attr("ascii"):
        charset = "ASCII"
    elif attr("unicode"):
        charset = "UNICODE"
    else:
        charset = None

    if attr("collation"):
        collation = "COLLATE %s" % type_.collation
    elif attr("binary"):
        collation = "BINARY"
    else:
        collation = None

    if attr("national"):
        # NATIONAL (aka NCHAR/NVARCHAR) trumps charsets.
        return " ".join(
            [c for c in ("NATIONAL", spec, collation) if c is not None]
        )
    return " ".join(
        [c for c in (spec, charset, collation) if c is not None]
    )
[ "def", "_extend_string", "(", "self", ",", "type_", ",", "defaults", ",", "spec", ")", ":", "def", "attr", "(", "name", ")", ":", "return", "getattr", "(", "type_", ",", "name", ",", "defaults", ".", "get", "(", "name", ")", ")", "if", "attr", "(",...
https://github.com/sqlalchemy/sqlalchemy/blob/eb716884a4abcabae84a6aaba105568e925b7d27/lib/sqlalchemy/dialects/mysql/base.py#L1998-L2030
micropython/micropython-lib
cdd260f0792d04a1ded99171b4c7a2582b7856b4
python-stdlib/base64/base64.py
python
b64encode
(s, altchars=None)
return encoded
Encode a byte string using Base64. s is the byte string to encode. Optional altchars must be a byte string of length 2 which specifies an alternative alphabet for the '+' and '/' characters. This allows an application to e.g. generate url or filesystem safe Base64 strings. The encoded byte string is returned.
Encode a byte string using Base64.
[ "Encode", "a", "byte", "string", "using", "Base64", "." ]
def b64encode(s, altchars=None):
    """Encode a byte string using Base64.

    s is the byte string to encode. Optional altchars must be a byte
    string of length 2 which specifies an alternative alphabet for the
    '+' and '/' characters. This allows an application to e.g. generate
    url or filesystem safe Base64 strings.

    The encoded byte string is returned.
    """
    if not isinstance(s, bytes_types):
        raise TypeError("expected bytes, not %s" % s.__class__.__name__)
    # Strip off the trailing newline
    encoded = binascii.b2a_base64(s)[:-1]
    if altchars is not None:
        if not isinstance(altchars, bytes_types):
            raise TypeError("expected bytes, not %s" % altchars.__class__.__name__)
        assert len(altchars) == 2, repr(altchars)
        return encoded.translate(bytes.maketrans(b"+/", altchars))
    return encoded
[ "def", "b64encode", "(", "s", ",", "altchars", "=", "None", ")", ":", "if", "not", "isinstance", "(", "s", ",", "bytes_types", ")", ":", "raise", "TypeError", "(", "\"expected bytes, not %s\"", "%", "s", ".", "__class__", ".", "__name__", ")", "# Strip off...
https://github.com/micropython/micropython-lib/blob/cdd260f0792d04a1ded99171b4c7a2582b7856b4/python-stdlib/base64/base64.py#L58-L77
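The altchars mechanics in b64encode above can be exercised against CPython's own base64 module, whose behavior this micropython-lib version mirrors:

```python
import base64

# b'\xfb\xef' encodes to b'++8=' under the standard alphabet,
# so the '+' characters make the altchars substitution visible.
assert base64.b64encode(b"\xfb\xef") == b"++8="

# altchars=b"-_" swaps in the URL-safe alphabet, exactly what the
# translate(bytes.maketrans(b"+/", altchars)) step in the function does.
assert base64.b64encode(b"\xfb\xef", altchars=b"-_") == b"--8="

# urlsafe_b64encode is the stdlib shorthand for that same substitution.
assert base64.urlsafe_b64encode(b"\xfb\xef") == b"--8="
```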
allenai/allennlp
a3d71254fcc0f3615910e9c3d48874515edf53e0
allennlp/data/fields/tensor_field.py
python
TensorField.as_tensor
(self, padding_lengths: Dict[str, int])
return torch.nn.functional.pad(tensor, pad, value=self.padding_value)
[]
def as_tensor(self, padding_lengths: Dict[str, int]) -> torch.Tensor:
    tensor = self.tensor
    while len(tensor.size()) < len(padding_lengths):
        tensor = tensor.unsqueeze(-1)
    pad = [
        padding
        for i, dimension_size in reversed(list(enumerate(tensor.size())))
        for padding in [0, padding_lengths["dimension_" + str(i)] - dimension_size]
    ]
    return torch.nn.functional.pad(tensor, pad, value=self.padding_value)
[ "def", "as_tensor", "(", "self", ",", "padding_lengths", ":", "Dict", "[", "str", ",", "int", "]", ")", "->", "torch", ".", "Tensor", ":", "tensor", "=", "self", ".", "tensor", "while", "len", "(", "tensor", ".", "size", "(", ")", ")", "<", "len", ...
https://github.com/allenai/allennlp/blob/a3d71254fcc0f3615910e9c3d48874515edf53e0/allennlp/data/fields/tensor_field.py#L42-L51
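The pad list built in as_tensor interleaves (0, target − actual) pairs from the last dimension backwards, which is the order torch.nn.functional.pad expects. A torch-free sketch of just that computation (the helper name is hypothetical, not from allennlp):

```python
def build_pad(sizes, padding_lengths):
    # Same comprehension as TensorField.as_tensor, operating on a plain
    # tuple of dimension sizes instead of tensor.size().
    return [
        padding
        for i, dimension_size in reversed(list(enumerate(sizes)))
        for padding in [0, padding_lengths["dimension_" + str(i)] - dimension_size]
    ]

# A (2, 3) tensor padded out to (4, 5): last dimension first,
# so (0, 2) for dim 1, then (0, 2) for dim 0.
print(build_pad((2, 3), {"dimension_0": 4, "dimension_1": 5}))  # → [0, 2, 0, 2]
```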
WenlongZhang0517/RankSRGAN
b313c24a25c9844d1d0c7ea8fd1e35da00ad8975
codes/data/util.py
python
_get_paths_from_lmdb
(dataroot)
return paths, sizes
get image path list from lmdb meta info
get image path list from lmdb meta info
[ "get", "image", "path", "list", "from", "lmdb", "meta", "info" ]
def _get_paths_from_lmdb(dataroot):
    """get image path list from lmdb meta info"""
    meta_info = pickle.load(open(os.path.join(dataroot, 'meta_info.pkl'), 'rb'))
    paths = meta_info['keys']
    sizes = meta_info['resolution']
    if len(sizes) == 1:
        sizes = sizes * len(paths)
    return paths, sizes
[ "def", "_get_paths_from_lmdb", "(", "dataroot", ")", ":", "meta_info", "=", "pickle", ".", "load", "(", "open", "(", "os", ".", "path", ".", "join", "(", "dataroot", ",", "'meta_info.pkl'", ")", ",", "'rb'", ")", ")", "paths", "=", "meta_info", "[", "'...
https://github.com/WenlongZhang0517/RankSRGAN/blob/b313c24a25c9844d1d0c7ea8fd1e35da00ad8975/codes/data/util.py#L35-L42
espeed/bulbs
628e5b14f0249f9ca4fa1ceea6f2af2dca45f75a
bulbs/model.py
python
Relationship.get_index_name
(cls, config)
return cls.get_label(config)
Returns the index name. :param config: Config object. :type config: bulbs.config.Config :rtype: str
Returns the index name.
[ "Returns", "the", "index", "name", "." ]
def get_index_name(cls, config):
    """
    Returns the index name.

    :param config: Config object.
    :type config: bulbs.config.Config

    :rtype: str

    """
    return cls.get_label(config)
[ "def", "get_index_name", "(", "cls", ",", "config", ")", ":", "return", "cls", ".", "get_label", "(", "config", ")" ]
https://github.com/espeed/bulbs/blob/628e5b14f0249f9ca4fa1ceea6f2af2dca45f75a/bulbs/model.py#L704-L714
mchristopher/PokemonGo-DesktopMap
ec37575f2776ee7d64456e2a1f6b6b78830b4fe0
app/pylibs/win32/gevent/hub.py
python
Hub.join
(self, timeout=None)
return False
Wait for the event loop to finish. Exits only when there are no more spawned greenlets, started servers, active timeouts or watchers. If *timeout* is provided, wait no longer for the specified number of seconds. Returns True if exited because the loop finished execution. Returns False if exited because of timeout expired.
Wait for the event loop to finish. Exits only when there are no more spawned greenlets, started servers, active timeouts or watchers.
[ "Wait", "for", "the", "event", "loop", "to", "finish", ".", "Exits", "only", "when", "there", "are", "no", "more", "spawned", "greenlets", "started", "servers", "active", "timeouts", "or", "watchers", "." ]
def join(self, timeout=None):
    """Wait for the event loop to finish. Exits only when there are
    no more spawned greenlets, started servers, active timeouts or watchers.

    If *timeout* is provided, wait no longer for the specified number of seconds.

    Returns True if exited because the loop finished execution.
    Returns False if exited because of timeout expired.
    """
    assert getcurrent() is self.parent, "only possible from the MAIN greenlet"
    if self.dead:
        return True

    waiter = Waiter()

    if timeout is not None:
        timeout = self.loop.timer(timeout, ref=False)
        timeout.start(waiter.switch)

    try:
        try:
            waiter.get()
        except LoopExit:
            return True
    finally:
        if timeout is not None:
            timeout.stop()
    return False
[ "def", "join", "(", "self", ",", "timeout", "=", "None", ")", ":", "assert", "getcurrent", "(", ")", "is", "self", ".", "parent", ",", "\"only possible from the MAIN greenlet\"", "if", "self", ".", "dead", ":", "return", "True", "waiter", "=", "Waiter", "(...
https://github.com/mchristopher/PokemonGo-DesktopMap/blob/ec37575f2776ee7d64456e2a1f6b6b78830b4fe0/app/pylibs/win32/gevent/hub.py#L676-L703
NVlabs/condensa
ff2fd0f9d997ce36b574f4c9bed2bb7cffba835d
condensa/dtypes.py
python
DType.name
(self)
return _DTYPE_TO_STRING[self._dtype]
[]
def name(self):
    return _DTYPE_TO_STRING[self._dtype]
[ "def", "name", "(", "self", ")", ":", "return", "_DTYPE_TO_STRING", "[", "self", ".", "_dtype", "]" ]
https://github.com/NVlabs/condensa/blob/ff2fd0f9d997ce36b574f4c9bed2bb7cffba835d/condensa/dtypes.py#L25-L26
microsoft/azure-devops-python-api
451cade4c475482792cbe9e522c1fee32393139e
azure-devops/azure/devops/v5_1/git/git_client_base.py
python
GitClientBase.get_blobs_zip
(self, blob_ids, repository_id, project=None, filename=None, **kwargs)
return self._client.stream_download(response, callback=callback)
GetBlobsZip. Gets one or more blobs in a zip file download. :param [str] blob_ids: Blob IDs (SHA1 hashes) to be returned in the zip file. :param str repository_id: The name or ID of the repository. :param str project: Project ID or project name :param str filename: :rtype: object
GetBlobsZip. Gets one or more blobs in a zip file download. :param [str] blob_ids: Blob IDs (SHA1 hashes) to be returned in the zip file. :param str repository_id: The name or ID of the repository. :param str project: Project ID or project name :param str filename: :rtype: object
[ "GetBlobsZip", ".", "Gets", "one", "or", "more", "blobs", "in", "a", "zip", "file", "download", ".", ":", "param", "[", "str", "]", "blob_ids", ":", "Blob", "IDs", "(", "SHA1", "hashes", ")", "to", "be", "returned", "in", "the", "zip", "file", ".", ...
def get_blobs_zip(self, blob_ids, repository_id, project=None, filename=None, **kwargs):
    """GetBlobsZip.
    Gets one or more blobs in a zip file download.
    :param [str] blob_ids: Blob IDs (SHA1 hashes) to be returned in the zip file.
    :param str repository_id: The name or ID of the repository.
    :param str project: Project ID or project name
    :param str filename:
    :rtype: object
    """
    route_values = {}
    if project is not None:
        route_values['project'] = self._serialize.url('project', project, 'str')
    if repository_id is not None:
        route_values['repositoryId'] = self._serialize.url('repository_id', repository_id, 'str')
    query_parameters = {}
    if filename is not None:
        query_parameters['filename'] = self._serialize.query('filename', filename, 'str')
    content = self._serialize.body(blob_ids, '[str]')
    response = self._send(http_method='POST',
                          location_id='7b28e929-2c99-405d-9c5c-6167a06e6816',
                          version='5.1',
                          route_values=route_values,
                          query_parameters=query_parameters,
                          content=content,
                          accept_media_type='application/zip')
    if "callback" in kwargs:
        callback = kwargs["callback"]
    else:
        callback = None
    return self._client.stream_download(response, callback=callback)
[ "def", "get_blobs_zip", "(", "self", ",", "blob_ids", ",", "repository_id", ",", "project", "=", "None", ",", "filename", "=", "None", ",", "*", "*", "kwargs", ")", ":", "route_values", "=", "{", "}", "if", "project", "is", "not", "None", ":", "route_v...
https://github.com/microsoft/azure-devops-python-api/blob/451cade4c475482792cbe9e522c1fee32393139e/azure-devops/azure/devops/v5_1/git/git_client_base.py#L139-L168
selfteaching/selfteaching-python-camp
9982ee964b984595e7d664b07c389cddaf158f1e
19100205/Ceasar1978/pip-19.0.3/src/pip/_vendor/webencodings/__init__.py
python
_get_encoding
(encoding_or_label)
return encoding
Accept either an encoding object or label. :param encoding: An :class:`Encoding` object or a label string. :returns: An :class:`Encoding` object. :raises: :exc:`~exceptions.LookupError` for an unknown label.
Accept either an encoding object or label.
[ "Accept", "either", "an", "encoding", "object", "or", "label", "." ]
def _get_encoding(encoding_or_label):
    """
    Accept either an encoding object or label.

    :param encoding: An :class:`Encoding` object or a label string.
    :returns: An :class:`Encoding` object.
    :raises: :exc:`~exceptions.LookupError` for an unknown label.

    """
    if hasattr(encoding_or_label, 'codec_info'):
        return encoding_or_label
    encoding = lookup(encoding_or_label)
    if encoding is None:
        raise LookupError('Unknown encoding label: %r' % encoding_or_label)
    return encoding
[ "def", "_get_encoding", "(", "encoding_or_label", ")", ":", "if", "hasattr", "(", "encoding_or_label", ",", "'codec_info'", ")", ":", "return", "encoding_or_label", "encoding", "=", "lookup", "(", "encoding_or_label", ")", "if", "encoding", "is", "None", ":", "r...
https://github.com/selfteaching/selfteaching-python-camp/blob/9982ee964b984595e7d664b07c389cddaf158f1e/19100205/Ceasar1978/pip-19.0.3/src/pip/_vendor/webencodings/__init__.py#L91-L106
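The hasattr check in _get_encoding above is duck typing: anything that already looks like an Encoding passes through unchanged, so the function is idempotent. A self-contained sketch of the same pattern — the Encoding class, label table, and lookup here are simplified stand-ins, not the real webencodings objects:

```python
class Encoding:
    def __init__(self, name, codec_info):
        self.name = name
        self.codec_info = codec_info  # the attribute the duck-typing check looks for

# Toy label registry; webencodings' real table is much larger.
LABELS = {"utf-8": Encoding("utf-8", object())}

def lookup(label):
    # webencodings normalizes labels (strip + lowercase) before the table lookup.
    return LABELS.get(label.strip().lower())

def get_encoding(encoding_or_label):
    # Already an Encoding-like object? Pass it through unchanged.
    if hasattr(encoding_or_label, "codec_info"):
        return encoding_or_label
    encoding = lookup(encoding_or_label)
    if encoding is None:
        raise LookupError("Unknown encoding label: %r" % encoding_or_label)
    return encoding

assert get_encoding("  UTF-8 ").name == "utf-8"
enc = get_encoding("utf-8")
assert get_encoding(enc) is enc  # idempotent: objects pass straight through
```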
pandas-dev/pandas
5ba7d714014ae8feaccc0dd4a98890828cf2832d
pandas/core/frame.py
python
DataFrame.corrwith
(self, other, axis: Axis = 0, drop=False, method="pearson")
return correl
Compute pairwise correlation.

Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations.

Parameters
----------
other : DataFrame, Series
    Object with which to compute correlations.
axis : {0 or 'index', 1 or 'columns'}, default 0
    The axis to use. 0 or 'index' to compute column-wise, 1 or 'columns' for row-wise.
drop : bool, default False
    Drop missing indices from result.
method : {'pearson', 'kendall', 'spearman'} or callable
    Method of correlation:

    * pearson : standard correlation coefficient
    * kendall : Kendall Tau correlation coefficient
    * spearman : Spearman rank correlation
    * callable: callable with input two 1d ndarrays and returning a float.

Returns
-------
Series
    Pairwise correlations.

See Also
--------
DataFrame.corr : Compute pairwise correlation of columns.
Compute pairwise correlation.
[ "Compute", "pairwise", "correlation", "." ]
def corrwith(self, other, axis: Axis = 0, drop=False, method="pearson") -> Series:
    """
    Compute pairwise correlation.

    Pairwise correlation is computed between rows or columns of
    DataFrame with rows or columns of Series or DataFrame. DataFrames
    are first aligned along both axes before computing the
    correlations.

    Parameters
    ----------
    other : DataFrame, Series
        Object with which to compute correlations.
    axis : {0 or 'index', 1 or 'columns'}, default 0
        The axis to use. 0 or 'index' to compute column-wise, 1 or 'columns' for
        row-wise.
    drop : bool, default False
        Drop missing indices from result.
    method : {'pearson', 'kendall', 'spearman'} or callable
        Method of correlation:

        * pearson : standard correlation coefficient
        * kendall : Kendall Tau correlation coefficient
        * spearman : Spearman rank correlation
        * callable: callable with input two 1d ndarrays
          and returning a float.

    Returns
    -------
    Series
        Pairwise correlations.

    See Also
    --------
    DataFrame.corr : Compute pairwise correlation of columns.
    """
    axis = self._get_axis_number(axis)
    this = self._get_numeric_data()

    if isinstance(other, Series):
        return this.apply(lambda x: other.corr(x, method=method), axis=axis)

    other = other._get_numeric_data()
    left, right = this.align(other, join="inner", copy=False)

    if axis == 1:
        left = left.T
        right = right.T

    if method == "pearson":
        # mask missing values
        left = left + right * 0
        right = right + left * 0

        # demeaned data
        ldem = left - left.mean()
        rdem = right - right.mean()

        num = (ldem * rdem).sum()
        dom = (left.count() - 1) * left.std() * right.std()

        correl = num / dom

    elif method in ["kendall", "spearman"] or callable(method):

        def c(x):
            return nanops.nancorr(x[0], x[1], method=method)

        correl = self._constructor_sliced(
            map(c, zip(left.values.T, right.values.T)), index=left.columns
        )

    else:
        raise ValueError(
            f"Invalid method {method} was passed, "
            "valid methods are: 'pearson', 'kendall', "
            "'spearman', or callable"
        )

    if not drop:
        # Find non-matching labels along the given axis
        # and append missing correlations (GH 22375)
        raxis = 1 if axis == 0 else 0
        result_index = this._get_axis(raxis).union(other._get_axis(raxis))
        idx_diff = result_index.difference(correl.index)

        if len(idx_diff) > 0:
            correl = correl._append(
                Series([np.nan] * len(idx_diff), index=idx_diff)
            )

    return correl
[ "def", "corrwith", "(", "self", ",", "other", ",", "axis", ":", "Axis", "=", "0", ",", "drop", "=", "False", ",", "method", "=", "\"pearson\"", ")", "->", "Series", ":", "axis", "=", "self", ".", "_get_axis_number", "(", "axis", ")", "this", "=", "...
https://github.com/pandas-dev/pandas/blob/5ba7d714014ae8feaccc0dd4a98890828cf2832d/pandas/core/frame.py#L9670-L9761
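The pearson branch of corrwith above computes, per aligned column pair, sum(ldem * rdem) / ((n − 1) · std_left · std_right). A numpy-free check of that same formula on a single column pair (an illustrative re-derivation, not pandas code):

```python
import math

def pearson(xs, ys):
    # Pearson r via the demeaned-sum formula used in corrwith's pearson branch.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Sample standard deviations, matching DataFrame.std()'s ddof=1 default.
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return num / ((n - 1) * sx * sy)

print(pearson([1, 2, 3], [2, 4, 6]))  # → 1.0 (perfect positive correlation)
```

Because (n − 1) appears in both the covariance and the product of sample standard deviations, the choice of ddof cancels out of r itself; pandas keeps the ddof=1 form so the intermediate terms match DataFrame.std().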
yumaojun03/blog-python-app
92ecad4c693090d67351d022e47b2d5be901d25d
www/transwarp/web.py
python
_RedirectError.__init__
(self, code, location)
Init an HttpError with response code.
Init an HttpError with response code.
[ "Init", "an", "HttpError", "with", "response", "code", "." ]
def __init__(self, code, location):
    """
    Init an HttpError with response code.
    """
    super(_RedirectError, self).__init__(code)
    self.location = location
[ "def", "__init__", "(", "self", ",", "code", ",", "location", ")", ":", "super", "(", "_RedirectError", ",", "self", ")", ".", "__init__", "(", "code", ")", "self", ".", "location", "=", "location" ]
https://github.com/yumaojun03/blog-python-app/blob/92ecad4c693090d67351d022e47b2d5be901d25d/www/transwarp/web.py#L296-L301
CLUEbenchmark/CLUE
5bd39732734afecb490cf18a5212e692dbf2c007
baselines/models_pytorch/classifier_pytorch/convert_albert_original_tf_checkpoint_to_pytorch.py
python
convert_tf_checkpoint_to_pytorch
(tf_checkpoint_path, bert_config_file, pytorch_dump_path)
[]
def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):
    # Initialise PyTorch model
    config = BertConfig.from_json_file(bert_config_file)
    print("Building PyTorch model from configuration: {}".format(str(config)))
    model = AlbertForPreTraining(config)

    # Load weights from tf checkpoint
    load_tf_weights_in_albert(model, config, tf_checkpoint_path)

    # Save pytorch-model
    print("Save PyTorch model to {}".format(pytorch_dump_path))
    torch.save(model.state_dict(), pytorch_dump_path)
[ "def", "convert_tf_checkpoint_to_pytorch", "(", "tf_checkpoint_path", ",", "bert_config_file", ",", "pytorch_dump_path", ")", ":", "# Initialise PyTorch model", "config", "=", "BertConfig", ".", "from_json_file", "(", "bert_config_file", ")", "print", "(", "\"Building PyTor...
https://github.com/CLUEbenchmark/CLUE/blob/5bd39732734afecb490cf18a5212e692dbf2c007/baselines/models_pytorch/classifier_pytorch/convert_albert_original_tf_checkpoint_to_pytorch.py#L15-L26
plotly/plotly.py
cfad7862594b35965c0e000813bd7805e8494a5b
packages/python/plotly/plotly/graph_objs/scatter3d/_error_x.py
python
ErrorX.color
(self)
return self["color"]
Sets the stoke color of the error bars. The 'color' property is a color and may be specified as: - A hex string (e.g. '#ff0000') - An rgb/rgba string (e.g. 'rgb(255,0,0)') - An hsl/hsla string (e.g. 'hsl(0,100%,50%)') - An hsv/hsva string (e.g. 'hsv(0,100%,100%)') - A named CSS color: aliceblue, antiquewhite, aqua, aquamarine, azure, beige, bisque, black, blanchedalmond, blue, blueviolet, brown, burlywood, cadetblue, chartreuse, chocolate, coral, cornflowerblue, cornsilk, crimson, cyan, darkblue, darkcyan, darkgoldenrod, darkgray, darkgrey, darkgreen, darkkhaki, darkmagenta, darkolivegreen, darkorange, darkorchid, darkred, darksalmon, darkseagreen, darkslateblue, darkslategray, darkslategrey, darkturquoise, darkviolet, deeppink, deepskyblue, dimgray, dimgrey, dodgerblue, firebrick, floralwhite, forestgreen, fuchsia, gainsboro, ghostwhite, gold, goldenrod, gray, grey, green, greenyellow, honeydew, hotpink, indianred, indigo, ivory, khaki, lavender, lavenderblush, lawngreen, lemonchiffon, lightblue, lightcoral, lightcyan, lightgoldenrodyellow, lightgray, lightgrey, lightgreen, lightpink, lightsalmon, lightseagreen, lightskyblue, lightslategray, lightslategrey, lightsteelblue, lightyellow, lime, limegreen, linen, magenta, maroon, mediumaquamarine, mediumblue, mediumorchid, mediumpurple, mediumseagreen, mediumslateblue, mediumspringgreen, mediumturquoise, mediumvioletred, midnightblue, mintcream, mistyrose, moccasin, navajowhite, navy, oldlace, olive, olivedrab, orange, orangered, orchid, palegoldenrod, palegreen, paleturquoise, palevioletred, papayawhip, peachpuff, peru, pink, plum, powderblue, purple, red, rosybrown, royalblue, rebeccapurple, saddlebrown, salmon, sandybrown, seagreen, seashell, sienna, silver, skyblue, slateblue, slategray, slategrey, snow, springgreen, steelblue, tan, teal, thistle, tomato, turquoise, violet, wheat, white, whitesmoke, yellow, yellowgreen Returns ------- str
Sets the stoke color of the error bars. The 'color' property is a color and may be specified as: - A hex string (e.g. '#ff0000') - An rgb/rgba string (e.g. 'rgb(255,0,0)') - An hsl/hsla string (e.g. 'hsl(0,100%,50%)') - An hsv/hsva string (e.g. 'hsv(0,100%,100%)') - A named CSS color: aliceblue, antiquewhite, aqua, aquamarine, azure, beige, bisque, black, blanchedalmond, blue, blueviolet, brown, burlywood, cadetblue, chartreuse, chocolate, coral, cornflowerblue, cornsilk, crimson, cyan, darkblue, darkcyan, darkgoldenrod, darkgray, darkgrey, darkgreen, darkkhaki, darkmagenta, darkolivegreen, darkorange, darkorchid, darkred, darksalmon, darkseagreen, darkslateblue, darkslategray, darkslategrey, darkturquoise, darkviolet, deeppink, deepskyblue, dimgray, dimgrey, dodgerblue, firebrick, floralwhite, forestgreen, fuchsia, gainsboro, ghostwhite, gold, goldenrod, gray, grey, green, greenyellow, honeydew, hotpink, indianred, indigo, ivory, khaki, lavender, lavenderblush, lawngreen, lemonchiffon, lightblue, lightcoral, lightcyan, lightgoldenrodyellow, lightgray, lightgrey, lightgreen, lightpink, lightsalmon, lightseagreen, lightskyblue, lightslategray, lightslategrey, lightsteelblue, lightyellow, lime, limegreen, linen, magenta, maroon, mediumaquamarine, mediumblue, mediumorchid, mediumpurple, mediumseagreen, mediumslateblue, mediumspringgreen, mediumturquoise, mediumvioletred, midnightblue, mintcream, mistyrose, moccasin, navajowhite, navy, oldlace, olive, olivedrab, orange, orangered, orchid, palegoldenrod, palegreen, paleturquoise, palevioletred, papayawhip, peachpuff, peru, pink, plum, powderblue, purple, red, rosybrown, royalblue, rebeccapurple, saddlebrown, salmon, sandybrown, seagreen, seashell, sienna, silver, skyblue, slateblue, slategray, slategrey, snow, springgreen, steelblue, tan, teal, thistle, tomato, turquoise, violet, wheat, white, whitesmoke, yellow, yellowgreen
[ "Sets", "the", "stoke", "color", "of", "the", "error", "bars", ".", "The", "color", "property", "is", "a", "color", "and", "may", "be", "specified", "as", ":", "-", "A", "hex", "string", "(", "e", ".", "g", ".", "#ff0000", ")", "-", "An", "rgb", ...
def color(self): """ Sets the stoke color of the error bars. The 'color' property is a color and may be specified as: - A hex string (e.g. '#ff0000') - An rgb/rgba string (e.g. 'rgb(255,0,0)') - An hsl/hsla string (e.g. 'hsl(0,100%,50%)') - An hsv/hsva string (e.g. 'hsv(0,100%,100%)') - A named CSS color: aliceblue, antiquewhite, aqua, aquamarine, azure, beige, bisque, black, blanchedalmond, blue, blueviolet, brown, burlywood, cadetblue, chartreuse, chocolate, coral, cornflowerblue, cornsilk, crimson, cyan, darkblue, darkcyan, darkgoldenrod, darkgray, darkgrey, darkgreen, darkkhaki, darkmagenta, darkolivegreen, darkorange, darkorchid, darkred, darksalmon, darkseagreen, darkslateblue, darkslategray, darkslategrey, darkturquoise, darkviolet, deeppink, deepskyblue, dimgray, dimgrey, dodgerblue, firebrick, floralwhite, forestgreen, fuchsia, gainsboro, ghostwhite, gold, goldenrod, gray, grey, green, greenyellow, honeydew, hotpink, indianred, indigo, ivory, khaki, lavender, lavenderblush, lawngreen, lemonchiffon, lightblue, lightcoral, lightcyan, lightgoldenrodyellow, lightgray, lightgrey, lightgreen, lightpink, lightsalmon, lightseagreen, lightskyblue, lightslategray, lightslategrey, lightsteelblue, lightyellow, lime, limegreen, linen, magenta, maroon, mediumaquamarine, mediumblue, mediumorchid, mediumpurple, mediumseagreen, mediumslateblue, mediumspringgreen, mediumturquoise, mediumvioletred, midnightblue, mintcream, mistyrose, moccasin, navajowhite, navy, oldlace, olive, olivedrab, orange, orangered, orchid, palegoldenrod, palegreen, paleturquoise, palevioletred, papayawhip, peachpuff, peru, pink, plum, powderblue, purple, red, rosybrown, royalblue, rebeccapurple, saddlebrown, salmon, sandybrown, seagreen, seashell, sienna, silver, skyblue, slateblue, slategray, slategrey, snow, springgreen, steelblue, tan, teal, thistle, tomato, turquoise, violet, wheat, white, whitesmoke, yellow, yellowgreen Returns ------- str """ return self["color"]
[ "def", "color", "(", "self", ")", ":", "return", "self", "[", "\"color\"", "]" ]
https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/scatter3d/_error_x.py#L116-L166
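A hedged sketch of one branch of the color validation the docstring above describes: hex strings such as '#ff0000'. The real plotly validator also accepts rgb()/rgba(), hsl()/hsla(), hsv()/hsva() strings and the named CSS colors listed; this regex-only check is an illustration, not plotly's implementation:

```python
import re

# Accept only 3- or 6-digit hex color strings such as '#f00' or '#ff0000'.
_HEX_COLOR = re.compile(r'^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$')

def is_hex_color(value):
    return bool(_HEX_COLOR.match(value))
```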
pypa/pipenv
b21baade71a86ab3ee1429f71fbc14d4f95fb75d
pipenv/patched/notpip/_vendor/idna/uts46data.py
python
_seg_72
()
return [ (0x1F109, '3', '8,'), (0x1F10A, '3', '9,'), (0x1F10B, 'V'), (0x1F110, '3', '(a)'), (0x1F111, '3', '(b)'), (0x1F112, '3', '(c)'), (0x1F113, '3', '(d)'), (0x1F114, '3', '(e)'), (0x1F115, '3', '(f)'), (0x1F116, '3', '(g)'), (0x1F117, '3', '(h)'), (0x1F118, '3', '(i)'), (0x1F119, '3', '(j)'), (0x1F11A, '3', '(k)'), (0x1F11B, '3', '(l)'), (0x1F11C, '3', '(m)'), (0x1F11D, '3', '(n)'), (0x1F11E, '3', '(o)'), (0x1F11F, '3', '(p)'), (0x1F120, '3', '(q)'), (0x1F121, '3', '(r)'), (0x1F122, '3', '(s)'), (0x1F123, '3', '(t)'), (0x1F124, '3', '(u)'), (0x1F125, '3', '(v)'), (0x1F126, '3', '(w)'), (0x1F127, '3', '(x)'), (0x1F128, '3', '(y)'), (0x1F129, '3', '(z)'), (0x1F12A, 'M', '〔s〕'), (0x1F12B, 'M', 'c'), (0x1F12C, 'M', 'r'), (0x1F12D, 'M', 'cd'), (0x1F12E, 'M', 'wz'), (0x1F12F, 'V'), (0x1F130, 'M', 'a'), (0x1F131, 'M', 'b'), (0x1F132, 'M', 'c'), (0x1F133, 'M', 'd'), (0x1F134, 'M', 'e'), (0x1F135, 'M', 'f'), (0x1F136, 'M', 'g'), (0x1F137, 'M', 'h'), (0x1F138, 'M', 'i'), (0x1F139, 'M', 'j'), (0x1F13A, 'M', 'k'), (0x1F13B, 'M', 'l'), (0x1F13C, 'M', 'm'), (0x1F13D, 'M', 'n'), (0x1F13E, 'M', 'o'), (0x1F13F, 'M', 'p'), (0x1F140, 'M', 'q'), (0x1F141, 'M', 'r'), (0x1F142, 'M', 's'), (0x1F143, 'M', 't'), (0x1F144, 'M', 'u'), (0x1F145, 'M', 'v'), (0x1F146, 'M', 'w'), (0x1F147, 'M', 'x'), (0x1F148, 'M', 'y'), (0x1F149, 'M', 'z'), (0x1F14A, 'M', 'hv'), (0x1F14B, 'M', 'mv'), (0x1F14C, 'M', 'sd'), (0x1F14D, 'M', 'ss'), (0x1F14E, 'M', 'ppv'), (0x1F14F, 'M', 'wc'), (0x1F150, 'V'), (0x1F16A, 'M', 'mc'), (0x1F16B, 'M', 'md'), (0x1F16C, 'M', 'mr'), (0x1F16D, 'V'), (0x1F190, 'M', 'dj'), (0x1F191, 'V'), (0x1F1AE, 'X'), (0x1F1E6, 'V'), (0x1F200, 'M', 'ほか'), (0x1F201, 'M', 'ココ'), (0x1F202, 'M', 'サ'), (0x1F203, 'X'), (0x1F210, 'M', '手'), (0x1F211, 'M', '字'), (0x1F212, 'M', '双'), (0x1F213, 'M', 'デ'), (0x1F214, 'M', '二'), (0x1F215, 'M', '多'), (0x1F216, 'M', '解'), (0x1F217, 'M', '天'), (0x1F218, 'M', '交'), (0x1F219, 'M', '映'), (0x1F21A, 'M', '無'), (0x1F21B, 'M', '料'), (0x1F21C, 'M', '前'), (0x1F21D, 'M', '後'), (0x1F21E, 'M', '再'), (0x1F21F, 'M', '新'), (0x1F220, 'M', '初'), (0x1F221, 'M', '終'), (0x1F222, 'M', '生'), (0x1F223, 'M', '販'), ]
[]
def _seg_72(): # type: () -> List[Union[Tuple[int, str], Tuple[int, str, str]]] return [ (0x1F109, '3', '8,'), (0x1F10A, '3', '9,'), (0x1F10B, 'V'), (0x1F110, '3', '(a)'), (0x1F111, '3', '(b)'), (0x1F112, '3', '(c)'), (0x1F113, '3', '(d)'), (0x1F114, '3', '(e)'), (0x1F115, '3', '(f)'), (0x1F116, '3', '(g)'), (0x1F117, '3', '(h)'), (0x1F118, '3', '(i)'), (0x1F119, '3', '(j)'), (0x1F11A, '3', '(k)'), (0x1F11B, '3', '(l)'), (0x1F11C, '3', '(m)'), (0x1F11D, '3', '(n)'), (0x1F11E, '3', '(o)'), (0x1F11F, '3', '(p)'), (0x1F120, '3', '(q)'), (0x1F121, '3', '(r)'), (0x1F122, '3', '(s)'), (0x1F123, '3', '(t)'), (0x1F124, '3', '(u)'), (0x1F125, '3', '(v)'), (0x1F126, '3', '(w)'), (0x1F127, '3', '(x)'), (0x1F128, '3', '(y)'), (0x1F129, '3', '(z)'), (0x1F12A, 'M', '〔s〕'), (0x1F12B, 'M', 'c'), (0x1F12C, 'M', 'r'), (0x1F12D, 'M', 'cd'), (0x1F12E, 'M', 'wz'), (0x1F12F, 'V'), (0x1F130, 'M', 'a'), (0x1F131, 'M', 'b'), (0x1F132, 'M', 'c'), (0x1F133, 'M', 'd'), (0x1F134, 'M', 'e'), (0x1F135, 'M', 'f'), (0x1F136, 'M', 'g'), (0x1F137, 'M', 'h'), (0x1F138, 'M', 'i'), (0x1F139, 'M', 'j'), (0x1F13A, 'M', 'k'), (0x1F13B, 'M', 'l'), (0x1F13C, 'M', 'm'), (0x1F13D, 'M', 'n'), (0x1F13E, 'M', 'o'), (0x1F13F, 'M', 'p'), (0x1F140, 'M', 'q'), (0x1F141, 'M', 'r'), (0x1F142, 'M', 's'), (0x1F143, 'M', 't'), (0x1F144, 'M', 'u'), (0x1F145, 'M', 'v'), (0x1F146, 'M', 'w'), (0x1F147, 'M', 'x'), (0x1F148, 'M', 'y'), (0x1F149, 'M', 'z'), (0x1F14A, 'M', 'hv'), (0x1F14B, 'M', 'mv'), (0x1F14C, 'M', 'sd'), (0x1F14D, 'M', 'ss'), (0x1F14E, 'M', 'ppv'), (0x1F14F, 'M', 'wc'), (0x1F150, 'V'), (0x1F16A, 'M', 'mc'), (0x1F16B, 'M', 'md'), (0x1F16C, 'M', 'mr'), (0x1F16D, 'V'), (0x1F190, 'M', 'dj'), (0x1F191, 'V'), (0x1F1AE, 'X'), (0x1F1E6, 'V'), (0x1F200, 'M', 'ほか'), (0x1F201, 'M', 'ココ'), (0x1F202, 'M', 'サ'), (0x1F203, 'X'), (0x1F210, 'M', '手'), (0x1F211, 'M', '字'), (0x1F212, 'M', '双'), (0x1F213, 'M', 'デ'), (0x1F214, 'M', '二'), (0x1F215, 'M', '多'), (0x1F216, 'M', '解'), (0x1F217, 'M', '天'), (0x1F218, 'M', '交'), (0x1F219, 'M', '映'), (0x1F21A, 'M', '無'), (0x1F21B, 'M', '料'), (0x1F21C, 'M', '前'), (0x1F21D, 'M', '後'), (0x1F21E, 'M', '再'), (0x1F21F, 'M', '新'), (0x1F220, 'M', '初'), (0x1F221, 'M', '終'), (0x1F222, 'M', '生'), (0x1F223, 'M', '販'), ]
[ "def", "_seg_72", "(", ")", ":", "# type: () -> List[Union[Tuple[int, str], Tuple[int, str, str]]]", "return", "[", "(", "0x1F109", ",", "'3'", ",", "'8,'", ")", ",", "(", "0x1F10A", ",", "'3'", ",", "'9,'", ")", ",", "(", "0x1F10B", ",", "'V'", ")", ",", ...
https://github.com/pypa/pipenv/blob/b21baade71a86ab3ee1429f71fbc14d4f95fb75d/pipenv/patched/notpip/_vendor/idna/uts46data.py#L7569-L7672
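The `_seg_*` tables above encode runs of codepoints: each tuple gives a run's first codepoint, its UTS #46 status ('V' valid, 'M' mapped, 'X' disallowed, '3' disallowed_STD3_valid), and an optional mapping. A sketch of how such a table can be searched by bisecting on the start codepoints (the `idna` package's own lookup differs in detail), using a four-row excerpt:

```python
import bisect

# Excerpt of the table above: each row starts a run of codepoints.
_SEG = [
    (0x1F12F, 'V'),        # valid
    (0x1F130, 'M', 'a'),   # squared latin capital A maps to 'a'
    (0x1F131, 'M', 'b'),   # squared latin capital B maps to 'b'
    (0x1F150, 'V'),        # valid
]
_STARTS = [row[0] for row in _SEG]

def lookup(codepoint):
    # Find the last row whose start codepoint is <= the query.
    return _SEG[bisect.bisect_right(_STARTS, codepoint) - 1]
```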
siznax/wptools
788cdc2078696dacb14652d5f2ad098a585e4763
wptools/site.py
python
WPToolsSite._set_siteinfo
(self)
capture API sitematrix data in data attribute
capture API sitematrix data in data attribute
[ "capture", "API", "sitematrix", "data", "in", "data", "attribute" ]
def _set_siteinfo(self): """ capture API sitematrix data in data attribute """ data = self._load_response('siteinfo').get('query') mostviewed = data.get('mostviewed') self.data['mostviewed'] = [] for item in mostviewed[1:]: if item['ns'] == 0: self.data['mostviewed'].append(item) general = data.get('general') self.params.update({'title': general.get('sitename')}) self.params.update({'lang': general.get('lang')}) self.data['site'] = general.get('wikiid') info = {} for item in general: ginfo = general.get(item) if ginfo: info[item] = ginfo self.data['info'] = info siteviews = data.get('siteviews') if siteviews: values = [x for x in siteviews.values() if x] if values: self.data['siteviews'] = int(sum(values) / len(values)) else: self.data['siteviews'] = 0 stats = data.get('statistics') for item in stats: self.data[item] = stats[item]
[ "def", "_set_siteinfo", "(", "self", ")", ":", "data", "=", "self", ".", "_load_response", "(", "'siteinfo'", ")", ".", "get", "(", "'query'", ")", "mostviewed", "=", "data", ".", "get", "(", "'mostviewed'", ")", "self", ".", "data", "[", "'mostviewed'",...
https://github.com/siznax/wptools/blob/788cdc2078696dacb14652d5f2ad098a585e4763/wptools/site.py#L61-L96
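The siteviews handling inside `_set_siteinfo` above (average the non-empty daily values, defaulting to 0 when none exist) can be isolated as:

```python
# Average the truthy values of a siteviews dict, as _set_siteinfo does.
def average_siteviews(siteviews):
    values = [x for x in siteviews.values() if x]
    if values:
        return int(sum(values) / len(values))
    return 0
```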
kovidgoyal/calibre
2b41671370f2a9eb1109b9ae901ccf915f1bd0c8
src/calibre/library/schema_upgrades.py
python
SchemaUpgrade.upgrade_version_6
(self)
Show authors in order
Show authors in order
[ "Show", "authors", "in", "order" ]
def upgrade_version_6(self): 'Show authors in order' self.conn.executescript(''' BEGIN TRANSACTION; DROP VIEW meta; CREATE VIEW meta AS SELECT id, title, (SELECT sortconcat(bal.id, name) FROM books_authors_link AS bal JOIN authors ON(author = authors.id) WHERE book = books.id) authors, (SELECT name FROM publishers WHERE publishers.id IN (SELECT publisher from books_publishers_link WHERE book=books.id)) publisher, (SELECT rating FROM ratings WHERE ratings.id IN (SELECT rating from books_ratings_link WHERE book=books.id)) rating, timestamp, (SELECT MAX(uncompressed_size) FROM data WHERE book=books.id) size, (SELECT concat(name) FROM tags WHERE tags.id IN (SELECT tag from books_tags_link WHERE book=books.id)) tags, (SELECT text FROM comments WHERE book=books.id) comments, (SELECT name FROM series WHERE series.id IN (SELECT series FROM books_series_link WHERE book=books.id)) series, series_index, sort, author_sort, (SELECT concat(format) FROM data WHERE data.book=books.id) formats, isbn, path, lccn, pubdate, flags FROM books; END TRANSACTION; ''')
[ "def", "upgrade_version_6", "(", "self", ")", ":", "self", ".", "conn", ".", "executescript", "(", "'''\n BEGIN TRANSACTION;\n DROP VIEW meta;\n CREATE VIEW meta AS\n SELECT id, title,\n (SELECT sortconcat(bal.id, name) FROM books_authors_link AS bal...
https://github.com/kovidgoyal/calibre/blob/2b41671370f2a9eb1109b9ae901ccf915f1bd0c8/src/calibre/library/schema_upgrades.py#L165-L191
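The upgrade above relies on `sqlite3`'s `executescript()`, which executes a multi-statement SQL string in one call. A toy version with a far simpler `meta` view than calibre's:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# executescript() runs the whole multi-statement string at once,
# including the explicit transaction, just like the upgrade above.
conn.executescript('''
    BEGIN TRANSACTION;
    CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);
    CREATE VIEW meta AS SELECT id, title FROM books;
    END TRANSACTION;
''')
conn.execute("INSERT INTO books (title) VALUES ('Example')")
rows = conn.execute('SELECT title FROM meta').fetchall()
```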
mothran/bunny
cbdb98a0ff36b488bbc2659a46365509f0a1045b
libbunny/bunny.py
python
Bunny.__init__
(self)
Setup and build the bunny model and starts the read_packet_thread()
Setup and build the bunny model and starts the read_packet_thread()
[ "Setup", "and", "build", "the", "bunny", "model", "and", "starts", "the", "read_packet_thread", "()" ]
def __init__(self): """ Setup and build the bunny model and starts the read_packet_thread() """ self.inandout = SendRec() self.cryptor = AEScrypt() self.model = TrafficModel() # each item should be an full bunny message that can be passed to the .decrypt() method # TODO: put a upper bound of number of messages or a cleanup thread to clear out old messages # if not consumed. self.msg_queue = Queue.LifoQueue() # The out queue is a FiFo Queue because it maintaines the ordering of the bunny data # format: [data, Bool (relay or not)] self.out_queue = Queue.Queue() # The Deque is used because it is a thread safe iterable that can be filled with 'seen' # messages between the send and recv threads. self.msg_deque = [] # init the threads and name them self.workers = [BunnyReadThread(self.msg_queue, self.out_queue, self.inandout, self.model, self.cryptor), \ BroadCaster(self.out_queue, self.inandout, self.model)] self.workers[0].name = "BunnyReadThread" self.workers[1].name = "BroadCasterThread" # spin up the threads for worker in self.workers: worker.daemon = True worker.start()
[ "def", "__init__", "(", "self", ")", ":", "self", ".", "inandout", "=", "SendRec", "(", ")", "self", ".", "cryptor", "=", "AEScrypt", "(", ")", "self", ".", "model", "=", "TrafficModel", "(", ")", "# each item should be an full bunny message that can be passed t...
https://github.com/mothran/bunny/blob/cbdb98a0ff36b488bbc2659a46365509f0a1045b/libbunny/bunny.py#L44-L77
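The thread wiring in `Bunny.__init__` above (named daemon workers fed by shared queues; the original is Python 2, hence `Queue.LifoQueue`/`Queue.Queue`) reduces to this Python 3 sketch with one worker and one queue:

```python
import queue
import threading

out_queue = queue.Queue()
received = []

def reader():
    # Block until one message arrives, mirroring a worker loop body.
    received.append(out_queue.get())

worker = threading.Thread(target=reader, name="BunnyReadThread")
worker.daemon = True   # daemon threads die with the main thread
worker.start()
out_queue.put("hello bunny")
worker.join(timeout=5)
```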
osmr/imgclsmob
f2993d3ce73a2f7ddba05da3891defb08547d504
chainer_/chainercv2/models/nasnet.py
python
nasnet_4a1056
(**kwargs)
return get_nasnet( repeat=4, penultimate_filters=1056, init_block_channels=32, final_pool_size=7, extra_padding=True, skip_reduction_layer_input=False, in_size=(224, 224), model_name="nasnet_4a1056", **kwargs)
NASNet-A 4@1056 (NASNet-A-Mobile) model from 'Learning Transferable Architectures for Scalable Image Recognition,' https://arxiv.org/abs/1707.07012. Parameters: ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.chainer/models' Location for keeping the model parameters.
NASNet-A 4@1056 (NASNet-A-Mobile) model from 'Learning Transferable Architectures for Scalable Image Recognition,' https://arxiv.org/abs/1707.07012.
[ "NASNet", "-", "A", "4@1056", "(", "NASNet", "-", "A", "-", "Mobile", ")", "model", "from", "Learning", "Transferable", "Architectures", "for", "Scalable", "Image", "Recognition", "https", ":", "//", "arxiv", ".", "org", "/", "abs", "/", "1707", ".", "07...
def nasnet_4a1056(**kwargs): """ NASNet-A 4@1056 (NASNet-A-Mobile) model from 'Learning Transferable Architectures for Scalable Image Recognition,' https://arxiv.org/abs/1707.07012. Parameters: ---------- pretrained : bool, default False Whether to load the pretrained weights for model. root : str, default '~/.chainer/models' Location for keeping the model parameters. """ return get_nasnet( repeat=4, penultimate_filters=1056, init_block_channels=32, final_pool_size=7, extra_padding=True, skip_reduction_layer_input=False, in_size=(224, 224), model_name="nasnet_4a1056", **kwargs)
[ "def", "nasnet_4a1056", "(", "*", "*", "kwargs", ")", ":", "return", "get_nasnet", "(", "repeat", "=", "4", ",", "penultimate_filters", "=", "1056", ",", "init_block_channels", "=", "32", ",", "final_pool_size", "=", "7", ",", "extra_padding", "=", "True", ...
https://github.com/osmr/imgclsmob/blob/f2993d3ce73a2f7ddba05da3891defb08547d504/chainer_/chainercv2/models/nasnet.py#L1259-L1280
dulwich/dulwich
1f66817d712e3563ce1ff53b1218491a2eae39da
dulwich/config.py
python
ConfigDict.__init__
(self, values=None, encoding=None)
Create a new ConfigDict.
Create a new ConfigDict.
[ "Create", "a", "new", "ConfigDict", "." ]
def __init__(self, values=None, encoding=None): """Create a new ConfigDict.""" if encoding is None: encoding = sys.getdefaultencoding() self.encoding = encoding self._values = CaseInsensitiveDict.make(values)
[ "def", "__init__", "(", "self", ",", "values", "=", "None", ",", "encoding", "=", "None", ")", ":", "if", "encoding", "is", "None", ":", "encoding", "=", "sys", ".", "getdefaultencoding", "(", ")", "self", ".", "encoding", "=", "encoding", "self", ".",...
https://github.com/dulwich/dulwich/blob/1f66817d712e3563ce1ff53b1218491a2eae39da/dulwich/config.py#L195-L200
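The encoding defaulting in `ConfigDict.__init__` above can be sketched with a toy lower-casing mapping standing in for dulwich's `CaseInsensitiveDict` (which also handles `(section, name)` tuple keys):

```python
import sys

class ToyConfigDict(object):
    """Toy stand-in: lower-cases string keys, defaults the encoding."""

    def __init__(self, values=None, encoding=None):
        if encoding is None:
            encoding = sys.getdefaultencoding()
        self.encoding = encoding
        self._values = {k.lower(): v for k, v in (values or {}).items()}

    def get(self, key):
        return self._values[key.lower()]

cfg = ToyConfigDict({'Core': {'bare': 'false'}})
```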
oanda/v20-python
f28192f4a31bce038cf6dfa302f5878bec192fe5
src/v20/transaction.py
python
CreateTransaction.__init__
(self, **kwargs)
Create a new CreateTransaction instance
Create a new CreateTransaction instance
[ "Create", "a", "new", "CreateTransaction", "instance" ]
def __init__(self, **kwargs): """ Create a new CreateTransaction instance """ super(CreateTransaction, self).__init__() # # The Transaction's Identifier. # self.id = kwargs.get("id") # # The date/time when the Transaction was created. # self.time = kwargs.get("time") # # The ID of the user that initiated the creation of the Transaction. # self.userID = kwargs.get("userID") # # The ID of the Account the Transaction was created for. # self.accountID = kwargs.get("accountID") # # The ID of the "batch" that the Transaction belongs to. Transactions # in the same batch are applied to the Account simultaneously. # self.batchID = kwargs.get("batchID") # # The Request ID of the request which generated the transaction. # self.requestID = kwargs.get("requestID") # # The Type of the Transaction. Always set to "CREATE" in a # CreateTransaction. # self.type = kwargs.get("type", "CREATE") # # The ID of the Division that the Account is in # self.divisionID = kwargs.get("divisionID") # # The ID of the Site that the Account was created at # self.siteID = kwargs.get("siteID") # # The ID of the user that the Account was created for # self.accountUserID = kwargs.get("accountUserID") # # The number of the Account within the site/division/user # self.accountNumber = kwargs.get("accountNumber") # # The home currency of the Account # self.homeCurrency = kwargs.get("homeCurrency")
[ "def", "__init__", "(", "self", ",", "*", "*", "kwargs", ")", ":", "super", "(", "CreateTransaction", ",", "self", ")", ".", "__init__", "(", ")", "#", "# The Transaction's Identifier.", "#", "self", ".", "id", "=", "kwargs", ".", "get", "(", "\"id\"", ...
https://github.com/oanda/v20-python/blob/f28192f4a31bce038cf6dfa302f5878bec192fe5/src/v20/transaction.py#L174-L240
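The field initialisation in `CreateTransaction.__init__` above is a repeated `kwargs.get()` pattern: absent keys become `None`, while `type` gets a fixed default. Reduced to two fields in a hypothetical class:

```python
class ToyTransaction(object):
    def __init__(self, **kwargs):
        # Absent keys fall back to None ...
        self.id = kwargs.get("id")
        # ... except type, which defaults to "CREATE".
        self.type = kwargs.get("type", "CREATE")

txn = ToyTransaction(id="1001")
```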
IronLanguages/main
a949455434b1fda8c783289e897e78a9a0caabb5
External.LCA_RESTRICTED/Languages/CPython/27/Lib/imaplib.py
python
IMAP4.logout
(self)
return typ, dat
Shutdown connection to server. (typ, [data]) = <instance>.logout() Returns server 'BYE' response.
Shutdown connection to server.
[ "Shutdown", "connection", "to", "server", "." ]
def logout(self): """Shutdown connection to server. (typ, [data]) = <instance>.logout() Returns server 'BYE' response. """ self.state = 'LOGOUT' try: typ, dat = self._simple_command('LOGOUT') except: typ, dat = 'NO', ['%s: %s' % sys.exc_info()[:2]] self.shutdown() if 'BYE' in self.untagged_responses: return 'BYE', self.untagged_responses['BYE'] return typ, dat
[ "def", "logout", "(", "self", ")", ":", "self", ".", "state", "=", "'LOGOUT'", "try", ":", "typ", ",", "dat", "=", "self", ".", "_simple_command", "(", "'LOGOUT'", ")", "except", ":", "typ", ",", "dat", "=", "'NO'", ",", "[", "'%s: %s'", "%", "sys"...
https://github.com/IronLanguages/main/blob/a949455434b1fda8c783289e897e78a9a0caabb5/External.LCA_RESTRICTED/Languages/CPython/27/Lib/imaplib.py#L527-L540
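The bare `except:` in `logout()` above converts any failure into a `('NO', [message])` result so shutdown can still proceed. That capture pattern, with the command injected and the except narrowed to `Exception` here:

```python
import sys

def run_logout(simple_command):
    # Fold any exception into a ('NO', [message]) result, as logout() does.
    try:
        typ, dat = simple_command('LOGOUT')
    except Exception:
        typ, dat = 'NO', ['%s: %s' % sys.exc_info()[:2]]
    return typ, dat

def _broken_command(name):
    raise OSError('socket closed')

ok = run_logout(lambda name: ('BYE', ['LOGOUT completed']))
failed = run_logout(_broken_command)
```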
timonwong/OmniMarkupPreviewer
21921ac7a99d2b5924a2219b33679a5b53621392
OmniMarkupLib/Renderers/libs/python2/genshi/filters/transform.py
python
SubstituteTransformation.__call__
(self, stream)
Apply the transform filter to the marked stream. :param stream: The marked event stream to filter
Apply the transform filter to the marked stream.
[ "Apply", "the", "transform", "filter", "to", "the", "marked", "stream", "." ]
def __call__(self, stream): """Apply the transform filter to the marked stream. :param stream: The marked event stream to filter """ for mark, (kind, data, pos) in stream: if mark is not None and kind is TEXT: new_data = self.pattern.sub(self.replace, data, self.count) if isinstance(data, Markup): data = Markup(new_data) else: data = new_data yield mark, (kind, data, pos)
[ "def", "__call__", "(", "self", ",", "stream", ")", ":", "for", "mark", ",", "(", "kind", ",", "data", ",", "pos", ")", "in", "stream", ":", "if", "mark", "is", "not", "None", "and", "kind", "is", "TEXT", ":", "new_data", "=", "self", ".", "patte...
https://github.com/timonwong/OmniMarkupPreviewer/blob/21921ac7a99d2b5924a2219b33679a5b53621392/OmniMarkupLib/Renderers/libs/python2/genshi/filters/transform.py#L1000-L1012
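The substitution filter above applies a regex only to marked TEXT events and yields everything else untouched. A sketch over a toy event stream (`TEXT` is a stand-in token; the real genshi filter also preserves its `Markup` type):

```python
import re

TEXT = 'TEXT'  # stand-in for genshi's TEXT event kind

def substitute(stream, pattern, replace, count=0):
    # Apply the regex only to marked TEXT events, like the filter above.
    for mark, (kind, data, pos) in stream:
        if mark is not None and kind is TEXT:
            data = pattern.sub(replace, data, count)
        yield mark, (kind, data, pos)

events = [('m', (TEXT, 'foo bar foo', 0)), (None, (TEXT, 'foo', 1))]
out = list(substitute(events, re.compile('foo'), 'baz'))
```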
PyCQA/astroid
a815443f62faae05249621a396dcf0afd884a619
astroid/nodes/as_string.py
python
AsStringVisitor.visit_pass
(self, node)
return "pass"
return an astroid.Pass node as string
return an astroid.Pass node as string
[ "return", "an", "astroid", ".", "Pass", "node", "as", "string" ]
def visit_pass(self, node): """return an astroid.Pass node as string""" return "pass"
[ "def", "visit_pass", "(", "self", ",", "node", ")", ":", "return", "\"pass\"" ]
https://github.com/PyCQA/astroid/blob/a815443f62faae05249621a396dcf0afd884a619/astroid/nodes/as_string.py#L436-L438
urwid/urwid
e2423b5069f51d318ea1ac0f355a0efe5448f7eb
urwid/container.py
python
Pile.mouse_event
(self, size, event, button, col, row, focus)
return w.mouse_event(tsize, event, button, col, row-wrow, focus)
Pass the event to the contained widget. May change focus on button 1 press.
Pass the event to the contained widget. May change focus on button 1 press.
[ "Pass", "the", "event", "to", "the", "contained", "widget", ".", "May", "change", "focus", "on", "button", "1", "press", "." ]
def mouse_event(self, size, event, button, col, row, focus): """ Pass the event to the contained widget. May change focus on button 1 press. """ wrow = 0 item_rows = self.get_item_rows(size, focus) for i, (r, w) in enumerate(zip(item_rows, (w for (w, options) in self.contents))): if wrow + r > row: break wrow += r else: return False focus = focus and self.focus_item == w if is_mouse_press(event) and button == 1: if w.selectable(): self.focus_position = i if not hasattr(w, 'mouse_event'): return False tsize = self.get_item_size(size, i, focus, item_rows) return w.mouse_event(tsize, event, button, col, row-wrow, focus)
[ "def", "mouse_event", "(", "self", ",", "size", ",", "event", ",", "button", ",", "col", ",", "row", ",", "focus", ")", ":", "wrow", "=", "0", "item_rows", "=", "self", ".", "get_item_rows", "(", "size", ",", "focus", ")", "for", "i", ",", "(", "...
https://github.com/urwid/urwid/blob/e2423b5069f51d318ea1ac0f355a0efe5448f7eb/urwid/container.py#L1701-L1726
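The loop at the top of `Pile.mouse_event` above translates an absolute row into a (widget index, row offset within that widget) pair by walking cumulative heights. Isolated as a pure function:

```python
def find_widget(item_rows, row):
    """Return (index, row_within_widget) for the widget covering `row`,
    or None when the row is past every widget (the for/else above)."""
    wrow = 0
    for i, r in enumerate(item_rows):
        if wrow + r > row:
            return i, row - wrow
        wrow += r
    return None
```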
selfteaching/selfteaching-python-camp
9982ee964b984595e7d664b07c389cddaf158f1e
19100205/Ceasar1978/pip-19.0.3/src/pip/_internal/req/req_uninstall.py
python
UninstallPathSet._allowed_to_proceed
(self, verbose)
return ask('Proceed (y/n)? ', ('y', 'n')) == 'y'
Display which files would be deleted and prompt for confirmation
Display which files would be deleted and prompt for confirmation
[ "Display", "which", "files", "would", "be", "deleted", "and", "prompt", "for", "confirmation" ]
def _allowed_to_proceed(self, verbose): """Display which files would be deleted and prompt for confirmation """ def _display(msg, paths): if not paths: return logger.info(msg) with indent_log(): for path in sorted(compact(paths)): logger.info(path) if not verbose: will_remove, will_skip = compress_for_output_listing(self.paths) else: # In verbose mode, display all the files that are going to be # deleted. will_remove = list(self.paths) will_skip = set() _display('Would remove:', will_remove) _display('Would not remove (might be manually added):', will_skip) _display('Would not remove (outside of prefix):', self._refuse) if verbose: _display('Will actually move:', compress_for_rename(self.paths)) return ask('Proceed (y/n)? ', ('y', 'n')) == 'y'
[ "def", "_allowed_to_proceed", "(", "self", ",", "verbose", ")", ":", "def", "_display", "(", "msg", ",", "paths", ")", ":", "if", "not", "paths", ":", "return", "logger", ".", "info", "(", "msg", ")", "with", "indent_log", "(", ")", ":", "for", "path...
https://github.com/selfteaching/selfteaching-python-camp/blob/9982ee964b984595e7d664b07c389cddaf158f1e/19100205/Ceasar1978/pip-19.0.3/src/pip/_internal/req/req_uninstall.py#L368-L395
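The final `ask('Proceed (y/n)? ', ('y', 'n'))` above is pip's prompt helper, which re-asks until an allowed answer arrives. A hypothetical re-implementation with the input function injected so it can be driven without a terminal:

```python
def ask(message, options, input_fn=input):
    # Re-prompt until the normalised response is one of the options.
    while True:
        response = input_fn(message).strip().lower()
        if response in options:
            return response

# Drive the prompt with canned answers instead of a real terminal.
_answers = iter(['maybe', 'Y'])
answer = ask('Proceed (y/n)? ', ('y', 'n'),
             input_fn=lambda _msg: next(_answers))
```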
holzschu/Carnets
44effb10ddfc6aa5c8b0687582a724ba82c6b547
Library/lib/python3.7/site-packages/networkx/linalg/attrmatrix.py
python
_node_value
(G, node_attr)
return value
Returns a function that returns a value from G.nodes[u]. We return a function expecting a node as its sole argument. Then, in the simplest scenario, the returned function will return G.nodes[u][node_attr]. However, we also handle the case when `node_attr` is None or when it is a function itself. Parameters ---------- G : graph A NetworkX graph node_attr : {None, str, callable} Specification of how the value of the node attribute should be obtained from the node attribute dictionary. Returns ------- value : function A function expecting a node as its sole argument. The function will returns a value from G.nodes[u] that depends on `edge_attr`.
Returns a function that returns a value from G.nodes[u].
[ "Returns", "a", "function", "that", "returns", "a", "value", "from", "G", ".", "nodes", "[", "u", "]", "." ]
def _node_value(G, node_attr): """Returns a function that returns a value from G.nodes[u]. We return a function expecting a node as its sole argument. Then, in the simplest scenario, the returned function will return G.nodes[u][node_attr]. However, we also handle the case when `node_attr` is None or when it is a function itself. Parameters ---------- G : graph A NetworkX graph node_attr : {None, str, callable} Specification of how the value of the node attribute should be obtained from the node attribute dictionary. Returns ------- value : function A function expecting a node as its sole argument. The function will returns a value from G.nodes[u] that depends on `edge_attr`. """ if node_attr is None: def value(u): return u elif not hasattr(node_attr, '__call__'): # assume it is a key for the node attribute dictionary def value(u): return G.nodes[u][node_attr] else: # Advanced: Allow users to specify something else. # # For example, # node_attr = lambda u: G.nodes[u].get('size', .5) * 3 # value = node_attr return value
[ "def", "_node_value", "(", "G", ",", "node_attr", ")", ":", "if", "node_attr", "is", "None", ":", "def", "value", "(", "u", ")", ":", "return", "u", "elif", "not", "hasattr", "(", "node_attr", ",", "'__call__'", ")", ":", "# assume it is a key for the node...
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/networkx/linalg/attrmatrix.py#L10-L47
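The three-way dispatch in `_node_value` above (identity for `None`, attribute lookup for a key, pass-through for a callable) can be sketched with a plain dict-of-dicts standing in for `G.nodes`:

```python
def node_value(nodes, node_attr):
    if node_attr is None:
        def value(u):          # identity: return the node itself
            return u
    elif not callable(node_attr):
        def value(u):          # treat node_attr as an attribute key
            return nodes[u][node_attr]
    else:
        value = node_attr      # user-supplied function, used as-is
    return value

nodes = {1: {'size': 0.5}, 2: {'size': 2.0}}
```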
PaddlePaddle/PARL
5fb7a5d2b5d0f0dac57fdf4acb9e79485c7efa96
parl/utils/replay_memory.py
python
ReplayMemory.load_from_d4rl
(self, dataset)
load data from d4rl dataset(https://github.com/rail-berkeley/d4rl#using-d4rl) to replay memory. Args: dataset(dict): dataset that contains: observations (np.float32): shape of (batch_size, obs_dim), next_observations (np.int32): shape of (batch_size, obs_dim), actions (np.float32): shape of (batch_size, act_dim), rewards (np.float32): shape of (batch_size), terminals (bool): shape of (batch_size) Example: .. code-block:: python import gym import d4rl env = gym.make("hopper-medium-v0") rpm = ReplayMemory(max_size=int(2e6), obs_dim=11, act_dim=3) rpm.load_from_d4rl(d4rl.qlearning_dataset(env)) # Output # Dataset Info: # key: observations, shape: (999981, 11), dtype: float32 # key: actions, shape: (999981, 3), dtype: float32 # key: next_observations, shape: (999981, 11), dtype: float32 # key: rewards, shape: (999981,), dtype: float32 # key: terminals, shape: (999981,), dtype: bool # Number of terminals on: 3045
load data from d4rl dataset(https://github.com/rail-berkeley/d4rl#using-d4rl) to replay memory.
[ "load", "data", "from", "d4rl", "dataset", "(", "https", ":", "//", "github", ".", "com", "/", "rail", "-", "berkeley", "/", "d4rl#using", "-", "d4rl", ")", "to", "replay", "memory", "." ]
def load_from_d4rl(self, dataset): """ load data from d4rl dataset(https://github.com/rail-berkeley/d4rl#using-d4rl) to replay memory. Args: dataset(dict): dataset that contains: observations (np.float32): shape of (batch_size, obs_dim), next_observations (np.int32): shape of (batch_size, obs_dim), actions (np.float32): shape of (batch_size, act_dim), rewards (np.float32): shape of (batch_size), terminals (bool): shape of (batch_size) Example: .. code-block:: python import gym import d4rl env = gym.make("hopper-medium-v0") rpm = ReplayMemory(max_size=int(2e6), obs_dim=11, act_dim=3) rpm.load_from_d4rl(d4rl.qlearning_dataset(env)) # Output # Dataset Info: # key: observations, shape: (999981, 11), dtype: float32 # key: actions, shape: (999981, 3), dtype: float32 # key: next_observations, shape: (999981, 11), dtype: float32 # key: rewards, shape: (999981,), dtype: float32 # key: terminals, shape: (999981,), dtype: bool # Number of terminals on: 3045 """ logger.info("Dataset Info: ") for key in dataset: logger.info('key: {},\tshape: {},\tdtype: {}'.format( key, dataset[key].shape, dataset[key].dtype)) assert 'observations' in dataset assert 'next_observations' in dataset assert 'actions' in dataset assert 'rewards' in dataset assert 'terminals' in dataset self.obs = dataset['observations'] self.next_obs = dataset['next_observations'] self.action = dataset['actions'] self.reward = dataset['rewards'] self.terminal = dataset['terminals'] self._curr_size = dataset['terminals'].shape[0] assert self._curr_size <= self.max_size, 'please set a proper max_size for ReplayMemory' logger.info('Number of terminals on: {}'.format(self.terminal.sum()))
[ "def", "load_from_d4rl", "(", "self", ",", "dataset", ")", ":", "logger", ".", "info", "(", "\"Dataset Info: \"", ")", "for", "key", "in", "dataset", ":", "logger", ".", "info", "(", "'key: {},\\tshape: {},\\tdtype: {}'", ".", "format", "(", "key", ",", "dat...
https://github.com/PaddlePaddle/PARL/blob/5fb7a5d2b5d0f0dac57fdf4acb9e79485c7efa96/parl/utils/replay_memory.py#L149-L199
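The record above documents a bulk loader from a d4rl-style dataset dict into a replay memory. A minimal, self-contained sketch of the same pattern follows; the class and method here are reimplemented from scratch for illustration (they are assumptions, not PARL's actual `ReplayMemory`), and the dataset is synthetic rather than produced by `d4rl.qlearning_dataset`:

```python
import numpy as np

# Minimal replay memory with a load_from_d4rl-style bulk loader.
class ReplayMemory:
    def __init__(self, max_size, obs_dim, act_dim):
        self.max_size = max_size
        self.obs_dim = obs_dim
        self.act_dim = act_dim
        self._curr_size = 0

    def load_from_dataset(self, dataset):
        # Validate the five keys a d4rl qlearning dataset provides.
        for key in ('observations', 'next_observations', 'actions',
                    'rewards', 'terminals'):
            assert key in dataset, f'missing key: {key}'
        self.obs = dataset['observations']
        self.next_obs = dataset['next_observations']
        self.action = dataset['actions']
        self.reward = dataset['rewards']
        self.terminal = dataset['terminals']
        self._curr_size = self.terminal.shape[0]
        assert self._curr_size <= self.max_size, 'increase max_size'
        return self._curr_size

# Synthetic stand-in for d4rl.qlearning_dataset(env):
n, obs_dim, act_dim = 4, 11, 3
dataset = {
    'observations': np.zeros((n, obs_dim), np.float32),
    'next_observations': np.zeros((n, obs_dim), np.float32),
    'actions': np.zeros((n, act_dim), np.float32),
    'rewards': np.zeros(n, np.float32),
    'terminals': np.array([False, False, True, False]),
}
rpm = ReplayMemory(max_size=10, obs_dim=obs_dim, act_dim=act_dim)
print(rpm.load_from_dataset(dataset))  # → 4
```

The key design point mirrored here is that loading replaces the buffers wholesale and sets `_curr_size`, rather than appending transition by transition.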
demisto/content
5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07
Packs/OpsGenie/Integrations/OpsGenieV3/OpsGenieV3.py
python
fetch_incidents_command
(client: Client, params: Dict[str, Any], last_run: Optional[Dict] = None)
return incidents + alerts, {ALERT_TYPE: {'lastRun': last_run_alerts, 'next_page': next_page_alerts}, INCIDENT_TYPE: {'lastRun': last_run_incidents, 'next_page': next_page_incidents} }
Used to fetch incidents into Demisto Documentation: https://github.com/demisto/content/tree/master/docs/fetching_incidents Args: client: Client object with request last_run: Last fetch object occurs params: demisto params Returns: incidents, new last_run
Used to fetch incidents into Demisto Documentation: https://github.com/demisto/content/tree/master/docs/fetching_incidents
[ "Uses", "to", "fetch", "incidents", "into", "Demisto", "Documentation", ":", "https", ":", "//", "github", ".", "com", "/", "demisto", "/", "content", "/", "tree", "/", "master", "/", "docs", "/", "fetching_incidents" ]
def fetch_incidents_command(client: Client, params: Dict[str, Any], last_run: Optional[Dict] = None) -> Tuple[List[Dict[str, Any]], Dict]: """Uses to fetch incidents into Demisto Documentation: https://github.com/demisto/content/tree/master/docs/fetching_incidents Args: client: Client object with request last_run: Last fetch object occurs params: demisto params Returns: incidents, new last_run """ demisto.debug(f"Got incidentType={params.get('event_types')}") event_type = params.get('event_types', [ALL_TYPE]) demisto.debug(f"Got event_type={event_type}") now = _get_utc_now() incidents = [] alerts = [] last_run_alerts = demisto.get(last_run, f"{ALERT_TYPE}.lastRun") next_page_alerts = demisto.get(last_run, f"{ALERT_TYPE}.next_page") last_run_incidents = demisto.get(last_run, f"{INCIDENT_TYPE}.lastRun") next_page_incidents = demisto.get(last_run, f"{INCIDENT_TYPE}.next_page") query = params.get('query') limit = int(params.get('max_fetch', 50)) fetch_time = params.get('first_fetch', '3 days').strip() status = params.get('status') priority = params.get('priority') tags = params.get('tags') if ALERT_TYPE in event_type or ALL_TYPE in event_type: alerts, next_page_alerts, last_run_alerts = fetch_incidents_by_type(client, query, limit, fetch_time, status, priority, tags, client.list_alerts, now, demisto.get(last_run, f"{ALERT_TYPE}")) if INCIDENT_TYPE in event_type or ALL_TYPE in event_type: incidents, next_page_incidents, last_run_incidents = fetch_incidents_by_type(client, query, limit, fetch_time, status, priority, tags, client.list_incidents, now, demisto.get(last_run, f"{INCIDENT_TYPE}")) return incidents + alerts, {ALERT_TYPE: {'lastRun': last_run_alerts, 'next_page': next_page_alerts}, INCIDENT_TYPE: {'lastRun': last_run_incidents, 'next_page': next_page_incidents} }
[ "def", "fetch_incidents_command", "(", "client", ":", "Client", ",", "params", ":", "Dict", "[", "str", ",", "Any", "]", ",", "last_run", ":", "Optional", "[", "Dict", "]", "=", "None", ")", "->", "Tuple", "[", "List", "[", "Dict", "[", "str", ",", ...
https://github.com/demisto/content/blob/5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07/Packs/OpsGenie/Integrations/OpsGenieV3/OpsGenieV3.py#L967-L1023
uwdata/termite-data-server
1085571407c627bdbbd21c352e793fed65d09599
web2py/gluon/contrib/gateways/fcgi.py
python
Server.__init__
(self, handler=None, maxwrite=8192, bindAddress=None, umask=None, multiplexed=False)
handler, if present, must reference a function or method that takes one argument: a Request object. If handler is not specified at creation time, Server *must* be subclassed. (The handler method below is abstract.) maxwrite is the maximum number of bytes (per Record) to write to the server. I've noticed mod_fastcgi has a relatively small receive buffer (8K or so). bindAddress, if present, must either be a string or a 2-tuple. If present, run() will open its own listening socket. You would use this if you wanted to run your application as an 'external' FastCGI app. (i.e. the webserver would no longer be responsible for starting your app) If a string, it will be interpreted as a filename and a UNIX socket will be opened. If a tuple, the first element, a string, is the interface name/IP to bind to, and the second element (an int) is the port number. Set multiplexed to True if you want to handle multiple requests per connection. Some FastCGI backends (namely mod_fastcgi) don't multiplex requests at all, so by default this is off (which saves on thread creation/locking overhead). If threads aren't available, this keyword is ignored; it's not possible to multiplex requests at all.
handler, if present, must reference a function or method that takes one argument: a Request object. If handler is not specified at creation time, Server *must* be subclassed. (The handler method below is abstract.)
[ "handler", "if", "present", "must", "reference", "a", "function", "or", "method", "that", "takes", "one", "argument", ":", "a", "Request", "object", ".", "If", "handler", "is", "not", "specified", "at", "creation", "time", "Server", "*", "must", "*", "be",...
def __init__(self, handler=None, maxwrite=8192, bindAddress=None, umask=None, multiplexed=False): """ handler, if present, must reference a function or method that takes one argument: a Request object. If handler is not specified at creation time, Server *must* be subclassed. (The handler method below is abstract.) maxwrite is the maximum number of bytes (per Record) to write to the server. I've noticed mod_fastcgi has a relatively small receive buffer (8K or so). bindAddress, if present, must either be a string or a 2-tuple. If present, run() will open its own listening socket. You would use this if you wanted to run your application as an 'external' FastCGI app. (i.e. the webserver would no longer be responsible for starting your app) If a string, it will be interpreted as a filename and a UNIX socket will be opened. If a tuple, the first element, a string, is the interface name/IP to bind to, and the second element (an int) is the port number. Set multiplexed to True if you want to handle multiple requests per connection. Some FastCGI backends (namely mod_fastcgi) don't multiplex requests at all, so by default this is off (which saves on thread creation/locking overhead). If threads aren't available, this keyword is ignored; it's not possible to multiplex requests at all. """ if handler is not None: self.handler = handler self.maxwrite = maxwrite if thread_available: try: import resource # Attempt to glean the maximum number of connections # from the OS. maxConns = resource.getrlimit(resource.RLIMIT_NOFILE)[0] except ImportError: maxConns = 100 # Just some made up number. maxReqs = maxConns if multiplexed: self._connectionClass = MultiplexedConnection maxReqs *= 5 # Another made up number. else: self._connectionClass = Connection self.capability = { FCGI_MAX_CONNS: maxConns, FCGI_MAX_REQS: maxReqs, FCGI_MPXS_CONNS: multiplexed and 1 or 0 } else: self._connectionClass = Connection self.capability = { # If threads aren't available, these are pretty much correct. FCGI_MAX_CONNS: 1, FCGI_MAX_REQS: 1, FCGI_MPXS_CONNS: 0 } self._bindAddress = bindAddress self._umask = umask
[ "def", "__init__", "(", "self", ",", "handler", "=", "None", ",", "maxwrite", "=", "8192", ",", "bindAddress", "=", "None", ",", "umask", "=", "None", ",", "multiplexed", "=", "False", ")", ":", "if", "handler", "is", "not", "None", ":", "self", ".",...
https://github.com/uwdata/termite-data-server/blob/1085571407c627bdbbd21c352e793fed65d09599/web2py/gluon/contrib/gateways/fcgi.py#L924-L983
odlgroup/odl
0b088df8dc4621c68b9414c3deff9127f4c4f11d
odl/contrib/torch/operator.py
python
copy_if_zero_strides
(arr)
return arr.copy() if 0 in arr.strides else arr
Workaround for NumPy issue #9165 with 0 in arr.strides.
Workaround for NumPy issue #9165 with 0 in arr.strides.
[ "Workaround", "for", "NumPy", "issue", "#9165", "with", "0", "in", "arr", ".", "strides", "." ]
def copy_if_zero_strides(arr): """Workaround for NumPy issue #9165 with 0 in arr.strides.""" assert isinstance(arr, np.ndarray) return arr.copy() if 0 in arr.strides else arr
[ "def", "copy_if_zero_strides", "(", "arr", ")", ":", "assert", "isinstance", "(", "arr", ",", "np", ".", "ndarray", ")", "return", "arr", ".", "copy", "(", ")", "if", "0", "in", "arr", ".", "strides", "else", "arr" ]
https://github.com/odlgroup/odl/blob/0b088df8dc4621c68b9414c3deff9127f4c4f11d/odl/contrib/torch/operator.py#L513-L516
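The `copy_if_zero_strides` record above is a one-liner workaround; the situation it guards against is easy to reproduce with `np.broadcast_to`, which returns a read-only view with a 0 in `.strides`:

```python
import numpy as np

# np.broadcast_to produces a read-only view with 0 in .strides; the
# workaround materializes such arrays into ordinary contiguous copies.
def copy_if_zero_strides(arr):
    assert isinstance(arr, np.ndarray)
    return arr.copy() if 0 in arr.strides else arr

view = np.broadcast_to(np.arange(3), (4, 3))  # strides like (0, 8)
fixed = copy_if_zero_strides(view)

print(0 in view.strides)      # → True
print(0 in fixed.strides)     # → False
print(fixed.flags.writeable)  # → True
```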
limodou/ulipad
4c7d590234f39cac80bb1d36dca095b646e287fb
packages/docutils/nodes.py
python
Node.deepcopy
(self)
Return a deep copy of self (also copying children).
Return a deep copy of self (also copying children).
[ "Return", "a", "deep", "copy", "of", "self", "(", "also", "copying", "children", ")", "." ]
def deepcopy(self): """Return a deep copy of self (also copying children).""" raise NotImplementedError
[ "def", "deepcopy", "(", "self", ")", ":", "raise", "NotImplementedError" ]
https://github.com/limodou/ulipad/blob/4c7d590234f39cac80bb1d36dca095b646e287fb/packages/docutils/nodes.py#L87-L89
django/django
0a17666045de6739ae1c2ac695041823d5f827f7
django/core/management/color.py
python
no_style
()
return make_style('nocolor')
Return a Style object with no color scheme.
Return a Style object with no color scheme.
[ "Return", "a", "Style", "object", "with", "no", "color", "scheme", "." ]
def no_style(): """ Return a Style object with no color scheme. """ return make_style('nocolor')
[ "def", "no_style", "(", ")", ":", "return", "make_style", "(", "'nocolor'", ")" ]
https://github.com/django/django/blob/0a17666045de6739ae1c2ac695041823d5f827f7/django/core/management/color.py#L94-L98
triaquae/triaquae
bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9
TriAquae/models/django/db/backends/sqlite3/base.py
python
parse_datetime_with_timezone_support
(value)
return dt
[]
def parse_datetime_with_timezone_support(value): dt = parse_datetime(value) # Confirm that dt is naive before overwriting its tzinfo. if dt is not None and settings.USE_TZ and timezone.is_naive(dt): dt = dt.replace(tzinfo=timezone.utc) return dt
[ "def", "parse_datetime_with_timezone_support", "(", "value", ")", ":", "dt", "=", "parse_datetime", "(", "value", ")", "# Confirm that dt is naive before overwriting its tzinfo.", "if", "dt", "is", "not", "None", "and", "settings", ".", "USE_TZ", "and", "timezone", "....
https://github.com/triaquae/triaquae/blob/bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9/TriAquae/models/django/db/backends/sqlite3/base.py#L40-L45
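The Django record above attaches UTC only when the parsed datetime is naive. The same pattern can be sketched with the standard library alone; `datetime.fromisoformat` and the `tzinfo is None` check stand in for Django's `parse_datetime` and `timezone.is_naive`:

```python
from datetime import datetime, timezone

# Parse, then attach UTC only if the result is naive (tzinfo is None).
def parse_with_tz(value, use_tz=True):
    dt = datetime.fromisoformat(value)
    # Confirm that dt is naive before overwriting its tzinfo.
    if dt is not None and use_tz and dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

naive = parse_with_tz('2024-01-02 03:04:05')
aware = parse_with_tz('2024-01-02 03:04:05+02:00')
print(naive.tzinfo)       # → UTC
print(aware.utcoffset())  # → 2:00:00
```

Note that `replace(tzinfo=...)` relabels the wall-clock value without converting it, which is exactly why the naive check matters: an already-aware value must not be relabeled.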
chapmanb/bcbb
dbfb52711f0bfcc1d26c5a5b53c9ff4f50dc0027
gff/BCBio/GFF/GFFParser.py
python
parse
(gff_files, base_dict=None, limit_info=None, target_lines=None)
High level interface to parse GFF files into SeqRecords and SeqFeatures.
High level interface to parse GFF files into SeqRecords and SeqFeatures.
[ "High", "level", "interface", "to", "parse", "GFF", "files", "into", "SeqRecords", "and", "SeqFeatures", "." ]
def parse(gff_files, base_dict=None, limit_info=None, target_lines=None): """High level interface to parse GFF files into SeqRecords and SeqFeatures. """ parser = GFFParser() for rec in parser.parse_in_parts(gff_files, base_dict, limit_info, target_lines): yield rec
[ "def", "parse", "(", "gff_files", ",", "base_dict", "=", "None", ",", "limit_info", "=", "None", ",", "target_lines", "=", "None", ")", ":", "parser", "=", "GFFParser", "(", ")", "for", "rec", "in", "parser", ".", "parse_in_parts", "(", "gff_files", ",",...
https://github.com/chapmanb/bcbb/blob/dbfb52711f0bfcc1d26c5a5b53c9ff4f50dc0027/gff/BCBio/GFF/GFFParser.py#L776-L782
wistbean/learn_python3_spider
73c873f4845f4385f097e5057407d03dd37a117b
stackoverflow/venv/lib/python3.6/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pkg_resources/__init__.py
python
Distribution.insert_on
(self, path, loc=None, replace=False)
return
Ensure self.location is on path If replace=False (default): - If location is already in path anywhere, do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent. - Else: add to the end of path. If replace=True: - If location is already on path anywhere (not eggs) or higher priority than its parent (eggs) do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent, removing any lower-priority entries. - Else: add it to the front of path.
Ensure self.location is on path
[ "Ensure", "self", ".", "location", "is", "on", "path" ]
def insert_on(self, path, loc=None, replace=False): """Ensure self.location is on path If replace=False (default): - If location is already in path anywhere, do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent. - Else: add to the end of path. If replace=True: - If location is already on path anywhere (not eggs) or higher priority than its parent (eggs) do nothing. - Else: - If it's an egg and its parent directory is on path, insert just ahead of the parent, removing any lower-priority entries. - Else: add it to the front of path. """ loc = loc or self.location if not loc: return nloc = _normalize_cached(loc) bdir = os.path.dirname(nloc) npath = [(p and _normalize_cached(p) or p) for p in path] for p, item in enumerate(npath): if item == nloc: if replace: break else: # don't modify path (even removing duplicates) if # found and not replace return elif item == bdir and self.precedence == EGG_DIST: # if it's an .egg, give it precedence over its directory # UNLESS it's already been added to sys.path and replace=False if (not replace) and nloc in npath[p:]: return if path is sys.path: self.check_version_conflict() path.insert(p, loc) npath.insert(p, nloc) break else: if path is sys.path: self.check_version_conflict() if replace: path.insert(0, loc) else: path.append(loc) return # p is the spot where we found or inserted loc; now remove duplicates while True: try: np = npath.index(nloc, p + 1) except ValueError: break else: del npath[np], path[np] # ha! p = np return
[ "def", "insert_on", "(", "self", ",", "path", ",", "loc", "=", "None", ",", "replace", "=", "False", ")", ":", "loc", "=", "loc", "or", "self", ".", "location", "if", "not", "loc", ":", "return", "nloc", "=", "_normalize_cached", "(", "loc", ")", "...
https://github.com/wistbean/learn_python3_spider/blob/73c873f4845f4385f097e5057407d03dd37a117b/stackoverflow/venv/lib/python3.6/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pkg_resources/__init__.py#L2746-L2812
vyos/vyos-1x
6e8a8934a7d4e1b21d7c828e372303683b499b56
python/vyos/ifconfig/section.py
python
Section._sort_interfaces
(cls, generator)
return l
return a list of the sorted interface by number, vlan, qinq
return a list of the sorted interface by number, vlan, qinq
[ "return", "a", "list", "of", "the", "sorted", "interface", "by", "number", "vlan", "qinq" ]
def _sort_interfaces(cls, generator): """ return a list of the sorted interface by number, vlan, qinq """ def key(ifname): value = 0 parts = re.split(r'([^0-9]+)([0-9]+)[.]?([0-9]+)?[.]?([0-9]+)?', ifname) length = len(parts) name = parts[1] if length >= 3 else parts[0] # the +1 makes sure eth0.0.0 after eth0.0 number = int(parts[2]) + 1 if length >= 4 and parts[2] is not None else 0 vlan = int(parts[3]) + 1 if length >= 5 and parts[3] is not None else 0 qinq = int(parts[4]) + 1 if length >= 6 and parts[4] is not None else 0 # so that "lo" (or short names) are handled (as "loa") for n in (name + 'aaa')[:3]: value *= 100 value += (ord(n) - ord('a')) value += number # vlan are 16 bits, so this can not overflow value = (value << 16) + vlan value = (value << 16) + qinq return value l = list(generator) l.sort(key=key) return l
[ "def", "_sort_interfaces", "(", "cls", ",", "generator", ")", ":", "def", "key", "(", "ifname", ")", ":", "value", "=", "0", "parts", "=", "re", ".", "split", "(", "r'([^0-9]+)([0-9]+)[.]?([0-9]+)?[.]?([0-9]+)?'", ",", "ifname", ")", "length", "=", "len", ...
https://github.com/vyos/vyos-1x/blob/6e8a8934a7d4e1b21d7c828e372303683b499b56/python/vyos/ifconfig/section.py#L109-L135
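The `_sort_interfaces` record above packs name, number, VLAN, and QinQ into a single integer sort key. The key function can be exercised standalone (copied from the record, with the class wrapper dropped):

```python
import re

# Sort key: 3-letter name prefix, then interface number, then the
# VLAN and QinQ tags packed into the low 32 bits.
def iface_key(ifname):
    value = 0
    parts = re.split(r'([^0-9]+)([0-9]+)[.]?([0-9]+)?[.]?([0-9]+)?', ifname)
    length = len(parts)
    name = parts[1] if length >= 3 else parts[0]
    # the +1 makes sure eth0.0.0 sorts after eth0.0
    number = int(parts[2]) + 1 if length >= 4 and parts[2] is not None else 0
    vlan = int(parts[3]) + 1 if length >= 5 and parts[3] is not None else 0
    qinq = int(parts[4]) + 1 if length >= 6 and parts[4] is not None else 0
    # pad so short names like "lo" are handled (as "loa")
    for n in (name + 'aaa')[:3]:
        value = value * 100 + (ord(n) - ord('a'))
    value += number
    value = (value << 16) + vlan   # vlan tags fit in 16 bits
    value = (value << 16) + qinq
    return value

ifaces = ['eth1', 'lo', 'eth0.10', 'eth0', 'eth0.2']
print(sorted(ifaces, key=iface_key))
# → ['eth0', 'eth0.2', 'eth0.10', 'eth1', 'lo']
```

The packing gives numeric ordering within a name ('eth0.2' before 'eth0.10'), which plain string sorting would get wrong.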
buke/GreenOdoo
3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df
runtime/python/lib/python2.7/collections.py
python
OrderedDict.iteritems
(self)
od.iteritems -> an iterator over the (key, value) pairs in od
od.iteritems -> an iterator over the (key, value) pairs in od
[ "od", ".", "iteritems", "-", ">", "an", "iterator", "over", "the", "(", "key", "value", ")", "pairs", "in", "od" ]
def iteritems(self): 'od.iteritems -> an iterator over the (key, value) pairs in od' for k in self: yield (k, self[k])
[ "def", "iteritems", "(", "self", ")", ":", "for", "k", "in", "self", ":", "yield", "(", "k", ",", "self", "[", "k", "]", ")" ]
https://github.com/buke/GreenOdoo/blob/3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df/runtime/python/lib/python2.7/collections.py#L121-L124
theislab/scvelo
1805ab4a72d3f34496f0ef246500a159f619d3a2
docs/source/conf.py
python
modurl
(qualname)
return f"{github_url}/{path}{fragment}"
Get the full GitHub URL for some object’s qualname.
Get the full GitHub URL for some object’s qualname.
[ "Get", "the", "full", "GitHub", "URL", "for", "some", "object’s", "qualname", "." ]
def modurl(qualname): """Get the full GitHub URL for some object’s qualname.""" obj, module = get_obj_module(qualname) github_url = github_url_scvelo try: path = PurePosixPath(Path(module.__file__).resolve().relative_to(project_dir)) except ValueError: # trying to document something from another package github_url = ( github_url_read_loom if "read_loom" in qualname else github_url_read if "read" in qualname else github_url_scanpy ) path = "/".join(module.__file__.split("/")[-2:]) start, end = get_linenos(obj) fragment = f"#L{start}-L{end}" if start and end else "" return f"{github_url}/{path}{fragment}"
[ "def", "modurl", "(", "qualname", ")", ":", "obj", ",", "module", "=", "get_obj_module", "(", "qualname", ")", "github_url", "=", "github_url_scvelo", "try", ":", "path", "=", "PurePosixPath", "(", "Path", "(", "module", ".", "__file__", ")", ".", "resolve...
https://github.com/theislab/scvelo/blob/1805ab4a72d3f34496f0ef246500a159f619d3a2/docs/source/conf.py#L250-L268
omz/PythonistaAppTemplate
f560f93f8876d82a21d108977f90583df08d55af
PythonistaAppTemplate/PythonistaKit.framework/pylib/mimify.py
python
mime_decode
(line)
return newline + line[pos:]
Decode a single line of quoted-printable text to 8bit.
Decode a single line of quoted-printable text to 8bit.
[ "Decode", "a", "single", "line", "of", "quoted", "-", "printable", "text", "to", "8bit", "." ]
def mime_decode(line): """Decode a single line of quoted-printable text to 8bit.""" newline = '' pos = 0 while 1: res = mime_code.search(line, pos) if res is None: break newline = newline + line[pos:res.start(0)] + \ chr(int(res.group(1), 16)) pos = res.end(0) return newline + line[pos:]
[ "def", "mime_decode", "(", "line", ")", ":", "newline", "=", "''", "pos", "=", "0", "while", "1", ":", "res", "=", "mime_code", ".", "search", "(", "line", ",", "pos", ")", "if", "res", "is", "None", ":", "break", "newline", "=", "newline", "+", ...
https://github.com/omz/PythonistaAppTemplate/blob/f560f93f8876d82a21d108977f90583df08d55af/PythonistaAppTemplate/PythonistaKit.framework/pylib/mimify.py#L93-L104
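The `mime_decode` record above depends on a module-level `mime_code` regex that is not shown. A self-contained version follows, assuming `mime_code` is the usual quoted-printable escape: `=` followed by two hex digits.

```python
import re

# Assumed definition of the module-level mime_code pattern.
mime_code = re.compile(r'=([0-9a-f][0-9a-f])', re.IGNORECASE)

def mime_decode(line):
    """Decode a single line of quoted-printable text to 8bit."""
    newline = ''
    pos = 0
    while True:
        res = mime_code.search(line, pos)
        if res is None:
            break
        # Copy literal text, then the decoded byte for '=XX'.
        newline = newline + line[pos:res.start(0)] + chr(int(res.group(1), 16))
        pos = res.end(0)
    return newline + line[pos:]

print(mime_decode('caf=E9 =3D coffee'))  # → café = coffee
```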
maas/maas
db2f89970c640758a51247c59bf1ec6f60cf4ab5
src/maasserver/api/nodes.py
python
NodeHandler.delete
(self, request, system_id)
return rc.DELETED
@description-title Delete a node @description Deletes a node with a given system_id. @param (string) "{system_id}" [required=true] A node's system_id. @success (http-status-code) "204" 204 @error (http-status-code) "404" 404 @error (content) "not-found" The requested node is not found. @error-example "not-found" Not Found @error (http-status-code) "403" 403 @error (content) "no-perms" The user is not authorized to delete the node.
@description-title Delete a node @description Deletes a node with a given system_id.
[ "@description", "-", "title", "Delete", "a", "node", "@description", "Deletes", "a", "node", "with", "a", "given", "system_id", "." ]
def delete(self, request, system_id): """@description-title Delete a node @description Deletes a node with a given system_id. @param (string) "{system_id}" [required=true] A node's system_id. @success (http-status-code) "204" 204 @error (http-status-code) "404" 404 @error (content) "not-found" The requested node is not found. @error-example "not-found" Not Found @error (http-status-code) "403" 403 @error (content) "no-perms" The user is not authorized to delete the node. """ node = self.model.objects.get_node_or_404( system_id=system_id, user=request.user, perm=NodePermission.admin ) node.as_self().delete() return rc.DELETED
[ "def", "delete", "(", "self", ",", "request", ",", "system_id", ")", ":", "node", "=", "self", ".", "model", ".", "objects", ".", "get_node_or_404", "(", "system_id", "=", "system_id", ",", "user", "=", "request", ".", "user", ",", "perm", "=", "NodePe...
https://github.com/maas/maas/blob/db2f89970c640758a51247c59bf1ec6f60cf4ab5/src/maasserver/api/nodes.py#L519-L540
fluentpython/example-code-2e
80f7f84274a47579e59c29a4657691525152c9d5
12-seq-hacking/vector_v2.py
python
Vector.__init__
(self, components)
[]
def __init__(self, components): self._components = array(self.typecode, components)
[ "def", "__init__", "(", "self", ",", "components", ")", ":", "self", ".", "_components", "=", "array", "(", "self", ".", "typecode", ",", "components", ")" ]
https://github.com/fluentpython/example-code-2e/blob/80f7f84274a47579e59c29a4657691525152c9d5/12-seq-hacking/vector_v2.py#L121-L122
paulwinex/pw_MultiScriptEditor
e447e99f87cb07e238baf693b7e124e50efdbc51
multi_script_editor/managers/nuke/main.py
python
ofxPluginPath
()
return ['',]
nuke.ofxPluginPath() -> String list List of all the directories Nuke searched for OFX plugins in. @return: String list
nuke.ofxPluginPath() -> String list
[ "nuke", ".", "ofxPluginPath", "()", "-", ">", "String", "list" ]
def ofxPluginPath(): """nuke.ofxPluginPath() -> String list List of all the directories Nuke searched for OFX plugins in. @return: String list""" return ['',]
[ "def", "ofxPluginPath", "(", ")", ":", "return", "[", "''", ",", "]" ]
https://github.com/paulwinex/pw_MultiScriptEditor/blob/e447e99f87cb07e238baf693b7e124e50efdbc51/multi_script_editor/managers/nuke/main.py#L5671-L5677
salesforce/glad
cc3217437128578a1942582670104bc214506a5e
dataset.py
python
Ontology.from_dict
(cls, d)
return cls(**d)
[]
def from_dict(cls, d): return cls(**d)
[ "def", "from_dict", "(", "cls", ",", "d", ")", ":", "return", "cls", "(", "*", "*", "d", ")" ]
https://github.com/salesforce/glad/blob/cc3217437128578a1942582670104bc214506a5e/dataset.py#L197-L198
baidu/DuReader
43577e29435f5abcb7b02ce6a0019b3f42b1221d
DuReader-2.0/tensorflow/rc_model.py
python
RCModel._compute_loss
(self)
The loss function
The loss function
[ "The", "loss", "function" ]
def _compute_loss(self): """ The loss function """ def sparse_nll_loss(probs, labels, epsilon=1e-9, scope=None): """ negative log likelyhood loss """ with tf.name_scope(scope, "log_loss"): labels = tf.one_hot(labels, tf.shape(probs)[1], axis=1) losses = - tf.reduce_sum(labels * tf.log(probs + epsilon), 1) return losses self.start_loss = sparse_nll_loss(probs=self.start_probs, labels=self.start_label) self.end_loss = sparse_nll_loss(probs=self.end_probs, labels=self.end_label) self.all_params = tf.trainable_variables() self.loss = tf.reduce_mean(tf.add(self.start_loss, self.end_loss)) if self.weight_decay > 0: with tf.variable_scope('l2_loss'): l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in self.all_params]) self.loss += self.weight_decay * l2_loss
[ "def", "_compute_loss", "(", "self", ")", ":", "def", "sparse_nll_loss", "(", "probs", ",", "labels", ",", "epsilon", "=", "1e-9", ",", "scope", "=", "None", ")", ":", "\"\"\"\n negative log likelyhood loss\n \"\"\"", "with", "tf", ".", "name...
https://github.com/baidu/DuReader/blob/43577e29435f5abcb7b02ce6a0019b3f42b1221d/DuReader-2.0/tensorflow/rc_model.py#L179-L200
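The nested `sparse_nll_loss` in the record above computes, per row, `-sum(one_hot(label) * log(prob + eps))`. The arithmetic can be checked without TensorFlow in a NumPy sketch:

```python
import numpy as np

# NumPy version of sparse_nll_loss: one-hot the label positions, then
# take the negative log-probability of the labeled class per row.
def sparse_nll_loss(probs, labels, epsilon=1e-9):
    n, k = probs.shape
    one_hot = np.eye(k)[labels]                      # (n, k) one-hot rows
    return -np.sum(one_hot * np.log(probs + epsilon), axis=1)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
losses = sparse_nll_loss(probs, labels)
print(np.round(losses, 4))  # → [0.3567 0.2231]
```

Each loss is simply `-log(probs[i, labels[i]])` up to the epsilon; the one-hot multiply-and-sum is just a vectorized way to select those entries.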
apple/ccs-calendarserver
13c706b985fb728b9aab42dc0fef85aae21921c3
twistedcaldav/directory/calendaruserproxyloader.py
python
XMLCalendarUserProxyLoader._parseMembers
(self, node, addto)
[]
def _parseMembers(self, node, addto): for child in node: if child.tag == ELEMENT_MEMBER: addto.add(child.text)
[ "def", "_parseMembers", "(", "self", ",", "node", ",", "addto", ")", ":", "for", "child", "in", "node", ":", "if", "child", ".", "tag", "==", "ELEMENT_MEMBER", ":", "addto", ".", "add", "(", "child", ".", "text", ")" ]
https://github.com/apple/ccs-calendarserver/blob/13c706b985fb728b9aab42dc0fef85aae21921c3/twistedcaldav/directory/calendaruserproxyloader.py#L109-L112
pyparallel/pyparallel
11e8c6072d48c8f13641925d17b147bf36ee0ba3
Lib/collections/abc.py
python
Mapping.keys
(self)
return KeysView(self)
D.keys() -> a set-like object providing a view on D's keys
D.keys() -> a set-like object providing a view on D's keys
[ "D", ".", "keys", "()", "-", ">", "a", "set", "-", "like", "object", "providing", "a", "view", "on", "D", "s", "keys" ]
def keys(self): "D.keys() -> a set-like object providing a view on D's keys" return KeysView(self)
[ "def", "keys", "(", "self", ")", ":", "return", "KeysView", "(", "self", ")" ]
https://github.com/pyparallel/pyparallel/blob/11e8c6072d48c8f13641925d17b147bf36ee0ba3/Lib/collections/abc.py#L412-L414
dreamworksanimation/usdmanager
dfd0825300c45d3bba25a585bfd0785d5bc50cb0
usdmanager/__init__.py
python
UsdMngrWindow.customTabWidgetContextMenu
(self, pos)
Slot for the right-click context menu for the tab widget. :Parameters: pos : `QtCore.QPoint` Position of the right-click
Slot for the right-click context menu for the tab widget. :Parameters: pos : `QtCore.QPoint` Position of the right-click
[ "Slot", "for", "the", "right", "-", "click", "context", "menu", "for", "the", "tab", "widget", ".", ":", "Parameters", ":", "pos", ":", "QtCore", ".", "QPoint", "Position", "of", "the", "right", "-", "click" ]
def customTabWidgetContextMenu(self, pos): """ Slot for the right-click context menu for the tab widget. :Parameters: pos : `QtCore.QPoint` Position of the right-click """ menu = QtWidgets.QMenu(self) menu.addAction(self.actionNewTab) menu.addSeparator() menu.addAction(self.actionRefreshTab) menu.addAction(self.actionDuplicateTab) menu.addSeparator() menu.addAction(self.actionCloseTab) menu.addAction(self.actionCloseOther) menu.addAction(self.actionCloseRight) self.contextMenuPos = self.tabWidget.tabBar.mapFromParent(pos) indexOfClickedTab = self.tabWidget.tabBar.tabAt(self.contextMenuPos) # Save the original state so we don't mess with the menu action, since this one action is re-used. # TODO: Maybe make a new action instead of reusing this. state = self.actionCloseTab.isEnabled() if indexOfClickedTab == -1: self.actionCloseTab.setEnabled(False) self.actionCloseOther.setEnabled(False) self.actionCloseRight.setEnabled(False) self.actionRefreshTab.setEnabled(False) self.actionDuplicateTab.setEnabled(False) else: self.actionCloseTab.setEnabled(True) self.actionCloseOther.setEnabled(self.tabWidget.count() > 1) self.actionCloseRight.setEnabled(indexOfClickedTab < self.tabWidget.count() - 1) self.actionRefreshTab.setEnabled(bool(self.tabWidget.widget(indexOfClickedTab).getCurrentPath())) self.actionDuplicateTab.setEnabled(True) menu.exec_(self.tabWidget.mapToGlobal(pos)) del menu self.contextMenuPos = None # Restore previous action state. self.actionCloseTab.setEnabled(state)
[ "def", "customTabWidgetContextMenu", "(", "self", ",", "pos", ")", ":", "menu", "=", "QtWidgets", ".", "QMenu", "(", "self", ")", "menu", ".", "addAction", "(", "self", ".", "actionNewTab", ")", "menu", ".", "addSeparator", "(", ")", "menu", ".", "addAct...
https://github.com/dreamworksanimation/usdmanager/blob/dfd0825300c45d3bba25a585bfd0785d5bc50cb0/usdmanager/__init__.py#L652-L694
nipy/nibabel
4703f4d8e32be4cec30e829c2d93ebe54759bb62
nibabel/casting.py
python
longdouble_precision_improved
()
return not longdouble_lte_float64() and _LD_LTE_FLOAT64
True if longdouble precision increased since initial import This can happen on Windows compiled with MSVC. It may be because libraries compiled with mingw (longdouble is Intel80) get linked to numpy compiled with MSVC (longdouble is Float64)
True if longdouble precision increased since initial import
[ "True", "if", "longdouble", "precision", "increased", "since", "initial", "import" ]
def longdouble_precision_improved(): """ True if longdouble precision increased since initial import This can happen on Windows compiled with MSVC. It may be because libraries compiled with mingw (longdouble is Intel80) get linked to numpy compiled with MSVC (longdouble is Float64) """ return not longdouble_lte_float64() and _LD_LTE_FLOAT64
[ "def", "longdouble_precision_improved", "(", ")", ":", "return", "not", "longdouble_lte_float64", "(", ")", "and", "_LD_LTE_FLOAT64" ]
https://github.com/nipy/nibabel/blob/4703f4d8e32be4cec30e829c2d93ebe54759bb62/nibabel/casting.py#L681-L688
lovelylain/pyctp
fd304de4b50c4ddc31a4190b1caaeb5dec66bc5d
option/ctp/ApiStruct.py
python
ReqFutureSignOut.__init__
(self, TradeCode='', BankID='', BankBranchID='', BrokerID='', BrokerBranchID='', TradeDate='', TradeTime='', BankSerial='', TradingDay='', PlateSerial=0, LastFragment=LF_Yes, SessionID=0, InstallID=0, UserID='', Digest='', CurrencyID='', DeviceID='', BrokerIDByBank='', OperNo='', RequestID=0, TID=0)
[]
def __init__(self, TradeCode='', BankID='', BankBranchID='', BrokerID='', BrokerBranchID='', TradeDate='', TradeTime='', BankSerial='', TradingDay='', PlateSerial=0, LastFragment=LF_Yes, SessionID=0, InstallID=0, UserID='', Digest='', CurrencyID='', DeviceID='', BrokerIDByBank='', OperNo='', RequestID=0, TID=0): self.TradeCode = '' #业务功能码, char[7] self.BankID = '' #银行代码, char[4] self.BankBranchID = 'BankBrchID' #银行分支机构代码, char[5] self.BrokerID = '' #期商代码, char[11] self.BrokerBranchID = 'FutureBranchID' #期商分支机构代码, char[31] self.TradeDate = '' #交易日期, char[9] self.TradeTime = '' #交易时间, char[9] self.BankSerial = '' #银行流水号, char[13] self.TradingDay = 'TradeDate' #交易系统日期 , char[9] self.PlateSerial = 'Serial' #银期平台消息流水号, int self.LastFragment = '' #最后分片标志, char self.SessionID = '' #会话号, int self.InstallID = '' #安装编号, int self.UserID = '' #用户标识, char[16] self.Digest = '' #摘要, char[36] self.CurrencyID = '' #币种代码, char[4] self.DeviceID = '' #渠道标志, char[3] self.BrokerIDByBank = 'BankCodingForFuture' #期货公司银行编码, char[33] self.OperNo = '' #交易柜员, char[17] self.RequestID = '' #请求编号, int self.TID = ''
[ "def", "__init__", "(", "self", ",", "TradeCode", "=", "''", ",", "BankID", "=", "''", ",", "BankBranchID", "=", "''", ",", "BrokerID", "=", "''", ",", "BrokerBranchID", "=", "''", ",", "TradeDate", "=", "''", ",", "TradeTime", "=", "''", ",", "BankS...
https://github.com/lovelylain/pyctp/blob/fd304de4b50c4ddc31a4190b1caaeb5dec66bc5d/option/ctp/ApiStruct.py#L5866-L5887
plotly/plotly.py
cfad7862594b35965c0e000813bd7805e8494a5b
packages/python/plotly/plotly/graph_objs/indicator/gauge/_axis.py
python
Axis.dtick
(self)
return self["dtick"]
Sets the step in-between ticks on this axis. Use with `tick0`. Must be a positive number, or special strings available to "log" and "date" axes. If the axis `type` is "log", then ticks are set every 10^(n*dtick) where n is the tick number. For example, to set a tick mark at 1, 10, 100, 1000, ... set dtick to 1. To set tick marks at 1, 100, 10000, ... set dtick to 2. To set tick marks at 1, 5, 25, 125, 625, 3125, ... set dtick to log_10(5), or 0.69897000433. "log" has several special values; "L<f>", where `f` is a positive number, gives ticks linearly spaced in value (but not position). For example `tick0` = 0.1, `dtick` = "L0.5" will put ticks at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10 plus small digits between, use "D1" (all digits) or "D2" (only 2 and 5). `tick0` is ignored for "D1" and "D2". If the axis `type` is "date", then you must convert the time to milliseconds. For example, to set the interval between ticks to one day, set `dtick` to 86400000.0. "date" also has special values "M<n>" gives ticks spaced by a number of months. `n` must be a positive integer. To set ticks on the 15th of every third month, set `tick0` to "2000-01-15" and `dtick` to "M3". To set ticks every 4 years, set `dtick` to "M48" The 'dtick' property accepts values of any type Returns ------- Any
Sets the step in-between ticks on this axis. Use with `tick0`. Must be a positive number, or special strings available to "log" and "date" axes. If the axis `type` is "log", then ticks are set every 10^(n*dtick) where n is the tick number. For example, to set a tick mark at 1, 10, 100, 1000, ... set dtick to 1. To set tick marks at 1, 100, 10000, ... set dtick to 2. To set tick marks at 1, 5, 25, 125, 625, 3125, ... set dtick to log_10(5), or 0.69897000433. "log" has several special values; "L<f>", where `f` is a positive number, gives ticks linearly spaced in value (but not position). For example `tick0` = 0.1, `dtick` = "L0.5" will put ticks at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10 plus small digits between, use "D1" (all digits) or "D2" (only 2 and 5). `tick0` is ignored for "D1" and "D2". If the axis `type` is "date", then you must convert the time to milliseconds. For example, to set the interval between ticks to one day, set `dtick` to 86400000.0. "date" also has special values "M<n>" gives ticks spaced by a number of months. `n` must be a positive integer. To set ticks on the 15th of every third month, set `tick0` to "2000-01-15" and `dtick` to "M3". To set ticks every 4 years, set `dtick` to "M48" The 'dtick' property accepts values of any type
[ "Sets", "the", "step", "in", "-", "between", "ticks", "on", "this", "axis", ".", "Use", "with", "tick0", ".", "Must", "be", "a", "positive", "number", "or", "special", "strings", "available", "to", "log", "and", "date", "axes", ".", "If", "the", "axis"...
def dtick(self): """ Sets the step in-between ticks on this axis. Use with `tick0`. Must be a positive number, or special strings available to "log" and "date" axes. If the axis `type` is "log", then ticks are set every 10^(n*dtick) where n is the tick number. For example, to set a tick mark at 1, 10, 100, 1000, ... set dtick to 1. To set tick marks at 1, 100, 10000, ... set dtick to 2. To set tick marks at 1, 5, 25, 125, 625, 3125, ... set dtick to log_10(5), or 0.69897000433. "log" has several special values; "L<f>", where `f` is a positive number, gives ticks linearly spaced in value (but not position). For example `tick0` = 0.1, `dtick` = "L0.5" will put ticks at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10 plus small digits between, use "D1" (all digits) or "D2" (only 2 and 5). `tick0` is ignored for "D1" and "D2". If the axis `type` is "date", then you must convert the time to milliseconds. For example, to set the interval between ticks to one day, set `dtick` to 86400000.0. "date" also has special values "M<n>" gives ticks spaced by a number of months. `n` must be a positive integer. To set ticks on the 15th of every third month, set `tick0` to "2000-01-15" and `dtick` to "M3". To set ticks every 4 years, set `dtick` to "M48" The 'dtick' property accepts values of any type Returns ------- Any """ return self["dtick"]
[ "def", "dtick", "(", "self", ")", ":", "return", "self", "[", "\"dtick\"", "]" ]
https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/indicator/gauge/_axis.py#L45-L74
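The `dtick` docstring above packs several tick-placement rules into prose. As an illustration only (these helpers are hypothetical, not part of plotly), the plain-numeric grid and the integer "log" case work out like this:

```python
import math

def linear_ticks(tick0, dtick, lo, hi):
    """Hypothetical helper: tick positions for a plain numeric dtick,
    on the tick0 + n*dtick grid, clipped to [lo, hi]."""
    n = math.ceil((lo - tick0) / dtick)  # first grid point at or above lo
    ticks = []
    while tick0 + n * dtick <= hi:
        ticks.append(tick0 + n * dtick)
        n += 1
    return ticks

def log_ticks(dtick, lo_exp, hi_exp):
    """Hypothetical helper: tick values for a 'log' axis with an integer
    dtick. Ticks land at 10**(n*dtick): dtick=1 gives 1, 10, 100, ...;
    dtick=2 gives 1, 100, 10000, ..."""
    return [10 ** (n * dtick) for n in range(lo_exp, hi_exp + 1)]
```

The special string forms ("L0.5", "D1", "M3") layer extra rules on top of these two basic grids.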
golismero/golismero
7d605b937e241f51c1ca4f47b20f755eeefb9d76
golismero/api/net/web_utils.py
python
data_from_http_response
(response)
return data
Extracts data from an HTTP response. :param response: HTTP response. :type response: HTTP_Response :returns: Extracted data, or None if no data was found. :rtype: Data | None
Extracts data from an HTTP response.
[ "Extracts", "data", "from", "an", "HTTP", "response", "." ]
def data_from_http_response(response): """ Extracts data from an HTTP response. :param response: HTTP response. :type response: HTTP_Response :returns: Extracted data, or None if no data was found. :rtype: Data | None """ # If we have no data, return None. if not response.data: return None # Get the MIME content type. content_type = response.content_type # Strip the content type modifiers. if ";" in content_type: content_type = content_type[:content_type.find(";")] # Sanitize the content type. content_type = content_type.strip().lower() if "/" not in content_type: return None # Parse the data. data = None try: # HTML pages. if content_type == "text/html": from ..data.information.html import HTML data = HTML(response.data) # Plain text data. elif content_type.startswith("text/"): from ..data.information.text import Text data = Text(response.data, response.content_type) # Image files. elif content_type.startswith("image/"): from ..data.information.image import Image data = Image(response.data, response.content_type) # Catch errors and throw warnings instead. except Exception, e: ##raise # XXX DEBUG warn(str(e), RuntimeWarning) # Anything we don't know how to parse we treat as binary. if data is None: from ..data.information.binary import Binary data = Binary(response.data, response.content_type) # Associate the data to the response. data.add_information(response) # Return the data. return data
[ "def", "data_from_http_response", "(", "response", ")", ":", "# If we have no data, return None.", "if", "not", "response", ".", "data", ":", "return", "None", "# Get the MIME content type.", "content_type", "=", "response", ".", "content_type", "# Strip the content type mo...
https://github.com/golismero/golismero/blob/7d605b937e241f51c1ca4f47b20f755eeefb9d76/golismero/api/net/web_utils.py#L116-L176
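The golismero function first strips content-type modifiers (everything after ";"), normalizes case, and then dispatches on prefix, falling back to binary for anything unknown. A dependency-free sketch of that sanitize-and-dispatch shape (the `classify` labels here are illustrative, not golismero's data classes):

```python
def sanitize_content_type(content_type):
    """Strip parameters (e.g. '; charset=utf-8') and normalize case,
    mirroring the sanitization step in data_from_http_response.
    Returns None when the value does not look like a MIME type."""
    if ";" in content_type:
        content_type = content_type[:content_type.find(";")]
    content_type = content_type.strip().lower()
    if "/" not in content_type:
        return None
    return content_type

def classify(content_type):
    """Toy dispatcher in the same spirit: exact match for HTML, prefix
    match for text/image, and 'binary' for everything else."""
    ct = sanitize_content_type(content_type)
    if ct is None:
        return None
    if ct == "text/html":
        return "html"
    if ct.startswith("text/"):
        return "text"
    if ct.startswith("image/"):
        return "image"
    return "binary"
```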
tomplus/kubernetes_asyncio
f028cc793e3a2c519be6a52a49fb77ff0b014c9b
kubernetes_asyncio/client/models/v1alpha1_priority_class_list.py
python
V1alpha1PriorityClassList.to_dict
(self)
return result
Returns the model properties as a dict
Returns the model properties as a dict
[ "Returns", "the", "model", "properties", "as", "a", "dict" ]
def to_dict(self): """Returns the model properties as a dict""" result = {} for attr, _ in six.iteritems(self.openapi_types): value = getattr(self, attr) if isinstance(value, list): result[attr] = list(map( lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value )) elif hasattr(value, "to_dict"): result[attr] = value.to_dict() elif isinstance(value, dict): result[attr] = dict(map( lambda item: (item[0], item[1].to_dict()) if hasattr(item[1], "to_dict") else item, value.items() )) else: result[attr] = value return result
[ "def", "to_dict", "(", "self", ")", ":", "result", "=", "{", "}", "for", "attr", ",", "_", "in", "six", ".", "iteritems", "(", "self", ".", "openapi_types", ")", ":", "value", "=", "getattr", "(", "self", ",", "attr", ")", "if", "isinstance", "(", ...
https://github.com/tomplus/kubernetes_asyncio/blob/f028cc793e3a2c519be6a52a49fb77ff0b014c9b/kubernetes_asyncio/client/models/v1alpha1_priority_class_list.py#L161-L183
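This generated `to_dict` recurses into any value that exposes its own `to_dict()`, including values nested in lists and dicts. The same pattern, sketched without the `six` dependency and over an explicit field list (the `Item`/`Holder` classes are made-up stand-ins for generated models):

```python
def to_dict(obj, fields):
    """Minimal sketch of the generated to_dict pattern: recurse into
    values (and list/dict members) that expose their own to_dict()."""
    def convert(value):
        if isinstance(value, list):
            return [convert(v) for v in value]
        if hasattr(value, "to_dict"):
            return value.to_dict()
        if isinstance(value, dict):
            return {k: convert(v) for k, v in value.items()}
        return value
    return {attr: convert(getattr(obj, attr)) for attr in fields}

class Item:
    """Stand-in for a nested generated model."""
    def __init__(self, name):
        self.name = name
    def to_dict(self):
        return {"name": self.name}

class Holder:
    """Stand-in for a list-type model such as V1alpha1PriorityClassList."""
    def __init__(self):
        self.items = [Item("a"), Item("b")]
        self.kind = "list"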
danielplohmann/apiscout
8622b54302cb2712fe35ce971e77d1f3d5849a2a
apiscout/db_builder/pefile.py
python
PE.parse_resources_directory
(self, rva, size=0, base_rva = None, level = 0, dirs=None)
return resource_directory_data
Parse the resources directory. Given the RVA of the resources directory, it will process all its entries. The root will have the corresponding member of its structure, IMAGE_RESOURCE_DIRECTORY plus 'entries', a list of all the entries in the directory. Those entries will have, correspondingly, all the structure's members (IMAGE_RESOURCE_DIRECTORY_ENTRY) and an additional one, "directory", pointing to the IMAGE_RESOURCE_DIRECTORY structure representing upper layers of the tree. This one will also have an 'entries' attribute, pointing to the 3rd, and last, level. Another directory with more entries. Those last entries will have a new attribute (both 'leaf' or 'data_entry' can be used to access it). This structure finally points to the resource data. All the members of this structure, IMAGE_RESOURCE_DATA_ENTRY, are available as its attributes.
Parse the resources directory.
[ "Parse", "the", "resources", "directory", "." ]
def parse_resources_directory(self, rva, size=0, base_rva = None, level = 0, dirs=None): """Parse the resources directory. Given the RVA of the resources directory, it will process all its entries. The root will have the corresponding member of its structure, IMAGE_RESOURCE_DIRECTORY plus 'entries', a list of all the entries in the directory. Those entries will have, correspondingly, all the structure's members (IMAGE_RESOURCE_DIRECTORY_ENTRY) and an additional one, "directory", pointing to the IMAGE_RESOURCE_DIRECTORY structure representing upper layers of the tree. This one will also have an 'entries' attribute, pointing to the 3rd, and last, level. Another directory with more entries. Those last entries will have a new attribute (both 'leaf' or 'data_entry' can be used to access it). This structure finally points to the resource data. All the members of this structure, IMAGE_RESOURCE_DATA_ENTRY, are available as its attributes. """ # OC Patch: if dirs is None: dirs = [rva] if base_rva is None: base_rva = rva resources_section = self.get_section_by_rva(rva) try: # If the RVA is invalid all would blow up. Some EXEs seem to be # specially nasty and have an invalid RVA. data = self.get_data(rva, Structure(self.__IMAGE_RESOURCE_DIRECTORY_format__).sizeof() ) except PEFormatError as e: self.__warnings.append( 'Invalid resources directory. Can\'t read ' 'directory data at RVA: 0x%x' % rva) return None # Get the resource directory structure, that is, the header # of the table preceding the actual entries # resource_dir = self.__unpack_data__( self.__IMAGE_RESOURCE_DIRECTORY_format__, data, file_offset = self.get_offset_from_rva(rva) ) if resource_dir is None: # If we can't parse resources directory then silently return. # This directory does not necessarily have to be valid to # still have a valid PE file self.__warnings.append( 'Invalid resources directory. 
Can\'t parse ' 'directory data at RVA: 0x%x' % rva) return None dir_entries = [] # Advance the RVA to the position immediately following the directory # table header and pointing to the first entry in the table # rva += resource_dir.sizeof() number_of_entries = ( resource_dir.NumberOfNamedEntries + resource_dir.NumberOfIdEntries ) # Set a hard limit on the maximum reasonable number of entries MAX_ALLOWED_ENTRIES = 4096 if number_of_entries > MAX_ALLOWED_ENTRIES: self.__warnings.append( 'Error parsing the resources directory. ' 'The directory contains %d entries (>%s)' % (number_of_entries, MAX_ALLOWED_ENTRIES) ) return None strings_to_postprocess = list() # Keep track of the last name's start and end offsets in order # to be able to detect overlapping entries that might suggest # and invalid or corrupt directory. last_name_begin_end = None for idx in range(number_of_entries): res = self.parse_resource_entry(rva) if res is None: self.__warnings.append( 'Error parsing the resources directory, ' 'Entry %d is invalid, RVA = 0x%x. ' % (idx, rva) ) break entry_name = None entry_id = None name_is_string = (res.Name & 0x80000000) >> 31 if not name_is_string: entry_id = res.Name else: ustr_offset = base_rva+res.NameOffset try: entry_name = UnicodeStringWrapperPostProcessor(self, ustr_offset) # If the last entry's offset points before the current's but its end # is past the current's beginning, assume the overlap indicates a # corrupt name. if last_name_begin_end and (last_name_begin_end[0] < ustr_offset and last_name_begin_end[1] >= ustr_offset): # Remove the previous overlapping entry as it's likely to be already corrupt data. strings_to_postprocess.pop() self.__warnings.append( 'Error parsing the resources directory, ' 'attempting to read entry name. 
' 'Entry names overlap 0x%x' % (ustr_offset) ) break last_name_begin_end = (ustr_offset, ustr_offset+entry_name.get_pascal_16_length()) strings_to_postprocess.append(entry_name) except PEFormatError as excp: self.__warnings.append( 'Error parsing the resources directory, ' 'attempting to read entry name. ' 'Can\'t read unicode string at offset 0x%x' % (ustr_offset) ) if res.DataIsDirectory: # OC Patch: # # One trick malware can do is to recursively reference # the next directory. This causes hilarity to ensue when # trying to parse everything correctly. # If the original RVA given to this function is equal to # the next one to parse, we assume that it's a trick. # Instead of raising a PEFormatError this would skip some # reasonable data so we just break. # # 9ee4d0a0caf095314fd7041a3e4404dc is the offending sample if (base_rva + res.OffsetToDirectory) in dirs: break else: entry_directory = self.parse_resources_directory( base_rva+res.OffsetToDirectory, size-(rva-base_rva), # size base_rva=base_rva, level = level+1, dirs=dirs + [base_rva + res.OffsetToDirectory]) if not entry_directory: break # Ange Albertini's code to process resources' strings # strings = None if entry_id == RESOURCE_TYPE['RT_STRING']: strings = dict() for resource_id in entry_directory.entries: if hasattr(resource_id, 'directory'): resource_strings = dict() for resource_lang in resource_id.directory.entries: if (resource_lang is None or not hasattr(resource_lang, 'data') or resource_lang.data.struct.Size is None or resource_id.id is None): continue string_entry_rva = resource_lang.data.struct.OffsetToData string_entry_size = resource_lang.data.struct.Size string_entry_id = resource_id.id # XXX: has been raising exceptions preventing parsing try: string_entry_data = self.get_data(string_entry_rva, string_entry_size) except: self.__warnings.append( 'Error parsing resource of type RT_STRING at RVA 0x%x with size %d' % (string_entry_rva, string_entry_size)) continue parse_strings(string_entry_data, 
(int(string_entry_id) - 1) * 16, resource_strings) strings.update(resource_strings) resource_id.directory.strings = resource_strings dir_entries.append( ResourceDirEntryData( struct = res, name = entry_name, id = entry_id, directory = entry_directory)) else: struct = self.parse_resource_data_entry( base_rva + res.OffsetToDirectory) if struct: entry_data = ResourceDataEntryData( struct = struct, lang = res.Name & 0x3ff, sublang = res.Name >> 10 ) dir_entries.append( ResourceDirEntryData( struct = res, name = entry_name, id = entry_id, data = entry_data)) else: break # Check if this entry contains version information # if level == 0 and res.Id == RESOURCE_TYPE['RT_VERSION']: if dir_entries: last_entry = dir_entries[-1] try: version_entries = last_entry.directory.entries[0].directory.entries except: # Maybe a malformed directory structure...? # Let's ignore it pass else: for version_entry in version_entries: rt_version_struct = None try: rt_version_struct = version_entry.data.struct except: # Maybe a malformed directory structure...? # Let's ignore it pass if rt_version_struct is not None: self.parse_version_information(rt_version_struct) rva += res.sizeof() string_rvas = [s.get_rva() for s in strings_to_postprocess] string_rvas.sort() for idx, s in enumerate(strings_to_postprocess): s.render_pascal_16() resource_directory_data = ResourceDirData( struct = resource_dir, entries = dir_entries) return resource_directory_data
[ "def", "parse_resources_directory", "(", "self", ",", "rva", ",", "size", "=", "0", ",", "base_rva", "=", "None", ",", "level", "=", "0", ",", "dirs", "=", "None", ")", ":", "# OC Patch:", "if", "dirs", "is", "None", ":", "dirs", "=", "[", "rva", "...
https://github.com/danielplohmann/apiscout/blob/8622b54302cb2712fe35ce971e77d1f3d5849a2a/apiscout/db_builder/pefile.py#L3247-L3502
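The "OC Patch" in the function above defends against resource directories that recursively reference themselves by threading a `dirs` list of already-visited RVAs through the recursion and bailing out on a repeat. The core of that cycle guard, stripped of all PE details (the original `break`s out of the entry loop; this sketch just skips the offending child):

```python
def walk(node_id, children, visited=None):
    """Cycle-safe recursive walk in the spirit of the pefile parser:
    `visited` carries every node already entered on this path, and a
    child already in it is skipped instead of recursed into."""
    if visited is None:
        visited = [node_id]
    order = [node_id]
    for child in children.get(node_id, []):
        if child in visited:
            continue  # self-reference: don't recurse forever
        order.extend(walk(child, children, visited + [child]))
    return order
```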
ghostop14/sparrow-wifi
4b8289773ea4304872062f65a6ffc9352612b08e
sparrowhackrf.py
python
HackrfSweepThread.__init__
(self, parentHackrf)
[]
def __init__(self, parentHackrf): super().__init__() self.parentHackrf= parentHackrf self.minFreq = 2400 self.maxFreq = 5900 self.binWidth = 250000 # 250 KHz width self.gain = 40 # mirror qspectrumanalyzer # In python3 / is a floating point operation whereas // is explicitly integer division. Result is without remainder self.lna_gain = 8 * (self.gain // 18) self.vga_gain = 2 * ((self.gain - self.lna_gain) // 2)
[ "def", "__init__", "(", "self", ",", "parentHackrf", ")", ":", "super", "(", ")", ".", "__init__", "(", ")", "self", ".", "parentHackrf", "=", "parentHackrf", "self", ".", "minFreq", "=", "2400", "self", ".", "maxFreq", "=", "5900", "self", ".", "binWi...
https://github.com/ghostop14/sparrow-wifi/blob/4b8289773ea4304872062f65a6ffc9352612b08e/sparrowhackrf.py#L35-L46
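The constructor splits the requested overall gain of 40 into an LNA stage that moves in steps of 8 and a VGA stage that moves in steps of 2, using floor division to stay on each stage's grid. Extracted as a standalone function for a worked check:

```python
def split_gain(gain):
    """Mirror of the gain split above: LNA gain in steps of 8,
    VGA gain in steps of 2, via integer (floor) division."""
    lna_gain = 8 * (gain // 18)
    vga_gain = 2 * ((gain - lna_gain) // 2)
    return lna_gain, vga_gain
```

For the default gain of 40: `40 // 18 = 2`, so the LNA gets 16; the remaining 24 is even, so the VGA gets all of it, and 16 + 24 recovers the full 40.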
openshift/openshift-tools
1188778e728a6e4781acf728123e5b356380fe6f
openshift/installer/vendored/openshift-ansible-3.11.28-1/roles/lib_vendored_deps/library/oc_configmap.py
python
Utils.cleanup
(files)
Clean up on exit
Clean up on exit
[ "Clean", "up", "on", "exit" ]
def cleanup(files): '''Clean up on exit ''' for sfile in files: if os.path.exists(sfile): if os.path.isdir(sfile): shutil.rmtree(sfile) elif os.path.isfile(sfile): os.remove(sfile)
[ "def", "cleanup", "(", "files", ")", ":", "for", "sfile", "in", "files", ":", "if", "os", ".", "path", ".", "exists", "(", "sfile", ")", ":", "if", "os", ".", "path", ".", "isdir", "(", "sfile", ")", ":", "shutil", ".", "rmtree", "(", "sfile", ...
https://github.com/openshift/openshift-tools/blob/1188778e728a6e4781acf728123e5b356380fe6f/openshift/installer/vendored/openshift-ansible-3.11.28-1/roles/lib_vendored_deps/library/oc_configmap.py#L1225-L1232
tanghaibao/goatools
647e9dd833695f688cd16c2f9ea18f1692e5c6bc
versioneer.py
python
get_config_from_root
(root)
return cfg
Read the project setup.cfg file to determine Versioneer config.
Read the project setup.cfg file to determine Versioneer config.
[ "Read", "the", "project", "setup", ".", "cfg", "file", "to", "determine", "Versioneer", "config", "." ]
def get_config_from_root(root): """Read the project setup.cfg file to determine Versioneer config.""" # This might raise EnvironmentError (if setup.cfg is missing), or # configparser.NoSectionError (if it lacks a [versioneer] section), or # configparser.NoOptionError (if it lacks "VCS="). See the docstring at # the top of versioneer.py for instructions on writing your setup.cfg . setup_cfg = os.path.join(root, "setup.cfg") parser = configparser.SafeConfigParser() with open(setup_cfg, "r") as f: parser.readfp(f) VCS = parser.get("versioneer", "VCS") # mandatory def get(parser, name): if parser.has_option("versioneer", name): return parser.get("versioneer", name) return None cfg = VersioneerConfig() cfg.VCS = VCS cfg.style = get(parser, "style") or "" cfg.versionfile_source = get(parser, "versionfile_source") cfg.versionfile_build = get(parser, "versionfile_build") cfg.tag_prefix = get(parser, "tag_prefix") if cfg.tag_prefix in ("''", '""'): cfg.tag_prefix = "" cfg.parentdir_prefix = get(parser, "parentdir_prefix") cfg.verbose = get(parser, "verbose") return cfg
[ "def", "get_config_from_root", "(", "root", ")", ":", "# This might raise EnvironmentError (if setup.cfg is missing), or", "# configparser.NoSectionError (if it lacks a [versioneer] section), or", "# configparser.NoOptionError (if it lacks \"VCS=\"). See the docstring at", "# the top of versioneer...
https://github.com/tanghaibao/goatools/blob/647e9dd833695f688cd16c2f9ea18f1692e5c6bc/versioneer.py#L335-L361
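The vendored versioneer code above uses `configparser.SafeConfigParser` and `readfp`, both of which were deprecated for years and removed in Python 3.12. A sketch of the same lookup pattern with the modern API, reading from a string instead of a file (the sample `SETUP_CFG` contents are invented for illustration):

```python
import configparser

SETUP_CFG = """\
[versioneer]
VCS = git
style = pep440
tag_prefix = ''
"""

def read_versioneer_section(text):
    """Same lookup pattern as get_config_from_root, with ConfigParser:
    VCS is mandatory, other keys fall back to None, and a quoted-empty
    tag_prefix collapses to the empty string."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    def get(name):
        if parser.has_option("versioneer", name):
            return parser.get("versioneer", name)
        return None
    cfg = {
        "VCS": parser.get("versioneer", "VCS"),  # raises NoOptionError if absent
        "style": get("style") or "",
        "tag_prefix": get("tag_prefix"),
    }
    if cfg["tag_prefix"] in ("''", '""'):
        cfg["tag_prefix"] = ""
    return cfg
```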
shchur/gnn-benchmark
1e72912a0810cdf27ae54fd589a3b43358a2b161
gnnbench/models/graphsage.py
python
GraphSAGE._preprocess_features
(self, features)
return to_sparse_tensor(features)
[]
def _preprocess_features(self, features): if self.normalize_features: features = row_normalize(features) return to_sparse_tensor(features)
[ "def", "_preprocess_features", "(", "self", ",", "features", ")", ":", "if", "self", ".", "normalize_features", ":", "features", "=", "row_normalize", "(", "features", ")", "return", "to_sparse_tensor", "(", "features", ")" ]
https://github.com/shchur/gnn-benchmark/blob/1e72912a0810cdf27ae54fd589a3b43358a2b161/gnnbench/models/graphsage.py#L215-L218
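The `row_normalize` helper called above is assumed to scale each feature row to sum to 1 (the real helper likely operates on a scipy sparse matrix); a plain-list sketch of that assumed behavior:

```python
def row_normalize(rows):
    """Assumed behavior of the helper above: divide each row by its
    sum so the entries sum to 1, leaving all-zero rows untouched."""
    out = []
    for row in rows:
        s = sum(row)
        out.append([v / s for v in row] if s else list(row))
    return out
```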
angr/angr
4b04d56ace135018083d36d9083805be8146688b
angr/angrdb/serializers/comments.py
python
CommentsSerializer.dump
(session, db_kb, comments)
:param session: :param DbKnowledgeBase db_kb: :param Comments comments: :return: None
[]
def dump(session, db_kb, comments): """ :param session: :param DbKnowledgeBase db_kb: :param Comments comments: :return: None """ for addr, comment in comments.items(): db_comment = session.query(DbComment).filter_by( kb=db_kb, addr=addr, ).scalar() if db_comment is not None: if comment == db_comment.comment: continue db_comment.comment = comment else: db_comment = DbComment( kb=db_kb, addr=addr, comment=comment, type=0, ) session.add(db_comment)
[ "def", "dump", "(", "session", ",", "db_kb", ",", "comments", ")", ":", "for", "addr", ",", "comment", "in", "comments", ".", "items", "(", ")", ":", "db_comment", "=", "session", ".", "query", "(", "DbComment", ")", ".", "filter_by", "(", "kb", "=",...
https://github.com/angr/angr/blob/4b04d56ace135018083d36d9083805be8146688b/angr/angrdb/serializers/comments.py#L13-L38
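The serializer above is a classic upsert: query for an existing row, update it only when the comment text actually changed, otherwise insert a new row. The same control flow with a plain dict standing in for the session (a sketch, not angr's API):

```python
def upsert_comments(store, comments):
    """Dict-backed sketch of CommentsSerializer.dump: update an existing
    entry only when the text changed, insert a fresh one otherwise.
    Returns how many entries were written."""
    written = 0
    for addr, comment in comments.items():
        if addr in store and store[addr] == comment:
            continue  # unchanged: skip the write
        store[addr] = comment
        written += 1
    return written
```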
talkpython/data-driven-web-apps-with-flask
60852486d56af680b36e2c0addef8648b21fd07b
app/ch13-validation/final/alembic/env.py
python
run_migrations_online
()
Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context.
Run migrations in 'online' mode.
[ "Run", "migrations", "in", "online", "mode", "." ]
def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ connectable = engine_from_config( config.get_section(config.config_ini_section), prefix="sqlalchemy.", poolclass=pool.NullPool, ) with connectable.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata ) with context.begin_transaction(): context.run_migrations()
[ "def", "run_migrations_online", "(", ")", ":", "connectable", "=", "engine_from_config", "(", "config", ".", "get_section", "(", "config", ".", "config_ini_section", ")", ",", "prefix", "=", "\"sqlalchemy.\"", ",", "poolclass", "=", "pool", ".", "NullPool", ",",...
https://github.com/talkpython/data-driven-web-apps-with-flask/blob/60852486d56af680b36e2c0addef8648b21fd07b/app/ch13-validation/final/alembic/env.py#L61-L80
buke/GreenOdoo
3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df
runtime/python/lib/python2.7/site-packages/passlib-1.6.2-py2.7.egg/passlib/utils/handlers.py
python
_bitsize
(count, chars)
helper for bitsize() methods
helper for bitsize() methods
[ "helper", "for", "bitsize", "()", "methods" ]
def _bitsize(count, chars): """helper for bitsize() methods""" if chars and count: import math return int(count * math.log(len(chars), 2)) else: return 0
[ "def", "_bitsize", "(", "count", ",", "chars", ")", ":", "if", "chars", "and", "count", ":", "import", "math", "return", "int", "(", "count", "*", "math", ".", "log", "(", "len", "(", "chars", ")", ",", "2", ")", ")", "else", ":", "return", "0" ]
https://github.com/buke/GreenOdoo/blob/3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df/runtime/python/lib/python2.7/site-packages/passlib-1.6.2-py2.7.egg/passlib/utils/handlers.py#L76-L82
holzschu/Carnets
44effb10ddfc6aa5c8b0687582a724ba82c6b547
Library/lib/python3.7/site-packages/astropy-4.0-py3.7-macosx-10.9-x86_64.egg/astropy/modeling/core.py
python
make_subtree_dict
(tree, nodepath, tdict, leaflist)
Traverse a tree noting each node by a key that indicates all the left/right choices necessary to reach that node. Each key will reference a tuple that contains: - reference to the compound model for that node. - left most index contained within that subtree (relative to all indices for the whole tree) - right most index contained within that subtree
Traverse a tree noting each node by a key that indicates all the left/right choices necessary to reach that node. Each key will reference a tuple that contains:
[ "Traverse", "a", "tree", "noting", "each", "node", "by", "a", "key", "that", "indicates", "all", "the", "left", "/", "right", "choices", "necessary", "to", "reach", "that", "node", ".", "Each", "key", "will", "reference", "a", "tuple", "that", "contains", ...
def make_subtree_dict(tree, nodepath, tdict, leaflist): ''' Traverse a tree noting each node by a key that indicates all the left/right choices necessary to reach that node. Each key will reference a tuple that contains: - reference to the compound model for that node. - left most index contained within that subtree (relative to all indices for the whole tree) - right most index contained within that subtree ''' # if this is a leaf, just append it to the leaflist if not hasattr(tree, 'isleaf'): leaflist.append(tree) else: leftmostind = len(leaflist) make_subtree_dict(tree.left, nodepath+'l', tdict, leaflist) make_subtree_dict(tree.right, nodepath+'r', tdict, leaflist) rightmostind = len(leaflist)-1 tdict[nodepath] = (tree, leftmostind, rightmostind)
[ "def", "make_subtree_dict", "(", "tree", ",", "nodepath", ",", "tdict", ",", "leaflist", ")", ":", "# if this is a leaf, just append it to the leaflist", "if", "not", "hasattr", "(", "tree", ",", "'isleaf'", ")", ":", "leaflist", ".", "append", "(", "tree", ")",...
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/astropy-4.0-py3.7-macosx-10.9-x86_64.egg/astropy/modeling/core.py#L3489-L3508
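Note the leaf test: a leaf is anything *without* an `isleaf` attribute (astropy's actual models carry no such marker), while internal expression-tree nodes do. A minimal reconstruction with a toy `Node` class (invented here; not astropy's tree type) shows the path keys and leaf-index ranges the traversal records:

```python
def make_subtree_dict(tree, nodepath, tdict, leaflist):
    """Copy of the traversal above: leaves (objects without an `isleaf`
    attribute) append to leaflist; internal nodes record, under their
    left/right path key, the span of leaf indices they cover."""
    if not hasattr(tree, 'isleaf'):
        leaflist.append(tree)
    else:
        leftmostind = len(leaflist)
        make_subtree_dict(tree.left, nodepath + 'l', tdict, leaflist)
        make_subtree_dict(tree.right, nodepath + 'r', tdict, leaflist)
        tdict[nodepath] = (tree, leftmostind, len(leaflist) - 1)

class Node:
    """Toy internal node: the mere presence of `isleaf` marks it as
    non-leaf for the hasattr() test above."""
    isleaf = False
    def __init__(self, left, right):
        self.left, self.right = left, right
```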
Melissa-AI/Melissa-Core
ea08ae5e3088360d3bddc40db72160697522b8f7
melissa/utilities/snowboydetect.py
python
SnowboyDetect.Reset
(self)
return _snowboydetect.SnowboyDetect_Reset(self)
[]
def Reset(self): return _snowboydetect.SnowboyDetect_Reset(self)
[ "def", "Reset", "(", "self", ")", ":", "return", "_snowboydetect", ".", "SnowboyDetect_Reset", "(", "self", ")" ]
https://github.com/Melissa-AI/Melissa-Core/blob/ea08ae5e3088360d3bddc40db72160697522b8f7/melissa/utilities/snowboydetect.py#L107-L108
PaddlePaddle/PaddleX
2bab73f81ab54e328204e7871e6ae4a82e719f5d
paddlex/paddleseg/utils/download.py
python
_uncompress_file_zip
(filepath, extrapath)
[]
def _uncompress_file_zip(filepath, extrapath): files = zipfile.ZipFile(filepath, 'r') filelist = files.namelist() rootpath = filelist[0] total_num = len(filelist) for index, file in enumerate(filelist): files.extract(file, extrapath) yield total_num, index, rootpath files.close() yield total_num, index, rootpath
[ "def", "_uncompress_file_zip", "(", "filepath", ",", "extrapath", ")", ":", "files", "=", "zipfile", ".", "ZipFile", "(", "filepath", ",", "'r'", ")", "filelist", "=", "files", ".", "namelist", "(", ")", "rootpath", "=", "filelist", "[", "0", "]", "total...
https://github.com/PaddlePaddle/PaddleX/blob/2bab73f81ab54e328204e7871e6ae4a82e719f5d/paddlex/paddleseg/utils/download.py#L67-L76
ryanmcgrath/twython
0c405604285364457f3c309969f11ba68163bd05
twython/endpoints.py
python
EndpointsMixin.get_contributees
(self, **params)
return self.get('users/contributees', params=params)
Returns a collection of users that the specified user can "contribute" to. Docs: https://dev.twitter.com/docs/api/1.1/get/users/contributees
Returns a collection of users that the specified user can "contribute" to.
[ "Returns", "a", "collection", "of", "users", "that", "the", "specified", "user", "can", "contribute", "to", "." ]
def get_contributees(self, **params): """Returns a collection of users that the specified user can "contribute" to. Docs: https://dev.twitter.com/docs/api/1.1/get/users/contributees """ return self.get('users/contributees', params=params)
[ "def", "get_contributees", "(", "self", ",", "*", "*", "params", ")", ":", "return", "self", ".", "get", "(", "'users/contributees'", ",", "params", "=", "params", ")" ]
https://github.com/ryanmcgrath/twython/blob/0c405604285364457f3c309969f11ba68163bd05/twython/endpoints.py#L640-L646
algorhythms/LeetCode
3fb14aeea62a960442e47dfde9f964c7ffce32be
128 Longest Consecutive Sequence.py
python
Solution.longestConsecutive_TLE
(self, num)
return max_length
O(n) within in one scan algorithm: array, inverted index O(kn), k is the length of consecutive sequence TLE due to excessive lookup :param num: a list of integer :return: an integer
O(n) within in one scan algorithm: array, inverted index O(kn), k is the length of consecutive sequence
[ "O", "(", "n", ")", "within", "in", "one", "scan", "algorithm", ":", "array", "inverted", "index", "O", "(", "kn", ")", "k", "is", "the", "length", "of", "consecutive", "sequence" ]
def longestConsecutive_TLE(self, num): """ O(n) within in one scan algorithm: array, inverted index O(kn), k is the length of consecutive sequence TLE due to excessive lookup :param num: a list of integer :return: an integer """ length = len(num) inverted_table = dict(zip(num, range(length))) max_length = -1<<31 for ind, val in enumerate(num): current_length = 1 # check val-- sequence_val_expected = val-1 while sequence_val_expected in inverted_table: sequence_val_expected -= 1 current_length += 1 # check val++ sequence_val_expected = val+1 while sequence_val_expected in inverted_table: sequence_val_expected += 1 current_length += 1 max_length = max(max_length, current_length) return max_length
[ "def", "longestConsecutive_TLE", "(", "self", ",", "num", ")", ":", "length", "=", "len", "(", "num", ")", "inverted_table", "=", "dict", "(", "zip", "(", "num", ",", "range", "(", "length", ")", ")", ")", "max_length", "=", "-", "1", "<<", "31", "...
https://github.com/algorhythms/LeetCode/blob/3fb14aeea62a960442e47dfde9f964c7ffce32be/128 Longest Consecutive Sequence.py#L12-L43
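The docstring's own diagnosis is right: expanding in both directions from *every* element makes the lookup cost O(kn). The standard fix (not this repo's code, just the well-known set-based variant) expands only from run starts, i.e. values whose predecessor is absent, so each element is touched O(1) times:

```python
def longest_consecutive(nums):
    """Set-based O(n) variant: expand only from values that start a
    consecutive run (val - 1 not in the set)."""
    values = set(nums)
    best = 0
    for v in values:
        if v - 1 in values:
            continue  # interior of a run; its start will handle it
        length = 1
        while v + length in values:
            length += 1
        best = max(best, length)
    return best
```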
sagemath/sage
f9b2db94f675ff16963ccdefba4f1a3393b3fe0d
src/sage/algebras/lie_algebras/structure_coefficients.py
python
LieAlgebraWithStructureCoefficients.__init__
(self, R, s_coeff, names, index_set, category=None, prefix=None, bracket=None, latex_bracket=None, string_quotes=None, **kwds)
Initialize ``self``. EXAMPLES:: sage: L = LieAlgebra(QQ, 'x,y', {('x','y'): {'x':1}}) sage: TestSuite(L).run()
Initialize ``self``.
[ "Initialize", "self", "." ]
def __init__(self, R, s_coeff, names, index_set, category=None, prefix=None, bracket=None, latex_bracket=None, string_quotes=None, **kwds): """ Initialize ``self``. EXAMPLES:: sage: L = LieAlgebra(QQ, 'x,y', {('x','y'): {'x':1}}) sage: TestSuite(L).run() """ default = (names != tuple(index_set)) if prefix is None: if default: prefix = 'L' else: prefix = '' if bracket is None: bracket = default if latex_bracket is None: latex_bracket = default if string_quotes is None: string_quotes = default #self._pos_to_index = dict(enumerate(index_set)) self._index_to_pos = {k: i for i,k in enumerate(index_set)} if "sorting_key" not in kwds: kwds["sorting_key"] = self._index_to_pos.__getitem__ cat = LieAlgebras(R).WithBasis().FiniteDimensional().or_subcategory(category) FinitelyGeneratedLieAlgebra.__init__(self, R, names, index_set, cat) IndexedGenerators.__init__(self, self._indices, prefix=prefix, bracket=bracket, latex_bracket=latex_bracket, string_quotes=string_quotes, **kwds) self._M = FreeModule(R, len(index_set)) # Transform the values in the structure coefficients to elements def to_vector(tuples): vec = [R.zero()]*len(index_set) for k,c in tuples: vec[self._index_to_pos[k]] = c vec = self._M(vec) vec.set_immutable() return vec self._s_coeff = {(self._index_to_pos[k[0]], self._index_to_pos[k[1]]): to_vector(s_coeff[k]) for k in s_coeff.keys()}
[ "def", "__init__", "(", "self", ",", "R", ",", "s_coeff", ",", "names", ",", "index_set", ",", "category", "=", "None", ",", "prefix", "=", "None", ",", "bracket", "=", "None", ",", "latex_bracket", "=", "None", ",", "string_quotes", "=", "None", ",", ...
https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/algebras/lie_algebras/structure_coefficients.py#L191-L237
IJDykeman/wangTiles
7c1ee2095ebdf7f72bce07d94c6484915d5cae8b
experimental_code/tiles_3d/venv_mac_py3/lib/python2.7/site-packages/pip/_internal/utils/ui.py
python
InterruptibleMixin.__init__
(self, *args, **kwargs)
Save the original SIGINT handler for later.
Save the original SIGINT handler for later.
[ "Save", "the", "original", "SIGINT", "handler", "for", "later", "." ]
def __init__(self, *args, **kwargs): """ Save the original SIGINT handler for later. """ super(InterruptibleMixin, self).__init__(*args, **kwargs) self.original_handler = signal(SIGINT, self.handle_sigint) # If signal() returns None, the previous handler was not installed from # Python, and we cannot restore it. This probably should not happen, # but if it does, we must restore something sensible instead, at least. # The least bad option should be Python's default SIGINT handler, which # just raises KeyboardInterrupt. if self.original_handler is None: self.original_handler = default_int_handler
[ "def", "__init__", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "super", "(", "InterruptibleMixin", ",", "self", ")", ".", "__init__", "(", "*", "args", ",", "*", "*", "kwargs", ")", "self", ".", "original_handler", "=", "signal"...
https://github.com/IJDykeman/wangTiles/blob/7c1ee2095ebdf7f72bce07d94c6484915d5cae8b/experimental_code/tiles_3d/venv_mac_py3/lib/python2.7/site-packages/pip/_internal/utils/ui.py#L84-L98
fritzy/SleekXMPP
cc1d470397de768ffcc41d2ed5ac3118d19f09f5
sleekxmpp/plugins/xep_0009/remote.py
python
RemoteSession.__init__
(self, client, session_close_callback)
Initializes a new RPC session. Arguments: client -- The SleekXMPP client associated with this session. session_close_callback -- A callback called when the session is closed.
Initializes a new RPC session.
[ "Initializes", "a", "new", "RPC", "session", "." ]
def __init__(self, client, session_close_callback): ''' Initializes a new RPC session. Arguments: client -- The SleekXMPP client associated with this session. session_close_callback -- A callback called when the session is closed. ''' self._client = client self._session_close_callback = session_close_callback self._event = threading.Event() self._entries = {} self._callbacks = {} self._acls = {} self._lock = RLock()
[ "def", "__init__", "(", "self", ",", "client", ",", "session_close_callback", ")", ":", "self", ".", "_client", "=", "client", "self", ".", "_session_close_callback", "=", "session_close_callback", "self", ".", "_event", "=", "threading", ".", "Event", "(", ")...
https://github.com/fritzy/SleekXMPP/blob/cc1d470397de768ffcc41d2ed5ac3118d19f09f5/sleekxmpp/plugins/xep_0009/remote.py#L470-L485
kokjo/universalrop
6b675445b9a6a5d58af03efc796ce597c49d60bd
unirop.py
python
RealGadget.analyse
(self)
[]
def analyse(self): ip = self.arch.instruction_pointer sp = self.arch.stack_pointer emu = Emulator(self.arch) emu.map_code(self.address, self.code) stack = get_random_page(self.arch) stack_data = randoms(self.arch.page_size) emu.setup_stack( stack, self.arch.page_size, stack_data ) init_regs = {} for reg in self.arch.regs: if reg in (ip, sp): continue val = self.arch.unpack(randoms(self.arch.bits >> 3)) emu[reg] = val init_regs[val] = reg emu.run(self.address, len(self.code)) for reg in self.arch.regs: self.regs[reg] = ("junk", ) val = emu[reg] if init_regs.get(val, None): self.regs[reg] = ("mov", init_regs[val]) continue offset = gen_find(self.arch.pack(val), stack_data) if offset != -1: self.regs[reg] = ("stack", offset) if self.regs[sp][0] == "junk": self.move = emu[self.arch.stack_pointer] - stack self.regs[sp] = ("add", self.move) self.analysed = True
[ "def", "analyse", "(", "self", ")", ":", "ip", "=", "self", ".", "arch", ".", "instruction_pointer", "sp", "=", "self", ".", "arch", ".", "stack_pointer", "emu", "=", "Emulator", "(", "self", ".", "arch", ")", "emu", ".", "map_code", "(", "self", "."...
https://github.com/kokjo/universalrop/blob/6b675445b9a6a5d58af03efc796ce597c49d60bd/unirop.py#L141-L181
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/google/appengine/datastore/datastore_v4_pb.py
python
_DatastoreV4Service_ClientBaseStub.BeginTransaction
(self, request, rpc=None, callback=None, response=None)
return self._MakeCall(rpc, self._full_name_BeginTransaction, 'BeginTransaction', request, response, callback, self._protorpc_BeginTransaction)
Make a BeginTransaction RPC call. Args: request: a BeginTransactionRequest instance. rpc: Optional RPC instance to use for the call. callback: Optional final callback. Will be called as callback(rpc, result) when the rpc completes. If None, the call is synchronous. response: Optional ProtocolMessage to be filled in with response. Returns: The BeginTransactionResponse if callback is None. Otherwise, returns None.
Make a BeginTransaction RPC call.
[ "Make", "a", "BeginTransaction", "RPC", "call", "." ]
def BeginTransaction(self, request, rpc=None, callback=None, response=None): """Make a BeginTransaction RPC call. Args: request: a BeginTransactionRequest instance. rpc: Optional RPC instance to use for the call. callback: Optional final callback. Will be called as callback(rpc, result) when the rpc completes. If None, the call is synchronous. response: Optional ProtocolMessage to be filled in with response. Returns: The BeginTransactionResponse if callback is None. Otherwise, returns None. """ if response is None: response = BeginTransactionResponse return self._MakeCall(rpc, self._full_name_BeginTransaction, 'BeginTransaction', request, response, callback, self._protorpc_BeginTransaction)
[ "def", "BeginTransaction", "(", "self", ",", "request", ",", "rpc", "=", "None", ",", "callback", "=", "None", ",", "response", "=", "None", ")", ":", "if", "response", "is", "None", ":", "response", "=", "BeginTransactionResponse", "return", "self", ".", ...
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/google/appengine/datastore/datastore_v4_pb.py#L6325-L6348
jupyter/nbconvert
b645841af220922f67684acd03e0a8f39e613648
nbconvert/filters/strings.py
python
text_base64
(text)
return base64.b64encode(text.encode()).decode()
Encode base64 text
Encode base64 text
[ "Encode", "base64", "text" ]
def text_base64(text): """ Encode base64 text """ return base64.b64encode(text.encode()).decode()
[ "def", "text_base64", "(", "text", ")", ":", "return", "base64", ".", "b64encode", "(", "text", ".", "encode", "(", ")", ")", ".", "decode", "(", ")" ]
https://github.com/jupyter/nbconvert/blob/b645841af220922f67684acd03e0a8f39e613648/nbconvert/filters/strings.py#L255-L259
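The `text_base64` record above encodes a `str` by first converting it to UTF-8 bytes, base64-encoding those, then decoding the result back to an ASCII `str`. A minimal round-trip sketch of that idiom (the decode helper is an illustration, not part of nbconvert):

```python
import base64

def text_base64(text: str) -> str:
    # str -> UTF-8 bytes -> base64 bytes -> ASCII str
    return base64.b64encode(text.encode()).decode()

def text_base64_decode(encoded: str) -> str:
    # Reverse: base64 str -> bytes -> UTF-8 str
    return base64.b64decode(encoded.encode()).decode()

print(text_base64("hello"))                     # aGVsbG8=
print(text_base64_decode(text_base64("héllo"))) # héllo
```

The intermediate `.encode()`/`.decode()` calls matter because `base64.b64encode` only accepts bytes, never `str`.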
holzschu/Carnets
44effb10ddfc6aa5c8b0687582a724ba82c6b547
Library/lib/python3.7/site-packages/prompt_toolkit/layout/containers.py
python
WindowRenderInfo.cursor_position
(self)
Return the cursor position coordinates, relative to the left/top corner of the rendered screen.
Return the cursor position coordinates, relative to the left/top corner of the rendered screen.
[ "Return", "the", "cursor", "position", "coordinates", "relative", "to", "the", "left", "/", "top", "corner", "of", "the", "rendered", "screen", "." ]
def cursor_position(self): """ Return the cursor position coordinates, relative to the left/top corner of the rendered screen. """ cpos = self.ui_content.cursor_position try: y, x = self._rowcol_to_yx[cpos.y, cpos.x] except KeyError: # For `DummyControl` for instance, the content can be empty, and so # will `_rowcol_to_yx` be. Return 0/0 by default. return Point(x=0, y=0) else: return Point(x=x - self._x_offset, y=y - self._y_offset)
[ "def", "cursor_position", "(", "self", ")", ":", "cpos", "=", "self", ".", "ui_content", ".", "cursor_position", "try", ":", "y", ",", "x", "=", "self", ".", "_rowcol_to_yx", "[", "cpos", ".", "y", ",", "cpos", ".", "x", "]", "except", "KeyError", ":...
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/prompt_toolkit/layout/containers.py#L955-L968
PaddlePaddle/PaddleSpeech
26524031d242876b7fdb71582b0b3a7ea45c7d9d
paddlespeech/t2s/frontend/zh_normalization/chronology.py
python
_time_num2str
(num_string: str)
return result
A special case for verbalizing number in time.
A special case for verbalizing number in time.
[ "A", "special", "case", "for", "verbalizing", "number", "in", "time", "." ]
def _time_num2str(num_string: str) -> str: """A special case for verbalizing number in time.""" result = num2str(num_string.lstrip('0')) if num_string.startswith('0'): result = DIGITS['0'] + result return result
[ "def", "_time_num2str", "(", "num_string", ":", "str", ")", "->", "str", ":", "result", "=", "num2str", "(", "num_string", ".", "lstrip", "(", "'0'", ")", ")", "if", "num_string", ".", "startswith", "(", "'0'", ")", ":", "result", "=", "DIGITS", "[", ...
https://github.com/PaddlePaddle/PaddleSpeech/blob/26524031d242876b7fdb71582b0b3a7ea45c7d9d/paddlespeech/t2s/frontend/zh_normalization/chronology.py#L22-L27
Cog-Creators/Red-DiscordBot
b05933274a11fb097873ab0d1b246d37b06aa306
redbot/core/config.py
python
Config._all_from_scope
(self, scope: str)
return ret
Get a dict of all values from a particular scope of data. :code:`scope` must be one of the constants attributed to this class, i.e. :code:`GUILD`, :code:`MEMBER` et cetera. IDs as keys in the returned dict are cast to `int` for convenience. Default values are also mixed into the data if they have not yet been overwritten.
Get a dict of all values from a particular scope of data.
[ "Get", "a", "dict", "of", "all", "values", "from", "a", "particular", "scope", "of", "data", "." ]
async def _all_from_scope(self, scope: str) -> Dict[int, Dict[Any, Any]]: """Get a dict of all values from a particular scope of data. :code:`scope` must be one of the constants attributed to this class, i.e. :code:`GUILD`, :code:`MEMBER` et cetera. IDs as keys in the returned dict are cast to `int` for convenience. Default values are also mixed into the data if they have not yet been overwritten. """ group = self._get_base_group(scope) ret = {} defaults = self.defaults.get(scope, {}) try: dict_ = await self.driver.get(group.identifier_data) except KeyError: pass else: for k, v in dict_.items(): data = pickle.loads(pickle.dumps(defaults, -1)) data.update(v) ret[int(k)] = data return ret
[ "async", "def", "_all_from_scope", "(", "self", ",", "scope", ":", "str", ")", "->", "Dict", "[", "int", ",", "Dict", "[", "Any", ",", "Any", "]", "]", ":", "group", "=", "self", ".", "_get_base_group", "(", "scope", ")", "ret", "=", "{", "}", "d...
https://github.com/Cog-Creators/Red-DiscordBot/blob/b05933274a11fb097873ab0d1b246d37b06aa306/redbot/core/config.py#L1133-L1158
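The Config record above copies the per-scope defaults with `pickle.loads(pickle.dumps(defaults, -1))` before merging stored values into them. That pickle round-trip at the highest protocol is a common fast deep-copy idiom for plain picklable data; a sketch contrasting it with the general-purpose `copy.deepcopy`:

```python
import copy
import pickle

defaults = {"prefix": "!", "roles": [1, 2, 3]}

# Pickle round-trip: fast deep copy for plain picklable structures.
fast_copy = pickle.loads(pickle.dumps(defaults, -1))
# copy.deepcopy: slower but handles objects pickle cannot.
safe_copy = copy.deepcopy(defaults)

fast_copy["roles"].append(4)
print(defaults["roles"])   # [1, 2, 3] -- nested list in the original untouched
print(fast_copy["roles"])  # [1, 2, 3, 4]
```

Either way the point is isolation: mutating the merged result must not bleed back into the shared defaults dict.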
giswqs/whitebox-python
b4df0bbb10a1dee3bd0f6b3482511f7c829b38fe
whitebox/whitebox_tools.py
python
WhiteboxTools.ln
(self, i, output, callback=None)
return self.run_tool('ln', args, callback)
Returns the natural logarithm of values in a raster. Keyword arguments: i -- Input raster file. output -- Output raster file. callback -- Custom function for handling tool text outputs.
Returns the natural logarithm of values in a raster.
[ "Returns", "the", "natural", "logarithm", "of", "values", "in", "a", "raster", "." ]
def ln(self, i, output, callback=None): """Returns the natural logarithm of values in a raster. Keyword arguments: i -- Input raster file. output -- Output raster file. callback -- Custom function for handling tool text outputs. """ args = [] args.append("--input='{}'".format(i)) args.append("--output='{}'".format(output)) return self.run_tool('ln', args, callback)
[ "def", "ln", "(", "self", ",", "i", ",", "output", ",", "callback", "=", "None", ")", ":", "args", "=", "[", "]", "args", ".", "append", "(", "\"--input='{}'\"", ".", "format", "(", "i", ")", ")", "args", ".", "append", "(", "\"--output='{}'\"", "....
https://github.com/giswqs/whitebox-python/blob/b4df0bbb10a1dee3bd0f6b3482511f7c829b38fe/whitebox/whitebox_tools.py#L8121-L8133
ApostropheEditor/Apostrophe
cc30858c15f3408296d73202497d3cdef5a46064
apostrophe/inline_preview.py
python
InlinePreview.get_view_for_lexikon
(self, match)
return None
[]
def get_view_for_lexikon(self, match): term = match.group("text") lexikon_dict = get_dictionary(term) if lexikon_dict: grid = Gtk.Grid.new() grid.get_style_context().add_class("lexikon") grid.set_row_spacing(2) grid.set_column_spacing(4) i = 0 for entry in lexikon_dict: if not entry["defs"]: continue elif entry["class"].startswith("n"): word_type = _("noun") elif entry["class"].startswith("v"): word_type = _("verb") elif entry["class"].startswith("adj"): word_type = _("adjective") elif entry["class"].startswith("adv"): word_type = _("adverb") else: continue vocab_label = Gtk.Label.new(term + " ~ " + word_type) vocab_label.get_style_context().add_class("header") if i == 0: vocab_label.get_style_context().add_class("first") vocab_label.set_halign(Gtk.Align.START) vocab_label.set_justify(Gtk.Justification.LEFT) grid.attach(vocab_label, 0, i, 3, 1) for definition in entry["defs"]: i = i + 1 num_label = Gtk.Label.new(definition["num"] + ".") num_label.get_style_context().add_class("number") num_label.set_valign(Gtk.Align.START) grid.attach(num_label, 0, i, 1, 1) def_label = Gtk.Label( label=" ".join( definition["description"])) def_label.get_style_context().add_class("description") def_label.set_halign(Gtk.Align.START) def_label.set_max_width_chars(self.characters_per_line) def_label.set_line_wrap(True) def_label.set_justify(Gtk.Justification.FILL) grid.attach(def_label, 1, i, 1, 1) i = i + 1 if i > 0: return grid return None
[ "def", "get_view_for_lexikon", "(", "self", ",", "match", ")", ":", "term", "=", "match", ".", "group", "(", "\"text\"", ")", "lexikon_dict", "=", "get_dictionary", "(", "term", ")", "if", "lexikon_dict", ":", "grid", "=", "Gtk", ".", "Grid", ".", "new",...
https://github.com/ApostropheEditor/Apostrophe/blob/cc30858c15f3408296d73202497d3cdef5a46064/apostrophe/inline_preview.py#L247-L297
postgres/pgadmin4
374c5e952fa594d749fadf1f88076c1cba8c5f64
web/pgadmin/browser/server_groups/servers/databases/schemas/functions/__init__.py
python
FunctionView.sql
(self, gid, sid, did, scid, fnid=None, **kwargs)
return ajax_response(response=sql)
Returns the SQL for the Function object. Args: gid: Server Group Id sid: Server Id did: Database Id scid: Schema Id fnid: Function Id json_resp:
Returns the SQL for the Function object.
[ "Returns", "the", "SQL", "for", "the", "Function", "object", "." ]
def sql(self, gid, sid, did, scid, fnid=None, **kwargs): """ Returns the SQL for the Function object. Args: gid: Server Group Id sid: Server Id did: Database Id scid: Schema Id fnid: Function Id json_resp: """ json_resp = kwargs.get('json_resp', True) target_schema = kwargs.get('target_schema', None) resp_data = self._fetch_properties(gid, sid, did, scid, fnid) # Most probably this is due to error if not isinstance(resp_data, dict): return resp_data # Fetch the function definition. args = '' args_without_name = [] args_list = [] vol_dict = {'v': 'VOLATILE', 's': 'STABLE', 'i': 'IMMUTABLE'} if 'arguments' in resp_data and len(resp_data['arguments']) > 0: args_list = resp_data['arguments'] resp_data['args'] = resp_data['arguments'] self._get_arguments(args_list, args, args_without_name) resp_data['func_args'] = args.strip(' ') resp_data['func_args_without'] = ', '.join(args_without_name) self.reformat_prosrc_code(resp_data) if self.node_type == 'procedure': object_type = 'procedure' if 'provolatile' in resp_data: resp_data['provolatile'] = vol_dict.get( resp_data['provolatile'], '' ) # Get Schema Name from its OID. self._get_schema_name_from_oid(resp_data) # Parse privilege data self._parse_privilege_data(resp_data) func_def = self._get_procedure_definition(scid, fnid, resp_data, target_schema) else: object_type = 'function' # We are showing trigger functions under the trigger node. # It may possible that trigger is in one schema and trigger # function is in another schema, so to show the SQL we need to # change the schema id i.e scid. if self.node_type == 'trigger_function' and \ scid != resp_data['pronamespace']: scid = resp_data['pronamespace'] # Get Schema Name from its OID. 
self._get_schema_name_from_oid(resp_data) # Parse privilege data self._parse_privilege_data(resp_data) func_def = self._get_function_definition(scid, fnid, resp_data, target_schema) # This is to check whether any exception occurred, if yes, then return if not isinstance(func_def, str) and func_def.status_code is not None: return func_def sql_header = """-- {0}: {1}.{2}({3})\n\n""".format( object_type.upper(), resp_data['pronamespace'], resp_data['proname'], resp_data['proargtypenames'].lstrip('(').rstrip(')')) sql_header += """-- DROP {0} IF EXISTS {1}({2});\n\n""".format( object_type.upper(), self.qtIdent( self.conn, resp_data['pronamespace'], resp_data['proname']), resp_data['proargtypenames'].lstrip('(').rstrip(')')) pattern = '\n{2,}' repl = '\n\n' if not json_resp: return re.sub(pattern, repl, func_def) sql = sql_header + func_def sql = re.sub(pattern, repl, sql) return ajax_response(response=sql)
[ "def", "sql", "(", "self", ",", "gid", ",", "sid", ",", "did", ",", "scid", ",", "fnid", "=", "None", ",", "*", "*", "kwargs", ")", ":", "json_resp", "=", "kwargs", ".", "get", "(", "'json_resp'", ",", "True", ")", "target_schema", "=", "kwargs", ...
https://github.com/postgres/pgadmin4/blob/374c5e952fa594d749fadf1f88076c1cba8c5f64/web/pgadmin/browser/server_groups/servers/databases/schemas/functions/__init__.py#L1143-L1239
openhatch/oh-mainline
ce29352a034e1223141dcc2f317030bbc3359a51
vendor/packages/django-tastypie/tastypie/utils/formatting.py
python
format_time
(t)
return dateformat.format(dt, 'H:i:s O')
RFC 2822 time formatter
RFC 2822 time formatter
[ "RFC", "2822", "time", "formatter" ]
def format_time(t): """ RFC 2822 time formatter """ # again, workaround dateformat input requirement dt = aware_datetime(2000, 1, 1, t.hour, t.minute, t.second) return dateformat.format(dt, 'H:i:s O')
[ "def", "format_time", "(", "t", ")", ":", "# again, workaround dateformat input requirement", "dt", "=", "aware_datetime", "(", "2000", ",", "1", ",", "1", ",", "t", ".", "hour", ",", "t", ".", "minute", ",", "t", ".", "second", ")", "return", "dateformat"...
https://github.com/openhatch/oh-mainline/blob/ce29352a034e1223141dcc2f317030bbc3359a51/vendor/packages/django-tastypie/tastypie/utils/formatting.py#L31-L37
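The tastypie `format_time` record above grafts a bare `time` onto a dummy date because django's `dateformat` only accepts datetimes. A stdlib-only sketch of the same workaround, using `strftime` in place of `dateformat` (the UTC tzinfo is an assumption so `%z` has an offset to render):

```python
from datetime import datetime, time, timezone

def format_time_rfc2822(t: time) -> str:
    # dateformat/strftime need a full datetime, so attach a dummy date.
    dt = datetime(2000, 1, 1, t.hour, t.minute, t.second, tzinfo=timezone.utc)
    return dt.strftime('%H:%M:%S %z')   # 'O' in django's syntax ~ '%z' here

print(format_time_rfc2822(time(14, 30, 5)))  # 14:30:05 +0000
```

The chosen dummy date (2000-01-01) is irrelevant to the output; only the time fields and the offset survive formatting.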
huggingface/transformers
623b4f7c63f60cce917677ee704d6c93ee960b4b
src/transformers/utils/dummy_pt_objects.py
python
UniSpeechSatForSequenceClassification.from_pretrained
(cls, *args, **kwargs)
[]
def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["torch"])
[ "def", "from_pretrained", "(", "cls", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "requires_backends", "(", "cls", ",", "[", "\"torch\"", "]", ")" ]
https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/src/transformers/utils/dummy_pt_objects.py#L4955-L4956
spyder-ide/spyder
55da47c032dfcf519600f67f8b30eab467f965e7
external-deps/qdarkstyle/qdarkstyle/utils/scss.py
python
_create_scss_variables
(variables_scss_filepath, palette, header=HEADER_SCSS)
Create a scss variables file.
Create a scss variables file.
[ "Create", "a", "scss", "variables", "file", "." ]
def _create_scss_variables(variables_scss_filepath, palette, header=HEADER_SCSS): """Create a scss variables file.""" scss = _dict_to_scss(palette.to_dict()) data = header.format(qtsass.__version__) + scss + '\n' with open(variables_scss_filepath, 'w') as f: f.write(data)
[ "def", "_create_scss_variables", "(", "variables_scss_filepath", ",", "palette", ",", "header", "=", "HEADER_SCSS", ")", ":", "scss", "=", "_dict_to_scss", "(", "palette", ".", "to_dict", "(", ")", ")", "data", "=", "header", ".", "format", "(", "qtsass", "....
https://github.com/spyder-ide/spyder/blob/55da47c032dfcf519600f67f8b30eab467f965e7/external-deps/qdarkstyle/qdarkstyle/utils/scss.py#L81-L88
llSourcell/AI_Artist
3038c06c2e389b9c919c881c9a169efe2fd7810e
lib/python2.7/site-packages/setuptools/command/build_py.py
python
build_py.build_package_data
(self)
Copy data files into build directory
Copy data files into build directory
[ "Copy", "data", "files", "into", "build", "directory" ]
def build_package_data(self): """Copy data files into build directory""" for package, src_dir, build_dir, filenames in self.data_files: for filename in filenames: target = os.path.join(build_dir, filename) self.mkpath(os.path.dirname(target)) srcfile = os.path.join(src_dir, filename) outf, copied = self.copy_file(srcfile, target) srcfile = os.path.abspath(srcfile) if (copied and srcfile in self.distribution.convert_2to3_doctests): self.__doctests_2to3.append(outf)
[ "def", "build_package_data", "(", "self", ")", ":", "for", "package", ",", "src_dir", ",", "build_dir", ",", "filenames", "in", "self", ".", "data_files", ":", "for", "filename", "in", "filenames", ":", "target", "=", "os", ".", "path", ".", "join", "(",...
https://github.com/llSourcell/AI_Artist/blob/3038c06c2e389b9c919c881c9a169efe2fd7810e/lib/python2.7/site-packages/setuptools/command/build_py.py#L111-L122
openshift/openshift-tools
1188778e728a6e4781acf728123e5b356380fe6f
openshift/installer/vendored/openshift-ansible-3.9.40/roles/openshift_health_checker/openshift_checks/logging/fluentd.py
python
Fluentd.get_nodes_by_name
(self)
return { node['metadata']['name']: node for node in nodes['items'] }
Retrieve all the node definitions. Returns: dict(name: node)
Retrieve all the node definitions. Returns: dict(name: node)
[ "Retrieve", "all", "the", "node", "definitions", ".", "Returns", ":", "dict", "(", "name", ":", "node", ")" ]
def get_nodes_by_name(self): """Retrieve all the node definitions. Returns: dict(name: node)""" nodes_json = self.exec_oc("get nodes -o json", []) try: nodes = json.loads(nodes_json) except ValueError: # no valid json - should not happen raise OpenShiftCheckException( "BadOcNodeList", "Could not obtain a list of nodes to validate fluentd.\n" "Output from oc get:\n" + nodes_json ) if not nodes or not nodes.get('items'): # also should not happen raise OpenShiftCheckException( "NoNodesDefined", "No nodes appear to be defined according to the API." ) return { node['metadata']['name']: node for node in nodes['items'] }
[ "def", "get_nodes_by_name", "(", "self", ")", ":", "nodes_json", "=", "self", ".", "exec_oc", "(", "\"get nodes -o json\"", ",", "[", "]", ")", "try", ":", "nodes", "=", "json", ".", "loads", "(", "nodes_json", ")", "except", "ValueError", ":", "# no valid...
https://github.com/openshift/openshift-tools/blob/1188778e728a6e4781acf728123e5b356380fe6f/openshift/installer/vendored/openshift-ansible-3.9.40/roles/openshift_health_checker/openshift_checks/logging/fluentd.py#L50-L69
playframework/play1
0ecac3bc2421ae2dbec27a368bf671eda1c9cba5
python/Lib/site-packages/win32/lib/regutil.py
python
RegisterHelpFile
(helpFile, helpPath, helpDesc = None, bCheckFile = 1)
Register a help file in the registry. Note that this used to support writing to the Windows Help key, however this is no longer done, as it seems to be incompatible. helpFile -- the base name of the help file. helpPath -- the path to the help file helpDesc -- A description for the help file. If None, the helpFile param is used. bCheckFile -- A flag indicating if the file existence should be checked.
Register a help file in the registry. Note that this used to support writing to the Windows Help key, however this is no longer done, as it seems to be incompatible.
[ "Register", "a", "help", "file", "in", "the", "registry", ".", "Note", "that", "this", "used", "to", "support", "writing", "to", "the", "Windows", "Help", "key", "however", "this", "is", "no", "longer", "done", "as", "it", "seems", "to", "be", "incompati...
def RegisterHelpFile(helpFile, helpPath, helpDesc = None, bCheckFile = 1): """Register a help file in the registry. Note that this used to support writing to the Windows Help key, however this is no longer done, as it seems to be incompatible. helpFile -- the base name of the help file. helpPath -- the path to the help file helpDesc -- A description for the help file. If None, the helpFile param is used. bCheckFile -- A flag indicating if the file existence should be checked. """ if helpDesc is None: helpDesc = helpFile fullHelpFile = os.path.join(helpPath, helpFile) try: if bCheckFile: os.stat(fullHelpFile) except os.error: raise ValueError("Help file does not exist") # Now register with Python itself. win32api.RegSetValue(GetRootKey(), BuildDefaultPythonKey() + "\\Help\\%s" % helpDesc, win32con.REG_SZ, fullHelpFile)
[ "def", "RegisterHelpFile", "(", "helpFile", ",", "helpPath", ",", "helpDesc", "=", "None", ",", "bCheckFile", "=", "1", ")", ":", "if", "helpDesc", "is", "None", ":", "helpDesc", "=", "helpFile", "fullHelpFile", "=", "os", ".", "path", ".", "join", "(", ...
https://github.com/playframework/play1/blob/0ecac3bc2421ae2dbec27a368bf671eda1c9cba5/python/Lib/site-packages/win32/lib/regutil.py#L171-L190
ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework
cb692f527e4e819b6c228187c5702d990a180043
external/Scripting Engine/Xenotix Python Scripting Engine/Lib/numbers.py
python
Integral.__ror__
(self, other)
other | self
other | self
[ "other", "|", "self" ]
def __ror__(self, other): """other | self""" raise NotImplementedError
[ "def", "__ror__", "(", "self", ",", "other", ")", ":", "raise", "NotImplementedError" ]
https://github.com/ajinabraham/OWASP-Xenotix-XSS-Exploit-Framework/blob/cb692f527e4e819b6c228187c5702d990a180043/external/Scripting Engine/Xenotix Python Scripting Engine/Lib/numbers.py#L366-L368
xiaobai1217/MBMD
246f3434bccb9c8357e0f698995b659578bf1afb
lib/slim/datasets/flowers.py
python
get_split
(split_name, dataset_dir, file_pattern=None, reader=None)
return slim.dataset.Dataset( data_sources=file_pattern, reader=reader, decoder=decoder, num_samples=SPLITS_TO_SIZES[split_name], items_to_descriptions=_ITEMS_TO_DESCRIPTIONS, num_classes=_NUM_CLASSES, labels_to_names=labels_to_names)
Gets a dataset tuple with instructions for reading flowers. Args: split_name: A train/validation split name. dataset_dir: The base directory of the dataset sources. file_pattern: The file pattern to use when matching the dataset sources. It is assumed that the pattern contains a '%s' string so that the split name can be inserted. reader: The TensorFlow reader type. Returns: A `Dataset` namedtuple. Raises: ValueError: if `split_name` is not a valid train/validation split.
Gets a dataset tuple with instructions for reading flowers.
[ "Gets", "a", "dataset", "tuple", "with", "instructions", "for", "reading", "flowers", "." ]
def get_split(split_name, dataset_dir, file_pattern=None, reader=None): """Gets a dataset tuple with instructions for reading flowers. Args: split_name: A train/validation split name. dataset_dir: The base directory of the dataset sources. file_pattern: The file pattern to use when matching the dataset sources. It is assumed that the pattern contains a '%s' string so that the split name can be inserted. reader: The TensorFlow reader type. Returns: A `Dataset` namedtuple. Raises: ValueError: if `split_name` is not a valid train/validation split. """ if split_name not in SPLITS_TO_SIZES: raise ValueError('split name %s was not recognized.' % split_name) if not file_pattern: file_pattern = _FILE_PATTERN file_pattern = os.path.join(dataset_dir, file_pattern % split_name) # Allowing None in the signature so that dataset_factory can use the default. if reader is None: reader = tf.TFRecordReader keys_to_features = { 'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''), 'image/format': tf.FixedLenFeature((), tf.string, default_value='png'), 'image/class/label': tf.FixedLenFeature( [], tf.int64, default_value=tf.zeros([], dtype=tf.int64)), } items_to_handlers = { 'image': slim.tfexample_decoder.Image(), 'label': slim.tfexample_decoder.Tensor('image/class/label'), } decoder = slim.tfexample_decoder.TFExampleDecoder( keys_to_features, items_to_handlers) labels_to_names = None if dataset_utils.has_labels(dataset_dir): labels_to_names = dataset_utils.read_label_file(dataset_dir) return slim.dataset.Dataset( data_sources=file_pattern, reader=reader, decoder=decoder, num_samples=SPLITS_TO_SIZES[split_name], items_to_descriptions=_ITEMS_TO_DESCRIPTIONS, num_classes=_NUM_CLASSES, labels_to_names=labels_to_names)
[ "def", "get_split", "(", "split_name", ",", "dataset_dir", ",", "file_pattern", "=", "None", ",", "reader", "=", "None", ")", ":", "if", "split_name", "not", "in", "SPLITS_TO_SIZES", ":", "raise", "ValueError", "(", "'split name %s was not recognized.'", "%", "s...
https://github.com/xiaobai1217/MBMD/blob/246f3434bccb9c8357e0f698995b659578bf1afb/lib/slim/datasets/flowers.py#L44-L98
domlysz/BlenderGIS
0c00bc361d05599467174b8721d4cfeb4c3db608
operators/view3d_mapviewer.py
python
BaseMap.place
(self)
Set map as background image
Set map as background image
[ "Set", "map", "as", "background", "image" ]
def place(self): '''Set map as background image''' #Get or load bpy image try: self.img = [img for img in bpy.data.images if img.filepath == self.imgPath and len(img.packed_files) == 0][0] except IndexError: self.img = bpy.data.images.load(self.imgPath) #Get or load background image empties = [obj for obj in self.scn.objects if obj.type == 'EMPTY'] bkgs = [obj for obj in empties if obj.empty_display_type == 'IMAGE'] for bkg in bkgs: bkg.hide_viewport = True try: self.bkg = [bkg for bkg in bkgs if bkg.data.filepath == self.imgPath and len(bkg.data.packed_files) == 0][0] except IndexError: self.bkg = bpy.data.objects.new(self.name, None) #None will create an empty self.bkg.empty_display_type = 'IMAGE' self.bkg.empty_image_depth = 'BACK' self.bkg.data = self.img self.scn.collection.objects.link(self.bkg) else: self.bkg.hide_viewport = False #Get some image props img_ox, img_oy = self.mosaic.center img_w, img_h = self.mosaic.size res = self.mosaic.pxSize.x #res = self.tm.getRes(self.zoom) #Set background size sizex = img_w * res / self.scale sizey = img_h * res / self.scale size = max([sizex, sizey]) #self.bkg.empty_display_size = sizex #limited to 1000 self.bkg.empty_display_size = 1 #a size of 1 means image width=1bu self.bkg.scale = (size, size, 1) #Set background offset (image origin does not match scene origin) dx = (self.crsx - img_ox) / self.scale dy = (self.crsy - img_oy) / self.scale #self.bkg.empty_image_offset = [-0.5, -0.5] #in image unit space self.bkg.location = (-dx, -dy, 0) #ratio = img_w / img_h #self.bkg.offset_y = -dy * ratio #https://developer.blender.org/T48034 #Get 3d area's number of pixels and resulting size at the requested zoom level resolution #dst = max( [self.area3d.width, self.area3d.height] ) #WARN return [1,1] !!!!???? 
dst = max( [self.area.width, self.area.height] ) z = self.lockedZoom if self.lockedZoom is not None else self.zoom res = self.tm.getRes(z) dst = dst * res / self.scale #Compute 3dview FOV and needed z distance to see the maximum extent that #can be draw at full res (area 3d needs enough pixels otherwise the image will appears downgraded) #WARN seems these formulas does not works properly in Blender2.8 view3D_aperture = 36 #Blender constant (see source code) view3D_zoom = 2 #Blender constant (see source code) fov = 2 * math.atan(view3D_aperture / (self.view3d.lens*2) ) #fov equation fov = math.atan(math.tan(fov/2) * view3D_zoom) * 2 #zoom correction (see source code) zdst = (dst/2) / math.tan(fov/2) #trigo zdst = math.floor(zdst) #make sure no downgrade self.reg3d.view_distance = zdst self.viewDstZ = zdst #Update image drawing self.bkg.data.reload()
[ "def", "place", "(", "self", ")", ":", "#Get or load bpy image", "try", ":", "self", ".", "img", "=", "[", "img", "for", "img", "in", "bpy", ".", "data", ".", "images", "if", "img", ".", "filepath", "==", "self", ".", "imgPath", "and", "len", "(", ...
https://github.com/domlysz/BlenderGIS/blob/0c00bc361d05599467174b8721d4cfeb4c3db608/operators/view3d_mapviewer.py#L216-L283
Miserlou/Zappa
5a11c17f5ecf0568bdb73b4baf6fb08ff0184f39
zappa/utilities.py
python
is_valid_bucket_name
(name)
return True
Checks if an S3 bucket name is valid according to https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules
Checks if an S3 bucket name is valid according to https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules
[ "Checks", "if", "an", "S3", "bucket", "name", "is", "valid", "according", "to", "https", ":", "//", "docs", ".", "aws", ".", "amazon", ".", "com", "/", "AmazonS3", "/", "latest", "/", "dev", "/", "BucketRestrictions", ".", "html#bucketnamingrules" ]
def is_valid_bucket_name(name): """ Checks if an S3 bucket name is valid according to https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules """ # Bucket names must be at least 3 and no more than 63 characters long. if (len(name) < 3 or len(name) > 63): return False # Bucket names must not contain uppercase characters or underscores. if (any(x.isupper() for x in name)): return False if "_" in name: return False # Bucket names must start with a lowercase letter or number. if not (name[0].islower() or name[0].isdigit()): return False # Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). for label in name.split("."): # Each label must start and end with a lowercase letter or a number. if len(label) < 1: return False if not (label[0].islower() or label[0].isdigit()): return False if not (label[-1].islower() or label[-1].isdigit()): return False # Bucket names must not be formatted as an IP address (for example, 192.168.5.4). looks_like_IP = True for label in name.split("."): if not label.isdigit(): looks_like_IP = False break if looks_like_IP: return False return True
[ "def", "is_valid_bucket_name", "(", "name", ")", ":", "# Bucket names must be at least 3 and no more than 63 characters long.", "if", "(", "len", "(", "name", ")", "<", "3", "or", "len", "(", "name", ")", ">", "63", ")", ":", "return", "False", "# Bucket names mus...
https://github.com/Miserlou/Zappa/blob/5a11c17f5ecf0568bdb73b4baf6fb08ff0184f39/zappa/utilities.py#L534-L567
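The same bucket-name rules can be condensed into a regex plus an IP check. The sketch below is an illustrative re-implementation for experimentation, not the `zappa.utilities` code itself; note it is slightly stricter than the original (labels are limited to lowercase alphanumerics and hyphens).

```python
import re

# One label: starts and ends with a lowercase letter or digit,
# hyphens allowed in between. A name is one or more dot-separated labels.
LABEL = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
BUCKET_RE = re.compile(r"^" + LABEL + r"(\." + LABEL + r")*$")

def looks_valid(name):
    # Bucket names must be 3-63 characters long.
    if not (3 <= len(name) <= 63):
        return False
    if not BUCKET_RE.match(name):
        return False
    # Reject names formatted like an IP address, e.g. 192.168.5.4
    if all(label.isdigit() for label in name.split(".")):
        return False
    return True
```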
UFAL-DSG/tgen
3c43c0e29faa7ea3857a6e490d9c28a8daafc7d0
tgen/candgen.py
python
RandomCandidateGenerator.can_generate_greedy
(self, tree, da)
return True
Check if the candidate generator can generate a given tree greedily, always pursuing the first viable path. This is for debugging purposes only. Uses `get_all_successors` and always goes on with the first one that increases coverage of the current tree.
Check if the candidate generator can generate a given tree greedily, always pursuing the first viable path.
[ "Check", "if", "the", "candidate", "generator", "can", "generate", "a", "given", "tree", "greedily", "always", "pursuing", "the", "first", "viable", "path", "." ]
def can_generate_greedy(self, tree, da):
    """Check if the candidate generator can generate a given tree greedily, always
    pursuing the first viable path.

    This is for debugging purposes only.

    Uses `get_all_successors` and always goes on with the first one that increases coverage
    of the current tree.
    """
    self.init_run(da)
    cur_subtree = TreeData()
    found = True
    while found and cur_subtree != tree:
        found = False
        for succ in self.get_all_successors(cur_subtree):
            # use the first successor that is still a subtree of the target tree
            if tree.common_subtree_size(succ) == len(succ):
                cur_subtree = succ
                found = True
                break
    # we have hit a dead end
    if cur_subtree != tree:
        log_info('Did not find tree: ' + str(tree) + ' for DA: ' + str(da))
        return False
    # everything alright
    log_info('Found tree: %s for DA: %s' % (str(tree), str(da)))
    return True
[ "def", "can_generate_greedy", "(", "self", ",", "tree", ",", "da", ")", ":", "self", ".", "init_run", "(", "da", ")", "cur_subtree", "=", "TreeData", "(", ")", "found", "=", "True", "while", "found", "and", "cur_subtree", "!=", "tree", ":", "found", "=...
https://github.com/UFAL-DSG/tgen/blob/3c43c0e29faa7ea3857a6e490d9c28a8daafc7d0/tgen/candgen.py#L484-L512
iagcl/watchmen
d329b357e6fde3ad91e972988b160a33c12afc2a
verification_rules/check_cloudtrail/check_cloudtrail.py
python
lambda_handler
(event, context)
Entrypoint for lambda function. Args: event: lambda event context: lambda context
Entrypoint for lambda function.
[ "Entrypoint", "for", "lambda", "function", "." ]
def lambda_handler(event, context):
    """Entrypoint for lambda function.

    Args:
        event: lambda event
        context: lambda context
    """
    citizen_exec_role_arn = event["citizen_exec_role_arn"]
    event = event["config_event"]
    logger.log_event(event, context, None, None)
    invoking_event = json.loads(event["invokingEvent"])
    parameter = rule_parameter.RuleParameter(event)
    is_test_mode = parameter.get("testMode", False)
    assumed_creds = credential.get_assumed_creds(boto3.client("sts"), citizen_exec_role_arn)
    config = boto3.client("config", **assumed_creds)
    compliance_type = get_compliance_type(boto3.client("cloudtrail", **assumed_creds))
    eval_element = evaluation.EvaluationElement(
        event["accountId"],
        "AWS::::Account",
        compliance_type,
        "",
        invoking_event["notificationCreationTime"]
    )
    evaluation.put_log_evaluation(
        config,
        eval_element,
        event["resultToken"],
        is_test_mode,
        logger,
        event,
        context
    )
[ "def", "lambda_handler", "(", "event", ",", "context", ")", ":", "citizen_exec_role_arn", "=", "event", "[", "\"citizen_exec_role_arn\"", "]", "event", "=", "event", "[", "\"config_event\"", "]", "logger", ".", "log_event", "(", "event", ",", "context", ",", "...
https://github.com/iagcl/watchmen/blob/d329b357e6fde3ad91e972988b160a33c12afc2a/verification_rules/check_cloudtrail/check_cloudtrail.py#L47-L85
bigboNed3/bert_serving
44d33920da6888cf91cb72e6c7b27c7b0c7d8815
run_classifier.py
python
create_model
(bert_config, is_training, input_ids, input_mask, segment_ids, labels, num_labels, use_one_hot_embeddings)
Creates a classification model.
Creates a classification model.
[ "Creates", "a", "classification", "model", "." ]
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
                 labels, num_labels, use_one_hot_embeddings):
    """Creates a classification model."""
    model = modeling.BertModel(
        config=bert_config,
        is_training=is_training,
        input_ids=input_ids,
        input_mask=input_mask,
        token_type_ids=segment_ids,
        use_one_hot_embeddings=use_one_hot_embeddings)

    # In the demo, we are doing a simple classification task on the entire
    # segment.
    #
    # If you want to use the token-level output, use model.get_sequence_output()
    # instead.
    output_layer = model.get_pooled_output()

    hidden_size = output_layer.shape[-1].value

    output_weights = tf.get_variable(
        "output_weights", [num_labels, hidden_size],
        initializer=tf.truncated_normal_initializer(stddev=0.02))

    output_bias = tf.get_variable(
        "output_bias", [num_labels], initializer=tf.zeros_initializer())

    with tf.variable_scope("loss"):
        if is_training:
            # I.e., 0.1 dropout
            output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)

        logits = tf.matmul(output_layer, output_weights, transpose_b=True)
        logits = tf.nn.bias_add(logits, output_bias)
        probabilities = tf.nn.softmax(logits, axis=-1)
        log_probs = tf.nn.log_softmax(logits, axis=-1)

        one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)

        per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
        loss = tf.reduce_mean(per_example_loss)

        return (loss, per_example_loss, logits, probabilities)
[ "def", "create_model", "(", "bert_config", ",", "is_training", ",", "input_ids", ",", "input_mask", ",", "segment_ids", ",", "labels", ",", "num_labels", ",", "use_one_hot_embeddings", ")", ":", "model", "=", "modeling", ".", "BertModel", "(", "config", "=", "...
https://github.com/bigboNed3/bert_serving/blob/44d33920da6888cf91cb72e6c7b27c7b0c7d8815/run_classifier.py#L582-L624
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_flaskbb/lib/python2.7/site-packages/whoosh/analysis/filters.py
python
Filter.__ne__
(self, other)
return not self == other
[]
def __ne__(self, other):
    return not self == other
[ "def", "__ne__", "(", "self", ",", "other", ")", ":", "return", "not", "self", "==", "other" ]
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/whoosh/analysis/filters.py#L77-L78
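The one-liner above is the standard eq/ne pairing: `__ne__` simply negates `__eq__` so the two can never disagree. On Python 3 defining `__eq__` alone gives this `__ne__` for free, but Python 2 (which this whoosh vendored copy targets) needed it spelled out. A minimal self-contained sketch of the pattern, with an illustrative class name:

```python
class Tag(object):
    """Sketch of the delegating __ne__ pattern: inequality is defined
    strictly as the negation of equality."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Tag) and self.name == other.name

    def __ne__(self, other):
        return not self == other
```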
twilio/twilio-python
6e1e811ea57a1edfadd5161ace87397c563f6915
twilio/rest/verify/__init__.py
python
Verify.v2
(self)
return self._v2
:returns: Version v2 of verify :rtype: twilio.rest.verify.v2.V2
:returns: Version v2 of verify :rtype: twilio.rest.verify.v2.V2
[ ":", "returns", ":", "Version", "v2", "of", "verify", ":", "rtype", ":", "twilio", ".", "rest", ".", "verify", ".", "v2", ".", "V2" ]
def v2(self):
    """
    :returns: Version v2 of verify
    :rtype: twilio.rest.verify.v2.V2
    """
    if self._v2 is None:
        self._v2 = V2(self)
    return self._v2
[ "def", "v2", "(", "self", ")", ":", "if", "self", ".", "_v2", "is", "None", ":", "self", ".", "_v2", "=", "V2", "(", "self", ")", "return", "self", ".", "_v2" ]
https://github.com/twilio/twilio-python/blob/6e1e811ea57a1edfadd5161ace87397c563f6915/twilio/rest/verify/__init__.py#L30-L37
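This accessor uses lazy initialization: the `V2` sub-client is only constructed on first access, then cached on the instance. A generic, self-contained sketch of the same pattern (class and attribute names here are illustrative, not part of the twilio API):

```python
class Client(object):
    """Sketch of the lazy-initialization property pattern: the expensive
    sub-object is built once, on first access, and cached thereafter."""

    def __init__(self):
        self._v2 = None

    @property
    def v2(self):
        if self._v2 is None:
            self._v2 = object()  # stands in for V2(self)
        return self._v2
```

Repeated accesses return the same cached object, so construction cost is paid at most once.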
boto/boto
b2a6f08122b2f1b89888d2848e730893595cd001
boto/opsworks/layer1.py
python
OpsWorksConnection.create_layer
(self, stack_id, type, name, shortname, attributes=None, custom_instance_profile_arn=None, custom_security_group_ids=None, packages=None, volume_configurations=None, enable_auto_healing=None, auto_assign_elastic_ips=None, auto_assign_public_ips=None, custom_recipes=None, install_updates_on_boot=None, use_ebs_optimized_instances=None, lifecycle_event_configuration=None)
return self.make_request(action='CreateLayer', body=json.dumps(params))
Creates a layer. For more information, see `How to Create a Layer`_. You should use **CreateLayer** for noncustom layer types such as PHP App Server only if the stack does not have an existing layer of that type. A stack can have at most one instance of each noncustom layer; if you attempt to create a second instance, **CreateLayer** fails. A stack can have an arbitrary number of custom layers, so you can call **CreateLayer** as many times as you like for that layer type. **Required Permissions**: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see `Managing User Permissions`_. :type stack_id: string :param stack_id: The layer stack ID. :type type: string :param type: The layer type. A stack cannot have more than one built-in layer of the same type. It can have any number of custom layers. :type name: string :param name: The layer name, which is used by the console. :type shortname: string :param shortname: The layer short name, which is used internally by AWS OpsWorks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes. :type custom_instance_profile_arn: string :param custom_instance_profile_arn: The ARN of an IAM profile to be used for the layer's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type custom_security_group_ids: list :param custom_security_group_ids: An array containing the layer custom security group IDs. :type packages: list :param packages: An array of `Package` objects that describe the layer packages. 
:type volume_configurations: list :param volume_configurations: A `VolumeConfigurations` object that describes the layer's Amazon EBS volumes. :type enable_auto_healing: boolean :param enable_auto_healing: Whether to disable auto healing for the layer. :type auto_assign_elastic_ips: boolean :param auto_assign_elastic_ips: Whether to automatically assign an `Elastic IP address`_ to the layer's instances. For more information, see `How to Edit a Layer`_. :type auto_assign_public_ips: boolean :param auto_assign_public_ips: For stacks that are running in a VPC, whether to automatically assign a public IP address to the layer's instances. For more information, see `How to Edit a Layer`_. :type custom_recipes: dict :param custom_recipes: A `LayerCustomRecipes` object that specifies the layer custom recipes. :type install_updates_on_boot: boolean :param install_updates_on_boot: Whether to install operating system and package updates when the instance boots. The default value is `True`. To control when updates are installed, set this value to `False`. You must then update your instances manually by using CreateDeployment to run the `update_dependencies` stack command or manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the instances. We strongly recommend using the default value of `True`, to ensure that your instances have the latest security updates. :type use_ebs_optimized_instances: boolean :param use_ebs_optimized_instances: Whether to use Amazon EBS-optimized instances. :type lifecycle_event_configuration: dict :param lifecycle_event_configuration: A LifeCycleEventConfiguration object that you can use to configure the Shutdown event to specify an execution timeout and enable or disable Elastic Load Balancer connection draining.
Creates a layer. For more information, see `How to Create a Layer`_.
[ "Creates", "a", "layer", ".", "For", "more", "information", "see", "How", "to", "Create", "a", "Layer", "_", "." ]
def create_layer(self, stack_id, type, name, shortname, attributes=None,
                 custom_instance_profile_arn=None,
                 custom_security_group_ids=None, packages=None,
                 volume_configurations=None, enable_auto_healing=None,
                 auto_assign_elastic_ips=None, auto_assign_public_ips=None,
                 custom_recipes=None, install_updates_on_boot=None,
                 use_ebs_optimized_instances=None,
                 lifecycle_event_configuration=None):
    """
    Creates a layer. For more information, see `How to Create a Layer`_.

    You should use **CreateLayer** for noncustom layer types such as PHP
    App Server only if the stack does not have an existing layer of that
    type. A stack can have at most one instance of each noncustom layer;
    if you attempt to create a second instance, **CreateLayer** fails. A
    stack can have an arbitrary number of custom layers, so you can call
    **CreateLayer** as many times as you like for that layer type.

    **Required Permissions**: To use this action, an IAM user must have a
    Manage permissions level for the stack, or an attached policy that
    explicitly grants permissions. For more information on user
    permissions, see `Managing User Permissions`_.

    :type stack_id: string
    :param stack_id: The layer stack ID.

    :type type: string
    :param type: The layer type. A stack cannot have more than one
        built-in layer of the same type. It can have any number of custom
        layers.

    :type name: string
    :param name: The layer name, which is used by the console.

    :type shortname: string
    :param shortname: The layer short name, which is used internally by
        AWS OpsWorks and by Chef recipes. The short name is also used as
        the name for the directory where your app files are installed. It
        can have a maximum of 200 characters, which are limited to the
        alphanumeric characters, '-', '_', and '.'.

    :type attributes: map
    :param attributes: One or more user-defined key/value pairs to be
        added to the stack attributes.

    :type custom_instance_profile_arn: string
    :param custom_instance_profile_arn: The ARN of an IAM profile to be
        used for the layer's EC2 instances. For more information about IAM
        ARNs, see `Using Identifiers`_.

    :type custom_security_group_ids: list
    :param custom_security_group_ids: An array containing the layer custom
        security group IDs.

    :type packages: list
    :param packages: An array of `Package` objects that describe the layer
        packages.

    :type volume_configurations: list
    :param volume_configurations: A `VolumeConfigurations` object that
        describes the layer's Amazon EBS volumes.

    :type enable_auto_healing: boolean
    :param enable_auto_healing: Whether to disable auto healing for the
        layer.

    :type auto_assign_elastic_ips: boolean
    :param auto_assign_elastic_ips: Whether to automatically assign an
        `Elastic IP address`_ to the layer's instances. For more
        information, see `How to Edit a Layer`_.

    :type auto_assign_public_ips: boolean
    :param auto_assign_public_ips: For stacks that are running in a VPC,
        whether to automatically assign a public IP address to the layer's
        instances. For more information, see `How to Edit a Layer`_.

    :type custom_recipes: dict
    :param custom_recipes: A `LayerCustomRecipes` object that specifies
        the layer custom recipes.

    :type install_updates_on_boot: boolean
    :param install_updates_on_boot: Whether to install operating system
        and package updates when the instance boots. The default value is
        `True`. To control when updates are installed, set this value to
        `False`. You must then update your instances manually by using
        CreateDeployment to run the `update_dependencies` stack command or
        manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the
        instances.

        We strongly recommend using the default value of `True`, to ensure
        that your instances have the latest security updates.

    :type use_ebs_optimized_instances: boolean
    :param use_ebs_optimized_instances: Whether to use Amazon
        EBS-optimized instances.

    :type lifecycle_event_configuration: dict
    :param lifecycle_event_configuration: A LifeCycleEventConfiguration
        object that you can use to configure the Shutdown event to specify
        an execution timeout and enable or disable Elastic Load Balancer
        connection draining.
    """
    params = {
        'StackId': stack_id,
        'Type': type,
        'Name': name,
        'Shortname': shortname,
    }
    if attributes is not None:
        params['Attributes'] = attributes
    if custom_instance_profile_arn is not None:
        params['CustomInstanceProfileArn'] = custom_instance_profile_arn
    if custom_security_group_ids is not None:
        params['CustomSecurityGroupIds'] = custom_security_group_ids
    if packages is not None:
        params['Packages'] = packages
    if volume_configurations is not None:
        params['VolumeConfigurations'] = volume_configurations
    if enable_auto_healing is not None:
        params['EnableAutoHealing'] = enable_auto_healing
    if auto_assign_elastic_ips is not None:
        params['AutoAssignElasticIps'] = auto_assign_elastic_ips
    if auto_assign_public_ips is not None:
        params['AutoAssignPublicIps'] = auto_assign_public_ips
    if custom_recipes is not None:
        params['CustomRecipes'] = custom_recipes
    if install_updates_on_boot is not None:
        params['InstallUpdatesOnBoot'] = install_updates_on_boot
    if use_ebs_optimized_instances is not None:
        params['UseEbsOptimizedInstances'] = use_ebs_optimized_instances
    if lifecycle_event_configuration is not None:
        params['LifecycleEventConfiguration'] = lifecycle_event_configuration
    return self.make_request(action='CreateLayer',
                             body=json.dumps(params))
[ "def", "create_layer", "(", "self", ",", "stack_id", ",", "type", ",", "name", ",", "shortname", ",", "attributes", "=", "None", ",", "custom_instance_profile_arn", "=", "None", ",", "custom_security_group_ids", "=", "None", ",", "packages", "=", "None", ",", ...
https://github.com/boto/boto/blob/b2a6f08122b2f1b89888d2848e730893595cd001/boto/opsworks/layer1.py#L757-L897
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/lib/django-1.2/django/core/urlresolvers.py
python
get_urlconf
(default=None)
return default
Returns the root URLconf to use for the current thread if it has been changed from the default one.
Returns the root URLconf to use for the current thread if it has been changed from the default one.
[ "Returns", "the", "root", "URLconf", "to", "use", "for", "the", "current", "thread", "if", "it", "has", "been", "changed", "from", "the", "default", "one", "." ]
def get_urlconf(default=None):
    """
    Returns the root URLconf to use for the current thread if it has been
    changed from the default one.
    """
    thread = currentThread()
    if thread in _urlconfs:
        return _urlconfs[thread]
    return default
[ "def", "get_urlconf", "(", "default", "=", "None", ")", ":", "thread", "=", "currentThread", "(", ")", "if", "thread", "in", "_urlconfs", ":", "return", "_urlconfs", "[", "thread", "]", "return", "default" ]
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.2/django/core/urlresolvers.py#L388-L396
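`get_urlconf` implements a per-thread override: a module-level dict keyed by the current thread, with a fallback default when no override has been set. A self-contained sketch of the same pattern using the modern `threading.current_thread` spelling (function names here are illustrative; Django's newer versions use `threading.local()` instead):

```python
import threading

# Per-thread overrides, keyed by the thread object itself.
_overrides = {}

def set_value(value):
    """Record an override for the calling thread only."""
    _overrides[threading.current_thread()] = value

def get_value(default=None):
    """Return this thread's override if one was set, else the default."""
    thread = threading.current_thread()
    if thread in _overrides:
        return _overrides[thread]
    return default
```

Because the dict is keyed by thread, an override set in one thread is invisible to every other thread.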
nucleic/enaml
65c2a2a2d765e88f2e1103046680571894bb41ed
enaml/core/code_tracing.py
python
CodeTracer.return_value
(self, value)
Called before the RETURN_VALUE opcode is executed. Parameters ---------- value : object The value that will be returned from the code object.
Called before the RETURN_VALUE opcode is executed.
[ "Called", "before", "the", "RETURN_VALUE", "opcode", "is", "executed", "." ]
def return_value(self, value):
    """ Called before the RETURN_VALUE opcode is executed.

    Parameters
    ----------
    value : object
        The value that will be returned from the code object.

    """
    pass
[ "def", "return_value", "(", "self", ",", "value", ")", ":", "pass" ]
https://github.com/nucleic/enaml/blob/65c2a2a2d765e88f2e1103046680571894bb41ed/enaml/core/code_tracing.py#L107-L116
jaywink/socialhome
c3178b044936a5c57a502ab6ed2b4f43c8e076ca
socialhome/content/management/commands/create_dummy_content.py
python
Command.handle
(self, *args, **options)
Create dummy content.
Create dummy content.
[ "Create", "dummy", "content", "." ]
def handle(self, *args, **options):
    """Create dummy content."""
    for i in range(options["amount"]):
        content = PublicContentFactory()
        print("Created content: %s" % content)
    user_email = options["user_email"]
    if user_email is not None:
        # filter().first() returns None when missing; objects.get() would raise
        user = User.objects.filter(email=user_email).first()
        if user is None:
            print("Error: no user with email {0} found".format(user_email))
        else:
            all_profiles = Profile.objects.exclude(uuid=user.profile.uuid)[::1]
            count = len(all_profiles)
            nb_contacts = min(count, 500)
            shuffle(all_profiles)
            user.profile.following.add(*all_profiles[:nb_contacts])
            print("Added %s contacts to %s's followed contacts" % (nb_contacts, user_email))
            shuffle(all_profiles)
            profiles = all_profiles[:nb_contacts]
            for profile in profiles:
                profile.following.add(user.profile)
            print("Added %s contacts to %s's followers" % (nb_contacts, user_email))
[ "def", "handle", "(", "self", ",", "*", "args", ",", "*", "*", "options", ")", ":", "for", "i", "in", "range", "(", "options", "[", "\"amount\"", "]", ")", ":", "content", "=", "PublicContentFactory", "(", ")", "print", "(", "\"Created content: %s\"", ...
https://github.com/jaywink/socialhome/blob/c3178b044936a5c57a502ab6ed2b4f43c8e076ca/socialhome/content/management/commands/create_dummy_content.py#L17-L41
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/lib/django-1.5/django/contrib/gis/db/backends/postgis/operations.py
python
PostGISOperations.postgis_version_tuple
(self)
return (version, major, minor1, minor2)
Returns the PostGIS version as a tuple (version string, major, minor, subminor).
Returns the PostGIS version as a tuple (version string, major, minor, subminor).
[ "Returns", "the", "PostGIS", "version", "as", "a", "tuple", "(", "version", "string", "major", "minor", "subminor", ")", "." ]
def postgis_version_tuple(self):
    """
    Returns the PostGIS version as a tuple (version string, major,
    minor, subminor).
    """
    # Getting the PostGIS version
    version = self.postgis_lib_version()
    m = self.version_regex.match(version)
    if m:
        major = int(m.group('major'))
        minor1 = int(m.group('minor1'))
        minor2 = int(m.group('minor2'))
    else:
        raise Exception('Could not parse PostGIS version string: %s' % version)
    return (version, major, minor1, minor2)
[ "def", "postgis_version_tuple", "(", "self", ")", ":", "# Getting the PostGIS version", "version", "=", "self", ".", "postgis_lib_version", "(", ")", "m", "=", "self", ".", "version_regex", ".", "match", "(", "version", ")", "if", "m", ":", "major", "=", "in...
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.5/django/contrib/gis/db/backends/postgis/operations.py#L438-L454
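The method above delegates the actual parsing to a regex with named groups. A standalone sketch of the same parse, returning the same `(version, major, minor1, minor2)` shape; the regex here is an assumption about what `self.version_regex` roughly looks like, not Django's actual pattern:

```python
import re

# Named groups mirror the m.group('major') / 'minor1' / 'minor2' lookups above.
VERSION_RE = re.compile(r'^(?P<major>\d+)\.(?P<minor1>\d+)\.(?P<minor2>\d+)')

def version_tuple(version):
    """Parse 'X.Y.Z' into (version string, X, Y, Z), raising on bad input."""
    m = VERSION_RE.match(version)
    if not m:
        raise ValueError('Could not parse version string: %s' % version)
    return (version,
            int(m.group('major')),
            int(m.group('minor1')),
            int(m.group('minor2')))
```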