def is_canonical_address(address: Any) -> bool:
    if not is_bytes(address) or len(address) != 20:
        return False
    return address == to_canonical_address(address)
Returns `True` if the `address` is an address in its canonical form.
def irfftn(a, s=None, axes=None, norm=None):
    output = mkl_fft.irfftn_numpy(a, s, axes)
    if _unitary(norm):
        output *= sqrt(_tot_size(output, axes))
    return output
Compute the inverse of the N-dimensional FFT of real input.

This function computes the inverse of the N-dimensional discrete
Fourier Transform for real input over any number of axes in an
M-dimensional array by means of the Fast Fourier Transform (FFT). In
other words, ``irfftn(rfftn(a), a.shape) == a`` to within numerical
accuracy. (The ``a.shape`` is necessary like ``len(a)`` is for `irfft`,
and for the same reason.)

The input should be ordered in the same way as is returned by `rfftn`,
i.e. as for `irfft` for the final transformation axis, and as for
`ifftn` along all the other axes.

Parameters
----------
a : array_like
    Input array.
s : sequence of ints, optional
    Shape (length of each transformed axis) of the output
    (``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.). `s` is also
    the number of input points used along this axis, except for the
    last axis, where ``s[-1]//2+1`` points of the input are used.
    Along any axis, if the shape indicated by `s` is smaller than that
    of the input, the input is cropped. If it is larger, the input is
    padded with zeros. If `s` is not given, the shape of the input
    along the axes specified by `axes` is used.
axes : sequence of ints, optional
    Axes over which to compute the inverse FFT. If not given, the last
    `len(s)` axes are used, or all axes if `s` is also not specified.
    Repeated indices in `axes` means that the inverse transform over
    that axis is performed multiple times.
norm : {None, "ortho"}, optional
    .. versionadded:: 1.10.0

    Normalization mode (see `numpy.fft`). Default is None.

Returns
-------
out : ndarray
    The truncated or zero-padded input, transformed along the axes
    indicated by `axes`, or by a combination of `s` or `a`,
    as explained in the parameters section above.
    The length of each transformed axis is as given by the
    corresponding element of `s`, or the length of the input in every
    axis except for the last one if `s` is not given. In the final
    transformed axis the length of the output when `s` is not given is
    ``2*(m-1)`` where ``m`` is the length of the final transformed axis
    of the input. To get an odd number of output points in the final
    axis, `s` must be specified.

Raises
------
ValueError
    If `s` and `axes` have different length.
IndexError
    If an element of `axes` is larger than the number of axes of `a`.

See Also
--------
rfftn : The forward n-dimensional FFT of real input,
        of which `ifftn` is the inverse.
fft : The one-dimensional FFT, with definitions and conventions used.
irfft : The inverse of the one-dimensional FFT of real input.
irfft2 : The inverse of the two-dimensional FFT of real input.

Notes
-----
See `fft` for definitions and conventions used.

See `rfft` for definitions and conventions used for real input.

Examples
--------
>>> a = np.zeros((3, 2, 2))
>>> a[0, 0, 0] = 3 * 2 * 2
>>> np.fft.irfftn(a)
array([[[ 1.,  1.],
        [ 1.,  1.]],
       [[ 1.,  1.],
        [ 1.,  1.]],
       [[ 1.,  1.],
        [ 1.,  1.]]])
def get_examples(examples_dir="examples/"):
    all_files = os.listdir(examples_dir)
    python_files = [f for f in all_files if is_python_file(f)]
    basenames = [remove_suffix(f) for f in python_files]
    modules = [import_module(module) for module in pathify(basenames)]
    return [
        module for module in modules
        if getattr(module, 'app', None) is not None
    ]
All example modules
def gdbgui():
    interpreter = "lldb" if app.config["LLDB"] else "gdb"
    gdbpid = request.args.get("gdbpid", 0)
    initial_gdb_user_command = request.args.get("initial_gdb_user_command", "")
    add_csrf_token_to_session()
    THEMES = ["monokai", "light"]
    initial_data = {
        "csrf_token": session["csrf_token"],
        "gdbgui_version": __version__,
        "gdbpid": gdbpid,
        "initial_gdb_user_command": initial_gdb_user_command,
        "interpreter": interpreter,
        "initial_binary_and_args": app.config["initial_binary_and_args"],
        "p": pbkdf2_hex(str(app.config.get("l")), "Feo8CJol")
             if app.config.get("l") else "",
        "project_home": app.config["project_home"],
        "remap_sources": app.config["remap_sources"],
        "rr": app.config["rr"],
        "show_gdbgui_upgrades": app.config["show_gdbgui_upgrades"],
        "themes": THEMES,
        "signals": SIGNAL_NAME_TO_OBJ,
        "using_windows": USING_WINDOWS,
    }
    return render_template(
        "gdbgui.html",
        version=__version__,
        debug=app.debug,
        interpreter=interpreter,
        initial_data=initial_data,
        themes=THEMES,
    )
Render the main gdbgui interface
def anchored_pairs(self, anchor):
    pairs = OrderedDict()
    for term in self.keys:
        score = self.get_pair(anchor, term)
        if score:
            pairs[term] = score
    return utils.sort_dict(pairs)
Get distances between an anchor term and all other terms.

Args:
    anchor (str): The anchor term.

Returns:
    OrderedDict: The distances, in descending order.
def zscore(self, key, member):
    fut = self.execute(b'ZSCORE', key, member)
    return wait_convert(fut, optional_int_or_float)
Get the score associated with the given member in a sorted set.
def output(self, result):
    if self.returns:
        errors = None
        try:
            return self._adapt_result(result)
        except AdaptErrors as e:
            errors = e.errors
        except AdaptError as e:
            errors = [e]
        raise AnticipateErrors(
            message='Return value %r does not match anticipated type %r'
                    % (type(result), self.returns),
            errors=errors)
    elif self.strict:
        if result is not None:
            raise AnticipateErrors(
                message='Return value %r does not match anticipated value '
                        'of None' % type(result),
                errors=None)
        return None
    else:
        return result
Adapts the result of a function based on the returns definition.
def stop_patching(name=None):
    global _patchers, _mocks
    if not _patchers:
        warnings.warn('stop_patching() called again, already stopped')
    if name is not None:
        items = [(name, _patchers[name])]
    else:
        items = list(_patchers.items())
    for name, patcher in items:
        patcher.stop()
        del _patchers[name]
        del _mocks[name]
Finish the mocking initiated by `start_patching`

Kwargs:
    name (Optional[str]): if given, only unpatch the specified path,
        else all defined default mocks
def lists_submissions(self, date, course_id, grader_id, assignment_id):
    path = {}
    data = {}
    params = {}
    path["course_id"] = course_id
    path["date"] = date
    path["grader_id"] = grader_id
    path["assignment_id"] = assignment_id
    self.logger.debug(
        "GET /api/v1/courses/{course_id}/gradebook_history/{date}/graders/{grader_id}/assignments/{assignment_id}/submissions with query params: {params} and form data: {data}".format(
            params=params, data=data, **path))
    return self.generic_request(
        "GET",
        "/api/v1/courses/{course_id}/gradebook_history/{date}/graders/{grader_id}/assignments/{assignment_id}/submissions".format(**path),
        data=data, params=params, all_pages=True)
Lists submissions. Gives a nested list of submission versions
def poisson(lam=1, shape=_Null, dtype=_Null, **kwargs):
    return _random_helper(_internal._random_poisson,
                          _internal._sample_poisson,
                          [lam], shape, dtype, kwargs)
Draw random samples from a Poisson distribution.

Samples are distributed according to a Poisson distribution parametrized
by *lambda* (rate). Samples will always be returned as a floating point
data type.

Parameters
----------
lam : float or Symbol, optional
    Expectation of interval, should be >= 0.
shape : int or tuple of ints, optional
    The number of samples to draw. If shape is, e.g., `(m, n)` and `lam` is
    a scalar, output shape will be `(m, n)`. If `lam`
    is a Symbol with shape, e.g., `(x, y)`, then output will have shape
    `(x, y, m, n)`, where `m*n` samples are drawn for each entry in `lam`.
dtype : {'float16', 'float32', 'float64'}, optional
    Data type of output samples. Default is 'float32'

Returns
-------
Symbol
    If input `shape` has dimensions, e.g., `(m, n)`, and `lam` is
    a scalar, output shape will be `(m, n)`. If `lam`
    is a Symbol with shape, e.g., `(x, y)`, then output will have shape
    `(x, y, m, n)`, where `m*n` samples are drawn for each entry in `lam`.
def _propagate_options(self, change):
    "Set the values and labels, and select the first option if we aren't initializing"
    options = self._options_full
    self.set_trait('_options_labels', tuple(i[0] for i in options))
    self._options_values = tuple(i[1] for i in options)
    if self._initializing_traits_ is not True:
        if len(options) > 0:
            if self.index == 0:
                self._notify_trait('index', 0, 0)
            else:
                self.index = 0
        else:
            self.index = None
Set the values and labels, and select the first option if we aren't initializing
def list_append(self, key, value, create=False, **kwargs):
    op = SD.array_append('', value)
    sdres = self.mutate_in(key, op, **kwargs)
    return self._wrap_dsop(sdres)
Add an item to the end of a list.

:param str key: The document ID of the list
:param value: The value to append
:param create: Whether the list should be created if it does not exist.
    Note that this option only works on servers >= 4.6
:param kwargs: Additional arguments to :meth:`mutate_in`
:return: :class:`~.OperationResult`.
:raise: :cb_exc:`NotFoundError` if the document does not exist.
    and `create` was not specified.

example::

    cb.list_append('a_list', 'hello')
    cb.list_append('a_list', 'world')

.. seealso:: :meth:`map_add`
def buildCliString(self):
    config = self.navbar.getActiveConfig()
    group = self.buildSpec['widgets'][self.navbar.getSelectedGroup()]
    positional = config.getPositionalArgs()
    optional = config.getOptionalArgs()
    # Build the command once instead of twice (once for the debug print,
    # once for the return value).
    cli_string = cli.buildCliString(
        self.buildSpec['target'],
        group['command'],
        positional,
        optional
    )
    print(cli_string)
    return cli_string
Collect all of the required information from the config screen and build a CLI string which can be used to invoke the client program
def set_template(path, template, context, defaults, saltenv='base', **kwargs):
    path = __salt__['cp.get_template'](
        path=path,
        dest=None,
        template=template,
        saltenv=saltenv,
        context=context,
        defaults=defaults,
        **kwargs)
    return set_file(path, saltenv, **kwargs)
Set answers to debconf questions from a template.

path
    location of the file containing the package selections
template
    template format
context
    variables to add to the template environment
defaults
    default values for the template environment

CLI Example:

.. code-block:: bash

    salt '*' debconf.set_template salt://pathto/pkg.selections.jinja jinja None None
def Validate(self, type_names):
    errs = [n for n in self._RDFTypes(type_names) if not self._GetClass(n)]
    if errs:
        raise DefinitionError("Undefined RDF Types: %s" % ",".join(errs))
Filtered types need to be RDFValues.
def get_view_url(self, view_name, user, url_kwargs=None,
                 context_kwargs=None, follow_parent=True,
                 check_permissions=True):
    view, url_name = self.get_initialized_view_and_name(
        view_name, follow_parent=follow_parent)
    if isinstance(view, URLAlias):
        view_name = view.get_view_name(view_name)
        bundle = view.get_bundle(self, url_kwargs, context_kwargs)
        if bundle and isinstance(bundle, Bundle):
            return bundle.get_view_url(view_name, user,
                                       url_kwargs=url_kwargs,
                                       context_kwargs=context_kwargs,
                                       follow_parent=follow_parent,
                                       check_permissions=check_permissions)
    elif view:
        if not url_kwargs:
            url_kwargs = {}
        url_kwargs = view.get_url_kwargs(context_kwargs, **url_kwargs)
        view.kwargs = url_kwargs
        if check_permissions and not view.can_view(user):
            return None
        url = reverse("admin:%s" % url_name, kwargs=url_kwargs)
        return url
Returns the url for a given view_name. If the view isn't found
or the user does not have permission None is returned.

A NoReverseMatch error may be raised if the view was unable
to find the correct keyword arguments for the reverse function
from the given url_kwargs and context_kwargs.

:param view_name: The name of the view that you want.
:param user: The user who is requesting the view
:param url_kwargs: The url keyword arguments that came \
with the request object. The view itself is responsible \
to remove arguments that would not be part of a normal match \
for that view. This is done by calling the `get_url_kwargs` \
method on the view.
:param context_kwargs: Extra arguments that will be passed \
to the view for consideration in the final keyword arguments \
for reverse.
:param follow_parent: If we encounter a parent reference should \
we follow it. Defaults to True.
:param check_permissions: Run permissions checks. Defaults to True.
def construct_formatdb_cmd(filename, outdir,
                           blastdb_exe=pyani_config.FORMATDB_DEFAULT):
    title = os.path.splitext(os.path.split(filename)[-1])[0]
    newfilename = os.path.join(outdir, os.path.split(filename)[-1])
    shutil.copy(filename, newfilename)
    return (
        "{0} -p F -i {1} -t {2}".format(blastdb_exe, newfilename, title),
        newfilename,
    )
Returns a single formatdb command.

- filename - input filename
- blastdb_exe - path to the formatdb executable
def output_deployment_status(awsclient, deployment_id, iterations=100):
    counter = 0
    steady_states = ['Succeeded', 'Failed', 'Stopped']
    client_codedeploy = awsclient.get_client('codedeploy')

    while counter <= iterations:
        response = client_codedeploy.get_deployment(deploymentId=deployment_id)
        status = response['deploymentInfo']['status']

        if status not in steady_states:
            log.info('Deployment: %s - State: %s' % (deployment_id, status))
            time.sleep(10)
            counter += 1  # bound the polling loop so it cannot spin forever
        elif status == 'Failed':
            log.info(
                colored.red('Deployment: {} failed: {}'.format(
                    deployment_id,
                    json.dumps(response['deploymentInfo']['errorInformation'],
                               indent=2)
                ))
            )
            return 1
        else:
            log.info('Deployment: %s - State: %s' % (deployment_id, status))
            break
    return 0
Wait until a deployment is in a steady state and output information.

:param deployment_id:
:param iterations:
:return: exit_code
def compile(self, **kwargs):
    code = compile(str(self), "<string>", "exec")
    global_dict = dict(self._deps)
    global_dict.update(kwargs)
    _compat.exec_(code, global_dict)
    return global_dict
Execute the python code and returns the global dict. kwargs can contain extra dependencies that get only used at compile time.
def add_gripper(self, arm_name, gripper):
    if arm_name in self.grippers:
        raise ValueError("Attempts to add multiple grippers to one body")
    arm_subtree = self.worldbody.find(".//body[@name='{}']".format(arm_name))
    for actuator in gripper.actuator:
        if actuator.get("name") is None:
            raise XMLError("Actuator has no name")
        if not actuator.get("name").startswith("gripper"):
            raise XMLError(
                "Actuator name {} does not have prefix 'gripper'".format(
                    actuator.get("name")
                )
            )
    for body in gripper.worldbody:
        arm_subtree.append(body)
    self.merge(gripper, merge_body=False)
    self.grippers[arm_name] = gripper
Mounts gripper to arm.

Throws error if robot already has a gripper or gripper type is incorrect.

Args:
    arm_name (str): name of arm mount
    gripper (MujocoGripper instance): gripper MJCF model
def run_parser_plugins(self, url_data, pagetype):
    run_plugins(self.parser_plugins, url_data,
                stop_after_match=True, pagetype=pagetype)
Run parser plugins for given pagetype.
def executemany(self, sql, *params):
    fut = self._run_operation(self._impl.executemany, sql, *params)
    return fut
Prepare a database query or command and then execute it against
all parameter sequences found in the sequence seq_of_params.

:param sql: the SQL statement to execute with optional ? parameters
:param params: sequence parameters for the markers in the SQL.
def pack_and_batch(dataset, batch_size, length, pack=True):
    if pack:
        dataset = pack_dataset(dataset, length=length)
    dataset = dataset.map(
        functools.partial(trim_and_pad_all_features, length=length),
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.batch(batch_size, drop_remainder=False)
    dataset = dataset.map(
        functools.partial(trim_and_pad_all_features, length=batch_size),
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.map(
        lambda x: {k: tf.reshape(v, (batch_size, length))
                   for k, v in x.items()},
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.prefetch(100)
    return dataset
Create a tf.data.Dataset which emits training batches.

The input dataset emits feature-dictionaries where each feature is a vector
of integers ending in EOS=1

The tensors in the returned tf.data.Dataset have shape
[batch_size, length]. Zeros indicate padding.

length indicates the length of the emitted examples. Examples with
inputs/targets longer than length get truncated.

TODO(noam): for text2self problems, we should just chop too-long sequences
into multiple parts and train on all of them.

If pack=False, then each emitted example will contain one example emitted by
load_internal().

If pack=True, then multiple examples emitted by load_internal() are
concatenated to form one combined example with the given length. See
comments in the function pack_dataset().

batch_size indicates the number of (combined) examples per batch, across
all cores.

Args:
    dataset: a tf.data.Dataset
    batch_size: an integer
    length: an integer
    pack: a boolean

Returns:
    a tf.data.Dataset where all features have fixed shape [batch, length].
def flo(string):
    callers_locals = {}
    frame = inspect.currentframe()
    try:
        outerframe = frame.f_back
        callers_locals = outerframe.f_locals
    finally:
        del frame
    return string.format(**callers_locals)
Return the string given by param formatted with the callers locals.
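A runnable sketch of how `flo` behaves: the caller's local variables feed the format string. The `greet` wrapper here is illustrative, not part of the original module.

```python
import inspect

def flo(string):
    # Format `string` with the local variables of the calling frame.
    callers_locals = {}
    frame = inspect.currentframe()
    try:
        outerframe = frame.f_back
        callers_locals = outerframe.f_locals
    finally:
        del frame  # break the reference cycle frames can create
    return string.format(**callers_locals)

def greet():
    name = 'world'
    return flo('Hello, {name}!')  # `name` is looked up in greet()'s locals
```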
def as_set(self, decode=False):
    items = self.database.smembers(self.key)
    return set(_decode(item) for item in items) if decode else items
Return a Python set containing all the items in the collection.
def SvcStop(self) -> None:
    self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
    win32event.SetEvent(self.h_stop_event)
Called when the service is being shut down.
def cub200_iterator(data_path, batch_k, batch_size, data_shape):
    return (CUB200Iter(data_path, batch_k, batch_size, data_shape,
                       is_train=True),
            CUB200Iter(data_path, batch_k, batch_size, data_shape,
                       is_train=False))
Return training and testing iterator for the CUB200-2011 dataset.
def save(self):
    d = dict(self)
    old_dict = d.copy()
    _id = self.collection.save(d)
    self._id = _id
    self.on_save(old_dict)
    return self._id
Save model object to database.
def assert_numbers_almost_equal(self, actual_val, expected_val,
                                allowed_delta=0.0001,
                                failure_message='Expected numbers to be '
                                'within {} of each other: "{}" and "{}"'):
    assertion = lambda: abs(expected_val - actual_val) <= allowed_delta
    self.webdriver_assert(
        assertion,
        unicode(failure_message).format(allowed_delta, actual_val,
                                        expected_val))
Asserts that two numbers are within an allowed delta of each other
def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True):
    if regex:
        if not case:
            flags |= re.IGNORECASE
        regex = re.compile(pat, flags=flags)
        if regex.groups > 0:
            warnings.warn("This pattern has match groups. To actually get the"
                          " groups, use str.extract.", UserWarning,
                          stacklevel=3)
        f = lambda x: bool(regex.search(x))
    else:
        if case:
            f = lambda x: pat in x
        else:
            upper_pat = pat.upper()
            f = lambda x: upper_pat in x
            uppered = _na_map(lambda x: x.upper(), arr)
            return _na_map(f, uppered, na, dtype=bool)
    return _na_map(f, arr, na, dtype=bool)
Test if pattern or regex is contained within a string of a Series or Index.

Return boolean Series or Index based on whether a given pattern or regex is
contained within a string of a Series or Index.

Parameters
----------
pat : str
    Character sequence or regular expression.
case : bool, default True
    If True, case sensitive.
flags : int, default 0 (no flags)
    Flags to pass through to the re module, e.g. re.IGNORECASE.
na : default NaN
    Fill value for missing values.
regex : bool, default True
    If True, assumes the pat is a regular expression.

    If False, treats the pat as a literal string.

Returns
-------
Series or Index of boolean values
    A Series or Index of boolean values indicating whether the
    given pattern is contained within the string of each element
    of the Series or Index.

See Also
--------
match : Analogous, but stricter, relying on re.match instead of re.search.
Series.str.startswith : Test if the start of each string element matches a
    pattern.
Series.str.endswith : Same as startswith, but tests the end of string.

Examples
--------
Returning a Series of booleans using only a literal pattern.

>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0    False
1     True
2    False
3    False
4      NaN
dtype: object

Returning an Index of booleans using only a literal pattern.

>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')

Specifying case sensitivity using `case`.

>>> s1.str.contains('oG', case=True, regex=True)
0    False
1    False
2    False
3    False
4      NaN
dtype: object

Specifying `na` to be `False` instead of `NaN` replaces NaN values
with `False`. If Series or Index does not contain NaN values
the resultant dtype will be `bool`, otherwise, an `object` dtype.

>>> s1.str.contains('og', na=False, regex=True)
0    False
1     True
2    False
3    False
4    False
dtype: bool

Returning 'house' or 'dog' when either expression occurs in a string.

>>> s1.str.contains('house|dog', regex=True)
0    False
1     True
2     True
3    False
4      NaN
dtype: object

Ignoring case sensitivity using `flags` with regex.

>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0    False
1    False
2     True
3    False
4      NaN
dtype: object

Returning any digit using regular expression.

>>> s1.str.contains('\\d', regex=True)
0    False
1    False
2    False
3     True
4      NaN
dtype: object

Ensure `pat` is a not a literal pattern when `regex` is set to True.
Note in the following example one might expect only `s2[1]` and `s2[3]`
to return `True`. However, '.0' as a regex matches any character followed
by a 0.

>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0     True
1     True
2    False
3     True
4    False
dtype: bool
def disconnect_async(self, conn_id, callback):
    future = self._loop.launch_coroutine(self._adapter.disconnect(conn_id))
    future.add_done_callback(
        lambda x: self._callback_future(conn_id, x, callback))
Asynchronously disconnect from a device.
def digest_chunks(chunks, algorithms=(hashlib.md5, hashlib.sha1)):
    hashes = [algorithm() for algorithm in algorithms]
    for chunk in chunks:
        for h in hashes:
            h.update(chunk)
    return [_b64encode_to_str(h.digest()) for h in hashes]
Returns base64 representations of the given digest algorithms computed
from the chunks of data.
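A self-contained sketch of the helper above. The `_b64encode_to_str` name is assumed to simply base64-encode the digest bytes and decode to text; feeding the data in chunks yields the same digests as hashing it whole.

```python
import base64
import hashlib

def _b64encode_to_str(data):
    # Assumed helper: base64-encode raw digest bytes and return text.
    return base64.b64encode(data).decode('ascii')

def digest_chunks(chunks, algorithms=(hashlib.md5, hashlib.sha1)):
    hashes = [algorithm() for algorithm in algorithms]
    for chunk in chunks:
        for h in hashes:
            h.update(chunk)  # every algorithm sees every chunk, in order
    return [_b64encode_to_str(h.digest()) for h in hashes]

md5_b64, sha1_b64 = digest_chunks([b'hello ', b'world'])
```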
def getCompleteFile(self, basepath):
    dirname = getDirname(self.getName())
    return os.path.join(basepath, dirname, "complete.txt")
Get filename indicating all comics are downloaded.
def update_url(url, params):
    if not isinstance(url, bytes):
        url = url.encode('utf-8')
    for key, value in list(params.items()):
        if not isinstance(key, bytes):
            del params[key]
            key = key.encode('utf-8')
        if not isinstance(value, bytes):
            value = value.encode('utf-8')
        params[key] = value
    url_parts = list(urlparse(url))
    query = dict(parse_qsl(url_parts[4]))
    query.update(params)
    query = list(query.items())
    query.sort()
    url_query = urlencode(query)
    if not isinstance(url_query, bytes):
        url_query = url_query.encode("utf-8")
    url_parts[4] = url_query
    return urlunparse(url_parts).decode('utf-8')
update parameters using ``params`` in the ``url`` query string

:param url: An URL possibly with a querystring
:type url: :obj:`unicode` or :obj:`str`
:param dict params: A dictionary of parameters for updating the url
    querystring
:return: The URL with an updated querystring
:rtype: unicode
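A text-only sketch of the same idea using the Python 3 stdlib directly, without the bytes juggling above (the `update_query` name is illustrative, not the module's API):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def update_query(url, params):
    # Merge `params` into the URL's query string, sorting keys for a
    # deterministic result, mirroring the sort in update_url above.
    parts = list(urlparse(url))
    query = dict(parse_qsl(parts[4]))
    query.update(params)
    parts[4] = urlencode(sorted(query.items()))
    return urlunparse(parts)

# Existing key 'a' is overwritten, 'b' is appended.
updated = update_query('http://example.com/path?a=1', {'b': '2', 'a': '9'})
```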
def set_names(self, names, level=None, inplace=False):
    if level is not None and not isinstance(self, ABCMultiIndex):
        raise ValueError('Level must be None for non-MultiIndex')
    if level is not None and not is_list_like(level) and is_list_like(names):
        msg = "Names must be a string when a single level is provided."
        raise TypeError(msg)
    if not is_list_like(names) and level is None and self.nlevels > 1:
        raise TypeError("Must pass list-like as `names`.")
    if not is_list_like(names):
        names = [names]
    if level is not None and not is_list_like(level):
        level = [level]
    if inplace:
        idx = self
    else:
        idx = self._shallow_copy()
    idx._set_names(names, level=level)
    if not inplace:
        return idx
Set Index or MultiIndex name.

Able to set new names partially and by level.

Parameters
----------
names : label or list of label
    Name(s) to set.
level : int, label or list of int or label, optional
    If the index is a MultiIndex, level(s) to set (None for all
    levels). Otherwise level must be None.
inplace : bool, default False
    Modifies the object directly, instead of creating a new Index or
    MultiIndex.

Returns
-------
Index
    The same type as the caller or None if inplace is True.

See Also
--------
Index.rename : Able to set new names without level.

Examples
--------
>>> idx = pd.Index([1, 2, 3, 4])
>>> idx
Int64Index([1, 2, 3, 4], dtype='int64')
>>> idx.set_names('quarter')
Int64Index([1, 2, 3, 4], dtype='int64', name='quarter')

>>> idx = pd.MultiIndex.from_product([['python', 'cobra'],
...                                   [2018, 2019]])
>>> idx
MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
           codes=[[1, 1, 0, 0], [0, 1, 0, 1]])
>>> idx.set_names(['kind', 'year'], inplace=True)
>>> idx
MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
           codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
           names=['kind', 'year'])
>>> idx.set_names('species', level=0)
MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
           codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
           names=['species', 'year'])
def guests_get_nic_info(self, userid=None, nic_id=None, vswitch=None):
    action = "get nic information"
    with zvmutils.log_and_reraise_sdkbase_error(action):
        return self._networkops.get_nic_info(userid=userid,
                                             nic_id=nic_id,
                                             vswitch=vswitch)
Retrieve nic information in the network database according to the
requirements, the nic information will include the guest name, nic device
number, vswitch name that the nic is coupled to, nic identifier and the
comments.

:param str userid: the user id of the vm
:param str nic_id: nic identifier
:param str vswitch: the name of the vswitch

:returns: list describing nic information, format is
    [
    (userid, interface, vswitch, nic_id, comments),
    (userid, interface, vswitch, nic_id, comments)
    ], such as
    [
    ('VM01', '1000', 'xcatvsw2', '1111-2222', None),
    ('VM02', '2000', 'xcatvsw3', None, None)
    ]
:rtype: list
def info_string(self, size=None, message='', frame=-1):
    info = []
    if size is not None:
        info.append('Size: {1}x{0}'.format(*size))
    elif self.size is not None:
        info.append('Size: {1}x{0}'.format(*self.size))
    if frame >= 0:
        info.append('Frame: {}'.format(frame))
    if message != '':
        info.append('{}'.format(message))
    return ' '.join(info)
Returns information about the stream.

Generates a string containing size, frame number, and info messages.
Omits unnecessary information (e.g. empty messages and frame -1).

This method is primarily used to update the suptitle of the plot figure.

Returns:
    An info string.
def _factory(importname, base_class_type, path=None, *args, **kargs):
    def is_base_class(item):
        return isclass(item) and item.__module__ == importname

    if path:
        sys.path.append(path)
        absolute_path = os.path.join(path, importname) + '.py'
        module = imp.load_source(importname, absolute_path)
    else:
        module = import_module(importname)
    clsmembers = getmembers(module, is_base_class)
    if not len(clsmembers):
        raise ValueError('Found no matching class in %s.' % importname)
    else:
        cls = clsmembers[0][1]
    return cls(*args, **kargs)
Load a module of a given base class type

Parameters
----------
importname : string
    Name of the module, e.g. converter
base_class_type : class type
    E.g. converter
path : string
    Absolute path of the module. Needed for extensions.
    If not given the module is in the online_monitor package.
*args, **kargs :
    Arguments to pass to the object init

Returns
-------
Object of given base class type
def _clone(self):
    instance = super(Bungiesearch, self)._clone()
    instance._raw_results_only = self._raw_results_only
    return instance
Must clone additional fields to those cloned by elasticsearch-dsl-py.
def _update_label(self, label, array):
    maximum = float(numpy.max(array))
    mean = float(numpy.mean(array))
    median = float(numpy.median(array))
    minimum = float(numpy.min(array))
    stdev = float(numpy.std(array, ddof=1))

    encoder = pvl.encoder.PDSLabelEncoder
    serial_label = pvl.dumps(label, cls=encoder)
    label_sz = len(serial_label)
    image_pointer = int(label_sz / label['RECORD_BYTES']) + 1

    label['^IMAGE'] = image_pointer + 1
    label['LABEL_RECORDS'] = image_pointer
    label['IMAGE']['MEAN'] = mean
    label['IMAGE']['MAXIMUM'] = maximum
    label['IMAGE']['MEDIAN'] = median
    label['IMAGE']['MINIMUM'] = minimum
    label['IMAGE']['STANDARD_DEVIATION'] = stdev
    return label
Update PDS3 label for NumPy Array.

It is called by '_create_label' to update label values such as,

- ^IMAGE, RECORD_BYTES
- STANDARD_DEVIATION
- MAXIMUM, MINIMUM
- MEDIAN, MEAN

Returns
-------
Updated label module for the NumPy array.

Usage: self.label = self._update_label(label, array)
def gen_opt_str(ser_rec: pd.Series) -> str:
    name = ser_rec.name
    indent = r' '
    str_opt = f'.. option:: {name}' + '\n\n'
    for spec in ser_rec.sort_index().index:
        str_opt += indent + f':{spec}:' + '\n'
        spec_content = ser_rec[spec]
        str_opt += indent + indent + f'{spec_content}' + '\n'
    return str_opt
generate rst option string

Parameters
----------
ser_rec : pd.Series
    record for specifications

Returns
-------
str
    rst string
def get(self, name, default=_MISSING):
    name = self._convert_name(name)
    if name not in self._fields:
        if default is _MISSING:
            default = self._default_value(name)
        return default
    if name in _UNICODEFIELDS:
        value = self._fields[name]
        return value
    elif name in _LISTFIELDS:
        value = self._fields[name]
        if value is None:
            return []
        res = []
        for val in value:
            if name not in _LISTTUPLEFIELDS:
                res.append(val)
            else:
                res.append((val[0], val[1]))
        return res
    elif name in _ELEMENTSFIELD:
        value = self._fields[name]
        if isinstance(value, string_types):
            return value.split(',')
    return self._fields[name]
Get a metadata field.
def detect_port(port):
    socket_test = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        socket_test.connect(('127.0.0.1', int(port)))
        socket_test.close()
        return True
    except OSError:  # connection refused: nothing is listening on the port
        return False
Detect if the port is used
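A sketch of how this check behaves: bind a throwaway listener to get a port that is definitely in use, then probe it. (The original's bare `except` is narrowed to `OSError` here.)

```python
import socket

def detect_port(port):
    # Try a TCP connect to localhost; success means something is listening.
    socket_test = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        socket_test.connect(('127.0.0.1', int(port)))
        socket_test.close()
        return True
    except OSError:
        return False

# Bind a listener on an OS-assigned free port, then probe it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
used_port = server.getsockname()[1]
in_use = detect_port(used_port)
server.close()
```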
def contribute_to_class(self, cls, name, virtual_only=False):
    super(RegexField, self).contribute_to_class(cls, name, virtual_only)
    setattr(cls, name, CastOnAssignDescriptor(self))
Cast to the correct value every time the attribute is assigned.
def get_data(self):
    url = self.build_url()
    self.locationApiData = requests.get(url)
    if self.locationApiData.status_code != 200:
        # raise_for_status() itself raises HTTPError for error responses
        self.locationApiData.raise_for_status()
Gets data from the built url
def migrateProvPre010(self, newslab):
    did_migrate = self._migrate_db_pre010('prov', newslab)
    if not did_migrate:
        return

    self._migrate_db_pre010('provs', newslab)
Check for any pre-010 provstacks and migrate those to the new slab.
def parse(source, remove_comments=True, **kw):
    return ElementTree.parse(source, SourceLineParser(), **kw)
Thin wrapper around ElementTree.parse
def _set_cell_attr(self, selection, table, attr):
    post_command_event(self.main_window, self.ContentChangedMsg)
    if selection is not None:
        self.code_array.cell_attributes.append((selection, table, attr))
Sets cell attr for key cell and mark grid content as changed

Parameters
----------
attr: dict
\tContains cell attribute keys
\tkeys in ["borderwidth_bottom", "borderwidth_right",
\t"bordercolor_bottom", "bordercolor_right",
\t"bgcolor", "textfont",
\t"pointsize", "fontweight", "fontstyle", "textcolor", "underline",
\t"strikethrough", "angle", "column-width", "row-height",
\t"vertical_align", "justification", "frozen", "merge_area"]
def memmap_array(self, dtype, shape, offset=0, mode='r', order='C'):
    if not self.is_file:
        raise ValueError('Cannot memory-map file without fileno')
    return numpy.memmap(self._fh, dtype=dtype, mode=mode,
                        offset=self._offset + offset,
                        shape=shape, order=order)
Return numpy.memmap of data stored in file.
def _piecewise_learning_rate(step, boundaries, values):
    values = [1.0] + values
    boundaries = [float(x) for x in boundaries]
    return tf.train.piecewise_constant(
        step, boundaries, values, name="piecewise_lr")
Scale learning rate according to the given schedule. Multipliers are not cumulative. Args: step: global step boundaries: List of steps to transition on. values: Multiplier to apply at each boundary transition. Returns: Scaled value for the learning rate.
def parse_pgurl(self, url):
    parsed = urlsplit(url)
    return {
        'user': parsed.username,
        'password': parsed.password,
        'database': parsed.path.lstrip('/'),
        'host': parsed.hostname,
        'port': parsed.port or 5432,
    }
Given a Postgres url, return a dict with keys for user, password, host, port, and database.
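A standalone version of the same parsing can be sketched with the standard library's `urlsplit`, which exposes `username`, `password`, `hostname`, and `port` for any scheme:

```python
from urllib.parse import urlsplit

def parse_pgurl(url):
    """Split a postgres:// URL into connection keyword arguments."""
    parsed = urlsplit(url)
    return {
        'user': parsed.username,
        'password': parsed.password,
        'database': parsed.path.lstrip('/'),
        'host': parsed.hostname,
        'port': parsed.port or 5432,  # Postgres default when the URL omits it
    }
```

For example, `parse_pgurl("postgres://alice:s3cret@db.example.com:6432/mydb")` yields user `alice`, host `db.example.com`, port `6432`, and database `mydb`.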
def S_isothermal_pipe_eccentric_to_isothermal_pipe(D1, D2, Z, L=1.):
    return 2.*pi*L/acosh((D2**2 + D1**2 - 4.*Z**2)/(2.*D1*D2))
Returns the Shape factor `S` of a pipe of constant outer temperature and of outer diameter `D1` which is `Z` distance from the center of another pipe of outer diameter `D2`. Length `L` must be provided, but can be set to 1 to obtain a dimensionless shape factor used in some sources. .. math:: S = \frac{2\pi L}{\cosh^{-1} \left(\frac{D_2^2 + D_1^2 - 4Z^2}{2D_1D_2}\right)} Parameters ---------- D1 : float Diameter of inner pipe, [m] D2 : float Diameter of outer pipe, [m] Z : float Distance from the middle of inner pipe to the center of the other, [m] L : float, optional Length of the pipe, [m] Returns ------- S : float Shape factor [m] Examples -------- >>> S_isothermal_pipe_eccentric_to_isothermal_pipe(.1, .4, .05, 10) 47.709841915608976 Notes ----- L should be much larger than both diameters. D2 should be larger than D1. .. math:: Q = Sk(T_1 - T_2) \\ R_{\text{shape}}=\frac{1}{Sk} References ---------- .. [1] Kreith, Frank, Raj Manglik, and Mark Bohn. Principles of Heat Transfer. Cengage, 2010. .. [2] Bergman, Theodore L., Adrienne S. Lavine, Frank P. Incropera, and David P. DeWitt. Introduction to Heat Transfer. 6E. Hoboken, NJ: Wiley, 2011.
def parse(cls, s, **kwargs):
    pb2_obj = cls._get_cmsg()
    pb2_obj.ParseFromString(s)
    return cls.parse_from_cmessage(pb2_obj, **kwargs)
Parse a bytes object and create a class object. :param bytes s: A bytes object. :return: A class object. :rtype: cls
def mac_address(ip):
    mac = ''
    for line in os.popen('/sbin/ifconfig'):
        s = line.split()
        if len(s) > 3:
            if s[3] == 'HWaddr':
                mac = s[4]
            elif s[2] == ip:
                break
    return {'MAC': mac}
Get the MAC address
def usage(text):
    def decorator(func):
        adaptor = ScriptAdaptor._get_adaptor(func)
        adaptor.usage = text
        return func
    return decorator
Decorator used to specify a usage string for the console script help message. :param text: The text to use for the usage.
def project_drawn(cb, msg):
    stream = cb.streams[0]
    old_data = stream.data
    stream.update(data=msg['data'])
    element = stream.element
    stream.update(data=old_data)
    proj = cb.plot.projection
    if not isinstance(element, _Element) or element.crs == proj:
        return None
    crs = element.crs
    element.crs = proj
    return project(element, projection=crs)
Projects a drawn element to the declared coordinate system
def start_external_service(self, service_name, conf=None):
    if service_name in self._external_services:
        ser = self._external_services[service_name]
        service = ser(service_name, conf=conf, bench=self.bench)
        try:
            service.start()
        except PluginException:
            self.logger.exception("Starting service %s caused an exception!",
                                  service_name)
            raise PluginException(
                "Failed to start external service {}".format(service_name))
        self._started_services.append(service)
        setattr(self.bench, service_name, service)
    else:
        self.logger.warning("Service %s not found. Check your plugins.",
                            service_name)
Start external service service_name with configuration conf.

:param service_name: Name of service to start
:param conf: Optional configuration passed to the service
:return: nothing
def remove_channel(self, channel, *, verbose=True):
    channel_index = wt_kit.get_index(self.channel_names, channel)
    new = list(self.channel_names)
    name = new.pop(channel_index)
    del self[name]
    self.channel_names = new
    if verbose:
        print("channel {0} removed".format(name))
Remove channel from data. Parameters ---------- channel : int or str Channel index or name to remove. verbose : boolean (optional) Toggle talkback. Default is True.
def log(self, ctx='all'):
    path = '%s/%s.log' % (self.path, ctx)
    if os.path.exists(path):
        with open(path, 'r') as f:
            print(f.read())
        return
    validate_path = '%s/validate.log' % self.path
    build_path = '%s/build.log' % self.path
    out = []
    with open(validate_path) as validate_log, open(build_path) as build_log:
        for line in validate_log.readlines():
            out.append(line)
        for line in build_log.readlines():
            out.append(line)
    print(''.join(out))
Gets the build log output. :param ctx: specifies which log message to show, it can be 'validate', 'build' or 'all'.
def read_config():
    if not os.path.isfile(CONFIG):
        with open(CONFIG, "w"):
            pass
    parser = ConfigParser()
    parser.read(CONFIG)
    return parser
Read the configuration file and parse the different environments. Returns: ConfigParser object
def _parse_optional_params(self, oauth_params, req_kwargs):
    params = req_kwargs.get('params', {})
    data = req_kwargs.get('data') or {}
    for oauth_param in OPTIONAL_OAUTH_PARAMS:
        if oauth_param in params:
            oauth_params[oauth_param] = params.pop(oauth_param)
        if oauth_param in data:
            oauth_params[oauth_param] = data.pop(oauth_param)
    if params:
        req_kwargs['params'] = params
    if data:
        req_kwargs['data'] = data
Parses and sets optional OAuth parameters on a request.

:param oauth_params: The dictionary of OAuth parameters to populate.
:type oauth_params: dict
:param req_kwargs: The keyworded arguments passed to the request method.
:type req_kwargs: dict
def include_library(libname):
    if exclude_list:
        if exclude_list.search(libname) and not include_list.search(libname):
            return False
        else:
            return True
    else:
        return True
Check if a dynamic library should be included with application or not.
def mod(x, y, context=None):
    return _apply_function_in_current_context(
        BigFloat,
        mpfr_mod,
        (
            BigFloat._implicit_convert(x),
            BigFloat._implicit_convert(y),
        ),
        context,
    )
Return the remainder of x divided by y, with sign matching that of y.
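For intuition, Python's built-in `%` operator follows the same convention (the result's sign matches the divisor `y`), whereas `math.fmod` follows C semantics (sign of the dividend `x`):

```python
import math

# Remainder conventions for 7 divided by -3:
#   %         -> result sign follows the divisor y   (7 % -3  == -2)
#   math.fmod -> result sign follows the dividend x  (fmod(7, -3) == 1.0)
print(7 % -3)
print(math.fmod(7, -3))
```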
def runSearchContinuousSets(self, request):
    return self.runSearchRequest(
        request, protocol.SearchContinuousSetsRequest,
        protocol.SearchContinuousSetsResponse,
        self.continuousSetsGenerator)
Returns a SearchContinuousSetsResponse for the specified SearchContinuousSetsRequest object.
def add(self, child):
    if isinstance(child, Run):
        self.add_run(child)
    elif isinstance(child, Record):
        self.add_record(child)
    elif isinstance(child, EventRecord):
        self.add_event_record(child)
    elif isinstance(child, DataDisplay):
        self.add_data_display(child)
    elif isinstance(child, DataWriter):
        self.add_data_writer(child)
    elif isinstance(child, EventWriter):
        self.add_event_writer(child)
    else:
        raise ModelError('Unsupported child element')
Adds a typed child object to the simulation spec. @param child: Child object to be added.
def between(self, start, end):
    if hasattr(start, 'strftime') and hasattr(end, 'strftime'):
        dt_between = (
            'javascript:gs.dateGenerate("%(start)s")'
            "@"
            'javascript:gs.dateGenerate("%(end)s")'
        ) % {
            'start': start.strftime('%Y-%m-%d %H:%M:%S'),
            'end': end.strftime('%Y-%m-%d %H:%M:%S')
        }
    elif isinstance(start, int) and isinstance(end, int):
        dt_between = '%d@%d' % (start, end)
    else:
        raise QueryTypeError("Expected `start` and `end` of type `int` "
                             "or instance of `datetime`, not %s and %s"
                             % (type(start), type(end)))
    return self._add_condition('BETWEEN', dt_between, types=[str])
Adds new `BETWEEN` condition :param start: int or datetime compatible object (in SNOW user's timezone) :param end: int or datetime compatible object (in SNOW user's timezone) :raise: - QueryTypeError: if start or end arguments is of an invalid type
def enable_global_typechecked_decorator(flag=True, retrospective=True):
    global global_typechecked_decorator
    global_typechecked_decorator = flag
    if import_hook_enabled:
        _install_import_hook()
    if global_typechecked_decorator and retrospective:
        _catch_up_global_typechecked_decorator()
    return global_typechecked_decorator
Enables or disables global typechecking mode via decorators. See flag global_typechecked_decorator. In contrast to setting the flag directly, this function provides a retrospective option. If retrospective is true, this will also affect already imported modules, not only future imports. Does not work if checking_enabled is false. Does not work reliably if checking_enabled has ever been set to false during current run.
def create_query_engine(config, clazz):
    try:
        qe = clazz(**config.settings)
    except Exception as err:
        raise CreateQueryEngineError(clazz, config.settings, err)
    return qe
Create and return new query engine object from the given `DBConfig` object. :param config: Database configuration :type config: dbconfig.DBConfig :param clazz: Class to use for creating query engine. Should act like query_engine.QueryEngine. :type clazz: class :return: New query engine
def send(self, data):
    log.debug('Sending %s' % data)
    if not self._socket:
        log.warning('No connection')
        return
    self._socket.send_bytes(data.encode('utf-8'))
Send data through websocket
def parse_command_line() -> Namespace:
    import tornado.options
    parser.parse_known_args(namespace=config)
    set_loglevel()
    for k, v in vars(config).items():
        if k.startswith('log'):
            setattr(tornado.options.options, k, v)
    return config
Parse command line options and set them to ``config``. This function skips unknown command line options. After parsing options, set log level and set options in ``tornado.options``.
def unpack(cls, msg):
    flags, first_payload_type, first_payload_size = cls.UNPACK_FROM(msg)
    if flags != 0:
        raise ProtocolError("Unsupported OP_MSG flags (%r)" % (flags,))
    if first_payload_type != 0:
        raise ProtocolError(
            "Unsupported OP_MSG payload type (%r)" % (first_payload_type,))
    if len(msg) != first_payload_size + 5:
        raise ProtocolError("Unsupported OP_MSG reply: >1 section")
    payload_document = bytes(msg[5:])
    return cls(flags, payload_document)
Construct an _OpMsg from raw bytes.
def makeSocket(self, timeout=1):
    plain_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(plain_socket, 'settimeout'):
        plain_socket.settimeout(timeout)
    wrapped_socket = ssl.wrap_socket(
        plain_socket,
        ca_certs=self.ca_certs,
        cert_reqs=self.reqs,
        keyfile=self.keyfile,
        certfile=self.certfile
    )
    wrapped_socket.connect((self.host, self.port))
    return wrapped_socket
Override SocketHandler.makeSocket, to allow creating wrapped TLS sockets
def enable_receiving(self, loop=None):
    self.receive_task = asyncio.ensure_future(self._receive_loop(), loop=loop)

    def do_if_done(fut):
        try:
            fut.result()
        except asyncio.CancelledError:
            pass
        except Exception as ex:
            self.receive_task_exception = ex
        self.receive_task = None

    self.receive_task.add_done_callback(do_if_done)
Schedules the receive loop to run on the given loop.
def main(model_folder):
    model_description_file = os.path.join(model_folder, "info.yml")
    with open(model_description_file, 'r') as ymlfile:
        model_description = yaml.load(ymlfile)
    logging.info(model_description['model'])
    data = {}
    data['training'] = os.path.join(model_folder, "traindata.hdf5")
    data['testing'] = os.path.join(model_folder, "testdata.hdf5")
    data['validating'] = os.path.join(model_folder, "validdata.hdf5")
    train_model(model_folder)
Main part of the training script.
def clean_series_name(seriesname):
    if not seriesname:
        return seriesname
    seriesname = re.sub(r'(\D)[.](\D)', '\\1 \\2', seriesname)
    seriesname = re.sub(r'(\D)[.]', '\\1 ', seriesname)
    seriesname = re.sub(r'[.](\D)', ' \\1', seriesname)
    seriesname = seriesname.replace('_', ' ')
    seriesname = re.sub('-$', '', seriesname)
    return _replace_series_name(seriesname.strip(),
                                cfg.CONF.input_series_replacements)
Cleans up series name. By removing any . and _ characters, along with any trailing hyphens. Is basically equivalent to replacing all _ and . with a space, but handles decimal numbers in string, for example:

>>> clean_series_name("an.example.1.0.test")
'an example 1.0 test'
>>> clean_series_name("an_example_1.0_test")
'an example 1.0 test'
def set_options(self, option_type, option_dict, force_options=False):
    if force_options:
        self.options[option_type].update(option_dict)
    elif (option_type in ('yAxis', 'xAxis')
            and isinstance(option_dict, list)):
        self.options[option_type] = MultiAxis(option_type)
        for each_dict in option_dict:
            self.options[option_type].update(**each_dict)
    elif option_type == 'colors':
        self.options["colors"].set_colors(option_dict)
    elif option_type in ["global", "lang"]:
        self.setOptions[option_type].update_dict(**option_dict)
    else:
        self.options[option_type].update_dict(**option_dict)
set plot options
def ffill_across_cols(df, columns, name_map):
    df.ffill(inplace=True)
    for column in columns:
        column_name = name_map[column.name]
        if column.dtype == categorical_dtype:
            df[column_name] = df[column_name].where(
                pd.notnull(df[column_name]), column.missing_value)
        else:
            df[column_name] = df[column_name].fillna(
                column.missing_value).astype(column.dtype)
Forward fill values in a DataFrame with special logic to handle cases that pd.DataFrame.ffill cannot and cast columns to appropriate types. Parameters ---------- df : pd.DataFrame The DataFrame to do forward-filling on. columns : list of BoundColumn The BoundColumns that correspond to columns in the DataFrame to which special filling and/or casting logic should be applied. name_map: map of string -> string Mapping from the name of each BoundColumn to the associated column name in `df`.
def get_vip_request(self, vip_request_id):
    uri = 'api/v3/vip-request/%s/' % vip_request_id
    return super(ApiVipRequest, self).get(uri)
Method to get vip request param vip_request_id: vip_request id
def iter(self, pages=None):
    i = self._pages()
    if pages is not None:
        i = itertools.islice(i, pages)
    return i
Get an iterator of pages. :param int pages: optional limit to number of pages :return: iter of this and subsequent pages
def get_token():
    token = os.environ.get("GH_TOKEN", None)
    if not token:
        token = "GH_TOKEN environment variable not set"
    token = token.encode('utf-8')
    return token
Get the encrypted GitHub token in Travis. Make sure the contents of this variable do not leak. The ``run()`` function will remove this from the output, so always use it.
def install(directory, composer=None, php=None, runas=None,
            prefer_source=None, prefer_dist=None, no_scripts=None,
            no_plugins=None, optimize=None, no_dev=None, quiet=False,
            composer_home='/root', env=None):
    result = _run_composer('install', directory=directory, composer=composer,
                           php=php, runas=runas, prefer_source=prefer_source,
                           prefer_dist=prefer_dist, no_scripts=no_scripts,
                           no_plugins=no_plugins, optimize=optimize,
                           no_dev=no_dev, quiet=quiet,
                           composer_home=composer_home, env=env)
    return result
Install composer dependencies for a directory. If composer has not been installed globally making it available in the system PATH & making it executable, the ``composer`` and ``php`` parameters will need to be set to the location of the executables. directory Directory location of the composer.json file. composer Location of the composer.phar file. If not set composer will just execute "composer" as if it is installed globally. (i.e. /path/to/composer.phar) php Location of the php executable to use with composer. (i.e. /usr/bin/php) runas Which system user to run composer as. prefer_source --prefer-source option of composer. prefer_dist --prefer-dist option of composer. no_scripts --no-scripts option of composer. no_plugins --no-plugins option of composer. optimize --optimize-autoloader option of composer. Recommended for production. no_dev --no-dev option for composer. Recommended for production. quiet --quiet option for composer. Whether or not to return output from composer. composer_home $COMPOSER_HOME environment variable env A list of environment variables to be set prior to execution. CLI Example: .. code-block:: bash salt '*' composer.install /var/www/application salt '*' composer.install /var/www/application \ no_dev=True optimize=True
def get_rel_sciobj_file_path(pid):
    hash_str = hashlib.sha1(pid.encode('utf-8')).hexdigest()
    return os.path.join(hash_str[:2], hash_str[2:4], hash_str)
Get the relative local path to the file holding an object's bytes. - The path is relative to settings.OBJECT_STORE_PATH - There is a one-to-one mapping between pid and path - The path is based on a SHA1 hash. It's now possible to craft SHA1 collisions, but it's so unlikely that we ignore it for now - The path may or may not exist (yet).
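The same two-level fan-out can be sketched as a standalone helper (the name `rel_path_for_pid` is illustrative). The first two hex digits pick one of 256 top-level directories, the next two a subdirectory, keeping directory sizes manageable:

```python
import hashlib
import os

def rel_path_for_pid(pid):
    """Map an identifier to a two-level directory fan-out based on SHA-1."""
    h = hashlib.sha1(pid.encode('utf-8')).hexdigest()  # 40 hex chars
    # e.g. "c3/49/c3499c2729730a..." -- deterministic, one path per pid
    return os.path.join(h[:2], h[2:4], h)
```

Because SHA-1 output is uniformly distributed, objects spread evenly across the 256 x 256 buckets.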
def hybrid_forward(self, F, inputs):
    outputs = self.ffn_1(inputs)
    if self.activation:
        outputs = self.activation(outputs)
    outputs = self.ffn_2(outputs)
    if self._dropout:
        outputs = self.dropout_layer(outputs)
    if self._use_residual:
        outputs = outputs + inputs
    outputs = self.layer_norm(outputs)
    return outputs
Position-wise encoding of the inputs. Parameters ---------- inputs : Symbol or NDArray Input sequence. Shape (batch_size, length, C_in) Returns ------- outputs : Symbol or NDArray Shape (batch_size, length, C_out)
def load(filename):
    try:
        with open(filename, 'rb') as f:
            return pickle.load(f)
    except Exception as e1:
        try:
            return jl_load(filename)
        except Exception as e2:
            raise IOError(
                "Unable to load {} using the pickle or joblib protocol.\n"
                "Pickle: {}\n"
                "Joblib: {}".format(filename, e1, e2)
            )
Load an object that has been saved with dump. We try to open it using the pickle protocol. As a fallback, we use joblib.load. Joblib was the default prior to msmbuilder v3.2 Parameters ---------- filename : string The name of the file to load.
def variablename(var):
    # itertools.ifilter is Python 2 only; the built-in filter works in both
    s = [tpl[0] for tpl in filter(lambda x: var is x[1], globals().items())]
    return s[0].upper()
Returns the name of the given variable (upper-cased), found by identity search through globals().
def execute_with_style_LEGACY(template, style, data, callback,
                              body_subtree='body'):
    try:
        body_data = data[body_subtree]
    except KeyError:
        raise EvaluationError('Data dictionary has no subtree %r'
                              % body_subtree)
    tokens_body = []
    template.execute(body_data, tokens_body.append)
    data[body_subtree] = tokens_body
    tokens = []
    style.execute(data, tokens.append)
    _FlattenToCallback(tokens, callback)
OBSOLETE old API.
def bk_blue(cls):
    wAttributes = cls._get_text_attributes()
    wAttributes &= ~win32.BACKGROUND_MASK
    wAttributes |= win32.BACKGROUND_BLUE
    cls._set_text_attributes(wAttributes)
Make the text background color blue.
def default_spec(self, manager):
    specstr = ""
    stable_families = manager.stable_families
    if manager.config.releases_unstable_prehistory and stable_families:
        specstr = ">={}".format(min(stable_families))
    if self.is_featurelike:
        if True:
            specstr = ">={}".format(max(manager.keys()))
    else:
        buckets = self.minor_releases(manager)
        if buckets:
            specstr = ">={}".format(max(buckets))
    return Spec(specstr) if specstr else Spec()
Given the current release-lines structure, return a default Spec. Specifics: * For feature-like issues, only the highest major release is used, so given a ``manager`` with top level keys of ``[1, 2]``, this would return ``Spec(">=2")``. * When ``releases_always_forwardport_features`` is ``True``, that behavior is nullified, and this function always returns the empty ``Spec`` (which matches any and all versions/lines). * For bugfix-like issues, we only consider major release families which have actual releases already. * Thus the core difference here is that features are 'consumed' by upcoming major releases, and bugfixes are not. * When the ``unstable_prehistory`` setting is ``True``, the default spec starts at the oldest non-zero release line. (Otherwise, issues posted after prehistory ends would try being added to the 0.x part of the tree, which makes no sense in unstable-prehistory mode.)
def _get_id(self):
    return ''.join(map(str, filter(is_not_None, [self.Prefix, self.Name])))
Construct and return the identifier
def send_pgrp(cls, sock, pgrp):
    assert isinstance(pgrp, IntegerForPid) and pgrp < 0
    encoded_int = cls.encode_int(pgrp)
    cls.write_chunk(sock, ChunkType.PGRP, encoded_int)
Send the PGRP chunk over the specified socket.
def get_current_key(self, resource_name):
    url = ENCRYPTION_CURRENT_KEY_URL.format(resource_name)
    return self._key_from_json(self._get_resource(url))
Returns a restclients.Key object for the given resource. If the resource isn't found, or if there is an error communicating with the KWS, a DataFailureException will be thrown.
def headers(self):
    return {
        "Content-Type": ("multipart/form-data; boundary={}"
                         .format(self.boundary)),
        "Content-Length": str(self.len),
        "Content-Encoding": self.encoding,
    }
All headers needed to make a request
def _guessunit(self):
    if not self.days % 1:
        return 'd'
    elif not self.hours % 1:
        return 'h'
    elif not self.minutes % 1:
        return 'm'
    elif not self.seconds % 1:
        return 's'
    else:
        raise ValueError(
            'The stepsize is not a multiple of one '
            'second, which is not allowed.')
Guess the unit of the period as the largest one, which results in an integer duration.
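The same largest-even-unit guess can be sketched over a plain duration in seconds (the helper name and interface are illustrative): try each unit from largest to smallest and return the first one that divides the duration without remainder.

```python
def guess_unit(seconds):
    """Return the largest unit ('d'/'h'/'m'/'s') that evenly divides seconds."""
    for unit, size in (('d', 86400), ('h', 3600), ('m', 60), ('s', 1)):
        if seconds % size == 0:
            return unit
    # Only reachable for fractional-second durations
    raise ValueError('The stepsize is not a multiple of one second.')
```

For example, 7200 seconds guesses `'h'`, while 7260 seconds (a non-integer number of hours) falls through to `'m'`.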
def from_response(response):
    http_response = response.raw._original_response
    status_line = "HTTP/1.1 %d %s" % (http_response.status,
                                      http_response.reason)
    headers = str(http_response.msg)
    body = http_response.read()
    response.raw._fp = StringIO(body)
    payload = status_line + "\r\n" + headers + "\r\n" + body
    headers = {
        "WARC-Type": "response",
        "WARC-Target-URI": response.request.full_url.encode('utf-8')
    }
    return WARCRecord(payload=payload, headers=headers)
Creates a WARCRecord from given response object. This must be called before reading the response. The response can be read after this method is called. :param response: An instance of :class:`requests.models.Response`.
def getlist(self, section, option):
    value = self.get(section, option)
    if value:
        return value.split(',')
    else:
        return None
returns the named option as a list, splitting the original value by ','
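A standalone sketch of the same idea against the standard library's `ConfigParser` (written as a free function here for illustration; using `fallback=None` also covers a missing option):

```python
from configparser import ConfigParser

def getlist(parser, section, option):
    """Read a comma-separated option as a list, or None when empty/absent."""
    value = parser.get(section, option, fallback=None)
    return value.split(',') if value else None
```

Note that `"a,b,c".split(',')` keeps surrounding whitespace; callers wanting `"a, b, c"` to round-trip cleanly would strip each item.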
def setKeySequenceCounter(self, iKeySequenceValue):
    # Python 2 print statements and except syntax updated to Python 3
    print('%s call setKeySequenceCounter' % self.port)
    print(iKeySequenceValue)
    try:
        cmd = WPANCTL_CMD + 'setprop Network:KeyIndex %s' % str(iKeySequenceValue)
        if self.__sendCommand(cmd)[0] != 'Fail':
            time.sleep(1)
            return True
        else:
            return False
    except Exception as e:
        ModuleHelper.WriteIntoDebugLogger(
            'setKeySequenceCounter() Error: ' + str(e))
Set the key sequence counter corresponding to the Thread network master key.

Args:
    iKeySequenceValue: key sequence value

Returns:
    True: key sequence was set successfully
    False: failed to set the key sequence
def vertical_velocity(omega, pressure, temperature, mixing=0):
    rho = density(pressure, temperature, mixing)
    return (omega / (-mpconsts.g * rho)).to('m/s')
Calculate w from omega assuming hydrostatic conditions. This function converts vertical velocity with respect to pressure :math:`\left(\omega = \frac{Dp}{Dt}\right)` to that with respect to height :math:`\left(w = \frac{Dz}{Dt}\right)` assuming hydrostatic conditions on the synoptic scale. By Equation 7.33 in [Hobbs2006]_, .. math:: \omega \simeq -\rho g w so that .. math:: w \simeq \frac{- \omega}{\rho g} Density (:math:`\rho`) is calculated using the :func:`density` function, from the given pressure and temperature. If `mixing` is given, the virtual temperature correction is used, otherwise, dry air is assumed. Parameters ---------- omega: `pint.Quantity` Vertical velocity in terms of pressure pressure: `pint.Quantity` Total atmospheric pressure temperature: `pint.Quantity` Air temperature mixing: `pint.Quantity`, optional Mixing ratio of air Returns ------- `pint.Quantity` Vertical velocity in terms of height (in meters / second) See Also -------- density, vertical_velocity_pressure
def _reconnect_handler(self):
    for channel_name, channel in self.channels.items():
        data = {'channel': channel_name}
        if channel.auth:
            data['auth'] = channel.auth
        self.connection.send_event('pusher:subscribe', data)
Handle a reconnect.
def get(self, preview_id):
    return self.request.get('get', params=dict(id=preview_id))
Retrieve a Historics preview job. Warning: previews expire after 24 hours. Uses API documented at http://dev.datasift.com/docs/api/rest-api/endpoints/previewget :param preview_id: historics preview job hash of the job to retrieve :type preview_id: str :return: dict of REST API output with headers attached :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError`
def _logger(self):
    level = logging.INFO
    self.log.setLevel(level)
    self.log.handlers = []
    if self.default_args.logging is not None:
        level = self._logger_levels[self.default_args.logging]
    elif self.default_args.tc_log_level is not None:
        level = self._logger_levels[self.default_args.tc_log_level]
    self.log.setLevel(level)
    if self.default_args.tc_log_path:
        self._logger_fh()
    if self.default_args.tc_token is not None and self.default_args.tc_log_to_api:
        self._logger_api()
    self.log.info('Logging Level: {}'.format(logging.getLevelName(level)))
Create TcEx app logger instance. The logger is accessible via the ``tc.log.<level>`` call. **Logging examples** .. code-block:: python :linenos: :lineno-start: 1 tcex.log.debug('logging debug') tcex.log.info('logging info') tcex.log.warning('logging warning') tcex.log.error('logging error') Args: stream_only (bool, default:False): If True only the Stream handler will be enabled. Returns: logger: An instance of logging