def get_kwargs(self, args):
    kwargs = {}
    argspec = inspect.getargspec(self._func)
    required = set(argspec.args[:-len(argspec.defaults)]
                   if argspec.defaults else argspec.args)
    for arg_name in argspec.args:
        try:
            kwargs[arg_name] = getattr(args, arg_name)
        except AttributeError:
            if arg_name in required:
                raise
    if argspec.keywords:
        for key, value in args.__dict__.items():
            if key in kwargs:
                continue
            kwargs[key] = value
    return kwargs
Given a Namespace object drawn from argparse, determines the keyword arguments to pass to the underlying function. Note that, if the underlying function accepts all keyword arguments, the dictionary returned will contain the entire contents of the Namespace object. Also note that an AttributeError will be raised if any argument required by the function is not set in the Namespace object. :param args: A Namespace object from argparse.
def write(parsed_obj, spec=None, filename=None):
    if not isinstance(parsed_obj, BreadStruct):
        raise ValueError(
            'Object to write must be a structure created '
            'by bread.parse')
    if filename is not None:
        with open(filename, 'wb') as fp:
            parsed_obj._data_bits[:parsed_obj._length].tofile(fp)
    else:
        return bytearray(
            parsed_obj._data_bits[:parsed_obj._length].tobytes())
Writes an object created by `parse` to either a file or a bytearray. If the object doesn't end on a byte boundary, zeroes are appended to it until it does.
def read_config(conf_dir=DEFAULT_CONFIG_DIR):
    "Find and read config file for a directory, return None if not found."
    conf_path = os.path.expanduser(conf_dir)
    if not os.path.exists(conf_path):
        if conf_dir != DEFAULT_CONFIG_DIR:
            raise IOError("Config directory not found at %s" % (conf_path, ))
    return munge.load_datafile('config', conf_path, default=None)
Find and read config file for a directory, return None if not found.
def attach(self, engine, log_handler, event_name):
    if event_name not in State.event_to_attr:
        raise RuntimeError("Unknown event name '{}'".format(event_name))
    engine.add_event_handler(event_name, log_handler, self, event_name)
Attach the logger to the engine and execute `log_handler` function at `event_name` events. Args: engine (Engine): engine object. log_handler (callable): a logging handler to execute event_name: event to attach the logging handler to. Valid events are from :class:`~ignite.engine.Events` or any `event_name` added by :meth:`~ignite.engine.Engine.register_events`.
def charge_sign(self):
    if self.charge > 0:
        sign = "+"
    elif self.charge < 0:
        sign = "–"
    else:
        return ""
    ab = abs(self.charge)
    if ab > 1:
        return str(ab) + sign
    return sign
Charge sign text
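The sign logic above can be sketched as a standalone function (a hypothetical free-function version of the method, for illustration only):

```python
def charge_sign(charge):
    # "+" for positive, en-dash minus for negative, empty string for neutral;
    # magnitudes above 1 are prefixed, e.g. charge 2 -> "2+".
    if charge > 0:
        sign = "+"
    elif charge < 0:
        sign = "\u2013"
    else:
        return ""
    magnitude = abs(charge)
    if magnitude > 1:
        return str(magnitude) + sign
    return sign
```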
def set(self, key, value, *args, **kwargs):
    if self.cfg.jsonpickle:
        value = jsonpickle.encode(value)
    return self.conn.set(key, value, *args, **kwargs)
Store the given value into Redis. :returns: a coroutine
def evaluate(self, train_data, test_data=None, metric='perplexity'):
    train_data = _check_input(train_data)
    if test_data is None:
        test_data = train_data
    else:
        test_data = _check_input(test_data)
    predictions = self.predict(train_data, output_type='probability')
    topics = self.topics
    ret = {}
    ret['perplexity'] = perplexity(test_data, predictions,
                                   topics['topic_probabilities'],
                                   topics['vocabulary'])
    return ret
Estimate the model's ability to predict new data. Imagine you have a corpus of books. One common approach to evaluating topic models is to train on the first half of all of the books and see how well the model predicts the second half of each book. This method returns a metric called perplexity, which is related to the likelihood of observing these words under the given model. See :py:func:`~turicreate.topic_model.perplexity` for more details. The provided `train_data` and `test_data` must have the same length, i.e., both data sets must have the same number of documents; the model will use train_data to estimate which topic the document belongs to, and this is used to estimate the model's performance at predicting the unseen words in the test data. See :py:func:`~turicreate.topic_model.TopicModel.predict` for details on how these predictions are made, and see :py:func:`~turicreate.text_analytics.random_split` for a helper function that can be used for making train/test splits. Parameters ---------- train_data : SArray or SFrame A set of documents to predict topics for. test_data : SArray or SFrame, optional A set of documents to evaluate performance on. By default this will set to be the same as train_data. metric : str The chosen metric to use for evaluating the topic model. Currently only 'perplexity' is supported. Returns ------- out : dict The set of estimated evaluation metrics. See Also -------- predict, turicreate.toolkits.text_analytics.random_split Examples -------- >>> docs = turicreate.SArray('https://static.turi.com/datasets/nips-text') >>> train_data, test_data = turicreate.text_analytics.random_split(docs) >>> m = turicreate.topic_model.create(train_data) >>> m.evaluate(train_data, test_data) {'perplexity': 2467.530370396021}
def _get_local_users(self, disabled=None):
    users = dict()
    path = '/etc/passwd'
    with salt.utils.files.fopen(path, 'r') as fp_:
        for line in fp_:
            line = line.strip()
            if ':' not in line:
                continue
            name, password, uid, gid, gecos, directory, shell = line.split(':')
            active = not (password == '*' or password.startswith('!'))
            if (disabled is False and active) or \
                    (disabled is True and not active) or disabled is None:
                users[name] = {
                    'uid': uid,
                    'gid': gid,
                    'info': gecos,
                    'home': directory,
                    'shell': shell,
                    'disabled': not active
                }
    return users
Return all known local accounts to the system.
def log_message(self, format, *args):
    code = args[1][0]
    levels = {
        '4': 'warning',
        '5': 'error'
    }
    log_handler = getattr(logger, levels.get(code, 'info'))
    log_handler(format % args)
overrides the ``log_message`` method from the wsgiref server so that normal logging works with whatever configuration the application has been set to. Levels are inferred from the HTTP status code, 4XX codes are treated as warnings, 5XX as errors and everything else as INFO level.
def root(venv_name):
    inenv = InenvManager()
    inenv.get_venv(venv_name)
    venv = inenv.registered_venvs[venv_name]
    click.secho(venv['root'])
Print the root directory of a virtualenv
def setup_logging(name):
    logger = logging.getLogger(__name__)
    if 'NVIM_PYTHON_LOG_FILE' in os.environ:
        prefix = os.environ['NVIM_PYTHON_LOG_FILE'].strip()
        major_version = sys.version_info[0]
        logfile = '{}_py{}_{}'.format(prefix, major_version, name)
        handler = logging.FileHandler(logfile, 'w', 'utf-8')
        handler.formatter = logging.Formatter(
            '%(asctime)s [%(levelname)s @ '
            '%(filename)s:%(funcName)s:%(lineno)s] %(process)s - %(message)s')
        logging.root.addHandler(handler)
        level = logging.INFO
        if 'NVIM_PYTHON_LOG_LEVEL' in os.environ:
            lvl = getattr(logging,
                          os.environ['NVIM_PYTHON_LOG_LEVEL'].strip(),
                          level)
            if isinstance(lvl, int):
                level = lvl
        logger.setLevel(level)
Setup logging according to environment variables.
def add_payload(self, payload):
    if self.payloads:
        self.payloads[-1].next_payload = payload._type
    self.payloads.append(payload)
Adds a payload to packet, updating last payload's next_payload field
def get_items(self, assessment_taken_id):
    mgr = self._get_provider_manager('ASSESSMENT', local=True)
    taken_lookup_session = mgr.get_assessment_taken_lookup_session(proxy=self._proxy)
    taken_lookup_session.use_federated_bank_view()
    taken = taken_lookup_session.get_assessment_taken(assessment_taken_id)
    ils = get_item_lookup_session(runtime=self._runtime, proxy=self._proxy)
    ils.use_federated_bank_view()
    item_list = []
    if 'sections' in taken._my_map:
        for section_id in taken._my_map['sections']:
            section = get_assessment_section(Id(section_id),
                                             runtime=self._runtime,
                                             proxy=self._proxy)
            for question in section._my_map['questions']:
                item_list.append(ils.get_item(Id(question['questionId'])))
    return ItemList(item_list)
Gets the items questioned in an assessment. arg: assessment_taken_id (osid.id.Id): ``Id`` of the ``AssessmentTaken`` return: (osid.assessment.ItemList) - the list of assessment questions raise: NotFound - ``assessment_taken_id`` is not found raise: NullArgument - ``assessment_taken_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure occurred *compliance: mandatory -- This method must be implemented.*
def _build_url(self, path):
    if path.startswith('http://') or path.startswith('https://'):
        return path
    else:
        return '%s%s' % (self._url, path)
Returns the full url from path. If path is already a url, return it unchanged. If it's a path, append it to the stored url. Returns: str: The full URL
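The same pass-through-or-append scheme as a free function (hypothetical names, for illustration):

```python
def build_url(base, path):
    # Absolute URLs pass through unchanged; relative paths are
    # appended to the stored base URL.
    if path.startswith(('http://', 'https://')):
        return path
    return '%s%s' % (base, path)
```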
def add_nexusport_binding(port_id, vlan_id, vni, switch_ip, instance_id,
                          is_native=False, ch_grp=0):
    LOG.debug("add_nexusport_binding() called")
    session = bc.get_writer_session()
    binding = nexus_models_v2.NexusPortBinding(port_id=port_id,
                                               vlan_id=vlan_id,
                                               vni=vni,
                                               switch_ip=switch_ip,
                                               instance_id=instance_id,
                                               is_native=is_native,
                                               channel_group=ch_grp)
    session.add(binding)
    session.flush()
    return binding
Adds a nexusport binding.
def train(self, ftrain):
    self.coeffs = 0*self.coeffs
    upoints, wpoints = self.getQuadraturePointsAndWeights()
    try:
        fpoints = [ftrain(u) for u in upoints]
    except TypeError:
        fpoints = ftrain
    for ipoly in np.arange(self.N_poly):
        inds = tuple(self.index_polys[ipoly])
        coeff = 0.0
        for (u, q, w) in zip(upoints, fpoints, wpoints):
            coeff += eval_poly(u, inds, self.J_list)*q*np.prod(w)
        self.coeffs[inds] = coeff
    return None
Trains the polynomial expansion. :param numpy.ndarray/function ftrain: output values corresponding to the quadrature points given by the getQuadraturePoints method to which the expansion should be trained. Or a function that should be evaluated at the quadrature points to give these output values. *Sample Usage*:: >>> thePC = PolySurrogate(dimensions=2) >>> thePC.train(myFunc) >>> predicted_q = thePC.predict([0, 1]) >>> thePC = PolySurrogate(dimensions=2) >>> U = thePC.getQuadraturePoints() >>> Q = [myFunc(u) for u in U] >>> thePC.train(Q) >>> predicted_q = thePC.predict([0, 1])
def commit_hash(self):
    commit_hash = None
    branch = None
    branch_file = '.git/HEAD'
    if os.path.isfile(branch_file):
        with open(branch_file, 'r') as f:
            try:
                branch = f.read().strip().split('/')[2]
            except IndexError:
                pass
    if branch:
        hash_file = '.git/refs/heads/{}'.format(branch)
        if os.path.isfile(hash_file):
            with open(hash_file, 'r') as f:
                commit_hash = f.read().strip()
    return commit_hash
Return the current commit hash if available. This is not a required task so best effort is fine. In other words this is not guaranteed to work 100% of the time.
def build_loss(model_logits, sparse_targets):
    time_major_shape = [FLAGS.unroll_steps, FLAGS.batch_size]
    flat_batch_shape = [FLAGS.unroll_steps * FLAGS.batch_size, -1]
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=tf.reshape(model_logits, flat_batch_shape),
        labels=tf.reshape(sparse_targets, flat_batch_shape[:-1]))
    xent = tf.reshape(xent, time_major_shape)
    sequence_neg_log_prob = tf.reduce_sum(xent, axis=0)
    return tf.reduce_mean(sequence_neg_log_prob, axis=0)
Compute the log loss given predictions and targets.
def get_content_hash(self):
    if not self.rexists():
        return SCons.Util.MD5signature('')
    fname = self.rfile().get_abspath()
    try:
        cs = SCons.Util.MD5filesignature(
            fname, chunksize=SCons.Node.FS.File.md5_chunksize*1024)
    except EnvironmentError as e:
        if not e.filename:
            e.filename = fname
        raise
    return cs
Compute and return the MD5 hash for this file.
def buckets_get(self, bucket, projection='noAcl'):
    args = {'projection': projection}
    url = Api._ENDPOINT + (Api._BUCKET_PATH % bucket)
    return google.datalab.utils.Http.request(url,
                                             credentials=self._credentials,
                                             args=args)
Issues a request to retrieve information about a bucket. Args: bucket: the name of the bucket. projection: the projection of the bucket information to retrieve. Returns: A parsed bucket information dictionary. Raises: Exception if there is an error performing the operation.
def inherit(prop, name, **kwargs):
    flags = []
    if kwargs.get('recursive', False):
        flags.append('-r')
    if kwargs.get('revert', False):
        flags.append('-S')
    res = __salt__['cmd.run_all'](
        __utils__['zfs.zfs_command'](
            command='inherit',
            flags=flags,
            property_name=prop,
            target=name,
        ),
        python_shell=False,
    )
    return __utils__['zfs.parse_command_result'](res, 'inherited')
Clears the specified property prop : string name of property name : string name of the filesystem, volume, or snapshot recursive : boolean recursively inherit the given property for all children. revert : boolean revert the property to the received value if one exists; otherwise operate as if the -S option was not specified. .. versionadded:: 2016.3.0 CLI Example: .. code-block:: bash salt '*' zfs.inherit canmount myzpool/mydataset [recursive=True|False]
def _factorize_array(values, na_sentinel=-1, size_hint=None, na_value=None):
    (hash_klass, _), values = _get_data_algo(values, _hashtables)
    table = hash_klass(size_hint or len(values))
    uniques, labels = table.factorize(values, na_sentinel=na_sentinel,
                                      na_value=na_value)
    labels = ensure_platform_int(labels)
    return labels, uniques
Factorize an array-like to labels and uniques. This doesn't do any coercion of types or unboxing before factorization. Parameters ---------- values : ndarray na_sentinel : int, default -1 size_hint : int, optional Passed through to the hashtable's 'get_labels' method na_value : object, optional A value in `values` to consider missing. Note: only use this parameter when you know that you don't have any values pandas would consider missing in the array (NaN for float data, iNaT for datetimes, etc.). Returns ------- labels, uniques : ndarray
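The factorization contract (labels in order of first appearance, sentinel for missing) can be illustrated with a pure-Python sketch that skips the hashtable machinery; `None` stands in for the missing value here:

```python
def factorize(values, na_sentinel=-1):
    # Map each distinct value to an integer label in order of first
    # appearance; None is treated as missing and gets the sentinel label.
    uniques, labels, seen = [], [], {}
    for v in values:
        if v is None:
            labels.append(na_sentinel)
            continue
        if v not in seen:
            seen[v] = len(uniques)
            uniques.append(v)
        labels.append(seen[v])
    return labels, uniques
```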
def _PrintTasksInformation(self, storage_reader):
    table_view = views.ViewsFactory.GetTableView(
        self._views_format_type, title='Tasks')
    for task_start, _ in storage_reader.GetSessions():
        start_time = timelib.Timestamp.CopyToIsoFormat(
            task_start.timestamp)
        task_identifier = uuid.UUID(hex=task_start.identifier)
        task_identifier = '{0!s}'.format(task_identifier)
        table_view.AddRow([task_identifier, start_time])
    table_view.Write(self._output_writer)
Prints information about the tasks. Args: storage_reader (StorageReader): storage reader.
def __require_kytos_config(self):
    if self.__enabled is None:
        uri = self._kytos_api + 'api/kytos/core/config/'
        try:
            options = json.loads(urllib.request.urlopen(uri).read())
        except urllib.error.URLError:
            print('Kytos is not running.')
            sys.exit()
        self.__enabled = Path(options.get('napps'))
        self.__installed = Path(options.get('installed_napps'))
Set path locations from kytosd API. It should not be called directly, but from properties that require a running kytosd instance.
def _stop_process(p, name):
    if p.poll() is not None:
        print("{} is already stopped.".format(name))
        return
    p.terminate()
    time.sleep(0.1)
    if p.poll() is not None:
        print("{} is terminated.".format(name))
        return
    p.kill()
    print("{} is killed.".format(name))
Stop process, by applying terminate and kill.
def first_n_three_layer_P(reference_patterns, estimated_patterns, n=5):
    validate(reference_patterns, estimated_patterns)
    if _n_onset_midi(reference_patterns) == 0 or \
            _n_onset_midi(estimated_patterns) == 0:
        return 0., 0., 0.
    fn_est_patterns = estimated_patterns[:min(len(estimated_patterns), n)]
    F, P, R = three_layer_FPR(reference_patterns, fn_est_patterns)
    return P
First n three-layer precision. This metric is basically the same as the three-layer FPR but it is only applied to the first n estimated patterns, and it only returns the precision. In MIREX and typically, n = 5. Examples -------- >>> ref_patterns = mir_eval.io.load_patterns("ref_pattern.txt") >>> est_patterns = mir_eval.io.load_patterns("est_pattern.txt") >>> P = mir_eval.pattern.first_n_three_layer_P(ref_patterns, ... est_patterns, n=5) Parameters ---------- reference_patterns : list The reference patterns in the format returned by :func:`mir_eval.io.load_patterns()` estimated_patterns : list The estimated patterns in the same format n : int Number of patterns to consider from the estimated results, in the order they appear in the matrix (Default value = 5) Returns ------- precision : float The first n three-layer Precision
def spawn(func, *args, **kwargs):
    fiber = Fiber(func, args, **kwargs)
    fiber.start()
    return fiber
Spawn a new fiber. A new :class:`Fiber` is created with main function *func* and positional arguments *args*. The keyword arguments are passed to the :class:`Fiber` constructor, not to the main function. The fiber is then scheduled to start by calling its :meth:`~Fiber.start` method. The fiber instance is returned.
def erase(ctx):
    if os.path.exists(ctx.obj['report']):
        os.remove(ctx.obj['report'])
Erase the existing smother report.
def from_bytes(rawbytes):
    icmpv6popts = ICMPv6OptionList()
    i = 0
    while i < len(rawbytes):
        opttype = rawbytes[i]
        optnum = ICMPv6OptionNumber(opttype)
        obj = ICMPv6OptionClasses[optnum]()
        eaten = obj.from_bytes(rawbytes[i:])
        i += eaten
        icmpv6popts.append(obj)
    return icmpv6popts
Takes a byte string as a parameter and returns a list of ICMPv6Option objects.
def import_log_funcs():
    global g_logger
    curr_mod = sys.modules[__name__]
    for func_name in _logging_funcs:
        func = getattr(g_logger, func_name)
        setattr(curr_mod, func_name, func)
Import the common log functions from the global logger to the module.
def add_success(self, group=None, type_='', field='', description=''):
    group = group or '(200)'
    group = int(group.lower()[1:-1])
    self.retcode = self.retcode or group
    if group != self.retcode:
        raise ValueError('Two or more retcodes!')
    type_ = type_ or '{String}'
    p = Param(type_, field, description)
    self.params['responce'][p.field] = p
parse and append a success data param
def uninstall_ruby(ruby, runas=None):
    ruby = re.sub(r'^ruby-', '', ruby)
    _rbenv_exec(['uninstall', '--force', ruby], runas=runas)
    return True
Uninstall a ruby implementation. ruby The version of ruby to uninstall. Should match one of the versions listed by :py:func:`rbenv.versions <salt.modules.rbenv.versions>`. runas The user under which to run rbenv. If not specified, then rbenv will be run as the user under which Salt is running. CLI Example: .. code-block:: bash salt '*' rbenv.uninstall_ruby 2.0.0-p0
def Validate(self):
    bad_filters = []
    for f in self.filters:
        try:
            f.Validate()
        except DefinitionError as e:
            bad_filters.append("%s: %s" % (f.expression, e))
    if bad_filters:
        raise DefinitionError(
            "Filters with invalid expressions: %s" % ", ".join(bad_filters))
Verifies this filter set can process the result data.
def move(self, new_location):
    self._perform_change(change.MoveResource(self, new_location),
                         'Moving <%s> to <%s>' % (self.path, new_location))
Move resource to `new_location`
def get_context_data(self, **kwargs):
    context = super(TermsView, self).get_context_data(**kwargs)
    context['terms_base_template'] = getattr(
        settings, 'TERMS_BASE_TEMPLATE', DEFAULT_TERMS_BASE_TEMPLATE)
    return context
Pass additional context data
def hour(self):
    self.magnification = 3600
    self._update(self.baseNumber, self.magnification)
    return self
set unit to hour
def hlist(self, name_start, name_end, limit=10):
    limit = get_positive_integer('limit', limit)
    return self.execute_command('hlist', name_start, name_end, limit)
Return a list of the top ``limit`` hash's name between ``name_start`` and ``name_end`` in ascending order .. note:: The range is (``name_start``, ``name_end``]. The ``name_start`` isn't in the range, but ``name_end`` is. :param string name_start: The lower bound(not included) of hash names to be returned, empty string ``''`` means -inf :param string name_end: The upper bound(included) of hash names to be returned, empty string ``''`` means +inf :param int limit: number of elements will be returned. :return: a list of hash's name :rtype: list >>> ssdb.hlist('hash_ ', 'hash_z', 10) ['hash_1', 'hash_2'] >>> ssdb.hlist('hash_ ', '', 3) ['hash_1', 'hash_2'] >>> ssdb.hlist('', 'aaa_not_exist', 10) []
def get_args_setting(args, jsonpath='scenario_setting.json'):
    if jsonpath is not None:
        with open(jsonpath) as f:
            args = json.load(f)
    return args
Get and open the json file with the scenario settings of eTraGo ``args``. The settings include all eTraGo-specific settings of arguments and parameters for a reproducible calculation. Parameters ---------- json_file : str Default: ``scenario_setting.json`` Name of scenario setting json file Returns ------- args : dict Dictionary of json file
def create_graph():
    with tf.gfile.FastGFile(os.path.join(
            FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
Creates a graph from saved GraphDef file and returns a saver.
def add_token_layer(self, words_file, connected):
    for word in etree.parse(words_file).iterfind('//word'):
        token_node_id = word.attrib['id']
        self.tokens.append(token_node_id)
        token_str = ensure_unicode(word.text)
        self.add_node(token_node_id,
                      layers={self.ns, self.ns+':token'},
                      attr_dict={self.ns+':token': token_str,
                                 'label': token_str})
        if connected:
            self.add_edge(self.root, token_node_id,
                          layers={self.ns, self.ns+':token'})
parses a _words.xml file, adds every token to the document graph and adds an edge from the MMAX root node to it. Parameters ---------- connected : bool Make the graph connected, i.e. add an edge from root to each token.
def writemessage(self, text):
    self.IQUEUELOCK.acquire()
    TelnetHandlerBase.writemessage(self, text)
    self.IQUEUELOCK.release()
Put data in output queue, rebuild the prompt and entered data
def __timestamp():
    today = time.time()
    ret = struct.pack(b'=L', int(today))
    return ret
Generate timestamp data for pyc header.
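A minimal standalone sketch of the packing step (the `'=L'` format means native byte order with a standard 4-byte unsigned size; names here are illustrative):

```python
import struct


def pyc_timestamp(epoch_seconds):
    # Pack a POSIX timestamp as a 4-byte unsigned integer, the form
    # used in the .pyc header described above.
    return struct.pack(b'=L', int(epoch_seconds))
```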
def pipe(data, *fns):
    return reduce(lambda acc, f: f(acc), fns, data)
Apply functions recursively on your data :param data: the data :param fns: functions :returns: an object >>> inc = lambda x: x + 1 >>> pipe(42, inc, str) '43'
def FileTransfer(*args, **kwargs):
    if len(args) >= 1:
        device_type = args[0].device_type
    else:
        device_type = kwargs["ssh_conn"].device_type
    if device_type not in scp_platforms:
        raise ValueError(
            "Unsupported SCP device_type: "
            "currently supported platforms are: {}".format(scp_platforms_str)
        )
    FileTransferClass = FILE_TRANSFER_MAP[device_type]
    return FileTransferClass(*args, **kwargs)
Factory function selects the proper SCP class and creates object based on device_type.
def get_user_ratings(self, item_type=None):
    if item_type:
        query_string = 'itemType=%s' % item_type
        return self.parse_raw_response(
            requests_util.run_request(
                'get',
                self.API_BASE_URL + '/user/ratings/qeury?%s' % query_string,
                headers=self.__get_header_with_auth()))
    else:
        return self.__get_user_ratings()
Returns a list of the ratings for the type of item provided, for the current user. :param item_type: One of: series, episode or banner. :return: a python dictionary with either the result of the search or an error from TheTVDB.
def predict(self, X, k=None, depth=None):
    if depth is None:
        if self.opt_depth is not None:
            depth = self.opt_depth
            if self.verbose > 0:
                print('using optimal depth to predict')
        else:
            depth = float('inf')
    response = self._predict(X=X, depth=depth)
    if k is not None:
        mean = self.predict(X=self.X, depth=depth).reshape(-1, 1)
        response += self.BLUP.predict(XTest=X, k=k, mean=mean).reshape(-1)
    return response
Predict response for X. The response to an input sample is computed as the sum of (1) the mean prediction of the trees in the forest (fixed effect) and (2) the estimated random effect. Parameters ---------- X : array-like of shape = [n_samples, n_features] The input samples. k: array-like of shape = [n_samples, n_samples_fitting] The cross-dependency structure between the samples used for learning the forest and the input samples. If not specified only the estimated fixed effect is returned. Returns ------- y : array of shape = [n_samples, 1] The response
def _iter_list_for_dicts(self, check_list):
    list_copy = copy.deepcopy(check_list)
    for index, elem in enumerate(check_list):
        if isinstance(elem, dict):
            list_copy[index] = self._check_for_python_keywords(elem)
        elif isinstance(elem, list):
            list_copy[index] = self._iter_list_for_dicts(elem)
        else:
            list_copy[index] = elem
    return list_copy
Iterate over list to find dicts and check for python keywords.
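The dict/list recursion pattern can be sketched without the class context (hypothetical `transform_nested` helper; the per-dict transform is passed in rather than hard-coded):

```python
def transform_nested(check_list, transform):
    # Walk a nested list, applying `transform` to every dict found,
    # recursing into sublists; other elements pass through unchanged.
    out = []
    for elem in check_list:
        if isinstance(elem, dict):
            out.append(transform(elem))
        elif isinstance(elem, list):
            out.append(transform_nested(elem, transform))
        else:
            out.append(elem)
    return out
```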
def delete_usage_plan(plan_id, region=None, key=None, keyid=None, profile=None):
    try:
        existing = describe_usage_plans(plan_id=plan_id, region=region,
                                        key=key, keyid=keyid, profile=profile)
        if 'error' in existing:
            return {'error': existing['error']}
        if 'plans' in existing and existing['plans']:
            conn = _get_conn(region=region, key=key, keyid=keyid,
                             profile=profile)
            res = conn.delete_usage_plan(usagePlanId=plan_id)
        return {'deleted': True, 'usagePlanId': plan_id}
    except ClientError as e:
        return {'error': __utils__['boto3.get_error'](e)}
Deletes usage plan identified by plan_id .. versionadded:: 2017.7.0 CLI Example: .. code-block:: bash salt myminion boto_apigateway.delete_usage_plan plan_id='usage plan id'
def post_card(name, message, hook_url=None, title=None, theme_color=None):
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': ''}
    if __opts__['test']:
        ret['comment'] = 'The following message is to be sent to Teams: {0}'.format(message)
        ret['result'] = None
        return ret
    if not message:
        ret['comment'] = 'Teams message is missing: {0}'.format(message)
        return ret
    try:
        result = __salt__['msteams.post_card'](
            message=message,
            hook_url=hook_url,
            title=title,
            theme_color=theme_color,
        )
    except SaltInvocationError as sie:
        ret['comment'] = 'Failed to send message ({0}): {1}'.format(sie, name)
    else:
        if isinstance(result, bool) and result:
            ret['result'] = True
            ret['comment'] = 'Sent message: {0}'.format(name)
        else:
            ret['comment'] = 'Failed to send message ({0}): {1}'.format(
                result['message'], name)
    return ret
Send a message to a Microsoft Teams channel .. code-block:: yaml send-msteams-message: msteams.post_card: - message: 'This state was executed successfully.' - hook_url: https://outlook.office.com/webhook/837 The following parameters are required: message The message that is to be sent to the MS Teams channel. The following parameters are optional: hook_url The webhook URL configured in the Teams interface, if not specified in the configuration options of master or minion. title The title for the card posted to the channel theme_color A hex code for the desired highlight color
def rewrite(
    filepath: Union[str, Path], mode: str = "w", **kw: Any
) -> Generator[IO, None, None]:
    if isinstance(filepath, str):
        base_dir = os.path.dirname(filepath)
        filename = os.path.basename(filepath)
    else:
        base_dir = str(filepath.parent)
        filename = filepath.name
    with tempfile.NamedTemporaryFile(
        mode=mode, prefix=f".{filename}.", delete=False, dir=base_dir, **kw
    ) as f:
        filepath_tmp = f.name
        yield f
    if not os.path.exists(filepath_tmp):
        return
    os.chmod(filepath_tmp, 0o100644)
    os.rename(filepath_tmp, filepath)
Rewrite an existing file atomically to avoid programs running in parallel to have race conditions while reading.
def get_inner_edges(self):
    inner_edges = [e for e in self._tree.preorder_edge_iter()
                   if e.is_internal() and e.head_node and e.tail_node]
    return inner_edges
Returns a list of the internal edges of the tree.
def validate(properties):
    if isinstance(properties, Property):
        properties = [properties]
    assert is_iterable_typed(properties, Property)
    for p in properties:
        __validate1(p)
Exit with error if any of the properties is not valid. properties may be a single property or a sequence of properties.
def agg_wt_avg(mat, min_wt=0.01, corr_metric='spearman'):
    assert mat.shape[1] > 0, "mat is empty! mat: {}".format(mat)
    if mat.shape[1] == 1:
        out_sig = mat
        upper_tri_df = None
        raw_weights = None
        weights = None
    else:
        assert corr_metric in ["spearman", "pearson"]
        corr_mat = mat.corr(method=corr_metric)
        upper_tri_df = get_upper_triangle(corr_mat)
        raw_weights, weights = calculate_weights(corr_mat, min_wt)
        weighted_values = mat * weights
        out_sig = weighted_values.sum(axis=1)
    return out_sig, upper_tri_df, raw_weights, weights
Aggregate a set of replicate profiles into a single signature using a weighted average. Args: mat (pandas df): a matrix of replicate profiles, where the columns are samples and the rows are features; columns correspond to the replicates of a single perturbagen min_wt (float): Minimum raw weight when calculating weighted average corr_metric (string): Spearman or Pearson; the correlation method Returns: out_sig (pandas series): weighted average values upper_tri_df (pandas df): the correlations between each profile that went into the signature raw weights (pandas series): weights before normalization weights (pandas series): weights after normalization
def unregister_switch_address(addr):
    ofp_handler = app_manager.lookup_service_brick(ofp_event.NAME)
    if ofp_handler.controller is None:
        return
    ofp_handler.controller.stop_client_loop(addr)
Unregister the given switch address. Unregisters the given switch address to let ryu.controller.controller.OpenFlowController stop trying to initiate connection to switch. :param addr: A tuple of (host, port) pair of switch.
def _raise_on_error(data: Union[str, dict]) -> None:
    if isinstance(data, str):
        raise_error(data)
    elif 'status' in data and data['status'] != 'success':
        raise_error(data['data']['message'])
Raise the appropriate exception on error.
def get_url_param(self, index, default=None):
    params = self.get_url_params()
    return params[index] if index < len(params) else default
Return url parameter with given index. Args: - index: starts from zero, and come after controller and action names in url.
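The bounds-checked lookup can be illustrated on a plain list (hypothetical free function, without the controller context):

```python
def get_url_param(params, index, default=None):
    # Positional lookup that falls back to a default instead of
    # raising IndexError for out-of-range indices.
    return params[index] if index < len(params) else default
```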
def get_parents_for(self, child_ids):
    self._cache_init()
    parent_candidates = []
    for parent, children in self._cache_get_entry(
            self.CACHE_NAME_PARENTS).items():
        if set(children).intersection(child_ids):
            parent_candidates.append(parent)
    return set(parent_candidates)
Returns parent aliases for a list of child IDs. :param list child_ids: :rtype: set :return: a set of parent aliases
def interactive_login(self, username: str) -> None:
    if self.context.quiet:
        raise LoginRequiredException(
            "Quiet mode requires given password or valid session file.")
    try:
        password = None
        while password is None:
            password = getpass.getpass(
                prompt="Enter Instagram password for %s: " % username)
            try:
                self.login(username, password)
            except BadCredentialsException as err:
                print(err, file=sys.stderr)
                password = None
    except TwoFactorAuthRequiredException:
        while True:
            try:
                code = input("Enter 2FA verification code: ")
                self.two_factor_login(code)
                break
            except BadCredentialsException:
                pass
Logs in and internally stores session, asking user for password interactively. :raises LoginRequiredException: when in quiet mode. :raises InvalidArgumentException: If the provided username does not exist. :raises ConnectionException: If connection to Instagram failed.
def page_through(page_size, function, *args, **kwargs):
    kwargs["limit"] = page_size

    def get_page(token):
        page_kwargs = kwargs.copy()
        if token:
            page_kwargs["token"] = token
        return function(*args, **page_kwargs)

    def page_generator():
        token = None
        while True:
            try:
                response = get_page(token)
                token = response.headers.get("x-next-token")
            except PureError as err:
                yield None, err
            else:
                if response:
                    sent_token = yield response, None
                    if sent_token is not None:
                        token = sent_token
                else:
                    return

    return page_generator()
Return an iterator over all pages of a REST operation. :param page_size: Number of elements to retrieve per call. :param function: FlashArray function that accepts limit as an argument. :param \*args: Positional arguments to be passed to function. :param \*\*kwargs: Keyword arguments to be passed to function. :returns: An iterator of tuples containing a page of results for the function(\*args, \*\*kwargs) and None, or None and a PureError if a call to retrieve a page fails. :rtype: iterator .. note:: Requires use of REST API 1.7 or later. Only works with functions that accept limit as an argument. Iterator will retrieve page_size elements per call Iterator will yield None and an error if a call fails. The next call will repeat the same call, unless the caller sends in an alternate page token.
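A much-simplified offset-based sketch of the pagination idea (not the token-based REST protocol above; `fetch(offset, limit)` is a hypothetical callable standing in for the API call):

```python
def page_through_offsets(page_size, fetch):
    # Yield pages of results from fetch(offset, limit) until an
    # empty page signals that the collection is exhausted.
    offset = 0
    while True:
        page = fetch(offset, page_size)
        if not page:
            return
        yield page
        offset += page_size
```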
def replace_complexes(self, linked_stmts=None): if linked_stmts is None: linked_stmts = self.infer_complexes(self.statements) new_stmts = [] for stmt in self.statements: if not isinstance(stmt, Complex): new_stmts.append(stmt) continue found = False for linked_stmt in linked_stmts: if linked_stmt.refinement_of(stmt, hierarchies): found = True if not found: new_stmts.append(stmt) else: logger.info('Removing complex: %s' % stmt) self.statements = new_stmts
Remove Complex Statements that can be inferred out. This function iterates over self.statements and looks for Complex Statements that either match or are refined by inferred Complex Statements that were linked (provided as the linked_stmts argument). It removes Complex Statements from self.statements that can be explained by the linked statements. Parameters ---------- linked_stmts : Optional[list[indra.mechlinker.LinkedStatement]] A list of linked statements, optionally passed from outside. If None is passed, the MechLinker runs self.infer_complexes to infer Complexes and obtain a list of LinkedStatements that are then used for removing existing Complexes in self.statements.
def configure(logger=None): global LOGGER if logger is None: # logging.basicConfig() returns None, so configure first and then # fetch a real logger object to store logging.basicConfig(stream=sys.stdout, level=logging.INFO) LOGGER = logging.getLogger(__name__) else: LOGGER = logger
Pass stump a logger to use. If no logger is supplied, a basic logger of level INFO will print to stdout.
def call_command(self, cmd, *argv): parser = self.get_parser() args = [cmd] + list(argv) namespace = parser.parse_args(args) self.run_command(namespace)
Runs a command. :param cmd: command to run (key at the registry) :param argv: arguments that would be passed to the command
def read_little_endian64(self): try: i = struct.unpack(wire_format.FORMAT_UINT64_LITTLE_ENDIAN, self._input.read(8)) self._pos += 8 return i[0] except struct.error as e: raise errors.DecodeError(e)
Interprets the next 8 bytes of the stream as a little-endian encoded, unsigned 64-bit integer, and returns that integer.
def download_kegg_gene_metadata(gene_id, outdir=None, force_rerun=False): if not outdir: outdir = '' outfile = op.join(outdir, '{}.kegg'.format(custom_slugify(gene_id))) if ssbio.utils.force_rerun(flag=force_rerun, outfile=outfile): raw_text = bs_kegg.get("{}".format(gene_id)) if raw_text == 404: return with io.open(outfile, mode='wt', encoding='utf-8') as f: f.write(raw_text) log.debug('{}: downloaded KEGG metadata file'.format(outfile)) else: log.debug('{}: KEGG metadata file already exists'.format(outfile)) return outfile
Download the KEGG flatfile for a KEGG ID and return the path. Args: gene_id: KEGG gene ID (with organism code), i.e. "eco:1244" outdir: optional output directory of metadata force_rerun: if True, re-download the file even if it already exists Returns: Path to metadata file
def set_args(self, namespace, dots=False): self.set(self._build_namespace_dict(namespace, dots))
Overlay parsed command-line arguments, generated by a library like argparse or optparse, onto this view's value. :param namespace: Dictionary or Namespace to overlay this config with. Supports nested Dictionaries and Namespaces. :type namespace: dict or Namespace :param dots: If True, any properties on namespace that contain dots (.) will be broken down into child dictionaries. :Example: {'foo.bar': 'car'} # Will be turned into {'foo': {'bar': 'car'}} :type dots: bool
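The `dots=True` expansion of `{'foo.bar': 'car'}` into nested dictionaries can be sketched independently of the private `_build_namespace_dict` helper (`expand_dots` is a hypothetical name, not the library's):

```python
def expand_dots(flat):
    """Expand dotted keys into nested dicts, e.g. {'foo.bar': 'car'} -> {'foo': {'bar': 'car'}}."""
    out = {}
    for key, value in flat.items():
        parts = key.split(".")
        node = out
        for part in parts[:-1]:
            # walk/create intermediate dictionaries for each dotted segment
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

expand_dots({"foo.bar": "car"})
# {'foo': {'bar': 'car'}}
```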
def bulk_cursor_execute(self, bulk_cursor): try: result = bulk_cursor.execute() except BulkWriteError as bwe: msg = "bulk_cursor_execute: Exception in executing Bulk cursor to mongo with {error}".format( error=str(bwe)) raise Exception(msg) except Exception as e: msg = "Mongo Bulk cursor could not be fetched, Error: {error}".format( error=str(e)) raise Exception(msg) # the docstring promises a result; without this the caller always got None return result
Executes the bulk_cursor :param bulk_cursor: Cursor to perform bulk operations :type bulk_cursor: pymongo bulk cursor object :returns: result of executing the bulk operations
def get_transactions(self, account: SEPAAccount, start_date: datetime.date = None, end_date: datetime.date = None): with self._get_dialog() as dialog: hkkaz = self._find_highest_supported_command(HKKAZ5, HKKAZ6, HKKAZ7) logger.info('Start fetching from {} to {}'.format(start_date, end_date)) responses = self._fetch_with_touchdowns( dialog, lambda touchdown: hkkaz( account=hkkaz._fields['account'].type.from_sepa_account(account), all_accounts=False, date_start=start_date, date_end=end_date, touchdown_point=touchdown, ), 'HIKAZ' ) logger.info('Fetching done.') statement = [] for seg in responses: statement += mt940_to_array(seg.statement_booked.decode('iso-8859-1')) logger.debug('Statement: {}'.format(statement)) return statement
Fetches the list of transactions of a bank account in a certain timeframe. :param account: SEPA account to fetch transactions for :param start_date: First day to fetch :param end_date: Last day to fetch :return: A list of mt940.models.Transaction objects
def execute(self, query, parameters=None, timeout=_NOT_SET, trace=False, custom_payload=None, execution_profile=EXEC_PROFILE_DEFAULT, paging_state=None, host=None): return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state, host).result()
Execute the given query and synchronously wait for the response. If an error is encountered while executing the query, an Exception will be raised. `query` may be a query string or an instance of :class:`cassandra.query.Statement`. `parameters` may be a sequence or dict of parameters to bind. If a sequence is used, ``%s`` should be used as the placeholder for each argument. If a dict is used, ``%(name)s`` style placeholders must be used. `timeout` should specify a floating-point timeout (in seconds) after which an :exc:`.OperationTimedOut` exception will be raised if the query has not completed. If not set, the timeout defaults to :attr:`~.Session.default_timeout`. If set to :const:`None`, there is no timeout. Please see :meth:`.ResponseFuture.result` for details on the scope and effect of this timeout. If `trace` is set to :const:`True`, the query will be sent with tracing enabled. The trace details can be obtained using the returned :class:`.ResultSet` object. `custom_payload` is a :ref:`custom_payload` dict to be passed to the server. If `query` is a Statement with its own custom_payload, the message payload will be a union of the two, with the values specified here taking precedence. `execution_profile` is the execution profile to use for this request. It can be a key to a profile configured via :meth:`Cluster.add_execution_profile` or an instance (from :meth:`Session.execution_profile_clone_update`, for example). `paging_state` is an optional paging state, reused from a previous :class:`ResultSet`. `host` is the :class:`pool.Host` that should handle the query. Using this is discouraged except in a few cases, e.g., querying node-local tables and applying schema changes.
def GetMatchingShape(pattern_poly, trip, matches, max_distance, verbosity=0): if len(matches) == 0: print ('No matching shape found within max-distance %d for trip %s ' % (max_distance, trip.trip_id)) return None if verbosity >= 1: for match in matches: print("match: size %d" % match.GetNumPoints()) scores = [(pattern_poly.GreedyPolyMatchDist(match), match) for match in matches] # sort on the score only; under Python 3, tied scores would otherwise # fall through to comparing Poly objects, which raises TypeError scores.sort(key=lambda score_match: score_match[0]) if scores[0][0] > max_distance: print ('No matching shape found within max-distance %d for trip %s ' '(min score was %f)' % (max_distance, trip.trip_id, scores[0][0])) return None return scores[0][1]
Tries to find a matching shape for the given pattern Poly object, trip, and set of possibly matching Polys from which to choose a match.
def get_profiles(self, profile_base="/data/b2g/mozilla", timeout=None): rv = {} if timeout is None: timeout = self._timeout profile_path = posixpath.join(profile_base, "profiles.ini") try: proc = self.shell("cat %s" % profile_path, timeout=timeout) config = ConfigParser.ConfigParser() config.readfp(proc.stdout_file) for section in config.sections(): items = dict(config.items(section)) if "name" in items and "path" in items: path = items["path"] if "isrelative" in items and int(items["isrelative"]): path = posixpath.normpath("%s/%s" % (profile_base, path)) rv[items["name"]] = path finally: proc.stdout_file.close() proc.stderr_file.close() return rv
Return a dict mapping the name of each gecko profile on the device to its path. :param profile_base: Base directory containing the profiles.ini file :param timeout: Timeout of each adb command run
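The profiles.ini parsing can be reproduced with the standard library alone (the ini content and base path below are made up for illustration; the original reads the file over adb with the Python 2 ConfigParser module):

```python
import configparser  # Python 3 name for the ConfigParser module used above
import posixpath

PROFILE_BASE = "/data/b2g/mozilla"

INI = """\
[Profile0]
name=default
isrelative=1
path=abc123.default

[Profile1]
name=system
isrelative=0
path=/data/local/system.profile
"""

config = configparser.ConfigParser()
config.read_string(INI)

profiles = {}
for section in config.sections():
    items = dict(config.items(section))
    if "name" in items and "path" in items:
        path = items["path"]
        # isrelative=1 means the path is relative to the profile base directory
        if "isrelative" in items and int(items["isrelative"]):
            path = posixpath.normpath("%s/%s" % (PROFILE_BASE, path))
        profiles[items["name"]] = path
```

Note that `config.items(section)` lowercases option names by default, which is why the lookups use `"name"`, `"path"`, and `"isrelative"`.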
def _parse_json(cls, resources, exactly_one=True): if not len(resources['features']): return None if exactly_one: return cls.parse_resource(resources['features'][0]) else: return [cls.parse_resource(resource) for resource in resources['features']]
Parse display name, latitude, and longitude from a JSON response.
def spm_hrf_compat(t, peak_delay=6, under_delay=16, peak_disp=1, under_disp=1, p_u_ratio=6, normalize=True, ): if len([v for v in [peak_delay, peak_disp, under_delay, under_disp] if v <= 0]): raise ValueError("delays and dispersions must be > 0") # np.float was removed in NumPy 1.24; the builtin float is equivalent hrf = np.zeros(t.shape, dtype=float) pos_t = t[t > 0] peak = sps.gamma.pdf(pos_t, peak_delay / peak_disp, loc=0, scale=peak_disp) undershoot = sps.gamma.pdf(pos_t, under_delay / under_disp, loc=0, scale=under_disp) hrf[t > 0] = peak - undershoot / p_u_ratio if not normalize: return hrf return hrf / np.max(hrf)
SPM HRF function from sum of two gamma PDFs This function is designed to be partially compatible with SPM's `spm_hrf.m` function. The SPM HRF is a *peak* gamma PDF (with location `peak_delay` and dispersion `peak_disp`), minus an *undershoot* gamma PDF (with location `under_delay` and dispersion `under_disp`, and divided by the `p_u_ratio`). Parameters ---------- t : array-like vector of times at which to sample HRF peak_delay : float, optional delay of peak peak_disp : float, optional width (dispersion) of peak under_delay : float, optional delay of undershoot under_disp : float, optional width (dispersion) of undershoot p_u_ratio : float, optional peak to undershoot ratio. Undershoot divided by this value before subtracting from peak. normalize : {True, False}, optional If True, divide HRF values by their maximum before returning, so the peak of the returned HRF is 1. Returns ------- hrf : array vector length ``len(t)`` of samples from HRF at times `t` Notes ----- See ``spm_hrf.m`` in the SPM distribution.
def get_cool_off() -> Optional[timedelta]: cool_off = settings.AXES_COOLOFF_TIME if isinstance(cool_off, int): return timedelta(hours=cool_off) return cool_off
Return the login cool off time interpreted from settings.AXES_COOLOFF_TIME. The return value is either None or timedelta. Notice that the settings.AXES_COOLOFF_TIME is either None, timedelta, or integer of hours, and this function offers a unified _timedelta or None_ representation of that configuration for use with the Axes internal implementations. :exception TypeError: if settings.AXES_COOLOFF_TIME is of wrong type.
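A standalone adaptation of the conversion logic, taking the setting value as a parameter instead of reading it from Django settings:

```python
from datetime import timedelta
from typing import Optional, Union

def get_cool_off(cool_off: Union[None, int, timedelta]) -> Optional[timedelta]:
    """Standalone sketch: integer hours become a timedelta; None and timedelta pass through."""
    if isinstance(cool_off, int):
        return timedelta(hours=cool_off)
    return cool_off

get_cool_off(2)                    # timedelta(hours=2)
get_cool_off(None)                 # None
get_cool_off(timedelta(minutes=30))  # timedelta(minutes=30)
```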
def gene_name(st, exclude=("ev",), sep="."): if any(st.startswith(x) for x in exclude): sep = None st = st.split('|')[0] if sep and sep in st: name, suffix = st.rsplit(sep, 1) else: name, suffix = st, "" if len(suffix) != 1: name = st return name
Helper function in the BLAST filtering to get rid of alternative splicings. This is ugly, but different annotation groups are inconsistent with respect to how the alternative splicings are named. Mostly it can be done by removing the suffix, except for ones in the exclude list.
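Since the function has no external dependencies, its suffix-stripping behavior is easy to exercise directly (the gene IDs below are illustrative):

```python
def gene_name(st, exclude=("ev",), sep="."):
    if any(st.startswith(x) for x in exclude):
        sep = None
    st = st.split('|')[0]  # drop anything after a pipe separator
    if sep and sep in st:
        name, suffix = st.rsplit(sep, 1)
    else:
        name, suffix = st, ""
    # only strip single-character suffixes (isoform numbers like ".2")
    if len(suffix) != 1:
        name = st
    return name

gene_name("AT5G54690.2")   # 'AT5G54690' -- single-digit isoform suffix stripped
gene_name("Os01g0100100")  # 'Os01g0100100' -- no separator, unchanged
gene_name("evm.model.x")   # 'evm.model.x' -- excluded prefix, unchanged
```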
def get_env(key, *default, **kwargs): assert len(default) in (0, 1), "Too many args supplied." func = kwargs.get('coerce', lambda x: x) required = (len(default) == 0) default = default[0] if not required else None return _get_env(key, default=default, coerce=func, required=required)
Return env var. This is the parent function of all other get_foo functions, and is responsible for unpacking args/kwargs into the values that _get_env expects (it is the root function that actually interacts with environ). Args: key: string, the env var name to look up. default: (optional) the value to use if the env var does not exist. If this value is not supplied, then the env var is considered to be required, and a RequiredSettingMissing error will be raised if it does not exist. Kwargs: coerce: a func that may be supplied to coerce the value into something else. This is used by the default get_foo functions to cast strings to builtin types, but could be a function that returns a custom class. Returns the env var, coerced if required, and a default if supplied.
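A simplified standalone sketch of the same unpacking logic, reading `os.environ` directly instead of delegating to the `_get_env` helper (`RequiredSettingMissing` is re-declared here just for the sketch):

```python
import os

class RequiredSettingMissing(Exception):
    """Raised when a required env var is absent (stand-in for the library's exception)."""

def get_env(key, *default, **kwargs):
    assert len(default) in (0, 1), "Too many args supplied."
    coerce = kwargs.get("coerce", lambda x: x)
    if key in os.environ:
        return coerce(os.environ[key])
    if default:
        # a positional default was supplied, so the var is optional
        return default[0]
    raise RequiredSettingMissing(key)

os.environ["APP_PORT"] = "8080"
get_env("APP_PORT", coerce=int)     # 8080, coerced from str to int
get_env("NOT_SET_VAR", "fallback")  # 'fallback'
```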
def approx_equals(self, other, atol): if other is self: return True return (type(other) is type(self) and self.ndim == other.ndim and self.shape == other.shape and all(np.allclose(vec_s, vec_o, atol=atol, rtol=0.0) for (vec_s, vec_o) in zip(self.coord_vectors, other.coord_vectors)))
Test if this grid is approximately equal to another grid. Parameters ---------- other : Object to be tested atol : float Allow deviations up to this number in absolute value per vector entry. Returns ------- equals : bool ``True`` if ``other`` is a `RectGrid` instance with all coordinate vectors equal (up to the given tolerance) to the ones of this grid, ``False`` otherwise. Examples -------- >>> g1 = RectGrid([0, 1], [-1, 0, 2]) >>> g2 = RectGrid([-0.1, 1.1], [-1, 0.1, 2]) >>> g1.approx_equals(g2, atol=0) False >>> g1.approx_equals(g2, atol=0.15) True
def get_preservation_data(self): for obj in self.get_preservations(): info = self.get_base_info(obj) yield info
Returns a generator over the Preservation data (one base-info dict per Preservation object)
def from_args(cls, target_url, default_url, test_url): return cls("You're trying to upload to the legacy PyPI site '{}'. " "Uploading to those sites is deprecated. \n " "The new sites are pypi.org and test.pypi.org. Try using " "{} (or {}) to upload your packages instead. " "These are the default URLs for Twine now. \n More at " "https://packaging.python.org/guides/migrating-to-pypi-org/" " .".format(target_url, default_url, test_url) )
Return an UploadToDeprecatedPyPIDetected instance.
def rewind(self, position=0): if position < 0 or position > len(self._data): raise Exception("Invalid position to rewind cursor to: %s." % position) self._position = position
Set the position of the data buffer cursor to 'position'.
def get_instance(self, payload): return TerminatingSipDomainInstance(self._version, payload, trunk_sid=self._solution['trunk_sid'], )
Build an instance of TerminatingSipDomainInstance :param dict payload: Payload response from the API :returns: twilio.rest.trunking.v1.trunk.terminating_sip_domain.TerminatingSipDomainInstance :rtype: twilio.rest.trunking.v1.trunk.terminating_sip_domain.TerminatingSipDomainInstance
def split_points(self, point_cloud): if not isinstance(point_cloud, PointCloud): raise ValueError('Can only split point clouds') above_plane = point_cloud._data - np.tile(self._x0.data, [1, point_cloud.num_points]).T.dot(self._n) > 0 # parenthesize the comparison: & binds tighter than >, so the original # "z_coords > 0 & mask" actually compared z_coords with (0 & mask) above_plane = (point_cloud.z_coords > 0) & above_plane below_plane = point_cloud._data - np.tile(self._x0.data, [1, point_cloud.num_points]).T.dot(self._n) <= 0 below_plane = (point_cloud.z_coords > 0) & below_plane above_data = point_cloud.data[:, above_plane] below_data = point_cloud.data[:, below_plane] return PointCloud(above_data, point_cloud.frame), PointCloud(below_data, point_cloud.frame)
Split a point cloud into two along this plane. Parameters ---------- point_cloud : :obj:`PointCloud` The PointCloud to divide in two. Returns ------- :obj:`tuple` of :obj:`PointCloud` Two new PointCloud objects. The first contains points above the plane, and the second contains points below the plane. Raises ------ ValueError If the input is not a PointCloud.
def run(self): unread = 0 current_unread = 0 for id, backend in enumerate(self.backends): temp = backend.unread or 0 unread = unread + temp if id == self.current_backend: current_unread = temp if not unread: color = self.color urgent = "false" if self.hide_if_null: self.output = None return else: color = self.color_unread urgent = "true" format = self.format if unread > 1: format = self.format_plural account_name = getattr(self.backends[self.current_backend], "account", "No name") self.output = { "full_text": format.format(unread=unread, current_unread=current_unread, account=account_name), "urgent": urgent, "color": color, }
Computes the sum of unread messages across all registered backends and updates the module output accordingly
def compute(self): self._compute_primary_smooths() self._smooth_the_residuals() self._select_best_smooth_at_each_point() self._enhance_bass() self._smooth_best_span_estimates() self._apply_best_spans_to_primaries() self._smooth_interpolated_smooth() self._store_unsorted_results(self.smooth_result, numpy.zeros(len(self.smooth_result)))
Run the SuperSmoother.
def str_dict_cast(dict_, include_keys=True, include_vals=True, **kwargs): new_keys = str_list_cast(dict_.keys(), **kwargs) if include_keys else dict_.keys() new_vals = str_list_cast(dict_.values(), **kwargs) if include_vals else dict_.values() new_dict = dict(zip_(new_keys, new_vals)) return new_dict
Converts any bytes-like items in input dict to string-like values, with respect to python version Parameters ---------- dict_ : dict any bytes-like objects contained in the dict will be converted to a string include_keys : bool, default=True if True, cast keys to a string, else ignore include_vals : bool, default=True if True, cast values to a string, else ignore kwargs: encoding: str, default: 'utf-8' encoding to be used when decoding bytes
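Without the library's `str_list_cast`/`zip_` helpers, the same conversion can be sketched with a small decode helper (the names here are illustrative, not the library's):

```python
def to_str(value, encoding="utf-8"):
    """Decode bytes to str; leave every other type untouched."""
    return value.decode(encoding) if isinstance(value, bytes) else value

def str_dict_cast(dict_, include_keys=True, include_vals=True, encoding="utf-8"):
    # generators preserve insertion order, so keys and values stay paired
    keys = (to_str(k, encoding) if include_keys else k for k in dict_)
    vals = (to_str(v, encoding) if include_vals else v for v in dict_.values())
    return dict(zip(keys, vals))

str_dict_cast({b"key": b"value", "n": 1})
# {'key': 'value', 'n': 1}
```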
def _iter_candidate_groups(self, init_match, edges0, edges1): sources = {} for start_vertex0, end_vertex0 in edges0: l = sources.setdefault(start_vertex0, []) l.append(end_vertex0) dests = {} for start_vertex1, end_vertex1 in edges1: start_vertex0 = init_match.reverse[start_vertex1] l = dests.setdefault(start_vertex0, []) l.append(end_vertex1) for start_vertex0, end_vertices0 in sources.items(): end_vertices1 = dests.get(start_vertex0, []) yield end_vertices0, end_vertices1
Divide the edges into groups
def issuers(self): issuers = self._get_property('issuers') or [] result = { '_embedded': { 'issuers': issuers, }, 'count': len(issuers), } return List(result, Issuer)
Return the list of available issuers for this payment method.
async def process_check_ins(self): params = { 'include_participants': 1, 'include_matches': 1 if AUTO_GET_MATCHES else 0 } res = await self.connection('POST', 'tournaments/{}/process_check_ins'.format(self._id), **params) self._refresh_from_json(res)
finalize the check in phase |methcoro| Warning: |unstable| Note: |from_api| This should be invoked after a tournament's check-in window closes before the tournament is started. 1. Marks participants who have not checked in as inactive. 2. Moves inactive participants to bottom seeds (ordered by original seed). 3. Transitions the tournament state from 'checking_in' to 'checked_in' NOTE: Checked in participants on the waiting list will be promoted if slots become available. Raises: APIException
def setup_ics(graph): ics = [] for i0, i1 in graph.edges: ics.append(BondLength(i0, i1)) for i1 in range(graph.num_vertices): n = list(graph.neighbors[i1]) for index, i0 in enumerate(n): for i2 in n[:index]: ics.append(BendingAngle(i0, i1, i2)) for i1, i2 in graph.edges: for i0 in graph.neighbors[i1]: if i0==i2: continue for i3 in graph.neighbors[i2]: if i3==i1 or i3==i0: continue ics.append(DihedralAngle(i0, i1, i2, i3)) return ics
Make a list of internal coordinates based on the graph Argument: | ``graph`` -- A Graph instance. The list of internal coordinates will include all bond lengths, all bending angles, and all dihedral angles.
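For a linear chain 0-1-2-3 the three loops produce 3 bond lengths, 2 bending angles, and 1 dihedral angle. A toy version with tuples standing in for the `BondLength`/`BendingAngle`/`DihedralAngle` classes:

```python
# toy molecular graph: a linear chain of four atoms
edges = [(0, 1), (1, 2), (2, 3)]
num_vertices = 4

neighbors = {i: set() for i in range(num_vertices)}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

# one bond length per edge
bonds = [("bond", i0, i1) for i0, i1 in edges]

# one bending angle per pair of neighbors sharing a central vertex
bends = []
for i1 in range(num_vertices):
    n = sorted(neighbors[i1])
    for index, i0 in enumerate(n):
        for i2 in n[:index]:
            bends.append(("bend", i0, i1, i2))

# one dihedral per edge with distinct outer neighbors on each side
dihedrals = []
for i1, i2 in edges:
    for i0 in neighbors[i1]:
        if i0 == i2:
            continue
        for i3 in neighbors[i2]:
            if i3 == i1 or i3 == i0:
                continue
            dihedrals.append(("dihedral", i0, i1, i2, i3))
```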
def standardize(): def f(G, bim): G_out = standardize_snps(G) return G_out, bim return f
Return a variant standardize function
def plot_sediment_rate(self, ax=None): if ax is None: ax = plt.gca() y_prior, x_prior = self.prior_sediment_rate() ax.plot(x_prior, y_prior, label='Prior') y_posterior = self.mcmcfit.sediment_rate density = scipy.stats.gaussian_kde(y_posterior.flat) density.covariance_factor = lambda: 0.25 density._compute_covariance() ax.plot(x_prior, density(x_prior), label='Posterior') acc_shape = self.mcmcsetup.mcmc_kws['acc_shape'] acc_mean = self.mcmcsetup.mcmc_kws['acc_mean'] annotstr_template = 'acc_shape: {0}\nacc_mean: {1}' annotstr = annotstr_template.format(acc_shape, acc_mean) ax.annotate(annotstr, xy=(0.9, 0.9), xycoords='axes fraction', horizontalalignment='right', verticalalignment='top') ax.set_ylabel('Density') ax.set_xlabel('Acc. rate (yr/cm)') ax.grid(True) return ax
Plot sediment accumulation rate prior and posterior distributions
def saveto(self, path, sortkey = True): with open(path, 'w') as f: self.savetofile(f, sortkey)
Save configurations to path
def refill(self, from_address, to_address, nfees, ntokens, password, min_confirmations=6, sync=False): path, from_address = from_address verb = Spoolverb() inputs = self.select_inputs(from_address, nfees + 1, ntokens, min_confirmations=min_confirmations) outputs = [{'address': to_address, 'value': self.token}] * ntokens outputs += [{'address': to_address, 'value': self.fee}] * nfees outputs += [{'script': self._t._op_return_hex(verb.fuel), 'value': 0}] unsigned_tx = self._t.build_transaction(inputs, outputs) signed_tx = self._t.sign_transaction(unsigned_tx, password, path=path) txid = self._t.push(signed_tx) return txid
Refill wallets with the necessary fuel to perform spool transactions Args: from_address (Tuple[str]): Federation wallet address. Fuels the wallets with tokens and fees. All transactions to wallets holding a particular piece should come from the Federation wallet to_address (str): Wallet address that needs to perform a spool transaction nfees (int): Number of fees to transfer. Each fee is 10000 satoshi. Used to pay for the transactions ntokens (int): Number of tokens to transfer. Each token is 600 satoshi. Used to register hashes in the blockchain password (str): Password for the Federation wallet. Used to sign the transaction min_confirmations (int): Number of confirmations when choosing the inputs of the transaction. Defaults to 6 sync (bool): Perform the transaction in synchronous mode, the call to the function will block until there is at least one confirmation on the blockchain. Defaults to False Returns: str: transaction id
def role_update(auth=None, **kwargs): cloud = get_operator_cloud(auth) kwargs = _clean_kwargs(**kwargs) if 'new_name' in kwargs: kwargs['name'] = kwargs.pop('new_name') return cloud.update_role(**kwargs)
Update a role CLI Example: .. code-block:: bash salt '*' keystoneng.role_update name=role1 new_name=newrole salt '*' keystoneng.role_update name=1eb6edd5525e4ac39af571adee673559 new_name=newrole
def get_families(self): if self.retrieved: raise errors.IllegalState('List has already been retrieved.') self.retrieved = True return objects.FamilyList(self._results, runtime=self._runtime)
Gets the family list resulting from a search. return: (osid.relationship.FamilyList) - the family list raise: IllegalState - list already retrieved *compliance: mandatory -- This method must be implemented.*
def register_scr_task(self, *args, **kwargs): kwargs["task_class"] = ScrTask return self.register_task(*args, **kwargs)
Register a screening task.
def parseFilename(filename): _indx = filename.find('[') if _indx > 0: _fname = filename[:_indx] _extn = filename[_indx + 1:-1] else: _fname = filename _extn = None return _fname, _extn
Parse out filename from any specified extensions. Returns rootname and string version of extension name.
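The bracket-splitting behavior can be checked directly, e.g. for FITS-style extension syntax:

```python
def parseFilename(filename):
    # locate an opening bracket; anything inside [...] is the extension spec
    _indx = filename.find('[')
    if _indx > 0:
        _fname = filename[:_indx]
        _extn = filename[_indx + 1:-1]
    else:
        _fname = filename
        _extn = None
    return _fname, _extn

parseFilename("image.fits[sci,1]")  # ('image.fits', 'sci,1')
parseFilename("image.fits")         # ('image.fits', None)
```

Note that a bracket at position 0 is deliberately treated as "no extension", since `_indx > 0` excludes it.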
def result(self): if self.cancelled or (self._fn is not None): raise NotExecutedYet() if self._fn_exc is not None: six.reraise(*self._fn_exc) else: return self._fn_res
The result from the executed task. Raises NotExecutedYet if not yet executed.
def kde_statsmodels_m(data, grid, **kwargs): kde = KDEMultivariate(data, **kwargs) return kde.pdf(grid)
Multivariate Kernel Density Estimation with Statsmodels Parameters ---------- data : numpy.array Data points used to compute a density estimator. It has `n x p` dimensions, representing n points and p variables. grid : numpy.array Data points at which the density will be estimated. It has `m x p` dimensions, representing m points and p variables. Returns ------- out : numpy.array Density estimate. Has `m x 1` dimensions
def getRelativePath(basepath, path): basepath = splitpath(os.path.abspath(basepath)) path = splitpath(os.path.abspath(path)) afterCommon = False for c in basepath: if afterCommon or path[0] != c: path.insert(0, os.path.pardir) afterCommon = True else: del path[0] return os.path.join(*path)
Get a path that is relative to the given base path.
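A runnable sketch, assuming `splitpath` breaks a path into its components (that helper is not shown in the source, so this definition is a guess); the example result assumes a POSIX filesystem:

```python
import os

def splitpath(path):
    """Split a path into components, e.g. '/a/b/c' -> ['/', 'a', 'b', 'c'] (assumed helper)."""
    parts = []
    while True:
        head, tail = os.path.split(path)
        if tail:
            parts.insert(0, tail)
            path = head
        else:
            if head:
                parts.insert(0, head)
            return parts

def getRelativePath(basepath, path):
    basepath = splitpath(os.path.abspath(basepath))
    path = splitpath(os.path.abspath(path))
    afterCommon = False
    for c in basepath:
        if afterCommon or path[0] != c:
            # past the common prefix: each remaining base component adds '..'
            path.insert(0, os.path.pardir)
            afterCommon = True
        else:
            del path[0]
    return os.path.join(*path)

getRelativePath('/a/b/c', '/a/b/d/e')  # '../d/e' on POSIX
```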
def getstrs(self): status, label, unit, format = _C.SDgetdimstrs(self._id, 128) _checkErr('getstrs', status, 'cannot execute') return label, unit, format
Retrieve the dimension standard string attributes. Args:: no argument Returns:: 3-element tuple holding: -dimension label (attribute 'long_name') -dimension unit (attribute 'units') -dimension format (attribute 'format') An exception is raised if the standard attributes have not been set. C library equivalent: SDgetdimstrs