def _getReader(self, filename, scoreClass):
    if filename.endswith('.json') or filename.endswith('.json.bz2'):
        return JSONRecordsReader(filename, scoreClass)
    else:
        raise ValueError(
            'Unknown DIAMOND record file suffix for file %r.' % filename)
Obtain a JSON record reader for DIAMOND records. @param filename: The C{str} file name holding the JSON. @param scoreClass: A class to hold and compare scores (see scores.py).
def analysis_error(sender, exception, message):
    LOGGER.exception(message)
    message = get_error_message(exception, context=message)
    send_error_message(sender, message)
A helper to spawn an error and halt processing. An exception will be logged, busy status removed and a message displayed. .. versionadded:: 3.3 :param sender: The sender. :type sender: object :param message: an ErrorMessage to display :type message: ErrorMessage, Message :param exce...
def read_index(group, version='1.1'):
    if version == '0.1':
        return np.int64(group['index'][...])
    elif version == '1.0':
        return group['file_index'][...]
    else:
        return group['index'][...]
Return the index stored in a h5features group. :param h5py.Group group: The group to read the index from. :param str version: The h5features version of the `group`. :return: a 1D numpy array of features indices.
def parse_mark_duplicate_metrics(fn): with open(fn) as f: lines = [x.strip().split('\t') for x in f.readlines()] metrics = pd.Series(lines[7], lines[6]) m = pd.to_numeric(metrics[metrics.index[1:]]) metrics[m.index] = m.values vals = np.array(lines[11:-1]) hist = pd.Series(vals[:, 1], in...
Parse the output from Picard's MarkDuplicates and return as pandas Series. Parameters ---------- filename : str of filename or file handle Filename of the Picard output you want to parse. Returns ------- metrics : pandas.Series Duplicate metrics. hist : pandas.Series ...
def get(remote_path, local_path='', recursive=False, preserve_times=False, **kwargs): scp_client = _prepare_connection(**kwargs) get_kwargs = { 'recursive': recursive, 'preserve_times': preserve_times } if local_path: get_kwargs['local_path'] = loc...
Transfer files and directories from remote host to the localhost of the Minion. remote_path Path to retrieve from remote host. Since this is evaluated by scp on the remote host, shell wildcards and environment variables may be used. recursive: ``False`` Transfer files and directori...
def bleu_score(logits, labels):
    predictions = tf.to_int32(tf.argmax(logits, axis=-1))
    bleu = tf.py_func(compute_bleu, (labels, predictions), tf.float32)
    return bleu, tf.constant(1.0)
Approximate BLEU score computation between labels and predictions. An approximate BLEU scoring method since we do not glue word pieces or decode the ids and tokenize the output. By default, we use ngram order of 4 and use brevity penalty. Also, this does not have beam search. Args: logits: Tensor of size ...
def list_lbaas_members(self, lbaas_pool, retrieve_all=True, **_params):
    return self.list('members', self.lbaas_members_path % lbaas_pool,
                     retrieve_all, **_params)
Fetches a list of all lbaas_members for a project.
def list_objects(self, instance, bucket_name, prefix=None, delimiter=None): url = '/buckets/{}/{}'.format(instance, bucket_name) params = {} if prefix is not None: params['prefix'] = prefix if delimiter is not None: params['delimiter'] = delimiter response...
List the objects for a bucket. :param str instance: A Yamcs instance name. :param str bucket_name: The name of the bucket. :param str prefix: If specified, only objects that start with this prefix are listed. :param str delimiter: If specified, return only obj...
def get_affected_box(self, src):
    mag = src.get_min_max_mag()[1]
    maxdist = self(src.tectonic_region_type, mag)
    bbox = get_bounding_box(src, maxdist)
    return (fix_lon(bbox[0]), bbox[1], fix_lon(bbox[2]), bbox[3])
Get the enlarged bounding box of a source. :param src: a source object :returns: a bounding box (min_lon, min_lat, max_lon, max_lat)
def info(*messages):
    sys.stderr.write("%s.%s: " % get_caller_info())
    sys.stderr.write(' '.join(map(str, messages)))
    sys.stderr.write('\n')
Print the current GloTK module and the given messages to stderr. Taken from biolite.
def send_request(req_cat, con, req_str, kwargs): try: kwargs = parse.urlencode(kwargs) except: kwargs = urllib.urlencode(kwargs) try: con.request(req_cat, req_str, kwargs) except httplib.CannotSendRequest: con = create() ...
Sends request to facebook graph Returns the facebook-json response converted to python object
def move_to_limit(self, position):
    cmd = 'MOVE', [Float, Integer]
    self._write(cmd, position, 1)
Move to limit switch and define it as position. :param position: The new position of the limit switch.
def write_data(self, write_finished_cb): self._write_finished_cb = write_finished_cb data = bytearray() for poly4D in self.poly4Ds: data += struct.pack('<ffffffff', *poly4D.x.values) data += struct.pack('<ffffffff', *poly4D.y.values) data += struct.pack('<ffff...
Write trajectory data to the Crazyflie
def _document_path(self): if self._document_path_internal is None: if self._client is None: raise ValueError("A document reference requires a `client`.") self._document_path_internal = _get_document_path(self._client, self._path) return self._document_path_interna...
Create and cache the full path for this document. Of the form: ``projects/{project_id}/databases/{database_id}/... documents/{document_path}`` Returns: str: The full document path. Raises: ValueError: If the current document reference has...
def getChecks(self, **parameters): for key in parameters: if key not in ['limit', 'offset', 'tags']: sys.stderr.write('%s not a valid argument for getChecks()\n' % key) response = self.request('GET', 'checks', parameters) return [Pingd...
Pulls all checks from pingdom Optional Parameters: * limit -- Limits the number of returned probes to the specified quantity. Type: Integer (max 25000) Default: 25000 * offset -- Offset for listing (requires limit.) ...
def get_size(self, chrom=None): if len(self.size) == 0: raise LookupError("no chromosomes in index, is the index correct?") if chrom: if chrom in self.size: return self.size[chrom] else: raise KeyError("chromosome {} not in index".form...
Return the sizes of all sequences in the index, or the size of chrom if specified as an optional argument
def _drop_no_label_results(self, results, fh):
    results.seek(0)
    results = Results(results, self._tokenizer)
    results.remove_label(self._no_label)
    results.csv(fh)
Writes `results` to `fh` minus those results associated with the 'no' label. :param results: results to be manipulated :type results: file-like object :param fh: output destination :type fh: file-like object
def constant_outfile_iterator(outfiles, infiles, arggroups):
    assert len(infiles) == 1
    assert len(arggroups) == 1
    return ((outfile, infiles[0], arggroups[0]) for outfile in outfiles)
Iterate over all output files.
def _add_model(self, model_list_or_dict, core_element, model_class, model_key=None, load_meta_data=True): found_model = self._get_future_expected_model(core_element) if found_model: found_model.parent = self if model_class is IncomeModel: self.income = found_model if foun...
Adds one model for a given core element. The method will add a model for a given core object and checks if there is a corresponding model object in the future expected model list. The method does not check if an object with corresponding model has already been inserted. :param model_li...
def _add_numeric_methods_unary(cls): def _make_evaluate_unary(op, opstr): def _evaluate_numeric_unary(self): self._validate_for_numeric_unaryop(op, opstr) attrs = self._get_attributes_dict() attrs = self._maybe_update_attributes(attrs) ...
Add in numeric unary methods.
def clean():
    screenshot_dir = settings.SELENIUM_SCREENSHOT_DIR
    if screenshot_dir and os.path.isdir(screenshot_dir):
        rmtree(screenshot_dir, ignore_errors=True)
Clear out any old screenshots
def getXmlText(parent, tag):
    elem = parent.getElementsByTagName(tag)[0]
    rc = []
    for node in elem.childNodes:
        if node.nodeType == node.TEXT_NODE:
            rc.append(node.data)
    return ''.join(rc)
Return XML content of given tag in parent element.
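As a quick check of the behavior above, here is a self-contained sketch (the function is renamed `get_xml_text` to stand alone, and the sample document and tag are made up) that joins the text nodes of the first matching element:

```python
from xml.dom.minidom import parseString

def get_xml_text(parent, tag):
    # Join all direct text nodes of the first element matching `tag`.
    elem = parent.getElementsByTagName(tag)[0]
    return ''.join(node.data for node in elem.childNodes
                   if node.nodeType == node.TEXT_NODE)

doc = parseString('<book><title>Dive Into XML</title></book>')
print(get_xml_text(doc, 'title'))  # -> Dive Into XML
```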
def ext_pillar(minion_id, pillar, *args, **kwargs):
    return MySQLExtPillar().fetch(minion_id, pillar, *args, **kwargs)
Execute queries against MySQL, merge and return as a dict
def obj_name(self, obj: Union[str, Element]) -> str:
    if isinstance(obj, str):
        obj = self.obj_for(obj)
    if isinstance(obj, SlotDefinition):
        return underscore(self.aliased_slot_name(obj))
    else:
        return camelcase(obj if isinstance(obj, str) else obj.name)
Return the formatted name used for the supplied definition
def deactivate(profile='default'): with jconfig(profile) as config: deact = True; if not getattr(config.NotebookApp.contents_manager_class, 'startswith',lambda x:False)('jupyterdrive'): deact=False if 'gdrive' not in getattr(config.NotebookApp.tornado_settings,'get', lambda _,__:...
Deactivate jupyterdrive; should be a matter of just unsetting the above keys.
def git_hash(blob):
    head = str("blob " + str(len(blob)) + "\0").encode("utf-8")
    return sha1(head + blob).hexdigest()
Return git-hash compatible SHA-1 hexdigits for a blob of data.
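For reference, the same scheme can be exercised standalone; the digest below is what `git hash-object` produces for the same bytes:

```python
from hashlib import sha1

def git_hash(blob):
    # Git's blob object hash: sha1 over a "blob <size>\0" header plus the data.
    head = ("blob " + str(len(blob)) + "\0").encode("utf-8")
    return sha1(head + blob).hexdigest()

print(git_hash(b"hello\n"))  # -> ce013625030ba8dba906f756967f9e9ca394464a
```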
def handle_data(self, data):
    if self.current_parent_element['tag'] == '':
        self.cleaned_html += '<p>'
        self.current_parent_element['tag'] = 'p'
    self.cleaned_html += data
Called by HTMLParser.feed when text is found.
def age(self, minimum: int = 16, maximum: int = 66) -> int:
    age = self.random.randint(minimum, maximum)
    self._store['age'] = age
    return age
Get a random age value. :param minimum: Minimum value of age. :param maximum: Maximum value of age. :return: Random integer. :Example: 23.
def read_wait_cell(self): table_state = self.bt_table.read_row( TABLE_STATE, filter_=bigtable_row_filters.ColumnRangeFilter( METADATA, WAIT_CELL, WAIT_CELL)) if table_state is None: utils.dbg('No waiting for new games needed; ' 'w...
Read the value of the cell holding the 'wait' value. Returns the int value it holds, or None if the cell doesn't exist.
def initialize(config): "Initialize the bot with a dictionary of config items" config = init_config(config) _setup_logging() _load_library_extensions() if not Handler._registry: raise RuntimeError("No handlers registered") class_ = _load_bot_class() config.setdefault('log_channels', []) config.setdefault('oth...
Initialize the bot with a dictionary of config items
def assert_is_not(first, second, msg_fmt="{msg}"):
    if first is second:
        msg = "both arguments refer to {!r}".format(first)
        fail(msg_fmt.format(msg=msg, first=first, second=second))
Fail if first and second refer to the same object. >>> list1 = [5, "foo"] >>> list2 = [5, "foo"] >>> assert_is_not(list1, list2) >>> assert_is_not(list1, list1) Traceback (most recent call last): ... AssertionError: both arguments refer to [5, 'foo'] The following msg_fmt arguments...
def set_attribute(self, app, key, value): path = 'setattribute/' + parse.quote(app, '') + '/' + parse.quote( self._encode_string(key), '') res = self._make_ocs_request( 'POST', self.OCS_SERVICE_PRIVATEDATA, path, data={'value': self._encode_str...
Sets an application attribute :param app: application id :param key: key of the attribute to set :param value: value to set :returns: True if the operation succeeded, False otherwise :raises: HTTPResponseError in case an HTTP error status was returned
def token_auth(self, token):
    if not token:
        return
    self.headers.update({
        'Authorization': 'token {0}'.format(token)
    })
    self.auth = None
Use an application token for authentication. :param str token: Application token retrieved from GitHub's /authorizations endpoint
def run(toolkit_name, options, verbose=True, show_progress=False): unity = glconnect.get_unity() if (not verbose): glconnect.get_server().set_log_progress(False) (success, message, params) = unity.run_toolkit(toolkit_name, options) if (len(message) > 0): logging.getLogger(__name__).error...
Internal function to execute toolkit on the turicreate server. Parameters ---------- toolkit_name : string The name of the toolkit. options : dict A map containing the required input for the toolkit function, for example: {'graph': g, 'reset_prob': 0.15}. verbose : bool ...
def build_tensor_serving_input_receiver_fn(shape, dtype=tf.float32, batch_size=1): def serving_input_receiver_fn(): features = tf.placeholder( dtype=dtype, shape=[batch_size] + shape, name='input_tensor') return tf.estimator.export.TensorServingInputReceiver(...
Returns a input_receiver_fn that can be used during serving. This expects examples to come through as float tensors, and simply wraps them as TensorServingInputReceivers. Arguably, this should live in tf.estimator.export. Testing here first. Args: shape: list representing target size of a single example....
def minmax_candidates(self):
    from numpy.polynomial import Polynomial as P
    p = P.fromroots(self.roots)
    return p.deriv(1).roots()
Get points where derivative is zero. Useful for computing the extrema of the polynomial over an interval if the polynomial has real roots. In this case, the maximum is attained for one of the interval endpoints or a point from the result of this function that is contained in the interva...
def strip_path_prefix(ipath, prefix):
    if prefix is None:
        return ipath
    return ipath[len(prefix):] if ipath.startswith(prefix) else ipath
Strip prefix from path. Args: ipath: input path prefix: the prefix to remove, if it is found in :ipath: Examples: >>> strip_path_prefix("/foo/bar", "/bar") '/foo/bar' >>> strip_path_prefix("/foo/bar", "/") 'foo/bar' >>> strip_path_prefix("/foo/bar", "/fo...
def from_uci(cls, uci: str) -> "Move": if uci == "0000": return cls.null() elif len(uci) == 4 and "@" == uci[1]: drop = PIECE_SYMBOLS.index(uci[0].lower()) square = SQUARE_NAMES.index(uci[2:]) return cls(square, square, drop=drop) elif len(uci) == ...
Parses a UCI string. :raises: :exc:`ValueError` if the UCI string is invalid.
def deunicode(item): if item is None: return None if isinstance(item, str): return item if isinstance(item, six.text_type): return item.encode('utf-8') if isinstance(item, dict): return { deunicode(key): deunicode(value) for (key, value) in item.it...
Convert unicode objects to str
def badRequestMethod(self, environ, start_response): response = "400 Bad Request\n\nTo access this PyAMF gateway you " \ "must use POST requests (%s received)" % environ['REQUEST_METHOD'] start_response('400 Bad Request', [ ('Content-Type', 'text/plain'), ('Content-Le...
Return HTTP 400 Bad Request.
def complete_event(self, event_id: str): event_ids = DB.get_list(self._processed_key) if event_id not in event_ids: raise KeyError('Unable to complete event. Event {} has not been ' 'processed (ie. it is not in the processed ' 'list).'.fo...
Complete the specified event.
def check_archive_format(format, compression):
    if format not in ArchiveFormats:
        raise util.PatoolError("unknown archive format `%s'" % format)
    if compression is not None and compression not in ArchiveCompressions:
        raise util.PatoolError("unknown archive compression `%s'" % compression)
Make sure format and compression are known.
def get_full_alias(self, query):
    if query in self.alias_table.sections():
        return query
    return next((section for section in self.alias_table.sections()
                 if section.split()[0] == query), '')
Get the full alias given a search query. Args: query: The query this function performs searching on. Returns: The full alias (with the placeholders, if any).
def _create_handler(self, config): if config is None: raise ValueError('No handler config to create handler from.') if 'name' not in config: raise ValueError('Handler name is required.') handler_name = config['name'] module_name = handler_name.rsplit('.', 1)[0] ...
Creates a handler from its config. Params: config: handler config Returns: handler instance
def print_variable(obj, **kwargs):
    variable_print_length = kwargs.get("variable_print_length", 1000)
    s = str(obj)
    if len(s) <= variable_print_length:
        return "Printing the object:\n" + s
    else:
        return "Printing the object:\n" + s[:variable_print_length] + ' ...'
Print the variable out. Limit the string length to, by default, 1000 characters.
def assertTimeZoneIsNotNone(self, dt, msg=None):
    if not isinstance(dt, datetime):
        raise TypeError('First argument is not a datetime object')
    self.assertIsNotNone(dt.tzinfo, msg=msg)
Fail unless ``dt`` has a non-null ``tzinfo`` attribute. Parameters ---------- dt : datetime msg : str If not provided, the :mod:`marbles.mixins` or :mod:`unittest` standard message will be used. Raises ------ TypeError If ``dt...
def minimize_core(self): if self.minz and len(self.core) > 1: self.core = sorted(self.core, key=lambda l: self.wght[l]) self.oracle.conf_budget(1000) i = 0 while i < len(self.core): to_test = self.core[:i] + self.core[(i + 1):] if s...
Reduce a previously extracted core and compute an over-approximation of an MUS. This is done using the simple deletion-based MUS extraction algorithm. The idea is to try to deactivate soft clauses of the unsatisfiable core one by one while checking if the rem...
def parse_class(val):
    module, class_name = val.rsplit('.', 1)
    module = importlib.import_module(module)
    try:
        return getattr(module, class_name)
    except AttributeError:
        raise ValueError('"%s" is not a valid member of %s' % (
            class_name, qualname(module))
        )
Parse a string, imports the module and returns the class. >>> parse_class('hashlib.md5') <built-in function openssl_md5>
def get_abbreviation_of(self, name):
    for language in self.user_data.languages:
        if language['language_string'] == name:
            return language['language']
    return None
Get abbreviation of a language.
def _apply_args_to_func(global_args, func):
    global_args = vars(global_args)
    local_args = dict()
    for argument in inspect.getargspec(func).args:
        local_args[argument] = global_args[argument]
    return func(**local_args)
Unpacks the argparse Namespace object and applies its contents as normal arguments to the function func
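A runnable sketch of the same idea, using `inspect.signature` instead of the deprecated `getargspec` (the `add` function and namespace contents are invented for illustration):

```python
import inspect
from types import SimpleNamespace

def apply_args_to_func(global_args, func):
    # Keep only the namespace entries that func actually declares.
    global_args = vars(global_args)
    wanted = inspect.signature(func).parameters
    return func(**{name: global_args[name] for name in wanted})

def add(a, b):
    return a + b

ns = SimpleNamespace(a=1, b=2, unrelated='ignored')
print(apply_args_to_func(ns, add))  # -> 3
```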
def _exec(self, query, **kwargs):
    variables = {'entity': self.username,
                 'project': self.project, 'name': self.name}
    variables.update(kwargs)
    return self.client.execute(query, variable_values=variables)
Execute a query against the cloud backend
def _open_file_obj(f, mode="r"): if isinstance(f, six.string_types): if f.startswith(("http://", "https://")): file_obj = _urlopen(f) yield file_obj file_obj.close() else: with open(f, mode) as file_obj: yield file_obj else: ...
A context manager that provides access to a file. :param f: the file to be opened :type f: a file-like object or path to file :param mode: how to open the file :type mode: string
def format_auto_patching_settings(result): from collections import OrderedDict order_dict = OrderedDict() if result.enable is not None: order_dict['enable'] = result.enable if result.day_of_week is not None: order_dict['dayOfWeek'] = result.day_of_week if result.maintenance_window_st...
Formats the AutoPatchingSettings object removing arguments that are empty
def split_join_classification(element, classification_labels, nodes_classification): classification_join = "Join" classification_split = "Split" if len(element[1][consts.Consts.incoming_flow]) >= 2: classification_labels.append(classification_join) if len(element[1][consts.Co...
Add the "Split", "Join" classification, if the element qualifies for. :param element: an element from BPMN diagram, :param classification_labels: list of labels attached to the element, :param nodes_classification: dictionary of classification labels. Key - node id. Value - a list of labels.
def TP0(dv, u):
    return np.linalg.norm(np.array(dv)) + np.linalg.norm(np.array(u))
Demo problem 0 for horsetail matching, takes two input vectors of any size and returns a single output
def convert(value, source_unit, target_unit, fmt=False): orig_target_unit = target_unit source_unit = functions.value_for_key(INFORMATION_UNITS, source_unit) target_unit = functions.value_for_key(INFORMATION_UNITS, target_unit) q = ureg.Quantity(value, source_unit) q = q.to(ureg.parse_expression(tar...
Converts value from source_unit to target_unit. Returns a tuple containing the converted value and target_unit. Having fmt set to True causes the value to be formatted to 1 decimal digit if it's a decimal or be formatted as integer if it's an integer. E.g: >>> convert(2, 'hr', 'min') (120.0, ...
def generic_service_exception(*args): exception_tuple = LambdaErrorResponses.ServiceException return BaseLocalService.service_response( LambdaErrorResponses._construct_error_response_body(LambdaErrorResponses.SERVICE_ERROR, "ServiceException"), LambdaErrorResponses._construct_hea...
Creates a Lambda Service Generic ServiceException Response Parameters ---------- args list List of arguments Flask passes to the method Returns ------- Flask.Response A response object representing the GenericServiceException Error
def save_method_args(method): args_and_kwargs = collections.namedtuple('args_and_kwargs', 'args kwargs') @functools.wraps(method) def wrapper(self, *args, **kwargs): attr_name = '_saved_' + method.__name__ attr = args_and_kwargs(args, kwargs) setattr(self, attr_name, attr) return method(self, *args, **kwargs...
Wrap a method such that when it is called, the args and kwargs are saved on the method. >>> class MyClass: ... @save_method_args ... def method(self, a, b): ... print(a, b) >>> my_ob = MyClass() >>> my_ob.method(1, 2) 1 2 >>> my_ob._saved_method.args (1, 2) >>> my_ob._saved_method.kwargs {}...
def on_core_metadata_event(self, event): core_metadata = json.loads(event.log_message.message) input_names = ','.join(core_metadata['input_names']) output_names = ','.join(core_metadata['output_names']) target_nodes = ','.join(core_metadata['target_nodes']) self._run_key = RunKey(input_names, output...
Implementation of the core metadata-carrying Event proto callback. Args: event: An Event proto that contains core metadata about the debugged Session::Run() in its log_message.message field, as a JSON string. See the doc string of debug_data.DebugDumpDir.core_metadata for details.
def get_event_action(cls) -> Optional[str]:
    if not cls.actor:
        return None
    return event_context.get_event_action(cls.event_type)
Return the second part of the event_type e.g. >>> Event.event_type = 'experiment.deleted' >>> Event.get_event_action() == 'deleted'
def is_collection(obj):
    col = getattr(obj, '__getitem__', False)
    val = False if (not col) else True
    if isinstance(obj, basestring):
        val = False
    return val
Tests if an object is a collection.
def update_docs(self, t, module):
    key = "{}.{}".format(module.name, t.name)
    if key in module.predocs:
        t.docstring = self.docparser.to_doc(module.predocs[key][0], t.name)
        t.docstart, t.docend = (module.predocs[key][1], module.predocs[key][2])
Updates the documentation for the specified type using the module predocs.
def generate_tuple_batches(qs, batch_len=1):
    num_items, batch = 0, []
    for item in qs:
        if num_items >= batch_len:
            yield tuple(batch)
            num_items = 0
            batch = []
        num_items += 1
        batch += [item]
    if num_items:
        yield tuple(batch)
Iterate through a queryset in batches of length `batch_len` >>> [batch for batch in generate_tuple_batches(range(7), 3)] [(0, 1, 2), (3, 4, 5), (6,)]
def zscore(bars, window=20, stds=1, col='close'):
    std = numpy_rolling_std(bars[col], window)
    mean = numpy_rolling_mean(bars[col], window)
    return (bars[col] - mean) / (std * stds)
Get the rolling z-score of a price column.
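The `numpy_rolling_*` helpers are not shown here, so this sketch substitutes pandas rolling windows (with population std, `ddof=0`, assumed to match) to demonstrate the same computation:

```python
import pandas as pd

def zscore(series, window=3, stds=1):
    # Rolling z-score: deviation from the rolling mean, in rolling stds.
    mean = series.rolling(window).mean()
    std = series.rolling(window).std(ddof=0)
    return (series - mean) / (std * stds)

prices = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
# Last window is [3, 4, 5]: mean 4, std sqrt(2/3), so z = sqrt(3/2).
print(round(zscore(prices).iloc[-1], 4))  # -> 1.2247
```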
def WhereIs(self, prog, path=None, pathext=None, reject=[]): if path is None: try: path = self['ENV']['PATH'] except KeyError: pass elif SCons.Util.is_String(path): path = self.subst(path) if pathext is None: try: ...
Find prog in the path.
def getAnalyst(self):
    analyst = self.getField("Analyst").get(self)
    if not analyst:
        analyst = self.getSubmittedBy()
    return analyst or ""
Returns the stored Analyst or the user who submitted the result
def get_conversion(scale, limits): fb = float(scale) / float(limits['b'][1] - limits['b'][0]) fl = float(scale) / float(limits['l'][1] - limits['l'][0]) fr = float(scale) / float(limits['r'][1] - limits['r'][0]) conversion = {"b": lambda x: (x - limits['b'][0]) * fb, "l": lambda x: (x ...
Get the conversion equations for each axis. limits: dict of min and max values for the axes in the order blr.
def set_zap_authenticator(self, zap_authenticator): result = self._zap_authenticator if result: self.unregister_child(result) self._zap_authenticator = zap_authenticator if self.zap_client: self.zap_client.close() if self._zap_authenticator: se...
Setup a ZAP authenticator. :param zap_authenticator: A ZAP authenticator instance to use. The context takes ownership of the specified instance. It will close it automatically when it stops. If `None` is specified, any previously owner instance is disowned and returned. It b...
def get_nonce(self):
    nonce = getattr(self, '_nonce', 0)
    if nonce:
        nonce += 1
    self._nonce = max(int(time.time()), nonce)
    return self._nonce
Get a unique nonce for the bitstamp API. This integer must always be increasing, so use the current unix time. Every time this variable is requested, it automatically increments to allow for more than one API request per second. This isn't a thread-safe function however, so you should ...
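A minimal standalone demonstration of the strictly-increasing property (the `NonceClient` class is invented to host the method):

```python
import time

class NonceClient:
    # Nonce is the max of the current unix time and last+1, so consecutive
    # calls always strictly increase even within the same second.
    def get_nonce(self):
        nonce = getattr(self, '_nonce', 0)
        if nonce:
            nonce += 1
        self._nonce = max(int(time.time()), nonce)
        return self._nonce

client = NonceClient()
a, b = client.get_nonce(), client.get_nonce()
print(b > a)  # -> True
```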
def request(self, msg, use_mid=None):
    mid = self._get_mid_and_update_msg(msg, use_mid)
    self.send_request(msg)
    return mid
Send a request message, with automatic message ID assignment. Parameters ---------- msg : katcp.Message request message use_mid : bool or None, default=None Returns ------- mid : string or None The message id, or None if no msg id is used If...
def listify(args):
    if args:
        if isinstance(args, list):
            return args
        elif isinstance(args, (set, tuple, GeneratorType, range,
                               past.builtins.xrange)):
            return list(args)
        return [args]
    return []
Return args as a list. If already a list - return as is. >>> listify([1, 2, 3]) [1, 2, 3] If a set - return as a list. >>> listify(set([1, 2, 3])) [1, 2, 3] If a tuple - return as a list. >>> listify(tuple([1, 2, 3])) [1, 2, 3] If a generator (also range / xrange) - return...
def sort_func(variant=VARIANT1, case_sensitive=False):
    return lambda x: normalize(
        x, variant=variant, case_sensitive=case_sensitive)
A function generator that can be used for sorting. All keywords are passed to `normalize()` and generate keywords that can be passed to `sorted()`:: >>> key = sort_func() >>> print(sorted(["fur", "far"], key=key)) [u'far', u'fur'] Please note, that `sort_func` returns a function.
def cylinder(target, throat_diameter='throat.diameter',
             throat_length='throat.length'):
    D = target[throat_diameter]
    L = target[throat_length]
    value = _sp.pi*D*L
    return value
r""" Calculate surface area for a cylindrical throat Parameters ---------- target : OpenPNM Object The object which this model is associated with. This controls the length of the calculated array, and also provides access to other necessary properties. throat_diameter : str...
def delete(customer, card): if isinstance(customer, resources.Customer): customer = customer.id if isinstance(card, resources.Card): card = card.id http_client = HttpClient() http_client.delete(routes.url(routes.CARD_RESOURCE, resource_id=card, customer_id=custome...
Delete a card from its id. :param customer: The customer id or object :type customer: string|Customer :param card: The card id or object :type card: string|Card
def get(issue_id, issue_type_id):
    return db.Issue.find_one(
        Issue.issue_id == issue_id,
        Issue.issue_type_id == issue_type_id
    )
Return issue by ID Args: issue_id (str): Unique Issue identifier issue_type_id (str): Type of issue to get Returns: :obj:`Issue`: Returns Issue object if found, else None
def _is_allowed(input): gnupg_options = _get_all_gnupg_options() allowed = _get_options_group("allowed") try: assert allowed.issubset(gnupg_options) except AssertionError: raise UsageError("'allowed' isn't a subset of known options, diff: %s" % allowed.difference...
Check that an option or argument given to GPG is in the set of allowed options, the latter being a strict subset of the set of all options known to GPG. :param str input: An input meant to be parsed as an option or flag to the GnuPG process. Should be formatted the same as an option ...
def _PrintAnalysisStatusHeader(self, processing_status): self._output_writer.Write( 'Storage file\t\t: {0:s}\n'.format(self._storage_file_path)) self._PrintProcessingTime(processing_status) if processing_status and processing_status.events_status: self._PrintEventsStatus(processing_status.even...
Prints the analysis status header. Args: processing_status (ProcessingStatus): processing status.
def resolve_dst(self, dst_dir, src):
    if os.path.isabs(src):
        return os.path.join(dst_dir, os.path.basename(src))
    return os.path.join(dst_dir, src)
Find the destination path based on the source. If the source is an absolute path, the file is copied directly into the base dst_dir; otherwise its relative path is preserved under dst_dir.
def get_headers_global(): headers = dict() headers["applications_path_txt"] = 'Applications_Path' headers["channel_index_txt"] = 'Channel_Index' headers["channel_number_txt"] = 'Channel_Number' headers["channel_type_txt"] = 'Channel_Type' headers["comments_txt"] = 'Commen...
Defines the so-called global column headings for Arbin .res-files
def table(self, datatype=None, **kwargs): if config.future_deprecations: self.param.warning("The table method is deprecated and should no " "longer be used. If using a HoloMap use " "HoloMap.collapse() instead to return a Dataset.") ...
Deprecated method to convert an MultiDimensionalMapping of Elements to a Table.
def copy_fields(layer, fields_to_copy): for field in fields_to_copy: index = layer.fields().lookupField(field) if index != -1: layer.startEditing() source_field = layer.fields().at(index) new_field = QgsField(source_field) new_field.setName(fields_to_c...
Copy fields inside an attribute table. :param layer: The vector layer. :type layer: QgsVectorLayer :param fields_to_copy: Dictionary of fields to copy. :type fields_to_copy: dict
def selectfalse(table, field, complement=False):
    return select(table, field, lambda v: not bool(v),
                  complement=complement)
Select rows where the given field evaluates `False`.
def row_csv_limiter(rows, limits=None): limits = [None, None] if limits is None else limits if len(exclude_empty_values(limits)) == 2: upper_limit = limits[0] lower_limit = limits[1] elif len(exclude_empty_values(limits)) == 1: upper_limit = limits[0] lower_limit = row_iter_l...
Limit rows using the given limit values, or detect the limits on a best-effort basis.
def visit_FunctionBody(self, node): for child in node.children: return_value = self.visit(child) if isinstance(child, ReturnStatement): return return_value if isinstance(child, (IfStatement, WhileStatement)): if return_value is not None: ...
Visitor for `FunctionBody` AST node.
def abort_expired_batches(self, request_timeout_ms, cluster): expired_batches = [] to_remove = [] count = 0 for tp in list(self._batches.keys()): assert tp in self._tp_locks, 'TopicPartition not in locks dict' if tp in self.muted: continue ...
Abort the batches that have been sitting in RecordAccumulator for more than the configured request_timeout due to metadata being unavailable. Arguments: request_timeout_ms (int): milliseconds to timeout cluster (ClusterMetadata): current metadata for kafka cluster ...
def _find_datastream(self, name):
    for stream in self.data_streams:
        if stream.name == name:
            return stream
    return None
Find and return a datastream by name, or None if it does not exist.
def restore(source, offset): backup_location = os.path.join( os.path.dirname(os.path.abspath(source)), source + '.bytes_backup') click.echo('Reading backup from: {location}'.format(location=backup_location)) if not os.path.isfile(backup_location): click.echo('No backup found for: {source}'.f...
Restore a smudged file from .bytes_backup
def downcaseTokens(s, l, t):
    return [tt.lower() for tt in map(_ustr, t)]
Helper parse action to convert tokens to lower case.
def int_to_words(int_val, num_words=4, word_size=32): max_int = 2 ** (word_size*num_words) - 1 max_word_size = 2 ** word_size - 1 if not 0 <= int_val <= max_int: raise AttributeError('integer %r is out of bounds!' % hex(int_val)) words = [] for _ in range(num_words): ...
Convert an int value to a tuple of words. :param int_val: an arbitrary-length Python integer to be split up. Network byte order is assumed. Raises an IndexError if the width of the integer (in bits) exceeds word_size * num_words. :param num_words: number of words expected in return value tuple. ...
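A self-contained sketch of the word-splitting logic, raising `IndexError` on out-of-range input as the docstring describes; the masking loop completing the truncated body is an assumption:

```python
def int_to_words(int_val, num_words=4, word_size=32):
    """Split an integer into a big-endian tuple of fixed-width words."""
    max_int = 2 ** (word_size * num_words) - 1
    if not 0 <= int_val <= max_int:
        raise IndexError('integer %r is out of bounds!' % hex(int_val))
    mask = 2 ** word_size - 1
    words = []
    for _ in range(num_words):
        words.append(int_val & mask)   # take the lowest word
        int_val >>= word_size          # shift the next word down
    return tuple(reversed(words))      # big-endian (network) order

print(int_to_words(0x0102030405060708, num_words=2, word_size=32))
# → (16909060, 84281096), i.e. (0x01020304, 0x05060708)
```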
def create(self): self.server_attrs = self.consul.create_server( "%s-%s" % (self.stack.name, self.name), self.disk_image_id, self.instance_type, self.ssh_key_name, tags=self.tags, availability_zone=self.availability_...
Launches a new server instance.
def event_details(event_id=None, lang="en"): if event_id: cache_name = "event_details.%s.%s.json" % (event_id, lang) params = {"event_id": event_id, "lang": lang} else: cache_name = "event_details.%s.json" % lang params = {"lang": lang} data = get_cached("event_details.json",...
This resource returns static details about available events. :param event_id: Only list this event. :param lang: Show localized texts in the specified language. The response is a dictionary where the key is the event id, and the value is a dictionary containing the following properties: name (str...
def compile_excludes(self): self.compiled_exclude_files = [] for pattern in self.exclude_files: try: self.compiled_exclude_files.append(re.compile(pattern)) except re.error as e: raise ValueError( "Bad python regex in exclude '%...
Compile a set of regexps for files to be excluded from scans.
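The same compile-with-error-reporting pattern as a standalone function; `compile_excludes` taking a plain list of patterns is an assumed simplification of the method above:

```python
import re

def compile_excludes(patterns):
    """Compile exclude patterns, raising ValueError on a bad regex."""
    compiled = []
    for pattern in patterns:
        try:
            compiled.append(re.compile(pattern))
        except re.error as e:
            # Surface the offending pattern alongside re's own message.
            raise ValueError(
                "Bad python regex in exclude '%s': %s" % (pattern, e))
    return compiled

excludes = compile_excludes([r'\.pyc$', r'^build/'])
print(bool(excludes[0].search('module.pyc')))  # → True
```

Failing fast with the offending pattern in the message makes misconfigured exclude lists much easier to diagnose than a bare `re.error`.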
def retrieve_manual_indices(self): if self.parent_changed: pass else: pbool = map_indices_child2root( child=self.rtdc_ds, child_indices=np.where(~self.manual)[0]).tolist() pold = self._man_root_ids pall = sorted(list(set(pbo...
Read from the boolean array `self.manual`, index all occurrences of `False`, and find the corresponding indices in the root hierarchy parent; return those and store them in `self._man_root_ids` as well. Notes ----- This method also retrieves ...
def promote_owner(self, stream_id, user_id): req_hook = 'pod/v1/room/' + stream_id + '/membership/promoteOwner' req_args = '{ "id": %s }' % user_id status_code, response = self.__rest__.POST_query(req_hook, req_args) self.logger.debug('%s: %s' % (status_code, response)) return st...
Promote a user to owner in a stream.
def get_version(): if isinstance(VERSION[-1], str): return '.'.join(map(str, VERSION[:-1])) + VERSION[-1] return '.'.join(map(str, VERSION))
Returns a string representation of the current SDK version.
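A runnable sketch with a hypothetical `VERSION` tuple, showing how a trailing string element becomes a pre-release suffix while a purely numeric tuple is dot-joined:

```python
VERSION = (2, 1, 0, 'rc1')  # hypothetical; the last item may be a str suffix

def get_version():
    """Return a string representation of the current SDK version."""
    if isinstance(VERSION[-1], str):
        # Numeric parts dot-joined, suffix appended without a separator.
        return '.'.join(map(str, VERSION[:-1])) + VERSION[-1]
    return '.'.join(map(str, VERSION))

print(get_version())  # → '2.1.0rc1'
```

With `VERSION = (1, 0, 3)` the same function would yield `'1.0.3'`.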
def nrows_expected(self): return np.prod([i.cvalues.shape[0] for i in self.index_axes])
Based on our axes, compute the expected number of rows (nrows).
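The expected row count is simply the product of the index-axis lengths; a sketch with a hypothetical `Axis` holder standing in for objects exposing `.cvalues`:

```python
import numpy as np

class Axis:
    """Hypothetical index axis exposing cached values via `.cvalues`."""
    def __init__(self, values):
        self.cvalues = np.asarray(values)

index_axes = [Axis([1, 2, 3]), Axis(['a', 'b'])]

# Product of the per-axis lengths gives the expected row count.
nrows = np.prod([axis.cvalues.shape[0] for axis in index_axes])
print(nrows)  # → 6
```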
def address_inline(request, prefix="", country_code=None, template_name="postal/form.html"): country_prefix = "country" prefix = request.POST.get('prefix', prefix) if prefix: country_prefix = prefix + '-country' country_code = request.POST.get(country_prefix, country_code) form_class = form_...
Display a postal address form with localized fields.
def generate_static_matching(app, directory_serve_app=DirectoryApp): static_dir = os.path.join(os.path.dirname(app.__file__), 'static') try: static_app = directory_serve_app(static_dir, index_page='') except OSError: return None ...
Create a matching for a WSGI application to serve static files for the passed app. Static files will be collected from the directory named 'static' under the passed application:: ./blog/static/ This example is with an application named `blog`. URLs for static files in the static directory will begin with...
def _summarize_coefficients(top_coefs, bottom_coefs): def get_row_name(row): if row['index'] is None: return row['name'] else: return "%s[%s]" % (row['name'], row['index']) if len(top_coefs) == 0: top_coefs_list = [('No Positive Coefficients', _precomputed_field('...
Return a tuple of sections and section titles. Sections are pretty-printed model coefficients. Parameters ---------- top_coefs : SFrame of top k coefficients bottom_coefs : SFrame of bottom k coefficients Returns ------- (sections, section_titles) : tuple sections : list ...
def GetNextNode(self, modes, innode): nodes = N.where(self.innodes == innode) if nodes[0].size == 0: return -1 defaultindex = N.where(self.keywords[nodes] == 'default') if len(defaultindex[0]) != 0: outnode = self.outnodes[nodes[0][defaultindex[0]]] for mo...
GetNextNode returns the outnode that matches an element from the modes list, starting at the given innode. This method isn't actually used; it's just a helper method for debugging purposes.