def _getReader(self, filename, scoreClass):
    if filename.endswith('.json') or filename.endswith('.json.bz2'):
        return JSONRecordsReader(filename, scoreClass)
    else:
        raise ValueError(
            'Unknown DIAMOND record file suffix for file %r.' % filename)

Obtain a JSON record reader for DIAMOND records.

@param filename: The C{str} file name holding the JSON.
@param scoreClass: A class to hold and compare scores (see scores.py).
def analysis_error(sender, exception, message):
    LOGGER.exception(message)
    message = get_error_message(exception, context=message)
    send_error_message(sender, message)

A helper to spawn an error and halt processing. An exception will be
logged, busy status removed and a message displayed.

.. versionadded:: 3.3

:param sender: The sender.
:type sender: object
:param message: an ErrorMessage to display
:type message: ErrorMessage, Message
:param exception: An exception that was raised
:type exception: Exception
def read_index(group, version='1.1'):
    if version == '0.1':
        return np.int64(group['index'][...])
    elif version == '1.0':
        return group['file_index'][...]
    else:
        return group['index'][...]

Return the index stored in a h5features group.

:param h5py.Group group: The group to read the index from.
:param str version: The h5features version of the `group`.
:return: a 1D numpy array of features indices.
def parse_mark_duplicate_metrics(fn):
    with open(fn) as f:
        lines = [x.strip().split('\t') for x in f.readlines()]
    metrics = pd.Series(lines[7], lines[6])
    m = pd.to_numeric(metrics[metrics.index[1:]])
    metrics[m.index] = m.values
    vals = np.array(lines[11:-1])
    hist = pd.Series(vals[:, 1], index=[int(float(x)) for x in vals[:, 0]])
    hist = pd.to_numeric(hist)
    return metrics, hist

Parse the output from Picard's MarkDuplicates and return as pandas Series.

Parameters
----------
fn : str
    Filename of the Picard output you want to parse.

Returns
-------
metrics : pandas.Series
    Duplicate metrics.
hist : pandas.Series
    Duplicate histogram.
def get(remote_path, local_path='', recursive=False, preserve_times=False,
        **kwargs):
    scp_client = _prepare_connection(**kwargs)
    get_kwargs = {
        'recursive': recursive,
        'preserve_times': preserve_times
    }
    if local_path:
        get_kwargs['local_path'] = local_path
    return scp_client.get(remote_path, **get_kwargs)

Transfer files and directories from remote host to the localhost of the
Minion.

remote_path
    Path to retrieve from remote host. Since this is evaluated by scp on
    the remote host, shell wildcards and environment variables may be used.

recursive: ``False``
    Transfer files and directories recursively.

preserve_times: ``False``
    Preserve ``mtime`` and ``atime`` of transferred files and directories.

hostname
    The hostname of the remote device.

port: ``22``
    The port of the remote device.

username
    The username required for SSH authentication on the device.

password
    Used for password authentication. It is also used for private key
    decryption if ``passphrase`` is not given.

passphrase
    Used for decrypting private keys.

pkey
    An optional private key to use for authentication.

key_filename
    The filename, or list of filenames, of optional private key(s) and/or
    certificates to try for authentication.

timeout
    An optional timeout (in seconds) for the TCP connect.

socket_timeout: ``10``
    The channel socket timeout in seconds.

buff_size: ``16384``
    The size of the SCP send buffer.

allow_agent: ``True``
    Set to ``False`` to disable connecting to the SSH agent.

look_for_keys: ``True``
    Set to ``False`` to disable searching for discoverable private key
    files in ``~/.ssh/``.

banner_timeout
    An optional timeout (in seconds) to wait for the SSH banner to be
    presented.

auth_timeout
    An optional timeout (in seconds) to wait for an authentication
    response.

auto_add_policy: ``False``
    Automatically add the host to the ``known_hosts``.

CLI Example:

.. code-block:: bash

    salt '*' scp.get /var/tmp/file /tmp/file hostname=10.10.10.1 auto_add_policy=True
def bleu_score(logits, labels):
    predictions = tf.to_int32(tf.argmax(logits, axis=-1))
    bleu = tf.py_func(compute_bleu, (labels, predictions), tf.float32)
    return bleu, tf.constant(1.0)

Approximate BLEU score computation between labels and predictions.

An approximate BLEU scoring method since we do not glue word pieces or
decode the ids and tokenize the output. By default, we use ngram order of 4
and use brevity penalty. Also, this does not have beam search.

Args:
  logits: Tensor of size [batch_size, length_logits, vocab_size]
  labels: Tensor of size [batch_size, length_labels]

Returns:
  bleu: a float32 scalar Tensor holding the approximate BLEU score
def list_lbaas_members(self, lbaas_pool, retrieve_all=True, **_params):
    return self.list('members', self.lbaas_members_path % lbaas_pool,
                     retrieve_all, **_params)
Fetches a list of all lbaas_members for a project.
def list_objects(self, instance, bucket_name, prefix=None, delimiter=None):
    url = '/buckets/{}/{}'.format(instance, bucket_name)
    params = {}
    if prefix is not None:
        params['prefix'] = prefix
    if delimiter is not None:
        params['delimiter'] = delimiter
    response = self._client.get_proto(path=url, params=params)
    message = rest_pb2.ListObjectsResponse()
    message.ParseFromString(response.content)
    return ObjectListing(message, instance, bucket_name, self)

List the objects for a bucket.

:param str instance: A Yamcs instance name.
:param str bucket_name: The name of the bucket.
:param str prefix: If specified, only objects that start with this prefix
    are listed.
:param str delimiter: If specified, return only objects whose name do not
    contain the delimiter after the prefix. For the other objects, the
    response contains (in the prefix response parameter) the name truncated
    after the delimiter. Duplicates are omitted.
def get_affected_box(self, src):
    mag = src.get_min_max_mag()[1]
    maxdist = self(src.tectonic_region_type, mag)
    bbox = get_bounding_box(src, maxdist)
    return (fix_lon(bbox[0]), bbox[1], fix_lon(bbox[2]), bbox[3])

Get the enlarged bounding box of a source.

:param src: a source object
:returns: a bounding box (min_lon, min_lat, max_lon, max_lat)
def info(*messages):
    sys.stderr.write("%s.%s: " % get_caller_info())
    sys.stderr.write(' '.join(map(str, messages)))
    sys.stderr.write('\n')
Prints the current GloTK module and a `message`. Taken from biolite
def send_request(req_cat, con, req_str, kwargs):
    try:
        kwargs = parse.urlencode(kwargs)
    except (NameError, AttributeError):
        # Python 2 fallback: urllib.parse is unavailable.
        kwargs = urllib.urlencode(kwargs)
    try:
        con.request(req_cat, req_str, kwargs)
    except httplib.CannotSendRequest:
        con = create()
        con.request(req_cat, req_str, kwargs)
    try:
        res = con.getresponse().read()
    except (IOError, httplib.BadStatusLine):
        con = create()
        con.request(req_cat, req_str, kwargs)
        res = con.getresponse().read()
    if isinstance(res, bytes):
        res = res.decode()
    return json.loads(res)

Send a request to the Facebook Graph API. Returns the Facebook JSON
response converted to a Python object.
def move_to_limit(self, position):
    cmd = 'MOVE', [Float, Integer]
    self._write(cmd, position, 1)

Move to limit switch and define it as position.

:param position: The new position of the limit switch.
def write_data(self, write_finished_cb):
    self._write_finished_cb = write_finished_cb
    data = bytearray()
    for poly4D in self.poly4Ds:
        data += struct.pack('<ffffffff', *poly4D.x.values)
        data += struct.pack('<ffffffff', *poly4D.y.values)
        data += struct.pack('<ffffffff', *poly4D.z.values)
        data += struct.pack('<ffffffff', *poly4D.yaw.values)
        data += struct.pack('<f', poly4D.duration)
    self.mem_handler.write(self, 0x00, data, flush_queue=True)
Write trajectory data to the Crazyflie
def _document_path(self):
    if self._document_path_internal is None:
        if self._client is None:
            raise ValueError("A document reference requires a `client`.")
        self._document_path_internal = _get_document_path(self._client,
                                                          self._path)
    return self._document_path_internal

Create and cache the full path for this document.

Of the form:

    ``projects/{project_id}/databases/{database_id}/...
    documents/{document_path}``

Returns:
    str: The full document path.

Raises:
    ValueError: If the current document reference has no ``client``.
def getChecks(self, **parameters):
    for key in parameters:
        if key not in ['limit', 'offset', 'tags']:
            sys.stderr.write('%s not a valid argument for getChecks()\n'
                             % key)
    response = self.request('GET', 'checks', parameters)
    return [PingdomCheck(self, x) for x in response.json()['checks']]

Pulls all checks from pingdom

Optional Parameters:

* limit -- Limits the number of returned probes to the specified quantity.
    Type: Integer (max 25000)
    Default: 25000
* offset -- Offset for listing (requires limit.)
    Type: Integer
    Default: 0
* tags -- Filter listing by tag/s
    Type: String
    Default: None
def get_size(self, chrom=None):
    if len(self.size) == 0:
        raise LookupError("no chromosomes in index, is the index correct?")
    if chrom:
        if chrom in self.size:
            return self.size[chrom]
        else:
            raise KeyError("chromosome {} not in index".format(chrom))
    total = 0
    for size in self.size.values():
        total += size
    return total
Return the sizes of all sequences in the index, or the size of chrom if specified as an optional argument
def _drop_no_label_results(self, results, fh):
    results.seek(0)
    results = Results(results, self._tokenizer)
    results.remove_label(self._no_label)
    results.csv(fh)

Writes `results` to `fh` minus those results associated with the 'no'
label.

:param results: results to be manipulated
:type results: file-like object
:param fh: output destination
:type fh: file-like object
def constant_outfile_iterator(outfiles, infiles, arggroups):
    assert len(infiles) == 1
    assert len(arggroups) == 1
    return ((outfile, infiles[0], arggroups[0]) for outfile in outfiles)
Iterate over all output files.
def _add_model(self, model_list_or_dict, core_element, model_class,
               model_key=None, load_meta_data=True):
    found_model = self._get_future_expected_model(core_element)
    if found_model:
        found_model.parent = self
    if model_class is IncomeModel:
        self.income = found_model if found_model \
            else IncomeModel(core_element, self)
        return
    if model_key is None:
        model_list_or_dict.append(found_model if found_model
                                  else model_class(core_element, self))
    else:
        model_list_or_dict[model_key] = (
            found_model if found_model
            else model_class(core_element, self,
                             load_meta_data=load_meta_data))

Adds one model for a given core element.

The method will add a model for a given core object and checks if there is
a corresponding model object in the future expected model list. The method
does not check if an object with a corresponding model has already been
inserted.

:param model_list_or_dict: could be a list or dictionary of one model type
:param core_element: the core element to add a model for, can be a state
    or state element
:param model_class: model class of the elements that should be inserted
:param model_key: if model_list_or_dict is a dictionary, the key is the id
    of the respective element (e.g. 'state_id')
:param load_meta_data: specific argument for loading meta data
:return:
def _add_numeric_methods_unary(cls):
    def _make_evaluate_unary(op, opstr):
        def _evaluate_numeric_unary(self):
            self._validate_for_numeric_unaryop(op, opstr)
            attrs = self._get_attributes_dict()
            attrs = self._maybe_update_attributes(attrs)
            return Index(op(self.values), **attrs)
        _evaluate_numeric_unary.__name__ = opstr
        return _evaluate_numeric_unary

    cls.__neg__ = _make_evaluate_unary(operator.neg, '__neg__')
    cls.__pos__ = _make_evaluate_unary(operator.pos, '__pos__')
    cls.__abs__ = _make_evaluate_unary(np.abs, '__abs__')
    cls.__inv__ = _make_evaluate_unary(lambda x: -x, '__inv__')
Add in numeric unary methods.
def clean():
    screenshot_dir = settings.SELENIUM_SCREENSHOT_DIR
    if screenshot_dir and os.path.isdir(screenshot_dir):
        rmtree(screenshot_dir, ignore_errors=True)
Clear out any old screenshots
def getXmlText(parent, tag):
    elem = parent.getElementsByTagName(tag)[0]
    rc = []
    for node in elem.childNodes:
        if node.nodeType == node.TEXT_NODE:
            rc.append(node.data)
    return ''.join(rc)
Return XML content of given tag in parent element.
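A usage sketch of the helper above with the standard-library xml.dom.minidom module; the document and tag names here are made up for illustration:

```python
from xml.dom import minidom

def getXmlText(parent, tag):
    # Concatenate all text nodes of the first matching child element.
    elem = parent.getElementsByTagName(tag)[0]
    return ''.join(node.data for node in elem.childNodes
                   if node.nodeType == node.TEXT_NODE)

doc = minidom.parseString('<person><name>Ada</name></person>')
print(getXmlText(doc, 'name'))  # -> Ada
```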
def ext_pillar(minion_id, pillar, *args, **kwargs):
    return MySQLExtPillar().fetch(minion_id, pillar, *args, **kwargs)
Execute queries against MySQL, merge and return as a dict
def obj_name(self, obj: Union[str, Element]) -> str:
    if isinstance(obj, str):
        obj = self.obj_for(obj)
    if isinstance(obj, SlotDefinition):
        return underscore(self.aliased_slot_name(obj))
    else:
        return camelcase(obj if isinstance(obj, str) else obj.name)
Return the formatted name used for the supplied definition
def deactivate(profile='default'):
    with jconfig(profile) as config:
        deact = True
        if not getattr(config.NotebookApp.contents_manager_class,
                       'startswith', lambda x: False)('jupyterdrive'):
            deact = False
        if 'gdrive' not in getattr(config.NotebookApp.tornado_settings,
                                   'get',
                                   lambda _, __: '')('contents_js_source',
                                                     ''):
            deact = False
        if deact:
            del config['NotebookApp']['tornado_settings']['contents_js_source']
            del config['NotebookApp']['contents_manager_class']

Deactivate jupyterdrive for the given profile by unsetting the
contents-manager and tornado-settings keys set at activation.
def git_hash(blob):
    head = str("blob " + str(len(blob)) + "\0").encode("utf-8")
    return sha1(head + blob).hexdigest()
Return git-hash compatible SHA-1 hexdigits for a blob of data.
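The header format above matches what `git hash-object` computes: the literal string "blob ", the byte length, a NUL byte, then the content, all fed to SHA-1. A quick self-contained check using only hashlib:

```python
from hashlib import sha1

def git_hash(blob):
    # Git prepends a "blob <len>\0" header before hashing.
    head = ("blob " + str(len(blob)) + "\0").encode("utf-8")
    return sha1(head + blob).hexdigest()

# `echo "hello" | git hash-object --stdin` prints the same digest:
print(git_hash(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```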
def handle_data(self, data):
    if self.current_parent_element['tag'] == '':
        self.cleaned_html += '<p>'
        self.current_parent_element['tag'] = 'p'
    self.cleaned_html += data
Called by HTMLParser.feed when text is found.
def age(self, minimum: int = 16, maximum: int = 66) -> int:
    age = self.random.randint(minimum, maximum)
    self._store['age'] = age
    return age

Get a random integer value.

:param minimum: Minimum value of age.
:param maximum: Maximum value of age.
:return: Random integer.

:Example:
    23.
def read_wait_cell(self):
    table_state = self.bt_table.read_row(
        TABLE_STATE,
        filter_=bigtable_row_filters.ColumnRangeFilter(
            METADATA, WAIT_CELL, WAIT_CELL))
    if table_state is None:
        utils.dbg('No waiting for new games needed; '
                  'wait_for_game_number column not in table_state')
        return None
    value = table_state.cell_value(METADATA, WAIT_CELL)
    if not value:
        utils.dbg('No waiting for new games needed; '
                  'no value in wait_for_game_number cell '
                  'in table_state')
        return None
    return cbt_intvalue(value)

Read the value of the cell holding the 'wait' value.

Returns the int value of whatever it has, or None if the cell doesn't
exist.
def initialize(config):
    "Initialize the bot with a dictionary of config items"
    config = init_config(config)
    _setup_logging()
    _load_library_extensions()
    if not Handler._registry:
        raise RuntimeError("No handlers registered")
    class_ = _load_bot_class()
    config.setdefault('log_channels', [])
    config.setdefault('other_channels', [])
    channels = config.log_channels + config.other_channels
    log.info('Running with config')
    log.info(pprint.pformat(config))
    host = config.get('server_host', 'localhost')
    port = config.get('server_port', 6667)
    return class_(
        host, port, config.bot_nickname,
        channels=channels,
        password=config.get('password'),
    )
Initialize the bot with a dictionary of config items
def assert_is_not(first, second, msg_fmt="{msg}"):
    if first is second:
        msg = "both arguments refer to {!r}".format(first)
        fail(msg_fmt.format(msg=msg, first=first, second=second))

Fail if first and second refer to the same object.

>>> list1 = [5, "foo"]
>>> list2 = [5, "foo"]
>>> assert_is_not(list1, list2)
>>> assert_is_not(list1, list1)
Traceback (most recent call last):
    ...
AssertionError: both arguments refer to [5, 'foo']

The following msg_fmt arguments are supported:
* msg - the default error message
* first - the first argument
* second - the second argument
def set_attribute(self, app, key, value):
    path = 'setattribute/' + parse.quote(app, '') + '/' + parse.quote(
        self._encode_string(key), '')
    res = self._make_ocs_request(
        'POST',
        self.OCS_SERVICE_PRIVATEDATA,
        path,
        data={'value': self._encode_string(value)}
    )
    if res.status_code == 200:
        tree = ET.fromstring(res.content)
        self._check_ocs_status(tree)
        return True
    raise HTTPResponseError(res)

Sets an application attribute

:param app: application id
:param key: key of the attribute to set
:param value: value to set
:returns: True if the operation succeeded, False otherwise
:raises: HTTPResponseError in case an HTTP error status was returned
def token_auth(self, token):
    if not token:
        return
    self.headers.update({
        'Authorization': 'token {0}'.format(token)
    })
    self.auth = None

Use an application token for authentication.

:param str token: Application token retrieved from GitHub's
    /authorizations endpoint
def run(toolkit_name, options, verbose=True, show_progress=False):
    unity = glconnect.get_unity()
    if not verbose:
        glconnect.get_server().set_log_progress(False)
    (success, message, params) = unity.run_toolkit(toolkit_name, options)
    if len(message) > 0:
        logging.getLogger(__name__).error("Toolkit error: " + message)
    glconnect.get_server().set_log_progress(True)
    if success:
        return params
    else:
        raise ToolkitError(str(message))

Internal function to execute toolkit on the turicreate server.

Parameters
----------
toolkit_name : string
    The name of the toolkit.
options : dict
    A map containing the required input for the toolkit function,
    for example: {'graph': g, 'reset_prob': 0.15}.
verbose : bool
    If true, enable progress log from server.
show_progress : bool
    If true, display progress plot.

Returns
-------
out : dict
    The toolkit specific model parameters.

Raises
------
RuntimeError
    Raises RuntimeError if the server fails executing the toolkit.
def build_tensor_serving_input_receiver_fn(shape, dtype=tf.float32,
                                           batch_size=1):
    def serving_input_receiver_fn():
        features = tf.placeholder(
            dtype=dtype, shape=[batch_size] + shape, name='input_tensor')
        return tf.estimator.export.TensorServingInputReceiver(
            features=features, receiver_tensors=features)
    return serving_input_receiver_fn

Returns an input_receiver_fn that can be used during serving.

This expects examples to come through as float tensors, and simply wraps
them as TensorServingInputReceivers. Arguably, this should live in
tf.estimator.export. Testing here first.

Args:
  shape: list representing target size of a single example.
  dtype: the expected datatype for the input example
  batch_size: number of input tensors that will be passed for prediction

Returns:
  A function that itself returns a TensorServingInputReceiver.
def minmax_candidates(self):
    from numpy.polynomial import Polynomial as P
    p = P.fromroots(self.roots)
    return p.deriv(1).roots()

Get points where derivative is zero.

Useful for computing the extrema of the polynomial over an interval if the
polynomial has real roots. In this case, the maximum is attained for one
of the interval endpoints or a point from the result of this function that
is contained in the interval.
def strip_path_prefix(ipath, prefix):
    if prefix is None:
        return ipath
    return ipath[len(prefix):] if ipath.startswith(prefix) else ipath

Strip prefix from path.

Args:
    ipath: input path
    prefix: the prefix to remove, if it is found in :ipath:

Examples:
    >>> strip_path_prefix("/foo/bar", "/bar")
    '/foo/bar'
    >>> strip_path_prefix("/foo/bar", "/")
    'foo/bar'
    >>> strip_path_prefix("/foo/bar", "/foo")
    '/bar'
    >>> strip_path_prefix("/foo/bar", "None")
    '/foo/bar'
def from_uci(cls, uci: str) -> "Move":
    if uci == "0000":
        return cls.null()
    elif len(uci) == 4 and "@" == uci[1]:
        drop = PIECE_SYMBOLS.index(uci[0].lower())
        square = SQUARE_NAMES.index(uci[2:])
        return cls(square, square, drop=drop)
    elif len(uci) == 4:
        return cls(SQUARE_NAMES.index(uci[0:2]),
                   SQUARE_NAMES.index(uci[2:4]))
    elif len(uci) == 5:
        promotion = PIECE_SYMBOLS.index(uci[4])
        return cls(SQUARE_NAMES.index(uci[0:2]),
                   SQUARE_NAMES.index(uci[2:4]),
                   promotion=promotion)
    else:
        raise ValueError(
            "expected uci string to be of length 4 or 5: {!r}".format(uci))

Parses a UCI string.

:raises: :exc:`ValueError` if the UCI string is invalid.
def deunicode(item):
    if item is None:
        return None
    if isinstance(item, str):
        return item
    if isinstance(item, six.text_type):
        return item.encode('utf-8')
    if isinstance(item, dict):
        return {
            deunicode(key): deunicode(value)
            for (key, value) in item.items()
        }
    if isinstance(item, list):
        return [deunicode(x) for x in item]
    raise TypeError('Unhandled item type: {!r}'.format(item))
Convert unicode objects to str
def badRequestMethod(self, environ, start_response):
    response = "400 Bad Request\n\nTo access this PyAMF gateway you " \
        "must use POST requests (%s received)" % environ['REQUEST_METHOD']
    start_response('400 Bad Request', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(response))),
        ('Server', gateway.SERVER_NAME),
    ])
    return [response]
Return HTTP 400 Bad Request.
def complete_event(self, event_id: str):
    event_ids = DB.get_list(self._processed_key)
    if event_id not in event_ids:
        raise KeyError('Unable to complete event. Event {} has not been '
                       'processed (ie. it is not in the processed '
                       'list).'.format(event_id))
    DB.remove_from_list(self._processed_key, event_id, pipeline=True)
    key = _keys.completed_events(self._object_type, self._subscriber)
    DB.append_to_list(key, event_id, pipeline=True)
    DB.execute()
Complete the specified event.
def check_archive_format(format, compression):
    if format not in ArchiveFormats:
        raise util.PatoolError("unknown archive format `%s'" % format)
    if compression is not None and compression not in ArchiveCompressions:
        raise util.PatoolError("unknown archive compression `%s'"
                               % compression)

Make sure format and compression are known.
def get_full_alias(self, query):
    if query in self.alias_table.sections():
        return query
    return next((section for section in self.alias_table.sections()
                 if section.split()[0] == query), '')

Get the full alias given a search query.

Args:
    query: The query this function performs searching on.

Returns:
    The full alias (with the placeholders, if any).
def _create_handler(self, config):
    if config is None:
        raise ValueError('No handler config to create handler from.')
    if 'name' not in config:
        raise ValueError('Handler name is required.')
    handler_name = config['name']
    module_name = handler_name.rsplit('.', 1)[0]
    class_name = handler_name.rsplit('.', 1)[-1]
    module = import_module(module_name)
    handler_class = getattr(module, class_name)
    instance = handler_class(**config)
    return instance

Creates a handler from its config.

Params:
    config: handler config

Returns:
    handler instance
def print_variable(obj, **kwargs):
    variable_print_length = kwargs.get("variable_print_length", 300)
    s = str(obj)
    if len(s) <= variable_print_length:
        return "Printing the object:\n" + s
    return "Printing the object:\n" + s[:variable_print_length] + ' ...'

Print the variable out. Limit the string length to, by default, 300
characters.
def assertTimeZoneIsNotNone(self, dt, msg=None):
    if not isinstance(dt, datetime):
        raise TypeError('First argument is not a datetime object')
    self.assertIsNotNone(dt.tzinfo, msg=msg)

Fail unless ``dt`` has a non-null ``tzinfo`` attribute.

Parameters
----------
dt : datetime
msg : str
    If not provided, the :mod:`marbles.mixins` or :mod:`unittest`
    standard message will be used.

Raises
------
TypeError
    If ``dt`` is not a datetime object.
def minimize_core(self):
    if self.minz and len(self.core) > 1:
        self.core = sorted(self.core, key=lambda l: self.wght[l])
        self.oracle.conf_budget(1000)
        i = 0
        while i < len(self.core):
            to_test = self.core[:i] + self.core[(i + 1):]
            if self.oracle.solve_limited(assumptions=to_test) == False:
                self.core = to_test
            else:
                i += 1
Reduce a previously extracted core and compute an over-approximation of an MUS. This is done using the simple deletion-based MUS extraction algorithm. The idea is to try to deactivate soft clauses of the unsatisfiable core one by one while checking if the remaining soft clauses together with the hard part of the formula are unsatisfiable. Clauses that are necessary for preserving unsatisfiability comprise an MUS of the input formula (it is contained in the given unsatisfiable core) and are reported as a result of the procedure. During this core minimization procedure, all SAT calls are dropped after obtaining 1000 conflicts.
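The deletion-based loop can be illustrated without a SAT solver by swapping in any monotone infeasibility check. Here a toy "oracle" reports whether a set of intervals has an empty intersection; the intervals and the helper names are made up for illustration:

```python
def minimize_core(core, is_unsat):
    # Try to drop each element; keep the drop only if the rest is
    # still "unsatisfiable", mirroring deletion-based MUS extraction.
    i = 0
    while i < len(core):
        to_test = core[:i] + core[i + 1:]
        if is_unsat(to_test):
            core = to_test
        else:
            i += 1
    return core

def empty_intersection(intervals):
    # "Unsatisfiable" when the intervals share no common point.
    return max(a for a, b in intervals) > min(b for a, b in intervals)

core = [(0, 1), (2, 3), (0, 5)]
print(minimize_core(core, empty_intersection))  # [(0, 1), (2, 3)]
```

Here (0, 5) overlaps everything, so it is not needed to make the set infeasible; the minimal infeasible subset is the two disjoint intervals.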
def parse_class(val):
    module, class_name = val.rsplit('.', 1)
    module = importlib.import_module(module)
    try:
        return getattr(module, class_name)
    except AttributeError:
        raise ValueError('"%s" is not a valid member of %s' % (
            class_name, qualname(module)))

Parse a string, import the module and return the class.

>>> parse_class('hashlib.md5')
<built-in function openssl_md5>
def get_abbreviation_of(self, name):
    for language in self.user_data.languages:
        if language['language_string'] == name:
            return language['language']
    return None
Get abbreviation of a language.
def _apply_args_to_func(global_args, func):
    global_args = vars(global_args)
    local_args = dict()
    for argument in inspect.getargspec(func).args:
        local_args[argument] = global_args[argument]
    return func(**local_args)
Unpacks the argparse Namespace object and applies its contents as normal arguments to the function func
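A standalone sketch of the same unpacking idea, using `inspect.signature` instead of `inspect.getargspec` (which was removed in Python 3.11); the Namespace contents and the `add` function are made up for illustration:

```python
import argparse
import inspect

def apply_args_to_func(namespace, func):
    # Pick out of the Namespace only the entries func actually accepts.
    available = vars(namespace)
    wanted = {name: available[name]
              for name in inspect.signature(func).parameters}
    return func(**wanted)

ns = argparse.Namespace(a=2, b=3, unused=99)

def add(a, b):
    return a + b

print(apply_args_to_func(ns, add))  # 5
```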
def _exec(self, query, **kwargs):
    variables = {'entity': self.username,
                 'project': self.project,
                 'name': self.name}
    variables.update(kwargs)
    return self.client.execute(query, variable_values=variables)
Execute a query against the cloud backend
@contextmanager
def _open_file_obj(f, mode="r"):
    if isinstance(f, six.string_types):
        if f.startswith(("http://", "https://")):
            file_obj = _urlopen(f)
            yield file_obj
            file_obj.close()
        else:
            with open(f, mode) as file_obj:
                yield file_obj
    else:
        yield f

A context manager that provides access to a file.

:param f: the file to be opened
:type f: a file-like object or path to file
:param mode: how to open the file
:type mode: string
def format_auto_patching_settings(result):
    from collections import OrderedDict
    order_dict = OrderedDict()
    if result.enable is not None:
        order_dict['enable'] = result.enable
    if result.day_of_week is not None:
        order_dict['dayOfWeek'] = result.day_of_week
    if result.maintenance_window_starting_hour is not None:
        order_dict['maintenanceWindowStartingHour'] = \
            result.maintenance_window_starting_hour
    if result.maintenance_window_duration is not None:
        order_dict['maintenanceWindowDuration'] = \
            result.maintenance_window_duration
    return order_dict
Formats the AutoPatchingSettings object removing arguments that are empty
def split_join_classification(element, classification_labels,
                              nodes_classification):
    classification_join = "Join"
    classification_split = "Split"
    if len(element[1][consts.Consts.incoming_flow]) >= 2:
        classification_labels.append(classification_join)
    if len(element[1][consts.Consts.outgoing_flow]) >= 2:
        classification_labels.append(classification_split)
    nodes_classification[element[0]] = classification_labels

Add the "Split", "Join" classification, if the element qualifies for it.

:param element: an element from BPMN diagram,
:param classification_labels: list of labels attached to the element,
:param nodes_classification: dictionary of classification labels.
    Key - node id. Value - a list of labels.
def TP0(dv, u):
    return np.linalg.norm(np.array(dv)) + np.linalg.norm(np.array(u))
Demo problem 0 for horsetail matching, takes two input vectors of any size and returns a single output
def convert(value, source_unit, target_unit, fmt=False):
    orig_target_unit = target_unit
    source_unit = functions.value_for_key(INFORMATION_UNITS, source_unit)
    target_unit = functions.value_for_key(INFORMATION_UNITS, target_unit)
    q = ureg.Quantity(value, source_unit)
    q = q.to(ureg.parse_expression(target_unit))
    value = functions.format_value(q.magnitude) if fmt else q.magnitude
    return value, orig_target_unit

Converts value from source_unit to target_unit.

Returns a tuple containing the converted value and target_unit. Having fmt
set to True causes the value to be formatted to 1 decimal digit if it's a
decimal or be formatted as integer if it's an integer.

E.g:
    >>> convert(2, 'hr', 'min')
    (120.0, 'min')
    >>> convert(2, 'hr', 'min', fmt=True)
    (120, 'min')
    >>> convert(30, 'min', 'hr', fmt=True)
    (0.5, 'hr')
def generic_service_exception(*args):
    exception_tuple = LambdaErrorResponses.ServiceException
    return BaseLocalService.service_response(
        LambdaErrorResponses._construct_error_response_body(
            LambdaErrorResponses.SERVICE_ERROR, "ServiceException"),
        LambdaErrorResponses._construct_headers(exception_tuple[0]),
        exception_tuple[1]
    )

Creates a Lambda Service Generic ServiceException Response

Parameters
----------
args : list
    List of arguments Flask passes to the method

Returns
-------
Flask.Response
    A response object representing the GenericServiceException Error
def save_method_args(method):
    args_and_kwargs = collections.namedtuple('args_and_kwargs',
                                             'args kwargs')

    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        attr_name = '_saved_' + method.__name__
        attr = args_and_kwargs(args, kwargs)
        setattr(self, attr_name, attr)
        return method(self, *args, **kwargs)
    return wrapper

Wrap a method such that when it is called, the args and kwargs are saved
on the method.

>>> class MyClass:
...     @save_method_args
...     def method(self, a, b):
...         print(a, b)
>>> my_ob = MyClass()
>>> my_ob.method(1, 2)
1 2
>>> my_ob._saved_method.args
(1, 2)
>>> my_ob._saved_method.kwargs
{}
>>> my_ob.method(a=3, b='foo')
3 foo
>>> my_ob._saved_method.args
()
>>> my_ob._saved_method.kwargs == dict(a=3, b='foo')
True

The arguments are stored on the instance, allowing for different instances
to save different args.

>>> your_ob = MyClass()
>>> your_ob.method({str('x'): 3}, b=[4])
{'x': 3} [4]
>>> your_ob._saved_method.args
({'x': 3},)
>>> my_ob._saved_method.args
()
def on_core_metadata_event(self, event):
    core_metadata = json.loads(event.log_message.message)
    input_names = ','.join(core_metadata['input_names'])
    output_names = ','.join(core_metadata['output_names'])
    target_nodes = ','.join(core_metadata['target_nodes'])
    self._run_key = RunKey(input_names, output_names, target_nodes)
    if not self._graph_defs:
        self._graph_defs_arrive_first = False
    else:
        for device_name in self._graph_defs:
            self._add_graph_def(device_name, self._graph_defs[device_name])
    self._outgoing_channel.put(_comm_metadata(self._run_key,
                                              event.wall_time))
    logger.info('on_core_metadata_event() waiting for client ack (meta)...')
    self._incoming_channel.get()
    logger.info('on_core_metadata_event() client ack received (meta).')

Implementation of the core metadata-carrying Event proto callback.

Args:
  event: An Event proto that contains core metadata about the debugged
    Session::Run() in its log_message.message field, as a JSON string.
    See the doc string of debug_data.DebugDumpDir.core_metadata for
    details.
def get_event_action(cls) -> Optional[str]:
    if not cls.actor:
        return None
    return event_context.get_event_action(cls.event_type)

Return the second part of the event_type, e.g.

>>> Event.event_type = 'experiment.deleted'
>>> Event.get_event_action() == 'deleted'
def is_collection(obj):
    col = getattr(obj, '__getitem__', False)
    val = bool(col)
    if isinstance(obj, basestring):
        val = False
    return val
Tests if an object is a collection.
def update_docs(self, t, module):
    key = "{}.{}".format(module.name, t.name)
    if key in module.predocs:
        t.docstring = self.docparser.to_doc(module.predocs[key][0], t.name)
        t.docstart, t.docend = (module.predocs[key][1],
                                module.predocs[key][2])
Updates the documentation for the specified type using the module predocs.
def generate_tuple_batches(qs, batch_len=1):
    num_items, batch = 0, []
    for item in qs:
        if num_items >= batch_len:
            yield tuple(batch)
            num_items = 0
            batch = []
        num_items += 1
        batch += [item]
    if num_items:
        yield tuple(batch)

Iterate through a queryset in batches of length `batch_len`

>>> [batch for batch in generate_tuple_batches(range(7), 3)]
[(0, 1, 2), (3, 4, 5), (6,)]
def zscore(bars, window=20, stds=1, col='close'):
    std = numpy_rolling_std(bars[col], window)
    mean = numpy_rolling_mean(bars[col], window)
    return (bars[col] - mean) / (std * stds)
get zscore of price
def WhereIs(self, prog, path=None, pathext=None, reject=[]):
    if path is None:
        try:
            path = self['ENV']['PATH']
        except KeyError:
            pass
    elif SCons.Util.is_String(path):
        path = self.subst(path)
    if pathext is None:
        try:
            pathext = self['ENV']['PATHEXT']
        except KeyError:
            pass
    elif SCons.Util.is_String(pathext):
        pathext = self.subst(pathext)
    prog = SCons.Util.CLVar(self.subst(prog))
    path = SCons.Util.WhereIs(prog[0], path, pathext, reject)
    if path:
        return path
    return None
Find prog in the path.
def getAnalyst(self):
    analyst = self.getField("Analyst").get(self)
    if not analyst:
        analyst = self.getSubmittedBy()
    return analyst or ""
Returns the stored Analyst or the user who submitted the result
def get_conversion(scale, limits):
    fb = float(scale) / float(limits['b'][1] - limits['b'][0])
    fl = float(scale) / float(limits['l'][1] - limits['l'][0])
    fr = float(scale) / float(limits['r'][1] - limits['r'][0])
    conversion = {"b": lambda x: (x - limits['b'][0]) * fb,
                  "l": lambda x: (x - limits['l'][0]) * fl,
                  "r": lambda x: (x - limits['r'][0]) * fr}
    return conversion
Get the conversion equations for each axis. limits: dict of min and max values for the axes in the order blr.
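The function is self-contained enough to illustrate directly; reproduced here with hypothetical axis limits so the linear mapping onto [0, scale] can be checked (axis keys b/l/r as in the original):

```python
def get_conversion(scale, limits):
    # One scaling factor per axis, from the axis range to [0, scale].
    fb = float(scale) / float(limits['b'][1] - limits['b'][0])
    fl = float(scale) / float(limits['l'][1] - limits['l'][0])
    fr = float(scale) / float(limits['r'][1] - limits['r'][0])
    return {"b": lambda x: (x - limits['b'][0]) * fb,
            "l": lambda x: (x - limits['l'][0]) * fl,
            "r": lambda x: (x - limits['r'][0]) * fr}

limits = {'b': (0, 50), 'l': (0, 50), 'r': (10, 60)}
conv = get_conversion(100, limits)
print(conv['b'](25))  # midpoint of the bottom axis -> 50.0
print(conv['r'](10))  # lower limit of the right axis -> 0.0
```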
def set_zap_authenticator(self, zap_authenticator):
    result = self._zap_authenticator
    if result:
        self.unregister_child(result)
    self._zap_authenticator = zap_authenticator
    if self.zap_client:
        self.zap_client.close()
    if self._zap_authenticator:
        self.register_child(zap_authenticator)
        self.zap_client = ZAPClient(context=self)
        self.register_child(self.zap_client)
    else:
        self.zap_client = None
    return result
Set up a ZAP authenticator.

:param zap_authenticator: A ZAP authenticator instance to use. The context
    takes ownership of the specified instance and will close it automatically
    when it stops. If `None` is specified, any previously owned instance is
    disowned and returned; it becomes the caller's responsibility to close it.
:returns: The previous ZAP authenticator instance.
def get_nonce(self):
    nonce = getattr(self, '_nonce', 0)
    if nonce:
        nonce += 1
    self._nonce = max(int(time.time()), nonce)
    return self._nonce
Get a unique nonce for the bitstamp API.

This integer must always be increasing, so use the current unix time.
Every time this variable is requested, it automatically increments to
allow for more than one API request per second.

This isn't a thread-safe function, however, so you should only rely on a
single thread if you have a high level of concurrent API requests in your
application.
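A minimal standalone sketch of the same nonce logic (the `NonceSource` class name is hypothetical, introduced only so the method can run outside its original client class):

```python
import time

class NonceSource:
    """Produce strictly increasing nonces seeded from unix time."""
    def get_nonce(self):
        nonce = getattr(self, '_nonce', 0)
        if nonce:
            nonce += 1
        # Use the current unix time, unless we already handed out a nonce
        # this second -- then keep counting up from the previous one.
        self._nonce = max(int(time.time()), nonce)
        return self._nonce

src = NonceSource()
a, b, c = src.get_nonce(), src.get_nonce(), src.get_nonce()
print(a < b < c)  # True: each nonce is strictly larger than the last
```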
def request(self, msg, use_mid=None):
    mid = self._get_mid_and_update_msg(msg, use_mid)
    self.send_request(msg)
    return mid
Send a request message, with automatic message ID assignment.

Parameters
----------
msg : katcp.Message
    Request message.
use_mid : bool or None, default=None
    If use_mid is None and the server supports message IDs, or if use_mid
    is True, a message ID will automatically be assigned if msg.mid is None.
    If msg.mid has a value, and the server supports message IDs, that value
    will be used. If the server does not support message IDs,
    KatcpVersionError will be raised.

Returns
-------
mid : string or None
    The message id, or None if no msg id is used.
def listify(args):
    if args:
        if isinstance(args, list):
            return args
        elif isinstance(args, (set, tuple, GeneratorType, range,
                               past.builtins.xrange)):
            return list(args)
        return [args]
    return []
Return args as a list.

If already a list - return as is.

>>> listify([1, 2, 3])
[1, 2, 3]

If a set - return as a list.

>>> listify(set([1, 2, 3]))
[1, 2, 3]

If a tuple - return as a list.

>>> listify(tuple([1, 2, 3]))
[1, 2, 3]

If a generator (also range / xrange) - return as a list.

>>> listify(x + 1 for x in range(3))
[1, 2, 3]
>>> from past.builtins import xrange
>>> from builtins import range
>>> listify(xrange(1, 4))
[1, 2, 3]
>>> listify(range(1, 4))
[1, 2, 3]

If a single instance of something that isn't any of the above - put as a
single element of the returned list.

>>> listify(1)
[1]

If "empty" (None or False or '' or anything else that evaluates to False),
return an empty list ([]).

>>> listify(None)
[]
>>> listify(False)
[]
>>> listify('')
[]
>>> listify(0)
[]
>>> listify([])
[]
def sort_func(variant=VARIANT1, case_sensitive=False):
    return lambda x: normalize(
        x, variant=variant, case_sensitive=case_sensitive)
A function generator that can be used for sorting.

All keywords are passed to `normalize()` and generate keywords that can be
passed to `sorted()`::

    >>> key = sort_func()
    >>> print(sorted(["fur", "far"], key=key))
    [u'far', u'fur']

Please note that `sort_func` returns a function.
def cylinder(target, throat_diameter='throat.diameter',
             throat_length='throat.length'):
    D = target[throat_diameter]
    L = target[throat_length]
    value = _sp.pi*D*L
    return value
Calculate surface area for a cylindrical throat.

Parameters
----------
target : OpenPNM Object
    The object which this model is associated with. This controls the
    length of the calculated array, and also provides access to other
    necessary properties.
throat_diameter : string
    Dictionary key to the throat diameter array. Default is 'throat.diameter'.
throat_length : string
    Dictionary key to the throat length array. Default is 'throat.length'.
def delete(customer, card):
    if isinstance(customer, resources.Customer):
        customer = customer.id
    if isinstance(card, resources.Card):
        card = card.id
    http_client = HttpClient()
    http_client.delete(routes.url(routes.CARD_RESOURCE,
                                  resource_id=card,
                                  customer_id=customer))
Delete a card from its id.

:param customer: The customer id or object
:type customer: string|Customer
:param card: The card id or object
:type card: string|Card
def get(issue_id, issue_type_id):
    return db.Issue.find_one(
        Issue.issue_id == issue_id,
        Issue.issue_type_id == issue_type_id
    )
Return issue by ID.

Args:
    issue_id (str): Unique Issue identifier
    issue_type_id (str): Type of issue to get

Returns:
    :obj:`Issue`: Returns Issue object if found, else None
def _is_allowed(input):
    gnupg_options = _get_all_gnupg_options()
    allowed = _get_options_group("allowed")
    try:
        assert allowed.issubset(gnupg_options)
    except AssertionError:
        raise UsageError("'allowed' isn't a subset of known options, diff: %s"
                         % allowed.difference(gnupg_options))
    if not isinstance(input, str):
        input = ' '.join([x for x in input])
    if isinstance(input, str):
        if input.find('_') > 0:
            if not input.startswith('--'):
                hyphenated = _hyphenate(input, add_prefix=True)
            else:
                hyphenated = _hyphenate(input)
        else:
            hyphenated = input
        try:
            assert hyphenated in allowed
        except AssertionError as ae:
            dropped = _fix_unsafe(hyphenated)
            log.warn("_is_allowed(): Dropping option '%s'..." % dropped)
            raise ProtectedOption("Option '%s' not supported." % dropped)
        else:
            return input
    return None
Check that an option or argument given to GPG is in the set of allowed
options, the latter being a strict subset of the set of all options known
to GPG.

:param str input: An input meant to be parsed as an option or flag to the
    GnuPG process. Should be formatted the same as an option or flag to the
    commandline gpg, i.e. "--encrypt-files".
:ivar frozenset gnupg_options: All known GPG options and flags.
:ivar frozenset allowed: All allowed GPG options and flags, e.g. all GPG
    options and flags which we are willing to acknowledge and parse. If we
    want to support a new option, it will need to have its own parsing class
    and its name will need to be added to this set.
:raises: :exc:`UsageError` if **input** is not a subset of the hard-coded
    set of all GnuPG options in :func:`_get_all_gnupg_options`.
    :exc:`ProtectedOption` if **input** is not in the set of allowed options.
:rtype: str
:return: The original **input** parameter, unmodified and unsanitized, if
    no errors occur.
def _PrintAnalysisStatusHeader(self, processing_status):
    self._output_writer.Write(
        'Storage file\t\t: {0:s}\n'.format(self._storage_file_path))
    self._PrintProcessingTime(processing_status)
    if processing_status and processing_status.events_status:
        self._PrintEventsStatus(processing_status.events_status)
    self._output_writer.Write('\n')
Prints the analysis status header.

Args:
    processing_status (ProcessingStatus): processing status.
def resolve_dst(self, dst_dir, src):
    if os.path.isabs(src):
        return os.path.join(dst_dir, os.path.basename(src))
    return os.path.join(dst_dir, src)
Finds the destination based on the source.

If the source is an absolute path (and there's no pattern), the file is
copied directly under the base dst_dir.
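A self-contained sketch of the same resolution logic (dropping the unused `self` so it can run standalone; POSIX-style example paths, which are assumptions, not from the original):

```python
import os.path

def resolve_dst(dst_dir, src):
    # Absolute sources are flattened to their basename inside dst_dir;
    # relative sources keep their sub-path.
    if os.path.isabs(src):
        return os.path.join(dst_dir, os.path.basename(src))
    return os.path.join(dst_dir, src)

print(resolve_dst('/backup', '/etc/nginx/nginx.conf'))
print(resolve_dst('/backup', 'conf/nginx.conf'))
```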
def get_headers_global():
    headers = dict()
    headers["applications_path_txt"] = 'Applications_Path'
    headers["channel_index_txt"] = 'Channel_Index'
    headers["channel_number_txt"] = 'Channel_Number'
    headers["channel_type_txt"] = 'Channel_Type'
    headers["comments_txt"] = 'Comments'
    headers["creator_txt"] = 'Creator'
    headers["daq_index_txt"] = 'DAQ_Index'
    headers["item_id_txt"] = 'Item_ID'
    headers["log_aux_data_flag_txt"] = 'Log_Aux_Data_Flag'
    headers["log_chanstat_data_flag_txt"] = 'Log_ChanStat_Data_Flag'
    headers["log_event_data_flag_txt"] = 'Log_Event_Data_Flag'
    headers["log_smart_battery_data_flag_txt"] = 'Log_Smart_Battery_Data_Flag'
    headers["mapped_aux_conc_cnumber_txt"] = 'Mapped_Aux_Conc_CNumber'
    headers["mapped_aux_di_cnumber_txt"] = 'Mapped_Aux_DI_CNumber'
    headers["mapped_aux_do_cnumber_txt"] = 'Mapped_Aux_DO_CNumber'
    headers["mapped_aux_flow_rate_cnumber_txt"] = 'Mapped_Aux_Flow_Rate_CNumber'
    headers["mapped_aux_ph_number_txt"] = 'Mapped_Aux_PH_Number'
    headers["mapped_aux_pressure_number_txt"] = 'Mapped_Aux_Pressure_Number'
    headers["mapped_aux_temperature_number_txt"] = 'Mapped_Aux_Temperature_Number'
    headers["mapped_aux_voltage_number_txt"] = 'Mapped_Aux_Voltage_Number'
    headers["schedule_file_name_txt"] = 'Schedule_File_Name'
    headers["start_datetime_txt"] = 'Start_DateTime'
    headers["test_id_txt"] = 'Test_ID'
    headers["test_name_txt"] = 'Test_Name'
    return headers
Defines the so-called global column headings for Arbin .res-files
def table(self, datatype=None, **kwargs):
    if config.future_deprecations:
        self.param.warning("The table method is deprecated and should no "
                           "longer be used. If using a HoloMap use "
                           "HoloMap.collapse() instead to return a Dataset.")
    from .data.interface import Interface
    from ..element.tabular import Table
    new_data = [(key, value.table(datatype=datatype, **kwargs))
                for key, value in self.data.items()]
    tables = self.clone(new_data)
    return Interface.concatenate(tables, new_type=Table)
Deprecated method to convert a MultiDimensionalMapping of Elements to a Table.
def copy_fields(layer, fields_to_copy):
    for field in fields_to_copy:
        index = layer.fields().lookupField(field)
        if index != -1:
            layer.startEditing()
            source_field = layer.fields().at(index)
            new_field = QgsField(source_field)
            new_field.setName(fields_to_copy[field])
            layer.addAttribute(new_field)
            new_index = layer.fields().lookupField(fields_to_copy[field])
            for feature in layer.getFeatures():
                attributes = feature.attributes()
                source_value = attributes[index]
                layer.changeAttributeValue(
                    feature.id(), new_index, source_value)
            layer.commitChanges()
    layer.updateFields()
Copy fields inside an attribute table.

:param layer: The vector layer.
:type layer: QgsVectorLayer

:param fields_to_copy: Dictionary of fields to copy.
:type fields_to_copy: dict
def selectfalse(table, field, complement=False):
    return select(table, field, lambda v: not bool(v), complement=complement)
Select rows where the given field evaluates `False`.
def row_csv_limiter(rows, limits=None):
    limits = [None, None] if limits is None else limits
    if len(exclude_empty_values(limits)) == 2:
        upper_limit = limits[0]
        lower_limit = limits[1]
    elif len(exclude_empty_values(limits)) == 1:
        upper_limit = limits[0]
        lower_limit = row_iter_limiter(rows, 1, -1, 1)
    else:
        upper_limit = row_iter_limiter(rows, 0, 1, 0)
        lower_limit = row_iter_limiter(rows, 1, -1, 1)
    return rows[upper_limit: lower_limit]
Limit rows by passing explicit limit values, or detect the limits making a best effort.
def visit_FunctionBody(self, node):
    for child in node.children:
        return_value = self.visit(child)
        if isinstance(child, ReturnStatement):
            return return_value
        if isinstance(child, (IfStatement, WhileStatement)):
            if return_value is not None:
                return return_value
    return NoneType()
Visitor for `FunctionBody` AST node.
def abort_expired_batches(self, request_timeout_ms, cluster):
    expired_batches = []
    to_remove = []
    count = 0
    for tp in list(self._batches.keys()):
        assert tp in self._tp_locks, 'TopicPartition not in locks dict'
        if tp in self.muted:
            continue
        with self._tp_locks[tp]:
            dq = self._batches[tp]
            for batch in dq:
                is_full = bool(bool(batch != dq[-1]) or batch.records.is_full())
                if batch.maybe_expire(request_timeout_ms,
                                      self.config['retry_backoff_ms'],
                                      self.config['linger_ms'],
                                      is_full):
                    expired_batches.append(batch)
                    to_remove.append(batch)
                    count += 1
                    self.deallocate(batch)
                else:
                    break
            if to_remove:
                for batch in to_remove:
                    dq.remove(batch)
                to_remove = []
    if expired_batches:
        log.warning("Expired %d batches in accumulator", count)
    return expired_batches
Abort the batches that have been sitting in RecordAccumulator for more than
the configured request_timeout due to metadata being unavailable.

Arguments:
    request_timeout_ms (int): milliseconds to timeout
    cluster (ClusterMetadata): current metadata for kafka cluster

Returns:
    list of ProducerBatch that were expired
def _find_datastream(self, name):
    for stream in self.data_streams:
        if stream.name == name:
            return stream
    return None
Find and return a datastream by name, if it exists; otherwise return None.
def restore(source, offset):
    backup_location = os.path.join(
        os.path.dirname(os.path.abspath(source)),
        source + '.bytes_backup')
    click.echo('Reading backup from: {location}'.format(location=backup_location))
    if not os.path.isfile(backup_location):
        click.echo('No backup found for: {source}'.format(source=source))
        return
    with open(backup_location, 'r+b') as b:
        data = b.read()
    click.echo('Restoring {c} bytes from offset {o}'.format(c=len(data), o=offset))
    with open(source, 'r+b') as f:
        f.seek(offset)
        f.write(data)
        f.flush()
    click.echo('Changes written')
Restore a smudged file from .bytes_backup
def downcaseTokens(s, l, t):
    return [tt.lower() for tt in map(_ustr, t)]
Helper parse action to convert tokens to lower case.
def int_to_words(int_val, num_words=4, word_size=32):
    max_int = 2 ** (word_size*num_words) - 1
    max_word_size = 2 ** word_size - 1
    if not 0 <= int_val <= max_int:
        raise AttributeError('integer %r is out of bounds!' % hex(int_val))
    words = []
    for _ in range(num_words):
        word = int_val & max_word_size
        words.append(int(word))
        int_val >>= word_size
    words.reverse()
    return words
Convert an int value to a list of fixed-width words.

:param int_val: an arbitrary length Python integer to be split up. Network
    byte order is assumed. Raises an AttributeError if the width of the
    integer (in bits) exceeds word_size * num_words.
:param num_words: number of words expected in return value.
:param word_size: size/width of individual words (in bits).
:return: a list of fixed width words based on provided parameters.
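The function is fully self-contained, so its splitting behavior can be checked directly, e.g. splitting a 32-bit value into two 16-bit words:

```python
def int_to_words(int_val, num_words=4, word_size=32):
    max_int = 2 ** (word_size * num_words) - 1
    max_word_size = 2 ** word_size - 1
    if not 0 <= int_val <= max_int:
        raise AttributeError('integer %r is out of bounds!' % hex(int_val))
    words = []
    for _ in range(num_words):
        # Peel off the least significant word, then shift it away.
        words.append(int(int_val & max_word_size))
        int_val >>= word_size
    words.reverse()  # network byte order: most significant word first
    return words

print(int_to_words(0x11223344, num_words=2, word_size=16))  # [4386, 13124] == [0x1122, 0x3344]
print(int_to_words(1, num_words=4, word_size=32))           # [0, 0, 0, 1]
```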
def create(self):
    self.server_attrs = self.consul.create_server(
        "%s-%s" % (self.stack.name, self.name),
        self.disk_image_id,
        self.instance_type,
        self.ssh_key_name,
        tags=self.tags,
        availability_zone=self.availability_zone,
        timeout_s=self.launch_timeout_s,
        security_groups=self.security_groups,
        **self.provider_extras
    )
    log.debug('Post launch delay: %d s' % self.post_launch_delay_s)
    time.sleep(self.post_launch_delay_s)
Launches a new server instance.
def event_details(event_id=None, lang="en"):
    if event_id:
        cache_name = "event_details.%s.%s.json" % (event_id, lang)
        params = {"event_id": event_id, "lang": lang}
    else:
        cache_name = "event_details.%s.json" % lang
        params = {"lang": lang}
    data = get_cached("event_details.json", cache_name, params=params)
    events = data["events"]
    return events.get(event_id) if event_id else events
This resource returns static details about available events.

:param event_id: Only list this event.
:param lang: Show localized texts in the specified language.

The response is a dictionary where the key is the event id, and the value
is a dictionary containing the following properties:

name (string)
    The name of the event.

level (int)
    The event level.

map_id (int)
    The map where the event takes place.

flags (list)
    A list of additional flags. Possible flags are:

    ``group_event``
        For group events.

    ``map_wide``
        For map-wide events.

location (object)
    The location of the event.

    type (string)
        The type of the event location, can be ``sphere``, ``cylinder``
        or ``poly``.

    center (list)
        X, Y, Z coordinates of the event location.

    radius (number)
        (type ``sphere`` and ``cylinder``) Radius of the event location.

    z_range (list)
        (type ``poly``) List of Minimum and Maximum Z coordinate.

    points (list)
        (type ``poly``) List of Points (X, Y) denoting the event location
        perimeter.

If an event_id is given, only the values for that event are returned.
def compile_excludes(self):
    self.compiled_exclude_files = []
    for pattern in self.exclude_files:
        try:
            self.compiled_exclude_files.append(re.compile(pattern))
        except re.error as e:
            raise ValueError(
                "Bad python regex in exclude '%s': %s" % (pattern, str(e)))
Compile a set of regexps for files to be excluded from scans.
def retrieve_manual_indices(self):
    if self.parent_changed:
        pass
    else:
        pbool = map_indices_child2root(
            child=self.rtdc_ds,
            child_indices=np.where(~self.manual)[0]).tolist()
        pold = self._man_root_ids
        pall = sorted(list(set(pbool + pold)))
        pvis_c = map_indices_root2child(child=self.rtdc_ds,
                                        root_indices=pall).tolist()
        pvis_p = map_indices_child2root(child=self.rtdc_ds,
                                        child_indices=pvis_c).tolist()
        phid = list(set(pall) - set(pvis_p))
        all_idx = list(set(pbool + phid))
        self._man_root_ids = sorted(all_idx)
    return self._man_root_ids
Read manually excluded indices from self.manual.

Read from the boolean array `self.manual`, index all occurrences of
`False`, find the corresponding indices in the root hierarchy parent,
return those, and store them in `self._man_root_ids` as well.

Notes
-----
This method also retrieves hidden indices, i.e. events that are not part
of the current hierarchy child but which have been manually excluded
before and are now hidden because a hierarchy parent filtered them out.

If `self.parent_changed` is `True`, i.e. the parent applied a filter and
the child did not yet hear about this, then nothing is computed and
`self._man_root_ids` is returned as-is. This is important, because the
size of the current filter would not match the size of the filtered
events of the parent, and thus index-mapping would not work.
def promote_owner(self, stream_id, user_id):
    req_hook = 'pod/v1/room/' + stream_id + '/membership/promoteOwner'
    req_args = '{ "id": %s }' % user_id
    status_code, response = self.__rest__.POST_query(req_hook, req_args)
    self.logger.debug('%s: %s' % (status_code, response))
    return status_code, response
Promote a user to owner in a stream.
def get_version():
    if isinstance(VERSION[-1], str):
        return '.'.join(map(str, VERSION[:-1])) + VERSION[-1]
    return '.'.join(map(str, VERSION))
Returns a string representation of the current SDK version.
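The original reads a module-level `VERSION` tuple; rewritten here to take the tuple as a parameter purely so the formatting can be demonstrated standalone (the example tuples are assumptions, with a trailing string marking a pre-release suffix):

```python
def get_version(version):
    # A trailing string component (e.g. 'rc1') is appended without a dot.
    if isinstance(version[-1], str):
        return '.'.join(map(str, version[:-1])) + version[-1]
    return '.'.join(map(str, version))

print(get_version((1, 4, 0)))         # 1.4.0
print(get_version((2, 0, 0, 'rc1')))  # 2.0.0rc1
```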
def nrows_expected(self):
    return np.prod([i.cvalues.shape[0] for i in self.index_axes])
Based on our axes, compute the expected nrows.
def address_inline(request, prefix="", country_code=None,
                   template_name="postal/form.html"):
    country_prefix = "country"
    prefix = request.POST.get('prefix', prefix)
    if prefix:
        country_prefix = prefix + '-country'
    country_code = request.POST.get(country_prefix, country_code)
    form_class = form_factory(country_code=country_code)
    if request.method == "POST":
        data = {}
        for (key, val) in request.POST.items():
            if val is not None and len(val) > 0:
                data[key] = val
        data.update({country_prefix: country_code})
        form = form_class(prefix=prefix, initial=data)
    else:
        form = form_class(prefix=prefix)
    return render_to_string(template_name, RequestContext(request, {
        "form": form,
        "prefix": prefix,
    }))
Displays a postal address form with localized fields.
def generate_static_matching(app, directory_serve_app=DirectoryApp):
    static_dir = os.path.join(os.path.dirname(app.__file__), 'static')
    try:
        static_app = directory_serve_app(static_dir, index_page='')
    except OSError:
        return None
    static_pattern = '/static/{app.__name__}/*path'.format(app=app)
    static_name = '{app.__name__}:static'.format(app=app)
    return Matching(static_pattern, static_app, static_name)
Creating a matching for a WSGI application to serve static files for the
passed app. Static files will be collected from a directory named 'static'
under the passed application::

    ./blog/static/

This example is with an application named `blog`. URLs for static files in
the static directory will begin with /static/app_name/, so in the blog app
case, if the directory has a css/main.css file, the file will be published
like this::

    yoursite.com/static/blog/css/main.css

And you can get this URL by reversing from the matching object::

    matching.reverse('blog:static', path=['css', 'main.css'])
def _summarize_coefficients(top_coefs, bottom_coefs):
    def get_row_name(row):
        if row['index'] is None:
            return row['name']
        else:
            return "%s[%s]" % (row['name'], row['index'])

    if len(top_coefs) == 0:
        top_coefs_list = [('No Positive Coefficients', _precomputed_field(''))]
    else:
        top_coefs_list = [(get_row_name(row), _precomputed_field(row['value']))
                          for row in top_coefs]
    if len(bottom_coefs) == 0:
        bottom_coefs_list = [('No Negative Coefficients',
                              _precomputed_field(''))]
    else:
        bottom_coefs_list = [(get_row_name(row),
                              _precomputed_field(row['value']))
                             for row in bottom_coefs]
    return ([top_coefs_list, bottom_coefs_list],
            ['Highest Positive Coefficients', 'Lowest Negative Coefficients'])
Return a tuple of sections and section titles.

Sections are a pretty print of model coefficients.

Parameters
----------
top_coefs : SFrame of top k coefficients
bottom_coefs : SFrame of bottom k coefficients

Returns
-------
(sections, section_titles) : tuple
    sections : list
        summary sections for top/bottom k coefficients
    section_titles : list
        summary section titles
def GetNextNode(self, modes, innode):
    nodes = N.where(self.innodes == innode)
    if nodes[0].size == 0:
        return -1
    defaultindex = N.where(self.keywords[nodes] == 'default')
    if len(defaultindex[0]) != 0:
        outnode = self.outnodes[nodes[0][defaultindex[0]]]
    for mode in modes:
        result = self.keywords[nodes].count(mode)
        if result != 0:
            index = N.where(self.keywords[nodes] == mode)
            outnode = self.outnodes[nodes[0][index[0]]]
    return outnode
GetNextNode returns the outnode that matches an element from the modes list, starting at the given innode. This method isn't actually used; it's just a helper method for debugging purposes.