def friendly_format(self):
    if self.description is not None:
        msg = self.description
    else:
        msg = 'errorCode: {} / detailCode: {}'.format(
            self.errorCode, self.detailCode
        )
    return self._fmt(self.name, msg)
Serialize to a format more suitable for displaying to end users.
def _search_for_files(parts):
    file_parts = []
    for part in parts:
        if isinstance(part, list):
            file_parts.extend(_search_for_files(part))
        elif isinstance(part, FileToken):
            file_parts.append(part)
    return file_parts
Given a list of parts, return all of the nested file parts.
def rbinomial(n, p, size=None):
    if not size:
        size = None
    return np.random.binomial(np.ravel(n), np.ravel(p), size)
Random binomial variates.
def insert_sort(node, target): sort = target.sort lang = target.lang collator = Collator.createInstance(Locale(lang) if lang else Locale()) for child in target.tree: if collator.compare(sort(child) or '', sort(node) or '') > 0: child.addprevious(node) break else: ...
Insert node into sorted position in target tree. Uses sort function and language from target
def append_metrics(self, metrics, df_name):
    dataframe = self._dataframes[df_name]
    _add_new_columns(dataframe, metrics)
    dataframe.loc[len(dataframe)] = metrics
Append new metrics to the selected dataframe.

Parameters
----------
metrics : metric.EvalMetric
    New metrics to be added.
df_name : str
    Name of the dataframe to be modified.
def command(self, command, raw=False, timeout_ms=None):
    return ''.join(self.streaming_command(command, raw, timeout_ms))
Run the given command and return the output.
def eval_py(self, _globals, _locals): try: params = eval(self.script, _globals, _locals) except NameError as e: raise Exception( 'Failed to evaluate parameters: {}' .format(str(e)) ) except ResolutionError as e: rais...
Evaluates a file containing a Python params dictionary.
def delete_budget(self, model_uuid):
    return make_request(
        '{}model/{}/budget'.format(self.url, model_uuid),
        method='DELETE',
        timeout=self.timeout,
        client=self._client)
Delete a budget.

@param model_uuid the model UUID.
@return a success string from the plans server.
@raise ServerError via make_request.
def get_theme_dir():
    return os.path.abspath(os.path.join(os.path.dirname(__file__), "theme"))
Returns path to directory containing this package's theme. This is designed to be used when setting the ``html_theme_path`` option within Sphinx's ``conf.py`` file.
def GaussianBlur(X, ksize_width, ksize_height, sigma_x, sigma_y):
    return image_transform(
        X,
        cv2.GaussianBlur,
        ksize=(ksize_width, ksize_height),
        sigmaX=sigma_x,
        sigmaY=sigma_y
    )
Apply Gaussian blur to the given data.

Args:
    X: data to blur
    ksize_width, ksize_height: Gaussian kernel size
    sigma_x, sigma_y: Gaussian kernel standard deviation in the X and Y directions
def __validate_required_fields(self, document): try: required = set(field for field, definition in self.schema.items() if self._resolve_rules_set(definition). get('required') is True) except AttributeError: if self.is_child an...
Validates that required fields are not missing. :param document: The document being validated.
def cancelMktData(self, contract: Contract): ticker = self.ticker(contract) reqId = self.wrapper.endTicker(ticker, 'mktData') if reqId: self.client.cancelMktData(reqId) else: self._logger.error( 'cancelMktData: ' f'No reqId found for contract {cont...
Unsubscribe from realtime streaming tick data. Args: contract: The exact contract object that was used to subscribe with.
def untrace_modules(self, modules):
    for module in modules:
        foundations.trace.untrace_module(module)
    self.__model__refresh_attributes()
    return True
Untraces given modules.

:param modules: Modules to untrace.
:type modules: list
:return: Method success.
:rtype: bool
def open_target_group_for_form(self, form): target = self.first_container_with_errors(form.errors.keys()) if target is None: target = self.fields[0] if not getattr(target, '_active_originally_included', None): target.active = True return target ...
Makes sure that the first group that should be open is open. This is either the first group with errors or the first group in the container, unless that first group was originally set to active=False.
def column(self, key):
    for row in self.rows:
        if key in row:
            yield row[key]
Iterator over a given column, skipping steps that don't have that key
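A minimal, self-contained sketch of the behavior, rewritten as a free function over a hypothetical list of row dicts (the original is a method reading `self.rows`):

```python
def column(rows, key):
    # Yield the value for `key` from each row that has it;
    # rows missing the key are skipped rather than raising KeyError.
    for row in rows:
        if key in row:
            yield row[key]

rows = [{"loss": 0.9}, {"acc": 0.5}, {"loss": 0.4}]
print(list(column(rows, "loss")))  # → [0.9, 0.4]
```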
def structure_results(res): out = {'hits': {'hits': []}} keys = [u'admin1_code', u'admin2_code', u'admin3_code', u'admin4_code', u'alternativenames', u'asciiname', u'cc2', u'coordinates', u'country_code2', u'country_code3', u'dem', u'elevation', u'feature_class', u'feature_co...
Format Elasticsearch result as Python dictionary
def _create_list_of_array_controllers(self): headers, array_uri, array_settings = ( self._get_array_controller_resource()) array_uri_links = [] if ('links' in array_settings and 'Member' in array_settings['links']): array_uri_links = array_settings['links'...
Creates the list of Array Controller URIs.

:raises: IloCommandNotSupportedError if the ArrayControllers resource doesn't have the "Member" key.
:returns: list of ArrayControllers.
def _expand_qname(self, qname): if type(qname) is not rt.URIRef: raise TypeError("Cannot expand qname of type {}, must be URIRef" .format(type(qname))) for ns in self.graph.namespaces(): if ns[0] == qname.split(':')[0]: return rt.URIRef...
expand a qualified name's namespace prefix to include the resolved namespace root url
def slice_sequence(self,start,end): l = self.length indexstart = start indexend = end ns = [] tot = 0 for r in self._rngs: tot += r.length n = r.copy() if indexstart > r.length: indexstart-=r.length continue n.start = n.start+indexsta...
Slice the mapping by the position in the sequence First coordinate is 0-indexed start Second coordinate is 1-indexed finish
def _did_retrieve(self, connection):
    response = connection.response
    try:
        self.from_dict(response.data[0])
    except Exception:
        pass
    return self._did_perform_standard_operation(connection)
Callback called after fetching the object
def abs_path_from_base(base_path, rel_path):
    return os.path.abspath(
        os.path.join(
            os.path.dirname(sys._getframe(1).f_code.co_filename),
            base_path, rel_path
        )
    )
Join a base and a relative path and return an absolute path to the resulting location. Args: base_path: str Relative or absolute path to prepend to ``rel_path``. rel_path: str Path relative to the location of the module file from which this function is called. Returns: ...
def estimate(self, upgrades):
    val = 0
    for u in upgrades:
        val += u.estimate()
    return val
Estimate the time needed to apply upgrades.

If an upgrade does not specify an estimate, it is assumed to be on the order of 1 second.

:param upgrades: List of upgrades sorted in topological order.
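A self-contained sketch of the summing behavior, with a hypothetical `Upgrade` class whose `estimate()` supplies the documented 1-second default:

```python
class Upgrade:
    # Hypothetical upgrade object: estimate() falls back to 1 second
    # when no explicit estimate was provided.
    def __init__(self, seconds=None):
        self.seconds = seconds

    def estimate(self):
        return self.seconds if self.seconds is not None else 1

def estimate(upgrades):
    # Total time is simply the sum of the per-upgrade estimates.
    return sum(u.estimate() for u in upgrades)

print(estimate([Upgrade(5), Upgrade(), Upgrade(2)]))  # → 8
```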
def write_summary(summary: dict, cache_dir: str):
    summary['accessed'] = time()
    with open(join(cache_dir, 'summary.json'), 'w') as summary_file:
        summary_file.write(json.dumps(summary, indent=4, sort_keys=True))
Write the `summary` JSON to `cache_dir`. Updates the accessed timestamp to now before writing.
def pld3(pos, line_vertex, line_dir): pos = np.atleast_2d(pos) line_vertex = np.atleast_1d(line_vertex) line_dir = np.atleast_1d(line_dir) c = np.cross(line_dir, line_vertex - pos) n1 = np.linalg.norm(c, axis=1) n2 = np.linalg.norm(line_dir) out = n1 / n2 if out.ndim == 1 and len(out) ==...
Calculate the point-line-distance for given point and line.
def _frombuffer(ptr, frames, channels, dtype):
    framesize = channels * dtype.itemsize
    data = np.frombuffer(ffi.buffer(ptr, frames * framesize), dtype=dtype)
    data.shape = -1, channels
    return data
Create NumPy array from a pointer to some memory.
def validate_object_id(object_id):
    result = re.match(OBJECT_ID_RE, str(object_id))
    if not result:
        print("'%s' appears not to be a valid 990 object_id" % object_id)
        raise RuntimeError(OBJECT_ID_MSG)
    return object_id
It's easy to make a mistake entering these, validate the format
def update_terms(self, project_id, data, fuzzy_trigger=None): kwargs = {} if fuzzy_trigger is not None: kwargs['fuzzy_trigger'] = fuzzy_trigger data = self._run( url_path="terms/update", id=project_id, data=json.dumps(data), **kwargs ...
Updates project terms. Lets you change the text, context, reference, plural and tags. >>> data = [ { "term": "Add new list", "context": "", "new_term": "Save list", "new_context": "", "reference"...
def clean_stale_refs(self, local_refs=None): try: if pygit2.GIT_FETCH_PRUNE: return [] except AttributeError: pass if self.credentials is not None: log.debug( 'The installed version of pygit2 (%s) does not support ' ...
Clean stale local refs so they don't appear as fileserver environments
def drop_right(self, n):
    return self._transform(transformations.CACHE_T,
                           transformations.drop_right_t(n))
Drops the last n elements of the sequence.

>>> seq([1, 2, 3, 4, 5]).drop_right(2)
[1, 2, 3]

:param n: number of elements to drop
:return: sequence with last n elements dropped
def hold(self, policy="combine"): if self._hold is not None and self._hold != policy: log.warning("hold already active with '%s', ignoring '%s'" % (self._hold, policy)) return if policy not in HoldPolicy: raise ValueError("Unknown hold policy %r" % policy) sel...
Activate a document hold. While a hold is active, no model changes will be applied, or trigger callbacks. Once ``unhold`` is called, the events collected during the hold will be applied according to the hold policy. Args: hold ('combine' or 'collect', optional) ...
def kill(self):
    for process in list(self.processes):
        process["subprocess"].send_signal(signal.SIGKILL)
    self.stop_watch()
Kills the processes right now with a SIGKILL
def pull_tag_dict(data):
    tags = data.pop('Tags', {}) or {}
    if tags:
        proper_tags = {}
        for tag in tags:
            proper_tags[tag['Key']] = tag['Value']
        tags = proper_tags
    return tags
This will pull out a list of Tag Name-Value objects and return it as a dictionary.

:param data: The dict collected from the collector.
:returns dict: A dict of the tag names and their corresponding values.
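A self-contained sketch of the transform on a sample AWS-style payload (the `"Name"` field and tag values below are made up for illustration):

```python
def pull_tag_dict(data):
    # Pop the list of {"Key": ..., "Value": ...} pairs, which may be
    # absent or None, and flatten it into a plain dict.
    tags = data.pop('Tags', {}) or {}
    if tags:
        tags = {tag['Key']: tag['Value'] for tag in tags}
    return tags

data = {"Name": "db", "Tags": [{"Key": "env", "Value": "prod"}]}
print(pull_tag_dict(data))  # → {'env': 'prod'}
print(data)                 # → {'Name': 'db'}  (Tags was popped)
```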
def fromTFExample(bytestr):
    example = tf.train.Example()
    example.ParseFromString(bytestr)
    return example
Deserializes a TFExample from a byte string
def job_error_message(self, job, queue, to_be_requeued, exception, trace=None): return '[%s|%s|%s] error: %s [%s]' % (queue._cached_name, job.pk.get(), job._cached_identifier, str(exception), 'req...
Return the message to log when a job raised an error
def http_datetime_str_from_dt(dt):
    epoch_seconds = ts_from_dt(dt)
    return email.utils.formatdate(epoch_seconds, localtime=False, usegmt=True)
Format datetime to HTTP Full Date format.

Args:
    dt : datetime
        - tz-aware: Used in the formatted string.
        - tz-naive: Assumed to be in UTC.

Returns:
    str: The returned format is a fixed-length subset of that defined by RFC 1123 and is the preferred format fo...
def int_to_bytes(i, minlen=1, order='big'):
    blen = max(minlen, PGPObject.int_byte_len(i), 1)
    if six.PY2:
        r = iter(_ * 8 for _ in (range(blen) if order == 'little'
                                 else range(blen - 1, -1, -1)))
        return bytes(bytearray((i >> c) & 0xff for c in r))
    return i.to_bytes(blen, order)
convert integer to bytes
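A Python-3-only sketch of the same conversion: the original's `six.PY2` branch and `PGPObject.int_byte_len` helper are dropped, with the byte length computed directly from the integer's bit length (an assumption about what the helper does):

```python
def int_to_bytes(i, minlen=1, order='big'):
    # Byte length is at least minlen, at least 1, and large enough
    # to hold the integer's bits.
    blen = max(minlen, (i.bit_length() + 7) // 8, 1)
    return i.to_bytes(blen, order)

print(int_to_bytes(1, minlen=2))             # → b'\x00\x01'
print(int_to_bytes(0x1234, order='little'))  # → b'4\x12'
```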
def quick_response(self, status_code): translator = Translator(environ=self.environ) if status_code == 404: self.status(404) self.message(translator.trans('http_messages.404')) elif status_code == 401: self.status(401) self.message(translato...
Quickly construct response using a status code
def add_range(self, name, min=None, max=None):
    self._ranges.append(_mk_range_bucket(name, 'min', 'max', min, max))
    return self
Add a numeric range.

:param str name: the name by which the range is accessed in the results
:param int | float min: Lower range bound
:param int | float max: Upper range bound
:return: This object; suitable for method chaining
def _map_in_out(self, inside_var_name):
    for out_name, in_name in self.outside_name_map.items():
        if inside_var_name == in_name:
            return out_name
    return None
Return the external name of a variable mapped from inside.
def generate_tokens(doc, regex=CRE_TOKEN, strip=True, nonwords=False): r if isinstance(regex, basestring): regex = re.compile(regex) for w in regex.finditer(doc): if w: w = w.group() if strip: w = w.strip(r'-_*`()}{' + r"'") if w and (nonwo...
r"""Return a sequence of words or tokens, using a re.match iteratively through the str >>> doc = "John D. Rock\n\nObjective: \n\tSeeking a position as Software --Architect-- / _Project Lead_ that can utilize my expertise and" >>> doc += " experiences in business application development and proven records in de...
def _start_vibration_win(self, left_motor, right_motor): xinput_set_state = self.manager.xinput.XInputSetState xinput_set_state.argtypes = [ ctypes.c_uint, ctypes.POINTER(XinputVibration)] xinput_set_state.restype = ctypes.c_uint vibration = XinputVibration( int(l...
Start the vibration, which will run until stopped.
def register(self, event, keys):
    if self.running:
        raise RuntimeError("Can't register while running")
    handler = self._handlers.get(event, None)
    if handler is not None:
        raise ValueError("Event {} already registered".format(event))
    self._handlers[event] = EventHandle...
Register a new event with available keys.

Raises ValueError when the event has already been registered.

Usage:
    dispatch.register("my_event", ["foo", "bar", "baz"])
def collect_and_report(self): logger.debug("Metric reporting thread is now alive") def metric_work(): self.process() if self.agent.is_timed_out(): logger.warn("Host agent offline for >1 min. Going to sit in a corner...") self.agent.reset() ...
Target function for the metric reporting thread. This is a simple loop to collect and report entity data every 1 second.
def get_topic_partition_metadata(hosts): kafka_client = KafkaToolClient(hosts, timeout=10) kafka_client.load_metadata_for_topics() topic_partitions = kafka_client.topic_partitions resp = kafka_client.send_metadata_request() for _, topic, partitions in resp.topics: for partition_error, partit...
Returns topic-partition metadata from Kafka broker. kafka-python 1.3+ doesn't include partition metadata information in topic_partitions so we extract it from metadata ourselves.
def _parse_launch_error(data):
    return LaunchFailure(
        data.get(ERROR_REASON, None),
        data.get(APP_ID),
        data.get(REQUEST_ID),
    )
Parses a LAUNCH_ERROR message and returns a LaunchFailure object. :type data: dict :rtype: LaunchFailure
def _raw_predict(self, Xnew, full_cov=False, kern=None): mu, var = self.posterior._raw_predict(kern=self.kern if kern is None else kern, Xnew=Xnew, pred_var=self._predictive_variable, full_cov=full_cov) if self.mean_function is not None: mu += se...
For making predictions, does not account for normalization or likelihood full_cov is a boolean which defines whether the full covariance matrix of the prediction is computed. If full_cov is False (default), only the diagonal of the covariance is returned. .. math:: p(f*|X*,...
def AgregarReceptor(self, cod_caracter, **kwargs):
    "Add the receiver's data to the settlement."
    d = {'codCaracter': cod_caracter}
    self.solicitud['receptor'].update(d)
    return True
Add the receiver's data to the settlement (liquidación).
def sparse_arrays(self, value):
    if not isinstance(value, bool):
        raise TypeError('sparse_arrays attribute must be a logical type.')
    self._sparse_arrays = value
Validate and enable sparse arrays.
def push(self, key, value, *, section=DataStoreDocumentSection.Data): key_notation = '.'.join([section, key]) result = self._collection.update_one( {"_id": ObjectId(self._workflow_id)}, { "$push": { key_notation: self._encode_value(value) ...
Appends a value to a list in the specified section of the document.

Args:
    key (str): The key pointing to the value that should be stored/updated. It supports MongoDB's dot notation for nested fields.
    value: The value that should be appended to a list in the data store.
    ...
def dump(self, path):
    with open(path, 'w') as props:
        Properties.dump(self._props, props)
Saves the pushdb as a properties file to the given path.
def percentile(self, percentile): out = scipy.percentile(self.value, percentile, axis=0) if self.name is not None: name = '{}: {} percentile'.format(self.name, _ordinal(percentile)) else: name = None return FrequencySeries(out, epoch=self.epoch, channel=self.chann...
Calculate a given spectral percentile for this `Spectrogram`.

Parameters
----------
percentile : `float`
    percentile (0 - 100) of the bins to compute

Returns
-------
spectrum : `~gwpy.frequencyseries.FrequencySeries`
    the given percentile `Frequenc...
def _GetWinevtRcDatabaseReader(self): if not self._winevt_database_reader and self._data_location: database_path = os.path.join( self._data_location, self._WINEVT_RC_DATABASE) if not os.path.isfile(database_path): return None self._winevt_database_reader = ( winevt_rc.W...
Opens the Windows Event Log resource database reader. Returns: WinevtResourcesSqlite3DatabaseReader: Windows Event Log resource database reader or None.
def set_pause_param(self, autoneg, rx_pause, tx_pause): ecmd = array.array('B', struct.pack('IIII', ETHTOOL_SPAUSEPARAM, bool(autoneg), bool(rx_pause), bool(tx_pause))) buf_addr, _buf_len = ecmd.buffer_info() ifreq = struct.pack('16sP', self.name, buf_addr) fcntl.ioctl(sockfd...
Ethernet has flow control! The inter-frame pause can be adjusted, by auto-negotiation through an ethernet frame type with a simple two-field payload, and by setting it explicitly. http://en.wikipedia.org/wiki/Ethernet_flow_control
def shutdown(at_time=None):
    cmd = ['shutdown', '-h', ('{0}'.format(at_time) if at_time else 'now')]
    ret = __salt__['cmd.run'](cmd, python_shell=False)
    return ret
Shutdown a running system.

at_time
    The wait time in minutes before the system will be shutdown.

CLI Example:

.. code-block:: bash

    salt '*' system.shutdown 5
def compute_voltages(grid, configs_raw, potentials_raw): voltages = [] for config, potentials in zip(configs_raw, potentials_raw): print('config', config) e3_node = grid.get_electrode_node(config[2]) e4_node = grid.get_electrode_node(config[3]) print(e3_node, e4_node) pri...
Given a list of potential distribution and corresponding four-point spreads, compute the voltages Parameters ---------- grid: crt_grid object the grid is used to infer electrode positions configs_raw: Nx4 array containing the measurement configs (1-indexed) potentials_raw: list ...
def icons(self, strip_ext=False):
    result = [f for f in self._stripped_files if self._icons_pattern.match(f)]
    if strip_ext:
        result = [strip_suffix(f, '\.({ext})'.format(ext=self._icons_ext),
                               regex=True)
                  for f in result]
    return result
Get all icons in this DAP, optionally strip extensions
def HA1(realm, username, password, algorithm):
    if not realm:
        realm = u''
    return H(b":".join([username.encode('utf-8'),
                        realm.encode('utf-8'),
                        password.encode('utf-8')]), algorithm)
Create HA1 hash by realm, username, password HA1 = md5(A1) = MD5(username:realm:password)
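A minimal sketch using only the stdlib, assuming the original's `H(..., algorithm)` reduces to an MD5 hex digest; checked against the well-known example credentials from RFC 2617:

```python
import hashlib

def HA1(realm, username, password):
    # Sketch: HA1 = MD5(username:realm:password), with a missing
    # realm treated as the empty string (as in the original).
    realm = realm or ''
    a1 = b":".join(part.encode('utf-8')
                   for part in (username, realm, password))
    return hashlib.md5(a1).hexdigest()

# RFC 2617's example credentials (Mufasa / Circle Of Life):
print(HA1("testrealm@host.com", "Mufasa", "Circle Of Life"))
```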
def get_icon(self, iconset): theme = iconset.attrib.get('theme') if theme is not None: return self._object_factory.createQObject("QIcon.fromTheme", 'icon', (self._object_factory.asString(theme), ), is_attribute=False) if iconset.text is None: ...
Return an icon described by the given iconset tag.
def exit(self, code=None, msg=None):
    if code is None:
        code = self.tcex.exit_code
        if code == 3:
            self.tcex.log.info(u'Changing exit code from 3 to 0.')
            code = 0
    elif code not in [0, 1]:
        code = 1
    self.tcex.exit(code, msg)
Playbook wrapper on TcEx exit method.

Playbooks do not support partial failures, so we change the exit code from 3 to 1 and call it a partial success instead.

Args:
    code (Optional [integer]): The exit code value for the app.
def generate_index(fn, cols=None, names=None, sep=" "): assert cols is not None, "'cols' was not set" assert names is not None, "'names' was not set" assert len(cols) == len(names) bgzip, open_func = get_open_func(fn, return_fmt=True) data = pd.read_csv(fn, sep=sep, engine="c", usecols=cols, names=n...
Build an index for the given file.

Args:
    fn (str): the name of the file.
    cols (list): a list containing the columns to keep (as int).
    names (list): the names corresponding to the columns to keep (as str).
    sep (str): the field separator.

Returns:
    pandas.DataFrame: the index.
def func(self):
    fn = self.engine.query.sense_func_get(
        self.observer.name, self.sensename, *self.engine._btt()
    )
    if fn is not None:
        return SenseFuncWrap(self.observer, fn)
Return the function most recently associated with this sense.
def get_default_connection(): tid = id(threading.current_thread()) conn = _conn_holder.get(tid) if not conn: with(_rlock): if 'project_endpoint' not in _options and 'project_id' not in _options: _options['project_endpoint'] = helper.get_project_endpoint_from_env() if 'credentials' not in _...
Returns the default datastore connection. Defaults endpoint to helper.get_project_endpoint_from_env() and credentials to helper.get_credentials_from_env(). Use set_options to override defaults.
def get_container_version():
    root_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
    version_file = os.path.join(root_dir, 'VERSION')
    if os.path.exists(version_file):
        with open(version_file) as f:
            return f.read()
    return ''
Return the version of the docker container running the present server, or '' if not in a container
def contains_peroxide(structure, relative_cutoff=1.1):
    ox_type = oxide_type(structure, relative_cutoff)
    if ox_type == "peroxide":
        return True
    else:
        return False
Determines if a structure contains peroxide anions.

Args:
    structure (Structure): Input structure.
    relative_cutoff: The peroxide bond distance is 1.49 Angstrom. relative_cutoff * 1.49 stipulates the maximum distance two O atoms must be from each other to be considered a peroxid...
def files(self): for header in (r"(.*)\t\[\[\[1\n", r"^(\d+)\n$"): header = re.compile(header) filename = None self.fd.seek(0) line = self.readline() while line: m = header.match(line) if m is not None: ...
Yields archive file information.
def wrap_viscm(cmap, dpi=100, saveplot=False): from viscm import viscm viscm(cmap) fig = plt.gcf() fig.set_size_inches(22, 10) plt.show() if saveplot: fig.savefig('figures/eval_' + cmap.name + '.png', bbox_inches='tight', dpi=dpi) fig.savefig('figures/eval_' + cmap.name + '.pdf',...
Evaluate goodness of colormap using perceptual deltas. :param cmap: Colormap instance. :param dpi=100: dpi for saved image. :param saveplot=False: Whether to save the plot or not.
def predict(self, test_X): with self.tf_graph.as_default(): with tf.Session() as self.tf_session: self.tf_saver.restore(self.tf_session, self.model_path) feed = { self.input_data: test_X, self.keep_prob: 1 } ...
Predict the labels for the test set.

Parameters
----------
test_X : array_like, shape (n_samples, n_features)
    Test data.

Returns
-------
array_like, shape (n_samples,) : predicted labels.
def reshuffle(expr, by=None, sort=None, ascending=True):
    by = by or RandomScalar()
    grouped = expr.groupby(by)
    if sort:
        grouped = grouped.sort_values(sort, ascending=ascending)
    return ReshuffledCollectionExpr(_input=grouped, _schema=expr._schema)
Reshuffle data.

:param expr: the collection to reshuffle
:param by: the sequence or scalar to shuffle by. RandomScalar as default
:param sort: the sequence or scalar to sort.
:param ascending: True if ascending else False
:return: collection
def datetime_to_httpdate(dt):
    if isinstance(dt, (int, float)):
        return format_date_time(dt)
    elif isinstance(dt, datetime):
        return format_date_time(datetime_to_timestamp(dt))
    else:
        raise TypeError("expected datetime.datetime or timestamp (int/float),"
                        " got '%s'...
Convert datetime.datetime or Unix timestamp to HTTP date.
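A stdlib-only sketch of the same dispatch, assuming `format_date_time` is `wsgiref.handlers.format_date_time` and that the original's `datetime_to_timestamp` helper treats naive datetimes as UTC:

```python
from datetime import datetime, timezone
from wsgiref.handlers import format_date_time

def datetime_to_httpdate(dt):
    # Accept either a Unix timestamp or a datetime; naive datetimes
    # are assumed to be UTC before converting to a timestamp.
    if isinstance(dt, (int, float)):
        return format_date_time(dt)
    if isinstance(dt, datetime):
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        return format_date_time(dt.timestamp())
    raise TypeError("expected datetime.datetime or timestamp (int/float), "
                    "got %r" % type(dt))

print(datetime_to_httpdate(0))  # → Thu, 01 Jan 1970 00:00:00 GMT
```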
def _update_health_monitor_with_new_ips(route_spec, all_ips, q_monitor_ips): new_all_ips = \ sorted(set(itertools.chain.from_iterable(route_spec.values()))) if new_all_ips != all_ips: logging.debug("New route spec detected. Updating " ...
Take the current route spec and compare it to the current list of known IP addresses. If the route spec mentions a different set of IPs, update the monitoring thread with that new list. Return the current set of IPs mentioned in the route spec.
def closedopen(lower_value, upper_value):
    return Interval(Interval.CLOSED, lower_value, upper_value, Interval.OPEN)
Helper function to construct an interval object with a closed lower bound and an open upper bound. For example:

>>> closedopen(100.2, 800.9)
[100.2, 800.9)
def to_ufo_glyph_background(self, glyph, layer): if not layer.hasBackground: return background = layer.background ufo_layer = self.to_ufo_background_layer(glyph) new_glyph = ufo_layer.newGlyph(glyph.name) width = background.userData[BACKGROUND_WIDTH_KEY] if width is not None: new...
Set glyph background.
def nl_socket_alloc(cb=None): cb = cb or nl_cb_alloc(default_cb) if not cb: return None sk = nl_sock() sk.s_cb = cb sk.s_local.nl_family = getattr(socket, 'AF_NETLINK', -1) sk.s_peer.nl_family = getattr(socket, 'AF_NETLINK', -1) sk.s_seq_expect = sk.s_seq_next = int(time.time()) ...
Allocate new Netlink socket. Does not yet actually open a socket. https://github.com/thom311/libnl/blob/libnl3_2_25/lib/socket.c#L206 Also has code for generating local port once. https://github.com/thom311/libnl/blob/libnl3_2_25/lib/nl.c#L123 Keyword arguments: cb -- custom callback handler. ...
def on_receive_request_vote_response(self, data):
    if data.get('vote_granted'):
        self.vote_count += 1
        if self.state.is_majority(self.vote_count):
            self.state.to_leader()
Receives a response to a vote request. If the vote was granted, check whether we have a majority and may become Leader.
def stop_capture(self):
    super(Treal, self).stop_capture()
    if self._machine:
        self._machine.close()
    self._stopped()
Stop listening for output from the stenotype machine.
def add_beacon(self, name, beacon_data): data = {} data[name] = beacon_data if name in self._get_beacons(include_opts=False): comment = 'Cannot update beacon item {0}, ' \ 'because it is configured in pillar.'.format(name) complete = False el...
Add a beacon item
def _get_source_chunks(self, input_text, language=None): chunks = ChunkList() seek = 0 result = self._get_annotations(input_text, language=language) tokens = result['tokens'] language = result['language'] for i, token in enumerate(tokens): word = token['text']['content'] begin_offset...
Returns a chunk list retrieved from Syntax Analysis results.

Args:
    input_text (str): Text to annotate.
    language (:obj:`str`, optional): Language of the text.

Returns:
    A chunk list. (:obj:`budou.chunk.ChunkList`)
def lookup_hostname(self, ip):
    res = self.lookup_by_lease(ip=ip)
    if "client-hostname" not in res:
        raise OmapiErrorAttributeNotFound()
    return res["client-hostname"].decode('utf-8')
Look up a lease object with given ip address and return the associated client hostname.

@type ip: str
@rtype: str or None
@raises ValueError:
@raises OmapiError:
@raises OmapiErrorNotFound: if no lease object with the given ip address could be found
@raises OmapiErrorAttributeNotFound: if lease could be fo...
def dashes_cleanup(records, prune_chars='.:?~'): logging.info( "Applying _dashes_cleanup: converting any of '{}' to '-'.".format(prune_chars)) translation_table = {ord(c): '-' for c in prune_chars} for record in records: record.seq = Seq(str(record.seq).translate(translation_table), ...
Take an alignment and convert any undesirable characters such as ? or ~ to -.
def _get_unicode(data, force=False):
    if isinstance(data, binary_type):
        return data.decode('utf-8')
    elif data is None:
        return ''
    elif force:
        if PY2:
            return unicode(data)
        else:
            return str(data)
    else:
        return data
Try to return a text aka unicode object from the given data.
def _get_arguments_for_execution(self, function_name, serialized_args): arguments = [] for (i, arg) in enumerate(serialized_args): if isinstance(arg, ObjectID): argument = self.get_object([arg])[0] if isinstance(argument, RayError): raise a...
Retrieve the arguments for the remote function. This retrieves the values for the arguments to the remote function that were passed in as object IDs. Arguments that were passed by value are not changed. This is called by the worker that is executing the remote function. Args: ...
def get_chalk(level):
    if level >= logging.ERROR:
        _chalk = chalk.red
    elif level >= logging.WARNING:
        _chalk = chalk.yellow
    elif level >= logging.INFO:
        _chalk = chalk.blue
    elif level >= logging.DEBUG:
        _chalk = chalk.green
    else:
        _chalk = chalk.white
    return _chalk
Gets the appropriate piece of chalk for the logging level
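A self-contained sketch of the threshold cascade, with the `chalk` color objects stood in for by plain strings (hypothetical stand-ins, not the real library):

```python
import logging

def get_chalk(level):
    # Same threshold cascade as the original: the first threshold
    # the level meets or exceeds wins, falling through to white.
    if level >= logging.ERROR:
        return 'red'
    elif level >= logging.WARNING:
        return 'yellow'
    elif level >= logging.INFO:
        return 'blue'
    elif level >= logging.DEBUG:
        return 'green'
    return 'white'

print(get_chalk(logging.CRITICAL))  # → red   (CRITICAL >= ERROR)
print(get_chalk(logging.WARNING))   # → yellow
print(get_chalk(logging.NOTSET))    # → white
```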
def pdftotext_conversion_is_bad(txtlines): numWords = numSpaces = 0 p_space = re.compile(unicode(r'(\s)'), re.UNICODE) p_noSpace = re.compile(unicode(r'(\S+)'), re.UNICODE) for txtline in txtlines: numWords = numWords + len(p_noSpace.findall(txtline.strip())) numSpaces = numSpaces + len(...
Check if conversion after pdftotext is bad. Sometimes pdftotext performs a bad conversion which consists of many spaces and garbage characters. This method takes a list of strings obtained from a pdftotext conversion and examines them to see if they are likely to be the result of a bad conversion....
def check_initial_web_request(self, item_session: ItemSession, request: HTTPRequest) -> Tuple[bool, str]: verdict, reason, test_info = self.consult_filters(item_session.request.url_info, item_session.url_record) if verdict and self._robots_txt_checker: can_fetch = yield from self.consult_rob...
Check robots.txt, URL filters, and scripting hook. Returns: tuple: (bool, str) Coroutine.
def _parse_perfdata(self, s): metrics = [] counters = re.findall(self.TOKENIZER_RE, s) if counters is None: self.log.warning("Failed to parse performance data: {s}".format( s=s)) return metrics for (key, value, uom, warn, crit, min, max) in counter...
Parse performance data from a perfdata string
def is_valid_scalar(self, node: ValueNode) -> None: location_type = self.context.get_input_type() if not location_type: return type_ = get_named_type(location_type) if not is_scalar_type(type_): self.report_error( GraphQLError( ...
Check whether this is a valid scalar. Any value literal may be a valid representation of a Scalar, depending on that scalar type.
def convert_slice_axis(node, **kwargs):
    name, input_nodes, attrs = get_inputs(node, kwargs)
    axes = int(attrs.get("axis"))
    starts = int(attrs.get("begin"))
    ends = int(attrs.get("end", None))
    if not ends:
        raise ValueError("Slice: ONNX doesn't support 'None' in 'end' attribute")
    node = onn...
Map MXNet's slice_axis operator attributes to onnx's Slice operator and return the created node.
def default(self, user_input): try: for i in self._cs.disasm(unhexlify(self.cleanup(user_input)), self.base_address): print("0x%08x:\t%s\t%s" %(i.address, i.mnemonic, i.op_str)) except CsError as e: print("Error: %s" %e)
if no other command was invoked
def get_parents(docgraph, child_node, strict=True): parents = [] for src, _, edge_attrs in docgraph.in_edges(child_node, data=True): if edge_attrs['edge_type'] == EdgeTypes.dominance_relation: parents.append(src) if strict and len(parents) > 1: raise ValueError(("In a syntax tree...
Return a list of parent nodes that dominate this child.

In a 'syntax tree' a node never has more than one parent node dominating it. To enforce this, set strict=True.

Parameters
----------
docgraph : DiscourseDocumentGraph
    a document graph
strict : bool
    If True, raise a ValueEr...
def from_file(cls, name: str, mod_path: Tuple[str] = (".",),
              description: str = None) -> "DataModel":
    with open(name, encoding="utf-8") as infile:
        yltxt = infile.read()
    return cls(yltxt, mod_path, description)
Initialize the data model from a file with YANG library data.

Args:
    name: Name of a file with YANG library data.
    mod_path: Tuple of directories where to look for YANG modules.
    description: Optional description of the data model.

Returns:
    The data model ...
def _getEnumValues(self, data):
    enumstr = data.attrib.get('enumValues')
    if not enumstr:
        return None
    if ':' in enumstr:
        return {self._cast(k): v
                for k, v in [kv.split(':') for kv in enumstr.split('|')]}
    return enumstr.split('|')
Return a list or dictionary of valid values for this setting.
def _get_converter(self, convert_to=None):
    conversion = self._get_conversion_type(convert_to)
    if conversion == "singularity":
        return self.docker2singularity
    return self.singularity2docker
see convert and save. This is a helper function that returns the proper conversion function, but doesn't call it. We do this so that in the case of convert, we do the conversion and return a string. In the case of save, we save the recipe to file for the user. P...
def convert_sed_cols(tab):
    for colname in list(tab.columns.keys()):
        newname = colname.lower()
        newname = newname.replace('dfde', 'dnde')
        if tab.columns[colname].name == newname:
            continue
        tab.columns[colname].name = newname
    return tab
Cast SED column names to lowercase.
def create_stream(name, **header):
    assert isinstance(name, basestring), name
    return CreateStream(parent=None, name=name, group=False, header=header)
Create a stream for publishing messages. All keyword arguments will be used to form the header.
def one_of(*validators):
    def validate(value, should_raise=True):
        if any(validate(value, should_raise=False)
               for validate in validators):
            return True
        if should_raise:
            raise TypeError("value did not match any allowable type")
        return False
    return validate
Returns a validator function that succeeds only if the input passes at least one of the provided validators. :param callable validators: the validator functions :returns: a function which returns True its input passes at least one of the validators, and raises TypeError otherwise :rtype: call...
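A runnable sketch of the combinator with two hypothetical sub-validators written in the same `(value, should_raise)` convention (the inner loop variable is renamed `v` here to avoid the original's shadowing of `validate`):

```python
def one_of(*validators):
    # Build a validator that passes if any sub-validator passes; the
    # sub-validators are called with should_raise=False so that only
    # the combined validator raises.
    def validate(value, should_raise=True):
        if any(v(value, should_raise=False) for v in validators):
            return True
        if should_raise:
            raise TypeError("value did not match any allowable type")
        return False
    return validate

def is_int(value, should_raise=True):
    if isinstance(value, int):
        return True
    if should_raise:
        raise TypeError("not an int")
    return False

def is_str(value, should_raise=True):
    if isinstance(value, str):
        return True
    if should_raise:
        raise TypeError("not a str")
    return False

int_or_str = one_of(is_int, is_str)
print(int_or_str("hello"))                   # → True
print(int_or_str(3.5, should_raise=False))   # → False
```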
def update_security_of_password(self, ID, data):
    log.info('Update security of password %s with %s' % (ID, data))
    self.put('passwords/%s/security.json' % ID, data)
Update security of a password.
def _get_spades_circular_nodes(self, fastg): seq_reader = pyfastaq.sequences.file_reader(fastg) names = set([x.id.rstrip(';') for x in seq_reader if ':' in x.id]) found_fwd = set() found_rev = set() for name in names: l = name.split(':') if len(l) != 2: ...
Returns set of names of nodes in SPAdes fastg file that are circular. Names will match those in spades fasta file
def edit_rrset(self, zone_name, rtype, owner_name, ttl, rdata, profile=None): if type(rdata) is not list: rdata = [rdata] rrset = {"ttl": ttl, "rdata": rdata} if profile: rrset["profile"] = profile uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_...
Updates an existing RRSet in the specified zone. Arguments: zone_name -- The zone that contains the RRSet. The trailing dot is optional. rtype -- The type of the RRSet. This can be numeric (1) or if a well-known name is defined for the type (A), you can use it instead. ...
def unresolve_filename(self, package_dir, filename): filename, _ = os.path.splitext(filename) if self.strip_extension: for ext in ('.scss', '.sass'): test_path = os.path.join( package_dir, self.sass_path, filename + ext, ) i...
Retrieves the probable source path from the output filename. Pass in a .css path to get out a .scss path.

:param package_dir: the path of the package directory
:type package_dir: :class:`str`
:param filename: the css filename
:type filename: :class:`str`
:returns: the s...
def _check_and_apply_deprecations(self, scope, values): si = self.known_scope_to_info[scope] if si.removal_version: explicit_keys = self.for_scope(scope, inherit_from_enclosing_scope=False).get_explicit_keys() if explicit_keys: warn_or_error( removal_version=si.removal_version, ...
Checks whether a ScopeInfo has options specified in a deprecated scope. There are two related cases here. Either: 1) The ScopeInfo has an associated deprecated_scope that was replaced with a non-deprecated scope, meaning that the options temporarily live in two locations. 2) The entire ScopeIn...