def friendly_format(self):
    if self.description is not None:
        msg = self.description
    else:
        msg = 'errorCode: {} / detailCode: {}'.format(
            self.errorCode, self.detailCode
        )
    return self._fmt(self.name, msg)
Serialize to a format more suitable for displaying to end users.
def _search_for_files(parts):
    file_parts = []
    for part in parts:
        if isinstance(part, list):
            file_parts.extend(_search_for_files(part))
        elif isinstance(part, FileToken):
            file_parts.append(part)
    return file_parts
Given a list of parts, return all of the nested file parts.
def rbinomial(n, p, size=None):
    if not size:
        size = None
    return np.random.binomial(np.ravel(n), np.ravel(p), size)
Random binomial variates.
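A minimal sketch (not part of the library above) of the behavior relied on here: with `size=None`, `np.random.binomial` draws one variate per element of the flattened, broadcast `n` and `p` arrays. The input values are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: with size=None, np.random.binomial draws one
# variate per element of the flattened n and p arrays.
n = np.ravel([[10, 20], [30, 40]])
p = np.ravel([[0.5, 0.5], [0.5, 0.5]])
draws = np.random.binomial(n, p)
print(draws.shape)  # one draw per flattened element: (4,)
```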
def insert_sort(node, target):
    sort = target.sort
    lang = target.lang
    collator = Collator.createInstance(Locale(lang) if lang else Locale())
    for child in target.tree:
        if collator.compare(sort(child) or '', sort(node) or '') > 0:
            child.addprevious(node)
            break
    else:
        target.tree.append(node)
Insert node into sorted position in the target tree, using the sort function and language from target.
def append_metrics(self, metrics, df_name):
    dataframe = self._dataframes[df_name]
    _add_new_columns(dataframe, metrics)
    dataframe.loc[len(dataframe)] = metrics
Append new metrics to the selected dataframe.

Parameters
----------
metrics : metric.EvalMetric
    New metrics to be added.
df_name : str
    Name of the dataframe to be modified.
def command(self, command, raw=False, timeout_ms=None):
    return ''.join(self.streaming_command(command, raw, timeout_ms))
Run the given command and return the output.
def eval_py(self, _globals, _locals):
    try:
        params = eval(self.script, _globals, _locals)
    except NameError as e:
        raise Exception(
            'Failed to evaluate parameters: {}'.format(str(e))
        )
    except ResolutionError as e:
        raise Exception('GetOutput: {}'.format(str(e)))
    return params
Evaluates a file containing a Python params dictionary.
def delete_budget(self, model_uuid):
    return make_request(
        '{}model/{}/budget'.format(self.url, model_uuid),
        method='DELETE',
        timeout=self.timeout,
        client=self._client)
Delete a budget.

@param model_uuid The model UUID.
@return A success string from the plans server.
@raise ServerError via make_request.
def get_theme_dir():
    return os.path.abspath(os.path.join(os.path.dirname(__file__), "theme"))
Returns path to directory containing this package's theme. This is designed to be used when setting the ``html_theme_path`` option within Sphinx's ``conf.py`` file.
def GaussianBlur(X, ksize_width, ksize_height, sigma_x, sigma_y):
    return image_transform(
        X,
        cv2.GaussianBlur,
        ksize=(ksize_width, ksize_height),
        sigmaX=sigma_x,
        sigmaY=sigma_y
    )
Apply Gaussian blur to the given data.

Args:
    X: data to blur
    ksize_width, ksize_height: Gaussian kernel size
    sigma_x, sigma_y: Gaussian kernel standard deviation in the X and Y
        directions
def __validate_required_fields(self, document):
    try:
        required = set(field for field, definition in self.schema.items()
                       if self._resolve_rules_set(definition)
                       .get('required') is True)
    except AttributeError:
        if self.is_child and self.schema_path[-1] == 'schema':
            raise _SchemaRuleTypeError
        else:
            raise
    required -= self._unrequired_by_excludes
    missing = required - set(field for field in document
                             if document.get(field) is not None
                             or not self.ignore_none_values)
    for field in missing:
        self._error(field, errors.REQUIRED_FIELD)
    if self._unrequired_by_excludes:
        fields = set(field for field in document
                     if document.get(field) is not None)
        if self._unrequired_by_excludes.isdisjoint(fields):
            for field in self._unrequired_by_excludes - fields:
                self._error(field, errors.REQUIRED_FIELD)
Validates that required fields are not missing.

:param document: The document being validated.
def cancelMktData(self, contract: Contract):
    ticker = self.ticker(contract)
    reqId = self.wrapper.endTicker(ticker, 'mktData')
    if reqId:
        self.client.cancelMktData(reqId)
    else:
        self._logger.error(
            'cancelMktData: '
            f'No reqId found for contract {contract}')
Unsubscribe from realtime streaming tick data.

Args:
    contract: The exact contract object that was used to subscribe with.
def untrace_modules(self, modules):
    for module in modules:
        foundations.trace.untrace_module(module)
    self.__model__refresh_attributes()
    return True
Untraces given modules.

:param modules: Modules to untrace.
:type modules: list
:return: Method success.
:rtype: bool
def open_target_group_for_form(self, form):
    target = self.first_container_with_errors(form.errors.keys())
    if target is None:
        target = self.fields[0]
        if not getattr(target, '_active_originally_included', None):
            target.active = True
        return target
    target.active = True
    return target
Makes sure that the first group that should be open is open. This is either the first group with errors or the first group in the container, unless that first group was originally set to active=False.
def column(self, key):
    for row in self.rows:
        if key in row:
            yield row[key]
Iterator over a given column, skipping steps that don't have that key
def structure_results(res):
    out = {'hits': {'hits': []}}
    keys = [u'admin1_code', u'admin2_code', u'admin3_code', u'admin4_code',
            u'alternativenames', u'asciiname', u'cc2', u'coordinates',
            u'country_code2', u'country_code3', u'dem', u'elevation',
            u'feature_class', u'feature_code', u'geonameid',
            u'modification_date', u'name', u'population', u'timezone']
    for i in res:
        i_out = {}
        for k in keys:
            i_out[k] = i[k]
        out['hits']['hits'].append(i_out)
    return out
Format Elasticsearch result as Python dictionary
def _create_list_of_array_controllers(self):
    headers, array_uri, array_settings = (
        self._get_array_controller_resource())
    array_uri_links = []
    if ('links' in array_settings and
            'Member' in array_settings['links']):
        array_uri_links = array_settings['links']['Member']
    else:
        msg = ('"links/Member" section in ArrayControllers'
               ' does not exist')
        raise exception.IloCommandNotSupportedError(msg)
    return array_uri_links
Creates the list of Array Controller URIs.

:raises: IloCommandNotSupportedError if ArrayControllers does not have
    the member "Member".
:returns: list of ArrayController URIs.
def _expand_qname(self, qname):
    if type(qname) is not rt.URIRef:
        raise TypeError("Cannot expand qname of type {}, must be URIRef"
                        .format(type(qname)))
    for ns in self.graph.namespaces():
        if ns[0] == qname.split(':')[0]:
            return rt.URIRef("%s%s" % (ns[1], qname.split(':')[-1]))
    return qname
Expand a qualified name's namespace prefix to the resolved namespace root URL.
def slice_sequence(self, start, end):
    l = self.length
    indexstart = start
    indexend = end
    ns = []
    tot = 0
    for r in self._rngs:
        tot += r.length
        n = r.copy()
        if indexstart > r.length:
            indexstart -= r.length
            continue
        n.start = n.start + indexstart
        if tot > end:
            diff = tot - end
            n.end -= diff
            tot = end
        indexstart = 0
        ns.append(n)
        if tot == end:
            break
    if len(ns) == 0:
        return None
    return MappingGeneric(ns, self._options)
Slice the mapping by position in the sequence.

The first coordinate (start) is 0-indexed; the second coordinate (end) is
1-indexed.
def _did_retrieve(self, connection):
    response = connection.response
    try:
        self.from_dict(response.data[0])
    except Exception:
        pass
    return self._did_perform_standard_operation(connection)
Callback called after fetching the object
def abs_path_from_base(base_path, rel_path):
    return os.path.abspath(
        os.path.join(
            os.path.dirname(sys._getframe(1).f_code.co_filename),
            base_path,
            rel_path
        )
    )
Join a base and a relative path and return an absolute path to the resulting location.

Args:
    base_path: str
        Relative or absolute path to prepend to ``rel_path``.
    rel_path: str
        Path relative to the location of the module file from which this
        function is called.

Returns:
    str: Absolute path to the location specified by ``rel_path``.
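The join-then-normalize idea can be sketched without the frame introspection; the paths below are invented purely for illustration:

```python
import os.path

# Hypothetical paths: join a caller directory, a base path, and a relative
# path, then normalize the result into an absolute path.
caller_dir = '/srv/app/pkg'
p = os.path.abspath(os.path.join(caller_dir, '..', 'data/config.json'))
print(p)  # /srv/app/data/config.json
```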
def estimate(self, upgrades):
    val = 0
    for u in upgrades:
        val += u.estimate()
    return val
Estimate the time needed to apply upgrades.

If an upgrade does not specify an estimate, it is assumed to be on the
order of 1 second.

:param upgrades: List of upgrades sorted in topological order.
def write_summary(summary: dict, cache_dir: str):
    summary['accessed'] = time()
    with open(join(cache_dir, 'summary.json'), 'w') as summary_file:
        summary_file.write(json.dumps(summary, indent=4, sort_keys=True))
Write the `summary` JSON to `cache_dir`. Updates the accessed timestamp to now before writing.
def pld3(pos, line_vertex, line_dir):
    pos = np.atleast_2d(pos)
    line_vertex = np.atleast_1d(line_vertex)
    line_dir = np.atleast_1d(line_dir)
    c = np.cross(line_dir, line_vertex - pos)
    n1 = np.linalg.norm(c, axis=1)
    n2 = np.linalg.norm(line_dir)
    out = n1 / n2
    if out.ndim == 1 and len(out) == 1:
        return out[0]
    return out
Calculate the point-line-distance for given point and line.
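The formula used here is the standard point-line distance d = |u × (v − p)| / |u| for a line through vertex v with direction u; a single-point sketch with made-up coordinates:

```python
import numpy as np

# Distance from point p to the line through vertex v with direction u:
# d = |u x (v - p)| / |u|   (coordinates below are illustrative only)
p = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 0.0, 0.0])
u = np.array([1.0, 0.0, 0.0])  # the x-axis
d = np.linalg.norm(np.cross(u, v - p)) / np.linalg.norm(u)
print(d)  # distance from (0, 1, 0) to the x-axis is 1.0
```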
def _frombuffer(ptr, frames, channels, dtype):
    framesize = channels * dtype.itemsize
    data = np.frombuffer(ffi.buffer(ptr, frames * framesize), dtype=dtype)
    data.shape = -1, channels
    return data
Create NumPy array from a pointer to some memory.
def validate_object_id(object_id):
    result = re.match(OBJECT_ID_RE, str(object_id))
    if not result:
        print("'%s' appears not to be a valid 990 object_id" % object_id)
        raise RuntimeError(OBJECT_ID_MSG)
    return object_id
It's easy to make a mistake entering these; validate the format.
def update_terms(self, project_id, data, fuzzy_trigger=None):
    kwargs = {}
    if fuzzy_trigger is not None:
        kwargs['fuzzy_trigger'] = fuzzy_trigger
    data = self._run(
        url_path="terms/update",
        id=project_id,
        data=json.dumps(data),
        **kwargs
    )
    return data['result']['terms']
Updates project terms. Lets you change the text, context, reference,
plural and tags.

>>> data = [
    {
        "term": "Add new list",
        "context": "",
        "new_term": "Save list",
        "new_context": "",
        "reference": "\/projects",
        "plural": "",
        "comment": "",
        "tags": ["first_tag", "second_tag"]
    },
    {
        "term": "Display list",
        "context": "",
        "new_term": "Show list",
        "new_context": ""
    }
]
def clean_stale_refs(self, local_refs=None):
    try:
        if pygit2.GIT_FETCH_PRUNE:
            return []
    except AttributeError:
        pass
    if self.credentials is not None:
        log.debug(
            'The installed version of pygit2 (%s) does not support '
            'detecting stale refs for authenticated remotes, saltenvs '
            'will not reflect branches/tags removed from remote \'%s\'',
            PYGIT2_VERSION, self.id
        )
        return []
    return super(Pygit2, self).clean_stale_refs()
Clean stale local refs so they don't appear as fileserver environments
def drop_right(self, n):
    return self._transform(transformations.CACHE_T,
                           transformations.drop_right_t(n))
Drops the last n elements of the sequence.

>>> seq([1, 2, 3, 4, 5]).drop_right(2)
[1, 2, 3]

:param n: number of elements to drop
:return: sequence with last n elements dropped
def hold(self, policy="combine"):
    if self._hold is not None and self._hold != policy:
        log.warning("hold already active with '%s', ignoring '%s'" %
                    (self._hold, policy))
        return
    if policy not in HoldPolicy:
        raise ValueError("Unknown hold policy %r" % policy)
    self._hold = policy
Activate a document hold.

While a hold is active, no model changes will be applied, or trigger
callbacks. Once ``unhold`` is called, the events collected during the
hold will be applied according to the hold policy.

Args:
    policy ('combine' or 'collect', optional):
        Whether events collected during a hold should attempt to be
        combined (default: 'combine')

        When set to ``'collect'`` all events will be collected and
        replayed in order as-is when ``unhold`` is called.

        When set to ``'combine'`` Bokeh will attempt to combine
        compatible events together. Typically, different events that
        change the same property on the same model can be combined.
        For example, if the following sequence occurs:

        .. code-block:: python

            doc.hold('combine')
            slider.value = 10
            slider.value = 11
            slider.value = 12

        Then only *one* callback, for the last ``slider.value = 12``,
        will be triggered.

Returns:
    None

.. note::
    ``hold`` only applies to document change events, i.e. setting
    properties on models. It does not apply to events such as
    ``ButtonClick``, etc.
def kill(self):
    for process in list(self.processes):
        process["subprocess"].send_signal(signal.SIGKILL)
    self.stop_watch()
Kills the processes right now with a SIGKILL
def pull_tag_dict(data):
    tags = data.pop('Tags', {}) or {}
    if tags:
        proper_tags = {}
        for tag in tags:
            proper_tags[tag['Key']] = tag['Value']
        tags = proper_tags
    return tags
This will pull out a list of Tag Name-Value objects, and return it as a
dictionary.

:param data: The dict collected from the collector.
:returns dict: A dict of the tag names and their corresponding values.
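The Tag list-to-dict conversion can be sketched with a dict comprehension; the payload below is an invented AWS-style sample:

```python
# Hypothetical payload: a list of {'Key': ..., 'Value': ...} tag objects.
data = {'Tags': [{'Key': 'env', 'Value': 'prod'},
                 {'Key': 'team', 'Value': 'infra'}]}
# pop removes 'Tags' from the source dict; the comprehension flattens it.
tags = {t['Key']: t['Value'] for t in data.pop('Tags', {}) or {}}
print(tags)  # {'env': 'prod', 'team': 'infra'}
```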
def fromTFExample(bytestr):
    example = tf.train.Example()
    example.ParseFromString(bytestr)
    return example
Deserializes a TFExample from a byte string
def job_error_message(self, job, queue, to_be_requeued, exception, trace=None):
    return '[%s|%s|%s] error: %s [%s]' % (
        queue._cached_name,
        job.pk.get(),
        job._cached_identifier,
        str(exception),
        'requeued' if to_be_requeued else 'NOT requeued'
    )
Return the message to log when a job raised an error
def http_datetime_str_from_dt(dt):
    epoch_seconds = ts_from_dt(dt)
    return email.utils.formatdate(epoch_seconds, localtime=False, usegmt=True)
Format datetime to HTTP Full Date format.

Args:
    dt : datetime
        - tz-aware: Used in the formatted string.
        - tz-naive: Assumed to be in UTC.

Returns:
    str
        The returned format is a fixed-length subset of that defined by
        RFC 1123 and is the preferred format for use in the HTTP Date
        header. E.g.: ``Sat, 02 Jan 1999 03:04:05 GMT``

See Also:
    - http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3.1
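The RFC 1123 subset mentioned here comes straight from `email.utils.formatdate` with `usegmt=True`; a sketch using the docstring's own example date (915246245 is its Unix timestamp):

```python
import email.utils

# 915246245 is the Unix timestamp for 1999-01-02 03:04:05 UTC.
s = email.utils.formatdate(915246245, localtime=False, usegmt=True)
print(s)  # Sat, 02 Jan 1999 03:04:05 GMT
```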
def int_to_bytes(i, minlen=1, order='big'):
    blen = max(minlen, PGPObject.int_byte_len(i), 1)
    if six.PY2:
        r = iter(_ * 8 for _ in (range(blen) if order == 'little'
                                 else range(blen - 1, -1, -1)))
        return bytes(bytearray((i >> c) & 0xff for c in r))
    return i.to_bytes(blen, order)
Convert an integer to bytes.
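On Python 3 the conversion above reduces to `int.to_bytes`; a sketch of the padding and byte-order behavior with an illustrative value:

```python
# int.to_bytes pads to the requested length and honors byte order.
i = 0x1234
big = i.to_bytes(2, 'big')
little = i.to_bytes(4, 'little')
# Round-tripping with int.from_bytes recovers the original integer.
print(int.from_bytes(little, 'little'))  # 4660
```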
def quick_response(self, status_code):
    translator = Translator(environ=self.environ)
    if status_code == 404:
        self.status(404)
        self.message(translator.trans('http_messages.404'))
    elif status_code == 401:
        self.status(401)
        self.message(translator.trans('http_messages.401'))
    elif status_code == 400:
        self.status(400)
        self.message(translator.trans('http_messages.400'))
    elif status_code == 200:
        self.status(200)
        self.message(translator.trans('http_messages.200'))
Quickly construct response using a status code
def add_range(self, name, min=None, max=None):
    self._ranges.append(_mk_range_bucket(name, 'min', 'max', min, max))
    return self
Add a numeric range.

:param str name: the name by which the range is accessed in the results
:param int | float min: Lower range bound
:param int | float max: Upper range bound
:return: This object; suitable for method chaining
def _map_in_out(self, inside_var_name):
    for out_name, in_name in self.outside_name_map.items():
        if inside_var_name == in_name:
            return out_name
    return None
Return the external name of a variable mapped from inside.
def generate_tokens(doc, regex=CRE_TOKEN, strip=True, nonwords=False):
    if isinstance(regex, basestring):
        regex = re.compile(regex)
    for w in regex.finditer(doc):
        if w:
            w = w.group()
            if strip:
                w = w.strip(r'-_*`()}{' + r"'")
            if w and (nonwords or not re.match(r'^' + RE_NONWORD + '$', w)):
                yield w
Return a sequence of words or tokens, using a re.match iteratively through the str.

>>> doc = "John D. Rock\n\nObjective: \n\tSeeking a position as Software --Architect-- / _Project Lead_ that can utilize my expertise and"
>>> doc += " experiences in business application development and proven records in delivering 90's software. \n\nSummary: \n\tSoftware Architect"
>>> doc += " who has gone through several full product-delivery life cycles from requirements gathering to deployment / production, and"
>>> doc += " skilled in all areas of software development from client-side JavaScript to database modeling. With strong experiences in:"
>>> doc += " \n\tRequirements gathering and analysis."
>>> len(list(generate_tokens(doc, strip=False, nonwords=True)))
82
>>> seq = list(generate_tokens(doc, strip=False, nonwords=False))
>>> len(seq)
70
>>> '.' in seq or ':' in seq
False
>>> s = set(generate_tokens(doc, strip=False, nonwords=True))
>>> all(t in s for t in ('D', '.', ':', '_Project', 'Lead_', "90's", "Architect", "product-delivery"))
True
def _start_vibration_win(self, left_motor, right_motor):
    xinput_set_state = self.manager.xinput.XInputSetState
    xinput_set_state.argtypes = [
        ctypes.c_uint, ctypes.POINTER(XinputVibration)]
    xinput_set_state.restype = ctypes.c_uint
    vibration = XinputVibration(
        int(left_motor * 65535), int(right_motor * 65535))
    xinput_set_state(self.__device_number, ctypes.byref(vibration))
Start the vibration, which will run until stopped.
def register(self, event, keys):
    if self.running:
        raise RuntimeError("Can't register while running")
    handler = self._handlers.get(event, None)
    if handler is not None:
        raise ValueError("Event {} already registered".format(event))
    self._handlers[event] = EventHandler(event, keys, loop=self.loop)
Register a new event with available keys.

Raises ValueError when the event has already been registered.

Usage:

    dispatch.register("my_event", ["foo", "bar", "baz"])
def collect_and_report(self):
    logger.debug("Metric reporting thread is now alive")

    def metric_work():
        self.process()
        if self.agent.is_timed_out():
            logger.warn("Host agent offline for >1 min. Going to sit in a corner...")
            self.agent.reset()
            return False
        return True

    every(1, metric_work, "Metrics Collection")
Target function for the metric reporting thread. This is a simple loop to collect and report entity data every 1 second.
def get_topic_partition_metadata(hosts):
    kafka_client = KafkaToolClient(hosts, timeout=10)
    kafka_client.load_metadata_for_topics()
    topic_partitions = kafka_client.topic_partitions
    resp = kafka_client.send_metadata_request()
    for _, topic, partitions in resp.topics:
        for partition_error, partition, leader, replicas, isr in partitions:
            if topic_partitions.get(topic, {}).get(partition) is not None:
                topic_partitions[topic][partition] = PartitionMetadata(
                    topic, partition, leader, replicas, isr, partition_error)
    return topic_partitions
Returns topic-partition metadata from the Kafka broker.

kafka-python 1.3+ doesn't include partition metadata information in
topic_partitions, so we extract it from the metadata ourselves.
def _parse_launch_error(data):
    return LaunchFailure(
        data.get(ERROR_REASON, None),
        data.get(APP_ID),
        data.get(REQUEST_ID),
    )
Parses a LAUNCH_ERROR message and returns a LaunchFailure object. :type data: dict :rtype: LaunchFailure
def _raw_predict(self, Xnew, full_cov=False, kern=None):
    mu, var = self.posterior._raw_predict(
        kern=self.kern if kern is None else kern,
        Xnew=Xnew,
        pred_var=self._predictive_variable,
        full_cov=full_cov)
    if self.mean_function is not None:
        mu += self.mean_function.f(Xnew)
    return mu, var
For making predictions; does not account for normalization or likelihood.

full_cov is a boolean which defines whether the full covariance matrix of
the prediction is computed. If full_cov is False (default), only the
diagonal of the covariance is returned.

.. math::
    p(f^*|X^*, X, Y) = \int p(f^*|f, X^*)\, p(f|X, Y)\, df
    = \mathrm{MVN}\left(f^* \,\middle|\, K_{x^*x}(K_{xx})^{-1}Y,\;
      \frac{\nu + \beta - 2}{\nu + N - 2}\left(K_{x^*x^*}
      - K_{x^*x}(K_{xx})^{-1}K_{xx^*}\right)\right)

    \nu := \text{degrees of freedom}
def AgregarReceptor(self, cod_caracter, **kwargs):
    d = {'codCaracter': cod_caracter}
    self.solicitud['receptor'].update(d)
    return True
Add the receiver's data to the settlement (liquidación).
def sparse_arrays(self, value):
    if not isinstance(value, bool):
        raise TypeError('sparse_arrays attribute must be a logical type.')
    self._sparse_arrays = value
Validate and enable sparse arrays.
def push(self, key, value, *, section=DataStoreDocumentSection.Data):
    key_notation = '.'.join([section, key])
    result = self._collection.update_one(
        {"_id": ObjectId(self._workflow_id)},
        {
            "$push": {
                key_notation: self._encode_value(value)
            },
            "$currentDate": {"lastModified": True}
        }
    )
    return result.modified_count == 1
Appends a value to a list in the specified section of the document.

Args:
    key (str): The key pointing to the value that should be stored/updated.
        It supports MongoDB's dot notation for nested fields.
    value: The value that should be appended to a list in the data store.
    section (DataStoreDocumentSection): The section in which the data
        should be stored.

Returns:
    bool: ``True`` if the value could be appended, otherwise ``False``.
def dump(self, path):
    with open(path, 'w') as props:
        Properties.dump(self._props, props)
Saves the pushdb as a properties file to the given path.
def percentile(self, percentile):
    out = scipy.percentile(self.value, percentile, axis=0)
    if self.name is not None:
        name = '{}: {} percentile'.format(self.name, _ordinal(percentile))
    else:
        name = None
    return FrequencySeries(out, epoch=self.epoch, channel=self.channel,
                           name=name, f0=self.f0, df=self.df,
                           frequencies=(hasattr(self, '_frequencies') and
                                        self.frequencies or None))
Calculate a given spectral percentile for this `Spectrogram`.

Parameters
----------
percentile : `float`
    percentile (0 - 100) of the bins to compute

Returns
-------
spectrum : `~gwpy.frequencyseries.FrequencySeries`
    the given percentile `FrequencySeries` calculated from this
    `Spectrogram`
def _GetWinevtRcDatabaseReader(self):
    if not self._winevt_database_reader and self._data_location:
        database_path = os.path.join(
            self._data_location, self._WINEVT_RC_DATABASE)
        if not os.path.isfile(database_path):
            return None
        self._winevt_database_reader = (
            winevt_rc.WinevtResourcesSqlite3DatabaseReader())
        if not self._winevt_database_reader.Open(database_path):
            self._winevt_database_reader = None
    return self._winevt_database_reader
Opens the Windows Event Log resource database reader.

Returns:
    WinevtResourcesSqlite3DatabaseReader: Windows Event Log resource
        database reader or None.
def set_pause_param(self, autoneg, rx_pause, tx_pause):
    ecmd = array.array('B', struct.pack('IIII',
                                        ETHTOOL_SPAUSEPARAM,
                                        bool(autoneg),
                                        bool(rx_pause),
                                        bool(tx_pause)))
    buf_addr, _buf_len = ecmd.buffer_info()
    ifreq = struct.pack('16sP', self.name, buf_addr)
    fcntl.ioctl(sockfd, SIOCETHTOOL, ifreq)
Ethernet has flow control! The inter-frame pause can be adjusted by
auto-negotiation (through an ethernet frame type with a simple two-field
payload) and by setting it explicitly.

http://en.wikipedia.org/wiki/Ethernet_flow_control
def shutdown(at_time=None):
    cmd = ['shutdown', '-h',
           ('{0}'.format(at_time) if at_time else 'now')]
    ret = __salt__['cmd.run'](cmd, python_shell=False)
    return ret
Shutdown a running system.

at_time
    The wait time in minutes before the system will be shutdown.

CLI Example:

.. code-block:: bash

    salt '*' system.shutdown 5
def compute_voltages(grid, configs_raw, potentials_raw):
    voltages = []
    for config, potentials in zip(configs_raw, potentials_raw):
        print('config', config)
        e3_node = grid.get_electrode_node(config[2])
        e4_node = grid.get_electrode_node(config[3])
        print(e3_node, e4_node)
        print('pot1', potentials[e3_node])
        print('pot2', potentials[e4_node])
        voltage = potentials[e3_node] - potentials[e4_node]
        voltages.append(voltage)
    return voltages
Given a list of potential distributions and corresponding four-point
spreads, compute the voltages.

Parameters
----------
grid : crt_grid object
    the grid is used to infer electrode positions
configs_raw : Nx4 array
    containing the measurement configs (1-indexed)
potentials_raw : list with N entries
    corresponding to each measurement, containing the node potentials
    of each injection dipole.
def icons(self, strip_ext=False):
    result = [f for f in self._stripped_files
              if self._icons_pattern.match(f)]
    if strip_ext:
        result = [strip_suffix(f, '\.({ext})'.format(ext=self._icons_ext),
                               regex=True)
                  for f in result]
    return result
Get all icons in this DAP, optionally strip extensions
def HA1(realm, username, password, algorithm):
    if not realm:
        realm = u''
    return H(b":".join([username.encode('utf-8'),
                        realm.encode('utf-8'),
                        password.encode('utf-8')]), algorithm)
Create HA1 hash by realm, username, password.

HA1 = md5(A1) = MD5(username:realm:password)
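With MD5 as the algorithm, the HA1 computation reduces to a single hashlib call; a sketch using the well-known example credentials from RFC 2617 (the `H` helper and `algorithm` parameter above are library-specific and not used here):

```python
import hashlib

# HA1 = MD5(username:realm:password); credentials are the RFC 2617 example.
username, realm, password = 'Mufasa', 'testrealm@host.com', 'Circle Of Life'
a1 = ':'.join([username, realm, password]).encode('utf-8')
ha1 = hashlib.md5(a1).hexdigest()
print(len(ha1))  # 32 hex characters
```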
def get_icon(self, iconset):
    theme = iconset.attrib.get('theme')
    if theme is not None:
        return self._object_factory.createQObject(
            "QIcon.fromTheme", 'icon',
            (self._object_factory.asString(theme), ),
            is_attribute=False)
    if iconset.text is None:
        return None
    iset = _IconSet(iconset, self._base_dir)
    try:
        idx = self._cache.index(iset)
    except ValueError:
        idx = -1
    if idx >= 0:
        iset = self._cache[idx]
    else:
        name = 'icon'
        idx = len(self._cache)
        if idx > 0:
            name += str(idx)
        icon = self._object_factory.createQObject("QIcon", name, (),
                                                  is_attribute=False)
        iset.set_icon(icon, self._qtgui_module)
        self._cache.append(iset)
    return iset.icon
Return an icon described by the given iconset tag.
def exit(self, code=None, msg=None):
    if code is None:
        code = self.tcex.exit_code
    if code == 3:
        self.tcex.log.info(u'Changing exit code from 3 to 0.')
        code = 0
    elif code not in [0, 1]:
        code = 1
    self.tcex.exit(code, msg)
Playbook wrapper on TcEx exit method.

Playbooks do not support partial failures, so we change the exit code
from 3 to 0 and call it a partial success instead.

Args:
    code (Optional[integer]): The exit code value for the app.
def generate_index(fn, cols=None, names=None, sep=" "):
    assert cols is not None, "'cols' was not set"
    assert names is not None, "'names' was not set"
    assert len(cols) == len(names)
    bgzip, open_func = get_open_func(fn, return_fmt=True)
    data = pd.read_csv(fn, sep=sep, engine="c", usecols=cols, names=names,
                       compression="gzip" if bgzip else None)
    f = open_func(fn, "rb")
    data["seek"] = np.fromiter(_seek_generator(f), dtype=np.uint)[:-1]
    f.close()
    write_index(get_index_fn(fn), data)
    return data
Build an index for the given file.

Args:
    fn (str): the name of the file.
    cols (list): a list containing the columns to keep (as int).
    names (list): the names corresponding to the columns to keep (as str).
    sep (str): the field separator.

Returns:
    pandas.DataFrame: the index.
def func(self):
    fn = self.engine.query.sense_func_get(
        self.observer.name, self.sensename, *self.engine._btt()
    )
    if fn is not None:
        return SenseFuncWrap(self.observer, fn)
Return the function most recently associated with this sense.
def get_default_connection():
    tid = id(threading.current_thread())
    conn = _conn_holder.get(tid)
    if not conn:
        with _rlock:
            if 'project_endpoint' not in _options and 'project_id' not in _options:
                _options['project_endpoint'] = helper.get_project_endpoint_from_env()
            if 'credentials' not in _options:
                _options['credentials'] = helper.get_credentials_from_env()
            _conn_holder[tid] = conn = connection.Datastore(**_options)
    return conn
Returns the default datastore connection.

Defaults the endpoint to helper.get_project_endpoint_from_env() and the
credentials to helper.get_credentials_from_env(). Use set_options to
override the defaults.
def get_container_version():
    root_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
    version_file = os.path.join(root_dir, 'VERSION')
    if os.path.exists(version_file):
        with open(version_file) as f:
            return f.read()
    return ''
Return the version of the docker container running the present server, or '' if not in a container
def contains_peroxide(structure, relative_cutoff=1.1):
    ox_type = oxide_type(structure, relative_cutoff)
    return ox_type == "peroxide"
Determines if a structure contains peroxide anions.

Args:
    structure (Structure): Input structure.
    relative_cutoff: The peroxide bond distance is 1.49 Angstrom.
        relative_cutoff * 1.49 stipulates the maximum distance two O
        atoms must be from each other to be considered a peroxide.

Returns:
    Boolean indicating if structure contains a peroxide anion.
def files(self):
    for header in (r"(.*)\t\[\[\[1\n", r"^(\d+)\n$"):
        header = re.compile(header)
        filename = None
        self.fd.seek(0)
        line = self.readline()
        while line:
            m = header.match(line)
            if m is not None:
                filename = m.group(1)
                try:
                    filelines = int(self.readline().rstrip())
                except ValueError:
                    raise ArchiveError('invalid archive format')
                filestart = self.fd.tell()
                yield (filename, filelines, filestart)
            line = self.readline()
        if filename is not None:
            break
Yields archive file information.
def wrap_viscm(cmap, dpi=100, saveplot=False):
    from viscm import viscm
    viscm(cmap)
    fig = plt.gcf()
    fig.set_size_inches(22, 10)
    plt.show()
    if saveplot:
        fig.savefig('figures/eval_' + cmap.name + '.png',
                    bbox_inches='tight', dpi=dpi)
        fig.savefig('figures/eval_' + cmap.name + '.pdf',
                    bbox_inches='tight', dpi=dpi)
Evaluate goodness of colormap using perceptual deltas.

:param cmap: Colormap instance.
:param dpi=100: dpi for saved image.
:param saveplot=False: Whether to save the plot or not.
def predict(self, test_X):
    with self.tf_graph.as_default():
        with tf.Session() as self.tf_session:
            self.tf_saver.restore(self.tf_session, self.model_path)
            feed = {
                self.input_data: test_X,
                self.keep_prob: 1
            }
            return self.mod_y.eval(feed)
Predict the labels for the test set.

Parameters
----------
test_X : array_like, shape (n_samples, n_features)
    Test data.

Returns
-------
array_like, shape (n_samples,) : predicted labels.
def reshuffle(expr, by=None, sort=None, ascending=True):
    by = by or RandomScalar()
    grouped = expr.groupby(by)
    if sort:
        grouped = grouped.sort_values(sort, ascending=ascending)
    return ReshuffledCollectionExpr(_input=grouped, _schema=expr._schema)
Reshuffle data.

:param expr: the collection to reshuffle
:param by: the sequence or scalar to shuffle by. RandomScalar as default
:param sort: the sequence or scalar to sort.
:param ascending: True if ascending else False
:return: collection
def datetime_to_httpdate(dt):
    if isinstance(dt, (int, float)):
        return format_date_time(dt)
    elif isinstance(dt, datetime):
        return format_date_time(datetime_to_timestamp(dt))
    else:
        raise TypeError("expected datetime.datetime or timestamp (int/float),"
                        " got '%s'" % dt)
Convert datetime.datetime or Unix timestamp to HTTP date.
def _update_health_monitor_with_new_ips(route_spec, all_ips, q_monitor_ips):
    new_all_ips = \
        sorted(set(itertools.chain.from_iterable(route_spec.values())))
    if new_all_ips != all_ips:
        logging.debug("New route spec detected. Updating "
                      "health-monitor with: %s" % ",".join(new_all_ips))
        all_ips = new_all_ips
        q_monitor_ips.put(all_ips)
    else:
        logging.debug("New route spec detected. No changes in "
                      "IP address list, not sending update to "
                      "health-monitor.")
    return all_ips
Take the current route spec and compare to the current list of known IP
addresses. If the route spec mentions a different set of IPs, update the
monitoring thread with that new list.

Return the current set of IPs mentioned in the route spec.
def closedopen(lower_value, upper_value):
    return Interval(Interval.CLOSED, lower_value, upper_value, Interval.OPEN)
Helper function to construct an interval object with a closed lower and
open upper bound.

For example:

>>> closedopen(100.2, 800.9)
[100.2, 800.9)
def to_ufo_glyph_background(self, glyph, layer):
    if not layer.hasBackground:
        return
    background = layer.background
    ufo_layer = self.to_ufo_background_layer(glyph)
    new_glyph = ufo_layer.newGlyph(glyph.name)
    width = background.userData[BACKGROUND_WIDTH_KEY]
    if width is not None:
        new_glyph.width = width
    self.to_ufo_background_image(new_glyph, background)
    self.to_ufo_paths(new_glyph, background)
    self.to_ufo_components(new_glyph, background)
    self.to_ufo_glyph_anchors(new_glyph, background.anchors)
    self.to_ufo_guidelines(new_glyph, background)
Set glyph background.
def nl_socket_alloc(cb=None):
    cb = cb or nl_cb_alloc(default_cb)
    if not cb:
        return None
    sk = nl_sock()
    sk.s_cb = cb
    sk.s_local.nl_family = getattr(socket, 'AF_NETLINK', -1)
    sk.s_peer.nl_family = getattr(socket, 'AF_NETLINK', -1)
    sk.s_seq_expect = sk.s_seq_next = int(time.time())
    sk.s_flags = NL_OWN_PORT
    nl_socket_get_local_port(sk)
    return sk
Allocate new Netlink socket. Does not yet actually open a socket.

https://github.com/thom311/libnl/blob/libnl3_2_25/lib/socket.c#L206

Also has code for generating local port once.
https://github.com/thom311/libnl/blob/libnl3_2_25/lib/nl.c#L123

Keyword arguments:
cb -- custom callback handler.

Returns:
Newly allocated Netlink socket (nl_sock class instance) or None.
def on_receive_request_vote_response(self, data): if data.get('vote_granted'): self.vote_count += 1 if self.state.is_majority(self.vote_count): self.state.to_leader()
Receives a response to a vote request. If the vote was granted, check whether we have reached a majority and, if so, become Leader.
def stop_capture(self): super(Treal, self).stop_capture() if self._machine: self._machine.close() self._stopped()
Stop listening for output from the stenotype machine.
def add_beacon(self, name, beacon_data): data = {} data[name] = beacon_data if name in self._get_beacons(include_opts=False): comment = 'Cannot update beacon item {0}, ' \ 'because it is configured in pillar.'.format(name) complete = False else: if name in self.opts['beacons']: comment = 'Updating settings for beacon ' \ 'item: {0}'.format(name) else: comment = 'Added new beacon item: {0}'.format(name) complete = True self.opts['beacons'].update(data) evt = salt.utils.event.get_event('minion', opts=self.opts) evt.fire_event({'complete': complete, 'comment': comment, 'beacons': self.opts['beacons']}, tag='/salt/minion/minion_beacon_add_complete') return True
Add a beacon item
def _get_source_chunks(self, input_text, language=None): chunks = ChunkList() seek = 0 result = self._get_annotations(input_text, language=language) tokens = result['tokens'] language = result['language'] for i, token in enumerate(tokens): word = token['text']['content'] begin_offset = token['text']['beginOffset'] label = token['dependencyEdge']['label'] pos = token['partOfSpeech']['tag'] if begin_offset > seek: chunks.append(Chunk.space()) seek = begin_offset chunk = Chunk(word, pos, label) if chunk.label in _DEPENDENT_LABEL: chunk.dependency = i < token['dependencyEdge']['headTokenIndex'] if chunk.is_punct(): chunk.dependency = chunk.is_open_punct() chunks.append(chunk) seek += len(word) return chunks, language
Returns a chunk list retrieved from Syntax Analysis results. Args: input_text (str): Text to annotate. language (:obj:`str`, optional): Language of the text. Returns: A chunk list. (:obj:`budou.chunk.ChunkList`)
def lookup_hostname(self, ip): res = self.lookup_by_lease(ip=ip) if "client-hostname" not in res: raise OmapiErrorAttributeNotFound() return res["client-hostname"].decode('utf-8')
Look up a lease object with the given ip address and return the associated client hostname.

@type ip: str
@rtype: str
@raises ValueError:
@raises OmapiError:
@raises OmapiErrorNotFound: if no lease object with the given ip address could be found
@raises OmapiErrorAttributeNotFound: if a lease could be found, but the object lacks a hostname
@raises socket.error:
def dashes_cleanup(records, prune_chars='.:?~'): logging.info( "Applying dashes_cleanup: converting any of '{}' to '-'.".format(prune_chars)) translation_table = {ord(c): '-' for c in prune_chars} for record in records: record.seq = Seq(str(record.seq).translate(translation_table), record.seq.alphabet) yield record
Take an alignment and convert any undesirable characters such as ? or ~ to -.
def _get_unicode(data, force=False): if isinstance(data, binary_type): return data.decode('utf-8') elif data is None: return '' elif force: if PY2: return unicode(data) else: return str(data) else: return data
Try to return a text aka unicode object from the given data.
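On Python 3 only, the same behaviour reduces to the following sketch (the name get_text is hypothetical; the PY2 branch is dropped since str is already unicode):

```python
def get_text(data, force=False):
    # bytes are decoded as UTF-8; None becomes ''; force coerces
    # any other object via str(); otherwise data passes through.
    if isinstance(data, bytes):
        return data.decode('utf-8')
    if data is None:
        return ''
    if force:
        return str(data)
    return data
```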
def _get_arguments_for_execution(self, function_name, serialized_args): arguments = [] for (i, arg) in enumerate(serialized_args): if isinstance(arg, ObjectID): argument = self.get_object([arg])[0] if isinstance(argument, RayError): raise argument else: argument = arg arguments.append(argument) return arguments
Retrieve the arguments for the remote function. This retrieves the values for the arguments to the remote function that were passed in as object IDs. Arguments that were passed by value are not changed. This is called by the worker that is executing the remote function. Args: function_name (str): The name of the remote function whose arguments are being retrieved. serialized_args (List): The arguments to the function. These are either strings representing serialized objects passed by value or they are ray.ObjectIDs. Returns: The retrieved arguments in addition to the arguments that were passed by value. Raises: RayError: This exception is raised if a task that created one of the arguments failed.
def get_chalk(level): if level >= logging.ERROR: _chalk = chalk.red elif level >= logging.WARNING: _chalk = chalk.yellow elif level >= logging.INFO: _chalk = chalk.blue elif level >= logging.DEBUG: _chalk = chalk.green else: _chalk = chalk.white return _chalk
Gets the appropriate piece of chalk for the logging level
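The cascading if/elif above is a threshold lookup; it can also be written as a table scan. This sketch replaces the chalk colour objects with plain strings (a stand-in, since the chalk library itself is external):

```python
import logging

# Ordered from highest threshold to lowest; plain strings stand in
# for the chalk colour objects of the original.
_LEVEL_COLOURS = [
    (logging.ERROR, 'red'),
    (logging.WARNING, 'yellow'),
    (logging.INFO, 'blue'),
    (logging.DEBUG, 'green'),
]

def get_colour(level):
    for threshold, colour in _LEVEL_COLOURS:
        if level >= threshold:
            return colour
    return 'white'
```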
def pdftotext_conversion_is_bad(txtlines): numWords = numSpaces = 0 p_space = re.compile(unicode(r'(\s)'), re.UNICODE) p_noSpace = re.compile(unicode(r'(\S+)'), re.UNICODE) for txtline in txtlines: numWords = numWords + len(p_noSpace.findall(txtline.strip())) numSpaces = numSpaces + len(p_space.findall(txtline.strip())) if numSpaces >= (numWords * 3): return True else: return False
Check if conversion after pdftotext is bad. Sometimes pdftotext performs a bad conversion which consists of many spaces and garbage characters. This method takes a list of strings obtained from a pdftotext conversion and examines them to see if they are likely to be the result of a bad conversion.

:param txtlines: (list) of unicode strings obtained from pdftotext conversion.
:return: (bool) True if the conversion looks bad; False if it looks good.
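The heuristic is simply a ratio of whitespace characters to words. A Python 3 sketch of the same idea (function name is hypothetical; the py2 `unicode()` wrappers are dropped):

```python
import re

def conversion_is_bad(txtlines):
    # A good conversion has roughly one space between words; three or
    # more whitespace characters per word suggests garbled output.
    num_words = num_spaces = 0
    for line in txtlines:
        num_words += len(re.findall(r'\S+', line.strip()))
        num_spaces += len(re.findall(r'\s', line.strip()))
    return num_spaces >= num_words * 3
```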
def check_initial_web_request(self, item_session: ItemSession, request: HTTPRequest) -> Tuple[bool, str]: verdict, reason, test_info = self.consult_filters(item_session.request.url_info, item_session.url_record) if verdict and self._robots_txt_checker: can_fetch = yield from self.consult_robots_txt(request) if not can_fetch: verdict = False reason = 'robotstxt' verdict, reason = self.consult_hook( item_session, verdict, reason, test_info ) return verdict, reason
Check robots.txt, URL filters, and scripting hook. Returns: tuple: (bool, str) Coroutine.
def _parse_perfdata(self, s): metrics = [] counters = re.findall(self.TOKENIZER_RE, s) if not counters: self.log.warning("Failed to parse performance data: {s}".format( s=s)) return metrics for (key, value, uom, warn, crit, min_, max_) in counters: try: norm_value = self._normalize_to_unit(float(value), uom) metrics.append((key, norm_value)) except ValueError: self.log.warning( "Couldn't convert value '{value}' to float".format( value=value)) return metrics
Parse performance data from a perfdata string
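The class's TOKENIZER_RE is not shown here, so the pattern below is an assumption covering the common Nagios perfdata shape label=value[uom];warn;crit;min;max. A self-contained sketch:

```python
import re

# Assumed pattern, not the original TOKENIZER_RE: seven groups for
# label, value, unit of measure, warn, crit, min, max.
PERFDATA_RE = re.compile(
    r"([^\s=']+)=([-.\d]+)([a-zA-Z%]*);?([-.\d]*);?([-.\d]*);?([-.\d]*);?([-.\d]*)"
)

def parse_perfdata(s):
    metrics = []
    for key, value, uom, warn, crit, min_, max_ in PERFDATA_RE.findall(s):
        try:
            # Thresholds and the unit are parsed but not returned here.
            metrics.append((key, float(value)))
        except ValueError:
            pass
    return metrics
```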
def is_valid_scalar(self, node: ValueNode) -> None: location_type = self.context.get_input_type() if not location_type: return type_ = get_named_type(location_type) if not is_scalar_type(type_): self.report_error( GraphQLError( bad_value_message( location_type, print_ast(node), enum_type_suggestion(type_, node), ), node, ) ) return type_ = cast(GraphQLScalarType, type_) try: parse_result = type_.parse_literal(node) if is_invalid(parse_result): self.report_error( GraphQLError( bad_value_message(location_type, print_ast(node)), node ) ) except Exception as error: self.report_error( GraphQLError( bad_value_message(location_type, print_ast(node), str(error)), node, original_error=error, ) )
Check whether this is a valid scalar. Any value literal may be a valid representation of a Scalar, depending on that scalar type.
def convert_slice_axis(node, **kwargs): name, input_nodes, attrs = get_inputs(node, kwargs) axes = int(attrs.get("axis")) starts = int(attrs.get("begin")) ends = attrs.get("end", None) if ends is None or ends == 'None': raise ValueError("Slice: ONNX doesn't support 'None' in 'end' attribute") node = onnx.helper.make_node( "Slice", input_nodes, [name], axes=[axes], starts=[starts], ends=[int(ends)], name=name, ) return [node]
Map MXNet's slice_axis operator attributes to onnx's Slice operator and return the created node.
def default(self, user_input): try: for i in self._cs.disasm(unhexlify(self.cleanup(user_input)), self.base_address): print("0x%08x:\t%s\t%s" %(i.address, i.mnemonic, i.op_str)) except CsError as e: print("Error: %s" %e)
Called when no other command was invoked; treat the input as a hex string and disassemble it.
def get_parents(docgraph, child_node, strict=True): parents = [] for src, _, edge_attrs in docgraph.in_edges(child_node, data=True): if edge_attrs['edge_type'] == EdgeTypes.dominance_relation: parents.append(src) if strict and len(parents) > 1: raise ValueError(("In a syntax tree, a node can't be " "dominated by more than one parent")) return parents
Return a list of parent nodes that dominate this child. In a 'syntax tree' a node never has more than one parent node dominating it. To enforce this, set strict=True. Parameters ---------- docgraph : DiscourseDocumentGraph a document graph strict : bool If True, raise a ValueError if a child node is dominated by more than one parent node. Returns ------- parents : list a list of (parent) node IDs.
def from_file(cls, name: str, mod_path: Tuple[str] = (".",), description: str = None) -> "DataModel": with open(name, encoding="utf-8") as infile: yltxt = infile.read() return cls(yltxt, mod_path, description)
Initialize the data model from a file with YANG library data. Args: name: Name of a file with YANG library data. mod_path: Tuple of directories where to look for YANG modules. description: Optional description of the data model. Returns: The data model instance. Raises: The same exceptions as the class constructor above.
def _getEnumValues(self, data): enumstr = data.attrib.get('enumValues') if not enumstr: return None if ':' in enumstr: return {self._cast(k): v for k, v in [kv.split(':') for kv in enumstr.split('|')]} return enumstr.split('|')
Returns a list or dictionary of valid values for this setting.
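The parsing logic is pure string work and can be sketched standalone (function name and cast parameter are illustrative): an enumValues attribute is either a plain pipe-separated list, or pipe-separated value:label pairs.

```python
def parse_enum_values(enumstr, cast=int):
    # 'a|b|c' yields a list; '0:Off|1:On' yields a dict keyed by
    # the cast value; an empty/missing attribute yields None.
    if not enumstr:
        return None
    if ':' in enumstr:
        return {cast(k): v
                for k, v in (kv.split(':') for kv in enumstr.split('|'))}
    return enumstr.split('|')
```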
def _get_converter(self, convert_to=None): conversion = self._get_conversion_type(convert_to) if conversion == "singularity": return self.docker2singularity return self.singularity2docker
see convert and save. This is a helper function that returns the proper conversion function, but doesn't call it. We do this so that in the case of convert, we do the conversion and return a string. In the case of save, we save the recipe to file for the user.

Parameters
==========
convert_to: a string, either "docker" or "singularity", if a conversion target different from the default is desired

Returns
=======
converter: the function to do the conversion
def convert_sed_cols(tab): for colname in list(tab.columns.keys()): newname = colname.lower() newname = newname.replace('dfde', 'dnde') if tab.columns[colname].name == newname: continue tab.columns[colname].name = newname return tab
Cast SED column names to lowercase.
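The per-column rename rule is independent of the table object and can be expressed alone (function name is illustrative): lowercase the name and migrate the legacy 'dfde' spelling to 'dnde'.

```python
def normalize_sed_colname(colname):
    # Lowercase, then map the old differential-flux column prefix
    # 'dfde' onto the newer 'dnde' convention.
    return colname.lower().replace('dfde', 'dnde')
```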
def create_stream(name, **header): assert isinstance(name, basestring), name return CreateStream(parent=None, name=name, group=False, header=header)
Create a stream for publishing messages. All keyword arguments will be used to form the header.
def one_of(*validators): def validate(value, should_raise=True): if any(validate(value, should_raise=False) for validate in validators): return True if should_raise: raise TypeError("value did not match any allowable type") return False return validate
Returns a validator function that succeeds only if the input passes at least one of the provided validators.

:param callable validators: the validator functions
:returns: a function which returns True if its input passes at least one of the validators, and raises TypeError otherwise
:rtype: callable
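A usage sketch, assuming validators follow the (value, should_raise=True) protocol shown above (the is_type factory here is a hypothetical example validator, not from the source):

```python
def is_type(t):
    # Build a validator following the (value, should_raise=True) protocol.
    def validate(value, should_raise=True):
        if isinstance(value, t):
            return True
        if should_raise:
            raise TypeError("expected %s" % t.__name__)
        return False
    return validate

def one_of(*validators):
    def validate(value, should_raise=True):
        # Probe each validator quietly; only raise at the combinator level.
        if any(v(value, should_raise=False) for v in validators):
            return True
        if should_raise:
            raise TypeError("value did not match any allowable type")
        return False
    return validate

int_or_str = one_of(is_type(int), is_type(str))
```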
def update_security_of_password(self, ID, data): log.info('Update security of password %s with %s' % (ID, data)) self.put('passwords/%s/security.json' % ID, data)
Update security of a password.
def _get_spades_circular_nodes(self, fastg): seq_reader = pyfastaq.sequences.file_reader(fastg) names = set([x.id.rstrip(';') for x in seq_reader if ':' in x.id]) found_fwd = set() found_rev = set() for name in names: l = name.split(':') if len(l) != 2: continue if l[0] == l[1]: if l[0][-1] == "'": found_rev.add(l[0][:-1]) else: found_fwd.add(l[0]) return found_fwd.intersection(found_rev)
Returns the set of names of nodes in the SPAdes fastg file that are circular. Names will match those in the SPAdes fasta file.
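The circularity test itself is pure string logic over the fastg sequence IDs: a node is circular when a self-edge NODE:NODE appears in both the forward form and the reverse-complement form (trailing apostrophe). A standalone sketch taking raw IDs instead of a file (function name is illustrative):

```python
def circular_node_names(fastg_ids):
    # Keep only edge-style IDs ('src:dst'), strip the trailing ';'.
    names = {i.rstrip(';') for i in fastg_ids if ':' in i}
    fwd, rev = set(), set()
    for name in names:
        parts = name.split(':')
        if len(parts) != 2 or parts[0] != parts[1]:
            continue  # not a self-edge
        if parts[0].endswith("'"):
            rev.add(parts[0][:-1])  # reverse-complement self-edge
        else:
            fwd.add(parts[0])       # forward self-edge
    # Circular only if the self-edge exists on both strands.
    return fwd & rev
```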
def edit_rrset(self, zone_name, rtype, owner_name, ttl, rdata, profile=None): if type(rdata) is not list: rdata = [rdata] rrset = {"ttl": ttl, "rdata": rdata} if profile: rrset["profile"] = profile uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name return self.rest_api_connection.put(uri, json.dumps(rrset))
Updates an existing RRSet in the specified zone. Arguments: zone_name -- The zone that contains the RRSet. The trailing dot is optional. rtype -- The type of the RRSet. This can be numeric (1) or if a well-known name is defined for the type (A), you can use it instead. owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name is assumed to be relative (foo). If a trailing dot is supplied, the owner name is assumed to be absolute (foo.zonename.com.) ttl -- The updated TTL value for the RRSet. rdata -- The updated BIND data for the RRSet as a string. If there is a single resource record in the RRSet, you can pass in the single string. If there are multiple resource records in this RRSet, pass in a list of strings. profile -- The profile info if this is updating a resource pool
def unresolve_filename(self, package_dir, filename): filename, _ = os.path.splitext(filename) if self.strip_extension: for ext in ('.scss', '.sass'): test_path = os.path.join( package_dir, self.sass_path, filename + ext, ) if os.path.exists(test_path): return filename + ext else: return filename + '.scss' else: return filename
Retrieves the probable source path from the output filename. Pass in a .css path to get out a .scss path. :param package_dir: the path of the package directory :type package_dir: :class:`str` :param filename: the css filename :type filename: :class:`str` :returns: the scss filename :rtype: :class:`str`
def _check_and_apply_deprecations(self, scope, values): si = self.known_scope_to_info[scope] if si.removal_version: explicit_keys = self.for_scope(scope, inherit_from_enclosing_scope=False).get_explicit_keys() if explicit_keys: warn_or_error( removal_version=si.removal_version, deprecated_entity_description='scope {}'.format(scope), hint=si.removal_hint, ) deprecated_scope = si.deprecated_scope if deprecated_scope is not None and scope != deprecated_scope: explicit_keys = self.for_scope(deprecated_scope, inherit_from_enclosing_scope=False).get_explicit_keys() if explicit_keys: values.update(self.for_scope(deprecated_scope)) warn_or_error( removal_version=self.known_scope_to_info[scope].deprecated_scope_removal_version, deprecated_entity_description='scope {}'.format(deprecated_scope), hint='Use scope {} instead (options: {})'.format(scope, ', '.join(explicit_keys)) )
Checks whether a ScopeInfo has options specified in a deprecated scope.

There are two related cases here. Either:
1) The ScopeInfo has an associated deprecated_scope that was replaced with a non-deprecated scope, meaning that the options temporarily live in two locations.
2) The entire ScopeInfo is deprecated (as in the case of deprecated SubsystemDependencies), meaning that the options live in one location.

In the first case, this method has the side effect of merging options values from deprecated scopes into the given values.