def stoch2array(self):
    a = np.empty(self.dim)
    for stochastic in self.stochastics:
        a[self._slices[stochastic]] = stochastic.value
    return a
Return the stochastic objects as an array.
def extract(features, groups,
            weight_method=default_weight_method,
            num_bins=default_num_bins,
            edge_range=default_edge_range,
            trim_outliers=default_trim_behaviour,
            trim_percentile=default_trim_percentile,
            use_original_distribution=False,
            relative_to_all=False,
            asymmetric=False,
            return_networkx_graph=default_return_networkx_graph,
            out_weights_path=default_out_weights_path):
    features, groups, num_bins, edge_range, group_ids, num_groups, num_links = \
        check_params(features, groups, num_bins, edge_range,
                     trim_outliers, trim_percentile)
    weight_func, use_orig_distr, non_symmetric = \
        check_weight_method(weight_method, use_original_distribution, asymmetric)
    edges = compute_bin_edges(features, num_bins, edge_range,
                              trim_outliers, trim_percentile, use_orig_distr)
    if relative_to_all:
        result = non_pairwise.relative_to_all(
            features, groups, edges, weight_func, use_orig_distr,
            group_ids, num_groups, return_networkx_graph, out_weights_path)
    else:
        result = pairwise_extract(
            features, groups, edges, weight_func, use_orig_distr,
            group_ids, num_groups, num_links, non_symmetric,
            return_networkx_graph, out_weights_path)
    return result
Extracts the histogram-distance weighted adjacency matrix.

Parameters
----------
features : ndarray or str
    1d array of scalar values, either provided directly as a 1d numpy array,
    or as a path to a file containing these values.
groups : ndarray or str
    Membership array of the same length as `features`, each value specifying
    which group that particular node belongs to. Input can either be provided
    directly as a 1d numpy array, or as a path to a file containing these
    values. For example, if you have cortical thickness values for 1000
    vertices (`features` is an ndarray of length 1000) belonging to 100
    patches, the groups array (of length 1000) could have numbers 1 to 100
    (number of unique values) specifying which element belongs to which
    cortical patch. Grouping with numerical values (contiguous from 1 to
    num_patches) is strongly recommended for simplicity, but this could also
    be a list of strings of length p, in which case a tuple is returned,
    identifying which weight belongs to which pair of patches.
weight_method : string or callable, optional
    Type of distance (or metric) to compute between the pair of histograms.
    It can either be a string identifying one of the weights implemented
    below, or a valid callable.

    If a string, it must be one of the following methods:

    - 'chebyshev', 'chebyshev_neg', 'chi_square'
    - 'correlate', 'correlate_1'
    - 'cosine', 'cosine_1', 'cosine_2', 'cosine_alt'
    - 'euclidean', 'fidelity_based'
    - 'histogram_intersection', 'histogram_intersection_1'
    - 'jensen_shannon', 'kullback_leibler'
    - 'manhattan', 'minowski'
    - 'noelle_1', 'noelle_2', 'noelle_3', 'noelle_4', 'noelle_5'
    - 'relative_bin_deviation', 'relative_deviation'

    Note only the following are *metrics*: 'manhattan', 'minowski',
    'euclidean', 'noelle_2', 'noelle_4', 'noelle_5'.

    The following are *semi- or quasi-metrics*: 'kullback_leibler',
    'jensen_shannon', 'chi_square', 'chebyshev', 'cosine_1', 'chebyshev_neg',
    'correlate_1', 'histogram_intersection_1', 'relative_deviation',
    'relative_bin_deviation', 'noelle_1', 'noelle_3'.

    The following are classified as similarity functions:
    'histogram_intersection', 'correlate', 'cosine', 'cosine_2',
    'cosine_alt', 'fidelity_based'.

    *Default* choice: 'minowski'.

    The method can also be one of the following, identifying metrics that
    operate on the original data directly - e.g. the difference in medians
    of the distributions from the pair of ROIs: 'diff_medians',
    'diff_means', 'diff_medians_abs', 'diff_means_abs'.
    Please note this can lead to adjacency matrices that are not symmetric
    (e.g. a difference metric on two scalars is not symmetric). In this
    case, be sure to use the flag ``asymmetric=True``.

    If weight_method is a callable, it must accept two arrays as input and
    return one scalar as output. Example:
    ``diff_in_skew = lambda x, y: abs(scipy.stats.skew(x)-scipy.stats.skew(y))``
    NOTE: this method will be applied to histograms (not the original
    distribution of features from the group/ROI). In order to apply the
    callable directly on the original distribution (without trimming and
    histogram binning), use ``use_original_distribution=True``.
num_bins : scalar, optional
    Number of bins to use when computing the histogram within each
    patch/group. Note:

    1) Please ensure the same number of bins is used across different
       subjects.
    2) Histogram shape can vary widely with the number of bins (especially
       with fewer bins, in the range of 3-20), and hence the features
       extracted based on them vary as well.
    3) It is recommended to study the impact of this parameter on the final
       results of the experiment. It could also be optimized within an inner
       cross-validation loop if desired.
edge_range : tuple or None
    The range of edges within which to bin the given values. This can be
    helpful to ensure correspondence across multiple invocations of hiwenet
    (for different subjects), in terms of the range across all bins as well
    as the individual bin edges. Default is to compute it automatically from
    the given values. Accepted formats:

    - tuple of finite values: (range_min, range_max)
    - None, triggering automatic calculation (default)

    Note: when controlling ``edge_range``, it is not possible to trim the
    tails (e.g. using the parameters ``trim_outliers`` and
    ``trim_percentile``) for the current set of features using its own range.
trim_outliers : bool, optional
    Whether to trim a small percentile of outliers at the edges of the
    feature range, when features are expected to contain extreme outliers
    (like 0, eps or Inf). This is important to avoid numerical problems and
    also to stabilize the weight estimates.
trim_percentile : float
    Small value specifying the percentile of outliers to trim.
    Default: 5 (5%). Must be in the open interval (0, 100).
use_original_distribution : bool, optional
    When using a user-defined callable, this flag 1) allows skipping of
    pre-processing (trimming outliers) and histogram construction, and
    2) enables applying an arbitrary (user-defined) callable on the original
    distributions coming from the two groups/ROIs/nodes directly.
    Example: ``diff_in_medians = lambda x, y: abs(np.median(x)-np.median(y))``
    This option is valid only when weight_method is a valid callable, which
    must take two inputs (possibly of different lengths) and return a single
    scalar.
relative_to_all : bool
    Flag to instruct the computation of a grand histogram (distribution
    pooled from values in all ROIs), and compute distances (based on the
    distance specified by ``weight_method``) from each ROI to the grand
    histogram. This results in only N distances for N ROIs, instead of the
    usual N*(N-1) pair-wise distances.
asymmetric : bool
    Flag to indicate that the resulting adjacency matrix is expected to be
    non-symmetric. Note: this results in twice the computation time!
    Default: False, as the histogram metrics implemented here are symmetric.
return_networkx_graph : bool, optional
    Specifies the need for a networkx graph populated with the computed
    weights. Default: False.
out_weights_path : str, optional
    Where to save the extracted weight matrix. If networkx output is
    returned, it is saved in GraphML format. Default: nothing is saved
    unless instructed.

Returns
-------
edge_weights : ndarray
    numpy 2d array of pair-wise edge weights (of size num_groups x
    num_groups), where num_groups is determined by the total number of
    unique values in `groups`.

    **Note**:

    - Only the upper triangular matrix is filled, as the distance between
      nodes i and j is the same as between j and i.
    - The edge weights can easily be obtained from the upper triangular
      matrix:

    .. code-block:: python

        weights_array = edge_weights[np.triu_indices_from(edge_weights, 1)]
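The pipeline described above (per-group histograms over shared bin edges, a pair-wise distance, an upper-triangular weight matrix, extraction via ``np.triu_indices_from``) can be sketched in a few lines of plain numpy. This is a toy re-implementation for illustration, not the hiwenet API itself; the helper name and the Manhattan distance choice are assumptions.

```python
import numpy as np

def pairwise_hist_weights(features, groups, num_bins=10, edge_range=None):
    """Toy sketch of the pairwise histogram-distance idea: one histogram
    per group, Manhattan distance between each pair, stored in the upper
    triangle of a num_groups x num_groups matrix."""
    group_ids = np.unique(groups)
    n = len(group_ids)
    if edge_range is None:
        edge_range = (features.min(), features.max())
    # Shared bin edges make histograms comparable across groups/subjects.
    edges = np.linspace(edge_range[0], edge_range[1], num_bins + 1)
    hists = []
    for g in group_ids:
        counts, _ = np.histogram(features[groups == g], bins=edges, density=True)
        hists.append(counts)
    weights = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Only the upper triangle is filled; the metric is symmetric.
            weights[i, j] = np.abs(hists[i] - hists[j]).sum()
    return weights

rng = np.random.RandomState(0)
features = rng.randn(300)
groups = np.repeat([1, 2, 3], 100)
W = pairwise_hist_weights(features, groups)
# Flatten the upper triangle into a 1d weight vector, as the docstring shows.
flat = W[np.triu_indices_from(W, 1)]
```

With 3 groups this yields 3*(3-1)/2 = 3 pair-wise weights, matching the N*(N-1)/2 upper-triangular count discussed above.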
def _get_master_proc_by_name(self, name, tags):
    master_name = GUnicornCheck._get_master_proc_name(name)
    master_procs = [p for p in psutil.process_iter()
                    if p.cmdline() and p.cmdline()[0] == master_name]
    if len(master_procs) == 0:
        self.service_check(
            self.SVC_NAME, AgentCheck.CRITICAL,
            tags=['app:' + name] + tags,
            message="No gunicorn process with name %s found" % name,
        )
        raise GUnicornCheckError("Found no master process with name: %s" % master_name)
    else:
        self.log.debug("There exist %s master process(es) with the name %s"
                       % (len(master_procs), name))
        return master_procs
Return a psutil process for the master gunicorn process with the given name.
def should_set_cookie(self, app: 'Quart', session: SessionMixin) -> bool:
    if session.modified:
        return True
    save_each = app.config['SESSION_REFRESH_EACH_REQUEST']
    return save_each and session.permanent
Helper method to return if the Set-Cookie header should be present. This triggers if the session is marked as modified or the app is configured to always refresh the cookie.
def parse_reaction_list(path, reactions, default_compartment=None):
    context = FilePathContext(path)
    for reaction_def in reactions:
        if 'include' in reaction_def:
            include_context = context.resolve(reaction_def['include'])
            for reaction in parse_reaction_file(include_context,
                                                default_compartment):
                yield reaction
        else:
            yield parse_reaction(reaction_def, default_compartment, context)
Parse a structured list of reactions as obtained from a YAML file. Yields tuples of reaction ID and reaction object. Path can be given as a string or a context.
def run_impl(self, change, entry, out):
    options = self.options
    javascripts = self._relative_uris(options.html_javascripts)
    stylesheets = self._relative_uris(options.html_stylesheets)
    template_class = resolve_cheetah_template(type(change))
    template = template_class()
    template.transaction = DummyTransaction()
    template.transaction.response(resp=out)
    template.change = change
    template.entry = entry
    template.options = options
    template.breadcrumbs = self.breadcrumbs
    template.javascripts = javascripts
    template.stylesheets = stylesheets
    template.render_change = lambda c: self.run_impl(c, entry, out)
    template.respond()
    template.shutdown()
Sets up the report directory for an HTML report. Obtains the top-level Cheetah template appropriate for the change instance, and runs it.

The Cheetah templates are supplied the following values:

* change - the Change instance to report on
* entry - the string name of the entry for this report
* options - the cli options object
* breadcrumbs - list of backlinks
* javascripts - list of .js links
* stylesheets - list of .css links

The Cheetah templates are also given a render_change method, which can be called on another Change instance to cause its template to be resolved and run in-line.
def libvlc_media_player_set_video_title_display(p_mi, position, timeout):
    f = _Cfunctions.get('libvlc_media_player_set_video_title_display', None) or \
        _Cfunction('libvlc_media_player_set_video_title_display',
                   ((1,), (1,), (1,),), None,
                   None, MediaPlayer, Position, ctypes.c_int)
    return f(p_mi, position, timeout)
Set if, and how, the video title will be shown when media is played. @param p_mi: the media player. @param position: position at which to display the title, or libvlc_position_disable to prevent the title from being displayed. @param timeout: title display timeout in milliseconds (ignored if libvlc_position_disable). @version: libVLC 2.1.0 or later.
def qrot(vector, quaternion):
    t = 2 * np.cross(quaternion[1:], vector)
    v_rot = vector + quaternion[0] * t + np.cross(quaternion[1:], t)
    return v_rot
Rotate a 3D vector using quaternion algebra.

Implemented by Vladimir Kulikovskiy.

Parameters
----------
vector : np.array
quaternion : np.array

Returns
-------
np.array
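A quick sanity check of the formula above, assuming (as the indexing ``quaternion[0]`` / ``quaternion[1:]`` implies) a unit quaternion in scalar-first (w, x, y, z) order: a 90-degree rotation about the z-axis should map the x-axis onto the y-axis.

```python
import numpy as np

def qrot(vector, quaternion):
    # quaternion = (w, x, y, z), assumed unit-norm
    t = 2 * np.cross(quaternion[1:], vector)
    return vector + quaternion[0] * t + np.cross(quaternion[1:], t)

# 90-degree rotation about z: w = cos(45 deg), z = sin(45 deg)
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
v = np.array([1.0, 0.0, 0.0])
rotated = qrot(v, q)   # x-axis maps to the y-axis, (0, 1, 0)
```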
def getResiduals(self):
    X = np.zeros((self.N * self.P, self.n_fixed_effs))
    ip = 0
    for i in range(self.n_terms):
        Ki = self.A[i].shape[0] * self.F[i].shape[1]
        X[:, ip:ip + Ki] = np.kron(self.A[i].T, self.F[i])
        ip += Ki
    y = np.reshape(self.Y, (self.Y.size, 1), order='F')
    RV = regressOut(y, X)
    RV = np.reshape(RV, self.Y.shape, order='F')
    return RV
Regress out fixed effects and return the residuals.
def easeInOutQuad(n):
    _checkRange(n)
    if n < 0.5:
        return 2 * n**2
    else:
        n = n * 2 - 1
        return -0.5 * (n * (n - 2) - 1)
A quadratic tween function that accelerates, reaches the midpoint, and then decelerates.

Args:
    n (float): The time progress, starting at 0.0 and ending at 1.0.

Returns:
    (float) The line progress, starting at 0.0 and ending at 1.0. Suitable for passing to getPointOnLine().
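The endpoints and midpoint of this easing curve are easy to verify. A standalone re-implementation (without the ``_checkRange`` guard, which is internal to the original module) evaluated over [0, 1]:

```python
def ease_in_out_quad(n):
    """Quadratic ease-in-out: accelerate up to the midpoint, then decelerate."""
    if n < 0.5:
        return 2 * n ** 2
    n = n * 2 - 1
    return -0.5 * (n * (n - 2) - 1)

samples = [ease_in_out_quad(t / 10) for t in range(11)]
# Progress is 0.0 at t=0, exactly 0.5 at the midpoint t=0.5, and 1.0 at t=1.
```

Both halves are quadratic arcs that meet at (0.5, 0.5), which is what makes the motion symmetric in time.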
def log_inference(batch_id, batch_num, metric, step_loss, log_interval):
    metric_nm, metric_val = metric.get()
    if not isinstance(metric_nm, list):
        metric_nm = [metric_nm]
        metric_val = [metric_val]
    eval_str = '[Batch %d/%d] loss=%.4f, metrics:' + \
               ','.join([i + ':%.4f' for i in metric_nm])
    logging.info(eval_str, batch_id + 1, batch_num,
                 step_loss / log_interval, *metric_val)
Generate and print out the log message for inference.
def parse(cls, root):
    subsection = root.tag.replace(utils.lxmlns("mets"), "", 1)
    if subsection not in cls.ALLOWED_SUBSECTIONS:
        raise exceptions.ParseError(
            "SubSection can only parse elements with tag in %s with METS namespace"
            % (cls.ALLOWED_SUBSECTIONS,))
    section_id = root.get("ID")
    created = root.get("CREATED", "")
    status = root.get("STATUS", "")
    child = root[0]
    if child.tag == utils.lxmlns("mets") + "mdWrap":
        mdwrap = MDWrap.parse(child)
        obj = cls(subsection, mdwrap, section_id)
    elif child.tag == utils.lxmlns("mets") + "mdRef":
        mdref = MDRef.parse(child)
        obj = cls(subsection, mdref, section_id)
    else:
        raise exceptions.ParseError(
            "Child of %s must be mdWrap or mdRef" % subsection)
    obj.created = created
    obj.status = status
    return obj
Create a new SubSection by parsing root. :param root: Element or ElementTree to be parsed into an object. :raises exceptions.ParseError: If root's tag is not in :const:`SubSection.ALLOWED_SUBSECTIONS`. :raises exceptions.ParseError: If the first child of root is not mdRef or mdWrap.
def from_dict(cls, d):
    o = super(DistributionList, cls).from_dict(d)
    o.members = []
    if 'dlm' in d:
        o.members = [utils.get_content(member)
                     for member in utils.as_list(d["dlm"])]
    return o
Override default, adding the capture of members.
def collect(self, order_ref):
    response = self.client.post(
        self._collect_endpoint, json={"orderRef": order_ref})
    if response.status_code == 200:
        return response.json()
    else:
        raise get_json_error_class(response)
Collects the result of a sign or auth order using the ``orderRef`` as reference. RP should keep on calling collect every two seconds as long as status indicates pending. RP must abort if status indicates failed. The user identity is returned when complete.

Example collect result returned while authentication or signing is still pending:

.. code-block:: json

    {
        "orderRef": "131daac9-16c6-4618-beb0-365768f37288",
        "status": "pending",
        "hintCode": "userSign"
    }

Example collect result when authentication or signing has failed:

.. code-block:: json

    {
        "orderRef": "131daac9-16c6-4618-beb0-365768f37288",
        "status": "failed",
        "hintCode": "userCancel"
    }

Example collect result when authentication or signing is successful and completed:

.. code-block:: json

    {
        "orderRef": "131daac9-16c6-4618-beb0-365768f37288",
        "status": "complete",
        "completionData": {
            "user": {
                "personalNumber": "190000000000",
                "name": "Karl Karlsson",
                "givenName": "Karl",
                "surname": "Karlsson"
            },
            "device": {
                "ipAddress": "192.168.0.1"
            },
            "cert": {
                "notBefore": "1502983274000",
                "notAfter": "1563549674000"
            },
            "signature": "<base64-encoded data>",
            "ocspResponse": "<base64-encoded data>"
        }
    }

See `BankID Relying Party Guidelines Version: 3.0 <https://www.bankid.com/assets/bankid/rp/bankid-relying-party-guidelines-v3.0.pdf>`_ for more details about how to inform the end user of the current status, whether it is pending, failed or completed.

:param order_ref: The ``orderRef`` UUID returned from auth or sign.
:type order_ref: str
:return: The CollectResponse parsed to a dictionary.
:rtype: dict
:raises BankIDError: raises a subclass of this error when an error has been returned from the server.
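The poll-every-two-seconds protocol in the docstring can be wrapped in a small helper. This is a sketch, not part of the real client API: the helper name and its parameters are made up, and the ``collect`` callable is injected so the loop can be shown with simulated responses.

```python
import time

def wait_for_completion(collect, order_ref, poll_seconds=2, max_attempts=30):
    """Poll a collect() callable until the order completes or fails,
    following the RP guidelines: retry while pending, abort on failed.
    Illustrative helper only; names are not part of the client API."""
    for _ in range(max_attempts):
        result = collect(order_ref)
        if result["status"] == "complete":
            return result
        if result["status"] == "failed":
            raise RuntimeError("order failed: %s" % result.get("hintCode"))
        time.sleep(poll_seconds)
    raise TimeoutError("order %s still pending" % order_ref)

# Simulated collect responses: pending twice, then complete.
responses = iter([{"status": "pending", "hintCode": "userSign"},
                  {"status": "pending", "hintCode": "userSign"},
                  {"status": "complete", "completionData": {}}])
result = wait_for_completion(lambda ref: next(responses), "131daac9",
                             poll_seconds=0)
```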
def adopt(self, payload, *args, flavour: ModuleType, **kwargs):
    if args or kwargs:
        payload = functools.partial(payload, *args, **kwargs)
    self._meta_runner.register_payload(payload, flavour=flavour)
Concurrently run ``payload`` in the background. If ``*args`` and/or ``**kwargs`` are provided, pass them to ``payload`` upon execution.
def _init_kelas(self, makna_label):
    kelas = makna_label.find(color='red')
    lain = makna_label.find(color='darkgreen')
    info = makna_label.find(color='green')
    if kelas:
        kelas = kelas.find_all('span')
    if lain:
        self.kelas = {lain.text.strip(): lain['title'].strip()}
        self.submakna = lain.next_sibling.strip()
        self.submakna += ' ' + makna_label.find(color='grey').text.strip()
    else:
        self.kelas = {k.text.strip(): k['title'].strip()
                      for k in kelas} if kelas else {}
    self.info = info.text.strip() if info else ''
Process the word classes present in a definition.

:param makna_label: BeautifulSoup object for the definition to process.
:type makna_label: BeautifulSoup
def GetCPIOArchiveFileEntryByPathSpec(self, path_spec):
    location = getattr(path_spec, 'location', None)
    if location is None:
        raise errors.PathSpecError('Path specification missing location.')
    if not location.startswith(self.LOCATION_ROOT):
        raise errors.PathSpecError('Invalid location in path specification.')
    if len(location) == 1:
        return None
    return self._cpio_archive_file.GetFileEntryByPath(location[1:])
Retrieves the CPIO archive file entry for a path specification. Args: path_spec (PathSpec): a path specification. Returns: CPIOArchiveFileEntry: CPIO archive file entry or None if not available. Raises: PathSpecError: if the path specification is incorrect.
def within_joyner_boore_distance(self, surface, distance, **kwargs):
    upper_depth, lower_depth = _check_depth_limits(kwargs)
    rjb = surface.get_joyner_boore_distance(
        self.catalogue.hypocentres_as_mesh())
    is_valid = np.logical_and(
        rjb <= distance,
        np.logical_and(self.catalogue.data['depth'] >= upper_depth,
                       self.catalogue.data['depth'] < lower_depth))
    return self.select_catalogue(is_valid)
Select events within a Joyner-Boore distance of a fault.

:param surface:
    Fault surface as an instance of
    nhlib.geo.surface.base.SimpleFaultSurface or of
    nhlib.geo.surface.ComplexFaultSurface
:param float distance:
    Rupture distance (km)
:returns:
    Instance of :class:`openquake.hmtk.seismicity.catalogue.Catalogue`
    containing only the selected events
def run(self, file, updateconfig=True, clean=False, path=None):
    if updateconfig:
        self.update_config()
    self.program, self.version = self.setup(path)
    commandline = (
        self.program + " -c " + self.config['CONFIG_FILE'] + " " + file)
    rcode = os.system(commandline)
    if rcode:
        raise SExtractorException(
            "SExtractor command [%s] failed." % commandline)
    if clean:
        self.clean()
Run SExtractor. If updateconfig is True (default), the configuration files will be updated before running SExtractor. If clean is True (default: False), configuration files (if any) will be deleted after SExtractor terminates.
def iter_all_children(self):
    if self.inline_child:
        yield self.inline_child
    for x in self.children:
        yield x
Return an iterator that yields every node which is a child of this one. This includes inline children, and control structure `else` clauses.
def run_forever(self):
    res = self.slack.rtm.start()
    self.log.info("current channels: %s",
                  ','.join(c['name'] for c in res.body['channels']
                           if c['is_member']))
    self.id = res.body['self']['id']
    self.name = res.body['self']['name']
    self.my_mention = "<@%s>" % self.id

    self.ws = websocket.WebSocketApp(
        res.body['url'],
        on_message=self._on_message,
        on_error=self._on_error,
        on_close=self._on_close,
        on_open=self._on_open)
    self.prepare_connection(self.config)
    self.ws.run_forever()
Run the bot, blocking forever.
def set_sampling_info(self, sample):
    if sample.getScheduledSamplingSampler() and sample.getSamplingDate():
        return True
    sampler = self.get_form_value("getScheduledSamplingSampler", sample,
                                  sample.getScheduledSamplingSampler())
    sampled = self.get_form_value("getSamplingDate", sample.getSamplingDate())
    if not all([sampler, sampled]):
        return False
    sample.setScheduledSamplingSampler(sampler)
    sample.setSamplingDate(DateTime(sampled))
    return True
Updates the scheduled Sampling sampler and the Sampling Date with the values provided in the request. If either the Sampling sampler or the Sampling Date is missing from the request, returns False.
def connect(self):
    self.socket = socket.socket()
    self.socket.settimeout(self.timeout_in_seconds)
    try:
        self.socket.connect(self.addr)
    except socket.timeout:
        raise GraphiteSendException(
            "Took over %d second(s) to connect to %s"
            % (self.timeout_in_seconds, self.addr))
    except socket.gaierror:
        raise GraphiteSendException(
            "No address associated with hostname %s:%s" % self.addr)
    except Exception as error:
        raise GraphiteSendException(
            "unknown exception while connecting to %s - %s"
            % (self.addr, error))
    return self.socket
Make a TCP connection to the graphite server on port ``self.port``.
def _prep_cmd(cmd, tx_out_file):
    cmd = " ".join(cmd) if isinstance(cmd, (list, tuple)) else cmd
    return "export TMPDIR=%s && %s" % (os.path.dirname(tx_out_file), cmd)
Wrap CNVkit commands ensuring we use local temporary directories.
def evaluate_block(self, template, context=None, escape=None, safe_wrapper=None):
    if context is None:
        context = {}
    try:
        with self._evaluation_context(escape, safe_wrapper):
            template = self._environment.from_string(template)
            return template.render(**context)
    except jinja2.TemplateError as error:
        raise EvaluationError(error.args[0])
    finally:
        self._escape = None
Evaluate a template block.
def find_range(self, interval):
    return self.find(self.tree, interval, self.start, self.end)
Wrapper for ``find``.
def get_grade_systems_by_ids(self, grade_system_ids):
    collection = JSONClientValidated('grading',
                                    collection='GradeSystem',
                                    runtime=self._runtime)
    object_id_list = []
    for i in grade_system_ids:
        object_id_list.append(
            ObjectId(self._get_id(i, 'grading').get_identifier()))
    result = collection.find(
        dict({'_id': {'$in': object_id_list}}, **self._view_filter()))
    result = list(result)
    sorted_result = []
    for object_id in object_id_list:
        for object_map in result:
            if object_map['_id'] == object_id:
                sorted_result.append(object_map)
                break
    return objects.GradeSystemList(sorted_result,
                                   runtime=self._runtime,
                                   proxy=self._proxy)
Gets a ``GradeSystemList`` corresponding to the given ``IdList``.

In plenary mode, the returned list contains all of the systems specified in the ``Id`` list, in the order of the list, including duplicates, or an error results if an ``Id`` in the supplied list is not found or inaccessible. Otherwise, inaccessible ``GradeSystems`` may be omitted from the list and may present the elements in any order, including returning a unique set.

arg:    grade_system_ids (osid.id.IdList): the list of ``Ids`` to retrieve
return: (osid.grading.GradeSystemList) - the returned ``GradeSystem`` list
raise:  NotFound - an ``Id`` was not found
raise:  NullArgument - ``grade_system_ids`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
def get_args(cls, dist, header=None):
    if header is None:
        header = cls.get_header()
    spec = str(dist.as_requirement())
    for type_ in 'console', 'gui':
        group = type_ + '_scripts'
        for name, ep in dist.get_entry_map(group).items():
            if re.search(r'[\\/]', name):
                raise ValueError("Path separators not allowed in script names")
            script_text = TEMPLATE.format(
                ep.module_name, ep.attrs[0], '.'.join(ep.attrs),
                spec, group, name,
            )
            args = cls._get_script_args(type_, name, header, script_text)
            for res in args:
                yield res
Overrides easy_install.ScriptWriter.get_args This method avoids using pkg_resources to map a named entry_point to a callable at invocation time.
def find_name():
    name_file = read_file('__init__.py')
    name_match = re.search(r'^__package_name__ = ["\']([^"\']*)["\']',
                           name_file, re.M)
    if name_match:
        return name_match.group(1)
    raise RuntimeError('Unable to find name string.')
Only define the package name in one place.
def AsDict(self):
    sources = []
    for source in self.sources:
        source_definition = {
            'type': source.type_indicator,
            'attributes': source.AsDict()
        }
        if source.supported_os:
            source_definition['supported_os'] = source.supported_os
        if source.conditions:
            source_definition['conditions'] = source.conditions
        sources.append(source_definition)

    artifact_definition = {
        'name': self.name,
        'doc': self.description,
        'sources': sources,
    }
    if self.labels:
        artifact_definition['labels'] = self.labels
    if self.supported_os:
        artifact_definition['supported_os'] = self.supported_os
    if self.provides:
        artifact_definition['provides'] = self.provides
    if self.conditions:
        artifact_definition['conditions'] = self.conditions
    if self.urls:
        artifact_definition['urls'] = self.urls
    return artifact_definition
Represents an artifact as a dictionary. Returns: dict[str, object]: artifact attributes.
def __wrap(self, func):
    def deffunc(*args, **kwargs):
        if hasattr(inspect, 'signature'):
            function_args = inspect.signature(func).parameters
        else:
            function_args = inspect.getargspec(func).args
        filtered_kwargs = kwargs.copy()
        for param in function_args:
            if param in kwargs:
                filtered_kwargs[param] = kwargs[param]
            elif param in self._defaults:
                filtered_kwargs[param] = self._defaults[param]
        return func(*args, **filtered_kwargs)

    wrapped = functools.update_wrapper(deffunc, func)
    wrapped.__doc__ = ('WARNING: this function has been modified by the Presets '
                       'package.\nDefault parameter values described in the '
                       'documentation below may be inaccurate.\n\n'
                       '{}'.format(wrapped.__doc__))
    return wrapped
This decorator overrides the default arguments of a function. For each keyword argument in the function, the decorator first checks if the argument has been overridden by the caller, and uses that value instead if so. If not, the decorator consults the Preset object for an override value. If both of the above cases fail, the decorator reverts to the function's native default parameter value.
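The three-level precedence described above (caller-supplied value, then preset override, then the function's native default) can be demonstrated with a minimal standalone stand-in for the decorator. The names here (``with_defaults``, ``load``, the ``sr`` parameter) are illustrative, not the Presets API.

```python
import functools
import inspect

def with_defaults(defaults):
    """Minimal sketch of the decorator described above: caller-supplied
    keyword arguments win, then preset overrides, then the function's
    own default parameter values."""
    def decorator(func):
        params = inspect.signature(func).parameters
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            filtered = dict(kwargs)
            for name in params:
                if name in kwargs:
                    filtered[name] = kwargs[name]      # explicit caller value
                elif name in defaults:
                    filtered[name] = defaults[name]    # preset override
            return func(*args, **filtered)
        return wrapper
    return decorator

@with_defaults({'sr': 44100})
def load(path, sr=22050):
    return sr

a = load('x.wav')            # preset override applies -> 44100
b = load('x.wav', sr=8000)   # caller wins -> 8000
```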
def string_to_sign(self, http_request):
    headers_to_sign = self.headers_to_sign(http_request)
    canonical_headers = self.canonical_headers(headers_to_sign)
    string_to_sign = '\n'.join([http_request.method,
                                http_request.path,
                                '',
                                canonical_headers,
                                '',
                                http_request.body])
    return string_to_sign, headers_to_sign
Return the canonical StringToSign as well as a dict containing the original version of all headers that were included in the StringToSign.
def get(self, key):
    modcommit = self._get_modcommit(key)
    if not modcommit:
        return None
    if key not in self.foreignkeys:
        return cPickle.loads(str(modcommit.value))
    try:
        return TimeMachine(uid=modcommit.value).get_object()
    except self.content_type.DoesNotExist:
        raise DisciplineException("When restoring a ForeignKey, the "
                                  "%s %s was not found."
                                  % (self.content_type.name, self.uid))
Return the value of a field. Take a string argument representing a field name, return the value of that field at the time of this TimeMachine. When restoring a ForeignKey-pointer object that doesn't exist, raise DisciplineException
def map_agent(self, agent, do_rename):
    agent_text = agent.db_refs.get('TEXT')
    mapped_to_agent_json = self.agent_map.get(agent_text)
    if mapped_to_agent_json:
        mapped_to_agent = Agent._from_json(mapped_to_agent_json['agent'])
        return mapped_to_agent, False
    if agent_text in self.gm.keys():
        map_db_refs = self.gm[agent_text]
    else:
        return agent, False
    if map_db_refs is None:
        logger.debug("Skipping %s" % agent_text)
        return None, True
    else:
        self.update_agent_db_refs(agent, agent_text, do_rename)
        return agent, False
Return the given Agent with its grounding mapped.

This function grounds a single agent. It returns the new Agent object (which might be a different object if we load a new agent state from json) or the same object otherwise.

Parameters
----------
agent : :py:class:`indra.statements.Agent`
    The Agent to map.
do_rename : bool
    If True, the Agent name is updated based on the mapped grounding. If
    do_rename is True, the priority for setting the name is FamPlex ID,
    HGNC symbol, then the gene name from Uniprot.

Returns
-------
grounded_agent : :py:class:`indra.statements.Agent`
    The grounded Agent.
maps_to_none : bool
    True if the Agent is in the grounding map and maps to None.
def cli_info(self, event):
    self.log('Instance:', self.instance,
             'Dev:', self.development,
             'Host:', self.host,
             'Port:', self.port,
             'Insecure:', self.insecure,
             'Frontend:', self.frontendtarget)
Provides information about the running instance
def _consume(self):
    while not self.is_closed:
        msg = self._command_queue.get()
        if msg is None:
            return
        with self._lock:
            if self.is_ready:
                (command, reps, wait) = msg
                if command.select and \
                        self._selected_number != command.group_number:
                    if self._send_raw(command.select_command.get_bytes(self)):
                        self._selected_number = command.group_number
                        time.sleep(SELECT_WAIT)
                    else:
                        self.is_ready = False
                for _ in range(reps):
                    if self.is_ready:
                        if self._send_raw(command.get_bytes(self)):
                            time.sleep(wait)
                        else:
                            self.is_ready = False
        if not self.is_ready and not self.is_closed:
            if self.version < 6:
                time.sleep(RECONNECT_TIME)
            self.is_ready = True
Consume commands from the queue. The command is repeated according to the configured value. Wait after each command is sent. The bridge socket is a shared resource. It must only be used by one thread at a time. Note that this can and will delay commands if multiple groups are attempting to communicate at the same time on the same bridge.
def get_projects():
    assert request.method == "GET", \
        "GET request expected received {}".format(request.method)
    try:
        if request.method == 'GET':
            projects = utils.get_projects()
            return jsonify(projects)
    except Exception as e:
        logging.error(e)
        return jsonify({"0": "__EMPTY"})
Send a dictionary of projects that are available in the database.

Usage description: This function is usually called to get and display the list of projects available in the database.

:return: JSON, {<int_keys>: <project_name>}
def _link_record(self):
    action = self._get_lexicon_option('action')
    identifier = self._get_lexicon_option('identifier')
    rdtype = self._get_lexicon_option('type')
    name = (self._fqdn_name(self._get_lexicon_option('name'))
            if self._get_lexicon_option('name') else None)
    link = self._get_provider_option('linked')
    qname = name
    if identifier:
        rdtype, name, _ = self._parse_identifier(identifier)
    if (action != 'list' and rdtype in ('A', 'AAAA', 'TXT')
            and name and link == 'yes'):
        if action != 'update' or name == qname or not qname:
            LOGGER.info('Hetzner => Enable CNAME lookup '
                        '(see --linked parameter)')
            return name, True
    LOGGER.info('Hetzner => Disable CNAME lookup '
                '(see --linked parameter)')
    return name, False
Checks restrictions for use of CNAME lookup and returns a tuple of the fully qualified record name to lookup and a boolean, if a CNAME lookup should be done or not. The fully qualified record name is empty if no record name is specified by this provider.
def from_db_value(self, value, expression, connection, context):
    if value is None:
        return value
    return self.parse_seconds(value)
Handle data loaded from database.
def hexstr(text):
    text = text.strip().lower()
    if text.startswith(('0x', '0X')):
        text = text[2:]
    if not text:
        raise s_exc.BadTypeValu(valu=text, name='hexstr',
                                mesg='No string left after stripping')
    try:
        s_common.uhex(text)
    except (binascii.Error, ValueError) as e:
        raise s_exc.BadTypeValu(valu=text, name='hexstr', mesg=str(e))
    return text
Ensure a string is valid hex. Args: text (str): String to normalize. Examples: Norm a few strings: hexstr('0xff00') hexstr('ff00') Notes: Will accept strings prefixed by '0x' or '0X' and remove them. Returns: str: Normalized hex string.
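The normalization steps above (strip whitespace, lowercase, drop a ``0x`` prefix, then validate) can be reproduced with only the standard library. This simplified version raises plain ``ValueError`` instead of the library-specific ``BadTypeValu`` exception.

```python
import binascii

def norm_hexstr(text):
    """Simplified sketch of the hexstr normalizer: strip, lowercase,
    remove any '0x' prefix, then verify the remainder decodes as hex."""
    text = text.strip().lower()
    if text.startswith('0x'):
        text = text[2:]
    if not text:
        raise ValueError('No string left after stripping')
    # binascii.Error is a subclass of ValueError, so callers can
    # catch ValueError for both failure modes.
    binascii.unhexlify(text)
    return text

norm_hexstr('0xFF00')   # normalizes prefix and case to 'ff00'
```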
def _get_queue_batch_size(self, queue):
    batch_queues = self.config['BATCH_QUEUES']
    batch_size = 1
    for part in dotted_parts(queue):
        if part in batch_queues:
            batch_size = batch_queues[part]
    return batch_size
Get queue batch size.
def _get_text_color(self): color = self.code_array.cell_attributes[self.key]["textcolor"] return tuple(c / 255.0 for c in color_pack2rgb(color))
Returns the cell's text color as an RGB tuple with components scaled to the range 0..1
def add_suffix(string, suffix): if string[-len(suffix):] != suffix: return string + suffix else: return string
Adds a suffix to a string, if the string does not already have that suffix. :param string: the string that should have a suffix added to it :param suffix: the suffix to be added to the string :return: the string with the suffix added, if it does not already end in the suffix. Otherwise, it returns the original string.
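Since `str.endswith` handles the slice arithmetic, the same behavior can be sketched more idiomatically — an equivalent rewrite for illustration, not the library's code:

```python
def add_suffix(string, suffix):
    # Append the suffix only when the string does not already end with it.
    if not string.endswith(suffix):
        return string + suffix
    return string

print(add_suffix("report", ".txt"))      # report.txt
print(add_suffix("report.txt", ".txt"))  # report.txt
```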
def get_weekday_parameters(self, filename='shlp_weekday_factors.csv'): file = os.path.join(self.datapath, filename) f_df = pd.read_csv(file, index_col=0) tmp_df = f_df.query('shlp_type=="{0}"'.format(self.shlp_type)).drop( 'shlp_type', axis=1) tmp_df['weekdays'] = np.array(list(range(7))) + 1 return np.array(list(map(float, pd.DataFrame.merge( tmp_df, self.df, left_on='weekdays', right_on='weekday', how='inner', left_index=True).sort_index()['wochentagsfaktor'])))
Retrieve the weekday parameters from a csv file Parameters ---------- filename : string name of the file where the weekday factors are stored
def getClsNames(item): mro = inspect.getmro(item.__class__) mro = [c for c in mro if c not in clsskip] return ['%s.%s' % (c.__module__, c.__name__) for c in mro]
Return a list of "fully qualified" class names for an instance. Example: for name in getClsNames(foo): print(name)
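A runnable sketch of the MRO walk described above, with a hypothetical `skip` parameter standing in for the module-level `clsskip` set:

```python
import inspect

def qualified_class_names(obj, skip=(object,)):
    # Walk the method resolution order of obj's class and render each
    # class as module.ClassName, skipping uninteresting bases.
    mro = [c for c in inspect.getmro(obj.__class__) if c not in skip]
    return ['%s.%s' % (c.__module__, c.__name__) for c in mro]

class Base:
    pass

class Child(Base):
    pass

for name in qualified_class_names(Child()):
    print(name)
```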
def get_ladder_metadata(session, url): parsed = make_scrape_request(session, url) tag = parsed.find('a', href=re.compile(LADDER_ID_REGEX)) return { 'id': int(tag['href'].split('/')[-1]), 'slug': url.split('/')[-1], 'url': url }
Get ladder metadata.
def asdict(self): if self.cache and self._cached_asdict is not None: return self._cached_asdict d = self._get_nosync() if self.cache: self._cached_asdict = d return d
Retrieve all attributes as a dictionary.
def add_check(self, func, *, call_once=False): if call_once: self._check_once.append(func) else: self._checks.append(func)
Adds a global check to the bot. This is the non-decorator interface to :meth:`.check` and :meth:`.check_once`. Parameters ----------- func The function that was used as a global check. call_once: :class:`bool` If the function should only be called once per :meth:`.Command.invoke` call.
def UpdateTaskAsProcessingByIdentifier(self, task_identifier): with self._lock: task_processing = self._tasks_processing.get(task_identifier, None) if task_processing: task_processing.UpdateProcessingTime() self._UpdateLatestProcessingTime(task_processing) return task_queued = self._tasks_queued.get(task_identifier, None) if task_queued: logger.debug('Task {0:s} was queued, now processing.'.format( task_identifier)) self._tasks_processing[task_identifier] = task_queued del self._tasks_queued[task_identifier] task_queued.UpdateProcessingTime() self._UpdateLatestProcessingTime(task_queued) return task_abandoned = self._tasks_abandoned.get(task_identifier, None) if task_abandoned: del self._tasks_abandoned[task_identifier] self._tasks_processing[task_identifier] = task_abandoned logger.debug('Task {0:s} was abandoned, but now processing.'.format( task_identifier)) task_abandoned.UpdateProcessingTime() self._UpdateLatestProcessingTime(task_abandoned) return if task_identifier in self._tasks_pending_merge: return raise KeyError('Status of task {0:s} is unknown.'.format(task_identifier))
Updates the task manager to reflect the task is processing. Args: task_identifier (str): unique identifier of the task. Raises: KeyError: if the task is not known to the task manager.
def _GetCallingPrototypeAsString(self, flow_cls): output = [] output.append("flow.StartAFF4Flow(client_id=client_id, ") output.append("flow_name=\"%s\", " % flow_cls.__name__) prototypes = [] if flow_cls.args_type: for type_descriptor in flow_cls.args_type.type_infos: if not type_descriptor.hidden: prototypes.append("%s=%s" % (type_descriptor.name, type_descriptor.name)) output.append(", ".join(prototypes)) output.append(")") return "".join(output)
Get a description of the calling prototype for this flow class.
def mget(self, key, *keys, encoding=_NOTSET): return self.execute(b'MGET', key, *keys, encoding=encoding)
Get the values of all the given keys.
def set_link_status(link_id, status, **kwargs): user_id = kwargs.get('user_id') try: link_i = db.DBSession.query(Link).filter(Link.id == link_id).one() except NoResultFound: raise ResourceNotFoundError("Link %s not found"%(link_id)) link_i.network.check_write_permission(user_id) link_i.status = status db.DBSession.flush()
Set the status of a link
def min_length_discard(records, min_length): logging.info('Applying _min_length_discard generator: ' 'discarding records shorter than %d.', min_length) for record in records: if len(record) < min_length: logging.debug('Discarding short sequence: %s, length=%d', record.id, len(record)) else: yield record
Discard any records that are shorter than min_length.
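The generator above expects Biopython-style records where `len(record)` is the sequence length; a minimal stand-in record type (an assumption for the demo) makes the filtering behavior easy to check:

```python
import logging
from collections import namedtuple

class Record(namedtuple('Record', 'id seq')):
    def __len__(self):
        return len(self.seq)

def min_length_discard(records, min_length):
    # Yield only records at least min_length long; log the discards.
    for record in records:
        if len(record) < min_length:
            logging.debug('Discarding short sequence: %s, length=%d',
                          record.id, len(record))
        else:
            yield record

records = [Record('a', 'ACGT'), Record('b', 'AC'), Record('c', 'ACGTACGT')]
kept = [r.id for r in min_length_discard(records, 4)]
print(kept)  # ['a', 'c']
```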
def scatter(self, *args, **kwargs): marker_type = kwargs.pop("marker", "circle") if isinstance(marker_type, string_types) and marker_type in _MARKER_SHORTCUTS: marker_type = _MARKER_SHORTCUTS[marker_type] if marker_type == "circle" and "radius" in kwargs: return self.circle(*args, **kwargs) else: return self._scatter(*args, marker=marker_type, **kwargs)
Creates a scatter plot of the given x and y items. Args: x (str or seq[float]) : values or field names of center x coordinates y (str or seq[float]) : values or field names of center y coordinates size (str or list[float]) : values or field names of sizes in screen units marker (str, or list[str]): values or field names of marker types color (color value, optional): shorthand to set both fill and line color source (:class:`~bokeh.models.sources.ColumnDataSource`) : a user-supplied data source. An attempt will be made to convert the object to :class:`~bokeh.models.sources.ColumnDataSource` if needed. If none is supplied, one is created for the user automatically. **kwargs: :ref:`userguide_styling_line_properties` and :ref:`userguide_styling_fill_properties` Examples: >>> p.scatter([1,2,3],[4,5,6], marker="square", fill_color="red") >>> p.scatter("data1", "data2", marker="mtype", source=data_source, ...) .. note:: When passing ``marker="circle"`` it is also possible to supply a ``radius`` value in data-space units. When configuring marker type from a data source column, *all* markers including circles may only be configured with ``size`` in screen units.
def apply_rcparams(kind="fast"): if kind == "default": matplotlib.rcdefaults() elif kind == "fast": matplotlib.rcParams["text.usetex"] = False matplotlib.rcParams["mathtext.fontset"] = "cm" matplotlib.rcParams["font.family"] = "sans-serif" matplotlib.rcParams["font.size"] = 14 matplotlib.rcParams["legend.edgecolor"] = "grey" matplotlib.rcParams["contour.negative_linestyle"] = "solid" elif kind == "publication": matplotlib.rcParams["text.usetex"] = True preamble = "\\usepackage[cm]{sfmath}\\usepackage{amssymb}" matplotlib.rcParams["text.latex.preamble"] = preamble matplotlib.rcParams["mathtext.fontset"] = "cm" matplotlib.rcParams["font.family"] = "sans-serif" matplotlib.rcParams["font.serif"] = "cm" matplotlib.rcParams["font.sans-serif"] = "cm" matplotlib.rcParams["font.size"] = 14 matplotlib.rcParams["legend.edgecolor"] = "grey" matplotlib.rcParams["contour.negative_linestyle"] = "solid"
Quickly apply rcparams for given purposes. Parameters ---------- kind: {'default', 'fast', 'publication'} (optional) Settings to use. Default is 'fast'.
def share_with_link(self, share_type='view', share_scope='anonymous'): if not self.object_id: return None url = self.build_url( self._endpoints.get('share_link').format(id=self.object_id)) data = { 'type': share_type, 'scope': share_scope } response = self.con.post(url, data=data) if not response: return None data = response.json() return DriveItemPermission(parent=self, **{self._cloud_data_key: data})
Creates or returns a link you can share with others :param str share_type: 'view' to allow only view access, 'edit' to allow edits, and 'embed' to allow the DriveItem to be embedded :param str share_scope: 'anonymous': anyone with the link can access. 'organization': only organization members can access :return: link to share :rtype: DriveItemPermission
def run(wf, *, display, n_threads=1): worker = dynamic_exclusion_worker(display, n_threads) return noodles.Scheduler(error_handler=display.error_handler)\ .run(worker, get_workflow(wf))
Run the workflow using the dynamic-exclusion worker.
def CELERY_RESULT_BACKEND(self): configured = get('CELERY_RESULT_BACKEND', None) if configured: return configured if not self._redis_available(): return None host, port = self.REDIS_HOST, self.REDIS_PORT if host and port: default = "redis://{host}:{port}/{db}".format( host=host, port=port, db=self.CELERY_REDIS_RESULT_DB) return default
Redis result backend config
def _fields_common(self): result = {} if not self.testmode: result["__reponame__"] = self.repo.repo.full_name result["__repodesc__"] = self.repo.repo.description result["__repourl__"] = self.repo.repo.html_url result["__repodir__"] = self.repodir if self.organization is not None: owner = self.repo.organization else: owner = self.repo.user result["__username__"] = owner.name result["__userurl__"] = owner.html_url result["__useravatar__"] = owner.avatar_url result["__useremail__"] = owner.email return result
Returns a dictionary of fields and values that are common to all events for which fields dictionaries are created.
def map(self, key_pattern, func, all_args, timeout=None): results = [] keys = [ make_key(key_pattern, func, args, {}) for args in all_args ] cached = dict(zip(keys, self.get_many(keys))) cache_to_add = {} for key, args in zip(keys, all_args): val = cached[key] if val is None: val = func(*args) cache_to_add[key] = val if val is not None else NONE_RESULT if val == NONE_RESULT: val = None results.append(val) if cache_to_add: self.set_many(cache_to_add, timeout) return results
Cache return value of multiple calls. Args: key_pattern (str): the key pattern to use for generating keys for caches of the decorated function. func (function): the function to call. all_args (list): a list of args to be used to make calls to the function. timeout (int): the cache timeout Returns: A list of the return values of the calls. Example:: def add(a, b): return a + b cache.map(key_pat, add, [(1, 2), (3, 4)]) == [3, 7]
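A minimal sketch of the same map-with-cache pattern, backed by a plain dict instead of a real cache backend. The `NONE_RESULT` sentinel trick from the source is kept so a cached `None` return value is distinguishable from a cache miss; the key scheme and class name are illustrative:

```python
_NONE_RESULT = object()  # sentinel standing in for a cached None

class DictCache:
    def __init__(self):
        self._store = {}

    def map(self, func, all_args):
        # Return func(*args) for each args tuple, computing only on misses.
        results = []
        for args in all_args:
            key = (func.__name__, tuple(args))
            val = self._store.get(key)
            if val is None:  # miss (a cached None is stored as the sentinel)
                val = func(*args)
                self._store[key] = val if val is not None else _NONE_RESULT
            if val is _NONE_RESULT:
                val = None
            results.append(val)
        return results

def add(a, b):
    return a + b

cache = DictCache()
print(cache.map(add, [(1, 2), (3, 4)]))  # [3, 7]
```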
def makeUnicodeToGlyphNameMapping(self): compiler = self.context.compiler cmap = None if compiler is not None: table = compiler.ttFont.get("cmap") if table is not None: cmap = table.getBestCmap() if cmap is None: from ufo2ft.util import makeUnicodeToGlyphNameMapping if compiler is not None: glyphSet = compiler.glyphSet else: glyphSet = self.context.font cmap = makeUnicodeToGlyphNameMapping(glyphSet) return cmap
Return the Unicode to glyph name mapping for the current font.
def _margtime_loglr(self, mf_snr, opt_snr): return special.logsumexp(mf_snr, b=self._deltat) - 0.5*opt_snr
Returns the log likelihood ratio marginalized over time.
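The marginalization reduces to a numerically stable log-sum-exp plus the log of the sample spacing; a stdlib-only sketch (variable names follow the source, the values are made up):

```python
import math

def logsumexp(values):
    # log(sum(exp(v) for v in values)), shifted by the max for stability.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

mf_snr = [1.0, 2.5, 2.0]   # per-sample log likelihood ratios (illustrative)
deltat = 1.0 / 4096        # uniform time spacing (illustrative)
marg_loglr = logsumexp(mf_snr) + math.log(deltat)
```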
def metrics(self, name): return [ MetricStub( ensure_unicode(stub.name), stub.type, stub.value, normalize_tags(stub.tags), ensure_unicode(stub.hostname), ) for stub in self._metrics.get(to_string(name), []) ]
Return the metrics received under the given name
def log_request(self, extra=''): thread_name = threading.currentThread().getName().lower() if thread_name == 'mainthread': thread_name = '' else: thread_name = '-%s' % thread_name if self.config['proxy']: if self.config['proxy_userpwd']: auth = ' with authorization' else: auth = '' proxy_info = ' via %s proxy of type %s%s' % ( self.config['proxy'], self.config['proxy_type'], auth) else: proxy_info = '' if extra: extra = '[%s] ' % extra logger_network.debug( '[%s%s] %s%s %s%s', ('%02d' % self.request_counter if self.request_counter is not None else 'NA'), thread_name, extra, self.request_method or 'GET', self.config['url'], proxy_info)
Send request details to logging system.
def run_parser(quil): input_stream = InputStream(quil) lexer = QuilLexer(input_stream) stream = CommonTokenStream(lexer) parser = QuilParser(stream) parser.removeErrorListeners() parser.addErrorListener(CustomErrorListener()) tree = parser.quil() pyquil_listener = PyQuilListener() walker = ParseTreeWalker() walker.walk(pyquil_listener, tree) return pyquil_listener.result
Run the ANTLR parser. :param str quil: a single or multiline Quil program :return: list of instructions that were parsed
def current(self): results = self._timeline.find_withtag(tk.CURRENT) return results[0] if len(results) != 0 else None
Currently active item on the _timeline Canvas :rtype: str
def get_url(path, host, port, method="http"): return urlunsplit( (method, "%s:%s" % (host, port), path, "", "") )
Make a URL from path, host and port :param path: str, path within the request, e.g. "/api/version" :param host: str :param port: str or int :param method: str, URL scheme, e.g. "http" :return: str
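`urlunsplit` takes a five-tuple `(scheme, netloc, path, query, fragment)`; the helper passes empty query and fragment, which a quick stdlib check confirms (the source calls the scheme parameter `method`):

```python
from urllib.parse import urlunsplit

def get_url(path, host, port, method="http"):
    # Assemble scheme://host:port/path with empty query and fragment.
    return urlunsplit((method, "%s:%s" % (host, port), path, "", ""))

print(get_url("/api/version", "localhost", 8080))
# http://localhost:8080/api/version
```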
def tbframes(tb): 'unwind traceback tb_next structure to array' frames = [tb.tb_frame] while tb.tb_next: tb = tb.tb_next; frames.append(tb.tb_frame) return frames
unwind traceback tb_next structure to array
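The same unwinding can be exercised end to end by raising through a couple of call frames — a self-contained demo, not the library's own test:

```python
import sys

def traceback_frames(tb):
    # Follow tb_next links and collect every frame along the way.
    frames = [tb.tb_frame]
    while tb.tb_next is not None:
        tb = tb.tb_next
        frames.append(tb.tb_frame)
    return frames

def inner():
    raise ValueError("boom")

def outer():
    inner()

try:
    outer()
except ValueError:
    tb = sys.exc_info()[2]
    names = [f.f_code.co_name for f in traceback_frames(tb)]
    print(names)  # [..., 'outer', 'inner']
```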
def get_disabled(): ret = set() for name in _iter_service_names(): if _service_is_upstart(name): if _upstart_is_disabled(name): ret.add(name) else: if _service_is_sysv(name): if _sysv_is_disabled(name): ret.add(name) return sorted(ret)
Return the disabled services CLI Example: .. code-block:: bash salt '*' service.get_disabled
def get_data(): y = 1.0 / (1.0 + 1j*(n_x.get_value()-0.002)*1000) + _n.random.rand()*0.1 _t.sleep(0.1) return abs(y), _n.angle(y, True)
Currently pretends to talk to an instrument and get back the magnitude and phase of the measurement.
def getComponentByPosition(self, idx, default=noValue, instantiate=True): try: componentValue = self._componentValues[idx] except IndexError: if not instantiate: return default self.setComponentByPosition(idx) componentValue = self._componentValues[idx] if default is noValue or componentValue.isValue: return componentValue else: return default
Return |ASN.1| type component value by position. Equivalent to Python sequence subscription operation (e.g. `[]`). Parameters ---------- idx : :class:`int` Component index (zero-based). Must either refer to an existing component or to N+1 component (if *componentType* is set). In the latter case a new component type gets instantiated and appended to the |ASN.1| sequence. Keyword Args ------------ default: :class:`object` If set and requested component is a schema object, return the `default` object instead of the requested component. instantiate: :class:`bool` If `True` (default), inner component will be automatically instantiated. If 'False' either existing component or the `noValue` object will be returned. Returns ------- : :py:class:`~pyasn1.type.base.PyAsn1Item` Instantiate |ASN.1| component type or return existing component value Examples -------- .. code-block:: python # can also be SetOf class MySequenceOf(SequenceOf): componentType = OctetString() s = MySequenceOf() # returns component #0 with `.isValue` property False s.getComponentByPosition(0) # returns None s.getComponentByPosition(0, default=None) s.clear() # returns noValue s.getComponentByPosition(0, instantiate=False) # sets component #0 to OctetString() ASN.1 schema # object and returns it s.getComponentByPosition(0, instantiate=True) # sets component #0 to ASN.1 value object s.setComponentByPosition(0, 'ABCD') # returns OctetString('ABCD') value object s.getComponentByPosition(0, instantiate=False) s.clear() # returns noValue s.getComponentByPosition(0, instantiate=False)
def sort_by_size(self, group_limit=None, discard_others=False, others_label='others'): self.groups = OrderedDict(sorted(six.iteritems(self.groups), key=lambda x: len(x[1]), reverse=True)) if group_limit is not None: if not discard_others: group_keys = list(self.groups.keys())[group_limit - 1:] self.groups.setdefault(others_label, list()) else: group_keys = list(self.groups.keys())[group_limit:] for g in group_keys: if not discard_others: self.groups[others_label].extend(self.groups[g]) del self.groups[g] if (others_label in self.groups and len(self.groups[others_label]) == 0): del self.groups[others_label] if discard_others and others_label in self.groups: del self.groups[others_label]
Sort the groups by the number of elements they contain, descending. Also has an option to limit the number of groups. If this option is chosen, the remaining elements are placed into another group with the name specified with others_label. If discard_others is True, the others group is removed instead.
def reconnectRemote(self, remote): if not isinstance(remote, Remote): raise PlenumTypeError('remote', remote, Remote) logger.info('{} reconnecting to {}'.format(self, remote)) public, secret = self.selfEncKeys remote.disconnect() remote.connect(self.ctx, public, secret) self.sendPingPong(remote, is_ping=True)
Disconnect the remote and connect to it again :param remote: instance of Remote from self.remotes :return: None
def get_validation_errors(self, xml_input): errors = [] try: parsed_xml = etree.parse(self._handle_xml(xml_input)) self.xmlschema.assertValid(parsed_xml) except (etree.DocumentInvalid, etree.XMLSyntaxError) as e: errors = self._handle_errors(e.error_log) except AttributeError: raise CannotValidate('Set XSD to validate the XML') return errors
This method returns a list of validation errors. If there are no errors, an empty list is returned
def get_applicable_content_pattern_names(self, path): encodings = set() applicable_content_pattern_names = set() for path_pattern_name, content_pattern_names in self._required_matches.items(): m = self._path_matchers[path_pattern_name] if m.matches(path): encodings.add(m.content_encoding) applicable_content_pattern_names.update(content_pattern_names) if len(encodings) > 1: raise ValueError('Path matched patterns with multiple content encodings ({}): {}'.format( ', '.join(sorted(encodings)), path )) content_encoding = next(iter(encodings)) if encodings else None return applicable_content_pattern_names, content_encoding
Return the content patterns applicable to a given path. Returns a tuple (applicable_content_pattern_names, content_encoding). If path matches no path patterns, the returned content_encoding will be None (and applicable_content_pattern_names will be empty).
def convert_hardsigmoid(node, **kwargs): name, input_nodes, attrs = get_inputs(node, kwargs) alpha = float(attrs.get("alpha", 0.2)) beta = float(attrs.get("beta", 0.5)) node = onnx.helper.make_node( 'HardSigmoid', input_nodes, [name], alpha=alpha, beta=beta, name=name ) return [node]
Map MXNet's hard_sigmoid operator attributes to onnx's HardSigmoid operator and return the created node.
def _ast_worker(tokens, tokens_len, index, term): statements = [] arguments = [] while index < tokens_len: if term: if term(index, tokens): break if tokens[index].type == TokenType.Word and \ index + 1 < tokens_len and \ tokens[index + 1].type == TokenType.LeftParen: index, statement = _handle_function_call(tokens, tokens_len, index) statements.append(statement) elif _is_word_type(tokens[index].type): arguments.append(Word(type=_word_type(tokens[index].type), contents=tokens[index].content, line=tokens[index].line, col=tokens[index].col, index=index)) index = index + 1 return (index, GenericBody(statements=statements, arguments=arguments))
The main collector for all AST functions. This function is called recursively to find both variable use and function calls and returns a GenericBody with both those variables and function calls hanging off of it. The caller can figure out what to do with both of those.
def debug(self_, msg, *args, **kw): self_.__db_print(DEBUG, msg, *args, **kw)
Print msg merged with args as a debugging statement. See Python's logging module for details of message formatting.
def _structure(msg, fp=None, level=0, include_default=False): if fp is None: fp = sys.stdout tab = ' ' * (level * 4) print(tab + msg.get_content_type(), end='', file=fp) if include_default: print(' [%s]' % msg.get_default_type(), file=fp) else: print(file=fp) if msg.is_multipart(): for subpart in msg.get_payload(): _structure(subpart, fp, level+1, include_default)
A handy debugging aid
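The stdlib's `email` package makes the recursion easy to observe; here is a variant that collects the content-type tree into a list instead of printing (standard `EmailMessage` usage, but the helper name is ours):

```python
from email.message import EmailMessage

def structure(msg, level=0):
    # Recursively collect one indented line per MIME part.
    lines = ['    ' * level + msg.get_content_type()]
    if msg.is_multipart():
        for part in msg.get_payload():
            lines.extend(structure(part, level + 1))
    return lines

msg = EmailMessage()
msg.set_content('hello')
msg.add_attachment(b'data', maintype='application', subtype='octet-stream')
print('\n'.join(structure(msg)))
```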
def getActionSetHandle(self, pchActionSetName): fn = self.function_table.getActionSetHandle pHandle = VRActionSetHandle_t() result = fn(pchActionSetName, byref(pHandle)) return result, pHandle
Returns a handle for an action set. This handle is used for all performance-sensitive calls.
def smooth(data, fw): if fw == 0: fdata = data else: fdata = lfilter(np.ones(fw)/fw, 1, data) return fdata
Smooth data with a moving average.
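`lfilter(np.ones(fw)/fw, 1, data)` is a causal FIR moving average with zero initial conditions; a dependency-free sketch that produces the same outputs (including the ramp-up at the start) can be written as:

```python
def moving_average(data, fw):
    # Causal moving average over a window of fw samples;
    # fw == 0 returns the data unchanged, mirroring the source.
    if fw == 0:
        return list(data)
    out = []
    acc = 0.0
    for i, x in enumerate(data):
        acc += x
        if i >= fw:
            acc -= data[i - fw]  # drop the sample leaving the window
        out.append(acc / fw)
    return out

print(moving_average([1, 2, 3, 4], 2))  # [0.5, 1.5, 2.5, 3.5]
```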
def get_all_dataset_names(configuration=None, **kwargs): dataset = Dataset(configuration=configuration) dataset['id'] = 'all dataset names' return dataset._write_to_hdx('list', kwargs, 'id')
Get all dataset names in HDX Args: configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. **kwargs: See below limit (int): Number of rows to return. Defaults to all dataset names. offset (int): Offset in the complete result for where the set of returned dataset names should begin Returns: List[str]: list of all dataset names in HDX
def run(self) -> None: fd = self._fd encoding = self._encoding line_terminators = self._line_terminators queue = self._queue buf = "" while True: try: c = fd.read(1).decode(encoding) except UnicodeDecodeError as e: log.warning("Decoding error from {!r}: {!r}", self._cmdargs, e) if self._suppress_decoding_errors: continue else: raise if not c: return buf += c for t in line_terminators: try: t_idx = buf.index(t) + len(t) fragment = buf[:t_idx] buf = buf[t_idx:] queue.put(fragment) except ValueError: pass
Read lines and put them on the queue.
def _netname(name: str) -> dict: try: long = net_query(name).name short = net_query(name).shortname except AttributeError: raise UnsupportedNetwork() return {'long': long, 'short': short}
Resolve the network name; required because some providers use short names and others use long names.
def update(self, environments): data = {'environments': environments} environments_ids = [str(env.get('id')) for env in environments] return super(ApiEnvironment, self).put('api/v3/environment/%s/' % ';'.join(environments_ids), data)
Method to update environments :param environments: List containing the environments to be updated :return: response of the PUT request
def ReadSerializedDict(cls, json_dict): if json_dict: json_object = cls._ConvertDictToObject(json_dict) if not isinstance(json_object, containers_interface.AttributeContainer): raise TypeError('{0:s} is not an attribute container type.'.format( type(json_object))) return json_object return None
Reads an attribute container from serialized dictionary form. Args: json_dict (dict[str, object]): JSON serialized objects. Returns: AttributeContainer: attribute container or None. Raises: TypeError: if the serialized dictionary does not contain an AttributeContainer.
def fetch_image(self, rtsp_server_uri=_source, timeout_secs=15): self._check_ffmpeg() cmd = "ffmpeg -rtsp_transport tcp -i {} -loglevel quiet -frames 1 -f image2pipe -".format(rtsp_server_uri) with _sp.Popen(cmd, shell=True, stdout=_sp.PIPE) as process: try: stdout, stderr = process.communicate(timeout=timeout_secs) except _sp.TimeoutExpired as e: process.kill() raise TimeoutError("Connection to {} timed out".format(rtsp_server_uri), e) return _Image.open(_io.BytesIO(stdout))
Fetch a single frame using FFmpeg and convert it to a PIL Image. Slow.
def validate_data_files(problem, data_files, min_size): data_dir = os.path.split(data_files[0])[0] out_filepaths = problem.out_filepaths(data_dir) missing_filepaths = set(out_filepaths) - set(data_files) if missing_filepaths: tf.logging.error("Missing %d data files", len(missing_filepaths)) too_small = [] for data_file in data_files: length = get_length(data_file) if length < min_size: too_small.append(data_file) if too_small: tf.logging.error("%d files too small", len(too_small)) bad_files = too_small + list(missing_filepaths) return bad_files
Validate presence and minimum size of files.
def delete(self, domain, delete_subdomains=False): uri = "/%s/%s" % (self.uri_base, utils.get_id(domain)) if delete_subdomains: uri = "%s?deleteSubdomains=true" % uri resp, resp_body = self._async_call(uri, method="DELETE", error_class=exc.DomainDeletionFailed, has_response=False)
Deletes the specified domain and all of its resource records. If the domain has subdomains, each subdomain will now become a root domain. If you wish to also delete any subdomains, pass True to 'delete_subdomains'.
def deep_dependendants(self, target): direct_dependents = self._gettask(target).provides_for return (direct_dependents + reduce( lambda a, b: a + b, [self.deep_dependendants(x) for x in direct_dependents], []))
Recursively finds the dependents of a given build target. Assumes the dependency graph is acyclic
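With the task graph modeled as a plain dict from a target to its direct dependents (an assumption standing in for `_gettask(...).provides_for`), the recursion looks like this; note the result can contain duplicates when two paths reach the same node, a property the source shares:

```python
from functools import reduce

def deep_dependents(graph, target):
    # graph maps a target to the list of targets that directly depend on it.
    direct = graph.get(target, [])
    return direct + reduce(
        lambda a, b: a + b,
        [deep_dependents(graph, t) for t in direct],
        [])

graph = {'lib': ['app', 'tool'], 'app': ['dist'], 'tool': [], 'dist': []}
print(deep_dependents(graph, 'lib'))  # ['app', 'tool', 'dist']
```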
async def create_new_pump_async(self, partition_id, lease): loop = asyncio.get_event_loop() partition_pump = EventHubPartitionPump(self.host, lease) loop.create_task(partition_pump.open_async()) self.partition_pumps[partition_id] = partition_pump _logger.info("Created new partition pump %r %r", self.host.guid, partition_id)
Create a new pump thread with a given lease. :param partition_id: The partition ID. :type partition_id: str :param lease: The lease to be used. :type lease: ~azure.eventprocessorhost.lease.Lease
def remove_uid(self, uid): for sid in self.get('uid2sid', uid): self.remove('sid2uid', sid, uid) self.delete('uid2sid', uid)
Remove all references to a specific User ID :param uid: A User ID
def generate_apsara_log_config(json_value): input_detail = json_value['inputDetail'] output_detail = json_value['outputDetail'] config_name = json_value['configName'] logSample = json_value.get('logSample', '') logstore_name = output_detail['logstoreName'] endpoint = output_detail.get('endpoint', '') log_path = input_detail['logPath'] file_pattern = input_detail['filePattern'] log_begin_regex = input_detail.get('logBeginRegex', '') topic_format = input_detail['topicFormat'] filter_keys = input_detail['filterKey'] filter_keys_reg = input_detail['filterRegex'] config = ApsaraLogConfigDetail(config_name, logstore_name, endpoint, log_path, file_pattern, log_begin_regex, topic_format, filter_keys, filter_keys_reg, logSample) return config
Generate apsara logtail config from loaded json value :param json_value: :return:
def put_file(self, filename, index, doc_type, id=None, name=None): if id is None: request_method = 'POST' else: request_method = 'PUT' path = make_path(index, doc_type, id) doc = file_to_attachment(filename) if name: doc["_name"] = name return self._send_request(request_method, path, doc)
Store a file in an index
def copy(self): return Header([line.copy() for line in self.lines], self.samples.copy())
Return a copy of this header
def _ensure_config_file_exists(): config_file = Path(ELIBConfig.config_file_path).absolute() if not config_file.exists(): raise ConfigFileNotFoundError(ELIBConfig.config_file_path)
Makes sure the config file exists. :raises: :class:`epab.core.new_config.exc.ConfigFileNotFoundError`
def evalsha(self, sha, numkeys, *keys_and_args): return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
Use the ``sha`` to execute a Lua script already registered via EVAL or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. This function exists purely for Redis API completion.
def create_archive_dir(self): archive_dir = os.path.join(self.tmp_dir, self.archive_name) os.makedirs(archive_dir, 0o700) return archive_dir
Create the archive dir
def _normalize(image): offset = tf.constant(MEAN_RGB, shape=[1, 1, 3]) image -= offset scale = tf.constant(STDDEV_RGB, shape=[1, 1, 3]) image /= scale return image
Normalize the image to zero mean and unit variance.
def series(self): if not self.pages: return [] useframes = self.pages.useframes keyframe = self.pages.keyframe.index series = [] for name in ('lsm', 'ome', 'imagej', 'shaped', 'fluoview', 'sis', 'uniform', 'mdgel'): if getattr(self, 'is_' + name, False): series = getattr(self, '_series_' + name)() break self.pages.useframes = useframes self.pages.keyframe = keyframe if not series: series = self._series_generic() series = [s for s in series if product(s.shape) > 0] for i, s in enumerate(series): s.index = i return series
Return related pages as TiffPageSeries. Side effect: after calling this function, TiffFile.pages might contain TiffPage and TiffFrame instances.