def stoch2array(self): a = np.empty(self.dim) for stochastic in self.stochastics: a[self._slices[stochastic]] = stochastic.value return a
Return the stochastic objects as an array.
def extract(features, groups, weight_method=default_weight_method, num_bins=default_num_bins, edge_range=default_edge_range, trim_outliers=default_trim_behaviour, trim_percentile=default_trim_percentile, use_original_distribution=False, ...
Extracts the histogram-distance weighted adjacency matrix. Parameters ---------- features : ndarray or str 1d array of scalar values, either provided directly as a 1d numpy array, or as a path to a file containing these values groups : ndarray or str Membership array of same le...
def _get_master_proc_by_name(self, name, tags): master_name = GUnicornCheck._get_master_proc_name(name) master_procs = [p for p in psutil.process_iter() if p.cmdline() and p.cmdline()[0] == master_name] if len(master_procs) == 0: self.service_check( self.SVC_NAME, ...
Return a psutil process for the master gunicorn process with the given name.
def should_set_cookie(self, app: 'Quart', session: SessionMixin) -> bool: if session.modified: return True save_each = app.config['SESSION_REFRESH_EACH_REQUEST'] return save_each and session.permanent
Helper method to return whether the Set-Cookie header should be present. This is the case if the session has been modified, or if the app is configured to always refresh the cookie.
def parse_reaction_list(path, reactions, default_compartment=None): context = FilePathContext(path) for reaction_def in reactions: if 'include' in reaction_def: include_context = context.resolve(reaction_def['include']) for reaction in parse_reaction_file( inc...
Parse a structured list of reactions as obtained from a YAML file. Yields tuples of reaction ID and reaction object. Path can be given as a string or a context.
def run_impl(self, change, entry, out): options = self.options javascripts = self._relative_uris(options.html_javascripts) stylesheets = self._relative_uris(options.html_stylesheets) template_class = resolve_cheetah_template(type(change)) template = template_class() templ...
Sets up the report directory for an HTML report. Obtains the top-level Cheetah template that is appropriate for the change instance, and runs it. The Cheetah templates are supplied the following values: * change - the Change instance to report on * entry - the string name of t...
def libvlc_media_player_set_video_title_display(p_mi, position, timeout): f = _Cfunctions.get('libvlc_media_player_set_video_title_display', None) or \ _Cfunction('libvlc_media_player_set_video_title_display', ((1,), (1,), (1,),), None, None, MediaPlayer, Position, ctypes.c_int) retu...
Set if, and how, the video title will be shown when media is played. @param p_mi: the media player. @param position: position at which to display the title, or libvlc_position_disable to prevent the title from being displayed. @param timeout: title display timeout in milliseconds (ignored if libvlc_position...
def qrot(vector, quaternion): t = 2 * np.cross(quaternion[1:], vector) v_rot = vector + quaternion[0] * t + np.cross(quaternion[1:], t) return v_rot
Rotate a 3D vector using quaternion algebra. Implemented by Vladimir Kulikovskiy. Parameters ---------- vector: np.array quaternion: np.array Returns ------- np.array
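The identity used above (t = 2 q_vec × v, then v' = v + w·t + q_vec × t) can be checked against a known rotation. This sketch assumes a scalar-first [w, x, y, z] quaternion layout, which is what the indexing in the snippet suggests:

```python
import numpy as np

def qrot(vector, quaternion):
    # t = 2 * (q_vec x v); v' = v + w*t + q_vec x t
    t = 2 * np.cross(quaternion[1:], vector)
    return vector + quaternion[0] * t + np.cross(quaternion[1:], t)

# A 90-degree rotation about z is q = [cos(a/2), 0, 0, sin(a/2)] with a = pi/2.
half = np.pi / 4
q = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
v = np.array([1.0, 0.0, 0.0])
print(qrot(v, q))  # approximately [0, 1, 0]
```

Rotating the x axis by 90° about z should land on the y axis, which the formula reproduces without ever building a rotation matrix.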
def getResiduals(self): X = np.zeros((self.N*self.P,self.n_fixed_effs)) ip = 0 for i in range(self.n_terms): Ki = self.A[i].shape[0]*self.F[i].shape[1] X[:,ip:ip+Ki] = np.kron(self.A[i].T,self.F[i]) ip += Ki y = np.reshape(self.Y,(self.Y.size,1),order=...
Regress out fixed effects and return the residuals.
def easeInOutQuad(n): _checkRange(n) if n < 0.5: return 2 * n**2 else: n = n * 2 - 1 return -0.5 * (n*(n-2) - 1)
A quadratic tween function that accelerates, reaches the midpoint, and then decelerates. Args: n (float): The time progress, starting at 0.0 and ending at 1.0. Returns: (float) The line progress, starting at 0.0 and ending at 1.0. Suitable for passing to getPointOnLine().
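The piecewise formula above can be checked at the endpoints and the midpoint; this sketch inlines a simple range check in place of the original `_checkRange` helper:

```python
def easeInOutQuad(n):
    # n is the time progress and is expected in [0.0, 1.0]
    if not 0.0 <= n <= 1.0:
        raise ValueError('n must be between 0.0 and 1.0')
    if n < 0.5:
        return 2 * n ** 2          # accelerating half
    n = n * 2 - 1                  # remap [0.5, 1.0] onto [0.0, 1.0]
    return -0.5 * (n * (n - 2) - 1)  # decelerating half

print(easeInOutQuad(0.0), easeInOutQuad(0.5), easeInOutQuad(1.0))  # 0.0 0.5 1.0
```

The two parabolic halves meet exactly at (0.5, 0.5), so the tween is continuous across the midpoint.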
def log_inference(batch_id, batch_num, metric, step_loss, log_interval): metric_nm, metric_val = metric.get() if not isinstance(metric_nm, list): metric_nm = [metric_nm] metric_val = [metric_val] eval_str = '[Batch %d/%d] loss=%.4f, metrics:' + \ ','.join([i + ':%.4f' for i in...
Generate and print out the log message for inference.
def parse(cls, root): subsection = root.tag.replace(utils.lxmlns("mets"), "", 1) if subsection not in cls.ALLOWED_SUBSECTIONS: raise exceptions.ParseError( "SubSection can only parse elements with tag in %s with METS namespace" % (cls.ALLOWED_SUBSECTIONS,) ...
Create a new SubSection by parsing root. :param root: Element or ElementTree to be parsed into an object. :raises exceptions.ParseError: If root's tag is not in :const:`SubSection.ALLOWED_SUBSECTIONS`. :raises exceptions.ParseError: If the first child of root is not mdRef or mdWrap.
def from_dict(cls, d): o = super(DistributionList, cls).from_dict(d) o.members = [] if 'dlm' in d: o.members = [utils.get_content(member) for member in utils.as_list(d["dlm"])] return o
Override default, adding the capture of members.
def collect(self, order_ref): response = self.client.post( self._collect_endpoint, json={"orderRef": order_ref} ) if response.status_code == 200: return response.json() else: raise get_json_error_class(response)
Collects the result of a sign or auth order using the ``orderRef`` as reference. RP should keep on calling collect every two seconds as long as status indicates pending. RP must abort if status indicates failed. The user identity is returned when complete. Example collect resul...
def adopt(self, payload, *args, flavour: ModuleType, **kwargs): if args or kwargs: payload = functools.partial(payload, *args, **kwargs) self._meta_runner.register_payload(payload, flavour=flavour)
Concurrently run ``payload`` in the background If ``*args*`` and/or ``**kwargs`` are provided, pass them to ``payload`` upon execution.
def _init_kelas(self, makna_label): kelas = makna_label.find(color='red') lain = makna_label.find(color='darkgreen') info = makna_label.find(color='green') if kelas: kelas = kelas.find_all('span') if lain: self.kelas = {lain.text.strip(): lain['title'].str...
Memproses kelas kata yang ada dalam makna. :param makna_label: BeautifulSoup untuk makna yang ingin diproses. :type makna_label: BeautifulSoup
def GetCPIOArchiveFileEntryByPathSpec(self, path_spec): location = getattr(path_spec, 'location', None) if location is None: raise errors.PathSpecError('Path specification missing location.') if not location.startswith(self.LOCATION_ROOT): raise errors.PathSpecError('Invalid location in path spe...
Retrieves the CPIO archive file entry for a path specification. Args: path_spec (PathSpec): a path specification. Returns: CPIOArchiveFileEntry: CPIO archive file entry or None if not available. Raises: PathSpecError: if the path specification is incorrect.
def within_joyner_boore_distance(self, surface, distance, **kwargs): upper_depth, lower_depth = _check_depth_limits(kwargs) rjb = surface.get_joyner_boore_distance( self.catalogue.hypocentres_as_mesh()) is_valid = np.logical_and( rjb <= distance, np.logical_an...
Select events within a Joyner-Boore distance of a fault :param surface: Fault surface as instance of nhlib.geo.surface.base.SimpleFaultSurface or as instance of nhlib.geo.surface.ComplexFaultSurface :param float distance: Rupture distance (km) ...
def run(self, file, updateconfig=True, clean=False, path=None): if updateconfig: self.update_config() self.program, self.version = self.setup(path) commandline = ( self.program + " -c " + self.config['CONFIG_FILE'] + " " + file) rcode = os.system(commandline) ...
Run SExtractor. If updateconfig is True (default), the configuration files will be updated before running SExtractor. If clean is True (default: False), configuration files (if any) will be deleted after SExtractor terminates.
def iter_all_children(self): if self.inline_child: yield self.inline_child for x in self.children: yield x
Return an iterator that yields every node which is a child of this one. This includes inline children, and control structure `else` clauses.
def run_forever(self): res = self.slack.rtm.start() self.log.info("current channels: %s", ','.join(c['name'] for c in res.body['channels'] if c['is_member'])) self.id = res.body['self']['id'] self.name = res.body['self']['name'] ...
Run the bot, blocking forever.
def set_sampling_info(self, sample): if sample.getScheduledSamplingSampler() and sample.getSamplingDate(): return True sampler = self.get_form_value("getScheduledSamplingSampler", sample, sample.getScheduledSamplingSampler()) sampled = self.get_f...
Updates the scheduled Sampling sampler and the Sampling Date with the values provided in the request. If neither Sampling sampler nor Sampling Date are present in the request, returns False
def connect(self): self.socket = socket.socket() self.socket.settimeout(self.timeout_in_seconds) try: self.socket.connect(self.addr) except socket.timeout: raise GraphiteSendException( "Took over %d second(s) to connect to %s" % (se...
Make a TCP connection to the graphite server on port self.port
def _prep_cmd(cmd, tx_out_file): cmd = " ".join(cmd) if isinstance(cmd, (list, tuple)) else cmd return "export TMPDIR=%s && %s" % (os.path.dirname(tx_out_file), cmd)
Wrap CNVkit commands ensuring we use local temporary directories.
def evaluate_block(self, template, context=None, escape=None, safe_wrapper=None): if context is None: context = {} try: with self._evaluation_context(escape, safe_wrapper): template = self._environment.from_string(template) return template.render(*...
Evaluate a template block.
def find_range(self, interval): return self.find(self.tree, interval, self.start, self.end)
Wrapper for find.
def get_grade_systems_by_ids(self, grade_system_ids): collection = JSONClientValidated('grading', collection='GradeSystem', runtime=self._runtime) object_id_list = [] for i in grade_system_ids: object_i...
Gets a ``GradeSystemList`` corresponding to the given ``IdList``. In plenary mode, the returned list contains all of the systems specified in the ``Id`` list, in the order of the list, including duplicates, or an error results if an ``Id`` in the supplied list is not found or inaccessib...
def get_args(cls, dist, header=None): if header is None: header = cls.get_header() spec = str(dist.as_requirement()) for type_ in 'console', 'gui': group = type_ + '_scripts' for name, ep in dist.get_entry_map(group).items(): if re.search(r'[\\/]', name): ...
Overrides easy_install.ScriptWriter.get_args This method avoids using pkg_resources to map a named entry_point to a callable at invocation time.
def find_name(): name_file = read_file('__init__.py') name_match = re.search(r'^__package_name__ = ["\']([^"\']*)["\']', name_file, re.M) if name_match: return name_match.group(1) raise RuntimeError('Unable to find name string.')
Only define name in one place
def AsDict(self): sources = [] for source in self.sources: source_definition = { 'type': source.type_indicator, 'attributes': source.AsDict() } if source.supported_os: source_definition['supported_os'] = source.supported_os if source.conditions: source...
Represents an artifact as a dictionary. Returns: dict[str, object]: artifact attributes.
def __wrap(self, func): def deffunc(*args, **kwargs): if hasattr(inspect, 'signature'): function_args = inspect.signature(func).parameters else: function_args = inspect.getargspec(func).args filtered_kwargs = kwargs.copy() for param...
This decorator overrides the default arguments of a function. For each keyword argument in the function, the decorator first checks if the argument has been overridden by the caller, and uses that value instead if so. If not, the decorator consults the Preset object for an override value. ...
def string_to_sign(self, http_request): headers_to_sign = self.headers_to_sign(http_request) canonical_headers = self.canonical_headers(headers_to_sign) string_to_sign = '\n'.join([http_request.method, http_request.path, '',...
Return the canonical StringToSign as well as a dict containing the original version of all headers that were included in the StringToSign.
def get(self, key): modcommit = self._get_modcommit(key) if not modcommit: return None if key not in self.foreignkeys: return cPickle.loads(str(modcommit.value)) try: return TimeMachine(uid = modcommit.value).get_object() except self.content_type.DoesNotEx...
Return the value of a field. Take a string argument representing a field name, return the value of that field at the time of this TimeMachine. When restoring a ForeignKey-pointer object that doesn't exist, raise DisciplineException
def map_agent(self, agent, do_rename): agent_text = agent.db_refs.get('TEXT') mapped_to_agent_json = self.agent_map.get(agent_text) if mapped_to_agent_json: mapped_to_agent = \ Agent._from_json(mapped_to_agent_json['agent']) return mapped_to_agent, False ...
Return the given Agent with its grounding mapped. This function grounds a single agent. It returns the new Agent object (which might be a different object if we load a new agent state from json) or the same object otherwise. Parameters ---------- agent : :py:class:`indr...
def cli_info(self, event): self.log('Instance:', self.instance, 'Dev:', self.development, 'Host:', self.host, 'Port:', self.port, 'Insecure:', self.insecure, 'Frontend:', self.frontendtarget)
Provides information about the running instance
def _consume(self): while not self.is_closed: msg = self._command_queue.get() if msg is None: return with self._lock: if self.is_ready: (command, reps, wait) = msg if command.select and self._selected_num...
Consume commands from the queue. The command is repeated according to the configured value. Wait after each command is sent. The bridge socket is a shared resource. It must only be used by one thread at a time. Note that this can and will delay commands if multiple groups are a...
def get_projects(): assert request.method == "GET", "GET request expected received {}".format(request.method) try: if request.method == 'GET': projects = utils.get_projects() return jsonify(projects) except Exception as e: logging.error(e) return jsonify({"0": "__...
Send a dictionary of projects that are available on the database. Usage description: This function is usually called to get and display the list of projects available in the database. :return: JSON, {<int_keys>: <project_name>}
def _link_record(self): action = self._get_lexicon_option('action') identifier = self._get_lexicon_option('identifier') rdtype = self._get_lexicon_option('type') name = (self._fqdn_name(self._get_lexicon_option('name')) if self._get_lexicon_option('name') else None) ...
Checks restrictions for use of CNAME lookup and returns a tuple of the fully qualified record name to lookup and a boolean, if a CNAME lookup should be done or not. The fully qualified record name is empty if no record name is specified by this provider.
def from_db_value(self, value, expression, connection, context): if value is None: return value return self.parse_seconds(value)
Handle data loaded from database.
def hexstr(text): text = text.strip().lower() if text.startswith(('0x', '0X')): text = text[2:] if not text: raise s_exc.BadTypeValu(valu=text, name='hexstr', mesg='No string left after stripping') try: s_common.uhex(text) except (binascii.Erro...
Ensure a string is valid hex. Args: text (str): String to normalize. Examples: Norm a few strings: hexstr('0xff00') hexstr('ff00') Notes: Will accept strings prefixed by '0x' or '0X' and remove them. Returns: str: Normalized hex string.
def _get_queue_batch_size(self, queue): batch_queues = self.config['BATCH_QUEUES'] batch_size = 1 for part in dotted_parts(queue): if part in batch_queues: batch_size = batch_queues[part] return batch_size
Get queue batch size.
def _get_text_color(self): color = self.code_array.cell_attributes[self.key]["textcolor"] return tuple(c / 255.0 for c in color_pack2rgb(color))
Returns the text color as an RGB tuple, with each component scaled from 0-255 to the range 0.0-1.0.
def add_suffix(string, suffix): if string[-len(suffix):] != suffix: return string + suffix else: return string
Adds a suffix to a string, if the string does not already have that suffix. :param string: the string that should have a suffix added to it :param suffix: the suffix to be added to the string :return: the string with the suffix added, if it does not already end in the suffix. Otherwise, it returns ...
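The slicing comparison in the snippet can be expressed with `str.endswith`, which reads more idiomatically and gives the same results (including for an empty suffix):

```python
def add_suffix(string, suffix):
    # endswith() is the idiomatic membership check for a trailing substring.
    return string if string.endswith(suffix) else string + suffix

print(add_suffix("report", ".txt"))      # report.txt
print(add_suffix("report.txt", ".txt"))  # report.txt
```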
def get_weekday_parameters(self, filename='shlp_weekday_factors.csv'): file = os.path.join(self.datapath, filename) f_df = pd.read_csv(file, index_col=0) tmp_df = f_df.query('shlp_type=="{0}"'.format(self.shlp_type)).drop( 'shlp_type', axis=1) tmp_df['weekdays'] = np.array(li...
Retrieve the weekday parameters from a csv file. Parameters ---------- filename : string name of the file where sigmoid factors are stored
def getClsNames(item): mro = inspect.getmro(item.__class__) mro = [c for c in mro if c not in clsskip] return ['%s.%s' % (c.__module__, c.__name__) for c in mro]
Return a list of "fully qualified" class names for an instance. Example: for name in getClsNames(foo): print(name)
def get_ladder_metadata(session, url): parsed = make_scrape_request(session, url) tag = parsed.find('a', href=re.compile(LADDER_ID_REGEX)) return { 'id': int(tag['href'].split('/')[-1]), 'slug': url.split('/')[-1], 'url': url }
Get ladder metadata.
def asdict(self): if self.cache and self._cached_asdict is not None: return self._cached_asdict d = self._get_nosync() if self.cache: self._cached_asdict = d return d
Retrieve all attributes as a dictionary.
def add_check(self, func, *, call_once=False): if call_once: self._check_once.append(func) else: self._checks.append(func)
Adds a global check to the bot. This is the non-decorator interface to :meth:`.check` and :meth:`.check_once`. Parameters ----------- func The function that was used as a global check. call_once: :class:`bool` If the function should only be calle...
def UpdateTaskAsProcessingByIdentifier(self, task_identifier): with self._lock: task_processing = self._tasks_processing.get(task_identifier, None) if task_processing: task_processing.UpdateProcessingTime() self._UpdateLatestProcessingTime(task_processing) return task_queue...
Updates the task manager to reflect the task is processing. Args: task_identifier (str): unique identifier of the task. Raises: KeyError: if the task is not known to the task manager.
def _GetCallingPrototypeAsString(self, flow_cls): output = [] output.append("flow.StartAFF4Flow(client_id=client_id, ") output.append("flow_name=\"%s\", " % flow_cls.__name__) prototypes = [] if flow_cls.args_type: for type_descriptor in flow_cls.args_type.type_infos: if not type_descr...
Get a description of the calling prototype for this flow class.
def mget(self, key, *keys, encoding=_NOTSET): return self.execute(b'MGET', key, *keys, encoding=encoding)
Get the values of all the given keys.
def set_link_status(link_id, status, **kwargs): user_id = kwargs.get('user_id') try: link_i = db.DBSession.query(Link).filter(Link.id == link_id).one() except NoResultFound: raise ResourceNotFoundError("Link %s not found"%(link_id)) link_i.network.check_write_permission(user_id) link...
Set the status of a link
def min_length_discard(records, min_length): logging.info('Applying _min_length_discard generator: ' 'discarding records shorter than %d.', min_length) for record in records: if len(record) < min_length: logging.debug('Discarding short sequence: %s, length=%d', ...
Discard any records that are shorter than min_length.
def scatter(self, *args, **kwargs): marker_type = kwargs.pop("marker", "circle") if isinstance(marker_type, string_types) and marker_type in _MARKER_SHORTCUTS: marker_type = _MARKER_SHORTCUTS[marker_type] if marker_type == "circle" and "radius" in kwargs: return self.circ...
Creates a scatter plot of the given x and y items. Args: x (str or seq[float]) : values or field names of center x coordinates y (str or seq[float]) : values or field names of center y coordinates size (str or list[float]) : values or field names of sizes in screen units ...
def apply_rcparams(kind="fast"): if kind == "default": matplotlib.rcdefaults() elif kind == "fast": matplotlib.rcParams["text.usetex"] = False matplotlib.rcParams["mathtext.fontset"] = "cm" matplotlib.rcParams["font.family"] = "sans-serif" matplotlib.rcParams["font.size"]...
Quickly apply rcparams for given purposes. Parameters ---------- kind: {'default', 'fast', 'publication'} (optional) Settings to use. Default is 'fast'.
def share_with_link(self, share_type='view', share_scope='anonymous'): if not self.object_id: return None url = self.build_url( self._endpoints.get('share_link').format(id=self.object_id)) data = { 'type': share_type, 'scope': share_scope }...
Creates or returns a link you can share with others :param str share_type: 'view' to allow only view access, 'edit' to allow editions, and 'embed' to allow the DriveItem to be embedded :param str share_scope: 'anonymous': anyone with the link can access. 'organization' Only o...
def run(wf, *, display, n_threads=1): worker = dynamic_exclusion_worker(display, n_threads) return noodles.Scheduler(error_handler=display.error_handler)\ .run(worker, get_workflow(wf))
Run the workflow using the dynamic-exclusion worker.
def CELERY_RESULT_BACKEND(self): configured = get('CELERY_RESULT_BACKEND', None) if configured: return configured if not self._redis_available(): return None host, port = self.REDIS_HOST, self.REDIS_PORT if host and port: default = "redis://{ho...
Redis result backend config
def _fields_common(self): result = {} if not self.testmode: result["__reponame__"] = self.repo.repo.full_name result["__repodesc__"] = self.repo.repo.description result["__repourl__"] = self.repo.repo.html_url result["__repodir__"] = self.repodir ...
Returns a dictionary of fields and values that are common to all events for which fields dictionaries are created.
def map(self, key_pattern, func, all_args, timeout=None): results = [] keys = [ make_key(key_pattern, func, args, {}) for args in all_args ] cached = dict(zip(keys, self.get_many(keys))) cache_to_add = {} for key, args in zip(keys, all_args): ...
Cache return value of multiple calls. Args: key_pattern (str): the key pattern to use for generating keys for caches of the decorated function. func (function): the function to call. all_args (list): a list of args to be used to make calls to ...
def makeUnicodeToGlyphNameMapping(self): compiler = self.context.compiler cmap = None if compiler is not None: table = compiler.ttFont.get("cmap") if table is not None: cmap = table.getBestCmap() if cmap is None: from ufo2ft.util import...
Return the Unicode to glyph name mapping for the current font.
def _margtime_loglr(self, mf_snr, opt_snr): return special.logsumexp(mf_snr, b=self._deltat) - 0.5*opt_snr
Returns the log likelihood ratio marginalized over time.
def metrics(self, name): return [ MetricStub( ensure_unicode(stub.name), stub.type, stub.value, normalize_tags(stub.tags), ensure_unicode(stub.hostname), ) for stub in self._metrics.get(to_string(...
Return the metrics received under the given name
def log_request(self, extra=''): thread_name = threading.currentThread().getName().lower() if thread_name == 'mainthread': thread_name = '' else: thread_name = '-%s' % thread_name if self.config['proxy']: if self.config['proxy_userpwd']: ...
Send request details to logging system.
def run_parser(quil): input_stream = InputStream(quil) lexer = QuilLexer(input_stream) stream = CommonTokenStream(lexer) parser = QuilParser(stream) parser.removeErrorListeners() parser.addErrorListener(CustomErrorListener()) tree = parser.quil() pyquil_listener = PyQuilListener() wa...
Run the ANTLR parser. :param str quil: a single or multiline Quil program :return: list of instructions that were parsed
def current(self): results = self._timeline.find_withtag(tk.CURRENT) return results[0] if len(results) != 0 else None
Currently active item on the _timeline Canvas :rtype: str
def get_url(path, host, port, method="http"): return urlunsplit( (method, "%s:%s" % (host, port), path, "", "") )
Make a URL from path, host and port. :param path: str, path within the request, e.g. "/api/version" :param host: str :param port: str or int :param method: str, the URL scheme, defaults to "http" :return: str
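`urlunsplit` takes a (scheme, netloc, path, query, fragment) 5-tuple, so the snippet only needs to assemble the netloc from host and port:

```python
from urllib.parse import urlunsplit

def get_url(path, host, port, method="http"):
    # urlunsplit expects (scheme, netloc, path, query, fragment)
    return urlunsplit((method, "%s:%s" % (host, port), path, "", ""))

print(get_url("/api/version", "localhost", 8080))
# http://localhost:8080/api/version
```

Note that the `method` parameter is really the URL scheme ("http", "https"), not an HTTP verb.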
def tbframes(tb): 'unwind traceback tb_next structure to array' frames=[tb.tb_frame] while tb.tb_next: tb=tb.tb_next; frames.append(tb.tb_frame) return frames
unwind traceback tb_next structure to array
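The `tb_next` walk can be exercised on a real exception; the `inner` helper here is purely illustrative:

```python
import sys

def tbframes(tb):
    # Walk the tb_next chain, collecting each frame in call order
    # (outermost frame first, the frame that raised last).
    frames = [tb.tb_frame]
    while tb.tb_next:
        tb = tb.tb_next
        frames.append(tb.tb_frame)
    return frames

def inner():
    raise ValueError("boom")

try:
    inner()
except ValueError:
    tb = sys.exc_info()[2]
    frames = tbframes(tb)

print([f.f_code.co_name for f in frames])  # ends with 'inner'
```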
def get_disabled(): ret = set() for name in _iter_service_names(): if _service_is_upstart(name): if _upstart_is_disabled(name): ret.add(name) else: if _service_is_sysv(name): if _sysv_is_disabled(name): ret.add(name) ...
Return the disabled services CLI Example: .. code-block:: bash salt '*' service.get_disabled
def get_data(): y = 1.0 / (1.0 + 1j*(n_x.get_value()-0.002)*1000) + _n.random.rand()*0.1 _t.sleep(0.1) return abs(y), _n.angle(y, True)
Currently pretends to talk to an instrument and get back the magnitude and phase of the measurement.
def getComponentByPosition(self, idx, default=noValue, instantiate=True): try: componentValue = self._componentValues[idx] except IndexError: if not instantiate: return default self.setComponentByPosition(idx) componentValue = self._compone...
Return |ASN.1| type component value by position. Equivalent to Python sequence subscription operation (e.g. `[]`). Parameters ---------- idx : :class:`int` Component index (zero-based). Must either refer to an existing component or to N+1 component (if *componen...
def sort_by_size(self, group_limit=None, discard_others=False, others_label='others'): self.groups = OrderedDict(sorted(six.iteritems(self.groups), key=lambda x: len(x[1]), reverse=True)) if group_limi...
Sort the groups by the number of elements they contain, descending. Also has option to limit the number of groups. If this option is chosen, the remaining elements are placed into another group with the name specified with others_label. if discard_others is True, the others group is rem...
def reconnectRemote(self, remote): if not isinstance(remote, Remote): raise PlenumTypeError('remote', remote, Remote) logger.info('{} reconnecting to {}'.format(self, remote)) public, secret = self.selfEncKeys remote.disconnect() remote.connect(self.ctx, public, secre...
Disconnect remote and connect to it again. :param remote: instance of Remote from self.remotes
def get_validation_errors(self, xml_input): errors = [] try: parsed_xml = etree.parse(self._handle_xml(xml_input)) self.xmlschema.assertValid(parsed_xml) except (etree.DocumentInvalid, etree.XMLSyntaxError) as e: errors = self._handle_errors(e.error_log) ...
This method returns a list of validation errors. If there are no errors an empty list is returned
def get_applicable_content_pattern_names(self, path): encodings = set() applicable_content_pattern_names = set() for path_pattern_name, content_pattern_names in self._required_matches.items(): m = self._path_matchers[path_pattern_name] if m.matches(path): encodings.add(m.content_encoding...
Return the content patterns applicable to a given path. Returns a tuple (applicable_content_pattern_names, content_encoding). If path matches no path patterns, the returned content_encoding will be None (and applicable_content_pattern_names will be empty).
def convert_hardsigmoid(node, **kwargs): name, input_nodes, attrs = get_inputs(node, kwargs) alpha = float(attrs.get("alpha", 0.2)) beta = float(attrs.get("beta", 0.5)) node = onnx.helper.make_node( 'HardSigmoid', input_nodes, [name], alpha=alpha, beta=beta, ...
Map MXNet's hard_sigmoid operator attributes to onnx's HardSigmoid operator and return the created node.
def _ast_worker(tokens, tokens_len, index, term): statements = [] arguments = [] while index < tokens_len: if term: if term(index, tokens): break if tokens[index].type == TokenType.Word and \ index + 1 < tokens_len and \ tokens[index + 1].typ...
The main collector for all AST functions. This function is called recursively to find both variable use and function calls and returns a GenericBody with both those variables and function calls hanging off of it. The caller can figure out what to do with both of those
def debug(self_,msg,*args,**kw): self_.__db_print(DEBUG,msg,*args,**kw)
Print msg merged with args as a debugging statement. See Python's logging module for details of message formatting.
def _structure(msg, fp=None, level=0, include_default=False): if fp is None: fp = sys.stdout tab = ' ' * (level * 4) print(tab + msg.get_content_type(), end='', file=fp) if include_default: print(' [%s]' % msg.get_default_type(), file=fp) else: print(file=fp) if msg.is_mu...
A handy debugging aid
def getActionSetHandle(self, pchActionSetName): fn = self.function_table.getActionSetHandle pHandle = VRActionSetHandle_t() result = fn(pchActionSetName, byref(pHandle)) return result, pHandle
Returns a handle for an action set. This handle is used for all performance-sensitive calls.
def smooth(data, fw): if fw == 0: fdata = data else: fdata = lfilter(np.ones(fw)/fw, 1, data) return fdata
Smooth data with a moving average.
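The `lfilter(np.ones(fw)/fw, 1, data)` call above is a causal moving average: each output sample averages the current and previous `fw - 1` inputs, so the result lags the input by roughly `(fw - 1) / 2` samples. A self-contained sketch:

```python
import numpy as np
from scipy.signal import lfilter

def smooth(data, fw):
    # fw == 0 disables filtering; otherwise apply a length-fw moving average.
    if fw == 0:
        return data
    return lfilter(np.ones(fw) / fw, 1, data)

data = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
print(smooth(data, 2))  # [0.0, 0.5, 0.5, 0.5, 0.5, 0.5]
```

An alternating signal collapses to its mean once the filter window fills, which is the expected behavior of an FIR averaging filter.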
def get_all_dataset_names(configuration=None, **kwargs): dataset = Dataset(configuration=configuration) dataset['id'] = 'all dataset names' return dataset._write_to_hdx('list', kwargs, 'id')
Get all dataset names in HDX Args: configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. **kwargs: See below limit (int): Number of rows to return. Defaults to all dataset names. offset (int): Offset in the complete result ...
def run(self) -> None: fd = self._fd encoding = self._encoding line_terminators = self._line_terminators queue = self._queue buf = "" while True: try: c = fd.read(1).decode(encoding) except UnicodeDecodeError as e: l...
Read lines and put them on the queue.
def _netname(name: str) -> dict: try: long = net_query(name).name short = net_query(name).shortname except AttributeError: raise UnsupportedNetwork() return {'long': long, 'short': short}
Resolve the network name; required because some providers use short names and others use long names.
def update(self, environments): data = {'environments': environments} environments_ids = [str(env.get('id')) for env in environments] return super(ApiEnvironment, self).put('api/v3/environment/%s/' % ';'.join(environments_ids), data)
Method to update environments. :param environments: list containing the environments to be updated :return: None
def ReadSerializedDict(cls, json_dict): if json_dict: json_object = cls._ConvertDictToObject(json_dict) if not isinstance(json_object, containers_interface.AttributeContainer): raise TypeError('{0:s} is not an attribute container type.'.format( type(json_object))) return json_o...
Reads an attribute container from serialized dictionary form. Args: json_dict (dict[str, object]): JSON serialized objects. Returns: AttributeContainer: attribute container or None. Raises: TypeError: if the serialized dictionary does not contain an AttributeContainer.
def fetch_image(self,rtsp_server_uri = _source,timeout_secs = 15): self._check_ffmpeg() cmd = "ffmpeg -rtsp_transport tcp -i {} -loglevel quiet -frames 1 -f image2pipe -".format(rtsp_server_uri) with _sp.Popen(cmd, shell=True, stdout=_sp.PIPE) as process: try: stdout...
Fetch a single frame using FFMPEG. Convert to PIL Image. Slow.
def validate_data_files(problem, data_files, min_size): data_dir = os.path.split(data_files[0])[0] out_filepaths = problem.out_filepaths(data_dir) missing_filepaths = set(out_filepaths) - set(data_files) if missing_filepaths: tf.logging.error("Missing %d data files", len(missing_filepaths)) too_small = []...
Validate presence and minimum size of files.
def delete(self, domain, delete_subdomains=False): uri = "/%s/%s" % (self.uri_base, utils.get_id(domain)) if delete_subdomains: uri = "%s?deleteSubdomains=true" % uri resp, resp_body = self._async_call(uri, method="DELETE", error_class=exc.DomainDeletionFailed, has_re...
Deletes the specified domain and all of its resource records. If the domain has subdomains, each subdomain will now become a root domain. If you wish to also delete any subdomains, pass True to 'delete_subdomains'.
def deep_dependendants(self, target): direct_dependents = self._gettask(target).provides_for return (direct_dependents + reduce( lambda a, b: a + b, [self.deep_dependendants(x) for x in direct_dependents], []))
Recursively finds the dependents of a given build target. Assumes the dependency graph is acyclic.
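The recursion above can be sketched against a minimal stand-in task registry; the `_Task` and `Graph` classes here are hypothetical scaffolding, not part of the original code base (note the method name keeps the original's `dependendants` spelling):

```python
from functools import reduce

class _Task:
    def __init__(self, provides_for):
        self.provides_for = provides_for

class Graph:
    # Hypothetical stand-in for the build system's task registry.
    def __init__(self, tasks):
        self._tasks = tasks

    def _gettask(self, target):
        return self._tasks[target]

    def deep_dependendants(self, target):
        # Direct dependents, then flatten each dependent's own dependents.
        direct = self._gettask(target).provides_for
        return direct + reduce(
            lambda a, b: a + b,
            [self.deep_dependendants(x) for x in direct],
            [])

g = Graph({
    'lib':   _Task(['app', 'tests']),
    'app':   _Task(['dist']),
    'tests': _Task([]),
    'dist':  _Task([]),
})
print(g.deep_dependendants('lib'))  # ['app', 'tests', 'dist']
```

Because the graph is assumed acyclic, the recursion terminates; on a cyclic graph it would recurse forever, which is why the docstring calls out that assumption.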
async def create_new_pump_async(self, partition_id, lease): loop = asyncio.get_event_loop() partition_pump = EventHubPartitionPump(self.host, lease) loop.create_task(partition_pump.open_async()) self.partition_pumps[partition_id] = partition_pump _logger.info("Created new partiti...
Create a new pump thread with a given lease. :param partition_id: The partition ID. :type partition_id: str :param lease: The lease to be used. :type lease: ~azure.eventprocessorhost.lease.Lease
def remove_uid(self, uid): for sid in self.get('uid2sid', uid): self.remove('sid2uid', sid, uid) self.delete('uid2sid', uid)
Remove all references to a specific User ID :param uid: A User ID
def generate_apsara_log_config(json_value): input_detail = json_value['inputDetail'] output_detail = json_value['outputDetail'] config_name = json_value['configName'] logSample = json_value.get('logSample', '') logstore_name = output_detail['logstoreName'] endpoint = outp...
Generate apsara logtail config from loaded json value :param json_value: :return:
def put_file(self, filename, index, doc_type, id=None, name=None): if id is None: request_method = 'POST' else: request_method = 'PUT' path = make_path(index, doc_type, id) doc = file_to_attachment(filename) if name: doc["_name"] = name ...
Store a file in an index.
def copy(self): return Header([line.copy() for line in self.lines], self.samples.copy())
Return a copy of this header
def _ensure_config_file_exists(): config_file = Path(ELIBConfig.config_file_path).absolute() if not config_file.exists(): raise ConfigFileNotFoundError(ELIBConfig.config_file_path)
Makes sure the config file exists. :raises: :class:`epab.core.new_config.exc.ConfigFileNotFoundError`
def evalsha(self, sha, numkeys, *keys_and_args): return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
Use the ``sha`` to execute a Lua script already registered via EVAL or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. Th...
def create_archive_dir(self): archive_dir = os.path.join(self.tmp_dir, self.archive_name) os.makedirs(archive_dir, 0o700) return archive_dir
Create the archive dir
def _normalize(image): offset = tf.constant(MEAN_RGB, shape=[1, 1, 3]) image -= offset scale = tf.constant(STDDEV_RGB, shape=[1, 1, 3]) image /= scale return image
Normalize the image to zero mean and unit variance.
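The same per-channel normalization can be sketched in NumPy; the `MEAN_RGB`/`STDDEV_RGB` values below are the commonly used ImageNet statistics and are an assumption here — the original module may define different constants:

```python
import numpy as np

# Assumed ImageNet per-channel statistics (illustrative only).
MEAN_RGB = [0.485, 0.456, 0.406]
STDDEV_RGB = [0.229, 0.224, 0.225]

def normalize(image):
    # Reshape to (1, 1, 3) so the stats broadcast over an (H, W, 3) image.
    offset = np.array(MEAN_RGB).reshape(1, 1, 3)
    scale = np.array(STDDEV_RGB).reshape(1, 1, 3)
    return (image - offset) / scale

img = np.full((2, 2, 3), 0.5)
out = normalize(img)
```

Subtracting the mean and dividing by the standard deviation per channel is exactly what the `tf.constant(..., shape=[1, 1, 3])` broadcast achieves in the TensorFlow version.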
def series(self): if not self.pages: return [] useframes = self.pages.useframes keyframe = self.pages.keyframe.index series = [] for name in ('lsm', 'ome', 'imagej', 'shaped', 'fluoview', 'sis', 'uniform', 'mdgel'): if getattr(self, 'i...
Return related pages as TiffPageSeries. Side effect: after calling this function, TiffFile.pages might contain TiffPage and TiffFrame instances.