def WSGIHandler(self): sdm = werkzeug_wsgi.SharedDataMiddleware(self, { "/": config.CONFIG["AdminUI.document_root"], }) return werkzeug_wsgi.DispatcherMiddleware(self, { "/static": sdm, })
Returns GRR's WSGI handler.
def read(parser, stream): source = stream() if callable(stream) else stream try: text = source.read() stream_name = getattr(source, 'name', None) try: result = parser(text) except ECMASyntaxError as e: error_name = repr_compat(stream_name or source) ...
Return an AST from the input ES5 stream. Arguments parser A parser instance. stream Either a stream object or a callable that produces one. The stream object to read from; its 'read' method will be invoked. If a callable was provided, the 'close' method on its return ...
def is_link(path): if sys.getwindowsversion().major < 6: raise SaltInvocationError('Symlinks are only supported on Windows Vista or later.') try: return salt.utils.path.islink(path) except Exception as exc: raise CommandExecutionError(exc)
Check if the path is a symlink. This is only supported on Windows Vista or later. In line with Unix behavior, this function will raise an error if the path is not a symlink; however, the error raised will be a SaltInvocationError, not an OSError. Args: path (str): The path to a file or dire...
def get_list_filter(self, request): list_filter = super(VersionedAdmin, self).get_list_filter(request) return list(list_filter) + [('version_start_date', DateTimeFilter), IsCurrentFilter]
Adds version-aware custom filtering to the changelist.
def extract_features(self, phrase): words = nltk.word_tokenize(phrase) features = {} for word in words: features['contains(%s)' % word] = True return features
This function extracts features from the phrase being classified. Currently, the features we extract are unigrams of the text corpus.
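A dependency-free sketch of the unigram extractor above; it assumes plain whitespace tokenization via `str.split()` in place of `nltk.word_tokenize`, and the presence value is trivially True for every token of the phrase itself:

```python
def extract_unigram_features(phrase):
    # Whitespace tokenization stands in for nltk.word_tokenize here.
    words = phrase.split()
    features = {}
    for word in words:
        # Each token of the phrase is trivially present in it, so the
        # value is always True; classifiers typically only need the
        # feature keys to exist.
        features['contains(%s)' % word] = True
    return features
```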
async def get_identity_document(client: Client, current_block: dict, pubkey: str) -> Identity: lookup_data = await client(bma.wot.lookup, pubkey) uid = None timestamp = BlockUID.empty() signature = None for result in lookup_data['results']: if result["pubkey"] == pubkey: uids = r...
Get the identity document of the pubkey :param client: Client to connect to the api :param current_block: Current block data :param pubkey: UID/Public key :rtype: Identity
def load(source): parser = get_xml_parser() return etree.parse(source, parser=parser).getroot()
Load OpenCorpora corpus. The ``source`` can be any of the following: - a file name/path - a file object - a file-like object - a URL using the HTTP or FTP protocol
def defer(self, *args, **kwargs): LOG.debug( '%s on %s (awaitable %s async %s provider %s)', 'deferring', self._func, self._is_awaitable, self._is_asyncio_provider, self._concurrency_provider ) if self._blocked: ...
Call the function and immediately return an asynchronous object. The calling code will need to check for the result at a later time using: In Python 2/3 using ThreadPools - an AsyncResult (https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult) In ...
def _bigger(interval1, interval2): if interval2.cardinality > interval1.cardinality: return interval2.copy() return interval1.copy()
Return the interval with the bigger cardinality. Refer to Section 3.1 :param interval1: first interval :param interval2: second interval :return: interval1 or interval2, whichever has greater cardinality
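The comparison above can be sketched with a minimal stand-in Interval class; the real class in the source library presumably has more machinery, so `Interval` here is illustrative only:

```python
from dataclasses import dataclass

# Minimal stand-in for the library's Interval: only the attributes the
# comparison needs (cardinality and copy).
@dataclass
class Interval:
    cardinality: int

    def copy(self):
        return Interval(self.cardinality)

def bigger(interval1, interval2):
    # Return a copy of whichever interval has the greater cardinality;
    # ties go to interval1, and copying leaves the inputs unmodified.
    if interval2.cardinality > interval1.cardinality:
        return interval2.copy()
    return interval1.copy()
```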
def get_damage(self, amount: int, target) -> int: if target.immune: self.log("%r is immune to %s for %i damage", target, self, amount) return 0 return amount
Override to modify the damage dealt to a target from the given amount.
def any_hook(*hook_patterns): current_hook = hookenv.hook_name() i_pat = re.compile(r'{([^:}]+):([^}]+)}') hook_patterns = _expand_replacements(i_pat, hookenv.role_and_interface_to_relations, hook_patterns) c_pat = re.compile(r'{((?:[^:,}]+,?)+)}') hook_patterns = _expand_replacements(c_pat, lambda ...
Assert that the currently executing hook matches one of the given patterns. Each pattern will match one or more hooks, and can use the following special syntax: * ``db-relation-{joined,changed}`` can be used to match multiple hooks (in this case, ``db-relation-joined`` and ``db-relation-changed`...
def create_file_chooser_dialog(self, text, parent, name=Gtk.STOCK_OPEN): dialog = Gtk.FileChooserDialog( text, parent, Gtk.FileChooserAction.SELECT_FOLDER, (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, name, Gtk.ResponseType.OK) ) response = ...
Creates a file chooser dialog with the given title text.
def get_func_info(func): name = func.__name__ doc = func.__doc__ or "" try: nicename = func.description except AttributeError: if doc: nicename = doc.split('\n')[0] if len(nicename) > 80: nicename = name else: nicename = name ...
Retrieve a function's information.
def update_template(self, template_dict, original_template_path, built_artifacts): original_dir = os.path.dirname(original_template_path) for logical_id, resource in template_dict.get("Resources", {}).items(): if logical_id not in built_artifacts: continue artifac...
Given the path to built artifacts, update the template to point appropriate resource CodeUris to the artifacts folder Parameters ---------- template_dict original_template_path : str Path to the original template file built_artifacts : dict ...
def auto_forward(auto=True): global __auto_forward_state prev = __auto_forward_state __auto_forward_state = auto yield __auto_forward_state = prev
Context for dynamic graph execution mode. Args: auto (bool): Whether forward computation is executed during a computation graph construction. Returns: bool
def set_keepalive(sock, after_idle_sec=1, interval_sec=3, max_fails=5): if hasattr(socket, "SO_KEEPALIVE"): sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) if hasattr(socket, "TCP_KEEPIDLE"): sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle_sec) if hasattr(socke...
Set TCP keepalive on an open socket. It activates after 1 second (after_idle_sec) of idleness, then sends a keepalive ping once every 3 seconds (interval_sec), and closes the connection after 5 failed pings (max_fails), i.e. after roughly 15 seconds.
def setup_figure(): fig, axes = mplstereonet.subplots(ncols=2, figsize=(20,10)) for ax in axes: ax.grid(ls='-') ax.set_longitude_grid_ends(90) return fig, axes
Set up the figure and axes.
def create_action(self): actions = {} act = QAction('Load Montage...', self) act.triggered.connect(self.load_channels) act.setEnabled(False) actions['load_channels'] = act act = QAction('Save Montage...', self) act.triggered.connect(self.save_channels) act...
Create actions related to channel selection.
def show(self): from matplotlib import pyplot as plt if self.already_run: for ref in self.volts.keys(): plt.plot(self.t, self.volts[ref], label=ref) plt.title("Simulation voltage vs time") plt.legend() plt.xlabel("Time [ms]") ...
Plot the result of the simulation once it has been initialized.
def delete_secret_versions(self, path, versions, mount_point=DEFAULT_MOUNT_POINT): if not isinstance(versions, list) or len(versions) == 0: error_msg = 'argument to "versions" must be a list containing one or more integers, "{versions}" provided.'.format( versions=versions ...
Issue a soft delete of the specified versions of the secret. This marks the versions as deleted and will stop them from being returned from reads, but the underlying data will not be removed. A delete can be undone using the undelete path. Supported methods: POST: /{mount_p...
def prepare_and_execute(self, connection_id, statement_id, sql, max_rows_total=None, first_frame_max_size=None): request = requests_pb2.PrepareAndExecuteRequest() request.connection_id = connection_id request.statement_id = statement_id request.sql = sql if max_rows_total is not ...
Prepares and immediately executes a statement. :param connection_id: ID of the current connection. :param statement_id: ID of the statement to prepare. :param sql: SQL query. :param max_rows_total: The maximum number of rows that will b...
def get_ccle_mrna(gene_list, cell_lines): gene_list_str = ','.join(gene_list) data = {'cmd': 'getProfileData', 'case_set_id': ccle_study + '_mrna', 'genetic_profile_id': ccle_study + '_mrna', 'gene_list': gene_list_str, 'skiprows': -1} df = send_request(**data...
Return a dict of mRNA amounts in given genes and cell lines from CCLE. Parameters ---------- gene_list : list[str] A list of HGNC gene symbols to get mRNA amounts for. cell_lines : list[str] A list of CCLE cell line names to get mRNA amounts for. Returns ------- mrna_amount...
def euclidean_dist(point1, point2): (x1, y1) = point1 (x2, y2) = point2 return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
Compute the Euclidean distance between two points. Parameters ---------- point1, point2 : 2-tuples of float The input points. Returns ------- d : float The distance between the input points. Examples -------- >>> point1 = (1.0, 2.0) >>> point2 = (4.0, 6.0) # (...
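The distance computation above can be exercised directly; this variant uses `math.hypot`, which computes the same square root of summed squares:

```python
import math

def euclidean_dist(point1, point2):
    # math.hypot computes sqrt(dx**2 + dy**2) directly.
    (x1, y1) = point1
    (x2, y2) = point2
    return math.hypot(x1 - x2, y1 - y2)
```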
def stem(self): name = self.name i = name.rfind('.') if 0 < i < len(name) - 1: return name[:i] else: return name
The final path component, minus its last suffix.
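A standalone version of the `stem` property above, operating on a plain file-name string instead of a path object:

```python
def stem(name):
    # Final path component minus its last suffix, as a plain function.
    i = name.rfind('.')
    # The guard `0 < i < len(name) - 1` keeps dotfiles (".bashrc") and
    # names ending in '.' unchanged.
    if 0 < i < len(name) - 1:
        return name[:i]
    return name
```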
def create(cls, zone_id, record): cls.echo('Creating new zone version') new_version_id = Zone.new(zone_id) cls.echo('Updating zone version') cls.add(zone_id, new_version_id, record) cls.echo('Activation of new zone version') Zone.set(zone_id, new_version_id) retur...
Create a new zone version for record.
def deserialise(self, element_json: str) -> Element: return self.deserialise_dict(json.loads(element_json))
Deserialises the given JSON into an element. >>> json = '{"element": "string", "content": "Hello"}' >>> JSONDeserialiser().deserialise(json) String(content='Hello')
def add_data_item(self, data_item: DataItem) -> None: display_item = data_item._data_item.container.get_display_item_for_data_item(data_item._data_item) if data_item._data_item.container else None if display_item: self.__data_group.append_display_item(display_item)
Add a data item to the group. :param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add. .. versionadded:: 1.0 Scriptable: Yes
def paginate(self, page=1, perpage=10, category=None): q = db.select(self.table).fields('title', 'slug', 'description', 'html', 'css', 'js', 'category', 'status', 'comments', 'author', 'created', 'pid') if category: q.condition('category', category) ...
Paginate the posts
def __batch_update(self, train_events, test_events, n_epoch): for epoch in range(n_epoch): if n_epoch != 1: np.random.shuffle(train_events) for e in train_events: self.rec.update(e, batch_train=True) MPR = self.__batch_evaluate(test_events) ...
Batch update called by the fitting method. Args: train_events (list of Event): Positive training events. test_events (list of Event): Test events. n_epoch (int): Number of epochs for the batch training.
def __update_service_status(self, statuscode): if self.__service_status != statuscode: self.__service_status = statuscode self.__send_service_status_to_frontend()
Set the internal status of the service object, and notify frontend.
def kill_the_system(self, warning: str): log.critical('Kill reason: ' + warning) if self.DEBUG: return try: self.mail_this(warning) except socket.gaierror: current_time = time.localtime() formatted_time = time.strftime('%Y-%m-%d %I:%M:%S%p'...
Send an e-mail, and then shut the system down quickly.
def write(self, ontol, **args): s = self.render(ontol, **args) if self.outfile is None: print(s) else: with open(self.outfile, 'w') as f: f.write(s)
Write an `ontology` object
def _check_pos(self, level, *tokens): for record in self.records: if all(record.levelno == level and token in record.message for token in tokens): return level_name = logging.getLevelName(level) msgs = ["Tokens {} not found in {}, all was logged is...".format(tokens, ...
Check if the different tokens were logged in one record, assert by level.
def getFile(self, file_xml_uri): find = re.match(r'/fmi/xml/cnt/([\w\d.-]+)\.([\w]+)?-*', file_xml_uri) file_name = find.group(1) file_extension = find.group(2) file_binary = self._doRequest(is_file=True, file_xml_uri=file_xml_uri) return (file_name, file_extension, file_binary)
Execute a request to fetch file data from FMServer.
def put(self, job, result): "Perform a job by a member in the pool and return the result." self.job.put(job) r = result.get() return r
Perform a job by a member in the pool and return the result.
def get_auth(host, app_name, database_name): from .hooks import _get_auth_hook return _get_auth_hook(host, app_name, database_name)
Authentication hook to allow plugging in custom authentication credential providers
def get_algorithm(self, name): name = adapt_name_for_rest(name) url = '/mdb/{}/algorithms{}'.format(self._instance, name) response = self._client.get_proto(url) message = mdb_pb2.AlgorithmInfo() message.ParseFromString(response.content) return Algorithm(message)
Gets a single algorithm by its unique name. :param str name: Either a fully-qualified XTCE name or an alias in the format ``NAMESPACE/NAME``. :rtype: .Algorithm
def build(ctx, inputs, output, cs): click.echo('chemdataextractor.dict.build') dt = DictionaryTagger(lexicon=ChemLexicon(), case_sensitive=cs) names = [] for input in inputs: for line in input: tokens = line.split() names.append(tokens) dt.build(words=names) dt.sa...
Build chemical name dictionary.
def deleteuser(self, user_id): deleted = self.delete_user(user_id) return deleted is not False
Deletes a user. Available only for administrators. This is an idempotent function, calling this function for a non-existent user id still returns a status code 200 OK. The JSON response differs if the user was actually deleted or not. In the former the user is returned and in the latter ...
def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwds): if reduce_C_function is not None: kwds['reduce_C_function'] = reduce_C_function if gridsize is not None: kwds['gridsize'] = gridsize return self(kind='hexbin', x=x, y=y, C=C, **k...
Generate a hexagonal binning plot. Generate a hexagonal binning plot of `x` versus `y`. If `C` is `None` (the default), this is a histogram of the number of occurrences of the observations at ``(x[i], y[i])``. If `C` is specified, specifies values at given coordinates ``(x[i], ...
def commit_comment(self, comment_id): url = self._build_url('comments', str(comment_id), base_url=self._api) json = self._json(self._get(url), 200) return RepoComment(json, self) if json else None
Get a single commit comment. :param int comment_id: (required), id of the comment used by GitHub :returns: :class:`RepoComment <github3.repos.comment.RepoComment>` if successful, otherwise None
def join(self, delimiter=' ', overlap_threshold=0.1): sorted_by_start = sorted(self.labels) concat_values = [] last_label_end = None for label in sorted_by_start: if last_label_end is None or (last_label_end - label.start < overlap_threshold and last_label_end > 0): ...
Return a string with all labels concatenated together. The order of the labels is defined by the start of the label. If the overlapping between two labels is greater than ``overlap_threshold``, an Exception is thrown. Args: delimiter (str): A string to join two consecutive l...
def getPrintAddress(self): address_lines = [] addresses = [ self.getPostalAddress(), self.getPhysicalAddress(), self.getBillingAddress(), ] for address in addresses: city = address.get("city", "") zip = address.get("zip", "") ...
Get an address for printing
def reqHistogramData( self, contract: Contract, useRTH: bool, period: str) -> List[HistogramData]: return self._run( self.reqHistogramDataAsync(contract, useRTH, period))
Request histogram data. This method is blocking. https://interactivebrokers.github.io/tws-api/histograms.html Args: contract: Contract to query. useRTH: If True then only show data from within Regular Trading Hours, if False then show all data. ...
def push(collector, image, **kwargs): if not image.image_index: raise BadOption("The chosen image does not have a image_index configuration", wanted=image.name) tag = kwargs["artifact"] if tag is NotSpecified: tag = collector.configuration["harpoon"].tag if tag is not NotSpecified: ...
Push an image
def reroot(self, rppr=None, pretend=False): with scratch_file(prefix='tree', suffix='.tre') as name: subprocess.check_call([rppr or 'rppr', 'reroot', '-c', self.path, '-o', name]) if not pretend: self.update_file('tree', name) s...
Reroot the phylogenetic tree. This operation calls ``rppr reroot`` to generate the rerooted tree, so you must have ``pplacer`` and its auxiliary tools ``rppr`` and ``guppy`` installed for it to work. You can specify the path to ``rppr`` by giving it as the *rppr* argument. ...
def annual_volatility(returns, period=DAILY, alpha=2.0, annualization=None, out=None): allocated_output = out is None if allocated_output: out = np.empty(returns.shape[1:]) returns_1d = returns.ndim == 1 if l...
Determines the annual volatility of a strategy. Parameters ---------- returns : pd.Series or np.ndarray Periodic returns of the strategy, noncumulative. - See full explanation in :func:`~empyrical.stats.cum_returns`. period : str, optional Defines the periodicity of the 'returns...
def state_transition_run(self, event_to_wait_on): while event_to_wait_on.wait(): event_to_wait_on.clear() if self.state_transition_callback_kill_event.is_set(): return self.state_transition_func()
This is the thread that listens to an event from the timer process to execute the state_transition_func callback in the context of the main process.
def apply_transformation(self, structure): if self.species_map is None: match = StructureMatcher() s_map = match.get_best_electronegativity_anonymous_mapping(self.unrelaxed_structure, structure) ...
Returns a copy of structure with lattice parameters and sites scaled to the same degree as the relaxed_structure. Arg: structure (Structure): A structurally similar structure in regards to crystal and site positions.
def decode(pieces, sequence_length, model_file=None, model_proto=None, reverse=False, name=None): return _gen_sentencepiece_processor_op.sentencepiece_decode( pieces, sequence_length, model_file=model_file, model_proto=model_proto, reverse=reverse, name=name)
Decode pieces into postprocessed text. Args: pieces: A 2D int32 or string tensor [batch_size x max_length] of encoded sequences. sequence_length: A 1D int32 tensor [batch_size] representing the length of pieces. model_file: The sentencepiece model file path. model_proto...
def fnmatches(entry, *pattern_list): for pattern in pattern_list: if pattern and fnmatch(entry, pattern): return True return False
Return True if entry matches any of the glob patterns, False otherwise.
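The loop above collapses naturally to `any()`; a minimal equivalent sketch, where empty or None patterns are skipped rather than treated as match-everything:

```python
from fnmatch import fnmatch

def fnmatches(entry, *pattern_list):
    # `pattern and ...` short-circuits, so falsy patterns ('' or None)
    # never reach fnmatch and count as non-matches.
    return any(pattern and fnmatch(entry, pattern)
               for pattern in pattern_list)
```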
def construct_item_args(self, domain_event): sequence_id = domain_event.__dict__[self.sequence_id_attr_name] position = getattr(domain_event, self.position_attr_name, None) topic, state = self.get_item_topic_and_state( domain_event.__class__, domain_event.__dict__ ...
Constructs attributes of a sequenced item from the given domain event.
def autocommand(func): name = func.__name__ title, desc = command.parse_docstring(func) if not title: title = 'Auto command for: %s' % name if not desc: desc = ' ' return AutoCommand(title=title, desc=desc, name=name, func=func)
A simplified decorator for making a single function a Command instance. In the future this will leverage PEP 484 to do really smart function parsing and conversion to argparse actions.
def _init_journal(self, permissive=True): nowstamp = datetime.now().strftime("%d-%b-%Y %H:%M:%S.%f")[:-3] self._add_entry(templates.INIT .format(time_stamp=nowstamp)) if permissive: self._add_entry(templates.INIT_DEBUG)
Add the initialization lines to the journal. By default adds JrnObj variable and timestamp to the journal contents. Args: permissive (bool): if True most errors in journal will not cause Revit to stop journal execution. Some sti...
def get_squeezed_contents(contents): line_between_example_code = substitute.Substitution( '\n\n ', '\n ', True ) lines_between_examples = substitute.Substitution('\n\n\n', '\n\n', True) lines_between_sections = substitute.Substitution( '\n\n\n\n', '\n\n\n', True ...
Squeeze the contents by removing blank lines between definition and example and remove duplicate blank lines except between sections.
def reset(self): for stat in six.itervalues(self._op_stats): if stat._start_time is not None: return False self._op_stats = {} return True
Reset all statistics and clear any statistic names. All statistics must be inactive before a reset will execute. Returns: True if reset, False if not
def _validate_iterable(self, is_iterable, key, value): if is_iterable: try: iter(value) except TypeError: self._error(key, "Must be iterable (e.g. a list or array)")
Validate fields whose `iterable` key in the schema is set to True.
def combine_action_handlers(*handlers): for handler in handlers: if not (iscoroutinefunction(handler) or iscoroutine(handler)): raise ValueError("Provided handler is not a coroutine: %s" % handler) async def combined_handler(*args, **kwds): for handler in handlers: await ...
This function combines the given action handlers into a single function which will call all of them.
def offset(polygons, distance, join='miter', tolerance=2, precision=0.001, join_first=False, max_points=199, layer=0, datatype=0): poly = [] if isinstance(polygons, PolygonSet): poly.extend(polygons.polygons) eli...
Shrink or expand a polygon or polygon set. Parameters ---------- polygons : polygon or array-like Polygons to be offset. Must be a ``PolygonSet``, ``CellReference``, ``CellArray``, or an array. The array may contain any of the previous objects or an array-like[N][2] of ver...
def get_random_string(): hash_string = "%8x" % random.getrandbits(32) hash_string = hash_string.strip() while is_number(hash_string): hash_string = "%8x" % random.getrandbits(32) hash_string = hash_string.strip() return hash_string
Make a random string, which we can use for bsub job IDs, so that different jobs do not have the same job ID.
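A runnable sketch of the snippet above; `is_number` is not shown in the source, so the float-based helper here is an assumption about its behavior:

```python
import random

def is_number(s):
    # Assumed behavior of the is_number helper (not in the source):
    # anything float() can parse counts as a number.
    try:
        float(s)
        return True
    except ValueError:
        return False

def get_random_string():
    # Draw 32 random bits as (up to) 8 hex digits; "%8x" left-pads with
    # spaces, which strip() removes. Re-draw until the string cannot be
    # parsed as a number, so purely numeric job IDs are avoided.
    hash_string = "%8x" % random.getrandbits(32)
    hash_string = hash_string.strip()
    while is_number(hash_string):
        hash_string = "%8x" % random.getrandbits(32)
        hash_string = hash_string.strip()
    return hash_string
```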
def update_tab_for_course(self, tab_id, course_id, hidden=None, position=None): path = {} data = {} params = {} path["course_id"] = course_id path["tab_id"] = tab_id if position is not None: data["position"] = position if hidden is not None: ...
Update a tab for a course. Home and Settings tabs are not manageable, and can't be hidden or moved. Returns a tab object
def has_group_perms(self, perm, obj, approved): if not self.group: return False if self.use_smart_cache: content_type_pk = Permission.objects.get_content_type(obj).pk def _group_has_perms(cached_perms): return cached_perms.get(( obj...
Check if group has the permission for the given object
def do_GET(self): if self.path.lower().endswith("?wsdl"): service_path = self.path[:-5] service = self.server.getNode(service_path) if hasattr(service, "_wsdl"): wsdl = service._wsdl proto = 'http' if hasattr(self.server,'proto'...
The GET command.
def _get_pull_requests(self): for pull in self.repo.pull_requests( state="closed", base=self.github_info["master_branch"], direction="asc" ): if self._include_pull_request(pull): yield pull
Get all pull requests from the repo, since we can't do a search filtered by merge date.
def set_motor_force(self, motor_name, force): self.call_remote_api('simxSetJointForce', self.get_object_handle(motor_name), force, sending=True)
Sets the maximum force or torque that a joint can exert.
def create_ckan_ini(self): self.run_command( command='/scripts/run_as_user.sh /usr/lib/ckan/bin/paster make-config' ' ckan /project/development.ini', rw_project=True, ro={scripts.get_script_path('run_as_user.sh'): '/scripts/run_as_user.sh'}, )
Use make-config to generate an initial development.ini file
def make_object(cls, data): if issubclass(cls, Object): self = object.__new__(cls) self._data = data else: self = data return self
Creates an API object of class `cls`, setting its `_data` to data. Subclasses of `Object` are required to use this to build a new, empty instance without using their constructor.
def copy_from_model(cls, model_name, reference, **kwargs): if isinstance(reference, cls): settings = reference.__dict__.copy() settings.pop('model') else: settings = _get_model_info(reference) settings.pop('model_name') settings.update(kwargs) ...
Set-up a user-defined grid using specifications of a reference grid model. Parameters ---------- model_name : string name of the user-defined grid model. reference : string or :class:`CTMGrid` instance Name of the reference model (see :func:`get_supported...
def timedelta_isoformat(td: datetime.timedelta) -> str: minutes, seconds = divmod(td.seconds, 60) hours, minutes = divmod(minutes, 60) return f'P{td.days}DT{hours:d}H{minutes:d}M{seconds:d}.{td.microseconds:06d}S'
ISO 8601 encoding for timedeltas.
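The encoder above can be exercised directly; note that `timedelta` normalizes sign into `days`, so negative deltas produce forms like `P-1DT23H59M59.000000S`:

```python
import datetime

def timedelta_isoformat(td: datetime.timedelta) -> str:
    # timedelta stores only days/seconds/microseconds; hours and
    # minutes are recovered with divmod.
    minutes, seconds = divmod(td.seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f'P{td.days}DT{hours:d}H{minutes:d}M{seconds:d}.{td.microseconds:06d}S'
```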
def _toolkit_serialize_summary_struct(model, sections, section_titles): output_dict = dict() output_dict['sections'] = [ [ ( field[0], __extract_model_summary_value(model, field[1]) ) \ for field in section ] ...
Serialize model summary into a dict with ordered lists of sections and section titles Parameters ---------- model : Model object sections : Ordered list of lists (sections) of tuples (field,value) [ [(field1, value1), (field2, value2)], [(field3, value3), (field4, value4)], ...
def arr_normalize(arr, *args, **kwargs): f_max = arr.max() f_min = arr.min() f_range = f_max - f_min arr_shifted = arr - f_min arr_norm = arr_shifted / f_range for key, value in kwargs.items(): if key == 'scale': arr_norm *= value return arr_norm
ARGS arr array to normalize **kwargs scale = <f_scale> scale the normalized output by <f_scale> DESC Given an input array, <arr>, normalize all values to range between 0 and 1. If specified in the **kwargs, optionally set the scale with <f_s...
def affine_map(points1, points2): A = np.ones((4, 4)) A[:, :3] = points1 B = np.ones((4, 4)) B[:, :3] = points2 matrix = np.eye(4) for i in range(3): matrix[i] = np.linalg.solve(A, B[:, i]) return matrix
Find a 3D transformation matrix that maps points1 onto points2. Arguments are specified as arrays of four 3D coordinates, shape (4, 3).
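The solver above can be checked with a pure translation; this usage sketch assumes NumPy and that the four source points are not coplanar (otherwise `A` is singular):

```python
import numpy as np

def affine_map(points1, points2):
    # Build homogeneous coordinate matrices: one row per point, [x, y, z, 1].
    A = np.ones((4, 4))
    A[:, :3] = points1
    B = np.ones((4, 4))
    B[:, :3] = points2
    # Solve the first three rows of the transformation; the last row
    # stays [0, 0, 0, 1] from np.eye.
    matrix = np.eye(4)
    for i in range(3):
        matrix[i] = np.linalg.solve(A, B[:, i])
    return matrix

# Four non-coplanar points, translated by (1, 2, 3).
points1 = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
points2 = [[1, 2, 3], [2, 2, 3], [1, 3, 3], [1, 2, 4]]
matrix = affine_map(points1, points2)
```

The returned matrix acts on homogeneous column vectors: `matrix @ [x, y, z, 1]` gives the mapped point.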
async def get_scm_level(context, project): await context.populate_projects() level = context.projects[project]['access'].replace("scm_level_", "") return level
Get the scm level for a project from ``projects.yml``. We define all known projects in ``projects.yml``. Let's make sure we have it populated in ``context``, then return the scm level of ``project``. SCM levels are an integer, 1-3, matching Mozilla commit levels. https://www.mozilla.org/en-US/about/go...
def update_agent_db_refs(self, agent, agent_text, do_rename=True): map_db_refs = deepcopy(self.gm.get(agent_text)) self.standardize_agent_db_refs(agent, map_db_refs, do_rename)
Update db_refs of agent using the grounding map If the grounding map is missing one of the HGNC symbol or Uniprot ID, attempts to reconstruct one from the other. Parameters ---------- agent : :py:class:`indra.statements.Agent` The agent whose db_refs will be updated...
def ask_dir(self): args['directory'] = askdirectory(**self.dir_opt) self.dir_text.set(args['directory'])
Dialog box for choosing a directory.
def _ufunc_helper(lhs, rhs, fn_array, fn_scalar, lfn_scalar, rfn_scalar=None): if isinstance(lhs, numeric_types): if isinstance(rhs, numeric_types): return fn_scalar(lhs, rhs) else: if rfn_scalar is None: return lfn_scalar(rhs, float(lhs)) else: ...
Helper function for element-wise operation. The function will perform numpy-like broadcasting if needed and call different functions. Parameters -------- lhs : NDArray or numeric value Left-hand side operand. rhs : NDArray or numeric value Right-hand operand, fn_array : functi...
def to_array(self): array = super(Document, self).to_array() array['file_id'] = u(self.file_id) if self.thumb is not None: array['thumb'] = self.thumb.to_array() if self.file_name is not None: array['file_name'] = u(self.file_name) if self.mime_type is not...
Serializes this Document to a dictionary. :return: dictionary representation of this object. :rtype: dict
def text_wrap(text, length=None, indent='', firstline_indent=None): if length is None: length = get_help_width() if indent is None: indent = '' if firstline_indent is None: firstline_indent = indent if len(indent) >= length: raise ValueError('Length of indent exceeds length') if len(firstline_...
Wraps a given text to a maximum line length and returns it. It turns lines that only contain whitespace into empty lines, keeps new lines, and expands tabs using 4 spaces. Args: text: str, text to wrap. length: int, maximum length of a line, includes indentation. If this is None then use get_hel...
def updated(self, user): for who, what, old, new in self.history(user): if (what == "comment" or what == "description") and new != "": return True return False
True if the user commented on the ticket in the given time frame.
def create_log(self): return EventLog( self.networkapi_url, self.user, self.password, self.user_ldap)
Get an instance of log services facade.
def python_path(self, script): if not script: try: import __main__ script = getfile(__main__) except Exception: return script = os.path.realpath(script) if self.cfg.get('python_path', True): path = os.path.dirnam...
Called during initialisation to obtain the ``script`` name. If ``script`` does not evaluate to ``True`` it is evaluated from the ``__main__`` import. Returns the real path of the python script which runs the application.
def precompute_sharp_round(nxk, nyk, xc, yc): s4m = np.ones((nyk, nxk), dtype=np.int16) s4m[yc, xc] = 0 s2m = np.ones((nyk, nxk), dtype=np.int16) s2m[yc, xc] = 0 s2m[yc:nyk, 0:xc] = -1 s2m[0:yc+1, xc+1:nxk] = -1 return s2m, s4m
Pre-computes mask arrays to be used by the 'sharp_round' function for roundness computations based on two- and four-fold symmetries.
def flatten( iterables ): for it in iterables: if isinstance(it, str): yield it else: for element in it: yield element
Flatten an iterable, except for string elements.
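The inner loop above is exactly what `yield from` expresses; a minimal equivalent:

```python
def flatten(iterables):
    # Strings pass through whole; every other element is expanded one
    # level.
    for it in iterables:
        if isinstance(it, str):
            yield it
        else:
            yield from it
```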
def verify(password_hash, password): if password_hash.startswith(argon2id.STRPREFIX): return argon2id.verify(password_hash, password) elif password_hash.startswith(argon2i.STRPREFIX): return argon2id.verify(password_hash, password) elif password_hash.startswith(scrypt.STRPREFIX): ret...
Takes a modular-crypt-encoded stored password hash derived using one of the algorithms supported by `libsodium`, and checks whether the user-provided password hashes to the same string when using the parameters saved in the stored hash.
def crc32File(filename, skip=0): with open(filename, 'rb') as stream: discard = stream.read(skip) return zlib.crc32(stream.read()) & 0xffffffff
Computes the CRC-32 of the contents of filename, optionally skipping a certain number of bytes at the beginning of the file.
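A runnable sketch of `crc32File` (renamed `crc32_file` here); the skipped bytes are read and discarded rather than seeked past, so it also works on non-seekable streams:

```python
import os
import tempfile
import zlib

def crc32_file(filename, skip=0):
    # Open in binary mode, discard the first `skip` bytes, then CRC-32
    # the remainder, masked to an unsigned 32-bit value.
    with open(filename, 'rb') as stream:
        stream.read(skip)
        return zlib.crc32(stream.read()) & 0xffffffff
```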
def _filter_names(names): names = [n for n in names if n not in EXCLUDE_NAMES] for pattern in EXCLUDE_PATTERNS: names = [n for n in names if (not fnmatch.fnmatch(n, pattern)) and (not n.endswith('.py'))] return names
Given a list of file names, return those names that should be copied.
def build_vcf_parts(feature, genome_2bit, info=None): base1 = genome_2bit[feature.chrom1].get( feature.start1, feature.start1 + 1).upper() id1 = "hydra{0}a".format(feature.name) base2 = genome_2bit[feature.chrom2].get( feature.start2, feature.start2 + 1).upper() id2 = "hydra{0}b".format(...
Convert BedPe feature information into VCF part representation. Each feature will have two VCF lines for each side of the breakpoint.
def resource_set_create(self, token, name, **kwargs): return self._realm.client.post( self.well_known['resource_registration_endpoint'], data=self._get_data(name=name, **kwargs), headers=self.get_headers(token) )
Create a resource set. https://docs.kantarainitiative.org/uma/rec-oauth-resource-reg-v1_0_1.html#rfc.section.2.2.1 :param str token: client access token :param str id: Identifier of the resource set :param str name: :param str uri: (optional) :param str type: (optional)...
def athalianatruth(args): p = OptionParser(athalianatruth.__doc__) opts, args = p.parse_args(args) if len(args) != 2: sys.exit(not p.print_help()) atxt, bctxt = args g = Grouper() pairs = set() for txt in (atxt, bctxt): extract_groups(g, pairs, txt) fw = open("pairs", "w"...
%prog athalianatruth J_a.txt J_bc.txt Prepare pairs data for At alpha/beta/gamma.
def unique(new_cmp_dict, old_cmp_dict): newkeys = set(new_cmp_dict) oldkeys = set(old_cmp_dict) unique = newkeys - oldkeys unique_ldict = [] for key in unique: unique_ldict.append(new_cmp_dict[key]) return unique_ldict
Return a list of the dict values in new_cmp_dict whose keys do not appear in old_cmp_dict.
def over(expr, window): prior_op = expr.op() if isinstance(prior_op, ops.WindowOp): op = prior_op.over(window) else: op = ops.WindowOp(expr, window) result = op.to_expr() try: name = expr.get_name() except com.ExpressionError: pass else: result = resul...
Turn an aggregation or full-sample analytic operation into a windowed operation. See ibis.window for more details on window configuration Parameters ---------- expr : value expression window : ibis.Window Returns ------- expr : type of input
def getFlaskResponse(responseString, httpStatus=200): return flask.Response(responseString, status=httpStatus, mimetype=MIMETYPE)
Returns a Flask response object for the specified data and HTTP status.
def set_ntp_servers(primary_server=None, secondary_server=None, deploy=False): ret = {} if primary_server: query = {'type': 'config', 'action': 'set', 'xpath': '/config/devices/entry[@name=\'localhost.localdomain\']/deviceconfig/system/ntp-servers/' ...
Set the NTP servers of the Palo Alto proxy minion. A commit will be required before this is processed. CLI Example: Args: primary_server(str): The primary NTP server IP address or FQDN. secondary_server(str): The secondary NTP server IP address or FQDN. deploy (bool): If true then co...
def next(self): try: return six.next(self._wrapped) except grpc.RpcError as exc: six.raise_from(exceptions.from_grpc_error(exc), exc)
Get the next response from the stream. Returns: protobuf.Message: A single response from the stream.
def service_url_parse(url): endpoint = get_sanitized_endpoint(url) url_split_list = url.split(endpoint + '/') if len(url_split_list) > 1: url_split_list = url_split_list[1].split('/') else: raise Exception('URL does not contain the expected endpoint') parsed_url = [s for s in url_split_list if '?' not in s i...
Function that parses from url the service and folder of services.
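A self-contained sketch of the parse. `get_sanitized_endpoint` is stubbed with a hypothetical ArcGIS-style endpoint, and the guard checks that the endpoint was actually found (`str.split` always returns at least one element, so a non-empty-list check alone would never fail).

```python
def get_sanitized_endpoint(url):
    # Stub; the real function derives this from the URL.
    return "https://example.com/arcgis/rest/services"

def service_url_parse(url):
    endpoint = get_sanitized_endpoint(url)
    parts = url.split(endpoint + '/')
    if len(parts) > 1:
        parts = parts[1].split('/')
    else:
        raise ValueError("URL does not contain the service endpoint")
    # Drop query strings and empty segments.
    return [s for s in parts if s and '?' not in s]

url = "https://example.com/arcgis/rest/services/Folder/MyService/MapServer?f=json"
print(service_url_parse(url))  # → ['Folder', 'MyService']
```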
def cancel_inquiry (self): self.names_to_find = {} if self.is_inquiring: try: _bt.hci_send_cmd (self.sock, _bt.OGF_LINK_CTL, _bt.OCF_INQUIRY_CANCEL) except _bt.error as e: self.sock.close () self.sock = Non...
Call this method to cancel an inquiry in process. inquiry_complete will still be called.
def free (self): result = [p for p in self.lazy_properties if not p.feature.incidental and p.feature.free] result.extend(self.free_) return result
Returns free properties which are not dependency properties.
def transform_properties(properties, schema): new_properties = properties.copy() for prop_name, prop_type in schema["properties"].items(): prop_value = new_properties.get(prop_name) if prop_value is None: continue elif prop_type == "time": new_properties[prop_name] = par...
Transform properties types according to a schema. Parameters ---------- properties : dict Properties to transform. schema : dict Fiona schema containing the types.
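A runnable sketch of the schema-driven conversion. It keys the lookup by schema property name rather than zipping dict values with schema items positionally, which is fragile if the orderings diverge. `datetime.fromisoformat` stands in for the dateutil-style `parse()` the real code presumably uses; the schema and properties are invented.

```python
from datetime import datetime

def transform_properties(properties, schema):
    new_properties = properties.copy()
    for prop_name, prop_type in schema["properties"].items():
        prop_value = new_properties.get(prop_name)
        if prop_value is None:
            continue
        if prop_type == "time":
            # Stand-in parser; the real code likely uses dateutil.parser.parse.
            new_properties[prop_name] = datetime.fromisoformat(prop_value)
        elif prop_type == "int":
            new_properties[prop_name] = int(prop_value)
    return new_properties

schema = {"properties": {"observed": "time", "count": "int", "name": "str"}}
props = {"observed": "2021-06-01T12:00:00", "count": "3", "name": "site-a"}
print(transform_properties(props, schema))
```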
def get_letters_per_page(self, per_page=1000, page=1, params=None): return self._get_resource_per_page(resource=LETTERS, per_page=per_page, page=page, params=params)
Get letters per page :param per_page: How many objects per page. Default: 1000 :param page: Which page. Default: 1 :param params: Search parameters. Default: {} :return: list
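A generic sketch of the per-page delegation this method performs. `_get_resource_per_page`, the `LETTERS` resource name, and the in-memory fake API are all assumptions modeled on the method above, not the real client internals.

```python
LETTERS = "letters"

# Fake backing store so the paging logic can run standalone.
_FAKE_API = {"letters": [{"id": i} for i in range(7)]}

def _get_resource_per_page(resource, per_page=1000, page=1, params=None):
    items = _FAKE_API[resource]
    start = (page - 1) * per_page  # pages are 1-indexed
    return items[start:start + per_page]

def get_letters_per_page(per_page=1000, page=1, params=None):
    return _get_resource_per_page(resource=LETTERS, per_page=per_page,
                                  page=page, params=params)

print(get_letters_per_page(per_page=3, page=2))
# → [{'id': 3}, {'id': 4}, {'id': 5}]
```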
def search(api_key, query, offset=0, type='personal'): if not isinstance(api_key, str): raise InvalidAPIKeyException('API key must be a string') if not api_key or len(api_key) < 40: raise InvalidAPIKeyException('Invalid API key.') url = get_endpoint(api_key, query, offset, type) try: ...
Get a list of email addresses for the provided domain. The type of search executed will vary depending on the query provided. Currently this query is restricted to either domain searches, in which the email addresses (and other bits) for the domain are returned, or searches for an email address. The la...