def WSGIHandler(self):
    sdm = werkzeug_wsgi.SharedDataMiddleware(self, {
        "/": config.CONFIG["AdminUI.document_root"],
    })
    return werkzeug_wsgi.DispatcherMiddleware(self, {
        "/static": sdm,
    })
Returns GRR's WSGI handler.
def read(parser, stream):
    source = stream() if callable(stream) else stream
    try:
        text = source.read()
        stream_name = getattr(source, 'name', None)
        try:
            result = parser(text)
        except ECMASyntaxError as e:
            error_name = repr_compat(stream_name or source)
            raise type(e)('%s in %s' % (str(e), error_name))
    finally:
        if callable(stream):
            source.close()
    result.sourcepath = stream_name
    return result
Return an AST from the input ES5 stream.

Arguments

parser
    A parser instance.
stream
    Either a stream object or a callable that produces one. The stream object to read from; its 'read' method will be invoked. If a callable was provided, the 'close' method on its return value will be called to close the stream.
def is_link(path):
    if sys.getwindowsversion().major < 6:
        raise SaltInvocationError('Symlinks are only supported on Windows Vista or later.')
    try:
        return salt.utils.path.islink(path)
    except Exception as exc:
        raise CommandExecutionError(exc)
Check if the path is a symlink.

This is only supported on Windows Vista or later.

In line with Unix behavior, this function will raise an error if the path is not a symlink; however, the error raised will be a SaltInvocationError, not an OSError.

Args:
    path (str): The path to a file or directory

Returns:
    bool: True if path is a symlink, otherwise False

CLI Example:

.. code-block:: bash

    salt '*' file.is_link /path/to/link
def get_list_filter(self, request):
    list_filter = super(VersionedAdmin, self).get_list_filter(request)
    return list(list_filter) + [('version_start_date', DateTimeFilter), IsCurrentFilter]
Adds version-aware custom filters to the changelist.
def extract_features(self, phrase):
    words = nltk.word_tokenize(phrase)
    features = {}
    for word in words:
        # (word in words) is trivially True for every token in the phrase,
        # so each unigram feature is always set to True.
        features['contains(%s)' % word] = (word in words)
    return features
This function will extract features from the phrase being used. Currently, the features we extract are unigrams of the text corpus.
async def get_identity_document(client: Client, current_block: dict, pubkey: str) -> Identity:
    lookup_data = await client(bma.wot.lookup, pubkey)
    uid = None
    timestamp = BlockUID.empty()
    signature = None
    for result in lookup_data['results']:
        if result["pubkey"] == pubkey:
            uids = result['uids']
            for uid_data in uids:
                timestamp = BlockUID.from_str(uid_data["meta"]["timestamp"])
                uid = uid_data["uid"]
                signature = uid_data["self"]
    return Identity(
        version=10,
        currency=current_block['currency'],
        pubkey=pubkey,
        uid=uid,
        ts=timestamp,
        signature=signature,
    )
Get the identity document of the pubkey

:param client: Client to connect to the api
:param current_block: Current block data
:param pubkey: UID/Public key

:rtype: Identity
def load(source):
    parser = get_xml_parser()
    return etree.parse(source, parser=parser).getroot()
Load OpenCorpora corpus.

The ``source`` can be any of the following:

- a file name/path
- a file object
- a file-like object
- a URL using the HTTP or FTP protocol
def defer(self, *args, **kwargs):
    LOG.debug(
        '%s on %s (awaitable %s async %s provider %s)',
        'deferring', self._func, self._is_awaitable,
        self._is_asyncio_provider, self._concurrency_provider
    )
    if self._blocked:
        raise RuntimeError('Already activated this deferred call by blocking on it')
    with self._lock:
        if not self._deferable:
            func_partial = functools.partial(self._func, *args, **kwargs)
            if self._is_awaitable:
                self._deferable = asyncio.ensure_future(
                    func_partial(), loop=self._concurrency_provider)
            elif self._is_asyncio_provider:
                self._deferable = self._concurrency_provider.run_in_executor(
                    func=func_partial, executor=None)
            else:
                self._deferable = self._concurrency_provider.apply_async(func_partial)
    return self._deferable
Call the function and immediately return an asynchronous object. The calling code will need to check for the result at a later time using:

- In Python 2/3 using ThreadPools - an AsyncResult (https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult)
- In Python 3 using Asyncio - a Future (https://docs.python.org/3/library/asyncio-task.html#future)

:param args:
:param kwargs:
:return:
def _bigger(interval1, interval2):
    if interval2.cardinality > interval1.cardinality:
        return interval2.copy()
    return interval1.copy()
Return the interval with the bigger cardinality. Refer to Section 3.1.

:param interval1: first interval
:param interval2: second interval
:return: interval1 or interval2, whichever has the greater cardinality
def get_damage(self, amount: int, target) -> int:
    if target.immune:
        self.log("%r is immune to %s for %i damage", target, self, amount)
        return 0
    return amount
Override to modify the damage dealt to a target from the given amount.
def any_hook(*hook_patterns):
    current_hook = hookenv.hook_name()
    # expand {role:interface} patterns
    i_pat = re.compile(r'{([^:}]+):([^}]+)}')
    hook_patterns = _expand_replacements(i_pat, hookenv.role_and_interface_to_relations, hook_patterns)
    # expand {A,B,...} patterns
    c_pat = re.compile(r'{((?:[^:,}]+,?)+)}')
    hook_patterns = _expand_replacements(c_pat, lambda v: v.split(','), hook_patterns)
    return current_hook in hook_patterns
Assert that the currently executing hook matches one of the given patterns.

Each pattern will match one or more hooks, and can use the following special syntax:

* ``db-relation-{joined,changed}`` can be used to match multiple hooks (in this case, ``db-relation-joined`` and ``db-relation-changed``).
* ``{provides:mysql}-relation-joined`` can be used to match a relation hook by the role and interface instead of the relation name. The role must be one of ``provides``, ``requires``, or ``peer``.
* The previous two can be combined, of course: ``{provides:mysql}-relation-{joined,changed}``
def create_file_chooser_dialog(self, text, parent, name=Gtk.STOCK_OPEN):
    dialog = Gtk.FileChooserDialog(
        text, parent, Gtk.FileChooserAction.SELECT_FOLDER,
        (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, name, Gtk.ResponseType.OK)
    )
    # reuse the variable: it now holds the selected path, or None if cancelled
    text = None
    response = dialog.run()
    if response == Gtk.ResponseType.OK:
        text = dialog.get_filename()
    dialog.destroy()
    return text
Creates a file chooser dialog with title ``text``; returns the selected path, or None if cancelled.
def get_func_info(func):
    name = func.__name__
    doc = func.__doc__ or ""
    try:
        nicename = func.description
    except AttributeError:
        if doc:
            nicename = doc.split('\n')[0]
            if len(nicename) > 80:
                nicename = name
        else:
            nicename = name
    parameters = []
    try:
        closure = func.func_closure
    except AttributeError:
        closure = func.__closure__
    try:
        varnames = func.func_code.co_freevars
    except AttributeError:
        varnames = func.__code__.co_freevars
    if closure:
        for index, arg in enumerate(closure):
            if not callable(arg.cell_contents):
                parameters.append((varnames[index], text_type(arg.cell_contents)))
    return {
        "nicename": nicename,
        "doc": doc,
        "parameters": parameters,
        "name": name,
        "time": str(datetime.datetime.now()),
        "hostname": socket.gethostname(),
    }
Retrieve a function's information.
def update_template(self, template_dict, original_template_path, built_artifacts):
    original_dir = os.path.dirname(original_template_path)
    for logical_id, resource in template_dict.get("Resources", {}).items():
        if logical_id not in built_artifacts:
            continue
        artifact_relative_path = os.path.relpath(built_artifacts[logical_id], original_dir)
        resource_type = resource.get("Type")
        properties = resource.setdefault("Properties", {})
        if resource_type == "AWS::Serverless::Function":
            properties["CodeUri"] = artifact_relative_path
        if resource_type == "AWS::Lambda::Function":
            properties["Code"] = artifact_relative_path
    return template_dict
Given the path to built artifacts, update the template to point the appropriate resource CodeUris to the artifacts folder.

Parameters
----------
template_dict
original_template_path : str
    Path where the template file will be written to
built_artifacts : dict
    Map of LogicalId of a resource to the path where the built artifacts for this resource live

Returns
-------
dict
    Updated template
from contextlib import contextmanager

@contextmanager  # generator used as a context manager
def auto_forward(auto=True):
    global __auto_forward_state
    prev = __auto_forward_state
    __auto_forward_state = auto
    yield
    __auto_forward_state = prev
Context for dynamic graph execution mode.

Args:
    auto (bool): Whether forward computation is executed during a computation graph construction.

Returns: bool
def set_keepalive(sock, after_idle_sec=1, interval_sec=3, max_fails=5):
    if hasattr(socket, "SO_KEEPALIVE"):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle_sec)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_sec)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)
Set TCP keepalive on an open socket.

With the defaults, keepalive activates after 1 second (after_idle_sec) of idleness, then sends a keepalive ping once every 3 seconds (interval_sec), and closes the connection after 5 failed pings (max_fails), i.e. about 15 seconds later.
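A minimal usage sketch of the helper above (the function body is reproduced so the snippet is self-contained; the ``TCP_KEEP*`` constants are platform-dependent, which is why every option is guarded with ``hasattr``):

```python
import socket

def set_keepalive(sock, after_idle_sec=1, interval_sec=3, max_fails=5):
    # Enable keepalive; the TCP_KEEP* knobs only exist on some platforms.
    if hasattr(socket, "SO_KEEPALIVE"):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle_sec)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_sec)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
set_keepalive(sock, after_idle_sec=60, interval_sec=10, max_fails=6)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero once enabled
sock.close()
```

The numbers here (60/10/6) are illustrative; tune them to how quickly dead peers should be detected on your network.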
def setup_figure():
    fig, axes = mplstereonet.subplots(ncols=2, figsize=(20, 10))
    for ax in axes:
        ax.grid(ls='-')
        ax.set_longitude_grid_ends(90)
    return fig, axes
Set up the figure and axes.
def create_action(self):
    actions = {}

    act = QAction('Load Montage...', self)
    act.triggered.connect(self.load_channels)
    act.setEnabled(False)
    actions['load_channels'] = act

    act = QAction('Save Montage...', self)
    act.triggered.connect(self.save_channels)
    act.setEnabled(False)
    actions['save_channels'] = act

    self.action = actions
Create actions related to channel selection.
def show(self):
    from matplotlib import pyplot as plt
    if self.already_run:
        for ref in self.volts.keys():
            plt.plot(self.t, self.volts[ref], label=ref)
        plt.title("Simulation voltage vs time")
        plt.legend()
        plt.xlabel("Time [ms]")
        plt.ylabel("Voltage [mV]")
    else:
        pynml.print_comment("First you have to 'go()' the simulation.", True)
    plt.show()
Plot the result of the simulation once it's been initialized.
def delete_secret_versions(self, path, versions, mount_point=DEFAULT_MOUNT_POINT):
    if not isinstance(versions, list) or len(versions) == 0:
        error_msg = (
            'argument to "versions" must be a list containing one or more '
            'integers, "{versions}" provided.'.format(versions=versions)
        )
        raise exceptions.ParamValidationError(error_msg)
    params = {
        'versions': versions,
    }
    api_path = '/v1/{mount_point}/delete/{path}'.format(mount_point=mount_point, path=path)
    return self._adapter.post(
        url=api_path,
        json=params,
    )
Issue a soft delete of the specified versions of the secret.

This marks the versions as deleted and will stop them from being returned from reads, but the underlying data will not be removed. A delete can be undone using the undelete path.

Supported methods:
    POST: /{mount_point}/delete/{path}. Produces: 204 (empty body)

:param path: Specifies the path of the secret to delete. This is specified as part of the URL.
:type path: str | unicode
:param versions: The versions to be deleted. The versioned data will not be deleted, but it will no longer be returned in normal get requests.
:type versions: list of int
:param mount_point: The "path" the secret engine was mounted on.
:type mount_point: str | unicode
:return: The response of the request.
:rtype: requests.Response
def prepare_and_execute(self, connection_id, statement_id, sql,
                        max_rows_total=None, first_frame_max_size=None):
    request = requests_pb2.PrepareAndExecuteRequest()
    request.connection_id = connection_id
    request.statement_id = statement_id
    request.sql = sql
    if max_rows_total is not None:
        request.max_rows_total = max_rows_total
    if first_frame_max_size is not None:
        request.first_frame_max_size = first_frame_max_size
    response_data = self._apply(request, 'ExecuteResponse')
    response = responses_pb2.ExecuteResponse()
    response.ParseFromString(response_data)
    return response.results
Prepares and immediately executes a statement.

:param connection_id: ID of the current connection.
:param statement_id: ID of the statement to prepare.
:param sql: SQL query.
:param max_rows_total: The maximum number of rows that will be allowed for this query.
:param first_frame_max_size: The maximum number of rows that will be returned in the first Frame returned for this query.
:returns: Result set with the signature of the prepared statement and the first frame data.
def get_ccle_mrna(gene_list, cell_lines):
    gene_list_str = ','.join(gene_list)
    data = {'cmd': 'getProfileData',
            'case_set_id': ccle_study + '_mrna',
            'genetic_profile_id': ccle_study + '_mrna',
            'gene_list': gene_list_str,
            'skiprows': -1}
    df = send_request(**data)
    mrna_amounts = {cl: {g: [] for g in gene_list} for cl in cell_lines}
    for cell_line in cell_lines:
        if cell_line in df.columns:
            for gene in gene_list:
                value_cell = df[cell_line][df['COMMON'] == gene]
                if value_cell.empty:
                    mrna_amounts[cell_line][gene] = None
                elif pandas.isnull(value_cell.values[0]):
                    mrna_amounts[cell_line][gene] = None
                else:
                    mrna_amounts[cell_line][gene] = value_cell.values[0]
        else:
            mrna_amounts[cell_line] = None
    return mrna_amounts
Return a dict of mRNA amounts in given genes and cell lines from CCLE.

Parameters
----------
gene_list : list[str]
    A list of HGNC gene symbols to get mRNA amounts for.
cell_lines : list[str]
    A list of CCLE cell line names to get mRNA amounts for.

Returns
-------
mrna_amounts : dict[dict[float]]
    A dict keyed to cell lines containing a dict keyed to genes containing float mRNA amounts.
def euclidean_dist(point1, point2):
    (x1, y1) = point1
    (x2, y2) = point2
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
Compute the Euclidean distance between two points.

Parameters
----------
point1, point2 : 2-tuples of float
    The input points.

Returns
-------
d : float
    The distance between the input points.

Examples
--------
>>> point1 = (1.0, 2.0)
>>> point2 = (4.0, 6.0)  # (3., 4.) away, simplest Pythagorean triple
>>> euclidean_dist(point1, point2)
5.0
def stem(self):
    name = self.name
    i = name.rfind('.')
    if 0 < i < len(name) - 1:
        return name[:i]
    else:
        return name
The final path component, minus its last suffix.
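A self-contained sketch of the suffix-stripping logic above, written as a standalone function for illustration (the property version operates on ``self.name``; the bounds check keeps dotfiles and trailing-dot names intact):

```python
def stem(name):
    # Strip the last suffix; a leading dot ('.bashrc') or trailing dot is kept.
    i = name.rfind('.')
    return name[:i] if 0 < i < len(name) - 1 else name

print(stem('archive.tar.gz'))  # archive.tar
print(stem('.bashrc'))         # .bashrc
print(stem('report'))          # report
```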
@classmethod
def create(cls, zone_id, record):
    cls.echo('Creating new zone version')
    new_version_id = Zone.new(zone_id)
    cls.echo('Updating zone version')
    cls.add(zone_id, new_version_id, record)
    cls.echo('Activation of new zone version')
    Zone.set(zone_id, new_version_id)
    return new_version_id
Create a new zone version for record.
def deserialise(self, element_json: str) -> Element:
    return self.deserialise_dict(json.loads(element_json))
Deserialises the given JSON into an element.

>>> json = '{"element": "string", "content": "Hello"}'
>>> JSONDeserialiser().deserialise(json)
String(content='Hello')
def add_data_item(self, data_item: DataItem) -> None:
    display_item = (
        data_item._data_item.container.get_display_item_for_data_item(data_item._data_item)
        if data_item._data_item.container else None
    )
    if display_item:
        self.__data_group.append_display_item(display_item)
Add a data item to the group.

:param data_item: The :py:class:`nion.swift.Facade.DataItem` object to add.

.. versionadded:: 1.0

Scriptable: Yes
def paginate(self, page=1, perpage=10, category=None):
    q = db.select(self.table).fields(
        'title', 'slug', 'description', 'html', 'css', 'js',
        'category', 'status', 'comments', 'author', 'created', 'pid')
    if category:
        q.condition('category', category)
    results = (q.limit(perpage).offset((page - 1) * perpage)
               .order_by('created', 'DESC').execute())
    return [self.load(data, self.model) for data in results]
Paginate the posts
def __batch_update(self, train_events, test_events, n_epoch):
    for epoch in range(n_epoch):
        # shuffle between epochs, except in the single-epoch (online) case
        if n_epoch != 1:
            np.random.shuffle(train_events)
        for e in train_events:
            self.rec.update(e, batch_train=True)
        MPR = self.__batch_evaluate(test_events)
        if self.debug:
            logger.debug('epoch %2d: MPR = %f' % (epoch + 1, MPR))
Batch update called by the fitting method.

Args:
    train_events (list of Event): Positive training events.
    test_events (list of Event): Test events.
    n_epoch (int): Number of epochs for the batch training.
def __update_service_status(self, statuscode):
    if self.__service_status != statuscode:
        self.__service_status = statuscode
        self.__send_service_status_to_frontend()
Set the internal status of the service object, and notify frontend.
def kill_the_system(self, warning: str):
    log.critical('Kill reason: ' + warning)
    if self.DEBUG:
        return
    try:
        self.mail_this(warning)
    except socket.gaierror:
        current_time = time.localtime()
        formatted_time = time.strftime('%Y-%m-%d %I:%M:%S%p', current_time)
        with open(self.config['global']['killer_file'], 'a', encoding='utf-8') as killer_file:
            killer_file.write('Time: {0}\nInternet is out.\n'
                              'Failure: {1}\n\n'.format(formatted_time, warning))
Send an e-mail, and then shut the system down quickly.
def write(self, ontol, **args):
    s = self.render(ontol, **args)
    if self.outfile is None:
        print(s)
    else:
        with open(self.outfile, 'w') as f:
            f.write(s)
Write an `ontology` object
def _check_pos(self, level, *tokens):
    for record in self.records:
        if all(record.levelno == level and token in record.message for token in tokens):
            return
    level_name = logging.getLevelName(level)
    msgs = ["Tokens {} not found in {}, all that was logged is...".format(tokens, level_name)]
    for record in self.records:
        msgs.append("  {:9s} {!r}".format(record.levelname, record.message))
    self.test_instance.fail("\n".join(msgs))
Check if the different tokens were logged in one record, assert by level.
def getFile(self, file_xml_uri):
    find = re.match(r'/fmi/xml/cnt/([\w\d.-]+)\.([\w]+)?-*', file_xml_uri)
    file_name = find.group(1)
    file_extension = find.group(2)
    file_binary = self._doRequest(is_file=True, file_xml_uri=file_xml_uri)
    return (file_name, file_extension, file_binary)
This will execute a request to fetch file data from FMServer.
def put(self, job, result):
    self.job.put(job)
    return result.get()
Perform a job by a member in the pool and return the result.
def get_auth(host, app_name, database_name):
    from .hooks import _get_auth_hook
    return _get_auth_hook(host, app_name, database_name)
Authentication hook to allow plugging in custom authentication credential providers
def get_algorithm(self, name):
    name = adapt_name_for_rest(name)
    url = '/mdb/{}/algorithms{}'.format(self._instance, name)
    response = self._client.get_proto(url)
    message = mdb_pb2.AlgorithmInfo()
    message.ParseFromString(response.content)
    return Algorithm(message)
Gets a single algorithm by its unique name.

:param str name: Either a fully-qualified XTCE name or an alias in the format ``NAMESPACE/NAME``.
:rtype: .Algorithm
def build(ctx, inputs, output, cs):
    click.echo('chemdataextractor.dict.build')
    dt = DictionaryTagger(lexicon=ChemLexicon(), case_sensitive=cs)
    names = []
    for input in inputs:
        for line in input:
            tokens = line.split()
            names.append(tokens)
    dt.build(words=names)
    dt.save(output)
Build chemical name dictionary.
def deleteuser(self, user_id):
    deleted = self.delete_user(user_id)
    if deleted is False:
        return False
    else:
        return True
Deletes a user. Available only for administrators.

This is an idempotent function, calling this function for a non-existent user id still returns a status code 200 OK. The JSON response differs if the user was actually deleted or not. In the former the user is returned and in the latter not.

.. warning:: This is being deprecated; please use :func:`gitlab.Gitlab.delete_user`

:param user_id: The ID of the user
:return: True if it deleted, False if it couldn't
def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwds):
    if reduce_C_function is not None:
        kwds['reduce_C_function'] = reduce_C_function
    if gridsize is not None:
        kwds['gridsize'] = gridsize
    return self(kind='hexbin', x=x, y=y, C=C, **kwds)
Generate a hexagonal binning plot.

Generate a hexagonal binning plot of `x` versus `y`. If `C` is `None` (the default), this is a histogram of the number of occurrences of the observations at ``(x[i], y[i])``.

If `C` is specified, specifies values at given coordinates ``(x[i], y[i])``. These values are accumulated for each hexagonal bin and then reduced according to `reduce_C_function`, having as default the NumPy's mean function (:meth:`numpy.mean`). (If `C` is specified, it must also be a 1-D sequence of the same length as `x` and `y`, or a column label.)

Parameters
----------
x : int or str
    The column label or position for x points.
y : int or str
    The column label or position for y points.
C : int or str, optional
    The column label or position for the value of `(x, y)` point.
reduce_C_function : callable, default `np.mean`
    Function of one argument that reduces all the values in a bin to a single number (e.g. `np.mean`, `np.max`, `np.sum`, `np.std`).
gridsize : int or tuple of (int, int), default 100
    The number of hexagons in the x-direction. The corresponding number of hexagons in the y-direction is chosen in a way that the hexagons are approximately regular. Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons in the x-direction and the y-direction.
**kwds
    Additional keyword arguments are documented in :meth:`DataFrame.plot`.

Returns
-------
matplotlib.AxesSubplot
    The matplotlib ``Axes`` on which the hexbin is plotted.

See Also
--------
DataFrame.plot : Make plots of a DataFrame.
matplotlib.pyplot.hexbin : Hexagonal binning plot using matplotlib, the matplotlib function that is used under the hood.

Examples
--------
The following examples are generated with random data from a normal distribution.

.. plot::
    :context: close-figs

    >>> n = 10000
    >>> df = pd.DataFrame({'x': np.random.randn(n),
    ...                    'y': np.random.randn(n)})
    >>> ax = df.plot.hexbin(x='x', y='y', gridsize=20)

The next example uses `C` and `np.sum` as `reduce_C_function`. Note that `'observations'` values ranges from 1 to 5 but the result plot shows values up to more than 25. This is because of the `reduce_C_function`.

.. plot::
    :context: close-figs

    >>> n = 500
    >>> df = pd.DataFrame({
    ...     'coord_x': np.random.uniform(-3, 3, size=n),
    ...     'coord_y': np.random.uniform(30, 50, size=n),
    ...     'observations': np.random.randint(1, 5, size=n)
    ...     })
    >>> ax = df.plot.hexbin(x='coord_x',
    ...                     y='coord_y',
    ...                     C='observations',
    ...                     reduce_C_function=np.sum,
    ...                     gridsize=10,
    ...                     cmap="viridis")
def commit_comment(self, comment_id):
    url = self._build_url('comments', str(comment_id), base_url=self._api)
    json = self._json(self._get(url), 200)
    return RepoComment(json, self) if json else None
Get a single commit comment. :param int comment_id: (required), id of the comment used by GitHub :returns: :class:`RepoComment <github3.repos.comment.RepoComment>` if successful, otherwise None
def join(self, delimiter=' ', overlap_threshold=0.1):
    sorted_by_start = sorted(self.labels)
    concat_values = []
    last_label_end = None
    for label in sorted_by_start:
        if last_label_end is None or (last_label_end - label.start < overlap_threshold and last_label_end > 0):
            concat_values.append(label.value)
            last_label_end = label.end
        else:
            raise ValueError('Labels overlap, not able to define the correct order')
    return delimiter.join(concat_values)
Return a string with all labels concatenated together. The order of the labels is defined by the start of the label. If the overlapping between two labels is greater than ``overlap_threshold``, an Exception is thrown.

Args:
    delimiter (str): A string to join two consecutive labels.
    overlap_threshold (float): Maximum overlap between two consecutive labels.

Returns:
    str: A string with all labels concatenated together.

Example:
    >>> ll = LabelList(idx='some', labels=[
    >>>     Label('a', start=0, end=4),
    >>>     Label('b', start=3.95, end=6.0),
    >>>     Label('c', start=7.0, end=10.2),
    >>>     Label('d', start=10.3, end=14.0)
    >>> ])
    >>> ll.join(' - ')
    'a - b - c - d'
def getPrintAddress(self):
    address_lines = []
    addresses = [
        self.getPostalAddress(),
        self.getPhysicalAddress(),
        self.getBillingAddress(),
    ]
    for address in addresses:
        city = address.get("city", "")
        zip = address.get("zip", "")
        state = address.get("state", "")
        country = address.get("country", "")
        if city:
            address_lines = [
                address["address"].strip(),
                "{} {}".format(city, zip).strip(),
                "{} {}".format(state, country).strip(),
            ]
            break
    return address_lines
Get an address for printing
def reqHistogramData(
        self, contract: Contract, useRTH: bool,
        period: str) -> List[HistogramData]:
    return self._run(
        self.reqHistogramDataAsync(contract, useRTH, period))
Request histogram data.

This method is blocking.

https://interactivebrokers.github.io/tws-api/histograms.html

Args:
    contract: Contract to query.
    useRTH: If True then only show data from within Regular Trading Hours, if False then show all data.
    period: Period of which data is being requested, for example '3 days'.
def push(collector, image, **kwargs):
    if not image.image_index:
        raise BadOption("The chosen image does not have a image_index configuration", wanted=image.name)
    tag = kwargs["artifact"]
    if tag is NotSpecified:
        tag = collector.configuration["harpoon"].tag
    if tag is not NotSpecified:
        image.tag = tag
    Builder().make_image(image, collector.configuration["images"], pushing=True)
    Syncer().push(image)
Push an image
def reroot(self, rppr=None, pretend=False):
    with scratch_file(prefix='tree', suffix='.tre') as name:
        subprocess.check_call([rppr or 'rppr', 'reroot', '-c', self.path, '-o', name])
        if not pretend:
            self.update_file('tree', name)
            self._log('Rerooting refpkg')
Reroot the phylogenetic tree. This operation calls ``rppr reroot`` to generate the rerooted tree, so you must have ``pplacer`` and its auxiliary tools ``rppr`` and ``guppy`` installed for it to work. You can specify the path to ``rppr`` by giving it as the *rppr* argument. If *pretend* is ``True``, the convexification is run, but the refpkg is not actually updated.
def annual_volatility(returns, period=DAILY, alpha=2.0, annualization=None, out=None):
    allocated_output = out is None
    if allocated_output:
        out = np.empty(returns.shape[1:])
    returns_1d = returns.ndim == 1
    if len(returns) < 2:
        out[()] = np.nan
        if returns_1d:
            out = out.item()
        return out
    ann_factor = annualization_factor(period, annualization)
    nanstd(returns, ddof=1, axis=0, out=out)
    out = np.multiply(out, ann_factor ** (1.0 / alpha), out=out)
    if returns_1d:
        out = out.item()
    return out
Determines the annual volatility of a strategy.

Parameters
----------
returns : pd.Series or np.ndarray
    Periodic returns of the strategy, noncumulative.
    - See full explanation in :func:`~empyrical.stats.cum_returns`.
period : str, optional
    Defines the periodicity of the 'returns' data for purposes of annualizing. Value ignored if `annualization` parameter is specified. Defaults are::

        'monthly': 12
        'weekly': 52
        'daily': 252

alpha : float, optional
    Scaling relation (Levy stability exponent).
annualization : int, optional
    Used to suppress default values available in `period` to convert returns into annual returns. Value should be the annual frequency of `returns`.
out : array-like, optional
    Array to use as output buffer. If not passed, a new array will be created.

Returns
-------
annual_volatility : float
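A quick sanity check of the computation above, assuming the default `alpha=2.0` and daily data: the sample standard deviation (ddof=1) is scaled by the annualization factor raised to 1/alpha, which for daily returns is the conventional sqrt(252). The return values here are made up for illustration:

```python
import numpy as np

returns = np.array([0.01, -0.02, 0.003, 0.015, -0.007])  # hypothetical daily returns
ann_factor = 252  # conventional trading-day count for 'daily'
alpha = 2.0       # default Levy stability exponent

# sample std (ddof=1), then scale by ann_factor ** (1/alpha)
ann_vol = np.nanstd(returns, ddof=1) * ann_factor ** (1.0 / alpha)
print(ann_vol)
```

With alpha at its default of 2.0 this reduces to the familiar "daily std times sqrt(252)" annualization.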
def state_transition_run(self, event_to_wait_on):
    while event_to_wait_on.wait():
        event_to_wait_on.clear()
        if self.state_transition_callback_kill_event.is_set():
            return
        self.state_transition_func()
This is the thread that listens to an event from the timer process to execute the state_transition_func callback in the context of the main process.
def apply_transformation(self, structure):
    if self.species_map is None:
        match = StructureMatcher()
        s_map = match.get_best_electronegativity_anonymous_mapping(
            self.unrelaxed_structure, structure)
    else:
        s_map = self.species_map
    params = list(structure.lattice.abc)
    params.extend(structure.lattice.angles)
    new_lattice = Lattice.from_parameters(
        *[p * self.params_percent_change[i] for i, p in enumerate(params)])
    species, frac_coords = [], []
    for site in self.relaxed_structure:
        species.append(s_map[site.specie])
        frac_coords.append(site.frac_coords)
    return Structure(new_lattice, species, frac_coords)
Returns a copy of structure with lattice parameters and sites scaled to the same degree as the relaxed_structure.

Args:
    structure (Structure): A structurally similar structure in regards to crystal and site positions.
def decode(pieces, sequence_length, model_file=None, model_proto=None,
           reverse=False, name=None):
    return _gen_sentencepiece_processor_op.sentencepiece_decode(
        pieces, sequence_length, model_file=model_file,
        model_proto=model_proto, reverse=reverse, name=name)
Decode pieces into postprocessed text.

Args:
    pieces: A 2D int32 or string tensor [batch_size x max_length] of encoded sequences.
    sequence_length: A 1D int32 tensor [batch_size] representing the length of pieces.
    model_file: The sentencepiece model file path.
    model_proto: The sentencepiece model serialized proto. Either `model_file` or `model_proto` must be set.
    reverse: Reverses the tokenized sequence (Default = false)
    name: The name argument that is passed to the op function.

Returns:
    text: A 1D string tensor of decoded string.
def fnmatches(entry, *pattern_list):
    for pattern in pattern_list:
        if pattern and fnmatch(entry, pattern):
            return True
    return False
returns true if entry matches any of the glob patterns, false otherwise
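A usage sketch with the stdlib ``fnmatch`` (the helper is reproduced so the example is self-contained; note that empty patterns are skipped rather than matched):

```python
from fnmatch import fnmatch

def fnmatches(entry, *pattern_list):
    # True if entry matches any non-empty glob pattern.
    for pattern in pattern_list:
        if pattern and fnmatch(entry, pattern):
            return True
    return False

print(fnmatches('photo.jpg', '*.png', '*.jpg'))  # True
print(fnmatches('notes.txt', '*.png', '*.jpg'))  # False
print(fnmatches('anything', ''))                 # False: empty patterns are skipped
```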
def construct_item_args(self, domain_event):
    sequence_id = domain_event.__dict__[self.sequence_id_attr_name]
    position = getattr(domain_event, self.position_attr_name, None)
    topic, state = self.get_item_topic_and_state(
        domain_event.__class__, domain_event.__dict__
    )
    other_args = tuple(getattr(domain_event, name) for name in self.other_attr_names)
    return (sequence_id, position, topic, state) + other_args
Constructs attributes of a sequenced item from the given domain event.
def autocommand(func):
    name = func.__name__
    title, desc = command.parse_docstring(func)
    if not title:
        title = 'Auto command for: %s' % name
    if not desc:
        desc = ' '
    return AutoCommand(title=title, desc=desc, name=name, func=func)
A simplified decorator for making a single function a Command instance. In the future this will leverage PEP 484 to do really smart function parsing and conversion to argparse actions.
def _init_journal(self, permissive=True):
    nowstamp = datetime.now().strftime("%d-%b-%Y %H:%M:%S.%f")[:-3]
    self._add_entry(templates.INIT.format(time_stamp=nowstamp))
    if permissive:
        self._add_entry(templates.INIT_DEBUG)
Add the initialization lines to the journal.

By default adds JrnObj variable and timestamp to the journal contents.

Args:
    permissive (bool): if True most errors in journal will not cause Revit to stop journal execution. Some still do.
def get_squeezed_contents(contents):
    line_between_example_code = substitute.Substitution(
        '\n\n    ', '\n    ', True
    )
    lines_between_examples = substitute.Substitution('\n\n\n', '\n\n', True)
    lines_between_sections = substitute.Substitution(
        '\n\n\n\n', '\n\n\n', True
    )
    result = contents
    result = line_between_example_code.apply_and_get_result(result)
    result = lines_between_examples.apply_and_get_result(result)
    result = lines_between_sections.apply_and_get_result(result)
    return result
Squeeze the contents by removing blank lines between definition and example and remove duplicate blank lines except between sections.
def reset(self):
    for stat in six.itervalues(self._op_stats):
        if stat._start_time is not None:
            return False
    self._op_stats = {}
    return True
Reset all statistics and clear any statistic names.

All statistics must be inactive before a reset will execute.

Returns:
    True if reset, False if not
def _validate_iterable(self, is_iterable, key, value):
    if is_iterable:
        try:
            iter(value)
        except TypeError:
            self._error(key, "Must be iterable (e.g. a list or array)")
Validate fields with `iterable` key in schema set to True
def combine_action_handlers(*handlers):
    # make sure every given handler is awaitable
    for handler in handlers:
        if not (iscoroutinefunction(handler) or iscoroutine(handler)):
            raise ValueError("Provided handler is not a coroutine: %s" % handler)

    async def combined_handler(*args, **kwds):
        for handler in handlers:
            await handler(*args, **kwds)

    return combined_handler
This function combines the given action handlers into a single function which will call all of them.
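A minimal, self-contained sketch of how the combined handler behaves; the helper is re-declared here so the snippet stands alone:

```python
import asyncio
from inspect import iscoroutine, iscoroutinefunction

def combine_action_handlers(*handlers):
    # reject anything that is not awaitable
    for handler in handlers:
        if not (iscoroutinefunction(handler) or iscoroutine(handler)):
            raise ValueError("Provided handler is not a coroutine: %s" % handler)

    async def combined_handler(*args, **kwds):
        # await each handler in the order it was given
        for handler in handlers:
            await handler(*args, **kwds)

    return combined_handler

calls = []

async def first(action):
    calls.append(('first', action))

async def second(action):
    calls.append(('second', action))

handler = combine_action_handlers(first, second)
asyncio.run(handler('PING'))
print(calls)  # [('first', 'PING'), ('second', 'PING')]
```

Note that the handlers run sequentially, not concurrently: each one is awaited to completion before the next starts.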
def offset(polygons, distance, join='miter', tolerance=2, precision=0.001, join_first=False, max_points=199, layer=0, datatype=0): poly = [] if isinstance(polygons, PolygonSet): poly.extend(polygons.polygons) elif isinstance(polygons, CellReference) or isinstance( polygons, CellArray): poly.extend(polygons.get_polygons()) else: for obj in polygons: if isinstance(obj, PolygonSet): poly.extend(obj.polygons) elif isinstance(obj, CellReference) or isinstance(obj, CellArray): poly.extend(obj.get_polygons()) else: poly.append(obj) result = clipper.offset(poly, distance, join, tolerance, 1 / precision, 1 if join_first else 0) return None if len(result) == 0 else PolygonSet( result, layer, datatype, verbose=False).fracture( max_points, precision)
Shrink or expand a polygon or polygon set. Parameters ---------- polygons : polygon or array-like Polygons to be offset. Must be a ``PolygonSet``, ``CellReference``, ``CellArray``, or an array. The array may contain any of the previous objects or an array-like[N][2] of vertices of a polygon. distance : number Offset distance. Positive to expand, negative to shrink. join : {'miter', 'bevel', 'round'} Type of join used to create the offset polygon. tolerance : number For miter joints, this number must be at least 2 and it represents the maximum distance in multiples of offset between new vertices and their original position before beveling to avoid spikes at acute joints. For round joints, it indicates the curvature resolution in number of points per full circle. precision : float Desired precision for rounding vertex coordinates. join_first : bool Join all paths before offsetting to avoid unnecessary joins in adjacent polygon sides. max_points : integer If greater than 4, fracture the resulting polygons to ensure they have at most ``max_points`` vertices. This is not a tessellating function, so this number should be as high as possible. For example, it should be set to 199 for polygons being drawn in GDSII files. layer : integer The GDSII layer number for the resulting element. datatype : integer The GDSII datatype for the resulting element (between 0 and 255). Returns ------- out : ``PolygonSet`` or ``None`` Return the offset shape as a set of polygons.
def get_random_string(): hash_string = "%8x" % random.getrandbits(32) hash_string = hash_string.strip() while is_number(hash_string): hash_string = "%8x" % random.getrandbits(32) hash_string = hash_string.strip() return hash_string
Make a random string to use for bsub job IDs, so that different jobs do not have the same job ID.
def update_tab_for_course(self, tab_id, course_id, hidden=None, position=None): path = {} data = {} params = {} path["course_id"] = course_id path["tab_id"] = tab_id if position is not None: data["position"] = position if hidden is not None: data["hidden"] = hidden self.logger.debug("PUT /api/v1/courses/{course_id}/tabs/{tab_id} with query params: {params} and form data: {data}".format(params=params, data=data, **path)) return self.generic_request("PUT", "/api/v1/courses/{course_id}/tabs/{tab_id}".format(**path), data=data, params=params, single_item=True)
Update a tab for a course. Home and Settings tabs are not manageable, and can't be hidden or moved. Returns a tab object
def has_group_perms(self, perm, obj, approved): if not self.group: return False if self.use_smart_cache: content_type_pk = Permission.objects.get_content_type(obj).pk def _group_has_perms(cached_perms): return cached_perms.get(( obj.pk, content_type_pk, perm, approved, )) return _group_has_perms(self._group_perm_cache) return Permission.objects.group_permissions( self.group, perm, obj, approved, ).filter( object_id=obj.pk, ).exists()
Check if group has the permission for the given object
def do_GET(self): if self.path.lower().endswith("?wsdl"): service_path = self.path[:-5] service = self.server.getNode(service_path) if hasattr(service, "_wsdl"): wsdl = service._wsdl proto = 'http' if hasattr(self.server, 'proto'): proto = self.server.proto serviceUrl = '%s://%s:%d%s' % (proto, self.server.server_name, self.server.server_port, service_path) soapAddress = '<soap:address location="%s"/>' % serviceUrl wsdlre = re.compile(r'<soap:address[^>]*>', re.IGNORECASE) wsdl = wsdlre.sub(soapAddress, wsdl) self.send_xml(wsdl) else: self.send_error(404, "WSDL not available for that service [%s]." % self.path) else: self.send_error(404, "Service not found [%s]." % self.path)
The GET command.
def _get_pull_requests(self): for pull in self.repo.pull_requests( state="closed", base=self.github_info["master_branch"], direction="asc" ): if self._include_pull_request(pull): yield pull
Gets all closed pull requests from the repo, since we can't do a search filtered by merge date
def set_motor_force(self, motor_name, force): self.call_remote_api('simxSetJointForce', self.get_object_handle(motor_name), force, sending=True)
Sets the maximum force or torque that a joint can exert.
def create_ckan_ini(self): self.run_command( command='/scripts/run_as_user.sh /usr/lib/ckan/bin/paster make-config' ' ckan /project/development.ini', rw_project=True, ro={scripts.get_script_path('run_as_user.sh'): '/scripts/run_as_user.sh'}, )
Use make-config to generate an initial development.ini file
def make_object(cls, data): if issubclass(cls, Object): self = object.__new__(cls) self._data = data else: self = data return self
Creates an API object of class `cls`, setting its `_data` to data. Subclasses of `Object` are required to use this to build a new, empty instance without using their constructor.
def copy_from_model(cls, model_name, reference, **kwargs): if isinstance(reference, cls): settings = reference.__dict__.copy() settings.pop('model') else: settings = _get_model_info(reference) settings.pop('model_name') settings.update(kwargs) settings['reference'] = reference return cls(model_name, **settings)
Set-up a user-defined grid using specifications of a reference grid model. Parameters ---------- model_name : string name of the user-defined grid model. reference : string or :class:`CTMGrid` instance Name of the reference model (see :func:`get_supported_models`), or a :class:`CTMGrid` object from which grid set-up is copied. **kwargs Any set-up parameter which will override the settings of the reference model (see :class:`CTMGrid` parameters). Returns ------- A :class:`CTMGrid` object.
def timedelta_isoformat(td: datetime.timedelta) -> str: minutes, seconds = divmod(td.seconds, 60) hours, minutes = divmod(minutes, 60) return f'P{td.days}DT{hours:d}H{minutes:d}M{seconds:d}.{td.microseconds:06d}S'
ISO 8601 encoding for timedeltas.
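A quick usage example; the function is re-declared here so the snippet stands alone:

```python
import datetime

def timedelta_isoformat(td: datetime.timedelta) -> str:
    # split the seconds field into hours / minutes / seconds
    minutes, seconds = divmod(td.seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f'P{td.days}DT{hours:d}H{minutes:d}M{seconds:d}.{td.microseconds:06d}S'

td = datetime.timedelta(days=1, hours=2, minutes=3, seconds=4, microseconds=5)
print(timedelta_isoformat(td))  # P1DT2H3M4.000005S
```

Note that days and seconds are the normalized `timedelta` fields, so a negative delta will produce a negative day count rather than an ISO 8601 negative duration.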
def _toolkit_serialize_summary_struct(model, sections, section_titles): output_dict = dict() output_dict['sections'] = [[(field[0], __extract_model_summary_value(model, field[1])) for field in section] for section in sections] output_dict['section_titles'] = section_titles return output_dict
Serialize model summary into a dict with ordered lists of sections and section titles Parameters ---------- model : Model object sections : Ordered list of lists (sections) of tuples (field,value) [ [(field1, value1), (field2, value2)], [(field3, value3), (field4, value4)], ] section_titles : Ordered list of section titles Returns ------- output_dict : A dict with two entries: 'sections' : ordered list with tuples of the form ('label',value) 'section_titles' : ordered list of section labels
def arr_normalize(arr, *args, **kwargs): f_max = arr.max() f_min = arr.min() f_range = f_max - f_min arr_shifted = arr + -f_min arr_norm = arr_shifted / f_range for key, value in kwargs.items(): if key == 'scale': arr_norm *= value return arr_norm
ARGS arr array to normalize **kwargs scale = <f_scale> scale the normalized output by <f_scale> DESC Given an input array, <arr>, normalize all values to range between 0 and 1. If specified in the **kwargs, optionally scale the result by <f_scale>.
def affine_map(points1, points2): A = np.ones((4, 4)) A[:, :3] = points1 B = np.ones((4, 4)) B[:, :3] = points2 matrix = np.eye(4) for i in range(3): matrix[i] = np.linalg.solve(A, B[:, i]) return matrix
Find a 3D transformation matrix that maps points1 onto points2. Arguments are specified as arrays of four 3D coordinates, shape (4, 3).
async def get_scm_level(context, project): await context.populate_projects() level = context.projects[project]['access'].replace("scm_level_", "") return level
Get the scm level for a project from ``projects.yml``. We define all known projects in ``projects.yml``. Let's make sure we have it populated in ``context``, then return the scm level of ``project``. SCM levels are an integer, 1-3, matching Mozilla commit levels. https://www.mozilla.org/en-US/about/governance/policies/commit/access-policy/ Args: context (scriptworker.context.Context): the scriptworker context project (str): the project to get the scm level for. Returns: str: the level of the project, as a string.
def update_agent_db_refs(self, agent, agent_text, do_rename=True): map_db_refs = deepcopy(self.gm.get(agent_text)) self.standardize_agent_db_refs(agent, map_db_refs, do_rename)
Update db_refs of agent using the grounding map If the grounding map is missing one of the HGNC symbol or Uniprot ID, attempts to reconstruct one from the other. Parameters ---------- agent : :py:class:`indra.statements.Agent` The agent whose db_refs will be updated agent_text : str The agent_text to find a grounding for in the grounding map dictionary. Typically this will be agent.db_refs['TEXT'] but there may be situations where a different value should be used. do_rename: Optional[bool] If True, the Agent name is updated based on the mapped grounding. If do_rename is True the priority for setting the name is FamPlex ID, HGNC symbol, then the gene name from Uniprot. Default: True Raises ------ ValueError If the grounding map contains an HGNC symbol for agent_text but no HGNC ID can be found for it. ValueError If the grounding map contains both an HGNC symbol and a Uniprot ID, but the HGNC symbol and the gene name associated with the gene in Uniprot do not match or if there is no associated gene name in Uniprot.
def ask_dir(self): args['directory'] = askdirectory(**self.dir_opt) self.dir_text.set(args['directory'])
Dialogue box for choosing a directory
def _ufunc_helper(lhs, rhs, fn_array, fn_scalar, lfn_scalar, rfn_scalar=None): if isinstance(lhs, numeric_types): if isinstance(rhs, numeric_types): return fn_scalar(lhs, rhs) else: if rfn_scalar is None: return lfn_scalar(rhs, float(lhs)) else: return rfn_scalar(rhs, float(lhs)) elif isinstance(rhs, numeric_types): return lfn_scalar(lhs, float(rhs)) elif isinstance(rhs, NDArray): return fn_array(lhs, rhs) else: raise TypeError('type %s not supported' % str(type(rhs)))
Helper function for element-wise operation. The function will perform numpy-like broadcasting if needed and call different functions. Parameters -------- lhs : NDArray or numeric value Left-hand side operand. rhs : NDArray or numeric value Right-hand operand, fn_array : function Function to be called if both lhs and rhs are of ``NDArray`` type. fn_scalar : function Function to be called if both lhs and rhs are numeric values. lfn_scalar : function Function to be called if lhs is ``NDArray`` while rhs is numeric value rfn_scalar : function Function to be called if lhs is numeric value while rhs is ``NDArray``; if none is provided, then the function is commutative, so rfn_scalar is equal to lfn_scalar Returns -------- NDArray result array
def to_array(self): array = super(Document, self).to_array() array['file_id'] = u(self.file_id) if self.thumb is not None: array['thumb'] = self.thumb.to_array() if self.file_name is not None: array['file_name'] = u(self.file_name) if self.mime_type is not None: array['mime_type'] = u(self.mime_type) if self.file_size is not None: array['file_size'] = int(self.file_size) return array
Serializes this Document to a dictionary. :return: dictionary representation of this object. :rtype: dict
def text_wrap(text, length=None, indent='', firstline_indent=None): if length is None: length = get_help_width() if indent is None: indent = '' if firstline_indent is None: firstline_indent = indent if len(indent) >= length: raise ValueError('Length of indent exceeds length') if len(firstline_indent) >= length: raise ValueError('Length of first line indent exceeds length') text = text.expandtabs(4) result = [] wrapper = textwrap.TextWrapper( width=length, initial_indent=firstline_indent, subsequent_indent=indent) subsequent_wrapper = textwrap.TextWrapper( width=length, initial_indent=indent, subsequent_indent=indent) for paragraph in (p.strip() for p in text.splitlines()): if paragraph: result.extend(wrapper.wrap(paragraph)) else: result.append('') wrapper = subsequent_wrapper return '\n'.join(result)
Wraps a given text to a maximum line length and returns it. It turns lines that only contain whitespace into empty lines, keeps new lines, and expands tabs using 4 spaces. Args: text: str, text to wrap. length: int, maximum length of a line, includes indentation. If this is None then use get_help_width() indent: str, indent for all but first line. firstline_indent: str, indent for first line; if None, fall back to indent. Returns: str, the wrapped text. Raises: ValueError: Raised if indent or firstline_indent not shorter than length.
def updated(self, user): for who, what, old, new in self.history(user): if (what == "comment" or what == "description") and new != "": return True return False
True if the user commented on the ticket in the given time frame
def create_log(self): return EventLog( self.networkapi_url, self.user, self.password, self.user_ldap)
Get an instance of log services facade.
def python_path(self, script): if not script: try: import __main__ script = getfile(__main__) except Exception: return script = os.path.realpath(script) if self.cfg.get('python_path', True): path = os.path.dirname(script) if path not in sys.path: sys.path.insert(0, path) return script
Called during initialisation to obtain the ``script`` name. If ``script`` does not evaluate to ``True`` it is evaluated from the ``__main__`` import. Returns the real path of the python script which runs the application.
def precompute_sharp_round(nxk, nyk, xc, yc): s4m = np.ones((nyk, nxk), dtype=np.int16) s4m[yc, xc] = 0 s2m = np.ones((nyk, nxk), dtype=np.int16) s2m[yc, xc] = 0 s2m[yc:nyk, 0:xc] = -1 s2m[0:yc+1, xc+1:nxk] = -1 return s2m, s4m
Pre-computes mask arrays to be used by the 'sharp_round' function for roundness computations based on two- and four-fold symmetries.
def flatten( iterables ): for it in iterables: if isinstance(it, str): yield it else: for element in it: yield element
Flatten an iterable, except for string elements.
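The string special case is the whole point of this helper, since strings are themselves iterable. A self-contained illustration (function re-declared here):

```python
def flatten(iterables):
    # strings are yielded whole instead of character by character
    for it in iterables:
        if isinstance(it, str):
            yield it
        else:
            for element in it:
                yield element

print(list(flatten(['ab', [1, 2], (3,)])))  # ['ab', 1, 2, 3]
```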
def verify(password_hash, password): if password_hash.startswith(argon2id.STRPREFIX): return argon2id.verify(password_hash, password) elif password_hash.startswith(argon2i.STRPREFIX): return argon2i.verify(password_hash, password) elif password_hash.startswith(scrypt.STRPREFIX): return scrypt.verify(password_hash, password) else: raise CryptPrefixError("given password_hash is not in a supported format")
Takes a modular-crypt-encoded stored password hash derived using one of the algorithms supported by `libsodium` and checks whether the user-provided password hashes to the same string when using the parameters saved in the stored hash
def crc32File(filename, skip=0): with open(filename, 'rb') as stream: stream.read(skip) return zlib.crc32(stream.read()) & 0xffffffff
Computes the CRC-32 of the contents of filename, optionally skipping a certain number of bytes at the beginning of the file.
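A small self-contained demonstration (function re-declared here); masking with `0xffffffff` keeps the result unsigned and consistent across Python versions:

```python
import os
import tempfile
import zlib

def crc32File(filename, skip=0):
    with open(filename, 'rb') as stream:
        stream.read(skip)  # discard the skipped prefix
        return zlib.crc32(stream.read()) & 0xffffffff

# write a file with a two-byte header we want to ignore
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'XXhello')

result = crc32File(path, skip=2)
os.remove(path)
print(result == (zlib.crc32(b'hello') & 0xffffffff))  # True
```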
def _filter_names(names): names = [n for n in names if n not in EXCLUDE_NAMES] for pattern in EXCLUDE_PATTERNS: names = [n for n in names if (not fnmatch.fnmatch(n, pattern)) and (not n.endswith('.py'))] return names
Given a list of file names, return those names that should be copied.
def build_vcf_parts(feature, genome_2bit, info=None): base1 = genome_2bit[feature.chrom1].get( feature.start1, feature.start1 + 1).upper() id1 = "hydra{0}a".format(feature.name) base2 = genome_2bit[feature.chrom2].get( feature.start2, feature.start2 + 1).upper() id2 = "hydra{0}b".format(feature.name) orientation = _breakend_orientation(feature.strand1, feature.strand2) return (VcfLine(feature.chrom1, feature.start1, id1, base1, _vcf_alt(base1, feature.chrom2, feature.start2, orientation.is_rc1, orientation.is_first1), _vcf_info(feature.start1, feature.end1, id2, info)), VcfLine(feature.chrom2, feature.start2, id2, base2, _vcf_alt(base2, feature.chrom1, feature.start1, orientation.is_rc2, orientation.is_first2), _vcf_info(feature.start2, feature.end2, id1, info)))
Convert BedPe feature information into VCF part representation. Each feature will have two VCF lines for each side of the breakpoint.
def resource_set_create(self, token, name, **kwargs): return self._realm.client.post( self.well_known['resource_registration_endpoint'], data=self._get_data(name=name, **kwargs), headers=self.get_headers(token) )
Create a resource set. https://docs.kantarainitiative.org/uma/rec-oauth-resource-reg-v1_0_1.html#rfc.section.2.2.1 :param str token: client access token :param str id: Identifier of the resource set :param str name: :param str uri: (optional) :param str type: (optional) :param list scopes: (optional) :param str icon_url: (optional) :param str DisplayName: (optional) :param boolean ownerManagedAccess: (optional) :param str owner: (optional) :rtype: str
def athalianatruth(args): p = OptionParser(athalianatruth.__doc__) opts, args = p.parse_args(args) if len(args) != 2: sys.exit(not p.print_help()) atxt, bctxt = args g = Grouper() pairs = set() for txt in (atxt, bctxt): extract_groups(g, pairs, txt) fw = open("pairs", "w") for pair in sorted(pairs): print("\t".join(pair), file=fw) fw.close() fw = open("groups", "w") for group in list(g): print(",".join(group), file=fw) fw.close()
%prog athalianatruth J_a.txt J_bc.txt Prepare pairs data for At alpha/beta/gamma.
def unique(new_cmp_dict, old_cmp_dict): newkeys = set(new_cmp_dict) oldkeys = set(old_cmp_dict) unique_keys = newkeys - oldkeys unique_ldict = [] for key in unique_keys: unique_ldict.append(new_cmp_dict[key]) return unique_ldict
Return a list of the values in new_cmp_dict whose keys are unique to new_cmp_dict (i.e. not present in old_cmp_dict)
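A short self-contained sketch of the set-difference behavior (function re-declared here for illustration):

```python
def unique(new_cmp_dict, old_cmp_dict):
    # keys present only in the new dict
    newkeys = set(new_cmp_dict)
    oldkeys = set(old_cmp_dict)
    unique_keys = newkeys - oldkeys
    unique_ldict = []
    for key in unique_keys:
        unique_ldict.append(new_cmp_dict[key])
    return unique_ldict

new = {'a': {'id': 1}, 'b': {'id': 2}}
old = {'a': {'id': 1}}
print(unique(new, old))  # [{'id': 2}]
```

Since set difference is unordered, the result order is not guaranteed when more than one key is unique.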
def over(expr, window): prior_op = expr.op() if isinstance(prior_op, ops.WindowOp): op = prior_op.over(window) else: op = ops.WindowOp(expr, window) result = op.to_expr() try: name = expr.get_name() except com.ExpressionError: pass else: result = result.name(name) return result
Turn an aggregation or full-sample analytic operation into a windowed operation. See ibis.window for more details on window configuration Parameters ---------- expr : value expression window : ibis.Window Returns ------- expr : type of input
def getFlaskResponse(responseString, httpStatus=200): return flask.Response(responseString, status=httpStatus, mimetype=MIMETYPE)
Returns a Flask response object for the specified data and HTTP status.
def set_ntp_servers(primary_server=None, secondary_server=None, deploy=False): ret = {} if primary_server: query = {'type': 'config', 'action': 'set', 'xpath': '/config/devices/entry[@name=\'localhost.localdomain\']/deviceconfig/system/ntp-servers/' 'primary-ntp-server', 'element': '<ntp-server-address>{0}</ntp-server-address>'.format(primary_server)} ret.update({'primary_server': __proxy__['panos.call'](query)}) if secondary_server: query = {'type': 'config', 'action': 'set', 'xpath': '/config/devices/entry[@name=\'localhost.localdomain\']/deviceconfig/system/ntp-servers/' 'secondary-ntp-server', 'element': '<ntp-server-address>{0}</ntp-server-address>'.format(secondary_server)} ret.update({'secondary_server': __proxy__['panos.call'](query)}) if deploy is True: ret.update(commit()) return ret
Set the NTP servers of the Palo Alto proxy minion. A commit will be required before this is processed. CLI Example: Args: primary_server(str): The primary NTP server IP address or FQDN. secondary_server(str): The secondary NTP server IP address or FQDN. deploy (bool): If true then commit the full candidate configuration, if false only set pending change. .. code-block:: bash salt '*' ntp.set_servers 0.pool.ntp.org 1.pool.ntp.org salt '*' ntp.set_servers primary_server=0.pool.ntp.org secondary_server=1.pool.ntp.org salt '*' ntp.set_servers 0.pool.ntp.org 1.pool.ntp.org deploy=True
def next(self): try: return six.next(self._wrapped) except grpc.RpcError as exc: six.raise_from(exceptions.from_grpc_error(exc), exc)
Get the next response from the stream. Returns: protobuf.Message: A single response from the stream.
def service_url_parse(url): endpoint = get_sanitized_endpoint(url) url_split_list = url.split(endpoint + '/') if len(url_split_list) > 1: url_split_list = url_split_list[1].split('/') else: raise Exception('Wrong url parsed') parsed_url = [s for s in url_split_list if '?' not in s if 'Server' not in s] return parsed_url
Parse the service and the folder of services from the url.
def cancel_inquiry(self): self.names_to_find = {} if self.is_inquiring: try: _bt.hci_send_cmd(self.sock, _bt.OGF_LINK_CTL, _bt.OCF_INQUIRY_CANCEL) except _bt.error as e: self.sock.close() self.sock = None raise BluetoothError(e.args[0], "error canceling inquiry: " + e.args[1]) self.is_inquiring = False
Call this method to cancel an inquiry in process. inquiry_complete will still be called.
def free(self): result = [p for p in self.lazy_properties if not p.feature.incidental and p.feature.free] result.extend(self.free_) return result
Returns free properties which are not dependency properties.
def transform_properties(properties, schema): new_properties = properties.copy() for prop_value, (prop_name, prop_type) in zip(new_properties.values(), schema["properties"].items()): if prop_value is None: continue elif prop_type == "time": new_properties[prop_name] = parse_date(prop_value).time() elif prop_type == "date": new_properties[prop_name] = parse_date(prop_value).date() elif prop_type == "datetime": new_properties[prop_name] = parse_date(prop_value) return new_properties
Transform properties types according to a schema. Parameters ---------- properties : dict Properties to transform. schema : dict Fiona schema containing the types.
def get_letters_per_page(self, per_page=1000, page=1, params=None): return self._get_resource_per_page(resource=LETTERS, per_page=per_page, page=page, params=params)
Get letters per page :param per_page: How many objects per page. Default: 1000 :param page: Which page. Default: 1 :param params: Search parameters. Default: {} :return: list
def search(api_key, query, offset=0, type='personal'): if not isinstance(api_key, str): raise InvalidAPIKeyException('API key must be a string') if not api_key or len(api_key) < 40: raise InvalidAPIKeyException('Invalid API key.') url = get_endpoint(api_key, query, offset, type) try: return requests.get(url).json() except requests.exceptions.RequestException as err: raise PunterException(err)
Get a list of email addresses for the provided domain. The type of search executed will vary depending on the query provided. Currently this query is restricted to either domain searches, in which the email addresses (and other bits) for the domain are returned, or searches for an email address. The latter is primarily meant for checking if an email address exists, although various other useful bits are also provided (for example, the domain where the address was found). :param api_key: Secret client API key. :param query: URL or email address on which to search. :param offset: Specifies the number of emails to skip. :param type: Specifies email type (i.e. generic or personal).