Dataset columns: code (strings, 75 to 104k characters), docstring (strings, 1 to 46.9k characters), text (strings, 164 to 112k characters).

def all(self, store_id, product_id, get_all=False, **queryparams):
    """
    Get information about a product’s images.

    :param store_id: The store id.
    :type store_id: :py:class:`str`
    :param product_id: The id for the product of a store.
    :type product_id: :py:class:`str`
    :param get_all: Should the query get all results
    :type get_all: :py:class:`bool`
    :param queryparams: The query string parameters
        queryparams['fields'] = []
        queryparams['exclude_fields'] = []
        queryparams['count'] = integer
        queryparams['offset'] = integer
    """
    self.store_id = store_id
    self.product_id = product_id
    self.image_id = None
    if get_all:
        return self._iterate(url=self._build_path(store_id, 'products', product_id, 'images'), **queryparams)
    else:
        # A read-only listing endpoint: issue a GET, not a POST.
        return self._mc_client._get(url=self._build_path(store_id, 'products', product_id, 'images'), **queryparams)

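A hypothetical call through the mailchimp3 client wrapper is sketched below; the attribute path and the ids are assumptions for illustration, not verified against any particular library version.

from mailchimp3 import MailChimp

client = MailChimp(mc_api='YOUR_API_KEY-us1')  # placeholder API key
# Fetch the first 10 image records for one product (placeholder ids).
images = client.stores.products.images.all(
    'store_1', 'prod_1', get_all=False, count=10)
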
def trigger(self, name, *args, **kwargs):
    """
    Triggers an event to run through middleware. This method will
    execute a chain of relevant trigger callbacks, until one of
    the callbacks returns the `break_trigger`.
    """
    # Relevant middleware is cached so we don't have to rediscover it
    # every time. Fetch the cached value if possible.
    listeners = self._triggers.get(name, [])

    # Execute each piece of middleware
    for listener in listeners:
        result = listener(*args, **kwargs)
        if result == break_trigger:
            return False
    return True

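The break_trigger pattern above is easiest to see with a toy dispatcher. Everything below is a self-contained illustrative sketch, not the original class; break_trigger is assumed to be a module-level sentinel.

break_trigger = object()  # sentinel a listener returns to stop the chain

class Dispatcher:
    def __init__(self):
        self._triggers = {}

    def on(self, name, listener):
        self._triggers.setdefault(name, []).append(listener)

    def trigger(self, name, *args, **kwargs):
        for listener in self._triggers.get(name, []):
            if listener(*args, **kwargs) == break_trigger:
                return False
        return True

d = Dispatcher()
d.on("save", lambda doc: print("validating", doc))
d.on("save", lambda doc: break_trigger if doc is None else None)
print(d.trigger("save", {"id": 1}))  # True: the whole chain ran
print(d.trigger("save", None))       # False: second listener broke the chain
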
def add_graph(
    self,
    y,
    x_label=None,
    y_label="",
    title="",
    x_run=None,
    y_run=None,
    svg_size_px=None,
    key_position="bottom right",
):
    """
    Add a new graph to the overlap report.

    Args:
        y (str): Value plotted on y-axis.
        x_label (str): Label on x-axis.
        y_label (str): Label on y-axis.
        title (str): Title of the plot.
        x_run ((float,float)): x-range.
        y_run ((int,int)): y-range.
        svg_size_px ((int,int)): Size of SVG image in pixels.
        key_position (str): GnuPlot position of the legend.
    """
    if x_run is None:
        x_run = self.default_x_run
    if y_run is None:
        y_run = self.default_y_run
    if svg_size_px is None:
        svg_size_px = self.default_svg_size_px
    for panel in self.panels:
        x_run = self._load_x_run(x_run)
        y_run = self._load_y_run(y_run)
        svg_size_px = self._load_svg_size_px(svg_size_px)
        panel.add_graph(
            y=y,
            x_run=x_run,
            y_run=y_run,
            svg_size_px=svg_size_px,
            y_label=y_label,
            x_label=x_label if x_label is not None else self.default_x_label,
            title=title,
            key_position=key_position,
        )

def delete_run():
    """
    Delete the selected run from the database.

    :return:
    """
    assert request.method == "POST", "POST request expected, received {}".format(request.method)

    if request.method == "POST":
        try:
            selections = json.loads(request.form["selections"])
            utils.drop_run(selections["project"], selections["run"])
            return jsonify({"response": "deleted {}".format(selections["run"])})
        except Exception as e:
            logging.error(e)
            return jsonify({"0": "__EMPTY"})

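A hypothetical client-side call for this Flask handler; the route URL and host are assumptions inferred from the function name, while the form field matches the handler above.

import json
import requests

resp = requests.post(
    'http://localhost:5000/delete_run',  # assumed route
    data={'selections': json.dumps({'project': 'demo', 'run': 'run_1'})},
)
print(resp.json())
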
def init_gaussian_hmm(observations, nstates, lag=1, reversible=True):
    """ Use a heuristic scheme to generate an initial model.

    Parameters
    ----------
    observations : list of ndarray((T_i))
        list of arrays of length T_i with observation data
    nstates : int
        The number of states.
    lag : int, optional
        Lag time at which the observations are prepared (observations
        are lagged when lag > 1).
    reversible : bool, optional
        Whether the initial model is estimated with a reversible
        transition matrix.

    Examples
    --------
    Generate initial model for a gaussian output model.

    >>> import bhmm
    >>> [model, observations, states] = bhmm.testsystems.generate_synthetic_observations(output='gaussian')
    >>> initial_model = init_gaussian_hmm(observations, model.nstates)

    """
    from bhmm.init import gaussian
    if lag > 1:
        observations = lag_observations(observations, lag)
    hmm0 = gaussian.init_model_gaussian1d(observations, nstates, reversible=reversible)
    hmm0._lag = lag
    return hmm0

def compute_geometric_median(X, eps=1e-5):
    """
    Estimate the geometric median of points in 2D.

    Code from https://stackoverflow.com/a/30305181

    Parameters
    ----------
    X : (N,2) ndarray
        Points in 2D. Second axis must be given in xy-form.

    eps : float, optional
        Distance threshold when to return the median.

    Returns
    -------
    (2,) ndarray
        Geometric median as xy-coordinate.

    """
    y = np.mean(X, 0)

    while True:
        D = scipy.spatial.distance.cdist(X, [y])
        nonzeros = (D != 0)[:, 0]

        Dinv = 1 / D[nonzeros]
        Dinvs = np.sum(Dinv)
        W = Dinv / Dinvs
        T = np.sum(W * X[nonzeros], 0)

        num_zeros = len(X) - np.sum(nonzeros)
        if num_zeros == 0:
            y1 = T
        elif num_zeros == len(X):
            return y
        else:
            R = (T - y) * Dinvs
            r = np.linalg.norm(R)
            rinv = 0 if r == 0 else num_zeros/r
            y1 = max(0, 1-rinv)*T + min(1, rinv)*y

        if scipy.spatial.distance.euclidean(y, y1) < eps:
            return y1

        y = y1

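A quick sanity check of the Weiszfeld iteration above, assuming the function and its numpy/scipy imports are in scope; unlike the mean, the result stays near the three coincident points despite the outlier.

import numpy as np
import scipy.spatial.distance  # used inside compute_geometric_median

pts = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [10.0, 0.0]])
print(np.mean(pts, 0))                # [2.5 0. ] -- pulled by the outlier
print(compute_geometric_median(pts))  # close to [0. 0.]
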
def stats_timing(stats_key, stats_logger):
    """Provide a transactional scope around a series of operations."""
    start_ts = now_as_float()
    try:
        yield start_ts
    except Exception as e:
        raise e
    finally:
        stats_logger.timing(stats_key, now_as_float() - start_ts)

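This generator is meant to be driven through contextlib.contextmanager so it can be used as a with-block. A self-contained sketch, with now_as_float swapped for time.time and a print-based logger standing in for the real stats logger:

import time
from contextlib import contextmanager

@contextmanager
def stats_timing(stats_key, stats_logger):
    start_ts = time.time()
    try:
        yield start_ts
    finally:
        stats_logger.timing(stats_key, time.time() - start_ts)

class PrintLogger:
    def timing(self, key, seconds):
        print('%s: %.3fs' % (key, seconds))

with stats_timing('sqllab.query.duration', PrintLogger()):
    time.sleep(0.1)  # the timed block of operations
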
def find_by_ids(ids, _connection=None, page_size=100, page_number=0,
                sort_by=enums.DEFAULT_SORT_BY,
                sort_order=enums.DEFAULT_SORT_ORDER):
    """
    List all videos identified by a list of Brightcove video ids
    """
    if not isinstance(ids, (list, tuple)):
        err = "Video.find_by_ids expects an iterable argument"
        raise exceptions.PyBrightcoveError(err)
    ids = ','.join([str(i) for i in ids])
    return connection.ItemResultSet('find_videos_by_ids', Video,
                                    _connection, page_size, page_number,
                                    sort_by, sort_order, video_ids=ids)

def drange(start, stop, step):
    """
    A generator that yields successive samples from start (inclusive)
    to stop (exclusive) in step intervals.

    Parameters
    ----------
    start : float
        starting point
    stop : float
        stopping point
    step : float
        stepping interval

    Yields
    ------
    x : float
        next sample
    """
    x = start
    if step > 0:
        while x + step <= stop:  # produces same behaviour as numpy.arange
            yield x
            x += step
    elif step < 0:
        while x + step >= stop:  # produces same behaviour as numpy.arange
            yield x
            x += step
    else:
        raise ZeroDivisionError("Step must be non-zero")

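A quick usage check, assuming drange is in scope; for these inputs the output matches numpy.arange:

print(list(drange(0.0, 1.0, 0.25)))   # [0.0, 0.25, 0.5, 0.75]
print(list(drange(1.0, 0.0, -0.5)))   # [1.0, 0.5]
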
def js_exec(self, method: str, *args: Union[int, str, bool]) -> None:
    """Execute ``method`` in the related node on browser.

    Additional positional arguments are passed as the ``params`` of the
    message. If this node is not in any document tree (namely, this
    node does not have a parent node), the ``method`` is not executed.
    """
    if self.connected:
        self.ws_send(dict(method=method, params=args))

def nics_skip(name, nics, ipv6):
    '''
    Alias for :mod:`csf.nics_skipped <salt.states.csf.nics_skipped>`
    '''
    return nics_skipped(name, nics=nics, ipv6=ipv6)

def nvmlUnitSetLedState(unit, color):
    r"""
    /**
     * Set the LED state for the unit. The LED can be either green (0) or amber (1).
     *
     * For S-class products.
     * Requires root/admin permissions.
     *
     * This operation takes effect immediately.
     *
     * <b>Current S-Class products don't provide unique LEDs for each unit. As such, both front
     * and back LEDs will be toggled in unison regardless of which unit is specified with this command.</b>
     *
     * See \ref nvmlLedColor_t for available colors.
     *
     * @param unit The identifier of the target unit
     * @param color The target LED color
     *
     * @return
     * - \ref NVML_SUCCESS if the LED color has been set
     * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized
     * - \ref NVML_ERROR_INVALID_ARGUMENT if \a unit or \a color is invalid
     * - \ref NVML_ERROR_NOT_SUPPORTED if this is not an S-class product
     * - \ref NVML_ERROR_NO_PERMISSION if the user doesn't have permission to perform this operation
     * - \ref NVML_ERROR_UNKNOWN on any unexpected error
     *
     * @see nvmlUnitGetLedState()
     */
    nvmlReturn_t DECLDIR nvmlUnitSetLedState
    """
    fn = _nvmlGetFunctionPointer("nvmlUnitSetLedState")
    ret = fn(unit, _nvmlLedColor_t(color))
    _nvmlCheckReturn(ret)
    return None

r""" /** * Set the LED state for the unit. The LED can be either green (0) or amber (1). * * For S-class products. * Requires root/admin permissions. * * This operation takes effect immediately. * * * <b>Current S-Class products don't provide unique LEDs for each unit. As such, both front * and back LEDs will be toggled in unison regardless of which unit is specified with this command.</b> * * See \ref nvmlLedColor_t for available colors. * * @param unit The identifier of the target unit * @param color The target LED color * * @return * - \ref NVML_SUCCESS if the LED color has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a unit or \a color is invalid * - \ref NVML_ERROR_NOT_SUPPORTED if this is not an S-class product * - \ref NVML_ERROR_NO_PERMISSION if the user doesn't have permission to perform this operation * - \ref NVML_ERROR_UNKNOWN on any unexpected error * * @see nvmlUnitGetLedState() */ nvmlReturn_t DECLDIR nvmlUnitSetLedState
Below is the the instruction that describes the task: ### Input: r""" /** * Set the LED state for the unit. The LED can be either green (0) or amber (1). * * For S-class products. * Requires root/admin permissions. * * This operation takes effect immediately. * * * <b>Current S-Class products don't provide unique LEDs for each unit. As such, both front * and back LEDs will be toggled in unison regardless of which unit is specified with this command.</b> * * See \ref nvmlLedColor_t for available colors. * * @param unit The identifier of the target unit * @param color The target LED color * * @return * - \ref NVML_SUCCESS if the LED color has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a unit or \a color is invalid * - \ref NVML_ERROR_NOT_SUPPORTED if this is not an S-class product * - \ref NVML_ERROR_NO_PERMISSION if the user doesn't have permission to perform this operation * - \ref NVML_ERROR_UNKNOWN on any unexpected error * * @see nvmlUnitGetLedState() */ nvmlReturn_t DECLDIR nvmlUnitSetLedState ### Response: def nvmlUnitSetLedState(unit, color): r""" /** * Set the LED state for the unit. The LED can be either green (0) or amber (1). * * For S-class products. * Requires root/admin permissions. * * This operation takes effect immediately. * * * <b>Current S-Class products don't provide unique LEDs for each unit. As such, both front * and back LEDs will be toggled in unison regardless of which unit is specified with this command.</b> * * See \ref nvmlLedColor_t for available colors. * * @param unit The identifier of the target unit * @param color The target LED color * * @return * - \ref NVML_SUCCESS if the LED color has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a unit or \a color is invalid * - \ref NVML_ERROR_NOT_SUPPORTED if this is not an S-class product * - \ref NVML_ERROR_NO_PERMISSION if the user doesn't have permission to perform this operation * - \ref NVML_ERROR_UNKNOWN on any unexpected error * * @see nvmlUnitGetLedState() */ nvmlReturn_t DECLDIR nvmlUnitSetLedState """ fn = _nvmlGetFunctionPointer("nvmlUnitSetLedState") ret = fn(unit, _nvmlLedColor_t(color)) _nvmlCheckReturn(ret) return None
def tradeStatus(self, trade_id):
    """Return trade status.

    :param trade_id: Trade id.
    """
    method = 'GET'
    url = 'trade/status'

    if not isinstance(trade_id, (list, tuple)):
        trade_id = (trade_id,)
    trade_id = (str(i) for i in trade_id)
    params = {'tradeIds': ','.join(trade_id)}  # multiple trade_ids not tested

    rc = self.__request__(method, url, params=params)

    return [itemParse(i, full=False) for i in rc['auctionInfo']]

def _simsearch_to_simresult(self, sim_resp: Dict, method: SimAlgorithm) -> SimResult:
    """
    Convert owlsim json to SimResult object

    :param sim_resp: owlsim response from search_by_attribute_set()
    :param method: SimAlgorithm
    :return: SimResult object
    """
    sim_ids = get_nodes_from_ids(sim_resp['query_IRIs'])
    sim_resp['results'] = OwlSim2Api._rank_results(sim_resp['results'], method)

    # get id type map:
    ids = [result['j']['id'] for result in sim_resp['results']]
    id_type_map = get_id_type_map(ids)

    matches = []
    for result in sim_resp['results']:
        matches.append(
            SimMatch(
                id=result['j']['id'],
                label=result['j']['label'],
                rank=result['rank'],
                score=result[OwlSim2Api.method2key[method]],
                type=id_type_map[result['j']['id']][0],
                taxon=get_taxon(result['j']['id']),
                significance="NaN",
                pairwise_match=OwlSim2Api._make_pairwise_matches(result)
            )
        )

    return SimResult(
        query=SimQuery(
            ids=sim_ids,
            unresolved_ids=sim_resp['unresolved'],
            target_ids=[[]]
        ),
        matches=matches,
        metadata=SimMetadata(
            max_max_ic=self.statistics.max_max_ic
        )
    )

def rule(ctx, rule):
    """ [bookie] Show a specific rule

    :param str rule: Rule id
    """
    rule = Rule(rule, peerplays_instance=ctx.peerplays)
    t = PrettyTable([
        "id",
        "name",
    ])
    t.align = "l"
    t.add_row([
        rule["id"],
        "\n".join(["{}: {}".format(v[0], v[1]) for v in rule["name"]]),
    ])
    click.echo(str(t))
    click.echo(
        "\n".join(["{}: {}".format(v[0], v[1]) for v in rule["description"]])
    )

def package_files(directory):
    """Get list of data files to add to the package."""
    paths = []
    for (path, _, file_names) in walk(directory):
        for filename in file_names:
            paths.append(join('..', path, filename))
    return paths

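The helper assumes 'from os import walk' and 'from os.path import join'. A hypothetical setup.py usage, with 'mypkg/data' as a placeholder directory:

from os import walk
from os.path import join

extra_files = package_files('mypkg/data')
# setup(name='mypkg', packages=['mypkg'],
#       package_data={'mypkg': extra_files})
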
def rsdl_rn(self, AX, Y):
    """Compute primal residual normalisation term.

    Overriding this method is required if methods :meth:`cnst_A`,
    :meth:`cnst_AT`, :meth:`cnst_B`, and :meth:`cnst_c` are not
    overridden.
    """
    if not hasattr(self, '_cnst_nrm_c'):
        self._cnst_nrm_c = np.sqrt(np.linalg.norm(self.cnst_c0())**2 +
                                   np.linalg.norm(self.cnst_c1())**2)
    return max((np.linalg.norm(AX), np.linalg.norm(Y), self._cnst_nrm_c))

def get_stream_records(self, iterator_id):
    """Wraps :func:`boto3.DynamoDBStreams.Client.get_records`.

    :param iterator_id: Iterator id.  Usually
        :data:`Shard.iterator_id <bloop.stream.shard.Shard.iterator_id>`.
    :return: Dict with "Records" list (may be empty) and
        "NextShardIterator" str (may not exist).
    :rtype: dict
    :raises bloop.exceptions.RecordsExpired: The iterator moved beyond
        the Trim Horizon since it was created.
    :raises bloop.exceptions.ShardIteratorExpired: The iterator was
        created more than 15 minutes ago.
    """
    try:
        return self.stream_client.get_records(ShardIterator=iterator_id)
    except botocore.exceptions.ClientError as error:
        if error.response["Error"]["Code"] == "TrimmedDataAccessException":
            raise RecordsExpired from error
        elif error.response["Error"]["Code"] == "ExpiredIteratorException":
            raise ShardIteratorExpired from error
        raise BloopException("Unexpected error while getting records.") from error

def get_profile_configs(profile=None, use_cache=True):
    """ Returns upload configs for profile. """
    if use_cache and profile in _profile_configs_cache:
        return _profile_configs_cache[profile]
    profile_conf = None
    if profile is not None:
        try:
            profile_conf = dju_settings.DJU_IMG_UPLOAD_PROFILES[profile]
        except KeyError:
            if profile != 'default':
                raise ValueError(unicode(ERROR_MESSAGES['unknown_profile']) % {'profile': profile})
    conf = copy.deepcopy(dju_settings.DJU_IMG_UPLOAD_PROFILE_DEFAULT)
    if profile_conf:
        conf.update(copy.deepcopy(profile_conf))
    for v_i in xrange(len(conf['VARIANTS'])):
        v = conf['VARIANTS'][v_i]
        conf['VARIANTS'][v_i] = copy.deepcopy(dju_settings.DJU_IMG_UPLOAD_PROFILE_VARIANT_DEFAULT)
        conf['VARIANTS'][v_i].update(v)
    if use_cache:
        _profile_configs_cache[profile] = conf
    return conf

def service_timeouts(self):
    """
    run callbacks on all expired timers

    Called from the event thread

    :return: next end time, or None
    """
    queue = self._queue
    if self._new_timers:
        new_timers = self._new_timers
        while new_timers:
            heappush(queue, new_timers.pop())
    if queue:
        now = time.time()
        while queue:
            try:
                timer = queue[0][1]
                if timer.finish(now):
                    heappop(queue)
                else:
                    return timer.end
            except Exception:
                log.exception("Exception while servicing timeout callback: ")

def users_profile_get(self, **kwargs) -> SlackResponse:
    """Retrieves a user's profile information."""
    self._validate_xoxp_token()
    return self.api_call("users.profile.get", http_verb="GET", params=kwargs)

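A sketch of calling this method through slackclient v2's WebClient; the environment variable and user id are placeholders.

import os
from slack import WebClient

client = WebClient(token=os.environ['SLACK_API_TOKEN'])
response = client.users_profile_get(user='W012A3CDE')
print(response['profile'])
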
def p_delays_intnumber(self, p):
    'delays : DELAY intnumber'
    p[0] = DelayStatement(
        IntConst(p[2], lineno=p.lineno(1)), lineno=p.lineno(1))
    p.set_lineno(0, p.lineno(1))

def format_sizeof(num, suffix='bytes'):
    '''Readable size format, courtesy of Sridhar Ratnakumar'''
    for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
        if abs(num) < 1000.0:
            return "%3.1f%s%s" % (num, unit, suffix)
        num /= 1000.0
    return "%.1f%s%s" % (num, 'Y', suffix)

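Note the divisor is 1000, so this is decimal (SI) scaling rather than binary. A quick check, assuming format_sizeof is in scope:

print(format_sizeof(512))      # 512.0bytes
print(format_sizeof(1500000))  # 1.5Mbytes
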
def find_indentation(node):
    """Find the indentation of *node*."""
    while node is not None:
        if node.type == syms.suite and len(node.children) > 2:
            indent = node.children[1]
            if indent.type == token.INDENT:
                return indent.value
        node = node.parent
    return u""

def _repo_url_to_path(self, repo):
    """Convert a `repo` url to a file path for local storage."""
    repo = repo.replace('http://', '')
    repo = repo.replace('https://', '')
    repo = repo.replace('/', '_')
    return os.sep.join([self._data_directory, repo])

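A standalone sketch of the same mapping, with the data directory passed in explicitly to show the resulting path shape:

import os

def repo_url_to_path(data_directory, repo):
    repo = repo.replace('http://', '').replace('https://', '').replace('/', '_')
    return os.sep.join([data_directory, repo])

print(repo_url_to_path('/tmp/cache', 'https://github.com/user/repo'))
# /tmp/cache/github.com_user_repo  (on POSIX)
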
def set_input_fields(self, input_fields):
    """Given a scalar or ordered list of strings generate JSONPaths
    that describe how to access the values necessary for the Extractor
    """
    if not (isinstance(input_fields, basestring) or
            isinstance(input_fields, types.ListType)):
        raise ValueError("input_fields must be a string or a list")
    self.input_fields = input_fields
    self.generate_json_paths()
    return self

def merge(self, dict_=None):
    """not in use so far, see check()"""
    if dict_ is None and hasattr(self, '__dict__'):
        dict_ = self.__dict__  # doesn't work anymore as we have _lock attribute
    if dict_ is None:
        return self
    self.update(dict_)
    return self

def findall(self, obj, forced_type=None,
            cls=anyconfig.models.processor.Processor):
    """
    :param obj: a file path, file, file-like object, pathlib.Path
        object or an 'anyconfig.globals.IOInfo' (namedtuple) object
    :param forced_type: Forced processor type to find
    :param cls: A class object to compare with 'ptype'

    :return: A list of instances of processor classes to process 'obj'
    :raises: ValueError, UnknownProcessorTypeError, UnknownFileTypeError
    """
    return [p() for p in findall(obj, self.list(),
                                 forced_type=forced_type, cls=cls)]

def _populate_bookmarks_list(self):
    """Read the sqlite database and populate the bookmarks list.

    If no bookmarks are found, the bookmarks radio button will be
    disabled and the label will be shown indicating that the user
    should add bookmarks in QGIS first.

    Every bookmark is reprojected to the map canvas CRS.
    """
    # Connect to the QGIS sqlite database and check if the table exists.
    # noinspection PyArgumentList
    db_file_path = QgsApplication.qgisUserDatabaseFilePath()
    db = sqlite3.connect(db_file_path)
    cursor = db.cursor()
    cursor.execute(
        'SELECT COUNT(*) '
        'FROM sqlite_master '
        'WHERE type=\'table\' '
        'AND name=\'tbl_bookmarks\';')

    number_of_rows = cursor.fetchone()[0]
    if number_of_rows > 0:
        cursor.execute(
            'SELECT * '
            'FROM tbl_bookmarks;')
        bookmarks = cursor.fetchall()

        canvas_crs = self.canvas.mapSettings().destinationCrs()

        for bookmark in bookmarks:
            name = bookmark[1]
            srid = bookmark[7]
            rectangle = QgsRectangle(
                bookmark[3], bookmark[4], bookmark[5], bookmark[6])

            if srid != canvas_crs.srsid():
                transform = QgsCoordinateTransform(
                    QgsCoordinateReferenceSystem(srid),
                    canvas_crs,
                    QgsProject.instance()
                )
                try:
                    rectangle = transform.transform(rectangle)
                except QgsCsException:
                    rectangle = QgsRectangle()

            if rectangle.isEmpty():
                pass

            self.bookmarks_list.addItem(name, rectangle)

    if self.bookmarks_list.currentIndex() >= 0:
        self.create_bookmarks_label.hide()
    else:
        self.create_bookmarks_label.show()
        self.hazard_exposure_bookmark.setDisabled(True)
        self.bookmarks_list.hide()

def import_obj(cls, i_datasource, import_time=None):
    """Imports the datasource from the object to the database.

    Metrics, columns and the datasource will be overridden if they
    exist. This function can be used to import/export dashboards
    between multiple superset instances. Audit metadata isn't copied
    over.
    """
    def lookup_sqlatable(table):
        return db.session.query(SqlaTable).join(Database).filter(
            SqlaTable.table_name == table.table_name,
            SqlaTable.schema == table.schema,
            Database.id == table.database_id,
        ).first()

    def lookup_database(table):
        return db.session.query(Database).filter_by(
            database_name=table.params_dict['database_name']).one()

    return import_datasource.import_datasource(
        db.session, i_datasource, lookup_database, lookup_sqlatable,
        import_time)

def stringify(*args):
    """
    Joins args to build a string, unless there's one arg and it's a
    function, then acts as a decorator.
    """
    if (len(args) == 1) and callable(args[0]):
        func = args[0]

        @wraps(func)
        def _inner(*args, **kwargs):
            return "".join([str(i) for i in func(*args, **kwargs)])
        return _inner
    else:
        return "".join([str(i) for i in args])

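Both call styles, assuming stringify and functools.wraps are in scope:

print(stringify('a', 1, 'b'))  # 'a1b'

@stringify
def digits(n):
    return range(n)

print(digits(4))  # '0123'
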
def deactivate_program(self, program):
    """
    Called by program, when it is deactivated.
    """
    self.logger.debug("deactivate_program %s", program)
    with self._program_lock:
        self.logger.debug("deactivate_program got through %s", program)
        if program not in self.program_stack:
            import ipdb
            ipdb.set_trace()
        self.program_stack.remove(program)
        if program in self.program_status:
            del self.program_status[program]
        self._update_program_stack()

def taskotron_task(config, message, task=None):
    """ Particular taskotron task

    With this rule, you can limit messages to only those of particular
    `taskotron <https://taskotron.fedoraproject.org/>`_ task.

    You can specify several tasks by separating them with a comma ',',
    i.e.: ``dist.depcheck,dist.rpmlint``.
    """
    # We only operate on taskotron messages, first off.
    if not taskotron_result_new(config, message):
        return False

    if not task:
        return False

    tasks = [item.strip().lower() for item in task.split(',')]
    return message['msg']['task'].get('name').lower() in tasks

def match_examples(self, parse_fn, examples):
    """ Given a parser instance and a dictionary mapping some label with
        some malformed syntax examples, it'll return the label for the
        example that best matches the current error.
    """
    assert self.state is not None, "Not supported for this exception"

    candidate = None
    for label, example in examples.items():
        assert not isinstance(example, STRING_TYPE)

        for malformed in example:
            try:
                parse_fn(malformed)
            except UnexpectedInput as ut:
                if ut.state == self.state:
                    try:
                        if ut.token == self.token:  # Try exact match first
                            return label
                    except AttributeError:
                        pass
                    if not candidate:
                        candidate = label

    return candidate

def history(self, user=None):
    """ Return relevant who-did-what logs from the ticket history """
    for event in self.changelog:
        when, who, what, old, new, ignore = event
        if (when >= self.options.since.date and
                when <= self.options.until.date):
            if user is None or who.startswith(user.login):
                yield who, what, old, new

def CreateDirectedEdges(self, points, gr, layer_width):
    """
    Take each key (ie. point) in the graph and for that point
    create an edge to every point downstream of it, where the weight
    of the edge is the negative Euclidean distance between the points
    """
    for z0, x0, Q0 in points:
        for z1, x1, Q1 in points:
            dz = z1 - z0  # no fabs because we check arrow direction
            if dz > 0.0:  # make sure arrow in right direction
                if dz - layer_width < distance_threshold:  # only adjacents
                    dx = math.fabs(x1 - x0)
                    if dx > 5 * bar_width:
                        continue
                    # Weights are negative in order to use shortest path
                    # algorithms on the graph.
                    weight = -1 * math.hypot(dz, dx)
                    edge = ((z0, x0, Q0), (z1, x1, Q1))
                    gr.add_edge(edge, wt=weight)

    # Ensure that it is already transitively reduced
    assert len(critical.transitive_edges(gr)) == 0

    return gr

def child(self, number):
    """
    :type number: int
    :rtype: ProtocolTreeItem
    """
    if number < self.childCount():
        return self.__childItems[number]
    else:
        return False

def _best_fit_font_size(self, family, max_size, bold, italic, font_file):
    """
    Return the largest integer point size not greater than *max_size*
    that allows all the text in this text frame to fit inside its
    extents when rendered using the font described by *family*, *bold*,
    and *italic*. If *font_file* is specified, it is used to calculate
    the fit, whether or not it matches *family*, *bold*, and *italic*.
    """
    if font_file is None:
        font_file = FontFiles.find(family, bold, italic)
    return TextFitter.best_fit_font_size(
        self.text, self._extents, max_size, font_file
    )

def buildhtmlheader(self): """generate HTML header content""" if self.drilldown_flag: self.add_JSsource('http://code.highcharts.com/modules/drilldown.js') if self.offline: opener = urllib.request.build_opener() opener.addheaders = [('User-Agent', 'Mozilla/5.0')] self.header_css = [ '<style>%s</style>' % opener.open(h).read() for h in self.CSSsource ] self.header_js = [ '<script type="text/javascript">%s</script>' % opener.open(h).read() for h in self.JSsource ] else: self.header_css = [ '<link href="%s" rel="stylesheet" />' % h for h in self.CSSsource ] self.header_js = [ '<script type="text/javascript" src="%s"></script>' % h for h in self.JSsource ] self.htmlheader = '' for css in self.header_css: self.htmlheader += css for js in self.header_js: self.htmlheader += js
generate HTML header content
Below is the the instruction that describes the task: ### Input: generate HTML header content ### Response: def buildhtmlheader(self): """generate HTML header content""" if self.drilldown_flag: self.add_JSsource('http://code.highcharts.com/modules/drilldown.js') if self.offline: opener = urllib.request.build_opener() opener.addheaders = [('User-Agent', 'Mozilla/5.0')] self.header_css = [ '<style>%s</style>' % opener.open(h).read() for h in self.CSSsource ] self.header_js = [ '<script type="text/javascript">%s</script>' % opener.open(h).read() for h in self.JSsource ] else: self.header_css = [ '<link href="%s" rel="stylesheet" />' % h for h in self.CSSsource ] self.header_js = [ '<script type="text/javascript" src="%s"></script>' % h for h in self.JSsource ] self.htmlheader = '' for css in self.header_css: self.htmlheader += css for js in self.header_js: self.htmlheader += js
def write_points(self, points, time_precision=None, database=None, retention_policy=None, tags=None, batch_size=None, protocol='json', consistency=None ): """Write to multiple time series names. :param points: the list of points to be written in the database :type points: list of dictionaries, each dictionary represents a point :type points: (if protocol is 'json') list of dicts, where each dict represents a point. (if protocol is 'line') sequence of line protocol strings. :param time_precision: Either 's', 'm', 'ms' or 'u', defaults to None :type time_precision: str :param database: the database to write the points to. Defaults to the client's current database :type database: str :param tags: a set of key-value pairs associated with each point. Both keys and values must be strings. These are shared tags and will be merged with point-specific tags, defaults to None :type tags: dict :param retention_policy: the retention policy for the points. Defaults to None :type retention_policy: str :param batch_size: value to write the points in batches instead of all at one time. Useful for when doing data dumps from one database to another or when doing a massive write operation, defaults to None :type batch_size: int :param protocol: Protocol for writing data. Either 'line' or 'json'. :type protocol: str :param consistency: Consistency for the points. One of {'any','one','quorum','all'}. :type consistency: str :returns: True, if the operation is successful :rtype: bool .. note:: if no retention policy is specified, the default retention policy for the database is used """ if batch_size and batch_size > 0: for batch in self._batches(points, batch_size): self._write_points(points=batch, time_precision=time_precision, database=database, retention_policy=retention_policy, tags=tags, protocol=protocol, consistency=consistency) return True return self._write_points(points=points, time_precision=time_precision, database=database, retention_policy=retention_policy, tags=tags, protocol=protocol, consistency=consistency)
Write to multiple time series names. :param points: the list of points to be written in the database :type points: list of dictionaries, each dictionary represents a point :type points: (if protocol is 'json') list of dicts, where each dict represents a point. (if protocol is 'line') sequence of line protocol strings. :param time_precision: Either 's', 'm', 'ms' or 'u', defaults to None :type time_precision: str :param database: the database to write the points to. Defaults to the client's current database :type database: str :param tags: a set of key-value pairs associated with each point. Both keys and values must be strings. These are shared tags and will be merged with point-specific tags, defaults to None :type tags: dict :param retention_policy: the retention policy for the points. Defaults to None :type retention_policy: str :param batch_size: value to write the points in batches instead of all at one time. Useful for when doing data dumps from one database to another or when doing a massive write operation, defaults to None :type batch_size: int :param protocol: Protocol for writing data. Either 'line' or 'json'. :type protocol: str :param consistency: Consistency for the points. One of {'any','one','quorum','all'}. :type consistency: str :returns: True, if the operation is successful :rtype: bool .. note:: if no retention policy is specified, the default retention policy for the database is used
Below is the the instruction that describes the task: ### Input: Write to multiple time series names. :param points: the list of points to be written in the database :type points: list of dictionaries, each dictionary represents a point :type points: (if protocol is 'json') list of dicts, where each dict represents a point. (if protocol is 'line') sequence of line protocol strings. :param time_precision: Either 's', 'm', 'ms' or 'u', defaults to None :type time_precision: str :param database: the database to write the points to. Defaults to the client's current database :type database: str :param tags: a set of key-value pairs associated with each point. Both keys and values must be strings. These are shared tags and will be merged with point-specific tags, defaults to None :type tags: dict :param retention_policy: the retention policy for the points. Defaults to None :type retention_policy: str :param batch_size: value to write the points in batches instead of all at one time. Useful for when doing data dumps from one database to another or when doing a massive write operation, defaults to None :type batch_size: int :param protocol: Protocol for writing data. Either 'line' or 'json'. :type protocol: str :param consistency: Consistency for the points. One of {'any','one','quorum','all'}. :type consistency: str :returns: True, if the operation is successful :rtype: bool .. note:: if no retention policy is specified, the default retention policy for the database is used ### Response: def write_points(self, points, time_precision=None, database=None, retention_policy=None, tags=None, batch_size=None, protocol='json', consistency=None ): """Write to multiple time series names. :param points: the list of points to be written in the database :type points: list of dictionaries, each dictionary represents a point :type points: (if protocol is 'json') list of dicts, where each dict represents a point. (if protocol is 'line') sequence of line protocol strings. :param time_precision: Either 's', 'm', 'ms' or 'u', defaults to None :type time_precision: str :param database: the database to write the points to. Defaults to the client's current database :type database: str :param tags: a set of key-value pairs associated with each point. Both keys and values must be strings. These are shared tags and will be merged with point-specific tags, defaults to None :type tags: dict :param retention_policy: the retention policy for the points. Defaults to None :type retention_policy: str :param batch_size: value to write the points in batches instead of all at one time. Useful for when doing data dumps from one database to another or when doing a massive write operation, defaults to None :type batch_size: int :param protocol: Protocol for writing data. Either 'line' or 'json'. :type protocol: str :param consistency: Consistency for the points. One of {'any','one','quorum','all'}. :type consistency: str :returns: True, if the operation is successful :rtype: bool .. note:: if no retention policy is specified, the default retention policy for the database is used """ if batch_size and batch_size > 0: for batch in self._batches(points, batch_size): self._write_points(points=batch, time_precision=time_precision, database=database, retention_policy=retention_policy, tags=tags, protocol=protocol, consistency=consistency) return True return self._write_points(points=points, time_precision=time_precision, database=database, retention_policy=retention_policy, tags=tags, protocol=protocol, consistency=consistency)
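A minimal usage sketch for write_points above, assuming the standard influxdb-python client this method belongs to; the host, database, and measurement values are illustrative placeholders rather than values from the source:

from influxdb import InfluxDBClient  # assumed client class exposing write_points

client = InfluxDBClient(host='localhost', port=8086, database='example_db')

# JSON-protocol points: one dict per point, as the docstring describes.
points = [
    {
        'measurement': 'cpu_load',
        'tags': {'host': 'server01'},
        'fields': {'value': 0.64},
    },
]

# Shared tags are merged into each point's own tags; batch_size splits the
# write into chunks via _batches().
client.write_points(points, time_precision='s',
                    tags={'region': 'us-west'}, batch_size=1000)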
def parse_broken_json(json_text: str) -> dict: """ Parses broken JSON that the standard Python JSON module cannot parse. Ex: {success:true} Keys do not contain quotes and the JSON cannot be parsed using the regular json encoder. YAML happens to be a superset of JSON and can parse json without quotes. """ # Add spacing between Key and Value to prevent parsing error json_text = json_text.replace(":", ": ") json_dict = yaml.load(json_text) return json_dict
Parses broken JSON that the standard Python JSON module cannot parse. Ex: {success:true} Keys do not contain quotes and the JSON cannot be parsed using the regular json encoder. YAML happens to be a superset of JSON and can parse json without quotes.
Below is the the instruction that describes the task: ### Input: Parses broken JSON that the standard Python JSON module cannot parse. Ex: {success:true} Keys do not contain quotes and the JSON cannot be parsed using the regular json encoder. YAML happens to be a superset of JSON and can parse json without quotes. ### Response: def parse_broken_json(json_text: str) -> dict: """ Parses broken JSON that the standard Python JSON module cannot parse. Ex: {success:true} Keys do not contain quotes and the JSON cannot be parsed using the regular json encoder. YAML happens to be a superset of JSON and can parse json without quotes. """ # Add spacing between Key and Value to prevent parsing error json_text = json_text.replace(":", ": ") json_dict = yaml.load(json_text) return json_dict
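A short illustration of parse_broken_json; the expected outputs in the comments follow from YAML's permissive handling of unquoted keys and true/false literals. Note that the blanket replace(":", ": ") would also pad colons inside values such as URLs, so this helper is only safe for simple payloads:

# Hypothetical inputs that json.loads would reject.
print(parse_broken_json('{success:true}'))        # -> {'success': True}
print(parse_broken_json('{count:3, name:demo}'))  # -> {'count': 3, 'name': 'demo'}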
def set_active_state(self, name, value): """Set active state.""" if name not in self.__active_states.keys(): raise ValueError("Can not set unknown state '" + name + "'") if (isinstance(self.__active_states[name], int) and isinstance(value, str)): # we get an update as str but current value is # an int, try to convert self.__active_states[name] = int(value) elif (isinstance(self.__active_states[name], float) and isinstance(value, str)): # we get an update as str but current value is # a float, try to convert self.__active_states[name] = float(value) else: self.__active_states[name] = value
Set active state.
Below is the the instruction that describes the task: ### Input: Set active state. ### Response: def set_active_state(self, name, value): """Set active state.""" if name not in self.__active_states.keys(): raise ValueError("Can not set unknown state '" + name + "'") if (isinstance(self.__active_states[name], int) and isinstance(value, str)): # we get an update as str but current value is # an int, try to convert self.__active_states[name] = int(value) elif (isinstance(self.__active_states[name], float) and isinstance(value, str)): # we get an update as str but current value is # a float, try to convert self.__active_states[name] = float(value) else: self.__active_states[name] = value
def render_template_directory(deck, arguments): """Render a template directory""" output_directory = dir_name_from_title(deck.title) if os.path.exists(output_directory): if sys.stdout.isatty(): if ask( '%s already exists, shall I delete it?' % output_directory, arguments.get('--noinput') ): shutil.rmtree(output_directory) else: shutil.rmtree(output_directory) # copy support files to output directory template_directory_path = ( '%s/templates/%s' % (remarkable.__path__[0], deck.presentation_type) ) shutil.copytree( template_directory_path, output_directory, ) # copy resources if os.path.exists('resources'): log.info('Copying resources') shutil.copytree('resources', '%s/resources' % output_directory) else: log.info('No resources to copy') # render template template_filename = '%s/index.html' % deck.presentation_type html = render_template(template_filename, deck.json) # write index to output directory index_filename = '%s/index.html' % output_directory write_file(index_filename, html) return output_directory
Render a template directory
Below is the the instruction that describes the task: ### Input: Render a template directory ### Response: def render_template_directory(deck, arguments): """Render a template directory""" output_directory = dir_name_from_title(deck.title) if os.path.exists(output_directory): if sys.stdout.isatty(): if ask( '%s already exists, shall I delete it?' % output_directory, arguments.get('--noinput') ): shutil.rmtree(output_directory) else: shutil.rmtree(output_directory) # copy support files to output directory template_directory_path = ( '%s/templates/%s' % (remarkable.__path__[0], deck.presentation_type) ) shutil.copytree( template_directory_path, output_directory, ) # copy resources if os.path.exists('resources'): log.info('Copying resources') shutil.copytree('resources', '%s/resources' % output_directory) else: log.info('No resources to copy') # render template template_filename = '%s/index.html' % deck.presentation_type html = render_template(template_filename, deck.json) # write index to output directory index_filename = '%s/index.html' % output_directory write_file(index_filename, html) return output_directory
def teardown_handles(self): """ If no custom update_handles method is supplied this method is called to tear down any previous handles before replacing them. """ if not isinstance(self.handles.get('artist'), GoogleTiles): self.handles['artist'].remove()
If no custom update_handles method is supplied this method is called to tear down any previous handles before replacing them.
Below is the the instruction that describes the task: ### Input: If no custom update_handles method is supplied this method is called to tear down any previous handles before replacing them. ### Response: def teardown_handles(self): """ If no custom update_handles method is supplied this method is called to tear down any previous handles before replacing them. """ if not isinstance(self.handles.get('artist'), GoogleTiles): self.handles['artist'].remove()
def status(name, sig=None): ''' Return ``True`` if service is running name the service's name sig signature to identify with ps CLI Example: .. code-block:: bash salt '*' runit.status <service name> ''' if sig: # usual way to do by others (debian_service, netbsdservice). # XXX probably does not work here (check 'runsv sshd' instead of 'sshd' ?) return bool(__salt__['status.pid'](sig)) svc_path = _service_path(name) if not os.path.exists(svc_path): # service does not exist return False # sv return code is not relevant to get a service status. # Check its output instead. cmd = 'sv status {0}'.format(svc_path) try: out = __salt__['cmd.run_stdout'](cmd) return out.startswith('run: ') except Exception: # sv (as a command) returned an error return False
Return ``True`` if service is running name the service's name sig signature to identify with ps CLI Example: .. code-block:: bash salt '*' runit.status <service name>
Below is the the instruction that describes the task: ### Input: Return ``True`` if service is running name the service's name sig signature to identify with ps CLI Example: .. code-block:: bash salt '*' runit.status <service name> ### Response: def status(name, sig=None): ''' Return ``True`` if service is running name the service's name sig signature to identify with ps CLI Example: .. code-block:: bash salt '*' runit.status <service name> ''' if sig: # usual way to do by others (debian_service, netbsdservice). # XXX probably does not work here (check 'runsv sshd' instead of 'sshd' ?) return bool(__salt__['status.pid'](sig)) svc_path = _service_path(name) if not os.path.exists(svc_path): # service does not exist return False # sv return code is not relevant to get a service status. # Check its output instead. cmd = 'sv status {0}'.format(svc_path) try: out = __salt__['cmd.run_stdout'](cmd) return out.startswith('run: ') except Exception: # sv (as a command) returned an error return False
def trigger_create(self, data, **kwargs): "https://developer.zendesk.com/rest_api/docs/chat/triggers#create-trigger" api_path = "/api/v2/triggers" return self.call(api_path, method="POST", data=data, **kwargs)
https://developer.zendesk.com/rest_api/docs/chat/triggers#create-trigger
Below is the the instruction that describes the task: ### Input: https://developer.zendesk.com/rest_api/docs/chat/triggers#create-trigger ### Response: def trigger_create(self, data, **kwargs): "https://developer.zendesk.com/rest_api/docs/chat/triggers#create-trigger" api_path = "/api/v2/triggers" return self.call(api_path, method="POST", data=data, **kwargs)
def FlowProportions( dem, method = None, exponent = None ): """Calculates flow proportions. A variety of methods are available. Args: dem (rdarray): An elevation model method (str): Flow accumulation method to use. (See below.) exponent (float): Some methods require an exponent; refer to the relevant publications for details. =================== ============================== =========================== Method Note Reference =================== ============================== =========================== Tarboton Alias for Dinf. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Dinf Alias for Tarboton. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Quinn Holmgren with exponent=1. `Quinn et al. (1991) doi: 10.1002/hyp.3360050106 <http://dx.doi.org/10.1002/hyp.3360050106>`_ Holmgren(E) Generalization of Quinn. `Holmgren (1994) doi: 10.1002/hyp.3360080405 <http://dx.doi.org/10.1002/hyp.3360080405>`_ Freeman(E) TODO `Freeman (1991) doi: 10.1016/0098-3004(91)90048-I <http://dx.doi.org/10.1016/0098-3004(91)90048-I>`_ FairfieldLeymarieD8 Alias for Rho8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ FairfieldLeymarieD4 Alias for Rho4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho8 Alias for FairfieldLeymarieD8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho4 Alias for FairfieldLeymarieD4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ OCallaghanD8 Alias for D8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ OCallaghanD4 Alias for D4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D8 Alias for OCallaghanD8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D4 Alias for OCallaghanD4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ =================== ============================== =========================== **Methods marked (E) require the exponent argument.** Returns: A flow proportion according to the desired method. """ if type(dem) is not rdarray: raise Exception("A richdem.rdarray or numpy.ndarray is required!") fprop_methods = { "Tarboton": _richdem.FM_Tarboton, "Dinf": _richdem.FM_Tarboton, "Quinn": _richdem.FM_Quinn, "FairfieldLeymarieD8": _richdem.FM_FairfieldLeymarieD8, "FairfieldLeymarieD4": _richdem.FM_FairfieldLeymarieD4, "Rho8": _richdem.FM_Rho8, "Rho4": _richdem.FM_Rho4, "OCallaghanD8": _richdem.FM_OCallaghanD8, "OCallaghanD4": _richdem.FM_OCallaghanD4, "D8": _richdem.FM_D8, "D4": _richdem.FM_D4 } fprop_methods_exponent = { "Freeman": _richdem.FM_Freeman, "Holmgren": _richdem.FM_Holmgren } fprops = rd3array(np.zeros(shape=dem.shape+(9,), dtype='float32'), meta_obj=dem, no_data=-2) fpropsw = fprops.wrap() _AddAnalysis(fprops, "FlowProportions(dem, method={method}, exponent={exponent})".format( method = method, exponent = exponent, )) if method in fprop_methods: fprop_methods[method](dem.wrap(),fpropsw) elif method in fprop_methods_exponent: if exponent is None: raise Exception('FlowProportions method "'+method+'" requires an exponent!') fprop_methods_exponent[method](dem.wrap(),fpropsw,exponent) else: raise Exception("Invalid FlowProportions method. Valid methods are: " + ', '.join(list(fprop_methods.keys()) + list(fprop_methods_exponent.keys()) )) fprops.copyFromWrapped(fpropsw) return fprops
Calculates flow proportions. A variety of methods are available. Args: dem (rdarray): An elevation model method (str): Flow accumulation method to use. (See below.) exponent (float): Some methods require an exponent; refer to the relevant publications for details. =================== ============================== =========================== Method Note Reference =================== ============================== =========================== Tarboton Alias for Dinf. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Dinf Alias for Tarboton. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Quinn Holmgren with exponent=1. `Quinn et al. (1991) doi: 10.1002/hyp.3360050106 <http://dx.doi.org/10.1002/hyp.3360050106>`_ Holmgren(E) Generalization of Quinn. `Holmgren (1994) doi: 10.1002/hyp.3360080405 <http://dx.doi.org/10.1002/hyp.3360080405>`_ Freeman(E) TODO `Freeman (1991) doi: 10.1016/0098-3004(91)90048-I <http://dx.doi.org/10.1016/0098-3004(91)90048-I>`_ FairfieldLeymarieD8 Alias for Rho8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ FairfieldLeymarieD4 Alias for Rho4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho8 Alias for FairfieldLeymarieD8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho4 Alias for FairfieldLeymarieD4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ OCallaghanD8 Alias for D8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ OCallaghanD4 Alias for D4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D8 Alias for OCallaghanD8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D4 Alias for OCallaghanD4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ =================== ============================== =========================== **Methods marked (E) require the exponent argument.** Returns: A flow proportion according to the desired method.
Below is the the instruction that describes the task: ### Input: Calculates flow proportions. A variety of methods are available. Args: dem (rdarray): An elevation model method (str): Flow accumulation method to use. (See below.) exponent (float): Some methods require an exponent; refer to the relevant publications for details. =================== ============================== =========================== Method Note Reference =================== ============================== =========================== Tarboton Alias for Dinf. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Dinf Alias for Tarboton. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Quinn Holmgren with exponent=1. `Quinn et al. (1991) doi: 10.1002/hyp.3360050106 <http://dx.doi.org/10.1002/hyp.3360050106>`_ Holmgren(E) Generalization of Quinn. `Holmgren (1994) doi: 10.1002/hyp.3360080405 <http://dx.doi.org/10.1002/hyp.3360080405>`_ Freeman(E) TODO `Freeman (1991) doi: 10.1016/0098-3004(91)90048-I <http://dx.doi.org/10.1016/0098-3004(91)90048-I>`_ FairfieldLeymarieD8 Alias for Rho8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ FairfieldLeymarieD4 Alias for Rho4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho8 Alias for FairfieldLeymarieD8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho4 Alias for FairfieldLeymarieD4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ OCallaghanD8 Alias for D8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ OCallaghanD4 Alias for D4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D8 Alias for OCallaghanD8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D4 Alias for OCallaghanD4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ =================== ============================== =========================== **Methods marked (E) require the exponent argument.** Returns: A flow proportion according to the desired method. ### Response: def FlowProportions( dem, method = None, exponent = None ): """Calculates flow proportions. A variety of methods are available. Args: dem (rdarray): An elevation model method (str): Flow accumulation method to use. (See below.) exponent (float): Some methods require an exponent; refer to the relevant publications for details. =================== ============================== =========================== Method Note Reference =================== ============================== =========================== Tarboton Alias for Dinf. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Dinf Alias for Tarboton. `Tarboton (1997) doi: 10.1029/96WR03137 <http://dx.doi.org/10.1029/96WR03137>`_ Quinn Holmgren with exponent=1. `Quinn et al. (1991) doi: 10.1002/hyp.3360050106 <http://dx.doi.org/10.1002/hyp.3360050106>`_ Holmgren(E) Generalization of Quinn. `Holmgren (1994) doi: 10.1002/hyp.3360080405 <http://dx.doi.org/10.1002/hyp.3360080405>`_ Freeman(E) TODO `Freeman (1991) doi: 10.1016/0098-3004(91)90048-I <http://dx.doi.org/10.1016/0098-3004(91)90048-I>`_ FairfieldLeymarieD8 Alias for Rho8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ FairfieldLeymarieD4 Alias for Rho4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho8 Alias for FairfieldLeymarieD8. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ Rho4 Alias for FairfieldLeymarieD4. `Fairfield and Leymarie (1991) doi: 10.1029/90WR02658 <http://dx.doi.org/10.1029/90WR02658>`_ OCallaghanD8 Alias for D8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ OCallaghanD4 Alias for D4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D8 Alias for OCallaghanD8. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ D4 Alias for OCallaghanD4. `O'Callaghan and Mark (1984) doi: 10.1016/S0734-189X(84)80011-0 <http://dx.doi.org/10.1016/S0734-189X(84)80011-0>`_ =================== ============================== =========================== **Methods marked (E) require the exponent argument.** Returns: A flow proportion according to the desired method. """ if type(dem) is not rdarray: raise Exception("A richdem.rdarray or numpy.ndarray is required!") fprop_methods = { "Tarboton": _richdem.FM_Tarboton, "Dinf": _richdem.FM_Tarboton, "Quinn": _richdem.FM_Quinn, "FairfieldLeymarieD8": _richdem.FM_FairfieldLeymarieD8, "FairfieldLeymarieD4": _richdem.FM_FairfieldLeymarieD4, "Rho8": _richdem.FM_Rho8, "Rho4": _richdem.FM_Rho4, "OCallaghanD8": _richdem.FM_OCallaghanD8, "OCallaghanD4": _richdem.FM_OCallaghanD4, "D8": _richdem.FM_D8, "D4": _richdem.FM_D4 } fprop_methods_exponent = { "Freeman": _richdem.FM_Freeman, "Holmgren": _richdem.FM_Holmgren } fprops = rd3array(np.zeros(shape=dem.shape+(9,), dtype='float32'), meta_obj=dem, no_data=-2) fpropsw = fprops.wrap() _AddAnalysis(fprops, "FlowProportions(dem, method={method}, exponent={exponent})".format( method = method, exponent = exponent, )) if method in fprop_methods: fprop_methods[method](dem.wrap(),fpropsw) elif method in fprop_methods_exponent: if exponent is None: raise Exception('FlowProportions method "'+method+'" requires an exponent!') fprop_methods_exponent[method](dem.wrap(),fpropsw,exponent) else: raise Exception("Invalid FlowProportions method. Valid methods are: " + ', '.join(list(fprop_methods.keys()) + list(fprop_methods_exponent.keys()) )) fprops.copyFromWrapped(fpropsw) return fprops
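A hedged usage sketch for FlowProportions, assuming the published richdem package where this function appears to live; the input filename is a placeholder:

import richdem as rd  # assumed import path for the function above

dem = rd.LoadGDAL('dem.tif')  # hypothetical DEM raster
# D8 sends all flow to the steepest downslope neighbour; no exponent needed.
props_d8 = rd.FlowProportions(dem, method='D8')
# Holmgren is marked (E) in the table above, so it requires the exponent.
props_hol = rd.FlowProportions(dem, method='Holmgren', exponent=4.0)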
def compile_bytecode(code: list) -> bytes: """ Compiles Pyte objects into a bytecode list. :param code: A list of objects to compile. :return: The computed bytecode. """ bc = b"" for i, op in enumerate(code): try: # Get the bytecode. if isinstance(op, _PyteOp) or isinstance(op, _PyteAugmentedComparator): bc_op = op.to_bytes(bc) elif isinstance(op, int): bc_op = op.to_bytes(1, byteorder="little") elif isinstance(op, bytes): bc_op = op else: raise CompileError("Could not compile code of type {}".format(type(op))) bc += bc_op except Exception as e: print("Fatal compilation error on operator {i} ({op}).".format(i=i, op=op)) raise e return bc
Compiles Pyte objects into a bytecode list. :param code: A list of objects to compile. :return: The computed bytecode.
Below is the the instruction that describes the task: ### Input: Compiles Pyte objects into a bytecode list. :param code: A list of objects to compile. :return: The computed bytecode. ### Response: def compile_bytecode(code: list) -> bytes: """ Compiles Pyte objects into a bytecode list. :param code: A list of objects to compile. :return: The computed bytecode. """ bc = b"" for i, op in enumerate(code): try: # Get the bytecode. if isinstance(op, _PyteOp) or isinstance(op, _PyteAugmentedComparator): bc_op = op.to_bytes(bc) elif isinstance(op, int): bc_op = op.to_bytes(1, byteorder="little") elif isinstance(op, bytes): bc_op = op else: raise CompileError("Could not compile code of type {}".format(type(op))) bc += bc_op except Exception as e: print("Fatal compilation error on operator {i} ({op}).".format(i=i, op=op)) raise e return bc
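A small illustration of the int and bytes branches of compile_bytecode (the _PyteOp branch needs Pyte's own operator objects, so it is omitted); the opcode names in the comment are only illustrative:

# ints become single little-endian bytes; bytes objects pass through verbatim.
bc = compile_bytecode([100, b'\x00\x00', 83])  # e.g. LOAD_CONST 0 / RETURN_VALUE
assert bc == b'd\x00\x00S'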
def AddAnalogShortIdRecordNoStatus(site_service, tag, time_value, value): """ This function will add an analog value to the specified eDNA service and tag, without an associated point status. :param site_service: The site.service where data will be pushed :param tag: The eDNA tag to push data. Tag only (e.g. ADE1CA01) :param time_value: The time of the point, which MUST be in UTC Epoch format. For example, "1483926416" not "2016/01/01 01:01:01". :param value: The value associated with the above time. :return: 0, if the data push is successful """ # Define all required variables in the correct ctypes format szService = c_char_p(site_service.encode('utf-8')) szPointId = c_char_p(tag.encode('utf-8')) tTime = c_long(int(time_value)) dValue = c_double(value) # Try to push the data. Function will return 0 if successful. nRet = dnaserv_dll.DnaAddAnalogShortIdRecordNoStatus(szService, szPointId, tTime, dValue) return nRet
This function will add an analog value to the specified eDNA service and tag, without an associated point status. :param site_service: The site.service where data will be pushed :param tag: The eDNA tag to push data. Tag only (e.g. ADE1CA01) :param time_value: The time of the point, which MUST be in UTC Epoch format. For example, "1483926416" not "2016/01/01 01:01:01". :param value: The value associated with the above time. :return: 0, if the data push is successful
Below is the the instruction that describes the task: ### Input: This function will add an analog value to the specified eDNA service and tag, without an associated point status. :param site_service: The site.service where data will be pushed :param tag: The eDNA tag to push data. Tag only (e.g. ADE1CA01) :param time_value: The time of the point, which MUST be in UTC Epoch format. For example, "1483926416" not "2016/01/01 01:01:01". :param value: The value associated with the above time. :return: 0, if the data push is successful ### Response: def AddAnalogShortIdRecordNoStatus(site_service, tag, time_value, value): """ This function will add an analog value to the specified eDNA service and tag, without an associated point status. :param site_service: The site.service where data will be pushed :param tag: The eDNA tag to push data. Tag only (e.g. ADE1CA01) :param time_value: The time of the point, which MUST be in UTC Epoch format. For example, "1483926416" not "2016/01/01 01:01:01". :param value: The value associated with the above time. :return: 0, if the data push is successful """ # Define all required variables in the correct ctypes format szService = c_char_p(site_service.encode('utf-8')) szPointId = c_char_p(tag.encode('utf-8')) tTime = c_long(int(time_value)) dValue = c_double(value) # Try to push the data. Function will return 0 if successful. nRet = dnaserv_dll.DnaAddAnalogShortIdRecordNoStatus(szService, szPointId, tTime, dValue) return nRet
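A hedged call sketch for the function above; the site.service and tag values are placeholders, and the epoch value is the docstring's own example timestamp. The call assumes dnaserv_dll has already been loaded by the surrounding module:

# Push value 42.5 for tag ADE1CA01 at UTC epoch 1483926416.
ret = AddAnalogShortIdRecordNoStatus('SITE.SERVICE', 'ADE1CA01', 1483926416, 42.5)
if ret != 0:
    print('eDNA push failed with return code', ret)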
def index(): """List linked accounts.""" oauth = current_app.extensions['oauthlib.client'] services = [] service_map = {} i = 0 for appid, conf in six.iteritems( current_app.config['OAUTHCLIENT_REMOTE_APPS']): if not conf.get('hide', False): services.append(dict( appid=appid, title=conf['title'], icon=conf.get('icon', None), description=conf.get('description', None), account=None )) service_map[oauth.remote_apps[appid].consumer_key] = i i += 1 # Fetch already linked accounts accounts = RemoteAccount.query.filter_by( user_id=current_user.get_id() ).all() for a in accounts: if a.client_id in service_map: services[service_map[a.client_id]]['account'] = a # Sort according to title services.sort(key=itemgetter('title')) return render_template( 'invenio_oauthclient/settings/index.html', services=services )
List linked accounts.
Below is the the instruction that describes the task: ### Input: List linked accounts. ### Response: def index(): """List linked accounts.""" oauth = current_app.extensions['oauthlib.client'] services = [] service_map = {} i = 0 for appid, conf in six.iteritems( current_app.config['OAUTHCLIENT_REMOTE_APPS']): if not conf.get('hide', False): services.append(dict( appid=appid, title=conf['title'], icon=conf.get('icon', None), description=conf.get('description', None), account=None )) service_map[oauth.remote_apps[appid].consumer_key] = i i += 1 # Fetch already linked accounts accounts = RemoteAccount.query.filter_by( user_id=current_user.get_id() ).all() for a in accounts: if a.client_id in service_map: services[service_map[a.client_id]]['account'] = a # Sort according to title services.sort(key=itemgetter('title')) return render_template( 'invenio_oauthclient/settings/index.html', services=services )
def update_payment_card_by_id(cls, payment_card_id, payment_card, **kwargs): """Update PaymentCard Update attributes of PaymentCard This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.update_payment_card_by_id(payment_card_id, payment_card, async=True) >>> result = thread.get() :param async bool :param str payment_card_id: ID of paymentCard to update. (required) :param PaymentCard payment_card: Attributes of paymentCard to update. (required) :return: PaymentCard If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async'): return cls._update_payment_card_by_id_with_http_info(payment_card_id, payment_card, **kwargs) else: (data) = cls._update_payment_card_by_id_with_http_info(payment_card_id, payment_card, **kwargs) return data
Update PaymentCard Update attributes of PaymentCard This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.update_payment_card_by_id(payment_card_id, payment_card, async=True) >>> result = thread.get() :param async bool :param str payment_card_id: ID of paymentCard to update. (required) :param PaymentCard payment_card: Attributes of paymentCard to update. (required) :return: PaymentCard If the method is called asynchronously, returns the request thread.
Below is the the instruction that describes the task: ### Input: Update PaymentCard Update attributes of PaymentCard This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.update_payment_card_by_id(payment_card_id, payment_card, async=True) >>> result = thread.get() :param async bool :param str payment_card_id: ID of paymentCard to update. (required) :param PaymentCard payment_card: Attributes of paymentCard to update. (required) :return: PaymentCard If the method is called asynchronously, returns the request thread. ### Response: def update_payment_card_by_id(cls, payment_card_id, payment_card, **kwargs): """Update PaymentCard Update attributes of PaymentCard This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.update_payment_card_by_id(payment_card_id, payment_card, async=True) >>> result = thread.get() :param async bool :param str payment_card_id: ID of paymentCard to update. (required) :param PaymentCard payment_card: Attributes of paymentCard to update. (required) :return: PaymentCard If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async'): return cls._update_payment_card_by_id_with_http_info(payment_card_id, payment_card, **kwargs) else: (data) = cls._update_payment_card_by_id_with_http_info(payment_card_id, payment_card, **kwargs) return data
def uninstall(self): ''' Uninstall the module finder. If not installed, this will do nothing. After uninstallation, none of the newly loaded modules will be decorated (that is, everything will be back to normal). ''' if self.installed: sys.meta_path.remove(self) # Reload all decorated items import_list = [] for name in self.__loaded_modules: del sys.modules[name] import_list.append(name) for name in import_list: __import__(name) self.__reset()
Uninstall the module finder. If not installed, this will do nothing. After uninstallation, none of the newly loaded modules will be decorated (that is, everything will be back to normal).
Below is the the instruction that describes the task: ### Input: Uninstall the module finder. If not installed, this will do nothing. After uninstallation, none of the newly loaded modules will be decorated (that is, everything will be back to normal). ### Response: def uninstall(self): ''' Uninstall the module finder. If not installed, this will do nothing. After uninstallation, none of the newly loaded modules will be decorated (that is, everything will be back to normal). ''' if self.installed: sys.meta_path.remove(self) # Reload all decorated items import_list = [] for name in self.__loaded_modules: del sys.modules[name] import_list.append(name) for name in import_list: __import__(name) self.__reset()
def Deserialize(self, reader): """ Deserialize full object. Args: reader (neocore.IO.BinaryReader): """ super(AssetState, self).Deserialize(reader) self.AssetId = reader.ReadUInt256() self.AssetType = reader.ReadByte() self.Name = reader.ReadVarString() position = reader.stream.tell() try: self.Amount = reader.ReadFixed8() except Exception as e: reader.stream.seek(position) self.Amount = reader.ReadFixed8() self.Available = reader.ReadFixed8() self.Precision = reader.ReadByte() # fee mode reader.ReadByte() self.Fee = reader.ReadFixed8() self.FeeAddress = reader.ReadUInt160() self.Owner = ECDSA.Deserialize_Secp256r1(reader) self.Admin = reader.ReadUInt160() self.Issuer = reader.ReadUInt160() self.Expiration = reader.ReadUInt32() self.IsFrozen = reader.ReadBool()
Deserialize full object. Args: reader (neocore.IO.BinaryReader):
Below is the the instruction that describes the task: ### Input: Deserialize full object. Args: reader (neocore.IO.BinaryReader): ### Response: def Deserialize(self, reader): """ Deserialize full object. Args: reader (neocore.IO.BinaryReader): """ super(AssetState, self).Deserialize(reader) self.AssetId = reader.ReadUInt256() self.AssetType = reader.ReadByte() self.Name = reader.ReadVarString() position = reader.stream.tell() try: self.Amount = reader.ReadFixed8() except Exception as e: reader.stream.seek(position) self.Amount = reader.ReadFixed8() self.Available = reader.ReadFixed8() self.Precision = reader.ReadByte() # fee mode reader.ReadByte() self.Fee = reader.ReadFixed8() self.FeeAddress = reader.ReadUInt160() self.Owner = ECDSA.Deserialize_Secp256r1(reader) self.Admin = reader.ReadUInt160() self.Issuer = reader.ReadUInt160() self.Expiration = reader.ReadUInt32() self.IsFrozen = reader.ReadBool()
def predict_compound_pairs_iterated( reactions, formulas, prior=(1, 43), max_iterations=None, element_weight=element_weight): """Predict reaction pairs using iterated method. Returns a tuple containing a dictionary of predictions keyed by the reaction IDs, and the final number of iterations. Each reaction prediction entry contains a tuple with a dictionary of transfers and a dictionary of unbalanced compounds. The dictionary of unbalanced compounds is empty only if the reaction is balanced. Args: reactions: Dictionary or pair-iterable of (id, equation) pairs. IDs must be any hashable reaction identifier (e.g. string) and equation must be :class:`psamm.reaction.Reaction` objects. formulas: Dictionary mapping compound IDs to :class:`psamm.formula.Formula`. Formulas must be flattened. prior: Tuple of (alpha, beta) parameters for the MAP inference. If not provided, the default parameters will be used: (1, 43). max_iterations: Maximum iterations to run before stopping. If the stopping condition is reached before this number of iterations, the procedure also stops. If None, the procedure only stops when the stopping condition is reached. element_weight: A function returning a weight value for the given :class:`psamm.formula.Atom` or :class:`psamm.formula.Radical`. If not provided, the default weight will be used (H=0, C=1, *=0.82) """ prior_alpha, prior_beta = prior reactions = dict(reactions) pair_reactions = {} possible_pairs = Counter() for reaction_id, equation in iteritems(reactions): for (c1, _), (c2, _) in product(equation.left, equation.right): spair = tuple(sorted([c1.name, c2.name])) possible_pairs[spair] += 1 pair_reactions.setdefault(spair, set()).add(reaction_id) next_reactions = set(reactions) pairs_predicted = None prediction = {} weights = {} iteration = 0 while len(next_reactions) > 0: iteration += 1 if max_iterations is not None and iteration > max_iterations: break logger.info('Iteration {}: {} reactions...'.format( iteration, len(next_reactions))) for reaction_id in next_reactions: result = predict_compound_pairs( reactions[reaction_id], formulas, weights, element_weight) if result is None: continue transfer, balance = result rpairs = {} for ((c1, _), (c2, _)), form in iteritems(transfer): rpairs.setdefault((c1, c2), []).append(form) prediction[reaction_id] = rpairs, balance pairs_predicted = Counter() for reaction_id, (rpairs, _) in iteritems(prediction): for c1, c2 in rpairs: spair = tuple(sorted([c1.name, c2.name])) pairs_predicted[spair] += 1 next_reactions = set() for spair, total in sorted(iteritems(possible_pairs)): pred = pairs_predicted[spair] # The weight is set to the maximum a posteriori (MAP) estimate # of the primary pair probability distribution. posterior_alpha = prior_alpha + pred posterior_beta = prior_beta + total - pred pair_weight = ((posterior_alpha - 1) / (posterior_alpha + posterior_beta - 2)) if (spair not in weights or abs(pair_weight - weights[spair]) > 1e-5): next_reactions.update(pair_reactions[spair]) c1, c2 = spair weights[c1, c2] = pair_weight weights[c2, c1] = pair_weight return prediction, iteration
Predict reaction pairs using iterated method. Returns a tuple containing a dictionary of predictions keyed by the reaction IDs, and the final number of iterations. Each reaction prediction entry contains a tuple with a dictionary of transfers and a dictionary of unbalanced compounds. The dictionary of unbalanced compounds is empty only if the reaction is balanced. Args: reactions: Dictionary or pair-iterable of (id, equation) pairs. IDs must be any hashable reaction identifier (e.g. string) and equation must be :class:`psamm.reaction.Reaction` objects. formulas: Dictionary mapping compound IDs to :class:`psamm.formula.Formula`. Formulas must be flattened. prior: Tuple of (alpha, beta) parameters for the MAP inference. If not provided, the default parameters will be used: (1, 43). max_iterations: Maximum iterations to run before stopping. If the stopping condition is reached before this number of iterations, the procedure also stops. If None, the procedure only stops when the stopping condition is reached. element_weight: A function returning a weight value for the given :class:`psamm.formula.Atom` or :class:`psamm.formula.Radical`. If not provided, the default weight will be used (H=0, C=1, *=0.82)
Below is the the instruction that describes the task: ### Input: Predict reaction pairs using iterated method. Returns a tuple containing a dictionary of predictions keyed by the reaction IDs, and the final number of iterations. Each reaction prediction entry contains a tuple with a dictionary of transfers and a dictionary of unbalanced compounds. The dictionary of unbalanced compounds is empty only if the reaction is balanced. Args: reactions: Dictionary or pair-iterable of (id, equation) pairs. IDs must be any hashable reaction identifier (e.g. string) and equation must be :class:`psamm.reaction.Reaction` objects. formulas: Dictionary mapping compound IDs to :class:`psamm.formula.Formula`. Formulas must be flattened. prior: Tuple of (alpha, beta) parameters for the MAP inference. If not provided, the default parameters will be used: (1, 43). max_iterations: Maximum iterations to run before stopping. If the stopping condition is reached before this number of iterations, the procedure also stops. If None, the procedure only stops when the stopping condition is reached. element_weight: A function returning a weight value for the given :class:`psamm.formula.Atom` or :class:`psamm.formula.Radical`. If not provided, the default weight will be used (H=0, C=1, *=0.82) ### Response: def predict_compound_pairs_iterated( reactions, formulas, prior=(1, 43), max_iterations=None, element_weight=element_weight): """Predict reaction pairs using iterated method. Returns a tuple containing a dictionary of predictions keyed by the reaction IDs, and the final number of iterations. Each reaction prediction entry contains a tuple with a dictionary of transfers and a dictionary of unbalanced compounds. The dictionary of unbalanced compounds is empty only if the reaction is balanced. Args: reactions: Dictionary or pair-iterable of (id, equation) pairs. IDs must be any hashable reaction identifier (e.g. string) and equation must be :class:`psamm.reaction.Reaction` objects. formulas: Dictionary mapping compound IDs to :class:`psamm.formula.Formula`. Formulas must be flattened. prior: Tuple of (alpha, beta) parameters for the MAP inference. If not provided, the default parameters will be used: (1, 43). max_iterations: Maximum iterations to run before stopping. If the stopping condition is reached before this number of iterations, the procedure also stops. If None, the procedure only stops when the stopping condition is reached. element_weight: A function returning a weight value for the given :class:`psamm.formula.Atom` or :class:`psamm.formula.Radical`. If not provided, the default weight will be used (H=0, C=1, *=0.82) """ prior_alpha, prior_beta = prior reactions = dict(reactions) pair_reactions = {} possible_pairs = Counter() for reaction_id, equation in iteritems(reactions): for (c1, _), (c2, _) in product(equation.left, equation.right): spair = tuple(sorted([c1.name, c2.name])) possible_pairs[spair] += 1 pair_reactions.setdefault(spair, set()).add(reaction_id) next_reactions = set(reactions) pairs_predicted = None prediction = {} weights = {} iteration = 0 while len(next_reactions) > 0: iteration += 1 if max_iterations is not None and iteration > max_iterations: break logger.info('Iteration {}: {} reactions...'.format( iteration, len(next_reactions))) for reaction_id in next_reactions: result = predict_compound_pairs( reactions[reaction_id], formulas, weights, element_weight) if result is None: continue transfer, balance = result rpairs = {} for ((c1, _), (c2, _)), form in iteritems(transfer): rpairs.setdefault((c1, c2), []).append(form) prediction[reaction_id] = rpairs, balance pairs_predicted = Counter() for reaction_id, (rpairs, _) in iteritems(prediction): for c1, c2 in rpairs: spair = tuple(sorted([c1.name, c2.name])) pairs_predicted[spair] += 1 next_reactions = set() for spair, total in sorted(iteritems(possible_pairs)): pred = pairs_predicted[spair] # The weight is set to the maximum a posteriori (MAP) estimate # of the primary pair probability distribution. posterior_alpha = prior_alpha + pred posterior_beta = prior_beta + total - pred pair_weight = ((posterior_alpha - 1) / (posterior_alpha + posterior_beta - 2)) if (spair not in weights or abs(pair_weight - weights[spair]) > 1e-5): next_reactions.update(pair_reactions[spair]) c1, c2 = spair weights[c1, c2] = pair_weight weights[c2, c1] = pair_weight return prediction, iteration
def _meanprecision(D, tol=1e-7, maxiter=None): '''Mean and precision alternating method for MLE of Dirichlet distribution''' N, K = D.shape logp = log(D).mean(axis=0) a0 = _init_a(D) s0 = a0.sum() if s0 < 0: a0 = a0/s0 s0 = 1 elif s0 == 0: a0 = ones(a0.shape) / len(a0) s0 = 1 m0 = a0/s0 # Start updating if maxiter is None: maxiter = MAXINT for i in xrange(maxiter): a1 = _fit_s(D, a0, logp, tol=tol) s1 = sum(a1) a1 = _fit_m(D, a1, logp, tol=tol) m = a1/s1 # if norm(a1-a0) < tol: if abs(loglikelihood(D, a1)-loglikelihood(D, a0)) < tol: # much faster return a1 a0 = a1 raise Exception('Failed to converge after {} iterations, values are {}.' .format(maxiter, a1))
Mean and precision alternating method for MLE of Dirichlet distribution
Below is the the instruction that describes the task: ### Input: Mean and precision alternating method for MLE of Dirichlet distribution ### Response: def _meanprecision(D, tol=1e-7, maxiter=None): '''Mean and precision alternating method for MLE of Dirichlet distribution''' N, K = D.shape logp = log(D).mean(axis=0) a0 = _init_a(D) s0 = a0.sum() if s0 < 0: a0 = a0/s0 s0 = 1 elif s0 == 0: a0 = ones(a0.shape) / len(a0) s0 = 1 m0 = a0/s0 # Start updating if maxiter is None: maxiter = MAXINT for i in xrange(maxiter): a1 = _fit_s(D, a0, logp, tol=tol) s1 = sum(a1) a1 = _fit_m(D, a1, logp, tol=tol) m = a1/s1 # if norm(a1-a0) < tol: if abs(loglikelihood(D, a1)-loglikelihood(D, a0)) < tol: # much faster return a1 a0 = a1 raise Exception('Failed to converge after {} iterations, values are {}.' .format(maxiter, a1))
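A sketch exercising the alternating fixed-point loop above on synthetic data; it assumes the module's numpy-style namespace (log, ones, xrange, etc.) and its helpers _init_a, _fit_s, _fit_m, and loglikelihood are importable alongside _meanprecision:

import numpy as np

# Draw samples from a known Dirichlet and recover its parameters.
true_alpha = np.array([2.0, 5.0, 3.0])
D = np.random.dirichlet(true_alpha, size=5000)  # each row sums to 1, as required
alpha_hat = _meanprecision(D, tol=1e-7)
print(alpha_hat)  # should land close to true_alpha for large sample sizes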
def run(self, args): ''' Run the SPM command ''' command = args[0] try: if command == 'install': self._install(args) elif command == 'local': self._local(args) elif command == 'repo': self._repo(args) elif command == 'remove': self._remove(args) elif command == 'build': self._build(args) elif command == 'update_repo': self._download_repo_metadata(args) elif command == 'create_repo': self._create_repo(args) elif command == 'files': self._list_files(args) elif command == 'info': self._info(args) elif command == 'list': self._list(args) elif command == 'close': self._close() else: raise SPMInvocationError('Invalid command \'{0}\''.format(command)) except SPMException as exc: self.ui.error(six.text_type(exc))
Run the SPM command
Below is the the instruction that describes the task: ### Input: Run the SPM command ### Response: def run(self, args): ''' Run the SPM command ''' command = args[0] try: if command == 'install': self._install(args) elif command == 'local': self._local(args) elif command == 'repo': self._repo(args) elif command == 'remove': self._remove(args) elif command == 'build': self._build(args) elif command == 'update_repo': self._download_repo_metadata(args) elif command == 'create_repo': self._create_repo(args) elif command == 'files': self._list_files(args) elif command == 'info': self._info(args) elif command == 'list': self._list(args) elif command == 'close': self._close() else: raise SPMInvocationError('Invalid command \'{0}\''.format(command)) except SPMException as exc: self.ui.error(six.text_type(exc))
def fit(self, X, y=None): """ X : ANTsImage | string | list of ANTsImage types | list of strings images to register to fixed image y : string | list of strings labels for images """ moving_images = X if isinstance(X, (list,tuple)) else [X] moving_labels = y if y is not None else [i for i in range(len(moving_images))] fixed_image = self.fixed_image self.fwdtransforms_ = {} self.invtransforms_ = {} self.warpedmovout_ = {} self.warpedfixout_ = {} for moving_image, moving_label in zip(moving_images, moving_labels): fit_result = interface.registration(fixed_image, moving_image, type_of_transform=self.type_of_transform, initial_transform=None, outprefix='', mask=None, grad_step=0.2, flow_sigma=3, total_sigma=0, aff_metric='mattes', aff_sampling=32, syn_metric='mattes', syn_sampling=32, reg_iterations=(40,20,0), verbose=False) self.fwdtransforms_[moving_label] = fit_result['fwdtransforms'] self.invtransforms_[moving_label] = fit_result['invtransforms'] self.warpedmovout_[moving_label] = fit_result['warpedmovout'] self.warpedfixout_[moving_label] = fit_result['warpedfixout'] return self
X : ANTsImage | string | list of ANTsImage types | list of strings images to register to fixed image y : string | list of strings labels for images
Below is the the instruction that describes the task: ### Input: X : ANTsImage | string | list of ANTsImage types | list of strings images to register to fixed image y : string | list of strings labels for images ### Response: def fit(self, X, y=None): """ X : ANTsImage | string | list of ANTsImage types | list of strings images to register to fixed image y : string | list of strings labels for images """ moving_images = X if isinstance(X, (list,tuple)) else [X] moving_labels = y if y is not None else [i for i in range(len(moving_images))] fixed_image = self.fixed_image self.fwdtransforms_ = {} self.invtransforms_ = {} self.warpedmovout_ = {} self.warpedfixout_ = {} for moving_image, moving_label in zip(moving_images, moving_labels): fit_result = interface.registration(fixed_image, moving_image, type_of_transform=self.type_of_transform, initial_transform=None, outprefix='', mask=None, grad_step=0.2, flow_sigma=3, total_sigma=0, aff_metric='mattes', aff_sampling=32, syn_metric='mattes', syn_sampling=32, reg_iterations=(40,20,0), verbose=False) self.fwdtransforms_[moving_label] = fit_result['fwdtransforms'] self.invtransforms_[moving_label] = fit_result['invtransforms'] self.warpedmovout_[moving_label] = fit_result['warpedmovout'] self.warpedfixout_[moving_label] = fit_result['warpedfixout'] return self
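A usage sketch assuming the ANTsPy ecosystem this estimator wraps; file paths are placeholders and `RegistrationEstimator` is a hypothetical name for the class that defines fit above, since the snippet does not show it:

import ants  # assumed ANTsPy package

# Hypothetical class name; fixed_image and type_of_transform are the only
# constructor attributes the method body actually relies on.
est = RegistrationEstimator(fixed_image=ants.image_read('fixed.nii.gz'),
                            type_of_transform='SyN')
est.fit([ants.image_read('subj01.nii.gz')], y=['subj01'])
print(est.fwdtransforms_['subj01'])  # forward transform file paths
print(est.warpedmovout_['subj01'])   # moving image warped into fixed space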
def ToVM(self): """ Used for turning a ContractParameter item into something consumable by the VM Returns: """ if self.Type == ContractParameterType.String: return str(self.Value).encode('utf-8').hex() elif self.Type == ContractParameterType.Integer and isinstance(self.Value, int): return BigInteger(self.Value) return self.Value
Used for turning a ContractParameter item into something consumable by the VM
        Returns:
Below is the instruction that describes the task:
### Input:
Used for turning a ContractParameter item into something consumable by the VM
        Returns:
### Response:
def ToVM(self):
        """
        Used for turning a ContractParameter item into something consumable by the VM
        Returns:
        """
        if self.Type == ContractParameterType.String:
            return str(self.Value).encode('utf-8').hex()

        elif self.Type == ContractParameterType.Integer and isinstance(self.Value, int):
            return BigInteger(self.Value)

        return self.Value
def _conglomerate_meshes(meshin, header): """Conglomerate meshes from several cores into one.""" meshout = {} npc = header['nts'] // header['ncs'] shp = [val + 1 if val != 1 else 1 for val in header['nts']] x_p = int(shp[0] != 1) y_p = int(shp[1] != 1) for coord in meshin[0]: meshout[coord] = np.zeros(shp) for icore in range(np.prod(header['ncs'])): ifs = [icore // np.prod(header['ncs'][:i]) % header['ncs'][i] * npc[i] for i in range(3)] for coord, mesh in meshin[icore].items(): meshout[coord][ifs[0]:ifs[0] + npc[0] + x_p, ifs[1]:ifs[1] + npc[1] + y_p, ifs[2]:ifs[2] + npc[2] + 1] = mesh return meshout
Conglomerate meshes from several cores into one.
Below is the the instruction that describes the task: ### Input: Conglomerate meshes from several cores into one. ### Response: def _conglomerate_meshes(meshin, header): """Conglomerate meshes from several cores into one.""" meshout = {} npc = header['nts'] // header['ncs'] shp = [val + 1 if val != 1 else 1 for val in header['nts']] x_p = int(shp[0] != 1) y_p = int(shp[1] != 1) for coord in meshin[0]: meshout[coord] = np.zeros(shp) for icore in range(np.prod(header['ncs'])): ifs = [icore // np.prod(header['ncs'][:i]) % header['ncs'][i] * npc[i] for i in range(3)] for coord, mesh in meshin[icore].items(): meshout[coord][ifs[0]:ifs[0] + npc[0] + x_p, ifs[1]:ifs[1] + npc[1] + y_p, ifs[2]:ifs[2] + npc[2] + 1] = mesh return meshout
def parse_timedelta(deltastr): """ Parse a string describing a period of time. """ matches = TIMEDELTA_REGEX.match(deltastr) if not matches: return None components = {} for name, value in matches.groupdict().items(): if value: components[name] = int(value) for period, hours in (('days', 24), ('years', 8766)): if period in components: components['hours'] = components.get('hours', 0) + \ components[period] * hours del components[period] return int(timedelta(**components).total_seconds())
Parse a string describing a period of time.
Below is the the instruction that describes the task: ### Input: Parse a string describing a period of time. ### Response: def parse_timedelta(deltastr): """ Parse a string describing a period of time. """ matches = TIMEDELTA_REGEX.match(deltastr) if not matches: return None components = {} for name, value in matches.groupdict().items(): if value: components[name] = int(value) for period, hours in (('days', 24), ('years', 8766)): if period in components: components['hours'] = components.get('hours', 0) + \ components[period] * hours del components[period] return int(timedelta(**components).total_seconds())
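parse_timedelta() above leans on a TIMEDELTA_REGEX that is not shown in this entry. Below is a plausible sketch of it, written as an assumption (the real pattern may differ), followed by two checks that match the function's arithmetic:

import re
from datetime import timedelta

# Hypothetical pattern: the named groups must be timedelta keywords plus 'years',
# since the function converts days/years into hours before calling timedelta().
TIMEDELTA_REGEX = re.compile(
    r'((?P<years>\d+)\s*y(ears?)?)?\s*'
    r'((?P<days>\d+)\s*d(ays?)?)?\s*'
    r'((?P<hours>\d+)\s*h(ours?)?)?\s*'
    r'((?P<minutes>\d+)\s*m(in(utes?)?)?)?\s*'
    r'((?P<seconds>\d+)\s*s(ec(onds?)?)?)?'
)

assert parse_timedelta('2 days 3 hours') == 2 * 24 * 3600 + 3 * 3600
assert parse_timedelta('1 year') == 8766 * 3600  # 8766 hours/year, as in the code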
def findall(lst, key, value):
    """
    Find all items in lst where key matches value.
    For example, find all ``LAYER`` s in a ``MAP`` where ``GROUP`` equals ``VALUE``

    Parameters
    ----------
    lst: list
        A list of composite dictionaries e.g. ``layers``, ``classes``
    key: string
        The key name to search each dictionary in the list
    value: string or iterable
        The value (or collection of values) to search for

    Returns
    -------
    list
        A Python list containing the matching composite dictionaries

    Example
    -------
    To find all ``LAYER`` s with ``GROUP`` set to ``test``::

        s = '''
        MAP
            LAYER
                NAME "Layer1"
                TYPE POLYGON
                GROUP "test"
            END
            LAYER
                NAME "Layer2"
                TYPE POLYGON
                GROUP "test1"
            END
            LAYER
                NAME "Layer3"
                TYPE POLYGON
                GROUP "test2"
            END
            LAYER
                NAME "Layer4"
                TYPE POLYGON
                GROUP "test"
            END
        END
        '''

        d = mappyfile.loads(s)
        layers = mappyfile.findall(d["layers"], "group", "test")
        assert len(layers) == 2
    """
    return [item for item in lst if item[key.lower()] in value]
Find all items in lst where key matches value.
For example, find all ``LAYER`` s in a ``MAP`` where ``GROUP`` equals ``VALUE``

Parameters
----------
lst: list
    A list of composite dictionaries e.g. ``layers``, ``classes``
key: string
    The key name to search each dictionary in the list
value: string or iterable
    The value (or collection of values) to search for

Returns
-------
list
    A Python list containing the matching composite dictionaries

Example
-------
To find all ``LAYER`` s with ``GROUP`` set to ``test``::

    s = '''
    MAP
        LAYER
            NAME "Layer1"
            TYPE POLYGON
            GROUP "test"
        END
        LAYER
            NAME "Layer2"
            TYPE POLYGON
            GROUP "test1"
        END
        LAYER
            NAME "Layer3"
            TYPE POLYGON
            GROUP "test2"
        END
        LAYER
            NAME "Layer4"
            TYPE POLYGON
            GROUP "test"
        END
    END
    '''

    d = mappyfile.loads(s)
    layers = mappyfile.findall(d["layers"], "group", "test")
    assert len(layers) == 2
Below is the instruction that describes the task:
### Input:
Find all items in lst where key matches value.
For example, find all ``LAYER`` s in a ``MAP`` where ``GROUP`` equals ``VALUE``

Parameters
----------
lst: list
    A list of composite dictionaries e.g. ``layers``, ``classes``
key: string
    The key name to search each dictionary in the list
value: string or iterable
    The value (or collection of values) to search for

Returns
-------
list
    A Python list containing the matching composite dictionaries

Example
-------
To find all ``LAYER`` s with ``GROUP`` set to ``test``::

    s = '''
    MAP
        LAYER
            NAME "Layer1"
            TYPE POLYGON
            GROUP "test"
        END
        LAYER
            NAME "Layer2"
            TYPE POLYGON
            GROUP "test1"
        END
        LAYER
            NAME "Layer3"
            TYPE POLYGON
            GROUP "test2"
        END
        LAYER
            NAME "Layer4"
            TYPE POLYGON
            GROUP "test"
        END
    END
    '''

    d = mappyfile.loads(s)
    layers = mappyfile.findall(d["layers"], "group", "test")
    assert len(layers) == 2
### Response:
def findall(lst, key, value):
    """
    Find all items in lst where key matches value.
    For example, find all ``LAYER`` s in a ``MAP`` where ``GROUP`` equals ``VALUE``

    Parameters
    ----------
    lst: list
        A list of composite dictionaries e.g. ``layers``, ``classes``
    key: string
        The key name to search each dictionary in the list
    value: string or iterable
        The value (or collection of values) to search for

    Returns
    -------
    list
        A Python list containing the matching composite dictionaries

    Example
    -------
    To find all ``LAYER`` s with ``GROUP`` set to ``test``::

        s = '''
        MAP
            LAYER
                NAME "Layer1"
                TYPE POLYGON
                GROUP "test"
            END
            LAYER
                NAME "Layer2"
                TYPE POLYGON
                GROUP "test1"
            END
            LAYER
                NAME "Layer3"
                TYPE POLYGON
                GROUP "test2"
            END
            LAYER
                NAME "Layer4"
                TYPE POLYGON
                GROUP "test"
            END
        END
        '''

        d = mappyfile.loads(s)
        layers = mappyfile.findall(d["layers"], "group", "test")
        assert len(layers) == 2
    """
    return [item for item in lst if item[key.lower()] in value]
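One detail worth noting about findall() above: because the comparison is ``item[key.lower()] in value``, the value argument may also be a container, which lets a single call match several groups. A small sketch, reusing the string s from the docstring example:

d = mappyfile.loads(s)
layers = mappyfile.findall(d["layers"], "group", ["test", "test2"])
assert len(layers) == 3  # Layer1, Layer3 and Layer4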
def register(model_or_iterable, **options): """ Registers the given model(s) with the given translation options. The model(s) should be Model classes, not instances. Fields declared for translation on a base class are inherited by subclasses. If the model or one of its subclasses is already registered for translation, this will raise an exception. @register(Author) class AuthorTranslation(TranslationOptions): pass """ from modeltranslation.translator import translator, TranslationOptions def wrapper(opts_class): if not issubclass(opts_class, TranslationOptions): raise ValueError('Wrapped class must subclass TranslationOptions.') translator.register(model_or_iterable, opts_class, **options) return opts_class return wrapper
Registers the given model(s) with the given translation options. The model(s) should be Model classes, not instances. Fields declared for translation on a base class are inherited by subclasses. If the model or one of its subclasses is already registered for translation, this will raise an exception. @register(Author) class AuthorTranslation(TranslationOptions): pass
Below is the the instruction that describes the task: ### Input: Registers the given model(s) with the given translation options. The model(s) should be Model classes, not instances. Fields declared for translation on a base class are inherited by subclasses. If the model or one of its subclasses is already registered for translation, this will raise an exception. @register(Author) class AuthorTranslation(TranslationOptions): pass ### Response: def register(model_or_iterable, **options): """ Registers the given model(s) with the given translation options. The model(s) should be Model classes, not instances. Fields declared for translation on a base class are inherited by subclasses. If the model or one of its subclasses is already registered for translation, this will raise an exception. @register(Author) class AuthorTranslation(TranslationOptions): pass """ from modeltranslation.translator import translator, TranslationOptions def wrapper(opts_class): if not issubclass(opts_class, TranslationOptions): raise ValueError('Wrapped class must subclass TranslationOptions.') translator.register(model_or_iterable, opts_class, **options) return opts_class return wrapper
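A short usage sketch for the register() decorator above, mirroring the pattern hinted at in its docstring; the Author model and the field names are placeholders, not part of the original entry:

from modeltranslation.translator import TranslationOptions
from myapp.models import Author  # hypothetical app and model

@register(Author)
class AuthorTranslationOptions(TranslationOptions):
    fields = ('name', 'biography')  # assumed translatable fields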
def has_file(self, name: str): ''' check whether this directory contains the file. ''' return os.path.isfile(self._path / name)
check whether this directory contains the file.
Below is the the instruction that describes the task: ### Input: check whether this directory contains the file. ### Response: def has_file(self, name: str): ''' check whether this directory contains the file. ''' return os.path.isfile(self._path / name)
def sort(self, key, *get_patterns, by=None, offset=None, count=None, asc=None, alpha=False, store=None): """Sort the elements in a list, set or sorted set.""" args = [] if by is not None: args += [b'BY', by] if offset is not None and count is not None: args += [b'LIMIT', offset, count] if get_patterns: args += sum(([b'GET', pattern] for pattern in get_patterns), []) if asc is not None: args += [asc is True and b'ASC' or b'DESC'] if alpha: args += [b'ALPHA'] if store is not None: args += [b'STORE', store] return self.execute(b'SORT', key, *args)
Sort the elements in a list, set or sorted set.
Below is the the instruction that describes the task: ### Input: Sort the elements in a list, set or sorted set. ### Response: def sort(self, key, *get_patterns, by=None, offset=None, count=None, asc=None, alpha=False, store=None): """Sort the elements in a list, set or sorted set.""" args = [] if by is not None: args += [b'BY', by] if offset is not None and count is not None: args += [b'LIMIT', offset, count] if get_patterns: args += sum(([b'GET', pattern] for pattern in get_patterns), []) if asc is not None: args += [asc is True and b'ASC' or b'DESC'] if alpha: args += [b'ALPHA'] if store is not None: args += [b'STORE', store] return self.execute(b'SORT', key, *args)
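A hedged usage sketch for the sort() coroutine above (an asyncio Redis client in the aioredis style); the rpush call and the surrounding connection setup are assumptions, and the coroutine is meant to be driven by the caller's event loop:

async def demo(redis):
    await redis.rpush('mylist', 3, 1, 2)
    # SORT mylist LIMIT 0 2 ASC
    page = await redis.sort('mylist', offset=0, count=2, asc=True)  # [b'1', b'2']
    # SORT mylist BY weight_* GET weight_*->name ALPHA
    names = await redis.sort('mylist', 'weight_*->name',
                             by='weight_*', alpha=True)
    return page, names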
def next_unit_id(self) -> int: """ Returns: next free Unit ID """ ids: typing.Set[int] = set() for unit in chain(self._blue_coa.units, self._red_coa.units): # type: ignore id_ = unit.unit_id if id_ in ids: raise IndexError(unit.unit_name) ids.add(id_) return max(ids) + 1
Returns: next free Unit ID
Below is the the instruction that describes the task: ### Input: Returns: next free Unit ID ### Response: def next_unit_id(self) -> int: """ Returns: next free Unit ID """ ids: typing.Set[int] = set() for unit in chain(self._blue_coa.units, self._red_coa.units): # type: ignore id_ = unit.unit_id if id_ in ids: raise IndexError(unit.unit_name) ids.add(id_) return max(ids) + 1
def _initial_broks(self, broker_name):
        """Get initial_broks from the scheduler

        This is used by the brokers to prepare the initial status broks

        This does not send broks, it only performs the scheduler's internal processing.
        Then the broker must use the *_broks* API to get all the stuff

        :param broker_name: broker name, used to filter broks
        :type broker_name: str
        :return: None
        """
        with self.app.conf_lock:
            logger.info("A new broker just connected : %s", broker_name)
            return self.app.sched.fill_initial_broks(broker_name)
Get initial_broks from the scheduler

        This is used by the brokers to prepare the initial status broks

        This does not send broks, it only performs the scheduler's internal processing.
        Then the broker must use the *_broks* API to get all the stuff

        :param broker_name: broker name, used to filter broks
        :type broker_name: str
        :return: None
Below is the instruction that describes the task:
### Input:
Get initial_broks from the scheduler

This is used by the brokers to prepare the initial status broks

This does not send broks, it only performs the scheduler's internal processing.
Then the broker must use the *_broks* API to get all the stuff

:param broker_name: broker name, used to filter broks
:type broker_name: str
:return: None
### Response:
def _initial_broks(self, broker_name):
        """Get initial_broks from the scheduler

        This is used by the brokers to prepare the initial status broks

        This does not send broks, it only performs the scheduler's internal processing.
        Then the broker must use the *_broks* API to get all the stuff

        :param broker_name: broker name, used to filter broks
        :type broker_name: str
        :return: None
        """
        with self.app.conf_lock:
            logger.info("A new broker just connected : %s", broker_name)
            return self.app.sched.fill_initial_broks(broker_name)
def restore(self): """Restore signal handlers to their original settings.""" signal.signal(signal.SIGINT, self.original_sigint) signal.signal(signal.SIGTERM, self.original_sigterm) if os.name == 'nt': signal.signal(signal.SIGBREAK, self.original_sigbreak)
Restore signal handlers to their original settings.
Below is the the instruction that describes the task: ### Input: Restore signal handlers to their original settings. ### Response: def restore(self): """Restore signal handlers to their original settings.""" signal.signal(signal.SIGINT, self.original_sigint) signal.signal(signal.SIGTERM, self.original_sigterm) if os.name == 'nt': signal.signal(signal.SIGBREAK, self.original_sigbreak)
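restore() above implies a counterpart that captured the original handlers before replacing them; here is a minimal sketch of that setup step (an assumption, since the capturing code is not part of this entry):

import os
import signal

class SignalGuard:
    def install(self, handler):
        # Remember the current handlers so restore() can put them back later.
        self.original_sigint = signal.getsignal(signal.SIGINT)
        self.original_sigterm = signal.getsignal(signal.SIGTERM)
        signal.signal(signal.SIGINT, handler)
        signal.signal(signal.SIGTERM, handler)
        if os.name == 'nt':
            self.original_sigbreak = signal.getsignal(signal.SIGBREAK)
            signal.signal(signal.SIGBREAK, handler)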
def receiver_blueprint_for(self, name): """ Get a Flask blueprint for the named provider that handles incoming messages & status reports Note: this requires Flask microframework. :rtype: flask.blueprints.Blueprint :returns: Flask Blueprint, fully functional :raises KeyError: provider not found :raises NotImplementedError: Provider does not implement a receiver """ # Get the provider & blueprint provider = self.get_provider(name) bp = provider.make_receiver_blueprint() # Register a Flask handler that initializes `g.provider` # This is the only way for the blueprint to get the current IProvider instance from flask.globals import g # local import as the user is not required to use receivers at all @bp.before_request def init_g(): g.provider = provider # Finish return bp
Get a Flask blueprint for the named provider that handles incoming messages & status reports Note: this requires Flask microframework. :rtype: flask.blueprints.Blueprint :returns: Flask Blueprint, fully functional :raises KeyError: provider not found :raises NotImplementedError: Provider does not implement a receiver
Below is the the instruction that describes the task: ### Input: Get a Flask blueprint for the named provider that handles incoming messages & status reports Note: this requires Flask microframework. :rtype: flask.blueprints.Blueprint :returns: Flask Blueprint, fully functional :raises KeyError: provider not found :raises NotImplementedError: Provider does not implement a receiver ### Response: def receiver_blueprint_for(self, name): """ Get a Flask blueprint for the named provider that handles incoming messages & status reports Note: this requires Flask microframework. :rtype: flask.blueprints.Blueprint :returns: Flask Blueprint, fully functional :raises KeyError: provider not found :raises NotImplementedError: Provider does not implement a receiver """ # Get the provider & blueprint provider = self.get_provider(name) bp = provider.make_receiver_blueprint() # Register a Flask handler that initializes `g.provider` # This is the only way for the blueprint to get the current IProvider instance from flask.globals import g # local import as the user is not required to use receivers at all @bp.before_request def init_g(): g.provider = provider # Finish return bp
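Wiring the blueprint produced above into an application; the gateway object and the provider name 'clickatell' are illustrative assumptions:

from flask import Flask

app = Flask(__name__)
bp = gateway.receiver_blueprint_for('clickatell')  # gateway: instance of the class above
app.register_blueprint(bp, url_prefix='/sms/clickatell')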
def var(tensor_type, last_dim=0, test_shape=None):
    """
    Wrap a Theano tensor into the variable for defining neural network.
    :param last_dim: last dimension of tensor, 0 indicates that the last dimension is flexible
    :rtype: deepy.core.neural_var.NeuralVariable
    """
    # Create tensor
    from deepy.core.neural_var import NeuralVariable
    from deepy.core.env import env
    from theano.tensor.var import TensorVariable
    if isinstance(tensor_type, NeuralVariable):
        var = tensor_type
        if last_dim != 0:
            var.output_dim = last_dim
    elif isinstance(tensor_type, TensorVariable):
        var = NeuralVariable(tensor_type, dim=last_dim)
    elif isinstance(tensor_type, str):
        theano_tensor = getattr(TT, tensor_type)()
        var = NeuralVariable(theano_tensor, dim=last_dim)
    else:
        raise Exception("tensor_type shall be a string, a TensorVariable, or a NeuralVariable")
    # Set test value
    if test_shape:
        if type(test_shape) != list and type(test_shape) != tuple:
            # Maybe it's a value
            var.set_test_value(test_shape)
        else:
            test_val = env.numpy_rand.rand(*test_shape)
            if len(test_shape) > 0:
                test_val = test_val.astype(var.tensor.dtype)
            elif var.tensor.dtype.startswith("int"):
                test_val = 1
            var.set_test_value(test_val)
    else:
        # Create a general test_shape
        dims = [(d + 1) * 3 for d in range(var.tensor.ndim)]
        if var.dim() != 0:
            dims[-1] = var.dim()
        test_val = env.numpy_rand.rand(*dims)
        if len(dims) > 0:
            test_val = test_val.astype(var.tensor.dtype)
        elif var.tensor.dtype.startswith("int"):
            test_val = 1
        var.set_test_value(test_val)
    return var
Wrap a Theano tensor into the variable for defining neural network. :param last_dim: last dimension of tensor, 0 indicates that the last dimension is flexible :rtype: deepy.core.neural_var.NeuralVariable
Below is the instruction that describes the task:
### Input:
Wrap a Theano tensor into the variable for defining neural network.
:param last_dim: last dimension of tensor, 0 indicates that the last dimension is flexible
:rtype: deepy.core.neural_var.NeuralVariable
### Response:
def var(tensor_type, last_dim=0, test_shape=None):
    """
    Wrap a Theano tensor into the variable for defining neural network.
    :param last_dim: last dimension of tensor, 0 indicates that the last dimension is flexible
    :rtype: deepy.core.neural_var.NeuralVariable
    """
    # Create tensor
    from deepy.core.neural_var import NeuralVariable
    from deepy.core.env import env
    from theano.tensor.var import TensorVariable
    if isinstance(tensor_type, NeuralVariable):
        var = tensor_type
        if last_dim != 0:
            var.output_dim = last_dim
    elif isinstance(tensor_type, TensorVariable):
        var = NeuralVariable(tensor_type, dim=last_dim)
    elif isinstance(tensor_type, str):
        theano_tensor = getattr(TT, tensor_type)()
        var = NeuralVariable(theano_tensor, dim=last_dim)
    else:
        raise Exception("tensor_type shall be a string, a TensorVariable, or a NeuralVariable")
    # Set test value
    if test_shape:
        if type(test_shape) != list and type(test_shape) != tuple:
            # Maybe it's a value
            var.set_test_value(test_shape)
        else:
            test_val = env.numpy_rand.rand(*test_shape)
            if len(test_shape) > 0:
                test_val = test_val.astype(var.tensor.dtype)
            elif var.tensor.dtype.startswith("int"):
                test_val = 1
            var.set_test_value(test_val)
    else:
        # Create a general test_shape
        dims = [(d + 1) * 3 for d in range(var.tensor.ndim)]
        if var.dim() != 0:
            dims[-1] = var.dim()
        test_val = env.numpy_rand.rand(*dims)
        if len(dims) > 0:
            test_val = test_val.astype(var.tensor.dtype)
        elif var.tensor.dtype.startswith("int"):
            test_val = 1
        var.set_test_value(test_val)
    return var
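A few small calls showing how var() above is typically used; the tensor type names come from theano.tensor (TT), and the shapes are arbitrary:

x = var('matrix', last_dim=300, test_shape=(10, 300))  # fixed last dimension
m = var('tensor3', test_shape=(4, 10, 300))            # flexible last dimension
s = var('scalar', test_shape=0.5)                      # non-sequence: treated as a value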
def read_struct_file(struct_file,return_type=GeoStruct):
    """read an existing PEST-type structure file into a GeoStruct instance

    Parameters
    ----------
    struct_file : (str)
        existing pest-type structure file
    return_type : (object)
        the instance type to return.  Default is GeoStruct

    Returns
    -------
    GeoStruct : list or GeoStruct

    Note
    ----
    if only one structure is listed in struct_file, then return type
    is GeoStruct.  Otherwise, return type is a list of GeoStruct

    Example
    -------
    ``>>>import pyemu``

    ``>>>gs = pyemu.utils.geostats.read_struct_file("struct.dat")``


    """

    VARTYPE = {1:SphVario,2:ExpVario,3:GauVario,4:None}
    assert os.path.exists(struct_file)
    structures = []
    variograms = []
    with open(struct_file,'r') as f:
        while True:
            line = f.readline()
            if line == '':
                break
            line = line.strip().lower()
            if line.startswith("structure"):
                name = line.strip().split()[1]
                nugget,transform,variogram_info = _read_structure_attributes(f)
                s = return_type(nugget=nugget,transform=transform,name=name)
                s.variogram_info = variogram_info
                # not sure what is going on, but if I don't copy s here,
                # all the structures end up sharing all the variograms later
                structures.append(copy.deepcopy(s))
            elif line.startswith("variogram"):
                name = line.strip().split()[1].lower()
                vartype,bearing,a,anisotropy = _read_variogram(f)
                if name in variogram_info:
                    v = VARTYPE[vartype](variogram_info[name],a,anisotropy=anisotropy,
                                         bearing=bearing,name=name)
                    variograms.append(v)

    for i,st in enumerate(structures):
        for vname in st.variogram_info:
            vfound = None
            for v in variograms:
                if v.name == vname:
                    vfound = v
                    break
            if vfound is None:
                raise Exception("variogram {0} not found for structure {1}".\
                                format(vname,st.name))
            st.variograms.append(vfound)
    if len(structures) == 1:
        return structures[0]
    return structures
read an existing PEST-type structure file into a GeoStruct instance

    Parameters
    ----------
    struct_file : (str)
        existing pest-type structure file
    return_type : (object)
        the instance type to return.  Default is GeoStruct

    Returns
    -------
    GeoStruct : list or GeoStruct

    Note
    ----
    if only one structure is listed in struct_file, then return type
    is GeoStruct.  Otherwise, return type is a list of GeoStruct

    Example
    -------
    ``>>>import pyemu``

    ``>>>gs = pyemu.utils.geostats.read_struct_file("struct.dat")``
Below is the instruction that describes the task:
### Input:
read an existing PEST-type structure file into a GeoStruct instance

Parameters
----------
struct_file : (str)
    existing pest-type structure file
return_type : (object)
    the instance type to return.  Default is GeoStruct

Returns
-------
GeoStruct : list or GeoStruct

Note
----
if only one structure is listed in struct_file, then return type
is GeoStruct.  Otherwise, return type is a list of GeoStruct

Example
-------
``>>>import pyemu``

``>>>gs = pyemu.utils.geostats.read_struct_file("struct.dat")``
### Response:
def read_struct_file(struct_file,return_type=GeoStruct):
    """read an existing PEST-type structure file into a GeoStruct instance

    Parameters
    ----------
    struct_file : (str)
        existing pest-type structure file
    return_type : (object)
        the instance type to return.  Default is GeoStruct

    Returns
    -------
    GeoStruct : list or GeoStruct

    Note
    ----
    if only one structure is listed in struct_file, then return type
    is GeoStruct.  Otherwise, return type is a list of GeoStruct

    Example
    -------
    ``>>>import pyemu``

    ``>>>gs = pyemu.utils.geostats.read_struct_file("struct.dat")``


    """

    VARTYPE = {1:SphVario,2:ExpVario,3:GauVario,4:None}
    assert os.path.exists(struct_file)
    structures = []
    variograms = []
    with open(struct_file,'r') as f:
        while True:
            line = f.readline()
            if line == '':
                break
            line = line.strip().lower()
            if line.startswith("structure"):
                name = line.strip().split()[1]
                nugget,transform,variogram_info = _read_structure_attributes(f)
                s = return_type(nugget=nugget,transform=transform,name=name)
                s.variogram_info = variogram_info
                # not sure what is going on, but if I don't copy s here,
                # all the structures end up sharing all the variograms later
                structures.append(copy.deepcopy(s))
            elif line.startswith("variogram"):
                name = line.strip().split()[1].lower()
                vartype,bearing,a,anisotropy = _read_variogram(f)
                if name in variogram_info:
                    v = VARTYPE[vartype](variogram_info[name],a,anisotropy=anisotropy,
                                         bearing=bearing,name=name)
                    variograms.append(v)

    for i,st in enumerate(structures):
        for vname in st.variogram_info:
            vfound = None
            for v in variograms:
                if v.name == vname:
                    vfound = v
                    break
            if vfound is None:
                raise Exception("variogram {0} not found for structure {1}".\
                                format(vname,st.name))
            st.variograms.append(vfound)
    if len(structures) == 1:
        return structures[0]
    return structures
def get_first_model_with_rest_name(cls, rest_name): """ Get the first model corresponding to a rest_name Args: rest_name: the rest name """ models = cls.get_models_with_rest_name(rest_name) if len(models) > 0: return models[0] return None
Get the first model corresponding to a rest_name Args: rest_name: the rest name
Below is the the instruction that describes the task: ### Input: Get the first model corresponding to a rest_name Args: rest_name: the rest name ### Response: def get_first_model_with_rest_name(cls, rest_name): """ Get the first model corresponding to a rest_name Args: rest_name: the rest name """ models = cls.get_models_with_rest_name(rest_name) if len(models) > 0: return models[0] return None
def mousePressEvent(self, event): """ Creates the drag event for this item. :param event | <QMousePressEvent> """ near_x, near_y = self.nearestPoint(event.pos()) data = self.dragData(x=near_x, y=near_y) self.startDrag(data) super(XChartWidgetItem, self).mousePressEvent(event)
Creates the drag event for this item. :param event | <QMousePressEvent>
Below is the the instruction that describes the task: ### Input: Creates the drag event for this item. :param event | <QMousePressEvent> ### Response: def mousePressEvent(self, event): """ Creates the drag event for this item. :param event | <QMousePressEvent> """ near_x, near_y = self.nearestPoint(event.pos()) data = self.dragData(x=near_x, y=near_y) self.startDrag(data) super(XChartWidgetItem, self).mousePressEvent(event)
def refactor_move_module(self, new_name): """Move the current module.""" refactor = create_move(self.project, self.resource) resource = path_to_resource(self.project, new_name) return self._get_changes(refactor, resource)
Move the current module.
Below is the the instruction that describes the task: ### Input: Move the current module. ### Response: def refactor_move_module(self, new_name): """Move the current module.""" refactor = create_move(self.project, self.resource) resource = path_to_resource(self.project, new_name) return self._get_changes(refactor, resource)
def confusion_matrix(links_true, links_pred, total=None):
    """Compute the confusion matrix.

    The confusion matrix is of the following form:

    +----------------------+-----------------------+----------------------+
    |                      | Predicted Positives   | Predicted Negatives  |
    +======================+=======================+======================+
    | **True Positives**   | True Positives (TP)   | False Negatives (FN) |
    +----------------------+-----------------------+----------------------+
    | **True Negatives**   | False Positives (FP)  | True Negatives (TN)  |
    +----------------------+-----------------------+----------------------+

    The confusion matrix is an informative way to analyse a prediction. The
    matrix can be used to compute measures like precision and recall. The
    count of true positives is [0,0], false negatives is [0,1], true
    negatives is [1,1] and false positives is [1,0].

    Parameters
    ----------
    links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The true (or actual) links.
    links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The predicted links.
    total: int, pandas.MultiIndex
        The count of all record pairs (both links and non-links). When the
        argument is a pandas.MultiIndex, the length of the index is used. If
        the total is None, the number of True Negatives is not computed.
        Default None.

    Returns
    -------
    numpy.array
        The confusion matrix with TP, TN, FN, FP values.

    Note
    ----
    The number of True Negatives is computed based on the total argument.
    This argument is the number of record pairs of the entire matrix.

    """

    links_true = _get_multiindex(links_true)
    links_pred = _get_multiindex(links_pred)

    tp = true_positives(links_true, links_pred)
    fp = false_positives(links_true, links_pred)
    fn = false_negatives(links_true, links_pred)

    if total is None:
        tn = numpy.nan
    else:
        tn = true_negatives(links_true, links_pred, total)

    return numpy.array([[tp, fn], [fp, tn]])
Compute the confusion matrix.

    The confusion matrix is of the following form:

    +----------------------+-----------------------+----------------------+
    |                      | Predicted Positives   | Predicted Negatives  |
    +======================+=======================+======================+
    | **True Positives**   | True Positives (TP)   | False Negatives (FN) |
    +----------------------+-----------------------+----------------------+
    | **True Negatives**   | False Positives (FP)  | True Negatives (TN)  |
    +----------------------+-----------------------+----------------------+

    The confusion matrix is an informative way to analyse a prediction. The
    matrix can be used to compute measures like precision and recall. The
    count of true positives is [0,0], false negatives is [0,1], true
    negatives is [1,1] and false positives is [1,0].

    Parameters
    ----------
    links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The true (or actual) links.
    links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The predicted links.
    total: int, pandas.MultiIndex
        The count of all record pairs (both links and non-links). When the
        argument is a pandas.MultiIndex, the length of the index is used. If
        the total is None, the number of True Negatives is not computed.
        Default None.

    Returns
    -------
    numpy.array
        The confusion matrix with TP, TN, FN, FP values.

    Note
    ----
    The number of True Negatives is computed based on the total argument.
    This argument is the number of record pairs of the entire matrix.
Below is the instruction that describes the task:
### Input:
Compute the confusion matrix.

The confusion matrix is of the following form:

+----------------------+-----------------------+----------------------+
|                      | Predicted Positives   | Predicted Negatives  |
+======================+=======================+======================+
| **True Positives**   | True Positives (TP)   | False Negatives (FN) |
+----------------------+-----------------------+----------------------+
| **True Negatives**   | False Positives (FP)  | True Negatives (TN)  |
+----------------------+-----------------------+----------------------+

The confusion matrix is an informative way to analyse a prediction. The
matrix can be used to compute measures like precision and recall. The
count of true positives is [0,0], false negatives is [0,1], true
negatives is [1,1] and false positives is [1,0].

Parameters
----------
links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
    The true (or actual) links.
links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
    The predicted links.
total: int, pandas.MultiIndex
    The count of all record pairs (both links and non-links). When the
    argument is a pandas.MultiIndex, the length of the index is used. If
    the total is None, the number of True Negatives is not computed.
    Default None.

Returns
-------
numpy.array
    The confusion matrix with TP, TN, FN, FP values.

Note
----
The number of True Negatives is computed based on the total argument.
This argument is the number of record pairs of the entire matrix.
### Response:
def confusion_matrix(links_true, links_pred, total=None):
    """Compute the confusion matrix.

    The confusion matrix is of the following form:

    +----------------------+-----------------------+----------------------+
    |                      | Predicted Positives   | Predicted Negatives  |
    +======================+=======================+======================+
    | **True Positives**   | True Positives (TP)   | False Negatives (FN) |
    +----------------------+-----------------------+----------------------+
    | **True Negatives**   | False Positives (FP)  | True Negatives (TN)  |
    +----------------------+-----------------------+----------------------+

    The confusion matrix is an informative way to analyse a prediction. The
    matrix can be used to compute measures like precision and recall. The
    count of true positives is [0,0], false negatives is [0,1], true
    negatives is [1,1] and false positives is [1,0].

    Parameters
    ----------
    links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The true (or actual) links.
    links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
        The predicted links.
    total: int, pandas.MultiIndex
        The count of all record pairs (both links and non-links). When the
        argument is a pandas.MultiIndex, the length of the index is used. If
        the total is None, the number of True Negatives is not computed.
        Default None.

    Returns
    -------
    numpy.array
        The confusion matrix with TP, TN, FN, FP values.

    Note
    ----
    The number of True Negatives is computed based on the total argument.
    This argument is the number of record pairs of the entire matrix.

    """

    links_true = _get_multiindex(links_true)
    links_pred = _get_multiindex(links_pred)

    tp = true_positives(links_true, links_pred)
    fp = false_positives(links_true, links_pred)
    fn = false_negatives(links_true, links_pred)

    if total is None:
        tn = numpy.nan
    else:
        tn = true_negatives(links_true, links_pred, total)

    return numpy.array([[tp, fn], [fp, tn]])
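A small worked example for confusion_matrix() above, following the convention in its docstring of representing links as a pandas.MultiIndex of record-pair labels; the counts in the comments follow directly from the definitions (shared pairs are TP):

import pandas as pd

links_true = pd.MultiIndex.from_tuples([(1, 11), (2, 12), (3, 13)])
links_pred = pd.MultiIndex.from_tuples([(1, 11), (2, 12), (4, 14)])

cm = confusion_matrix(links_true, links_pred, total=10)
# TP=2, FN=1, FP=1, TN=10-2-1-1=6
# -> array([[2, 1],
#           [1, 6]])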
def rescue(device, start, end): ''' Rescue a lost partition that was located somewhere between start and end. If a partition is found, parted will ask if you want to create an entry for it in the partition table. CLI Example: .. code-block:: bash salt '*' partition.rescue /dev/sda 0 8056 ''' _validate_device(device) _validate_partition_boundary(start) _validate_partition_boundary(end) cmd = 'parted -m -s {0} rescue {1} {2}'.format(device, start, end) out = __salt__['cmd.run'](cmd).splitlines() return out
Rescue a lost partition that was located somewhere between start and end. If a partition is found, parted will ask if you want to create an entry for it in the partition table. CLI Example: .. code-block:: bash salt '*' partition.rescue /dev/sda 0 8056
Below is the the instruction that describes the task: ### Input: Rescue a lost partition that was located somewhere between start and end. If a partition is found, parted will ask if you want to create an entry for it in the partition table. CLI Example: .. code-block:: bash salt '*' partition.rescue /dev/sda 0 8056 ### Response: def rescue(device, start, end): ''' Rescue a lost partition that was located somewhere between start and end. If a partition is found, parted will ask if you want to create an entry for it in the partition table. CLI Example: .. code-block:: bash salt '*' partition.rescue /dev/sda 0 8056 ''' _validate_device(device) _validate_partition_boundary(start) _validate_partition_boundary(end) cmd = 'parted -m -s {0} rescue {1} {2}'.format(device, start, end) out = __salt__['cmd.run'](cmd).splitlines() return out
def get_syslog_config(host, username, password, protocol=None, port=None, esxi_hosts=None, credstore=None): ''' Retrieve the syslog configuration. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. esxi_hosts If ``host`` is a vCenter host, then use esxi_hosts to execute this function on a list of one or more ESXi machines. credstore Optionally set to path to the credential store file. :return: Dictionary with keys and values corresponding to the syslog configuration, per host. CLI Example: .. code-block:: bash # Used for ESXi host connection information salt '*' vsphere.get_syslog_config my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.get_syslog_config my.vcenter.location root bad-password \ esxi_hosts='[esxi-1.host.com, esxi-2.host.com]' ''' cmd = 'system syslog config get' ret = {} if esxi_hosts: if not isinstance(esxi_hosts, list): raise CommandExecutionError('\'esxi_hosts\' must be a list.') for esxi_host in esxi_hosts: response = salt.utils.vmware.esxcli(host, username, password, cmd, protocol=protocol, port=port, esxi_host=esxi_host, credstore=credstore) # format the response stdout into something useful ret.update({esxi_host: _format_syslog_config(response)}) else: # Handles a single host or a vCenter connection when no esxi_hosts are provided. response = salt.utils.vmware.esxcli(host, username, password, cmd, protocol=protocol, port=port, credstore=credstore) # format the response stdout into something useful ret.update({host: _format_syslog_config(response)}) return ret
Retrieve the syslog configuration. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. esxi_hosts If ``host`` is a vCenter host, then use esxi_hosts to execute this function on a list of one or more ESXi machines. credstore Optionally set to path to the credential store file. :return: Dictionary with keys and values corresponding to the syslog configuration, per host. CLI Example: .. code-block:: bash # Used for ESXi host connection information salt '*' vsphere.get_syslog_config my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.get_syslog_config my.vcenter.location root bad-password \ esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
Below is the the instruction that describes the task: ### Input: Retrieve the syslog configuration. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. esxi_hosts If ``host`` is a vCenter host, then use esxi_hosts to execute this function on a list of one or more ESXi machines. credstore Optionally set to path to the credential store file. :return: Dictionary with keys and values corresponding to the syslog configuration, per host. CLI Example: .. code-block:: bash # Used for ESXi host connection information salt '*' vsphere.get_syslog_config my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.get_syslog_config my.vcenter.location root bad-password \ esxi_hosts='[esxi-1.host.com, esxi-2.host.com]' ### Response: def get_syslog_config(host, username, password, protocol=None, port=None, esxi_hosts=None, credstore=None): ''' Retrieve the syslog configuration. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. esxi_hosts If ``host`` is a vCenter host, then use esxi_hosts to execute this function on a list of one or more ESXi machines. credstore Optionally set to path to the credential store file. :return: Dictionary with keys and values corresponding to the syslog configuration, per host. CLI Example: .. code-block:: bash # Used for ESXi host connection information salt '*' vsphere.get_syslog_config my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.get_syslog_config my.vcenter.location root bad-password \ esxi_hosts='[esxi-1.host.com, esxi-2.host.com]' ''' cmd = 'system syslog config get' ret = {} if esxi_hosts: if not isinstance(esxi_hosts, list): raise CommandExecutionError('\'esxi_hosts\' must be a list.') for esxi_host in esxi_hosts: response = salt.utils.vmware.esxcli(host, username, password, cmd, protocol=protocol, port=port, esxi_host=esxi_host, credstore=credstore) # format the response stdout into something useful ret.update({esxi_host: _format_syslog_config(response)}) else: # Handles a single host or a vCenter connection when no esxi_hosts are provided. response = salt.utils.vmware.esxcli(host, username, password, cmd, protocol=protocol, port=port, credstore=credstore) # format the response stdout into something useful ret.update({host: _format_syslog_config(response)}) return ret
def _bdtr(k, n, p):
  """The binomial cumulative distribution function.

  Args:
    k: floating point `Tensor`.
    n: floating point `Tensor`.
    p: floating point `Tensor`.

  Returns:
    `sum_{j=0}^k C(n, j) p^j (1 - p)^(n - j)`.
  """
  # Trick for getting safe backprop/gradients into n, k when
  #   betainc(a = 0, ..) = nan
  # Write:
  #   where(unsafe, safe_output, betainc(where(unsafe, safe_input, input)))
  ones = tf.ones_like(n - k)
  k_eq_n = tf.equal(k, n)
  safe_dn = tf.where(k_eq_n, ones, n - k)
  dk = tf.math.betainc(a=safe_dn, b=k + 1, x=1 - p)
  return tf.where(k_eq_n, ones, dk)
The binomial cumulative distribution function.

  Args:
    k: floating point `Tensor`.
    n: floating point `Tensor`.
    p: floating point `Tensor`.

  Returns:
    `sum_{j=0}^k C(n, j) p^j (1 - p)^(n - j)`.
Below is the instruction that describes the task:
### Input:
The binomial cumulative distribution function.

Args:
  k: floating point `Tensor`.
  n: floating point `Tensor`.
  p: floating point `Tensor`.

Returns:
  `sum_{j=0}^k C(n, j) p^j (1 - p)^(n - j)`.
### Response:
def _bdtr(k, n, p):
  """The binomial cumulative distribution function.

  Args:
    k: floating point `Tensor`.
    n: floating point `Tensor`.
    p: floating point `Tensor`.

  Returns:
    `sum_{j=0}^k C(n, j) p^j (1 - p)^(n - j)`.
  """
  # Trick for getting safe backprop/gradients into n, k when
  #   betainc(a = 0, ..) = nan
  # Write:
  #   where(unsafe, safe_output, betainc(where(unsafe, safe_input, input)))
  ones = tf.ones_like(n - k)
  k_eq_n = tf.equal(k, n)
  safe_dn = tf.where(k_eq_n, ones, n - k)
  dk = tf.math.betainc(a=safe_dn, b=k + 1, x=1 - p)
  return tf.where(k_eq_n, ones, dk)
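A quick sanity check of _bdtr() above against SciPy's reference implementation; it relies on the standard identity I_{1-p}(n-k, k+1) = sum_{j=0}^{k} C(n,j) p^j (1-p)^(n-j), which is what the betainc call computes:

import tensorflow as tf
from scipy.special import bdtr

k, n, p = 3.0, 10.0, 0.25
print(float(_bdtr(tf.constant(k), tf.constant(n), tf.constant(p))))  # ~0.7759
print(bdtr(k, n, p))                                                 # same value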
def _group_range(records, method):
    """
    Yield the range of all dates between the extrema of
    a list of records, separated by a given time delta.
    """
    start_date = records[0].datetime
    end_date = records[-1].datetime
    _fun = DATE_GROUPERS[method]
    d = start_date

    # Day and week use timedelta
    if method not in ["month", "year"]:
        def increment(i):
            return i + timedelta(**{method + 's': 1})
    elif method == "month":
        def increment(i):
            # Wrap December into January of the next year.
            year, month = divmod(i.month + 1, 12)
            if month == 0:
                month = 12
                year = year - 1
            return i.replace(year=i.year + year, month=month)
    elif method == "year":
        def increment(i):
            return i.replace(year=i.year + 1)

    while _fun(d) <= _fun(end_date):
        yield d
        d = increment(d)
Yield the range of all dates between the extrema of a list of records, separated by a given time delta.
Below is the instruction that describes the task:
### Input:
Yield the range of all dates between the extrema of
a list of records, separated by a given time delta.
### Response:
def _group_range(records, method):
    """
    Yield the range of all dates between the extrema of
    a list of records, separated by a given time delta.
    """
    start_date = records[0].datetime
    end_date = records[-1].datetime
    _fun = DATE_GROUPERS[method]
    d = start_date

    # Day and week use timedelta
    if method not in ["month", "year"]:
        def increment(i):
            return i + timedelta(**{method + 's': 1})
    elif method == "month":
        def increment(i):
            # Wrap December into January of the next year.
            year, month = divmod(i.month + 1, 12)
            if month == 0:
                month = 12
                year = year - 1
            return i.replace(year=i.year + year, month=month)
    elif method == "year":
        def increment(i):
            return i.replace(year=i.year + 1)

    while _fun(d) <= _fun(end_date):
        yield d
        d = increment(d)
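A usage sketch for _group_range() above. Both DATE_GROUPERS and the record type are referenced but not shown in this entry, so the grouper mapping and the Record stand-in below are assumptions:

from collections import namedtuple
from datetime import datetime

# Plausible sketch of DATE_GROUPERS: each method maps to a comparable key.
DATE_GROUPERS = {
    'day':   lambda d: d.toordinal(),
    'week':  lambda d: d.isocalendar()[:2],
    'month': lambda d: (d.year, d.month),
    'year':  lambda d: d.year,
}

Record = namedtuple('Record', 'datetime')  # stand-in for the real record class
records = [Record(datetime(2020, 11, 15)), Record(datetime(2021, 2, 3))]

print(list(_group_range(records, 'month')))
# -> Nov 15 2020, Dec 15 2020, Jan 15 2021, Feb 15 2021 (month steps, same day-of-month)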
def ite_burrowed(self): """ Returns an equivalent AST that "burrows" the ITE expressions as deep as possible into the ast, for simpler printing. """ if self._burrowed is None: self._burrowed = self._burrow_ite() # pylint:disable=attribute-defined-outside-init self._burrowed._burrowed = self._burrowed # pylint:disable=attribute-defined-outside-init return self._burrowed
Returns an equivalent AST that "burrows" the ITE expressions as deep as possible into the ast, for simpler printing.
Below is the the instruction that describes the task: ### Input: Returns an equivalent AST that "burrows" the ITE expressions as deep as possible into the ast, for simpler printing. ### Response: def ite_burrowed(self): """ Returns an equivalent AST that "burrows" the ITE expressions as deep as possible into the ast, for simpler printing. """ if self._burrowed is None: self._burrowed = self._burrow_ite() # pylint:disable=attribute-defined-outside-init self._burrowed._burrowed = self._burrowed # pylint:disable=attribute-defined-outside-init return self._burrowed
def get_parent_object(self): """ Lookup a parent object. If parent_field is None this will return None. Otherwise this will try to return that object. The filter arguments are found by using the known url parameters of the bundle, finding the value in the url keyword arguments and matching them with the arguments in `self.parent_lookups`. The first argument in parent_lookups matched with the value of the last argument in the list of bundle url parameters, the second with the second last and so forth. For example let's say the parent_field attribute is 'gallery' and the current bundle knows about these url parameters: * adm_post * adm_post_gallery And the current value for 'self.kwargs' is: * adm_post = 2 * adm_post_gallery = 3 if parent_lookups isn't set the filter for the queryset on the gallery model will be: * pk = 3 if parent_lookups is ('pk', 'post__pk') then the filter on the queryset will be: * pk = 3 * post__pk = 2 The model to filter on is found by finding the relationship in self.parent_field and filtering on that model. If a match is found, 'self.queryset` is changed to filter on the parent as described above and the parent object is returned. If no match is found, a Http404 error is raised. """ if self.parent_field: # Get the model we are querying on if getattr(self.model._meta, 'init_name_map', None): # pre-django-1.8 cache = self.model._meta.init_name_map() field, mod, direct, m2m = cache[self.parent_field] else: # 1.10 if DJANGO_VERSION[1] >= 10: field = self.model._meta.get_field(self.parent_field) m2m = field.is_relation and field.many_to_many direct = not field.auto_created or field.concrete else: # 1.8 and 1.9 field, mod, direct, m2m = self.model._meta.get_field(self.parent_field) to = None field_name = None if self.parent_lookups is None: self.parent_lookups = ('pk',) url_params = list(self.bundle.url_params) if url_params and getattr(self.bundle, 'delegated', False): url_params = url_params[:-1] offset = len(url_params) - len(self.parent_lookups) kwargs = {} for i in range(len(self.parent_lookups) - 1): k = url_params[offset + i] value = self.kwargs[k] kwargs[self.parent_lookups[i + 1]] = value main_arg = self.kwargs[url_params[-1]] main_key = self.parent_lookups[0] if m2m: rel = getattr(self.model, self.parent_field) kwargs[main_key] = main_arg if direct: to = rel.field.rel.to field_name = self.parent_field else: try: from django.db.models.fields.related import ( ForeignObjectRel) if isinstance(rel.rel, ForeignObjectRel): to = rel.rel.related_model else: to = rel.rel.model except ImportError: to = rel.rel.model field_name = rel.rel.field.name else: to = field.rel.to if main_key == 'pk': to_field = field.rel.field_name if to_field == 'vid': to_field = 'object_id' else: to_field = main_key kwargs[to_field] = main_arg # Build the list of arguments try: obj = to.objects.get(**kwargs) if self.queryset is None: if m2m: self.queryset = getattr(obj, field_name) else: self.queryset = self.model.objects.filter( **{self.parent_field: obj}) return obj except to.DoesNotExist: raise http.Http404 return None
Lookup a parent object. If parent_field is None this will return None. Otherwise this will try to return that object. The filter arguments are found by using the known url parameters of the bundle, finding the value in the url keyword arguments and matching them with the arguments in `self.parent_lookups`. The first argument in parent_lookups matched with the value of the last argument in the list of bundle url parameters, the second with the second last and so forth. For example let's say the parent_field attribute is 'gallery' and the current bundle knows about these url parameters: * adm_post * adm_post_gallery And the current value for 'self.kwargs' is: * adm_post = 2 * adm_post_gallery = 3 if parent_lookups isn't set the filter for the queryset on the gallery model will be: * pk = 3 if parent_lookups is ('pk', 'post__pk') then the filter on the queryset will be: * pk = 3 * post__pk = 2 The model to filter on is found by finding the relationship in self.parent_field and filtering on that model. If a match is found, 'self.queryset` is changed to filter on the parent as described above and the parent object is returned. If no match is found, a Http404 error is raised.
Below is the the instruction that describes the task: ### Input: Lookup a parent object. If parent_field is None this will return None. Otherwise this will try to return that object. The filter arguments are found by using the known url parameters of the bundle, finding the value in the url keyword arguments and matching them with the arguments in `self.parent_lookups`. The first argument in parent_lookups matched with the value of the last argument in the list of bundle url parameters, the second with the second last and so forth. For example let's say the parent_field attribute is 'gallery' and the current bundle knows about these url parameters: * adm_post * adm_post_gallery And the current value for 'self.kwargs' is: * adm_post = 2 * adm_post_gallery = 3 if parent_lookups isn't set the filter for the queryset on the gallery model will be: * pk = 3 if parent_lookups is ('pk', 'post__pk') then the filter on the queryset will be: * pk = 3 * post__pk = 2 The model to filter on is found by finding the relationship in self.parent_field and filtering on that model. If a match is found, 'self.queryset` is changed to filter on the parent as described above and the parent object is returned. If no match is found, a Http404 error is raised. ### Response: def get_parent_object(self): """ Lookup a parent object. If parent_field is None this will return None. Otherwise this will try to return that object. The filter arguments are found by using the known url parameters of the bundle, finding the value in the url keyword arguments and matching them with the arguments in `self.parent_lookups`. The first argument in parent_lookups matched with the value of the last argument in the list of bundle url parameters, the second with the second last and so forth. For example let's say the parent_field attribute is 'gallery' and the current bundle knows about these url parameters: * adm_post * adm_post_gallery And the current value for 'self.kwargs' is: * adm_post = 2 * adm_post_gallery = 3 if parent_lookups isn't set the filter for the queryset on the gallery model will be: * pk = 3 if parent_lookups is ('pk', 'post__pk') then the filter on the queryset will be: * pk = 3 * post__pk = 2 The model to filter on is found by finding the relationship in self.parent_field and filtering on that model. If a match is found, 'self.queryset` is changed to filter on the parent as described above and the parent object is returned. If no match is found, a Http404 error is raised. 
""" if self.parent_field: # Get the model we are querying on if getattr(self.model._meta, 'init_name_map', None): # pre-django-1.8 cache = self.model._meta.init_name_map() field, mod, direct, m2m = cache[self.parent_field] else: # 1.10 if DJANGO_VERSION[1] >= 10: field = self.model._meta.get_field(self.parent_field) m2m = field.is_relation and field.many_to_many direct = not field.auto_created or field.concrete else: # 1.8 and 1.9 field, mod, direct, m2m = self.model._meta.get_field(self.parent_field) to = None field_name = None if self.parent_lookups is None: self.parent_lookups = ('pk',) url_params = list(self.bundle.url_params) if url_params and getattr(self.bundle, 'delegated', False): url_params = url_params[:-1] offset = len(url_params) - len(self.parent_lookups) kwargs = {} for i in range(len(self.parent_lookups) - 1): k = url_params[offset + i] value = self.kwargs[k] kwargs[self.parent_lookups[i + 1]] = value main_arg = self.kwargs[url_params[-1]] main_key = self.parent_lookups[0] if m2m: rel = getattr(self.model, self.parent_field) kwargs[main_key] = main_arg if direct: to = rel.field.rel.to field_name = self.parent_field else: try: from django.db.models.fields.related import ( ForeignObjectRel) if isinstance(rel.rel, ForeignObjectRel): to = rel.rel.related_model else: to = rel.rel.model except ImportError: to = rel.rel.model field_name = rel.rel.field.name else: to = field.rel.to if main_key == 'pk': to_field = field.rel.field_name if to_field == 'vid': to_field = 'object_id' else: to_field = main_key kwargs[to_field] = main_arg # Build the list of arguments try: obj = to.objects.get(**kwargs) if self.queryset is None: if m2m: self.queryset = getattr(obj, field_name) else: self.queryset = self.model.objects.filter( **{self.parent_field: obj}) return obj except to.DoesNotExist: raise http.Http404 return None
def _get_chart_info(df, vtype, cat, prep, callers): """Retrieve values for a specific variant type, category and prep method. """ maxval_raw = max(list(df["value.floor"])) curdf = df[(df["variant.type"] == vtype) & (df["category"] == cat) & (df["bamprep"] == prep)] vals = [] labels = [] for c in callers: row = curdf[df["caller"] == c] if len(row) > 0: vals.append(list(row["value.floor"])[0]) labels.append(list(row["value"])[0]) else: vals.append(1) labels.append("") return vals, labels, maxval_raw
Retrieve values for a specific variant type, category and prep method.
Below is the the instruction that describes the task: ### Input: Retrieve values for a specific variant type, category and prep method. ### Response: def _get_chart_info(df, vtype, cat, prep, callers): """Retrieve values for a specific variant type, category and prep method. """ maxval_raw = max(list(df["value.floor"])) curdf = df[(df["variant.type"] == vtype) & (df["category"] == cat) & (df["bamprep"] == prep)] vals = [] labels = [] for c in callers: row = curdf[df["caller"] == c] if len(row) > 0: vals.append(list(row["value.floor"])[0]) labels.append(list(row["value"])[0]) else: vals.append(1) labels.append("") return vals, labels, maxval_raw
def aptknt(tau, order): """Create an acceptable knot vector. Minimal emulation of MATLAB's ``aptknt``. The returned knot vector can be used to generate splines of desired `order` that are suitable for interpolation to the collocation sites `tau`. Note that this is only possible when ``len(tau)`` >= `order` + 1. When this condition does not hold, a valid knot vector is returned, but using it to generate a spline basis will not have the desired effect (the spline will return a length-zero array upon evaluation). Parameters: tau: Python list or rank-1 array, collocation sites order: int, >= 0, order of spline Returns: rank-1 array, `k` copies of ``tau[0]``, then ``aveknt(tau[1:-1], k-1)``, and finally `k` copies of ``tau[-1]``, where ``k = min(order+1, len(tau))``. """ tau = np.atleast_1d(tau) k = order + 1 if tau.ndim > 1: raise ValueError("tau must be a list or a rank-1 array") # emulate MATLAB behavior for the "k" parameter # # See # https://se.mathworks.com/help/curvefit/aptknt.html # if len(tau) < k: k = len(tau) if not (tau == sorted(tau)).all(): raise ValueError("tau must be nondecreasing") # last processed element needs to be: # i + k - 1 = len(tau)- 1 # => i + k = len(tau) # => i = len(tau) - k # u = len(tau) - k for i in range(u): if tau[i+k-1] == tau[i]: raise ValueError("k-fold (or higher) repeated sites not allowed, but tau[i+k-1] == tau[i] for i = %d, k = %d" % (i,k)) # form the output sequence # prefix = [ tau[0] ] * k suffix = [ tau[-1] ] * k # https://se.mathworks.com/help/curvefit/aveknt.html # MATLAB's aveknt(): # - averages successive k-1 entries, but ours averages k # - seems to ignore the endpoints # tmp = aveknt(tau[1:-1], k-1) middle = tmp.tolist() return np.array( prefix + middle + suffix, dtype=tmp.dtype )
Create an acceptable knot vector. Minimal emulation of MATLAB's ``aptknt``. The returned knot vector can be used to generate splines of desired `order` that are suitable for interpolation to the collocation sites `tau`. Note that this is only possible when ``len(tau)`` >= `order` + 1. When this condition does not hold, a valid knot vector is returned, but using it to generate a spline basis will not have the desired effect (the spline will return a length-zero array upon evaluation). Parameters: tau: Python list or rank-1 array, collocation sites order: int, >= 0, order of spline Returns: rank-1 array, `k` copies of ``tau[0]``, then ``aveknt(tau[1:-1], k-1)``, and finally `k` copies of ``tau[-1]``, where ``k = min(order+1, len(tau))``.
Below is the instruction that describes the task:
### Input:
Create an acceptable knot vector.

Minimal emulation of MATLAB's ``aptknt``.

The returned knot vector can be used to generate splines of desired
`order` that are suitable for interpolation to the collocation sites
`tau`.

Note that this is only possible when ``len(tau)`` >= `order` + 1.

When this condition does not hold, a valid knot vector is returned,
but using it to generate a spline basis will not have the desired
effect (the spline will return a length-zero array upon evaluation).

Parameters:
    tau: Python list or rank-1 array, collocation sites
    order: int, >= 0, order of spline

Returns:
    rank-1 array, `k` copies of ``tau[0]``, then ``aveknt(tau[1:-1], k-1)``,
    and finally `k` copies of ``tau[-1]``, where ``k = min(order+1, len(tau))``.
### Response:
def aptknt(tau, order):
    """Create an acceptable knot vector.

    Minimal emulation of MATLAB's ``aptknt``.

    The returned knot vector can be used to generate splines of desired
    `order` that are suitable for interpolation to the collocation sites
    `tau`.

    Note that this is only possible when ``len(tau)`` >= `order` + 1.

    When this condition does not hold, a valid knot vector is returned,
    but using it to generate a spline basis will not have the desired
    effect (the spline will return a length-zero array upon evaluation).

    Parameters:
        tau: Python list or rank-1 array, collocation sites
        order: int, >= 0, order of spline

    Returns:
        rank-1 array, `k` copies of ``tau[0]``, then ``aveknt(tau[1:-1], k-1)``,
        and finally `k` copies of ``tau[-1]``, where ``k = min(order+1, len(tau))``.
    """
    tau = np.atleast_1d(tau)
    k = order + 1

    if tau.ndim > 1:
        raise ValueError("tau must be a list or a rank-1 array")

    # emulate MATLAB behavior for the "k" parameter
    #
    # See
    #   https://se.mathworks.com/help/curvefit/aptknt.html
    #
    if len(tau) < k:
        k = len(tau)

    if not (tau == sorted(tau)).all():
        raise ValueError("tau must be nondecreasing")

    # last processed element needs to be:
    #     i + k - 1 = len(tau) - 1
    # =>  i + k     = len(tau)
    # =>  i         = len(tau) - k
    #
    u = len(tau) - k
    for i in range(u):
        if tau[i+k-1] == tau[i]:
            raise ValueError("k-fold (or higher) repeated sites not allowed, but tau[i+k-1] == tau[i] for i = %d, k = %d" % (i, k))

    # form the output sequence
    #
    prefix = [tau[0]] * k
    suffix = [tau[-1]] * k

    # https://se.mathworks.com/help/curvefit/aveknt.html
    #
    # MATLAB's aveknt():
    #   - averages successive k-1 entries, but ours averages k
    #   - seems to ignore the endpoints
    #
    tmp = aveknt(tau[1:-1], k-1)
    middle = tmp.tolist()

    return np.array(prefix + middle + suffix, dtype=tmp.dtype)
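A usage sketch; aveknt is not shown in this entry, so the stand-in below follows the comments above (average each run of k successive sites) and is an assumption, as is the example tau.

import numpy as np

def aveknt(t, k):
    # Hypothetical stand-in consistent with the comments in aptknt:
    # average each run of k successive sites.
    t = np.atleast_1d(t)
    return np.array([t[i:i + k].mean() for i in range(len(t) - k + 1)])

tau = [0.0, 1.0, 2.0, 4.0, 8.0]
print(aptknt(tau, order=3))
# [0. 0. 0. 0. 2.3333... 8. 8. 8. 8.]  -- len(tau) + order + 1 == 9 knots,
# valid for interpolation here since len(tau) >= order + 1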
def check_jobs_status(self, fail_running=False, fail_pending=False):
    """Check the status of all the jobs run from this link
    and return a status flag that summarizes that.

    Parameters
    ----------
    fail_running : `bool`
        If True, consider running jobs as failed

    fail_pending : `bool`
        If True, consider pending jobs as failed

    Returns
    -------
    status : `JobStatus`
        Job status flag that summarizes the status of all the jobs.
    """
    n_failed = 0
    n_passed = 0
    n_total = 0
    for job_details in self.jobs.values():
        n_total += 1
        if job_details.status in [JobStatus.failed, JobStatus.partial_failed]:
            n_failed += 1
        elif fail_running and job_details.status == JobStatus.running:
            n_failed += 1
        elif fail_pending and job_details.status == JobStatus.pending:
            n_failed += 1
        elif job_details.status == JobStatus.done:
            n_passed += 1
    if n_failed > 0:
        return JobStatus.failed
    elif n_passed == n_total:
        return JobStatus.done
    elif n_passed > 0:
        return JobStatus.running
    return JobStatus.pending
Check the status of all the jobs run from this link
and return a status flag that summarizes that.

Parameters
----------
fail_running : `bool`
    If True, consider running jobs as failed

fail_pending : `bool`
    If True, consider pending jobs as failed

Returns
-------
status : `JobStatus`
    Job status flag that summarizes the status of all the jobs.
Below is the instruction that describes the task:
### Input:
Check the status of all the jobs run from this link
and return a status flag that summarizes that.

Parameters
----------
fail_running : `bool`
    If True, consider running jobs as failed

fail_pending : `bool`
    If True, consider pending jobs as failed

Returns
-------
status : `JobStatus`
    Job status flag that summarizes the status of all the jobs.
### Response:
def check_jobs_status(self, fail_running=False, fail_pending=False):
    """Check the status of all the jobs run from this link
    and return a status flag that summarizes that.

    Parameters
    ----------
    fail_running : `bool`
        If True, consider running jobs as failed

    fail_pending : `bool`
        If True, consider pending jobs as failed

    Returns
    -------
    status : `JobStatus`
        Job status flag that summarizes the status of all the jobs.
    """
    n_failed = 0
    n_passed = 0
    n_total = 0
    for job_details in self.jobs.values():
        n_total += 1
        if job_details.status in [JobStatus.failed, JobStatus.partial_failed]:
            n_failed += 1
        elif fail_running and job_details.status == JobStatus.running:
            n_failed += 1
        elif fail_pending and job_details.status == JobStatus.pending:
            n_failed += 1
        elif job_details.status == JobStatus.done:
            n_passed += 1
    if n_failed > 0:
        return JobStatus.failed
    elif n_passed == n_total:
        return JobStatus.done
    elif n_passed > 0:
        return JobStatus.running
    return JobStatus.pending
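A self-contained sketch of the aggregation logic; JobStatus and the job records below are stand-ins (the real classes live elsewhere in the pipeline), and the function above is assumed to be defined at module level so it can be reused as a method.

import enum

class JobStatus(enum.Enum):
    # Hypothetical values; only equality comparisons matter here.
    pending = 0
    running = 1
    done = 2
    failed = 3
    partial_failed = 4

class FakeJob(object):
    def __init__(self, status):
        self.status = status

class FakeLink(object):
    check_jobs_status = check_jobs_status  # reuse the function above as a method
    def __init__(self, statuses):
        self.jobs = {i: FakeJob(s) for i, s in enumerate(statuses)}

link = FakeLink([JobStatus.done, JobStatus.running])
print(link.check_jobs_status())                   # JobStatus.running
print(link.check_jobs_status(fail_running=True))  # JobStatus.failed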
def sanitize(string): """ Catch and replace invalid path chars [replace, with] """ replace_chars = [ ['\\', '-'], [':', '-'], ['/', '-'], ['?', ''], ['<', ''], ['>', ''], ['`', '`'], ['|', '-'], ['*', '`'], ['"', '\''], ['.', ''], ['&', 'and'] ] for ch in replace_chars: string = string.replace(ch[0], ch[1]) return string
Catch and replace invalid path chars [replace, with]
Below is the instruction that describes the task:
### Input:
Catch and replace invalid path chars
[replace, with]
### Response:
def sanitize(string):
    """ Catch and replace invalid path chars
    [replace, with]
    """
    replace_chars = [
        ['\\', '-'], [':', '-'], ['/', '-'],
        ['?', ''], ['<', ''], ['>', ''],
        ['`', '`'], ['|', '-'], ['*', '`'],
        ['"', '\''], ['.', ''], ['&', 'and']
    ]
    for ch in replace_chars:
        string = string.replace(ch[0], ch[1])
    return string
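This one is fully runnable with the function above; it just shows the substitutions in action.

print(sanitize('AC/DC: Back in Black?'))  # AC-DC- Back in Black
print(sanitize('Tom & Jerry <Vol. 1>'))   # Tom and Jerry Vol 1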
def to_simple(self, serializer=None):
    """ Prepare for serialization.

    :return dict: paginator params

    """
    return dict(
        count=self.paginator.count,
        page=self.page_number,
        num_pages=self.paginator.num_pages,
        next=self.next_page,
        prev=self.previous_page,
        resources=self.resources,
    )
Prepare for serialization.

:return dict: paginator params
Below is the instruction that describes the task:
### Input:
Prepare for serialization.

:return dict: paginator params
### Response:
def to_simple(self, serializer=None):
    """ Prepare for serialization.

    :return dict: paginator params

    """
    return dict(
        count=self.paginator.count,
        page=self.page_number,
        num_pages=self.paginator.num_pages,
        next=self.next_page,
        prev=self.previous_page,
        resources=self.resources,
    )
def execute(self): """ Executes a new build on a project. """ if not self.config.pr: raise NotPullRequestException logger.debug('Using the following configuration:') for name, value in self.config.as_dict().items(): logger.debug(' - {}={}'.format(name, repr(value))) logger.info('Running Lintly against PR #{} for repo {}'.format(self.config.pr, self.project)) parser = PARSERS.get(self.config.format) self._all_violations = parser.parse_violations(self.linter_output) logger.info('Lintly found violations in {} files'.format(len(self._all_violations))) diff = self.get_pr_diff() patch = self.get_pr_patch(diff) self._diff_violations = self.find_diff_violations(patch) logger.info('Lintly found diff violations in {} files'.format(len(self._diff_violations))) self.post_pr_comment(patch) self.post_commit_status()
Executes a new build on a project.
Below is the instruction that describes the task:
### Input:
Executes a new build on a project.
### Response:
def execute(self):
    """
    Executes a new build on a project.
    """
    if not self.config.pr:
        raise NotPullRequestException
    logger.debug('Using the following configuration:')
    for name, value in self.config.as_dict().items():
        logger.debug(' - {}={}'.format(name, repr(value)))
    logger.info('Running Lintly against PR #{} for repo {}'.format(self.config.pr, self.project))
    parser = PARSERS.get(self.config.format)
    self._all_violations = parser.parse_violations(self.linter_output)
    logger.info('Lintly found violations in {} files'.format(len(self._all_violations)))
    diff = self.get_pr_diff()
    patch = self.get_pr_patch(diff)
    self._diff_violations = self.find_diff_violations(patch)
    logger.info('Lintly found diff violations in {} files'.format(len(self._diff_violations)))
    self.post_pr_comment(patch)
    self.post_commit_status()
def validate_model_specification_file(file_path: str) -> str:
    """Ensures the provided file is a yaml file"""
    if not os.path.isfile(file_path):
        raise ConfigurationError('If you provide a model specification file, it must be a file. '
                                 f'You provided {file_path}')

    extension = file_path.split('.')[-1]
    if extension not in ['yaml', 'yml']:
        raise ConfigurationError(f'Model specification files must be in a yaml format. You provided {extension}')
    # Attempt to parse the file contents so malformed yaml fails fast.
    with open(file_path) as f:
        yaml.full_load(f)
    return file_path
Ensures the provided file is a yaml file
Below is the instruction that describes the task:
### Input:
Ensures the provided file is a yaml file
### Response:
def validate_model_specification_file(file_path: str) -> str:
    """Ensures the provided file is a yaml file"""
    if not os.path.isfile(file_path):
        raise ConfigurationError('If you provide a model specification file, it must be a file. '
                                 f'You provided {file_path}')

    extension = file_path.split('.')[-1]
    if extension not in ['yaml', 'yml']:
        raise ConfigurationError(f'Model specification files must be in a yaml format. You provided {extension}')
    # Attempt to parse the file contents so malformed yaml fails fast.
    with open(file_path) as f:
        yaml.full_load(f)
    return file_path
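A runnable sketch assuming PyYAML is installed and ConfigurationError comes from the same module as the function; the temp file stands in for a real model specification.

import os
import tempfile

fd, path = tempfile.mkstemp(suffix='.yaml')
with os.fdopen(fd, 'w') as f:
    f.write('components: {}\n')
print(validate_model_specification_file(path))  # echoes the path back

try:
    validate_model_specification_file('no_such_file.yaml')
except ConfigurationError as e:
    print(e)  # 'If you provide a model specification file, it must be a file. ...'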
def SCM(root_dir, repo=None): # pylint: disable=invalid-name """Returns SCM instance that corresponds to a repo at the specified path. Args: root_dir (str): path to a root directory of the repo. repo (dvc.repo.Repo): dvc repo instance that root_dir belongs to. Returns: dvc.scm.base.Base: SCM instance. """ if Git.is_repo(root_dir) or Git.is_submodule(root_dir): return Git(root_dir, repo=repo) return NoSCM(root_dir, repo=repo)
Returns SCM instance that corresponds to a repo at the specified path. Args: root_dir (str): path to a root directory of the repo. repo (dvc.repo.Repo): dvc repo instance that root_dir belongs to. Returns: dvc.scm.base.Base: SCM instance.
Below is the instruction that describes the task:
### Input:
Returns SCM instance that corresponds to a repo at the specified
path.

Args:
    root_dir (str): path to a root directory of the repo.
    repo (dvc.repo.Repo): dvc repo instance that root_dir belongs to.

Returns:
    dvc.scm.base.Base: SCM instance.
### Response:
def SCM(root_dir, repo=None):  # pylint: disable=invalid-name
    """Returns SCM instance that corresponds to a repo at the specified
    path.

    Args:
        root_dir (str): path to a root directory of the repo.
        repo (dvc.repo.Repo): dvc repo instance that root_dir belongs to.

    Returns:
        dvc.scm.base.Base: SCM instance.
    """
    if Git.is_repo(root_dir) or Git.is_submodule(root_dir):
        return Git(root_dir, repo=repo)
    return NoSCM(root_dir, repo=repo)
def is_callable(self): """The fake can be called. This is useful for when you stub out a function as opposed to a class. For example:: >>> import fudge >>> remove = Fake('os.remove').is_callable() >>> remove('some/path') """ self._callable = Call(self, call_name=self._name, callable=True) return self
The fake can be called. This is useful for when you stub out a function as opposed to a class. For example:: >>> import fudge >>> remove = Fake('os.remove').is_callable() >>> remove('some/path')
Below is the instruction that describes the task:
### Input:
The fake can be called.

This is useful for when you stub out a function
as opposed to a class.  For example::

    >>> import fudge
    >>> remove = Fake('os.remove').is_callable()
    >>> remove('some/path')
### Response:
def is_callable(self):
    """The fake can be called.

    This is useful for when you stub out a function
    as opposed to a class.  For example::

        >>> import fudge
        >>> remove = Fake('os.remove').is_callable()
        >>> remove('some/path')

    """
    self._callable = Call(self, call_name=self._name, callable=True)
    return self
def search(self, title=None, libtype=None, **kwargs):
    """ Searching within a library section is much more powerful. It seems certain
        attributes on the media objects can be targeted to filter this search down
        a bit, but I haven't found the documentation for it.

        Example: "studio=Comedy%20Central" or "year=1999" "title=Kung Fu" all work. Other items
        such as actor=<id> seem to work, but require you already know the id of the actor.
        TLDR: This is untested but seems to work. Use library section search when you can.
    """
    args = {}
    if title:
        args['title'] = title
    if libtype:
        args['type'] = utils.searchType(libtype)
    for attr, value in kwargs.items():
        args[attr] = value
    key = '/library/all%s' % utils.joinArgs(args)
    return self.fetchItems(key)
Searching within a library section is much more powerful. It seems certain
attributes on the media objects can be targeted to filter this search down
a bit, but I haven't found the documentation for it.

Example: "studio=Comedy%20Central" or "year=1999" "title=Kung Fu" all work. Other items
such as actor=<id> seem to work, but require you already know the id of the actor.
TLDR: This is untested but seems to work. Use library section search when you can.
Below is the instruction that describes the task:
### Input:
Searching within a library section is much more powerful. It seems certain
attributes on the media objects can be targeted to filter this search down
a bit, but I haven't found the documentation for it.

Example: "studio=Comedy%20Central" or "year=1999" "title=Kung Fu" all work. Other items
such as actor=<id> seem to work, but require you already know the id of the actor.
TLDR: This is untested but seems to work. Use library section search when you can.
### Response:
def search(self, title=None, libtype=None, **kwargs):
    """ Searching within a library section is much more powerful. It seems certain
        attributes on the media objects can be targeted to filter this search down
        a bit, but I haven't found the documentation for it.

        Example: "studio=Comedy%20Central" or "year=1999" "title=Kung Fu" all work. Other items
        such as actor=<id> seem to work, but require you already know the id of the actor.
        TLDR: This is untested but seems to work. Use library section search when you can.
    """
    args = {}
    if title:
        args['title'] = title
    if libtype:
        args['type'] = utils.searchType(libtype)
    for attr, value in kwargs.items():
        args[attr] = value
    key = '/library/all%s' % utils.joinArgs(args)
    return self.fetchItems(key)
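A hypothetical usage sketch assuming this is the plexapi Library.search method; the URL, token, and titles are placeholders.

from plexapi.server import PlexServer

plex = PlexServer('http://localhost:32400', token='<token>')
# year=1999 flows through **kwargs into the query string, per the docstring.
for item in plex.library.search(title='Kung Fu', libtype='movie', year=1999):
    print(item.title)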
def get_release_number(name): ''' Returns the release number of a given release code name in a ``<year>.<month>`` context. If the release name has not been given an assigned release number, the function returns a string. If the release cannot be found, it returns ``None``. name The release codename for which to find a release number. CLI Example: .. code-block:: bash salt '*' salt_version.get_release_number 'Oxygen' ''' name = name.lower() version_map = salt.version.SaltStackVersion.LNAMES version = version_map.get(name) if version is None: log.info('Version %s not found.', name) return None if version[1] == 0: log.info('Version %s found, but no release number has been assigned yet.', name) return 'No version assigned.' return '.'.join(str(item) for item in version)
Returns the release number of a given release code name in a ``<year>.<month>`` context. If the release name has not been given an assigned release number, the function returns a string. If the release cannot be found, it returns ``None``. name The release codename for which to find a release number. CLI Example: .. code-block:: bash salt '*' salt_version.get_release_number 'Oxygen'
Below is the instruction that describes the task:
### Input:
Returns the release number of a given release code name in a
``<year>.<month>`` context.

If the release name has not been given an assigned release number, the
function returns a string. If the release cannot be found, it returns
``None``.

name
    The release codename for which to find a release number.

CLI Example:

.. code-block:: bash

    salt '*' salt_version.get_release_number 'Oxygen'
### Response:
def get_release_number(name):
    '''
    Returns the release number of a given release code name in a
    ``<year>.<month>`` context.

    If the release name has not been given an assigned release number, the
    function returns a string. If the release cannot be found, it returns
    ``None``.

    name
        The release codename for which to find a release number.

    CLI Example:

    .. code-block:: bash

        salt '*' salt_version.get_release_number 'Oxygen'
    '''
    name = name.lower()
    version_map = salt.version.SaltStackVersion.LNAMES
    version = version_map.get(name)
    if version is None:
        log.info('Version %s not found.', name)
        return None
    if version[1] == 0:
        log.info('Version %s found, but no release number has been assigned yet.', name)
        return 'No version assigned.'
    return '.'.join(str(item) for item in version)
def forward(self, observations): """ Model forward pass """ input_data = self.input_block(observations) base_output = self.backbone(input_data) log_histogram = self.q_head(base_output) return log_histogram
Model forward pass
Below is the instruction that describes the task:
### Input:
Model forward pass
### Response:
def forward(self, observations):
    """ Model forward pass """
    input_data = self.input_block(observations)
    base_output = self.backbone(input_data)
    log_histogram = self.q_head(base_output)
    return log_histogram
def resolvePrefix(self):
    """ Extract prefix information into a dict with the key '_prefixstr'.
    """
    tmpstrlist = []
    tmpstodict = {}
    for line in self.file_lines:
        if line.startswith('%'):
            stolist = line.replace('%', '').split('sto')
            rpnexp = stolist[0].strip()  # rpn expression
            rpnvar = stolist[1].strip()  # rpn variable
            tmpstodict[rpnvar] = rpnexp
    # bug: rpnval in rpnexp
    # raised an error when converting the string to float
    # Found: 2016-06-08 22:29:25 PM CST
    # Fixed: 2016-06-12 11:51:01 AM CST
    # e.g.
    #   a sto 0.1
    #   a sto b
    # then b should be 0.1,
    # i.e. b -> a -> 0.1
    # solve the 'sto chain' assignment issue.
    self.stodict = self.resolve_rpn(tmpstodict)
    for k, v in self.stodict.items():
        stostr = '% {val} sto {var}'.format(val=v, var=k)
        tmpstrlist.append(stostr)
    self.prestrdict['_prefixstr'] = tmpstrlist
Extract prefix information into a dict with the key '_prefixstr'.
Below is the instruction that describes the task:
### Input:
Extract prefix information into a dict with the key '_prefixstr'.
### Response:
def resolvePrefix(self):
    """ Extract prefix information into a dict with the key '_prefixstr'.
    """
    tmpstrlist = []
    tmpstodict = {}
    for line in self.file_lines:
        if line.startswith('%'):
            stolist = line.replace('%', '').split('sto')
            rpnexp = stolist[0].strip()  # rpn expression
            rpnvar = stolist[1].strip()  # rpn variable
            tmpstodict[rpnvar] = rpnexp
    # bug: rpnval in rpnexp
    # raised an error when converting the string to float
    # Found: 2016-06-08 22:29:25 PM CST
    # Fixed: 2016-06-12 11:51:01 AM CST
    # e.g.
    #   a sto 0.1
    #   a sto b
    # then b should be 0.1,
    # i.e. b -> a -> 0.1
    # solve the 'sto chain' assignment issue.
    self.stodict = self.resolve_rpn(tmpstodict)
    for k, v in self.stodict.items():
        stostr = '% {val} sto {var}'.format(val=v, var=k)
        tmpstrlist.append(stostr)
    self.prestrdict['_prefixstr'] = tmpstrlist
def set_mode_apm(self, mode, custom_mode = 0, custom_sub_mode = 0): '''enter arbitrary mode''' if isinstance(mode, str): mode_map = self.mode_mapping() if mode_map is None or mode not in mode_map: print("Unknown mode '%s'" % mode) return mode = mode_map[mode] # set mode by integer mode number for ArduPilot self.mav.set_mode_send(self.target_system, mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED, mode)
enter arbitrary mode
Below is the instruction that describes the task:
### Input:
enter arbitrary mode
### Response:
def set_mode_apm(self, mode, custom_mode = 0, custom_sub_mode = 0):
    '''enter arbitrary mode'''
    if isinstance(mode, str):
        mode_map = self.mode_mapping()
        if mode_map is None or mode not in mode_map:
            print("Unknown mode '%s'" % mode)
            return
        mode = mode_map[mode]
    # set mode by integer mode number for ArduPilot
    self.mav.set_mode_send(self.target_system,
                           mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
                           mode)
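A hypothetical pymavlink-style usage; the connection string is a placeholder, and GUIDED is assumed to be present in mode_mapping() for the connected autopilot.

from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()        # populate target_system before sending
master.set_mode_apm('GUIDED')  # name is looked up via mode_mapping()
master.set_mode_apm(4)         # or pass the raw custom mode number directly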
def initDeviceScan(self): """Initialize Key Stored Values.""" self.__isIphone = self.detectIphoneOrIpod() self.__isAndroidPhone = self.detectAndroidPhone() self.__isTierTablet = self.detectTierTablet() self.__isTierIphone = self.detectTierIphone() self.__isTierRichCss = self.detectTierRichCss() self.__isTierGenericMobile = self.detectTierOtherPhones()
Initialize Key Stored Values.
Below is the instruction that describes the task:
### Input:
Initialize Key Stored Values.
### Response:
def initDeviceScan(self):
    """Initialize Key Stored Values."""
    self.__isIphone = self.detectIphoneOrIpod()
    self.__isAndroidPhone = self.detectAndroidPhone()
    self.__isTierTablet = self.detectTierTablet()
    self.__isTierIphone = self.detectTierIphone()
    self.__isTierRichCss = self.detectTierRichCss()
    self.__isTierGenericMobile = self.detectTierOtherPhones()
def exponential(data):
    """ Creates a segment cost function for a time series with an exponential
    distribution with changing mean

    Args:
        data (:obj:`list` of float): 1D time series data

    Returns:
        function: Function with signature
            (int, int) -> float
            where the first arg is the starting index, and the second
            is the last arg. Returns the cost of that segment
    """
    data = np.hstack(([0.0], np.array(data)))
    cumm = np.cumsum(data)

    def cost(s, t):
        """ Cost function for exponential distribution with changing mean

        Args:
            s (int): start index
            t (int): end index

        Returns:
            float: Cost, from start to end
        """
        return -1 * (t-s) * (np.log(t-s) - np.log(cumm[t] - cumm[s]))

    return cost
Creates a segment cost function for a time series with an exponential
distribution with changing mean

Args:
    data (:obj:`list` of float): 1D time series data

Returns:
    function: Function with signature
        (int, int) -> float
        where the first arg is the starting index, and the second
        is the last arg. Returns the cost of that segment
Below is the instruction that describes the task:
### Input:
Creates a segment cost function for a time series with an exponential
distribution with changing mean

Args:
    data (:obj:`list` of float): 1D time series data

Returns:
    function: Function with signature
        (int, int) -> float
        where the first arg is the starting index, and the second
        is the last arg. Returns the cost of that segment
### Response:
def exponential(data):
    """ Creates a segment cost function for a time series with an exponential
    distribution with changing mean

    Args:
        data (:obj:`list` of float): 1D time series data

    Returns:
        function: Function with signature
            (int, int) -> float
            where the first arg is the starting index, and the second
            is the last arg. Returns the cost of that segment
    """
    data = np.hstack(([0.0], np.array(data)))
    cumm = np.cumsum(data)

    def cost(s, t):
        """ Cost function for exponential distribution with changing mean

        Args:
            s (int): start index
            t (int): end index

        Returns:
            float: Cost, from start to end
        """
        return -1 * (t-s) * (np.log(t-s) - np.log(cumm[t] - cumm[s]))

    return cost
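A runnable sketch assuming numpy is imported as np where exponential is defined; synthetic data with a known mean shift shows why this cost is useful for changepoint search.

import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.exponential(1.0, 100),   # mean ~1
                       rng.exponential(5.0, 100)])  # mean ~5
cost = exponential(data)
# Splitting at the true changepoint yields a lower total cost than
# treating the series as one homogeneous segment.
print(cost(0, 100) + cost(100, 200))  # roughly 160
print(cost(0, 200))                   # roughly 220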
def to_categorical(y, nb_classes, num_classes=None):
    """
    Converts a class vector (integers) to binary class matrix.
    This is adapted from the Keras function with the same name.
    :param y: class vector to be converted into a matrix
              (integers from 0 to nb_classes).
    :param nb_classes: total number of classes.
    :param num_classes: deprecated alias of nb_classes
    :return: A binary matrix representation of the input.
    """
    if num_classes is not None:
        if nb_classes is not None:
            raise ValueError("Should not specify both nb_classes and its deprecated "
                             "alias, num_classes")
        warnings.warn("`num_classes` is deprecated. Switch to `nb_classes`."
                      " `num_classes` may be removed on or after 2019-04-23.")
        nb_classes = num_classes
        del num_classes
    y = np.array(y, dtype='int').ravel()
    n = y.shape[0]
    categorical = np.zeros((n, nb_classes))
    categorical[np.arange(n), y] = 1
    return categorical
Converts a class vector (integers) to binary class matrix.
This is adapted from the Keras function with the same name.
:param y: class vector to be converted into a matrix
          (integers from 0 to nb_classes).
:param nb_classes: total number of classes.
:param num_classes: deprecated alias of nb_classes
:return: A binary matrix representation of the input.
Below is the instruction that describes the task:
### Input:
Converts a class vector (integers) to binary class matrix.
This is adapted from the Keras function with the same name.
:param y: class vector to be converted into a matrix
          (integers from 0 to nb_classes).
:param nb_classes: total number of classes.
:param num_classes: deprecated alias of nb_classes
:return: A binary matrix representation of the input.
### Response:
def to_categorical(y, nb_classes, num_classes=None):
    """
    Converts a class vector (integers) to binary class matrix.
    This is adapted from the Keras function with the same name.
    :param y: class vector to be converted into a matrix
              (integers from 0 to nb_classes).
    :param nb_classes: total number of classes.
    :param num_classes: deprecated alias of nb_classes
    :return: A binary matrix representation of the input.
    """
    if num_classes is not None:
        if nb_classes is not None:
            raise ValueError("Should not specify both nb_classes and its deprecated "
                             "alias, num_classes")
        warnings.warn("`num_classes` is deprecated. Switch to `nb_classes`."
                      " `num_classes` may be removed on or after 2019-04-23.")
        nb_classes = num_classes
        del num_classes
    y = np.array(y, dtype='int').ravel()
    n = y.shape[0]
    categorical = np.zeros((n, nb_classes))
    categorical[np.arange(n), y] = 1
    return categorical
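Runnable with the function above (it needs numpy as np and the warnings module in scope).

y = [0, 2, 1, 2]
print(to_categorical(y, nb_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]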
def plot_spectrum(self, t=0, f_start=None, f_stop=None, logged=False, if_id=0, c=None, **kwargs):
    """ Plot frequency spectrum of a given file

    Args:
        t (int): integration number to plot (0 -> len(data))
        logged (bool): Plot in linear (False) or dB units (True)
        if_id (int): IF identification (if multiple IF signals in file)
        c: color for line
        kwargs: keyword args to be passed to matplotlib plot()
    """
    if self.header[b'nbits'] <= 2:
        logged = False
        t = 'all'
    ax = plt.gca()
    plot_f, plot_data = self.grab_data(f_start, f_stop, if_id)
    # Using ascending frequency for all plots.
    if self.header[b'foff'] < 0:
        plot_data = plot_data[..., ::-1]  # Reverse data
        plot_f = plot_f[::-1]
    if isinstance(t, int):
        print("extracting integration %i..." % t)
        plot_data = plot_data[t]
    elif t == 'all':
        print("averaging along time axis...")
        # Since the data has been squeezed, the axis for time goes away
        # if there is only one bin, causing a bug with axis=1
        if len(plot_data.shape) > 1:
            plot_data = plot_data.mean(axis=0)
        else:
            plot_data = plot_data.mean()
    else:
        raise RuntimeError("Unknown integration %s" % t)
    # Rebin to max number of points
    dec_fac_x = 1
    if plot_data.shape[0] > MAX_PLT_POINTS:
        dec_fac_x = int(plot_data.shape[0] / MAX_PLT_POINTS)
    plot_data = rebin(plot_data, dec_fac_x, 1)
    plot_f = rebin(plot_f, dec_fac_x, 1)
    # Honor an explicit color argument; otherwise fall back to a default.
    if c:
        kwargs['c'] = c
    elif 'c' not in kwargs:
        kwargs['c'] = '#333333'
    if logged:
        plt.plot(plot_f, db(plot_data), label='Stokes I', **kwargs)
        plt.ylabel("Power [dB]")
    else:
        plt.plot(plot_f, plot_data, label='Stokes I', **kwargs)
        plt.ylabel("Power [counts]")
    plt.xlabel("Frequency [MHz]")
    plt.legend()
    try:
        plt.title(self.header[b'source_name'])
    except KeyError:
        plt.title(self.filename)
    plt.xlim(plot_f[0], plot_f[-1])
Plot frequency spectrum of a given file Args: t (int): integration number to plot (0 -> len(data)) logged (bool): Plot in linear (False) or dB units (True) if_id (int): IF identification (if multiple IF signals in file) c: color for line kwargs: keyword args to be passed to matplotlib plot()
Below is the instruction that describes the task:
### Input:
Plot frequency spectrum of a given file

Args:
    t (int): integration number to plot (0 -> len(data))
    logged (bool): Plot in linear (False) or dB units (True)
    if_id (int): IF identification (if multiple IF signals in file)
    c: color for line
    kwargs: keyword args to be passed to matplotlib plot()
### Response:
def plot_spectrum(self, t=0, f_start=None, f_stop=None, logged=False, if_id=0, c=None, **kwargs):
    """ Plot frequency spectrum of a given file

    Args:
        t (int): integration number to plot (0 -> len(data))
        logged (bool): Plot in linear (False) or dB units (True)
        if_id (int): IF identification (if multiple IF signals in file)
        c: color for line
        kwargs: keyword args to be passed to matplotlib plot()
    """
    if self.header[b'nbits'] <= 2:
        logged = False
        t = 'all'
    ax = plt.gca()
    plot_f, plot_data = self.grab_data(f_start, f_stop, if_id)
    # Using ascending frequency for all plots.
    if self.header[b'foff'] < 0:
        plot_data = plot_data[..., ::-1]  # Reverse data
        plot_f = plot_f[::-1]
    if isinstance(t, int):
        print("extracting integration %i..." % t)
        plot_data = plot_data[t]
    elif t == 'all':
        print("averaging along time axis...")
        # Since the data has been squeezed, the axis for time goes away
        # if there is only one bin, causing a bug with axis=1
        if len(plot_data.shape) > 1:
            plot_data = plot_data.mean(axis=0)
        else:
            plot_data = plot_data.mean()
    else:
        raise RuntimeError("Unknown integration %s" % t)
    # Rebin to max number of points
    dec_fac_x = 1
    if plot_data.shape[0] > MAX_PLT_POINTS:
        dec_fac_x = int(plot_data.shape[0] / MAX_PLT_POINTS)
    plot_data = rebin(plot_data, dec_fac_x, 1)
    plot_f = rebin(plot_f, dec_fac_x, 1)
    # Honor an explicit color argument; otherwise fall back to a default.
    if c:
        kwargs['c'] = c
    elif 'c' not in kwargs:
        kwargs['c'] = '#333333'
    if logged:
        plt.plot(plot_f, db(plot_data), label='Stokes I', **kwargs)
        plt.ylabel("Power [dB]")
    else:
        plt.plot(plot_f, plot_data, label='Stokes I', **kwargs)
        plt.ylabel("Power [counts]")
    plt.xlabel("Frequency [MHz]")
    plt.legend()
    try:
        plt.title(self.header[b'source_name'])
    except KeyError:
        plt.title(self.filename)
    plt.xlim(plot_f[0], plot_f[-1])
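A hypothetical usage sketch; the method resembles blimpy's Waterfall reader, so the class, file name, and frequency range below are assumptions.

import matplotlib.pyplot as plt
from blimpy import Waterfall  # assumption: this method lives on such a reader

fil = Waterfall('observation.fil')
fil.plot_spectrum(t='all', f_start=1400.0, f_stop=1420.0, logged=True)
plt.show()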
def fetch(self, multithread=True, median_kernel=5, solar_diam=740): """ For all products in products, will call the correct fetch routine and download an image :param multithread: if true will fetch the files simultaneously :type multithread: bool :param median_kernel: the size of the kernel to smooth by :type median_kernel: int >= 0 :return: a dictionary of all fetched products :rtype: dict from product string to (header, data) tuple """ # helper function to pull data def func_map(product): """ determines which function to call for a specific product and gets :param product: which product to fetch :type product: str :return: product tuple :rtype: (header, data) """ if "halpha" in product: result = self.fetch_halpha(median_kernel=median_kernel) elif "aia" in product: result = self.fetch_aia(product, median_kernel=median_kernel) elif "l1b" in product: result = self.fetch_suvi_l1b(product, median_kernel=median_kernel) elif "l2-ci" in product: result = self.fetch_suvi_composite(product, median_kernel=median_kernel) elif "limb" in product: result = self.fetch_limb(solar_diam) else: raise ValueError("{} is not a valid product.".format(product)) return result if multithread: pool = ThreadPool() results = pool.map(func_map, self.products) else: results = [func_map(product) for product in self.products] results = {product: (head, data) for product, head, data in results} return results
For all products in products, will call the correct fetch routine and download an image :param multithread: if true will fetch the files simultaneously :type multithread: bool :param median_kernel: the size of the kernel to smooth by :type median_kernel: int >= 0 :return: a dictionary of all fetched products :rtype: dict from product string to (header, data) tuple
Below is the instruction that describes the task:
### Input:
For all products in products, will call the correct fetch routine and download an image

:param multithread: if true will fetch the files simultaneously
:type multithread: bool
:param median_kernel: the size of the kernel to smooth by
:type median_kernel: int >= 0
:return: a dictionary of all fetched products
:rtype: dict from product string to (header, data) tuple
### Response:
def fetch(self, multithread=True, median_kernel=5, solar_diam=740):
    """ For all products in products, will call the correct fetch routine and download an image

    :param multithread: if true will fetch the files simultaneously
    :type multithread: bool
    :param median_kernel: the size of the kernel to smooth by
    :type median_kernel: int >= 0
    :return: a dictionary of all fetched products
    :rtype: dict from product string to (header, data) tuple
    """
    # helper function to pull data
    def func_map(product):
        """ determines which function to call for a specific product and gets

        :param product: which product to fetch
        :type product: str
        :return: product tuple
        :rtype: (header, data)
        """
        if "halpha" in product:
            result = self.fetch_halpha(median_kernel=median_kernel)
        elif "aia" in product:
            result = self.fetch_aia(product, median_kernel=median_kernel)
        elif "l1b" in product:
            result = self.fetch_suvi_l1b(product, median_kernel=median_kernel)
        elif "l2-ci" in product:
            result = self.fetch_suvi_composite(product, median_kernel=median_kernel)
        elif "limb" in product:
            result = self.fetch_limb(solar_diam)
        else:
            raise ValueError("{} is not a valid product.".format(product))
        return result

    if multithread:
        pool = ThreadPool()
        results = pool.map(func_map, self.products)
    else:
        results = [func_map(product) for product in self.products]
    results = {product: (head, data) for product, head, data in results}
    return results
def objects_list(self, bucket, prefix=None, delimiter=None, projection='noAcl', versions=False,
                 max_results=0, page_token=None):
    """Issues a request to retrieve the list of objects in a bucket.

    Args:
      bucket: the name of the bucket.
      prefix: an optional key prefix.
      delimiter: an optional key delimiter.
      projection: the projection of the objects to retrieve.
      versions: whether to list each version of a file as a distinct object.
      max_results: an optional maximum number of objects to retrieve.
      page_token: an optional token to continue the retrieval.
    Returns:
      A parsed list of object information dictionaries.
    Raises:
      Exception if there is an error performing the operation.
    """
    if max_results == 0:
        max_results = Api._MAX_RESULTS

    args = {'maxResults': max_results}
    if prefix is not None:
        args['prefix'] = prefix
    if delimiter is not None:
        args['delimiter'] = delimiter
    if projection is not None:
        args['projection'] = projection
    if versions:
        args['versions'] = 'true'
    if page_token is not None:
        args['pageToken'] = page_token

    url = Api._ENDPOINT + (Api._OBJECT_PATH % (bucket, ''))
    return google.datalab.utils.Http.request(url, args=args, credentials=self._credentials)
Issues a request to retrieve the list of objects in a bucket.

Args:
  bucket: the name of the bucket.
  prefix: an optional key prefix.
  delimiter: an optional key delimiter.
  projection: the projection of the objects to retrieve.
  versions: whether to list each version of a file as a distinct object.
  max_results: an optional maximum number of objects to retrieve.
  page_token: an optional token to continue the retrieval.
Returns:
  A parsed list of object information dictionaries.
Raises:
  Exception if there is an error performing the operation.
Below is the instruction that describes the task:
### Input:
Issues a request to retrieve the list of objects in a bucket.

Args:
  bucket: the name of the bucket.
  prefix: an optional key prefix.
  delimiter: an optional key delimiter.
  projection: the projection of the objects to retrieve.
  versions: whether to list each version of a file as a distinct object.
  max_results: an optional maximum number of objects to retrieve.
  page_token: an optional token to continue the retrieval.
Returns:
  A parsed list of object information dictionaries.
Raises:
  Exception if there is an error performing the operation.
### Response:
def objects_list(self, bucket, prefix=None, delimiter=None, projection='noAcl', versions=False,
                 max_results=0, page_token=None):
    """Issues a request to retrieve the list of objects in a bucket.

    Args:
      bucket: the name of the bucket.
      prefix: an optional key prefix.
      delimiter: an optional key delimiter.
      projection: the projection of the objects to retrieve.
      versions: whether to list each version of a file as a distinct object.
      max_results: an optional maximum number of objects to retrieve.
      page_token: an optional token to continue the retrieval.
    Returns:
      A parsed list of object information dictionaries.
    Raises:
      Exception if there is an error performing the operation.
    """
    if max_results == 0:
        max_results = Api._MAX_RESULTS

    args = {'maxResults': max_results}
    if prefix is not None:
        args['prefix'] = prefix
    if delimiter is not None:
        args['delimiter'] = delimiter
    if projection is not None:
        args['projection'] = projection
    if versions:
        args['versions'] = 'true'
    if page_token is not None:
        args['pageToken'] = page_token

    url = Api._ENDPOINT + (Api._OBJECT_PATH % (bucket, ''))
    return google.datalab.utils.Http.request(url, args=args, credentials=self._credentials)
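A hypothetical pagination loop over the wrapper above; `api` stands in for an instance of the surrounding Api class, and the bucket name is a placeholder. The items/nextPageToken keys follow the GCS JSON API response shape.

page_token = None
while True:
    resp = api.objects_list('my-bucket', prefix='logs/', max_results=500,
                            page_token=page_token)
    for item in resp.get('items', []):
        print(item['name'], item.get('size'))
    page_token = resp.get('nextPageToken')
    if not page_token:
        break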