def register_value_producer(self, value_name: str,
                            source: Callable[..., pd.DataFrame] = None,
                            preferred_combiner: Callable = replace_combiner,
                            preferred_post_processor: Callable[..., pd.DataFrame] = None) -> Pipeline:
    """Marks a ``Callable`` as the producer of a named value.

    Parameters
    ----------
    value_name :
        The name of the new dynamic value pipeline.
    source :
        A callable source for the dynamic value pipeline.
    preferred_combiner :
        A strategy for combining the source and the results of any calls
        to mutators in the pipeline. ``vivarium`` provides the strategies
        ``replace_combiner`` (the default), ``list_combiner``, and
        ``set_combiner``, which are importable from
        ``vivarium.framework.values``. Client code may define additional
        strategies as necessary.
    preferred_post_processor :
        A strategy for processing the final output of the pipeline.
        ``vivarium`` provides the strategies ``rescale_post_processor``
        and ``joint_value_post_processor``, which are importable from
        ``vivarium.framework.values``. Client code may define additional
        strategies as necessary.

    Returns
    -------
    Callable
        A callable reference to the named dynamic value pipeline.
    """
    return self._value_manager.register_value_producer(value_name, source,
                                                       preferred_combiner,
                                                       preferred_post_processor)
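The combiner strategies named in the docstring above can be illustrated with a minimal, hypothetical pipeline sketch. Only the strategy names come from the docstring; the implementation below is a simplification for illustration, not vivarium's actual code.

```python
# Minimal sketch of a dynamic value pipeline with pluggable combiners.
# This is an illustration only, NOT vivarium's implementation.

def replace_combiner(value, mutator, *args):
    # Each mutator replaces the current value with its return value.
    return mutator(value, *args)

def list_combiner(value, mutator, *args):
    # Each mutator's result is appended to a growing list.
    value.append(mutator(*args))
    return value

class SimplePipeline:
    def __init__(self, source, combiner=replace_combiner):
        self.source = source
        self.combiner = combiner
        self.mutators = []

    def __call__(self, *args):
        value = self.source(*args)
        for mutator in self.mutators:
            value = self.combiner(value, mutator, *args)
        return value

# Usage: a base value that mutators transform in turn.
pipeline = SimplePipeline(source=lambda: 10.0)
pipeline.mutators.append(lambda v: v * 2)
pipeline.mutators.append(lambda v: v + 1)
result = pipeline()  # 10.0 -> 20.0 -> 21.0
```

With `replace_combiner`, each mutator sees the running value and its return value becomes the new running value, which is why mutator order matters.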
def list_address(self, domain):
    """Get the list of addresses of a single domain."""
    try:
        response = self.get('/REST/ARecord/%s/%s' % (self.zone, domain))
    except self.NotFoundError:
        return []
    # Return a list with the addresses.
    addresses = response.content['data']
    return [Address.from_url(self, uri) for uri in addresses]
def get_all_tags(self):
    """This method returns a list of all tags."""
    data = self.get_data("tags")
    return [Tag(token=self.token, **tag) for tag in data['tags']]
def AIC(data, model, params=None, corrected=True):
    """
    Akaike Information Criteria given data and a model

    Parameters
    ----------
    {0}
    {1}
    params : int
        Number of parameters in the model. If None, calculated from
        model object.
    corrected : bool
        If True, calculates the small-sample size corrected AICC.
        Default True.

    Returns
    -------
    float
        AIC(C) value

    Notes
    -----
    AICC should be used when the number of observations is < 40.

    Examples
    --------
    >>> import macroeco.models as md
    >>> import macroeco.compare as comp
    >>> # Generate random data
    >>> rand_samp = md.nbinom_ztrunc.rvs(20, 0.5, size=100)
    >>> # Fit Zero-truncated NBD (Full model)
    >>> mle_nbd = md.nbinom_ztrunc.fit_mle(rand_samp)
    >>> # Fit a logseries (limiting case of Zero-truncated NBD, reduced model)
    >>> mle_logser = md.logser.fit_mle(rand_samp)
    >>> # Get AIC for ztrunc_nbinom
    >>> comp.AIC(rand_samp, md.nbinom_ztrunc(*mle_nbd))
    765.51518598676421
    >>> # Get AIC for logser
    >>> comp.AIC(rand_samp, md.logser(*mle_logser))
    777.05165086534805
    >>> # Support for zero-truncated NBD over logseries because AIC is
    >>> # smaller
    >>> # Call AIC with params given as 2 (should be the same as above)
    >>> comp.AIC(rand_samp, md.nbinom_ztrunc(*mle_nbd), params=2)
    765.51518598676421
    >>> # Call AIC without sample size correction
    >>> comp.AIC(rand_samp, md.nbinom_ztrunc(*mle_nbd), params=2, corrected=False)
    765.39147464655798

    References
    ----------
    .. [#] Burnham, K and Anderson, D. (2002) Model Selection and
       Multimodel Inference: A Practical and Information-Theoretic
       Approach (p. 66). New York City, USA: Springer.
    """
    n = len(data)  # Number of observations
    L = nll(data, model)

    if not params:
        k = len(model.kwds) + len(model.args)
    else:
        k = params

    if corrected:
        aic_value = 2 * k + 2 * L + (2 * k * (k + 1)) / (n - k - 1)
    else:
        aic_value = 2 * k + 2 * L

    return aic_value
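The corrected-AIC arithmetic in the body above is simple enough to check by hand. A self-contained sketch, with a made-up negative log-likelihood value standing in for `nll` (which is not shown here):

```python
# Stand-alone AIC/AICc arithmetic, mirroring the formulas in the
# function above. The nll value is invented for illustration.

def aic(n, k, nll_value, corrected=True):
    value = 2 * k + 2 * nll_value
    if corrected:
        # Small-sample correction term; only valid when n > k + 1.
        value += (2 * k * (k + 1)) / (n - k - 1)
    return value

plain = aic(n=100, k=2, nll_value=380.0, corrected=False)
small_sample = aic(n=100, k=2, nll_value=380.0, corrected=True)
# The correction adds 2*2*3 / 97, a tiny penalty at n=100 that grows
# quickly as n approaches k.
```

Note that the correction term blows up as `n` approaches `k + 1`, which is another reason AICC is reserved for small but not tiny samples.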
def update_bgp_peer(self, bgp_peer_id, body=None):
    """Update a BGP peer."""
    return self.put(self.bgp_peer_path % bgp_peer_id, body=body)
def fenceloader(self):
    '''fence loader by sysid'''
    if self.target_system not in self.fenceloader_by_sysid:
        self.fenceloader_by_sysid[self.target_system] = mavwp.MAVFenceLoader()
    return self.fenceloader_by_sysid[self.target_system]
def compare_field_caches(self, replica, original):
    """Verify original is subset of replica"""
    if original is None:
        original = []
    if replica is None:
        replica = []
    self.pr_dbg("Comparing orig with %s fields to replica with %s fields"
                % (len(original), len(replica)))
    # Convert each list into a dict, with each item's ['name'] as key.
    orig = self.list_to_compare_dict(original)
    if orig is None:
        self.pr_dbg("Original has duplicate fields")
        return 1
    repl = self.list_to_compare_dict(replica)
    if repl is None:
        self.pr_dbg("Replica has duplicate fields")
        return 1
    # Search orig for each item in repl; complain if an item in repl is
    # missing from orig, and make sure the contents of each item match.
    orig_found = {}
    for (key, field) in iteritems(repl):
        field_name = field['name']
        if field_name not in orig:
            self.pr_dbg("Replica has field not found in orig %s: %s"
                        % (field_name, field))
            return 1
        orig_found[field_name] = True
        if orig[field_name] != field:
            self.pr_dbg("Field in replica doesn't match orig:")
            self.pr_dbg("orig:%s\nrepl:%s" % (orig[field_name], field))
            return 1
    # Any orig field that was never seen while walking repl is missing
    # from the replica. (The original computed
    # set(orig_found.keys()) - set(repl.keys()), which is always empty
    # since orig_found is populated from repl.)
    unfound = set(orig.keys()) - set(orig_found.keys())
    if len(unfound) > 0:
        self.pr_dbg("Orig contains fields that were not in replica")
        self.pr_dbg('%s' % unfound)
        return 1
    # We don't care about the case where replica has more fields than orig.
    # unfound = set(repl.keys()) - set(orig_found.keys())
    # if len(unfound) > 0:
    #     self.pr_dbg("Replica contains fields that were not in orig")
    #     self.pr_dbg('%s' % unfound)
    #     return 1
    self.pr_dbg("Original matches replica")
    return 0
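The name-keyed subset check above can be reduced to a few lines of plain-dict logic. A hedged sketch without the class plumbing, where `list_to_compare_dict` and `pr_dbg` are replaced with inline equivalents:

```python
# Stand-alone version of the field-cache comparison: returns 0 when
# every field in `original` appears in `replica` with identical
# contents, 1 otherwise (matching the 0/1 convention above).

def fields_match(replica, original):
    orig = {f['name']: f for f in (original or [])}
    repl = {f['name']: f for f in (replica or [])}
    # Duplicate names collapse in the dict, so a length mismatch
    # signals duplicates (the role of list_to_compare_dict above).
    if len(orig) != len(original or []) or len(repl) != len(replica or []):
        return 1
    for name, field in orig.items():
        if repl.get(name) != field:
            return 1  # missing from replica, or contents differ
    return 0  # original is a subset of replica

subset = [{'name': 'id', 'type': 'int'}]
superset = [{'name': 'id', 'type': 'int'}, {'name': 'ts', 'type': 'str'}]
ok = fields_match(superset, subset)        # 0: original fits in replica
missing = fields_match(subset, superset)   # 1: 'ts' absent from replica
```

The asymmetry is deliberate: extra fields in the replica are tolerated, extra fields in the original are not.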
def _compute_term2(self, C, mag, rrup):
    """
    This computes the term f2 in equation 32, page 1021
    """
    c78_factor = (C['c7'] * np.exp(C['c8'] * mag)) ** 2
    R = np.sqrt(rrup ** 2 + c78_factor)
    return C['c4'] * np.log(R) + (C['c5'] + C['c6'] * mag) * rrup
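Numerically, the term above is a magnitude-dependent effective distance plugged into a log-linear attenuation form. A sketch with invented coefficient values (the real `C` values come from the GMPE's coefficient table, which is not shown here):

```python
import math

# Effective distance R = sqrt(rrup^2 + (c7 * exp(c8 * M))^2), then
# f2 = c4 * ln(R) + (c5 + c6 * M) * rrup.
# The coefficients below are made up purely for illustration.
C = {'c4': -1.2, 'c5': -0.002, 'c6': 0.0001, 'c7': 0.3, 'c8': 0.6}
mag, rrup = 6.5, 50.0

c78_factor = (C['c7'] * math.exp(C['c8'] * mag)) ** 2
R = math.sqrt(rrup ** 2 + c78_factor)
f2 = C['c4'] * math.log(R) + (C['c5'] + C['c6'] * mag) * rrup
```

The `c78_factor` acts as a fictitious depth term: it keeps `R` strictly above `rrup`, so the logarithm never diverges at zero rupture distance.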
def _solve_location(self, req, dut_req_len, idx):
    """
    Helper function for resolving the location for a resource.

    :param req: Requirements dictionary
    :param dut_req_len: Amount of required resources
    :param idx: index, integer
    :return: Nothing, modifies req object
    """
    if not req.get("location"):
        return
    if len(req.get("location")) == 2:
        for x_and_y, coord in enumerate(req.get("location")):
            if isinstance(coord, string_types):
                coord = ResourceConfig.__replace_coord_variables(
                    coord, x_and_y, dut_req_len, idx)
                try:
                    loc = req.get("location")
                    loc[x_and_y] = eval(coord)  # pylint: disable=eval-used
                    req.set("location", loc)
                except SyntaxError as error:
                    self.logger.error(error)
                    loc = req.get("location")
                    loc[x_and_y] = 0.0
                    req.set("location", loc)
    else:
        self.logger.error("invalid location field!")
        req.set("location", [0.0, 0.0])
def get_quad_by_id(self, mosaic, quad_id):
    '''Get a quad response for a specific mosaic and quad.

    :param mosaic dict: A mosaic representation from the API
    :param quad_id str: A quad id (typically <xcoord>-<ycoord>)
    :returns: :py:Class:`planet.api.models.JSON`
    :raises planet.api.exceptions.APIException: On API error.
    '''
    path = 'basemaps/v1/mosaics/{}/quads/{}'.format(mosaic['id'], quad_id)
    return self._get(self._url(path)).get_body()
def register_shortcuts(self):
    '''
    .. versionchanged:: 0.14
        Add keyboard shortcuts to set neighbouring electrode states based
        on directional input using ``<Control>`` key plus the
        corresponding direction (e.g., ``<Control>Up``)
    '''
    def control_protocol(command):
        if self.plugin is not None:
            self.plugin.execute_async('microdrop.gui.protocol_controller',
                                      command)

    def actuate_direction(direction):
        if self.plugin is not None:
            self.plugin.execute_async('microdrop.electrode_controller_plugin',
                                      'set_electrode_direction_states',
                                      direction=direction)

    # Tie shortcuts to protocol controller commands (next, previous, etc.)
    shortcuts = {'<Control>r': lambda *args: control_protocol('run_protocol'),
                 '<Control>z': lambda *args: self.undo(),
                 '<Control>y': lambda *args: self.redo(),
                 'A': lambda *args: control_protocol('first_step'),
                 'S': lambda *args: control_protocol('prev_step'),
                 'D': lambda *args: control_protocol('next_step'),
                 'F': lambda *args: control_protocol('last_step'),
                 '<Control>Up': lambda *args: actuate_direction('up'),
                 '<Control>Down': lambda *args: actuate_direction('down'),
                 '<Control>Left': lambda *args: actuate_direction('left'),
                 '<Control>Right': lambda *args: actuate_direction('right')}
    register_shortcuts(self.widget.parent, shortcuts)
def query_columns(conn, query, name=None):
    """Lightweight query to retrieve column list of select query.

    Notes
    -----
    Strongly urged to specify a cursor name for performance.
    """
    with conn.cursor(name) as cursor:
        cursor.itersize = 1
        cursor.execute(query)
        cursor.fetchmany(0)
        column_names = [column.name for column in cursor.description]
    return column_names
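The trick of reading `cursor.description` without fetching rows is portable across DB-API drivers; the named server-side cursor above is a psycopg2 detail. A minimal sketch against an in-memory SQLite database:

```python
import sqlite3

# DB-API cursors expose column metadata via cursor.description after
# execute(), before any rows are fetched; each entry is a 7-tuple
# whose first element is the column name.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, label TEXT)')

cur = conn.cursor()
cur.execute('SELECT id, label FROM t')
columns = [d[0] for d in cur.description]  # no rows needed
conn.close()
```

In sqlite3 the description entries are plain tuples (hence `d[0]`), while psycopg2 wraps them in objects with a `.name` attribute, which is why the function above uses `column.name`.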
def error2str(e):
    """returns the formatted stacktrace of the exception `e`.

    :param BaseException e: an exception to format into str
    :rtype: str
    """
    out = StringIO()
    traceback.print_exception(None, e, e.__traceback__, file=out)
    out.seek(0)
    return out.read()
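The same string can be produced without a `StringIO` buffer via `traceback.format_exception`, which returns the trace as a list of lines:

```python
import traceback

# Equivalent formatting using traceback.format_exception, which
# returns the stack trace as a list of strings instead of printing
# to a stream.
def error2str(e):
    return ''.join(traceback.format_exception(type(e), e, e.__traceback__))

try:
    1 / 0
except ZeroDivisionError as exc:
    report = error2str(exc)
```

Since Python 3.5 the first argument is ignored in favour of `type(value)`, which is why the original can pass `None` there; passing `type(e)` explicitly works on all 3.x versions.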
def to_seconds(value, strict=True, force_int=True):
    """
    converts duration value to integer seconds

    strict=True (by default) raises StrictnessError if either hours,
    minutes or seconds in duration value exceed allowed values
    """
    if isinstance(value, int):
        return value  # assuming it's seconds
    elif isinstance(value, timedelta):
        seconds = value.total_seconds()
        if force_int:
            seconds = int(round(seconds))
        return seconds
    elif isinstance(value, str):
        hours, minutes, seconds = _parse(value, strict)
    elif isinstance(value, tuple):
        check_tuple(value, strict)
        hours, minutes, seconds = value
    else:
        raise TypeError(
            'Value %s (type %s) not supported' % (
                value, type(value).__name__
            )
        )
    if not (hours or minutes or seconds):
        raise ValueError('No hours, minutes or seconds found')
    result = hours * 3600 + minutes * 60 + seconds
    return result
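The string branch delegates to a `_parse` helper that is not shown. A hedged sketch of what such a parser might look like for `HH:MM:SS` input (the real helper may accept other formats, and the library's actual error type is `StrictnessError`, not `ValueError`):

```python
import re

# Hypothetical stand-in for the _parse helper used above: accepts
# "HH:MM:SS" and, in strict mode, rejects minutes/seconds >= 60.
def parse_duration(value, strict=True):
    match = re.fullmatch(r'(\d+):(\d{1,2}):(\d{1,2})', value)
    if match is None:
        raise ValueError('Unrecognised duration: %r' % value)
    hours, minutes, seconds = (int(g) for g in match.groups())
    if strict and (minutes > 59 or seconds > 59):
        raise ValueError('minutes/seconds out of range: %r' % value)
    return hours, minutes, seconds

h, m, s = parse_duration('01:30:15')
total = h * 3600 + m * 60 + s  # 5415
```

The final arithmetic matches the last lines of `to_seconds`: the parser only splits the string, and the conversion to seconds happens once for all input types.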
def post_ext_init(state):
    """Setup blueprint."""
    app = state.app
    app.config.setdefault(
        'OAUTHCLIENT_SITENAME',
        app.config.get('THEME_SITENAME', 'Invenio'))
    app.config.setdefault(
        'OAUTHCLIENT_BASE_TEMPLATE',
        app.config.get('BASE_TEMPLATE',
                       'invenio_oauthclient/base.html'))
    app.config.setdefault(
        'OAUTHCLIENT_COVER_TEMPLATE',
        app.config.get('COVER_TEMPLATE',
                       'invenio_oauthclient/base_cover.html'))
    app.config.setdefault(
        'OAUTHCLIENT_SETTINGS_TEMPLATE',
        app.config.get('SETTINGS_TEMPLATE',
                       'invenio_oauthclient/settings/base.html'))
def google_analytics(parser, token):
    """
    Google Analytics tracking template tag.

    Renders Javascript code to track page visits. You must supply your
    website property ID (as a string) in the
    ``GOOGLE_ANALYTICS_PROPERTY_ID`` setting.
    """
    bits = token.split_contents()
    if len(bits) > 1:
        raise TemplateSyntaxError("'%s' takes no arguments" % bits[0])
    return GoogleAnalyticsNode()
def _read_function(ctx: ReaderContext) -> llist.List:
    """Read a function reader macro from the input stream."""
    if ctx.is_in_anon_fn:
        raise SyntaxError("Nested #() definitions not allowed")
    with ctx.in_anon_fn():
        form = _read_list(ctx)
    arg_set = set()

    def arg_suffix(arg_num):
        if arg_num is None:
            return "1"
        elif arg_num == "&":
            return "rest"
        else:
            return arg_num

    def sym_replacement(arg_num):
        suffix = arg_suffix(arg_num)
        return symbol.symbol(f"arg-{suffix}")

    def identify_and_replace(f):
        if isinstance(f, symbol.Symbol):
            if f.ns is None:
                match = fn_macro_args.match(f.name)
                if match is not None:
                    arg_num = match.group(2)
                    suffix = arg_suffix(arg_num)
                    arg_set.add(suffix)
                    return sym_replacement(arg_num)
        return f

    body = walk.postwalk(identify_and_replace, form) if len(form) > 0 else None

    arg_list: List[symbol.Symbol] = []
    numbered_args = sorted(map(int, filter(lambda k: k != "rest", arg_set)))
    if len(numbered_args) > 0:
        max_arg = max(numbered_args)
        arg_list = [sym_replacement(str(i)) for i in range(1, max_arg + 1)]
    if "rest" in arg_set:
        arg_list.append(_AMPERSAND)
        arg_list.append(sym_replacement("rest"))

    return llist.l(_FN, vector.vector(arg_list), body)
def smartypants(text):
    """
    Transforms sequences of characters into HTML entities.

    =================================== ===================== =========
    Markdown                            HTML                  Result
    =================================== ===================== =========
    ``'s`` (s, t, m, d, re, ll, ve)     &rsquo;s              ’s
    ``"Quotes"``                        &ldquo;Quotes&rdquo;  “Quotes”
    ``---``                             &mdash;               —
    ``--``                              &ndash;               –
    ``...``                             &hellip;              …
    ``. . .``                           &hellip;              …
    ``(c)``                             &copy;                ©
    ``(r)``                             &reg;                 ®
    ``(tm)``                            &trade;               ™
    ``3/4``                             &frac34;              ¾
    ``1/2``                             &frac12;              ½
    ``1/4``                             &frac14;              ¼
    =================================== ===================== =========
    """
    byte_str = text.encode('utf-8')
    ob = lib.hoedown_buffer_new(OUNIT)
    lib.hoedown_html_smartypants(ob, byte_str, len(byte_str))
    try:
        return to_string(ob)
    finally:
        lib.hoedown_buffer_free(ob)
Transforms sequences of characters into HTML entities.

=================================== ===================== =========
Markdown                            HTML                  Result
=================================== ===================== =========
``'s`` (s, t, m, d, re, ll, ve)     &rsquo;s              ’s
``"Quotes"``                        &ldquo;Quotes&rdquo;  “Quotes”
``---``                             &mdash;               —
``--``                             &ndash;               –
``...``                             &hellip;              …
``. . .``                           &hellip;              …
``(c)``                             &copy;                ©
``(r)``                             &reg;                 ®
``(tm)``                            &trade;               ™
``3/4``                             &frac34;              ¾
``1/2``                             &frac12;              ½
``1/4``                             &frac14;              ¼
=================================== ===================== =========
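The `lib.hoedown_html_smartypants` call above delegates to a C library, so its behavior is opaque here. The context-free rows of the table can be illustrated with plain string replacement. This is only a sketch (`smartypants_sketch` and `_SUBSTITUTIONS` are hypothetical names, not part of the hoedown binding), and it deliberately skips the quote rules, which need surrounding context to choose between opening and closing entities:

```python
# Order matters: '---' must be replaced before '--' so that an em-dash
# is not consumed as an en-dash plus a stray hyphen.
_SUBSTITUTIONS = [
    ("---", "&mdash;"),
    ("--", "&ndash;"),
    ("...", "&hellip;"),
    (". . .", "&hellip;"),
    ("(c)", "&copy;"),
    ("(r)", "&reg;"),
    ("(tm)", "&trade;"),
    ("3/4", "&frac34;"),
    ("1/2", "&frac12;"),
    ("1/4", "&frac14;"),
]


def smartypants_sketch(text):
    """Apply the context-free substitutions from the table above, in order."""
    for plain, entity in _SUBSTITUTIONS:
        text = text.replace(plain, entity)
    return text


print(smartypants_sketch("em---dash, en--dash, wait..."))
# em&mdash;dash, en&ndash;dash, wait&hellip;
print(smartypants_sketch("(c) 2019, 1/2 price"))
# &copy; 2019, &frac12; price
```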
def add_alt_text(self, alt_text):
    """Adds an alt_text.

    arg:    alt_text (displayText): the new alt_text
    raise:  InvalidArgument - ``alt_text`` is invalid
    raise:  NoAccess - ``Metadata.isReadOnly()`` is ``true``
    raise:  NullArgument - ``alt_text`` is ``null``
    *compliance: mandatory -- This method must be implemented.*

    """
    if self.get_alt_texts_metadata().is_read_only():
        raise NoAccess()
    self.add_or_replace_value('altTexts', alt_text)
Adds an alt_text.

arg:    alt_text (displayText): the new alt_text
raise:  InvalidArgument - ``alt_text`` is invalid
raise:  NoAccess - ``Metadata.isReadOnly()`` is ``true``
raise:  NullArgument - ``alt_text`` is ``null``
*compliance: mandatory -- This method must be implemented.*
def text_with_newlines(text, line_length=78, newline='\n'):
    '''Return text with a `newline` inserted after each `line_length` char.

    Return `text` unchanged if line_length == 0.
    '''
    if line_length > 0:
        if len(text) <= line_length:
            return text
        else:
            return newline.join([text[idx:idx + line_length]
                                 for idx in range(0, len(text), line_length)])
    else:
        return text
Return text with a `newline` inserted after each `line_length` char. Return `text` unchanged if line_length == 0.
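The slicing logic is easy to exercise directly. A minimal usage sketch, with the function copied from the entry above:

```python
def text_with_newlines(text, line_length=78, newline='\n'):
    '''Return text with a `newline` inserted after each `line_length` char.

    Return `text` unchanged if line_length == 0.
    '''
    if line_length > 0:
        if len(text) <= line_length:
            return text
        # Slice the text into line_length-sized chunks; the final chunk
        # may be shorter.
        return newline.join([text[idx:idx + line_length]
                             for idx in range(0, len(text), line_length)])
    return text


print(text_with_newlines("abcdefgh", line_length=3))
# prints "abc", "def", "gh" on three lines
print(text_with_newlines("short", line_length=10))   # unchanged: short
print(text_with_newlines("anything", line_length=0))  # unchanged: anything
```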
def do_connect(self, arg):
    ''' Connect to the arm. '''
    if self.arm.is_connected():
        print(self.style.error('Error: ', 'Arm is already connected.'))
    else:
        try:
            port = self.arm.connect()
            print(self.style.success('Success: ',
                                     'Connected to \'{}\'.'.format(port)))
        except r12.ArmException as e:
            print(self.style.error('Error: ', str(e)))
Connect to the arm.
def update(self, *args, **kw):
    '''
    Update the dictionary with items and names::

        (items, names, **kw)
        (dict, names, **kw)
        (MIDict, names, **kw)

    Optional positional argument ``names`` is only allowed when
    ``self.indices`` is empty (no indices are set yet).
    '''
    if len(args) > 1 and self.indices:
        raise ValueError('Only one positional argument is allowed when the '
                         'index names are already set.')

    if not self.indices:  # empty; init again
        _MI_init(self, *args, **kw)
        return

    d = MIMapping(*args, **kw)
    if not d.indices:
        return

    names = force_list(self.indices.keys())
    if len(d.indices) != len(names):
        raise ValueError('Length of update items (%s) does not match '
                         'length of original items (%s)' %
                         (len(d.indices), len(names)))

    for key in d:  # use __setitem__() to handle duplicate
        self[key] = d[key]
Update the dictionary with items and names::

    (items, names, **kw)
    (dict, names, **kw)
    (MIDict, names, **kw)

Optional positional argument ``names`` is only allowed when ``self.indices``
is empty (no indices are set yet).
def key_type(key, host=None, port=None, db=None, password=None):
    '''
    Get redis key type

    CLI Example:

    .. code-block:: bash

        salt '*' redis.type foo
    '''
    server = _connect(host, port, db, password)
    return server.type(key)
Get redis key type

CLI Example:

.. code-block:: bash

    salt '*' redis.type foo
def _conan_user_home(self, conan, in_workdir=False):
    """Create the CONAN_USER_HOME for this task fingerprint and initialize the
    Conan remotes.

    See https://docs.conan.io/en/latest/reference/commands/consumer/config.html#conan-config-install
    for docs on configuring remotes.
    """
    # This argument is exposed so tests don't leak out of the workdir.
    if in_workdir:
        base_cache_dir = self.workdir
    else:
        base_cache_dir = get_pants_cachedir()
    user_home_base = os.path.join(base_cache_dir, 'conan-support', 'conan-user-home')
    # Locate the subdirectory of the pants shared cachedir specific to this task's option values.
    user_home = os.path.join(user_home_base, self.fingerprint)
    conan_install_base = os.path.join(user_home, '.conan')

    # Conan doesn't copy remotes.txt into the .conan subdir after the "config install" command, it
    # simply edits registry.json. However, it is valid to have this file there, and Conan won't
    # touch it, so we use its presence to detect whether we have appropriately initialized the
    # Conan installation.
    remotes_txt_sentinel = os.path.join(conan_install_base, 'remotes.txt')
    if not os.path.isfile(remotes_txt_sentinel):
        safe_mkdir(conan_install_base)
        # Conan doesn't consume the remotes.txt file just by being in the conan directory -- we need
        # to create another directory containing any selection of files detailed in
        # https://docs.conan.io/en/latest/reference/commands/consumer/config.html#conan-config-install
        # and "install" from there to our desired conan directory.
        with temporary_dir() as remotes_install_dir:
            # Create an artificial conan configuration dir containing just remotes.txt.
            remotes_txt_for_install = os.path.join(remotes_install_dir, 'remotes.txt')
            safe_file_dump(remotes_txt_for_install, self._remotes_txt_content)
            # Configure the desired user home from this artificial config dir.
            argv = ['config', 'install', remotes_install_dir]
            workunit_factory = functools.partial(
                self.context.new_workunit,
                name='initial-conan-config',
                labels=[WorkUnitLabel.TOOL])
            env = {
                'CONAN_USER_HOME': user_home,
            }
            cmdline, exit_code = conan.run(workunit_factory, argv, env=env)
            if exit_code != 0:
                raise self.ConanConfigError(
                    'Error configuring conan with argv {} and environment {}: exited non-zero ({}).'
                    .format(cmdline, env, exit_code),
                    exit_code=exit_code)
        # Generate the sentinel file so that we know the remotes have been successfully configured
        # for this particular task fingerprint in successive pants runs.
        safe_file_dump(remotes_txt_sentinel, self._remotes_txt_content)
    return user_home
Create the CONAN_USER_HOME for this task fingerprint and initialize the Conan remotes. See https://docs.conan.io/en/latest/reference/commands/consumer/config.html#conan-config-install for docs on configuring remotes.
def prior_prediction(self):
    """get a dict of prior prediction variances

    Returns
    -------
    prior_prediction : dict
        dictionary of prediction name, prior variance pairs

    """
    if self.__prior_prediction is not None:
        return self.__prior_prediction
    else:
        if self.predictions is not None:
            self.log("propagating prior to predictions")
            prior_cov = self.predictions.T * \
                        self.parcov * self.predictions
            self.__prior_prediction = {n: v for n, v in
                                       zip(prior_cov.row_names,
                                           np.diag(prior_cov.x))}
            self.log("propagating prior to predictions")
        else:
            self.__prior_prediction = {}
        return self.__prior_prediction
get a dict of prior prediction variances

Returns
-------
prior_prediction : dict
    dictionary of prediction name, prior variance pairs
def ifar(self, coinc_stat):
    """Return the FAR that would be associated with the given coincident
    statistic.
    """
    n = self.coincs.num_greater(coinc_stat)
    return self.background_time / lal.YRJUL_SI / (n + 1)
Return the FAR that would be associated with the given coincident statistic.
def get_typelist(self):
    """
    This collects all available types and applies include/exclude filters
    """
    typelist = []

    # convert type list into arrays if strings
    if isinstance(self.config['type_include'], basestring):
        self.config['type_include'] = self.config['type_include'].split()
    if isinstance(self.config['type_exclude'], basestring):
        self.config['type_exclude'] = self.config['type_exclude'].split()

    # remove any not in include list
    if self.config['type_include'] is None or len(
            self.config['type_include']) == 0:
        typelist = os.popen("lsof | awk '{ print $5 }' | sort | uniq -d"
                            ).read().split()
    else:
        typelist = self.config['type_include']

    # remove any in the exclude list
    if self.config['type_exclude'] is not None and len(
            self.config['type_include']) > 0:
        for t in self.config['type_exclude']:
            if t in typelist:
                typelist.remove(t)

    return typelist
This collects all available types and applies include/exclude filters
def add_profile(self, namespace, key, value, force=False):
    """ Add profile information to this node at the DAX level
    """
    try:
        entry = dax.Profile(namespace, key, value)
        self._dax_node.addProfile(entry)
    except dax.DuplicateError:
        if force:
            # Replace with the new key
            self._dax_node.removeProfile(entry)
            self._dax_node.addProfile(entry)
Add profile information to this node at the DAX level
def _import_module(module_name, warn=True, prefix='_py_', ignore='_'):
    """Try import all public attributes from module into global namespace.

    Existing attributes with name clashes are renamed with prefix.
    Attributes starting with underscore are ignored by default.

    Return True on successful import.

    """
    try:
        module = __import__(module_name)
    except ImportError:
        if warn:
            warnings.warn("Failed to import module " + module_name)
    else:
        for attr in dir(module):
            if ignore and attr.startswith(ignore):
                continue
            if prefix:
                if attr in globals():
                    globals()[prefix + attr] = globals()[attr]
                elif warn:
                    warnings.warn("No Python implementation of " + attr)
            globals()[attr] = getattr(module, attr)
        return True
Try import all public attributes from module into global namespace. Existing attributes with name clashes are renamed with prefix. Attributes starting with underscore are ignored by default. Return True on successful import.
def multipublish_tcp(self, topic, messages, **kwargs):
    """Use :meth:`NsqdTCPClient.multipublish` instead.

    .. deprecated:: 1.0.0
    """
    return self.__tcp_client.multipublish(topic, messages, **kwargs)
Use :meth:`NsqdTCPClient.multipublish` instead.

.. deprecated:: 1.0.0
def angular_error(self, axis_length):
    """
    The angular error for an in-plane axis of given length
    (either a PCA major axis or an intermediate direction).
    """
    hyp_axes = self.method(self)
    return N.arctan2(hyp_axes[-1], axis_length)
The angular error for an in-plane axis of given length (either a PCA major axis or an intermediate direction).
def get_argument(self, name, default=None, strip=True):
    """
    Returns the value of the argument with the given name.

    If default is not provided, returns ``None``

    If the argument appears in the url more than once, we return the
    last value.

    The returned value is always unicode
    """
    return self._get_argument(name, default, self.request.arguments, strip)[name]
Returns the value of the argument with the given name.

If default is not provided, returns ``None``

If the argument appears in the url more than once, we return the last value.

The returned value is always unicode
def _init_tools(self, element, callbacks=[]):
    """
    Processes the list of tools to be supplied to the plot.
    """
    tooltips, hover_opts = self._hover_opts(element)
    tooltips = [(ttp.pprint_label, '@{%s}' % util.dimension_sanitizer(ttp.name))
                if isinstance(ttp, Dimension) else ttp for ttp in tooltips]
    if not tooltips:
        tooltips = None

    callbacks = callbacks + self.callbacks
    cb_tools, tool_names = [], []
    hover = False
    for cb in callbacks:
        for handle in cb.models + cb.extra_models:
            if handle and handle in known_tools:
                tool_names.append(handle)
                if handle == 'hover':
                    tool = tools.HoverTool(
                        tooltips=tooltips, tags=['hv_created'], **hover_opts)
                    hover = tool
                else:
                    tool = known_tools[handle]()
                cb_tools.append(tool)
                self.handles[handle] = tool

    tool_list = [
        t for t in cb_tools + self.default_tools + self.tools
        if t not in tool_names]

    copied_tools = []
    for tool in tool_list:
        if isinstance(tool, tools.Tool):
            properties = tool.properties_with_values(include_defaults=False)
            tool = type(tool)(**properties)
        copied_tools.append(tool)

    hover_tools = [t for t in copied_tools if isinstance(t, tools.HoverTool)]
    if 'hover' in copied_tools:
        hover = tools.HoverTool(tooltips=tooltips, tags=['hv_created'],
                                **hover_opts)
        copied_tools[copied_tools.index('hover')] = hover
    elif any(hover_tools):
        hover = hover_tools[0]
    if hover:
        self.handles['hover'] = hover
    return copied_tools
Processes the list of tools to be supplied to the plot.
def get_queryset(self, request):
    """
    Make special filtering by user's permissions.
    """
    if not request.user.has_perm('zinnia.can_view_all'):
        queryset = self.model.objects.filter(authors__pk=request.user.pk)
    else:
        queryset = super(EntryAdmin, self).get_queryset(request)
    return queryset.prefetch_related('categories', 'authors', 'sites')
Make special filtering by user's permissions.
def schedule_retry(self, config):
    """Schedule a retry"""
    raise self.retry(
        countdown=config.get('SAILTHRU_RETRY_SECONDS'),
        max_retries=config.get('SAILTHRU_RETRY_ATTEMPTS'))
Schedule a retry
def filename_to_task_id(fname):
    """Map filename to the task id that created it assuming 1k tasks."""
    # This matches the order and size in WikisumBase.out_filepaths
    fname = os.path.basename(fname)
    shard_id_increment = {
        "train": 0,
        "dev": 800,
        "test": 900,
    }
    parts = fname.split("-")
    split = parts[1]
    shard_id = parts[2]
    task_id = int(shard_id) + shard_id_increment[split]
    return task_id
Map filename to the task id that created it assuming 1k tasks.
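The split-to-offset mapping can be checked in isolation. A minimal sketch with the function copied from the entry above; the `summaries-<split>-<shard>-of-01000` filenames are hypothetical examples that follow the `<prefix>-<split>-<shard>` pattern the parser assumes (the real names come from `WikisumBase.out_filepaths`):

```python
import os


def filename_to_task_id(fname):
    """Map filename to the task id that created it assuming 1k tasks."""
    fname = os.path.basename(fname)
    shard_id_increment = {"train": 0, "dev": 800, "test": 900}
    parts = fname.split("-")
    split = parts[1]     # e.g. "dev"
    shard_id = parts[2]  # e.g. "00012"
    return int(shard_id) + shard_id_increment[split]


print(filename_to_task_id("/data/summaries-train-00007-of-01000"))  # 7
print(filename_to_task_id("summaries-dev-00012-of-01000"))          # 812
print(filename_to_task_id("summaries-test-00003-of-01000"))         # 903
```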
def pois_from_address(address, distance, amenities=None):
    """
    Get OSM points of Interests within some distance north, south, east, and
    west of an address.

    Parameters
    ----------
    address : string
        the address to geocode to a lat-long point
    distance : numeric
        distance in meters
    amenities : list
        List of amenities that will be used for finding the POIs from the
        selected area. See available amenities from:
        http://wiki.openstreetmap.org/wiki/Key:amenity

    Returns
    -------
    GeoDataFrame
    """
    # geocode the address string to a (lat, lon) point
    point = geocode(query=address)

    # get POIs within distance of this point
    return pois_from_point(point=point, amenities=amenities, distance=distance)
Get OSM points of Interests within some distance north, south, east, and
west of an address.

Parameters
----------
address : string
    the address to geocode to a lat-long point
distance : numeric
    distance in meters
amenities : list
    List of amenities that will be used for finding the POIs from the
    selected area. See available amenities from:
    http://wiki.openstreetmap.org/wiki/Key:amenity

Returns
-------
GeoDataFrame
def add_dnc(
    self, obj_id, channel='email', reason=MANUAL, channel_id=None,
    comments='via API'
):
    """
    Adds Do Not Contact

    :param obj_id: int
    :param channel: str
    :param reason: str
    :param channel_id: int
    :param comments: str

    :return: dict|str
    """
    data = {
        'reason': reason,
        'channelId': channel_id,
        'comments': comments
    }
    response = self._client.session.post(
        '{url}/{id}/dnc/add/{channel}'.format(
            url=self.endpoint_url, id=obj_id, channel=channel
        ),
        data=data
    )
    return self.process_response(response)
Adds Do Not Contact

:param obj_id: int
:param channel: str
:param reason: str
:param channel_id: int
:param comments: str

:return: dict|str
def render_heading(self, token): """ Overrides super().render_heading; stores rendered heading first, then returns it. """ rendered = super().render_heading(token) content = self.parse_rendered_heading(rendered) if not (self.omit_title and token.level == 1 or token.level > self.depth or any(cond(content) for cond in self.filter_conds)): self._headings.append((token.level, content)) return rendered
Overrides super().render_heading; stores rendered heading first, then returns it.
Below is the instruction that describes the task: ### Input: Overrides super().render_heading; stores rendered heading first, then returns it. ### Response: def render_heading(self, token): """ Overrides super().render_heading; stores rendered heading first, then returns it. """ rendered = super().render_heading(token) content = self.parse_rendered_heading(rendered) if not (self.omit_title and token.level == 1 or token.level > self.depth or any(cond(content) for cond in self.filter_conds)): self._headings.append((token.level, content)) return rendered
def bifurcated_extend(self, corpus, max_size): """Replaces the results with those n-grams that contain any of the original n-grams, and that represent points at which an n-gram is a constituent of multiple larger n-grams with a lower label count. :param corpus: corpus of works to which results belong :type corpus: `Corpus` :param max_size: maximum size of n-gram results to include :type max_size: `int` """ temp_fd, temp_path = tempfile.mkstemp(text=True) try: self._prepare_bifurcated_extend_data(corpus, max_size, temp_path, temp_fd) finally: try: os.remove(temp_path) except OSError as e: msg = ('Failed to remove temporary file containing unreduced ' 'results: {}') self._logger.error(msg.format(e)) self._bifurcated_extend()
Replaces the results with those n-grams that contain any of the original n-grams, and that represent points at which an n-gram is a constituent of multiple larger n-grams with a lower label count. :param corpus: corpus of works to which results belong :type corpus: `Corpus` :param max_size: maximum size of n-gram results to include :type max_size: `int`
Below is the instruction that describes the task: ### Input: Replaces the results with those n-grams that contain any of the original n-grams, and that represent points at which an n-gram is a constituent of multiple larger n-grams with a lower label count. :param corpus: corpus of works to which results belong :type corpus: `Corpus` :param max_size: maximum size of n-gram results to include :type max_size: `int` ### Response: def bifurcated_extend(self, corpus, max_size): """Replaces the results with those n-grams that contain any of the original n-grams, and that represent points at which an n-gram is a constituent of multiple larger n-grams with a lower label count. :param corpus: corpus of works to which results belong :type corpus: `Corpus` :param max_size: maximum size of n-gram results to include :type max_size: `int` """ temp_fd, temp_path = tempfile.mkstemp(text=True) try: self._prepare_bifurcated_extend_data(corpus, max_size, temp_path, temp_fd) finally: try: os.remove(temp_path) except OSError as e: msg = ('Failed to remove temporary file containing unreduced ' 'results: {}') self._logger.error(msg.format(e)) self._bifurcated_extend()
def load_schema(schema_path): """Prepare the api specification for request and response validation. :returns: a mapping from :class:`RequestMatcher` to :class:`ValidatorMap` for every operation in the api specification. :rtype: dict """ with open(schema_path, 'r') as schema_file: schema = simplejson.load(schema_file) resolver = RefResolver('', '', schema.get('models', {})) return build_request_to_validator_map(schema, resolver)
Prepare the api specification for request and response validation. :returns: a mapping from :class:`RequestMatcher` to :class:`ValidatorMap` for every operation in the api specification. :rtype: dict
Below is the instruction that describes the task: ### Input: Prepare the api specification for request and response validation. :returns: a mapping from :class:`RequestMatcher` to :class:`ValidatorMap` for every operation in the api specification. :rtype: dict ### Response: def load_schema(schema_path): """Prepare the api specification for request and response validation. :returns: a mapping from :class:`RequestMatcher` to :class:`ValidatorMap` for every operation in the api specification. :rtype: dict """ with open(schema_path, 'r') as schema_file: schema = simplejson.load(schema_file) resolver = RefResolver('', '', schema.get('models', {})) return build_request_to_validator_map(schema, resolver)
def minimise(routing_table, target_length): """Reduce the size of a routing table by merging together entries where possible and by removing any remaining default routes. .. warning:: The input routing table *must* also include entries which could be removed and replaced by default routing. .. warning:: It is assumed that the input routing table is not in any particular order and may be reordered into ascending order of generality (number of don't cares/Xs in the key-mask) without affecting routing correctness. It is also assumed that if this table is unordered it is at least orthogonal (i.e., there are no two entries which would match the same key) and reorderable. .. note:: If *all* the keys in the table are derived from a single instance of :py:class:`~rig.bitfield.BitField` then the table is guaranteed to be orthogonal and reorderable. .. note:: Use :py:meth:`~rig.routing_table.expand_entries` to generate an orthogonal table and receive warnings if the input table is not orthogonal. Parameters ---------- routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Routing entries to be merged. target_length : int or None Target length of the routing table; the minimisation procedure will halt once either this target is reached or no further minimisation is possible. If None then the table will be made as small as possible. Raises ------ MinimisationFailedError If the smallest table that can be produced is larger than `target_length`. Returns ------- [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Reduced routing table entries. """ table, _ = ordered_covering(routing_table, target_length, no_raise=True) return remove_default_routes(table, target_length)
Reduce the size of a routing table by merging together entries where possible and by removing any remaining default routes. .. warning:: The input routing table *must* also include entries which could be removed and replaced by default routing. .. warning:: It is assumed that the input routing table is not in any particular order and may be reordered into ascending order of generality (number of don't cares/Xs in the key-mask) without affecting routing correctness. It is also assumed that if this table is unordered it is at least orthogonal (i.e., there are no two entries which would match the same key) and reorderable. .. note:: If *all* the keys in the table are derived from a single instance of :py:class:`~rig.bitfield.BitField` then the table is guaranteed to be orthogonal and reorderable. .. note:: Use :py:meth:`~rig.routing_table.expand_entries` to generate an orthogonal table and receive warnings if the input table is not orthogonal. Parameters ---------- routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Routing entries to be merged. target_length : int or None Target length of the routing table; the minimisation procedure will halt once either this target is reached or no further minimisation is possible. If None then the table will be made as small as possible. Raises ------ MinimisationFailedError If the smallest table that can be produced is larger than `target_length`. Returns ------- [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Reduced routing table entries.
Below is the instruction that describes the task: ### Input: Reduce the size of a routing table by merging together entries where possible and by removing any remaining default routes. .. warning:: The input routing table *must* also include entries which could be removed and replaced by default routing. .. warning:: It is assumed that the input routing table is not in any particular order and may be reordered into ascending order of generality (number of don't cares/Xs in the key-mask) without affecting routing correctness. It is also assumed that if this table is unordered it is at least orthogonal (i.e., there are no two entries which would match the same key) and reorderable. .. note:: If *all* the keys in the table are derived from a single instance of :py:class:`~rig.bitfield.BitField` then the table is guaranteed to be orthogonal and reorderable. .. note:: Use :py:meth:`~rig.routing_table.expand_entries` to generate an orthogonal table and receive warnings if the input table is not orthogonal. Parameters ---------- routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Routing entries to be merged. target_length : int or None Target length of the routing table; the minimisation procedure will halt once either this target is reached or no further minimisation is possible. If None then the table will be made as small as possible. Raises ------ MinimisationFailedError If the smallest table that can be produced is larger than `target_length`. Returns ------- [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Reduced routing table entries. ### Response: def minimise(routing_table, target_length): """Reduce the size of a routing table by merging together entries where possible and by removing any remaining default routes. .. warning:: The input routing table *must* also include entries which could be removed and replaced by default routing. .. warning:: It is assumed that the input routing table is not in any particular order and may be reordered into ascending order of generality (number of don't cares/Xs in the key-mask) without affecting routing correctness. It is also assumed that if this table is unordered it is at least orthogonal (i.e., there are no two entries which would match the same key) and reorderable. .. note:: If *all* the keys in the table are derived from a single instance of :py:class:`~rig.bitfield.BitField` then the table is guaranteed to be orthogonal and reorderable. .. note:: Use :py:meth:`~rig.routing_table.expand_entries` to generate an orthogonal table and receive warnings if the input table is not orthogonal. Parameters ---------- routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Routing entries to be merged. target_length : int or None Target length of the routing table; the minimisation procedure will halt once either this target is reached or no further minimisation is possible. If None then the table will be made as small as possible. Raises ------ MinimisationFailedError If the smallest table that can be produced is larger than `target_length`. Returns ------- [:py:class:`~rig.routing_table.RoutingTableEntry`, ...] Reduced routing table entries. """ table, _ = ordered_covering(routing_table, target_length, no_raise=True) return remove_default_routes(table, target_length)
def trace_function(module, function, tracer=tracer): """ Traces given module function using given tracer. :param module: Module of the function. :type module: object :param function: Function to trace. :type function: object :param tracer: Tracer. :type tracer: object :return: Definition success. :rtype: bool """ if is_traced(function): return False name = get_object_name(function) if is_untracable(function) or name in UNTRACABLE_NAMES: return False setattr(module, name, tracer(function)) return True
Traces given module function using given tracer. :param module: Module of the function. :type module: object :param function: Function to trace. :type function: object :param tracer: Tracer. :type tracer: object :return: Definition success. :rtype: bool
Below is the instruction that describes the task: ### Input: Traces given module function using given tracer. :param module: Module of the function. :type module: object :param function: Function to trace. :type function: object :param tracer: Tracer. :type tracer: object :return: Definition success. :rtype: bool ### Response: def trace_function(module, function, tracer=tracer): """ Traces given module function using given tracer. :param module: Module of the function. :type module: object :param function: Function to trace. :type function: object :param tracer: Tracer. :type tracer: object :return: Definition success. :rtype: bool """ if is_traced(function): return False name = get_object_name(function) if is_untracable(function) or name in UNTRACABLE_NAMES: return False setattr(module, name, tracer(function)) return True
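The trace-and-rebind pattern in `trace_function` above can be sketched without the surrounding helpers. This is a simplified stand-in: `tracer`, the `__traced__` marker, and the `demo` module are all invented here, replacing the real `is_traced`, `get_object_name`, and `UNTRACABLE_NAMES` machinery.

```python
import types

# Hypothetical tracer: wraps a function and records that it was called.
def tracer(func):
    def wrapper(*args, **kwargs):
        wrapper.called = True
        return func(*args, **kwargs)
    wrapper.called = False
    wrapper.__traced__ = True  # stand-in for the real is_traced() marker
    return wrapper

def trace_function(module, function, tracer=tracer):
    # Skip functions that are already traced (simplified is_traced check).
    if getattr(function, "__traced__", False):
        return False
    # Rebind the module attribute to the traced wrapper.
    setattr(module, function.__name__, tracer(function))
    return True

module = types.ModuleType("demo")
def add(a, b):
    return a + b
module.add = add

traced = trace_function(module, module.add)
result = module.add(2, 3)
```

Calling `trace_function` a second time on the already-wrapped attribute returns `False`, mirroring the early-exit in the original.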
def tmp_file_path(self): """ :return: :rtype: str """ return os.path.normpath(os.path.join( TMP_DIR, self.filename ))
:return: :rtype: str
Below is the instruction that describes the task: ### Input: :return: :rtype: str ### Response: def tmp_file_path(self): """ :return: :rtype: str """ return os.path.normpath(os.path.join( TMP_DIR, self.filename ))
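`tmp_file_path` above is just a normalised join of a temp directory and a filename; a standalone sketch, where `TMP_DIR` and the filename are made-up values:

```python
import os

# Made-up values standing in for the module-level TMP_DIR and self.filename.
TMP_DIR = "/var/tmp/app//cache/"
filename = "report.csv"

# normpath collapses the redundant separator left over from TMP_DIR.
tmp_file_path = os.path.normpath(os.path.join(TMP_DIR, filename))
```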
def show_system_monitor_output_switch_status_switch_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") show_system_monitor = ET.Element("show_system_monitor") config = show_system_monitor output = ET.SubElement(show_system_monitor, "output") switch_status = ET.SubElement(output, "switch-status") switch_name = ET.SubElement(switch_status, "switch-name") switch_name.text = kwargs.pop('switch_name') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def show_system_monitor_output_switch_status_switch_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") show_system_monitor = ET.Element("show_system_monitor") config = show_system_monitor output = ET.SubElement(show_system_monitor, "output") switch_status = ET.SubElement(output, "switch-status") switch_name = ET.SubElement(switch_status, "switch-name") switch_name.text = kwargs.pop('switch_name') callback = kwargs.pop('callback', self._callback) return callback(config)
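The `ET.SubElement` chain above builds a nested request document; a minimal standalone sketch (the switch name `"sw0"` is invented here) shows the structure it serialises to:

```python
import xml.etree.ElementTree as ET

# Build the same nested structure as the snippet above.
show_system_monitor = ET.Element("show_system_monitor")
output = ET.SubElement(show_system_monitor, "output")
switch_status = ET.SubElement(output, "switch-status")
switch_name = ET.SubElement(switch_status, "switch-name")
switch_name.text = "sw0"  # invented value; the real code pops it from kwargs

xml_text = ET.tostring(show_system_monitor, encoding="unicode")
```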
def str_to_obj(cls, file_path=None, text='', columns=None, remove_empty_rows=True, key_on=None, row_columns=None, deliminator='\t', eval_cells=True): """ This will convert text file or text to a seaborn table and return it :param file_path: str of the path to the file :param text: str of the csv text :param columns: list of str of columns to use :param row_columns: list of str of columns in data but not to use :param remove_empty_rows: bool if True will remove empty rows :param key_on: list of str of columns to key on :param deliminator: str to use as a deliminator :param eval_cells: bool if True will try to evaluate numbers :return: SeabornTable """ text = cls._get_lines(file_path, text) if len(text) == 1: text = text[0].split('\r') list_of_list = [[cls._eval_cell(cell, _eval=eval_cells) for cell in row.split(deliminator)] for row in text if not remove_empty_rows or True in [bool(r) for r in row]] if list_of_list[0][0] == '' and list_of_list[0][-1] == '': list_of_list = [row[1:-1] for row in list_of_list] return cls.list_to_obj(list_of_list, key_on=key_on, columns=columns, row_columns=row_columns)
This will convert text file or text to a seaborn table and return it :param file_path: str of the path to the file :param text: str of the csv text :param columns: list of str of columns to use :param row_columns: list of str of columns in data but not to use :param remove_empty_rows: bool if True will remove empty rows :param key_on: list of str of columns to key on :param deliminator: str to use as a deliminator :param eval_cells: bool if True will try to evaluate numbers :return: SeabornTable
Below is the instruction that describes the task: ### Input: This will convert text file or text to a seaborn table and return it :param file_path: str of the path to the file :param text: str of the csv text :param columns: list of str of columns to use :param row_columns: list of str of columns in data but not to use :param remove_empty_rows: bool if True will remove empty rows :param key_on: list of str of columns to key on :param deliminator: str to use as a deliminator :param eval_cells: bool if True will try to evaluate numbers :return: SeabornTable ### Response: def str_to_obj(cls, file_path=None, text='', columns=None, remove_empty_rows=True, key_on=None, row_columns=None, deliminator='\t', eval_cells=True): """ This will convert text file or text to a seaborn table and return it :param file_path: str of the path to the file :param text: str of the csv text :param columns: list of str of columns to use :param row_columns: list of str of columns in data but not to use :param remove_empty_rows: bool if True will remove empty rows :param key_on: list of str of columns to key on :param deliminator: str to use as a deliminator :param eval_cells: bool if True will try to evaluate numbers :return: SeabornTable """ text = cls._get_lines(file_path, text) if len(text) == 1: text = text[0].split('\r') list_of_list = [[cls._eval_cell(cell, _eval=eval_cells) for cell in row.split(deliminator)] for row in text if not remove_empty_rows or True in [bool(r) for r in row]] if list_of_list[0][0] == '' and list_of_list[0][-1] == '': list_of_list = [row[1:-1] for row in list_of_list] return cls.list_to_obj(list_of_list, key_on=key_on, columns=columns, row_columns=row_columns)
def number_range_inclusive(min, max, type=float): """ Return a value check function which raises a ValueError if the supplied value when cast as `type` is less than `min` or greater than `max`. """ def checker(v): if type(v) < min or type(v) > max: raise ValueError(v) return checker
Return a value check function which raises a ValueError if the supplied value when cast as `type` is less than `min` or greater than `max`.
Below is the instruction that describes the task: ### Input: Return a value check function which raises a ValueError if the supplied value when cast as `type` is less than `min` or greater than `max`. ### Response: def number_range_inclusive(min, max, type=float): """ Return a value check function which raises a ValueError if the supplied value when cast as `type` is less than `min` or greater than `max`. """ def checker(v): if type(v) < min or type(v) > max: raise ValueError(v) return checker
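Since `number_range_inclusive` is self-contained, its closure behaviour is easy to demonstrate. It is re-stated here verbatim so the example runs on its own; the percentage use case is an invented illustration.

```python
def number_range_inclusive(min, max, type=float):
    # Returns a checker closing over min, max and the casting type.
    def checker(v):
        if type(v) < min or type(v) > max:
            raise ValueError(v)
    return checker

check_percentage = number_range_inclusive(0, 100, type=int)
check_percentage("50")  # int("50") is within [0, 100], so no exception

try:
    check_percentage("150")
    out_of_range = False
except ValueError:
    out_of_range = True
```

Both bounds are inclusive, so `check_percentage("0")` and `check_percentage("100")` pass silently as well.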
def get_objective_lookup_session(self): """Gets the OsidSession associated with the objective lookup service. return: (osid.learning.ObjectiveLookupSession) - an ObjectiveLookupSession raise: OperationFailed - unable to complete request raise: Unimplemented - supports_objective_lookup() is false compliance: optional - This method must be implemented if supports_objective_lookup() is true. """ if not self.supports_objective_lookup(): raise Unimplemented() try: from . import sessions except ImportError: raise # OperationFailed() try: session = sessions.ObjectiveLookupSession(runtime=self._runtime) except AttributeError: raise # OperationFailed() return session
Gets the OsidSession associated with the objective lookup service. return: (osid.learning.ObjectiveLookupSession) - an ObjectiveLookupSession raise: OperationFailed - unable to complete request raise: Unimplemented - supports_objective_lookup() is false compliance: optional - This method must be implemented if supports_objective_lookup() is true.
Below is the instruction that describes the task: ### Input: Gets the OsidSession associated with the objective lookup service. return: (osid.learning.ObjectiveLookupSession) - an ObjectiveLookupSession raise: OperationFailed - unable to complete request raise: Unimplemented - supports_objective_lookup() is false compliance: optional - This method must be implemented if supports_objective_lookup() is true. ### Response: def get_objective_lookup_session(self): """Gets the OsidSession associated with the objective lookup service. return: (osid.learning.ObjectiveLookupSession) - an ObjectiveLookupSession raise: OperationFailed - unable to complete request raise: Unimplemented - supports_objective_lookup() is false compliance: optional - This method must be implemented if supports_objective_lookup() is true. """ if not self.supports_objective_lookup(): raise Unimplemented() try: from . import sessions except ImportError: raise # OperationFailed() try: session = sessions.ObjectiveLookupSession(runtime=self._runtime) except AttributeError: raise # OperationFailed() return session
def save_driver_script(driver, script_save=None): # noqa: E501 """Save a script Save a script # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :param script_save: The data needed to save this script :type script_save: dict | bytes :rtype: Response """ if connexion.request.is_json: script_save = ScriptSave.from_dict(connexion.request.get_json()) # noqa: E501 response = errorIfUnauthorized(role='developer') if response: return response else: response = ApitaxResponse() driver: Driver = LoadedDrivers.getDriver(driver) driver.saveDriverScript(script_save.script.name, script_save.script.content) return Response(status=200, body=response.getResponseBody())
Save a script Save a script # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :param script_save: The data needed to save this script :type script_save: dict | bytes :rtype: Response
Below is the instruction that describes the task: ### Input: Save a script Save a script # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :param script_save: The data needed to save this script :type script_save: dict | bytes :rtype: Response ### Response: def save_driver_script(driver, script_save=None): # noqa: E501 """Save a script Save a script # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :param script_save: The data needed to save this script :type script_save: dict | bytes :rtype: Response """ if connexion.request.is_json: script_save = ScriptSave.from_dict(connexion.request.get_json()) # noqa: E501 response = errorIfUnauthorized(role='developer') if response: return response else: response = ApitaxResponse() driver: Driver = LoadedDrivers.getDriver(driver) driver.saveDriverScript(script_save.script.name, script_save.script.content) return Response(status=200, body=response.getResponseBody())
async def connect_model(self, model_name=None): """Connect to a model by name. If either controller or model parts of the name are empty, the current controller and/or model will be used. :param str model: <controller>:<model> """ try: controller_name, model_name = self.jujudata.parse_model(model_name) controller = self.jujudata.controllers().get(controller_name) except JujuError as e: raise JujuConnectionError(e.message) from e if controller is None: raise JujuConnectionError('Controller {} not found'.format( controller_name)) # TODO change Connection so we can pass all the endpoints # instead of just the first one. endpoint = controller['api-endpoints'][0] account = self.jujudata.accounts().get(controller_name, {}) models = self.jujudata.models().get(controller_name, {}).get('models', {}) if model_name not in models: raise JujuConnectionError('Model not found: {}'.format(model_name)) # TODO if there's no record for the required model name, connect # to the controller to find out the model's uuid, then connect # to that. This will let connect_model work with models that # haven't necessarily synced with the local juju data, # and also remove the need for base.CleanModel to # subclass JujuData. await self.connect( endpoint=endpoint, uuid=models[model_name]['uuid'], username=account.get('user'), password=account.get('password'), cacert=controller.get('ca-cert'), bakery_client=self.bakery_client_for_controller(controller_name), ) self.controller_name = controller_name self.model_name = controller_name + ':' + model_name
Connect to a model by name. If either controller or model parts of the name are empty, the current controller and/or model will be used. :param str model: <controller>:<model>
Below is the instruction that describes the task: ### Input: Connect to a model by name. If either controller or model parts of the name are empty, the current controller and/or model will be used. :param str model: <controller>:<model> ### Response: async def connect_model(self, model_name=None): """Connect to a model by name. If either controller or model parts of the name are empty, the current controller and/or model will be used. :param str model: <controller>:<model> """ try: controller_name, model_name = self.jujudata.parse_model(model_name) controller = self.jujudata.controllers().get(controller_name) except JujuError as e: raise JujuConnectionError(e.message) from e if controller is None: raise JujuConnectionError('Controller {} not found'.format( controller_name)) # TODO change Connection so we can pass all the endpoints # instead of just the first one. endpoint = controller['api-endpoints'][0] account = self.jujudata.accounts().get(controller_name, {}) models = self.jujudata.models().get(controller_name, {}).get('models', {}) if model_name not in models: raise JujuConnectionError('Model not found: {}'.format(model_name)) # TODO if there's no record for the required model name, connect # to the controller to find out the model's uuid, then connect # to that. This will let connect_model work with models that # haven't necessarily synced with the local juju data, # and also remove the need for base.CleanModel to # subclass JujuData. await self.connect( endpoint=endpoint, uuid=models[model_name]['uuid'], username=account.get('user'), password=account.get('password'), cacert=controller.get('ca-cert'), bakery_client=self.bakery_client_for_controller(controller_name), ) self.controller_name = controller_name self.model_name = controller_name + ':' + model_name
def _parse_domain_caps(caps): ''' Parse the XML document of domain capabilities into a structure. Return the domain capabilities given an emulator, architecture, machine or virtualization type. .. versionadded:: 2019.2.0 :param emulator: return the capabilities for the given emulator binary :param arch: return the capabilities for the given CPU architecture :param machine: return the capabilities for the given emulated machine type :param domain: return the capabilities for the given virtualization type. :param connection: libvirt connection URI, overriding defaults :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults The list of the possible emulator, arch, machine and domain can be found in the host capabilities output. If none of the parameters is provided the libvirt default domain capabilities will be returned. CLI Example: .. code-block:: bash salt '*' virt.domain_capabilities arch='x86_64' domain='kvm' ''' result = { 'emulator': caps.find('path').text if caps.find('path') is not None else None, 'domain': caps.find('domain').text if caps.find('domain') is not None else None, 'machine': caps.find('machine').text if caps.find('machine') is not None else None, 'arch': caps.find('arch').text if caps.find('arch') is not None else None } for child in caps: if child.tag == 'vcpu' and child.get('max'): result['max_vcpus'] = int(child.get('max')) elif child.tag == 'iothreads': result['iothreads'] = child.get('supported') == 'yes' elif child.tag == 'os': result['os'] = {} loader_node = child.find('loader') if loader_node is not None and loader_node.get('supported') == 'yes': loader = _parse_caps_loader(loader_node) result['os']['loader'] = loader elif child.tag == 'cpu': cpu = _parse_caps_cpu(child) if cpu: result['cpu'] = cpu elif child.tag == 'devices': devices = _parse_caps_devices_features(child) if devices: result['devices'] = devices elif child.tag == 'features': features = _parse_caps_devices_features(child) if features: result['features'] = features return result
Parse the XML document of domain capabilities into a structure. Return the domain capabilities given an emulator, architecture, machine or virtualization type. .. versionadded:: 2019.2.0 :param emulator: return the capabilities for the given emulator binary :param arch: return the capabilities for the given CPU architecture :param machine: return the capabilities for the given emulated machine type :param domain: return the capabilities for the given virtualization type. :param connection: libvirt connection URI, overriding defaults :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults The list of the possible emulator, arch, machine and domain can be found in the host capabilities output. If none of the parameters is provided the libvirt default domain capabilities will be returned. CLI Example: .. code-block:: bash salt '*' virt.domain_capabilities arch='x86_64' domain='kvm'
Below is the instruction that describes the task: ### Input: Parse the XML document of domain capabilities into a structure. Return the domain capabilities given an emulator, architecture, machine or virtualization type. .. versionadded:: 2019.2.0 :param emulator: return the capabilities for the given emulator binary :param arch: return the capabilities for the given CPU architecture :param machine: return the capabilities for the given emulated machine type :param domain: return the capabilities for the given virtualization type. :param connection: libvirt connection URI, overriding defaults :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults The list of the possible emulator, arch, machine and domain can be found in the host capabilities output. If none of the parameters is provided the libvirt default domain capabilities will be returned. CLI Example: .. code-block:: bash salt '*' virt.domain_capabilities arch='x86_64' domain='kvm' ### Response: def _parse_domain_caps(caps): ''' Parse the XML document of domain capabilities into a structure. Return the domain capabilities given an emulator, architecture, machine or virtualization type. .. versionadded:: 2019.2.0 :param emulator: return the capabilities for the given emulator binary :param arch: return the capabilities for the given CPU architecture :param machine: return the capabilities for the given emulated machine type :param domain: return the capabilities for the given virtualization type. :param connection: libvirt connection URI, overriding defaults :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults The list of the possible emulator, arch, machine and domain can be found in the host capabilities output. If none of the parameters is provided the libvirt default domain capabilities will be returned. CLI Example: .. code-block:: bash salt '*' virt.domain_capabilities arch='x86_64' domain='kvm' ''' result = { 'emulator': caps.find('path').text if caps.find('path') is not None else None, 'domain': caps.find('domain').text if caps.find('domain') is not None else None, 'machine': caps.find('machine').text if caps.find('machine') is not None else None, 'arch': caps.find('arch').text if caps.find('arch') is not None else None } for child in caps: if child.tag == 'vcpu' and child.get('max'): result['max_vcpus'] = int(child.get('max')) elif child.tag == 'iothreads': result['iothreads'] = child.get('supported') == 'yes' elif child.tag == 'os': result['os'] = {} loader_node = child.find('loader') if loader_node is not None and loader_node.get('supported') == 'yes': loader = _parse_caps_loader(loader_node) result['os']['loader'] = loader elif child.tag == 'cpu': cpu = _parse_caps_cpu(child) if cpu: result['cpu'] = cpu elif child.tag == 'devices': devices = _parse_caps_devices_features(child) if devices: result['devices'] = devices elif child.tag == 'features': features = _parse_caps_devices_features(child) if features: result['features'] = features return result
def init(): """Initialize the pipeline in maya so everything works Init environment and load plugins. This also creates the initial Jukebox Menu entry. :returns: None :rtype: None :raises: None """ main.init_environment() pluginpath = os.pathsep.join((os.environ.get('JUKEBOX_PLUGIN_PATH', ''), BUILTIN_PLUGIN_PATH)) os.environ['JUKEBOX_PLUGIN_PATH'] = pluginpath try: maya.standalone.initialize() jukeboxmaya.STANDALONE_INITIALIZED = True except RuntimeError as e: jukeboxmaya.STANDALONE_INITIALIZED = False if str(e) == "maya.standalone may only be used from an external Python interpreter": mm = MenuManager.get() mainmenu = mm.create_menu("Jukebox", tearOff=True) mm.create_menu("Help", parent=mainmenu, command=show_help) # load plugins pmanager = MayaPluginManager.get() pmanager.load_plugins() load_mayaplugins()
Initialize the pipeline in maya so everything works

Init environment and load plugins. This also creates the initial Jukebox Menu entry.

:returns: None
:rtype: None
:raises: None
Below is the the instruction that describes the task: ### Input: Initialize the pipeline in maya so everything works Init environment and load plugins. This also creates the initial Jukebox Menu entry. :returns: None :rtype: None :raises: None ### Response: def init(): """Initialize the pipeline in maya so everything works Init environment and load plugins. This also creates the initial Jukebox Menu entry. :returns: None :rtype: None :raises: None """ main.init_environment() pluginpath = os.pathsep.join((os.environ.get('JUKEBOX_PLUGIN_PATH', ''), BUILTIN_PLUGIN_PATH)) os.environ['JUKEBOX_PLUGIN_PATH'] = pluginpath try: maya.standalone.initialize() jukeboxmaya.STANDALONE_INITIALIZED = True except RuntimeError as e: jukeboxmaya.STANDALONE_INITIALIZED = False if str(e) == "maya.standalone may only be used from an external Python interpreter": mm = MenuManager.get() mainmenu = mm.create_menu("Jukebox", tearOff=True) mm.create_menu("Help", parent=mainmenu, command=show_help) # load plugins pmanager = MayaPluginManager.get() pmanager.load_plugins() load_mayaplugins()
def cmdline(argv=sys.argv[1:]): """ Script for rebasing a text file """ parser = ArgumentParser( description='Rebase a text from his stop words') parser.add_argument('language', help='The language used to rebase') parser.add_argument('source', help='Text file to rebase') options = parser.parse_args(argv) factory = StopWordFactory() language = options.language stop_words = factory.get_stop_words(language, fail_safe=True) content = open(options.source, 'rb').read().decode('utf-8') print(stop_words.rebase(content))
Script for rebasing a text file
Below is the instruction that describes the task:
### Input:
Script for rebasing a text file
### Response:
def cmdline(argv=sys.argv[1:]):
    """
    Script for rebasing a text file
    """
    parser = ArgumentParser(
        description='Rebase a text from his stop words')
    parser.add_argument('language', help='The language used to rebase')
    parser.add_argument('source', help='Text file to rebase')
    options = parser.parse_args(argv)
    factory = StopWordFactory()
    language = options.language
    stop_words = factory.get_stop_words(language, fail_safe=True)
    content = open(options.source, 'rb').read().decode('utf-8')
    print(stop_words.rebase(content))
def merge_all(rasters, roi=None, dest_resolution=None, merge_strategy=MergeStrategy.UNION, shape=None, ul_corner=None, crs=None, pixel_strategy=PixelStrategy.FIRST, resampling=Resampling.nearest): """Merge a list of rasters, cropping by a region of interest. There are cases that the roi is not precise enough for this cases one can use, the upper left corner the shape and crs to precisely define the roi. When roi is provided the ul_corner, shape and crs are ignored """ first_raster = rasters[0] if roi: crs = crs or roi.crs dest_resolution = dest_resolution or _dest_resolution(first_raster, crs) # Create empty raster empty = GeoRaster2.empty_from_roi( roi, resolution=dest_resolution, band_names=first_raster.band_names, dtype=first_raster.dtype, shape=shape, ul_corner=ul_corner, crs=crs) # Create a list of single band rasters all_band_names, projected_rasters = _prepare_rasters(rasters, merge_strategy, empty, resampling=resampling) assert len(projected_rasters) == len(rasters) prepared_rasters = _apply_pixel_strategy(projected_rasters, pixel_strategy) # Extend the rasters list with only those that have the requested bands prepared_rasters = _explode_rasters(prepared_rasters, all_band_names) if all_band_names: # Merge common bands prepared_rasters = _merge_common_bands(prepared_rasters) # Merge all bands raster = reduce(_stack_bands, prepared_rasters) return empty.copy_with(image=raster.image, band_names=raster.band_names) else: raise ValueError("result contains no bands, use another merge strategy")
Merge a list of rasters, cropping by a region of interest. There are cases where the roi is not precise enough; in those cases one can use the upper left corner, the shape and the crs to precisely define the roi. When roi is provided, the ul_corner, shape and crs are ignored
Below is the the instruction that describes the task: ### Input: Merge a list of rasters, cropping by a region of interest. There are cases that the roi is not precise enough for this cases one can use, the upper left corner the shape and crs to precisely define the roi. When roi is provided the ul_corner, shape and crs are ignored ### Response: def merge_all(rasters, roi=None, dest_resolution=None, merge_strategy=MergeStrategy.UNION, shape=None, ul_corner=None, crs=None, pixel_strategy=PixelStrategy.FIRST, resampling=Resampling.nearest): """Merge a list of rasters, cropping by a region of interest. There are cases that the roi is not precise enough for this cases one can use, the upper left corner the shape and crs to precisely define the roi. When roi is provided the ul_corner, shape and crs are ignored """ first_raster = rasters[0] if roi: crs = crs or roi.crs dest_resolution = dest_resolution or _dest_resolution(first_raster, crs) # Create empty raster empty = GeoRaster2.empty_from_roi( roi, resolution=dest_resolution, band_names=first_raster.band_names, dtype=first_raster.dtype, shape=shape, ul_corner=ul_corner, crs=crs) # Create a list of single band rasters all_band_names, projected_rasters = _prepare_rasters(rasters, merge_strategy, empty, resampling=resampling) assert len(projected_rasters) == len(rasters) prepared_rasters = _apply_pixel_strategy(projected_rasters, pixel_strategy) # Extend the rasters list with only those that have the requested bands prepared_rasters = _explode_rasters(prepared_rasters, all_band_names) if all_band_names: # Merge common bands prepared_rasters = _merge_common_bands(prepared_rasters) # Merge all bands raster = reduce(_stack_bands, prepared_rasters) return empty.copy_with(image=raster.image, band_names=raster.band_names) else: raise ValueError("result contains no bands, use another merge strategy")
def parse_response(self, raw): """ Format the requested data model into a dictionary of DataFrames and a criteria map DataFrame. Take data returned by a requests.get call to Earthref. Parameters ---------- raw: 'requests.models.Response' Returns --------- data_model : dictionary of DataFrames crit_map : DataFrame """ tables = raw.json()['tables'] crit = raw.json()['criteria_map'] return self.parse(tables, crit)
Format the requested data model into a dictionary of DataFrames and a criteria map DataFrame. Take data returned by a requests.get call to Earthref. Parameters ---------- raw: 'requests.models.Response' Returns --------- data_model : dictionary of DataFrames crit_map : DataFrame
Below is the instruction that describes the task:
### Input:
Format the requested data model into a dictionary of DataFrames and a criteria map DataFrame. Take data returned by a requests.get call to Earthref.

Parameters
----------
raw: 'requests.models.Response'

Returns
---------
data_model : dictionary of DataFrames
crit_map : DataFrame
### Response:
def parse_response(self, raw):
    """
    Format the requested data model into a dictionary of DataFrames and a criteria map DataFrame. Take data returned by a requests.get call to Earthref.

    Parameters
    ----------
    raw: 'requests.models.Response'

    Returns
    ---------
    data_model : dictionary of DataFrames
    crit_map : DataFrame
    """
    tables = raw.json()['tables']
    crit = raw.json()['criteria_map']
    return self.parse(tables, crit)
def Process(self, fs_msg, context): """Processes a single fleetspeak message.""" try: if fs_msg.message_type == "GrrMessage": grr_message = rdf_flows.GrrMessage.FromSerializedString( fs_msg.data.value) self._ProcessGRRMessages(fs_msg.source.client_id, [grr_message]) elif fs_msg.message_type == "MessageList": packed_messages = rdf_flows.PackedMessageList.FromSerializedString( fs_msg.data.value) message_list = communicator.Communicator.DecompressMessageList( packed_messages) self._ProcessGRRMessages(fs_msg.source.client_id, message_list.job) else: logging.error("Received message with unrecognized message_type: %s", fs_msg.message_type) context.set_code(grpc.StatusCode.INVALID_ARGUMENT) except Exception as e: logging.error("Exception processing message: %s", str(e)) raise
Processes a single fleetspeak message.
Below is the instruction that describes the task:
### Input:
Processes a single fleetspeak message.
### Response:
def Process(self, fs_msg, context):
    """Processes a single fleetspeak message."""
    try:
        if fs_msg.message_type == "GrrMessage":
            grr_message = rdf_flows.GrrMessage.FromSerializedString(
                fs_msg.data.value)
            self._ProcessGRRMessages(fs_msg.source.client_id, [grr_message])
        elif fs_msg.message_type == "MessageList":
            packed_messages = rdf_flows.PackedMessageList.FromSerializedString(
                fs_msg.data.value)
            message_list = communicator.Communicator.DecompressMessageList(
                packed_messages)
            self._ProcessGRRMessages(fs_msg.source.client_id, message_list.job)
        else:
            logging.error("Received message with unrecognized message_type: %s",
                          fs_msg.message_type)
            context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
    except Exception as e:
        logging.error("Exception processing message: %s", str(e))
        raise
def set_disk0(self, disk0): """ Sets the size (MB) for PCMCIA disk0. :param disk0: disk0 size (integer) """ yield from self._hypervisor.send('vm set_disk0 "{name}" {disk0}'.format(name=self._name, disk0=disk0)) log.info('Router "{name}" [{id}]: disk0 updated from {old_disk0}MB to {new_disk0}MB'.format(name=self._name, id=self._id, old_disk0=self._disk0, new_disk0=disk0)) self._disk0 = disk0
Sets the size (MB) for PCMCIA disk0. :param disk0: disk0 size (integer)
Below is the instruction that describes the task:
### Input:
Sets the size (MB) for PCMCIA disk0.

:param disk0: disk0 size (integer)
### Response:
def set_disk0(self, disk0):
    """
    Sets the size (MB) for PCMCIA disk0.

    :param disk0: disk0 size (integer)
    """

    yield from self._hypervisor.send('vm set_disk0 "{name}" {disk0}'.format(name=self._name, disk0=disk0))

    log.info('Router "{name}" [{id}]: disk0 updated from {old_disk0}MB to {new_disk0}MB'.format(name=self._name,
                                                                                                id=self._id,
                                                                                                old_disk0=self._disk0,
                                                                                                new_disk0=disk0))
    self._disk0 = disk0
def _required_attribute(element, name, default): """ Add attribute with default value to element if it doesn't already exist. """ if element.get(name) is None: element.set(name, default)
Add attribute with default value to element if it doesn't already exist.
Below is the instruction that describes the task:
### Input:
Add attribute with default value to element if it doesn't already exist.
### Response:
def _required_attribute(element, name, default):
    """
    Add attribute with default value to element if it doesn't already exist.
    """
    if element.get(name) is None:
        element.set(name, default)
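The `_required_attribute` helper above is self-contained enough to exercise with the standard library alone; a minimal sketch (the element tag and attribute names are invented for illustration):

```python
import xml.etree.ElementTree as ET

def _required_attribute(element, name, default):
    # Set the attribute to the default only when it is not already present.
    if element.get(name) is None:
        element.set(name, default)

# Hypothetical element: "units" is already set, "scale" is not.
elem = ET.Element("signal", {"units": "mV"})
_required_attribute(elem, "units", "uV")   # existing value is kept
_required_attribute(elem, "scale", "1.0")  # missing, so the default is added
print(elem.get("units"), elem.get("scale"))  # mV 1.0
```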
def eval_upto(self, e, n, cast_to=None, **kwargs): """ Evaluate an expression, using the solver if necessary. Returns primitives as specified by the `cast_to` parameter. Only certain primitives are supported, check the implementation of `_cast_to` to see which ones. :param e: the expression :param n: the number of desired solutions :param extra_constraints: extra constraints to apply to the solver :param exact: if False, returns approximate solutions :param cast_to: A type to cast the resulting values to :return: a tuple of the solutions, in the form of Python primitives :rtype: tuple """ concrete_val = _concrete_value(e) if concrete_val is not None: return [self._cast_to(e, concrete_val, cast_to)] cast_vals = [self._cast_to(e, v, cast_to) for v in self._eval(e, n, **kwargs)] if len(cast_vals) == 0: raise SimUnsatError('Not satisfiable: %s, expected up to %d solutions' % (e.shallow_repr(), n)) return cast_vals
Evaluate an expression, using the solver if necessary. Returns primitives as specified by the `cast_to` parameter. Only certain primitives are supported, check the implementation of `_cast_to` to see which ones. :param e: the expression :param n: the number of desired solutions :param extra_constraints: extra constraints to apply to the solver :param exact: if False, returns approximate solutions :param cast_to: A type to cast the resulting values to :return: a tuple of the solutions, in the form of Python primitives :rtype: tuple
Below is the the instruction that describes the task: ### Input: Evaluate an expression, using the solver if necessary. Returns primitives as specified by the `cast_to` parameter. Only certain primitives are supported, check the implementation of `_cast_to` to see which ones. :param e: the expression :param n: the number of desired solutions :param extra_constraints: extra constraints to apply to the solver :param exact: if False, returns approximate solutions :param cast_to: A type to cast the resulting values to :return: a tuple of the solutions, in the form of Python primitives :rtype: tuple ### Response: def eval_upto(self, e, n, cast_to=None, **kwargs): """ Evaluate an expression, using the solver if necessary. Returns primitives as specified by the `cast_to` parameter. Only certain primitives are supported, check the implementation of `_cast_to` to see which ones. :param e: the expression :param n: the number of desired solutions :param extra_constraints: extra constraints to apply to the solver :param exact: if False, returns approximate solutions :param cast_to: A type to cast the resulting values to :return: a tuple of the solutions, in the form of Python primitives :rtype: tuple """ concrete_val = _concrete_value(e) if concrete_val is not None: return [self._cast_to(e, concrete_val, cast_to)] cast_vals = [self._cast_to(e, v, cast_to) for v in self._eval(e, n, **kwargs)] if len(cast_vals) == 0: raise SimUnsatError('Not satisfiable: %s, expected up to %d solutions' % (e.shallow_repr(), n)) return cast_vals
def _wfdb_fmt(bit_res, single_fmt=True): """ Return the most suitable wfdb format(s) to use given signal resolutions. Parameters ---------- bit_res : int, or list The resolution of the signal, or a list of resolutions, in bits. single_fmt : bool, optional Whether to return the format for the maximum resolution signal. Returns ------- fmt : str or list The most suitable wfdb format(s) used to encode the signal(s). """ if isinstance(bit_res, list): # Return a single format if single_fmt: bit_res = [max(bit_res)] * len(bit_res) return [wfdb_fmt(r) for r in bit_res] if bit_res <= 8: return '80' elif bit_res <= 12: return '212' elif bit_res <= 16: return '16' elif bit_res <= 24: return '24' else: return '32'
Return the most suitable wfdb format(s) to use given signal resolutions. Parameters ---------- bit_res : int, or list The resolution of the signal, or a list of resolutions, in bits. single_fmt : bool, optional Whether to return the format for the maximum resolution signal. Returns ------- fmt : str or list The most suitable wfdb format(s) used to encode the signal(s).
Below is the the instruction that describes the task: ### Input: Return the most suitable wfdb format(s) to use given signal resolutions. Parameters ---------- bit_res : int, or list The resolution of the signal, or a list of resolutions, in bits. single_fmt : bool, optional Whether to return the format for the maximum resolution signal. Returns ------- fmt : str or list The most suitable wfdb format(s) used to encode the signal(s). ### Response: def _wfdb_fmt(bit_res, single_fmt=True): """ Return the most suitable wfdb format(s) to use given signal resolutions. Parameters ---------- bit_res : int, or list The resolution of the signal, or a list of resolutions, in bits. single_fmt : bool, optional Whether to return the format for the maximum resolution signal. Returns ------- fmt : str or list The most suitable wfdb format(s) used to encode the signal(s). """ if isinstance(bit_res, list): # Return a single format if single_fmt: bit_res = [max(bit_res)] * len(bit_res) return [wfdb_fmt(r) for r in bit_res] if bit_res <= 8: return '80' elif bit_res <= 12: return '212' elif bit_res <= 16: return '16' elif bit_res <= 24: return '24' else: return '32'
def _set_mpls_reopt_lsp(self, v, load=False): """ Setter method for mpls_reopt_lsp, mapped from YANG variable /brocade_mpls_rpc/mpls_reopt_lsp (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_mpls_reopt_lsp is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_mpls_reopt_lsp() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=mpls_reopt_lsp.mpls_reopt_lsp, is_leaf=True, yang_name="mpls-reopt-lsp", rest_name="mpls-reopt-lsp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'rpccmd', u'actionpoint': u'mplsReoptimize'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """mpls_reopt_lsp must be of a type compatible with rpc""", 'defined-type': "rpc", 'generated-type': """YANGDynClass(base=mpls_reopt_lsp.mpls_reopt_lsp, is_leaf=True, yang_name="mpls-reopt-lsp", rest_name="mpls-reopt-lsp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'rpccmd', u'actionpoint': u'mplsReoptimize'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True)""", }) self.__mpls_reopt_lsp = t if hasattr(self, '_set'): self._set()
Setter method for mpls_reopt_lsp, mapped from YANG variable /brocade_mpls_rpc/mpls_reopt_lsp (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_mpls_reopt_lsp is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_mpls_reopt_lsp() directly.
Below is the the instruction that describes the task: ### Input: Setter method for mpls_reopt_lsp, mapped from YANG variable /brocade_mpls_rpc/mpls_reopt_lsp (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_mpls_reopt_lsp is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_mpls_reopt_lsp() directly. ### Response: def _set_mpls_reopt_lsp(self, v, load=False): """ Setter method for mpls_reopt_lsp, mapped from YANG variable /brocade_mpls_rpc/mpls_reopt_lsp (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_mpls_reopt_lsp is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_mpls_reopt_lsp() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=mpls_reopt_lsp.mpls_reopt_lsp, is_leaf=True, yang_name="mpls-reopt-lsp", rest_name="mpls-reopt-lsp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'rpccmd', u'actionpoint': u'mplsReoptimize'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """mpls_reopt_lsp must be of a type compatible with rpc""", 'defined-type': "rpc", 'generated-type': """YANGDynClass(base=mpls_reopt_lsp.mpls_reopt_lsp, is_leaf=True, yang_name="mpls-reopt-lsp", rest_name="mpls-reopt-lsp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'rpccmd', u'actionpoint': u'mplsReoptimize'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True)""", }) self.__mpls_reopt_lsp = t if hasattr(self, '_set'): self._set()
def _evaluate(self,x): ''' Returns the level of the function at each value in x as the minimum among all of the functions. Only called internally by HARKinterpolator1D.__call__. ''' if _isscalar(x): y = np.nanmin([f(x) for f in self.functions]) else: m = len(x) fx = np.zeros((m,self.funcCount)) for j in range(self.funcCount): fx[:,j] = self.functions[j](x) y = np.nanmin(fx,axis=1) return y
Returns the level of the function at each value in x as the minimum among all of the functions. Only called internally by HARKinterpolator1D.__call__.
Below is the instruction that describes the task:
### Input:
Returns the level of the function at each value in x as the minimum among all of the functions. Only called internally by HARKinterpolator1D.__call__.
### Response:
def _evaluate(self,x):
    '''
    Returns the level of the function at each value in x as the minimum among
    all of the functions. Only called internally by HARKinterpolator1D.__call__.
    '''
    if _isscalar(x):
        y = np.nanmin([f(x) for f in self.functions])
    else:
        m = len(x)
        fx = np.zeros((m,self.funcCount))
        for j in range(self.funcCount):
            fx[:,j] = self.functions[j](x)
        y = np.nanmin(fx,axis=1)
    return y
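The minimum-over-functions behaviour of `_evaluate` can be illustrated without the class machinery or numpy; a plain-Python sketch of the lower envelope of two hypothetical lines:

```python
def lower_envelope(functions, xs):
    # Pointwise minimum of a list of callables, as in _evaluate's vector branch.
    return [min(f(x) for f in functions) for x in xs]

funcs = [lambda x: x, lambda x: 2 - x]
print(lower_envelope(funcs, [0, 1, 2]))  # [0, 1, 0]
```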
def get_creation_time(self): """stub""" ct = self.my_osid_object._my_map['creationTime'] return DateTime(ct.year, ct.month, ct.day, ct.hour, ct.minute, ct.second, ct.microsecond)
stub
Below is the instruction that describes the task:
### Input:
stub
### Response:
def get_creation_time(self):
    """stub"""
    ct = self.my_osid_object._my_map['creationTime']
    return DateTime(ct.year, ct.month, ct.day, ct.hour, ct.minute, ct.second, ct.microsecond)
def _cartesian_to_spherical(cls, coord, center): """Cartesian to Spherical conversion .. warning:: The spherical form is equatorial, not zenithal """ x, y, z, vx, vy, vz = coord r = np.linalg.norm(coord[:3]) phi = arcsin(z / r) theta = arctan2(y, x) r_dot = (x * vx + y * vy + z * vz) / r phi_dot = (vz * (x ** 2 + y ** 2) - z * (x * vx + y * vy)) / (r ** 2 * sqrt(x ** 2 + y ** 2)) theta_dot = (x * vy - y * vx) / (x ** 2 + y ** 2) return np.array([r, theta, phi, r_dot, theta_dot, phi_dot], dtype=float)
Cartesian to Spherical conversion .. warning:: The spherical form is equatorial, not zenithal
Below is the instruction that describes the task:
### Input:
Cartesian to Spherical conversion

.. warning:: The spherical form is equatorial, not zenithal
### Response:
def _cartesian_to_spherical(cls, coord, center):
    """Cartesian to Spherical conversion

    .. warning:: The spherical form is equatorial, not zenithal
    """
    x, y, z, vx, vy, vz = coord
    r = np.linalg.norm(coord[:3])
    phi = arcsin(z / r)
    theta = arctan2(y, x)

    r_dot = (x * vx + y * vy + z * vz) / r
    phi_dot = (vz * (x ** 2 + y ** 2) - z * (x * vx + y * vy)) / (r ** 2 * sqrt(x ** 2 + y ** 2))
    theta_dot = (x * vy - y * vx) / (x ** 2 + y ** 2)

    return np.array([r, theta, phi, r_dot, theta_dot, phi_dot], dtype=float)
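The conversion formulas can be checked numerically with the standard library alone; this sketch re-derives them for a point on the x-axis moving along +y (no numpy, so `arcsin`/`arctan2` become `math.asin`/`math.atan2`):

```python
from math import asin, atan2, sqrt

def cartesian_to_spherical(x, y, z, vx, vy, vz):
    # Same formulas as above: r, theta (right ascension), phi (declination).
    r = sqrt(x**2 + y**2 + z**2)
    phi = asin(z / r)
    theta = atan2(y, x)
    r_dot = (x * vx + y * vy + z * vz) / r
    phi_dot = (vz * (x**2 + y**2) - z * (x * vx + y * vy)) / (r**2 * sqrt(x**2 + y**2))
    theta_dot = (x * vy - y * vx) / (x**2 + y**2)
    return r, theta, phi, r_dot, theta_dot, phi_dot

# Point at distance 2 on the x-axis, moving along +y at unit speed:
r, theta, phi, r_dot, theta_dot, phi_dot = cartesian_to_spherical(2, 0, 0, 0, 1, 0)
print(r, theta, phi)              # 2.0 0.0 0.0
print(r_dot, theta_dot, phi_dot)  # 0.0 0.5 0.0
```

Only the angular rate `theta_dot` is nonzero here, as expected for purely tangential motion in the equatorial plane.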
def p_cpf_list(self, p): '''cpf_list : cpf_list cpf_def | empty''' if p[1] is None: p[0] = [] else: p[1].append(p[2]) p[0] = p[1]
cpf_list : cpf_list cpf_def | empty
Below is the instruction that describes the task:
### Input:
cpf_list : cpf_list cpf_def | empty
### Response:
def p_cpf_list(self, p):
    '''cpf_list : cpf_list cpf_def | empty'''
    if p[1] is None:
        p[0] = []
    else:
        p[1].append(p[2])
        p[0] = p[1]
def _compute(self, inputs, outputs): """ Run one iteration of SPRegion's compute """ #if self.topDownMode and (not 'topDownIn' in inputs): # raise RuntimeError("The input topDownIn must be linked in if " # "topDownMode is True") if self._sfdr is None: raise RuntimeError("Spatial pooler has not been initialized") if not self.topDownMode: # # BOTTOM-UP compute # self._iterations += 1 # Get our inputs into numpy arrays buInputVector = inputs['bottomUpIn'] resetSignal = False if 'resetIn' in inputs: assert len(inputs['resetIn']) == 1 resetSignal = inputs['resetIn'][0] != 0 # Perform inference and/or learning rfOutput = self._doBottomUpCompute( rfInput = buInputVector.reshape((1,buInputVector.size)), resetSignal = resetSignal ) outputs['bottomUpOut'][:] = rfOutput.flat else: # # TOP-DOWN inference # topDownIn = inputs.get('topDownIn',None) spatialTopDownOut, temporalTopDownOut = self._doTopDownInfer(topDownIn) outputs['spatialTopDownOut'][:] = spatialTopDownOut if temporalTopDownOut is not None: outputs['temporalTopDownOut'][:] = temporalTopDownOut # OBSOLETE outputs['anomalyScore'][:] = 0
Run one iteration of SPRegion's compute
Below is the the instruction that describes the task: ### Input: Run one iteration of SPRegion's compute ### Response: def _compute(self, inputs, outputs): """ Run one iteration of SPRegion's compute """ #if self.topDownMode and (not 'topDownIn' in inputs): # raise RuntimeError("The input topDownIn must be linked in if " # "topDownMode is True") if self._sfdr is None: raise RuntimeError("Spatial pooler has not been initialized") if not self.topDownMode: # # BOTTOM-UP compute # self._iterations += 1 # Get our inputs into numpy arrays buInputVector = inputs['bottomUpIn'] resetSignal = False if 'resetIn' in inputs: assert len(inputs['resetIn']) == 1 resetSignal = inputs['resetIn'][0] != 0 # Perform inference and/or learning rfOutput = self._doBottomUpCompute( rfInput = buInputVector.reshape((1,buInputVector.size)), resetSignal = resetSignal ) outputs['bottomUpOut'][:] = rfOutput.flat else: # # TOP-DOWN inference # topDownIn = inputs.get('topDownIn',None) spatialTopDownOut, temporalTopDownOut = self._doTopDownInfer(topDownIn) outputs['spatialTopDownOut'][:] = spatialTopDownOut if temporalTopDownOut is not None: outputs['temporalTopDownOut'][:] = temporalTopDownOut # OBSOLETE outputs['anomalyScore'][:] = 0
def connectTo(self, remoteRouteName): """ Set the name of the route which will be added to outgoing boxes. """ self.remoteRouteName = remoteRouteName # This route must not be started before its router is started. If # sender is None, then the router is not started. When the router is # started, it will start this route. if self.router._sender is not None: self.start()
Set the name of the route which will be added to outgoing boxes.
Below is the instruction that describes the task:
### Input:
Set the name of the route which will be added to outgoing boxes.
### Response:
def connectTo(self, remoteRouteName):
    """
    Set the name of the route which will be added to outgoing boxes.
    """
    self.remoteRouteName = remoteRouteName
    # This route must not be started before its router is started. If
    # sender is None, then the router is not started. When the router is
    # started, it will start this route.
    if self.router._sender is not None:
        self.start()
def cmd(self): """ Determine the final CMD instruction, if any, in the final build stage. CMDs from earlier stages are ignored. :return: value of final stage CMD instruction """ value = None for insndesc in self.structure: if insndesc['instruction'] == 'FROM': # new stage, reset value = None elif insndesc['instruction'] == 'CMD': value = insndesc['value'] return value
Determine the final CMD instruction, if any, in the final build stage. CMDs from earlier stages are ignored. :return: value of final stage CMD instruction
Below is the instruction that describes the task:
### Input:
Determine the final CMD instruction, if any, in the final build stage.
CMDs from earlier stages are ignored.

:return: value of final stage CMD instruction
### Response:
def cmd(self):
    """
    Determine the final CMD instruction, if any, in the final build stage.
    CMDs from earlier stages are ignored.

    :return: value of final stage CMD instruction
    """
    value = None
    for insndesc in self.structure:
        if insndesc['instruction'] == 'FROM':  # new stage, reset
            value = None
        elif insndesc['instruction'] == 'CMD':
            value = insndesc['value']
    return value
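The FROM-resets-CMD rule is easy to demonstrate on a parsed-structure list of the shape the property iterates over (the sample Dockerfile instructions are invented):

```python
def final_cmd(structure):
    # Only the last build stage's CMD counts; each FROM starts a new stage.
    value = None
    for insndesc in structure:
        if insndesc['instruction'] == 'FROM':
            value = None
        elif insndesc['instruction'] == 'CMD':
            value = insndesc['value']
    return value

structure = [
    {'instruction': 'FROM', 'value': 'python:3.11 AS build'},
    {'instruction': 'CMD', 'value': '["python", "build.py"]'},
    {'instruction': 'FROM', 'value': 'python:3.11-slim'},
    {'instruction': 'COPY', 'value': '--from=build /app /app'},
]
print(final_cmd(structure))  # None -- the earlier stage's CMD does not carry over
```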
def t_quotedvar_ENCAPSED_AND_WHITESPACE(t): r'( [^"\\${] | \\(.|\n) | \$(?![A-Za-z_{]) | \{(?!\$) )+' t.lexer.lineno += t.value.count("\n") t.lexer.pop_state() return t
r'( [^"\\${] | \\(.|\n) | \$(?![A-Za-z_{]) | \{(?!\$) )+
Below is the instruction that describes the task:
### Input:
r'( [^"\\${] | \\(.|\n) | \$(?![A-Za-z_{]) | \{(?!\$) )+
### Response:
def t_quotedvar_ENCAPSED_AND_WHITESPACE(t):
    r'( [^"\\${] | \\(.|\n) | \$(?![A-Za-z_{]) | \{(?!\$) )+'
    t.lexer.lineno += t.value.count("\n")
    t.lexer.pop_state()
    return t
def checksum_chain(self):
        """
        Returns the certificate checksums joined with "::" as a single string.
        """
        checksums = []

        for certificate in self.certificates:
            checksums.append(certificate.checksum)

        return "::".join(checksums)
Returns the certificate checksums joined with "::" as a single string.
Below is the instruction that describes the task:
### Input:
Returns the certificate checksums joined with "::" as a single string.
### Response:
def checksum_chain(self):
    """
    Returns the certificate checksums joined with "::" as a single string.
    """
    checksums = []

    for certificate in self.certificates:
        checksums.append(certificate.checksum)

    return "::".join(checksums)
def setActivations(self, value): """ Sets all activations to the value of the argument. Value should be in the range [0,1]. """ #if self.verify and not self.activationSet == 0: # raise LayerError, \ # ('Activation flag not reset. Activations may have been set multiple times without any intervening call to propagate().', self.activationSet) Numeric.put(self.activation, Numeric.arange(len(self.activation)), value) self.activationSet = 1
Sets all activations to the value of the argument. Value should be in the range [0,1].
Below is the instruction that describes the task:
### Input:
Sets all activations to the value of the argument. Value should be in the range [0,1].
### Response:
def setActivations(self, value):
    """
    Sets all activations to the value of the argument.
    Value should be in the range [0,1].
    """
    #if self.verify and not self.activationSet == 0:
    #    raise LayerError, \
    #          ('Activation flag not reset. Activations may have been set multiple times without any intervening call to propagate().', self.activationSet)
    Numeric.put(self.activation, Numeric.arange(len(self.activation)), value)
    self.activationSet = 1
def log(db, job_id, timestamp, level, process, message): """ Write a log record in the database. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: a job ID :param timestamp: timestamp to store in the log record :param level: logging level to store in the log record :param process: process ID to store in the log record :param message: message to store in the log record """ db('INSERT INTO log (job_id, timestamp, level, process, message) ' 'VALUES (?X)', (job_id, timestamp, level, process, message))
Write a log record in the database. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: a job ID :param timestamp: timestamp to store in the log record :param level: logging level to store in the log record :param process: process ID to store in the log record :param message: message to store in the log record
Below is the instruction that describes the task:
### Input:
Write a log record in the database.

:param db: a :class:`openquake.server.dbapi.Db` instance
:param job_id: a job ID
:param timestamp: timestamp to store in the log record
:param level: logging level to store in the log record
:param process: process ID to store in the log record
:param message: message to store in the log record
### Response:
def log(db, job_id, timestamp, level, process, message):
    """
    Write a log record in the database.

    :param db: a :class:`openquake.server.dbapi.Db` instance
    :param job_id: a job ID
    :param timestamp: timestamp to store in the log record
    :param level: logging level to store in the log record
    :param process: process ID to store in the log record
    :param message: message to store in the log record
    """
    db('INSERT INTO log (job_id, timestamp, level, process, message) '
       'VALUES (?X)', (job_id, timestamp, level, process, message))
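The same insert can be sketched against plain sqlite3; note that the `?X` placeholder is specific to openquake's Db wrapper, so this sketch uses standard `?` parameters instead, and the table schema is invented to match the column list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (job_id, timestamp, level, process, message)")

def log(db, job_id, timestamp, level, process, message):
    # Parameterized insert; values are bound, never string-formatted.
    db.execute(
        "INSERT INTO log (job_id, timestamp, level, process, message) "
        "VALUES (?, ?, ?, ?, ?)",
        (job_id, timestamp, level, process, message),
    )

log(conn, 42, "2024-01-01T00:00:00", "INFO", 1234, "calculation started")
row = conn.execute("SELECT job_id, level, message FROM log").fetchone()
print(row)  # (42, 'INFO', 'calculation started')
```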
def plot_slnp(fignum, SiteRec, datablock, key): """ plots lines and planes on a great circle with alpha 95 and mean deprecated (used in pmagplotlib) """ # make the stereonet plt.figure(num=fignum) plot_net(fignum) s = SiteRec['er_site_name'] # # plot on the data # coord = SiteRec['site_tilt_correction'] title = '' if coord == '-1': title = s + ": specimen coordinates" if coord == '0': title = s + ": geographic coordinates" if coord == '100': title = s + ": tilt corrected coordinates" DIblock, GCblock = [], [] for plotrec in datablock: if plotrec[key + '_direction_type'] == 'p': # direction is pole to plane GCblock.append( (float(plotrec[key + "_dec"]), float(plotrec[key + "_inc"]))) else: # assume direction is a directed line DIblock.append( (float(plotrec[key + "_dec"]), float(plotrec[key + "_inc"]))) if len(DIblock) > 0: plot_di(fignum, DIblock) # plot directed lines if len(GCblock) > 0: for pole in GCblock: plot_circ(fignum, pole, 90., 'g') # plot directed lines # # put on the mean direction # x, y = [], [] XY = pmag.dimap(float(SiteRec["site_dec"]), float(SiteRec["site_inc"])) x.append(XY[0]) y.append(XY[1]) plt.scatter(x, y, marker='d', s=80, c='g') plt.title(title) # # get the alpha95 # Xcirc, Ycirc = [], [] Da95, Ia95 = pmag.circ(float(SiteRec["site_dec"]), float( SiteRec["site_inc"]), float(SiteRec["site_alpha95"])) for k in range(len(Da95)): XY = pmag.dimap(Da95[k], Ia95[k]) Xcirc.append(XY[0]) Ycirc.append(XY[1]) plt.plot(Xcirc, Ycirc, 'g')
plots lines and planes on a great circle with alpha 95 and mean deprecated (used in pmagplotlib)
Below is the instruction that describes the task: ### Input: plots lines and planes on a great circle with alpha 95 and mean deprecated (used in pmagplotlib) ### Response: def plot_slnp(fignum, SiteRec, datablock, key): """ plots lines and planes on a great circle with alpha 95 and mean deprecated (used in pmagplotlib) """ # make the stereonet plt.figure(num=fignum) plot_net(fignum) s = SiteRec['er_site_name'] # # plot on the data # coord = SiteRec['site_tilt_correction'] title = '' if coord == '-1': title = s + ": specimen coordinates" if coord == '0': title = s + ": geographic coordinates" if coord == '100': title = s + ": tilt corrected coordinates" DIblock, GCblock = [], [] for plotrec in datablock: if plotrec[key + '_direction_type'] == 'p': # direction is pole to plane GCblock.append( (float(plotrec[key + "_dec"]), float(plotrec[key + "_inc"]))) else: # assume direction is a directed line DIblock.append( (float(plotrec[key + "_dec"]), float(plotrec[key + "_inc"]))) if len(DIblock) > 0: plot_di(fignum, DIblock) # plot directed lines if len(GCblock) > 0: for pole in GCblock: plot_circ(fignum, pole, 90., 'g') # plot directed lines # # put on the mean direction # x, y = [], [] XY = pmag.dimap(float(SiteRec["site_dec"]), float(SiteRec["site_inc"])) x.append(XY[0]) y.append(XY[1]) plt.scatter(x, y, marker='d', s=80, c='g') plt.title(title) # # get the alpha95 # Xcirc, Ycirc = [], [] Da95, Ia95 = pmag.circ(float(SiteRec["site_dec"]), float( SiteRec["site_inc"]), float(SiteRec["site_alpha95"])) for k in range(len(Da95)): XY = pmag.dimap(Da95[k], Ia95[k]) Xcirc.append(XY[0]) Ycirc.append(XY[1]) plt.plot(Xcirc, Ycirc, 'g')
def host_from_uri(uri): """Extract hostname and port from URI. Will use default port for HTTP and HTTPS if none is present in the URI. """ default_ports = { 'HTTP': '80', 'HTTPS': '443', } sch, netloc, path, par, query, fra = urlparse(uri) if ':' in netloc: netloc, port = netloc.split(':', 1) else: port = default_ports.get(sch.upper()) return netloc, port
Extract hostname and port from URI. Will use default port for HTTP and HTTPS if none is present in the URI.
Below is the instruction that describes the task: ### Input: Extract hostname and port from URI. Will use default port for HTTP and HTTPS if none is present in the URI. ### Response: def host_from_uri(uri): """Extract hostname and port from URI. Will use default port for HTTP and HTTPS if none is present in the URI. """ default_ports = { 'HTTP': '80', 'HTTPS': '443', } sch, netloc, path, par, query, fra = urlparse(uri) if ':' in netloc: netloc, port = netloc.split(':', 1) else: port = default_ports.get(sch.upper()) return netloc, port
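The `host_from_uri` record above is self-contained enough to check directly; a runnable sketch of the same function using only the standard library's `urlparse`:

```python
from urllib.parse import urlparse

def host_from_uri(uri):
    """Extract hostname and port, falling back to defaults for HTTP/HTTPS."""
    default_ports = {
        'HTTP': '80',
        'HTTPS': '443',
    }
    # urlparse yields (scheme, netloc, path, params, query, fragment).
    sch, netloc, path, par, query, fra = urlparse(uri)
    if ':' in netloc:
        # Explicit port in the authority component wins.
        netloc, port = netloc.split(':', 1)
    else:
        # Unknown schemes get port None.
        port = default_ports.get(sch.upper())
    return netloc, port

print(host_from_uri('https://example.com/a'))    # ('example.com', '443')
print(host_from_uri('http://example.com:8080'))  # ('example.com', '8080')
```

Note that ports come back as strings (matching the defaults dict), and schemes outside HTTP/HTTPS yield `None` for the port.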
def CropObservations(env): """" Crops the visual observations of an environment so that they only contain the game screen. Removes anything outside the game that usually belongs to universe (browser borders and so on). """ if env.spec.tags.get('flashgames', False): spec = runtime_spec('flashgames').server_registry[env.spec.id] return _CropObservations(env, x=18, y=84, height=spec["height"], width=spec["width"]) elif (env.spec.tags.get('atari', False) and env.spec.tags.get('vnc', False)): return _CropObservations(env, height=194, width=160) else: # if unknown environment (or local atari), do nothing return env
Crops the visual observations of an environment so that they only contain the game screen. Removes anything outside the game that usually belongs to universe (browser borders and so on).
Below is the instruction that describes the task: ### Input: Crops the visual observations of an environment so that they only contain the game screen. Removes anything outside the game that usually belongs to universe (browser borders and so on). ### Response: def CropObservations(env): """" Crops the visual observations of an environment so that they only contain the game screen. Removes anything outside the game that usually belongs to universe (browser borders and so on). """ if env.spec.tags.get('flashgames', False): spec = runtime_spec('flashgames').server_registry[env.spec.id] return _CropObservations(env, x=18, y=84, height=spec["height"], width=spec["width"]) elif (env.spec.tags.get('atari', False) and env.spec.tags.get('vnc', False)): return _CropObservations(env, height=194, width=160) else: # if unknown environment (or local atari), do nothing return env
def combining_goal(state): """ Check if two Cubies are combined on the U face. """ ((corner, edge), (L, U, F, D, R, B)) = state if "U" not in corner or "U" not in edge: return False if set(edge).issubset(set(corner)): return True elif set(edge.facings.keys()).issubset(set(corner.facings.keys())): return False opposite = {"L":"R", "R":"L", "F":"B", "B":"F"} edge_facings = list(edge) for i, (face, square) in enumerate(edge_facings): if face == "U": if square != corner[opposite[edge_facings[(i+1)%2][0]]]: return False else: if square != corner["U"]: return False return True
Check if two Cubies are combined on the U face.
Below is the instruction that describes the task: ### Input: Check if two Cubies are combined on the U face. ### Response: def combining_goal(state): """ Check if two Cubies are combined on the U face. """ ((corner, edge), (L, U, F, D, R, B)) = state if "U" not in corner or "U" not in edge: return False if set(edge).issubset(set(corner)): return True elif set(edge.facings.keys()).issubset(set(corner.facings.keys())): return False opposite = {"L":"R", "R":"L", "F":"B", "B":"F"} edge_facings = list(edge) for i, (face, square) in enumerate(edge_facings): if face == "U": if square != corner[opposite[edge_facings[(i+1)%2][0]]]: return False else: if square != corner["U"]: return False return True
def get(self, name, default="", parent_search=False, multikeys_search=False, __settings_temp=None, __rank_recursion=0): """ Récupération d'une configuration le paramètre ```name``` peut être soit un nom ou un chemin vers la valeur (séparateur /) ```parent_search``` est le boolean qui indique si on doit chercher la valeur dans la hiérarchie plus haute. Si la chaîne "/document/host/val" retourne None, on recherche dans "/document/val" puis dans "/val" ```multikeys_search``` indique si la recherche d'une clef non trouvabe se fait sur les parents en multi clef ie: /graphic/output/logo/enable va aussi chercher dans /graphic/logo/enable ```__settings_temp``` est le dictionnaire temporaire de transmission récursif (intégrant les sous configurations) ```__rank_recursion``` défini le rang de récusion pour chercher aussi depuis la racine du chemin en cas de récursion inverse exemple : valeur = self.settings("document/host/val", "mon_defaut") valeur = self.settings("/document/host/val", "mon_defaut") """ # configuration des settings temporaire pour traitement local if __settings_temp is None: __settings_temp = self.settings # check si le chemin commence par / auquel cas on le supprime if name.startswith("/"): name = name[1:] # check si le chemin termine par / auquel cas on le supprime if name.endswith("/"): name = name[:-1] # check s'il s'agit d'un chemin complet if "/" in name: # récupération du nom de la sous configuraiton name_master = name.split("/")[0] # récupération de l'indice si le nom obtenu contient [] indice_master = -1 indices_master = re.findall(r"\[\d+\]", name_master) if len(indices_master) > 0: try: indice_master = int(indices_master[0].replace("[", "").replace("]", "")) except: pass # suppression de l'indice dans le nom du chemin courant (ie: data[0] devient data) name_master = name_master.replace("[{}]".format(indice_master), "") # recherche si la clef est présente dans le chemin courant if name_master not in __settings_temp.keys(): return None # 
récupération de la sous configuration if indice_master < 0: # la sous configuration n'est pas une liste __settings_temp = __settings_temp[name_master] else: # la sous configuration est une liste (SI JSON !!) __settings_temp = __settings_temp[name_master][indice_master] if self.is_json else __settings_temp[name] # recursion sur le chemin en dessous name_split = name.split("/")[1:] search_path = "/".join(name_split) return_value = self.get( search_path, default, parent_search, multikeys_search, __settings_temp, __rank_recursion + 1) # pas de valeur trouvé, on cherche sur la récursion inverse if len(name_split) > 1 and return_value is None: i = len(name_split) while i >= 0: # on décrémente le curseur de recherche i -= 1 # établissement du nouveau chemin en supprimant le niveau supérieur new_search_path = "/".join(name_split[i-len(name_split):]) return_value = self.get( new_search_path, default, parent_search, multikeys_search, __settings_temp, __rank_recursion + 1) # pas de recherche multi clef if not multikeys_search: break # une valeur a été trouvée if not return_value is None: break # pas de valeur trouvé et on est à la racine du chemin if return_value is None and __rank_recursion == 0: # on change le nom du master et on cherche name = name_split[-1] return_value = self.get( name, default, parent_search, multikeys_search, self.settings, 0) # toujours pas de valeur, on garde le défaut if return_value is None: return_value = default # retour de la valeur récupérée return return_value # récupération de l'indice si le nom obtenu contient [] indice_master = -1 indices_master = re.findall(r"\[\d+\]", name) if len(indices_master) > 0: try: indice_master = int(indices_master[0].replace("[", "").replace("]", "")) except: pass # suppression de l'indice dans le nom du chemin courant (ie: data[0] devient data) name = name.replace("[{}]".format(indice_master), "") # check de la précense de la clef if type(__settings_temp) is str or name not in __settings_temp.keys(): # le hash 
n'est pas présent ! # si la recherche récursive inverse est activée et pas de valeur trouvée, # on recherche plus haut if parent_search: return None return default # récupération de la valeur if indice_master < 0: # la sous configuration n'est pas une liste value = __settings_temp[name] else: # la sous configuration est une liste (SI JSON !!) value = __settings_temp[name][indice_master] if self.is_json else __settings_temp[name] # interdiction de la valeur "None" if value is None: # si la recherche récursive inverse est activée et pas de valeur trouvée, # on recherche plus haut if parent_search: return None # valeur par défaut value = default # trim si value est un str if isinstance(value, str): value = value.strip() # retour de la valeur return value
Récupération d'une configuration le paramètre ```name``` peut être soit un nom ou un chemin vers la valeur (séparateur /) ```parent_search``` est le boolean qui indique si on doit chercher la valeur dans la hiérarchie plus haute. Si la chaîne "/document/host/val" retourne None, on recherche dans "/document/val" puis dans "/val" ```multikeys_search``` indique si la recherche d'une clef non trouvable se fait sur les parents en multi clef ie: /graphic/output/logo/enable va aussi chercher dans /graphic/logo/enable ```__settings_temp``` est le dictionnaire temporaire de transmission récursif (intégrant les sous configurations) ```__rank_recursion``` défini le rang de récursion pour chercher aussi depuis la racine du chemin en cas de récursion inverse exemple : valeur = self.settings("document/host/val", "mon_defaut") valeur = self.settings("/document/host/val", "mon_defaut")
Below is the instruction that describes the task: ### Input: Récupération d'une configuration le paramètre ```name``` peut être soit un nom ou un chemin vers la valeur (séparateur /) ```parent_search``` est le boolean qui indique si on doit chercher la valeur dans la hiérarchie plus haute. Si la chaîne "/document/host/val" retourne None, on recherche dans "/document/val" puis dans "/val" ```multikeys_search``` indique si la recherche d'une clef non trouvable se fait sur les parents en multi clef ie: /graphic/output/logo/enable va aussi chercher dans /graphic/logo/enable ```__settings_temp``` est le dictionnaire temporaire de transmission récursif (intégrant les sous configurations) ```__rank_recursion``` défini le rang de récursion pour chercher aussi depuis la racine du chemin en cas de récursion inverse exemple : valeur = self.settings("document/host/val", "mon_defaut") valeur = self.settings("/document/host/val", "mon_defaut") ### Response: def get(self, name, default="", parent_search=False, multikeys_search=False, __settings_temp=None, __rank_recursion=0): """ Récupération d'une configuration le paramètre ```name``` peut être soit un nom ou un chemin vers la valeur (séparateur /) ```parent_search``` est le boolean qui indique si on doit chercher la valeur dans la hiérarchie plus haute.
Si la chaîne "/document/host/val" retourne None, on recherche dans "/document/val" puis dans "/val" ```multikeys_search``` indique si la recherche d'une clef non trouvabe se fait sur les parents en multi clef ie: /graphic/output/logo/enable va aussi chercher dans /graphic/logo/enable ```__settings_temp``` est le dictionnaire temporaire de transmission récursif (intégrant les sous configurations) ```__rank_recursion``` défini le rang de récusion pour chercher aussi depuis la racine du chemin en cas de récursion inverse exemple : valeur = self.settings("document/host/val", "mon_defaut") valeur = self.settings("/document/host/val", "mon_defaut") """ # configuration des settings temporaire pour traitement local if __settings_temp is None: __settings_temp = self.settings # check si le chemin commence par / auquel cas on le supprime if name.startswith("/"): name = name[1:] # check si le chemin termine par / auquel cas on le supprime if name.endswith("/"): name = name[:-1] # check s'il s'agit d'un chemin complet if "/" in name: # récupération du nom de la sous configuraiton name_master = name.split("/")[0] # récupération de l'indice si le nom obtenu contient [] indice_master = -1 indices_master = re.findall(r"\[\d+\]", name_master) if len(indices_master) > 0: try: indice_master = int(indices_master[0].replace("[", "").replace("]", "")) except: pass # suppression de l'indice dans le nom du chemin courant (ie: data[0] devient data) name_master = name_master.replace("[{}]".format(indice_master), "") # recherche si la clef est présente dans le chemin courant if name_master not in __settings_temp.keys(): return None # récupération de la sous configuration if indice_master < 0: # la sous configuration n'est pas une liste __settings_temp = __settings_temp[name_master] else: # la sous configuration est une liste (SI JSON !!) 
__settings_temp = __settings_temp[name_master][indice_master] if self.is_json else __settings_temp[name] # recursion sur le chemin en dessous name_split = name.split("/")[1:] search_path = "/".join(name_split) return_value = self.get( search_path, default, parent_search, multikeys_search, __settings_temp, __rank_recursion + 1) # pas de valeur trouvé, on cherche sur la récursion inverse if len(name_split) > 1 and return_value is None: i = len(name_split) while i >= 0: # on décrémente le curseur de recherche i -= 1 # établissement du nouveau chemin en supprimant le niveau supérieur new_search_path = "/".join(name_split[i-len(name_split):]) return_value = self.get( new_search_path, default, parent_search, multikeys_search, __settings_temp, __rank_recursion + 1) # pas de recherche multi clef if not multikeys_search: break # une valeur a été trouvée if not return_value is None: break # pas de valeur trouvé et on est à la racine du chemin if return_value is None and __rank_recursion == 0: # on change le nom du master et on cherche name = name_split[-1] return_value = self.get( name, default, parent_search, multikeys_search, self.settings, 0) # toujours pas de valeur, on garde le défaut if return_value is None: return_value = default # retour de la valeur récupérée return return_value # récupération de l'indice si le nom obtenu contient [] indice_master = -1 indices_master = re.findall(r"\[\d+\]", name) if len(indices_master) > 0: try: indice_master = int(indices_master[0].replace("[", "").replace("]", "")) except: pass # suppression de l'indice dans le nom du chemin courant (ie: data[0] devient data) name = name.replace("[{}]".format(indice_master), "") # check de la précense de la clef if type(__settings_temp) is str or name not in __settings_temp.keys(): # le hash n'est pas présent ! 
# si la recherche récursive inverse est activée et pas de valeur trouvée, # on recherche plus haut if parent_search: return None return default # récupération de la valeur if indice_master < 0: # la sous configuration n'est pas une liste value = __settings_temp[name] else: # la sous configuration est une liste (SI JSON !!) value = __settings_temp[name][indice_master] if self.is_json else __settings_temp[name] # interdiction de la valeur "None" if value is None: # si la recherche récursive inverse est activée et pas de valeur trouvée, # on recherche plus haut if parent_search: return None # valeur par défaut value = default # trim si value est un str if isinstance(value, str): value = value.strip() # retour de la valeur return value
def list_kubernetes_roles(self, mount_point='kubernetes'): """GET /auth/<mount_point>/role?list=true :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str. :return: Parsed JSON response from the list roles GET request. :rtype: dict. """ url = 'v1/auth/{0}/role?list=true'.format(mount_point) return self._adapter.get(url).json()
GET /auth/<mount_point>/role?list=true :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str. :return: Parsed JSON response from the list roles GET request. :rtype: dict.
Below is the instruction that describes the task: ### Input: GET /auth/<mount_point>/role?list=true :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str. :return: Parsed JSON response from the list roles GET request. :rtype: dict. ### Response: def list_kubernetes_roles(self, mount_point='kubernetes'): """GET /auth/<mount_point>/role?list=true :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str. :return: Parsed JSON response from the list roles GET request. :rtype: dict. """ url = 'v1/auth/{0}/role?list=true'.format(mount_point) return self._adapter.get(url).json()
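The only logic in the `list_kubernetes_roles` record above, beyond delegating to the HTTP adapter, is building the Vault path. A small sketch of just that path-building step (`role_list_url` is a hypothetical helper name, not part of the library):

```python
def role_list_url(mount_point='kubernetes'):
    # Mirrors the URL template used by list_kubernetes_roles above.
    return 'v1/auth/{0}/role?list=true'.format(mount_point)

print(role_list_url())           # v1/auth/kubernetes/role?list=true
print(role_list_url('k8s-dev'))  # v1/auth/k8s-dev/role?list=true
```

The `?list=true` query string is Vault's convention for turning a read endpoint into a list operation.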
def _i2c_idle(self): """Set I2C signals to idle state with SCL and SDA at a high value. Must be called within a transaction start/end. """ self._ft232h.output_pins({0: GPIO.HIGH, 1: GPIO.HIGH}, write=False) self._command.append(self._ft232h.mpsse_gpio() * _REPEAT_DELAY)
Set I2C signals to idle state with SCL and SDA at a high value. Must be called within a transaction start/end.
Below is the instruction that describes the task: ### Input: Set I2C signals to idle state with SCL and SDA at a high value. Must be called within a transaction start/end. ### Response: def _i2c_idle(self): """Set I2C signals to idle state with SCL and SDA at a high value. Must be called within a transaction start/end. """ self._ft232h.output_pins({0: GPIO.HIGH, 1: GPIO.HIGH}, write=False) self._command.append(self._ft232h.mpsse_gpio() * _REPEAT_DELAY)
def indx_table(node_dict, tbl_mode=False): """Print Table for dict=formatted list conditionally include numbers.""" nt = PrettyTable() nt.header = False nt.padding_width = 2 nt.border = False clr_num = C_TI + "NUM" clr_name = C_TI + "NAME" clr_state = "STATE" + C_NORM t_lu = {True: [clr_num, "NAME", "REGION", "CLOUD", "SIZE", "PUBLIC IP", clr_state], False: [clr_name, "REGION", "CLOUD", "SIZE", "PUBLIC IP", clr_state]} nt.add_row(t_lu[tbl_mode]) for i, node in node_dict.items(): state = C_STAT[node.state] + node.state + C_NORM inum = C_WARN + str(i) + C_NORM if node.public_ips: n_ip = node.public_ips else: n_ip = "-" r_lu = {True: [inum, node.name, node.zone, node.cloud, node.size, n_ip, state], False: [node.name, node.zone, node.cloud, node.size, n_ip, state]} nt.add_row(r_lu[tbl_mode]) if not tbl_mode: print(nt) else: idx_tbl = nt.get_string() return idx_tbl
Print Table for dict=formatted list conditionally include numbers.
Below is the instruction that describes the task: ### Input: Print Table for dict=formatted list conditionally include numbers. ### Response: def indx_table(node_dict, tbl_mode=False): """Print Table for dict=formatted list conditionally include numbers.""" nt = PrettyTable() nt.header = False nt.padding_width = 2 nt.border = False clr_num = C_TI + "NUM" clr_name = C_TI + "NAME" clr_state = "STATE" + C_NORM t_lu = {True: [clr_num, "NAME", "REGION", "CLOUD", "SIZE", "PUBLIC IP", clr_state], False: [clr_name, "REGION", "CLOUD", "SIZE", "PUBLIC IP", clr_state]} nt.add_row(t_lu[tbl_mode]) for i, node in node_dict.items(): state = C_STAT[node.state] + node.state + C_NORM inum = C_WARN + str(i) + C_NORM if node.public_ips: n_ip = node.public_ips else: n_ip = "-" r_lu = {True: [inum, node.name, node.zone, node.cloud, node.size, n_ip, state], False: [node.name, node.zone, node.cloud, node.size, n_ip, state]} nt.add_row(r_lu[tbl_mode]) if not tbl_mode: print(nt) else: idx_tbl = nt.get_string() return idx_tbl
def _fastfood_show(args): """Run on `fastfood show`.""" template_pack = pack.TemplatePack(args.template_pack) if args.stencil_set: stencil_set = template_pack.load_stencil_set(args.stencil_set) print("Stencil Set %s:" % args.stencil_set) print(' Stencils:') for stencil in stencil_set.stencils: print(" %s" % stencil) print(' Options:') for opt, vals in stencil_set.manifest['options'].items(): print(" %s - %s" % (opt, vals['help']))
Run on `fastfood show`.
Below is the instruction that describes the task: ### Input: Run on `fastfood show`. ### Response: def _fastfood_show(args): """Run on `fastfood show`.""" template_pack = pack.TemplatePack(args.template_pack) if args.stencil_set: stencil_set = template_pack.load_stencil_set(args.stencil_set) print("Stencil Set %s:" % args.stencil_set) print(' Stencils:') for stencil in stencil_set.stencils: print(" %s" % stencil) print(' Options:') for opt, vals in stencil_set.manifest['options'].items(): print(" %s - %s" % (opt, vals['help']))
def pivot_by_group( df, variable, value, new_columns, groups, id_cols=None ): """ Pivot a dataframe by group of variables --- ### Parameters *mandatory :* * `variable` (*str*): name of the column used to create the groups. * `value` (*str*): name of the column containing the value to fill the pivoted df. * `new_columns` (*list of str*): names of the new columns. * `groups` (*dict*): names of the groups with their corresponding variables. **Warning**: the list of variables must have the same order as `new_columns` *optional :* * `id_cols` (*list of str*) : names of other columns to keep, default `None`. --- ### Example **Input** | type | variable | montant | |:----:|:----------:|:-------:| | A | var1 | 5 | | A | var1_evol | 0.3 | | A | var2 | 6 | | A | var2_evol | 0.2 | ```cson pivot_by_group : id_cols: ['type'] variable: 'variable' value: 'montant' new_columns: ['value', 'variation'] groups: 'Group 1' : ['var1', 'var1_evol'] 'Group 2' : ['var2', 'var2_evol'] ``` **Output** | type | variable | value | variation | |:----:|:----------:|:-------:|:---------:| | A | Group 1 | 5 | 0.3 | | A | Group 2 | 6 | 0.2 | """ if id_cols is None: index = [variable] else: index = [variable] + id_cols param = pd.DataFrame(groups, index=new_columns) temporary_colum = 'tmp' df[temporary_colum] = df[variable] for column in param.columns: df.loc[df[variable].isin(param[column]), variable] = column param = param.T for column in param.columns: df.loc[ df[temporary_colum].isin(param[column]), temporary_colum] = column df = pivot(df, index, temporary_colum, value) return df
Pivot a dataframe by group of variables --- ### Parameters *mandatory :* * `variable` (*str*): name of the column used to create the groups. * `value` (*str*): name of the column containing the value to fill the pivoted df. * `new_columns` (*list of str*): names of the new columns. * `groups` (*dict*): names of the groups with their corresponding variables. **Warning**: the list of variables must have the same order as `new_columns` *optional :* * `id_cols` (*list of str*) : names of other columns to keep, default `None`. --- ### Example **Input** | type | variable | montant | |:----:|:----------:|:-------:| | A | var1 | 5 | | A | var1_evol | 0.3 | | A | var2 | 6 | | A | var2_evol | 0.2 | ```cson pivot_by_group : id_cols: ['type'] variable: 'variable' value: 'montant' new_columns: ['value', 'variation'] groups: 'Group 1' : ['var1', 'var1_evol'] 'Group 2' : ['var2', 'var2_evol'] ``` **Output** | type | variable | value | variation | |:----:|:----------:|:-------:|:---------:| | A | Group 1 | 5 | 0.3 | | A | Group 2 | 6 | 0.2 |
Below is the instruction that describes the task: ### Input: Pivot a dataframe by group of variables --- ### Parameters *mandatory :* * `variable` (*str*): name of the column used to create the groups. * `value` (*str*): name of the column containing the value to fill the pivoted df. * `new_columns` (*list of str*): names of the new columns. * `groups` (*dict*): names of the groups with their corresponding variables. **Warning**: the list of variables must have the same order as `new_columns` *optional :* * `id_cols` (*list of str*) : names of other columns to keep, default `None`. --- ### Example **Input** | type | variable | montant | |:----:|:----------:|:-------:| | A | var1 | 5 | | A | var1_evol | 0.3 | | A | var2 | 6 | | A | var2_evol | 0.2 | ```cson pivot_by_group : id_cols: ['type'] variable: 'variable' value: 'montant' new_columns: ['value', 'variation'] groups: 'Group 1' : ['var1', 'var1_evol'] 'Group 2' : ['var2', 'var2_evol'] ``` **Output** | type | variable | value | variation | |:----:|:----------:|:-------:|:---------:| | A | Group 1 | 5 | 0.3 | | A | Group 2 | 6 | 0.2 | ### Response: def pivot_by_group( df, variable, value, new_columns, groups, id_cols=None ): """ Pivot a dataframe by group of variables --- ### Parameters *mandatory :* * `variable` (*str*): name of the column used to create the groups. * `value` (*str*): name of the column containing the value to fill the pivoted df. * `new_columns` (*list of str*): names of the new columns. * `groups` (*dict*): names of the groups with their corresponding variables. **Warning**: the list of variables must have the same order as `new_columns` *optional :* * `id_cols` (*list of str*) : names of other columns to keep, default `None`.
--- ### Example **Input** | type | variable | montant | |:----:|:----------:|:-------:| | A | var1 | 5 | | A | var1_evol | 0.3 | | A | var2 | 6 | | A | var2_evol | 0.2 | ```cson pivot_by_group : id_cols: ['type'] variable: 'variable' value: 'montant' new_columns: ['value', 'variation'] groups: 'Group 1' : ['var1', 'var1_evol'] 'Group 2' : ['var2', 'var2_evol'] ``` **Output** | type | variable | value | variation | |:----:|:----------:|:-------:|:---------:| | A | Group 1 | 5 | 0.3 | | A | Group 2 | 6 | 0.2 | """ if id_cols is None: index = [variable] else: index = [variable] + id_cols param = pd.DataFrame(groups, index=new_columns) temporary_colum = 'tmp' df[temporary_colum] = df[variable] for column in param.columns: df.loc[df[variable].isin(param[column]), variable] = column param = param.T for column in param.columns: df.loc[ df[temporary_colum].isin(param[column]), temporary_colum] = column df = pivot(df, index, temporary_colum, value) return df
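The core of `pivot_by_group` — mapping each raw variable to its group name and target column, then pivoting — can be sketched without pandas. The rows, groups, and column names below follow the docstring's own example:

```python
groups = {'Group 1': ['var1', 'var1_evol'], 'Group 2': ['var2', 'var2_evol']}
new_columns = ['value', 'variation']

rows = [
    {'type': 'A', 'variable': 'var1',      'montant': 5},
    {'type': 'A', 'variable': 'var1_evol', 'montant': 0.3},
    {'type': 'A', 'variable': 'var2',      'montant': 6},
    {'type': 'A', 'variable': 'var2_evol', 'montant': 0.2},
]

# Invert the mapping: each raw variable -> (group name, target column).
# This relies on the documented constraint that each group's variable list
# has the same order as new_columns.
target = {}
for group, variables in groups.items():
    for var, col in zip(variables, new_columns):
        target[var] = (group, col)

# Pivot: one output row per (type, group), columns filled from 'montant'.
pivoted = {}
for r in rows:
    group, col = target[r['variable']]
    pivoted.setdefault((r['type'], group), {})[col] = r['montant']

print(pivoted)
# {('A', 'Group 1'): {'value': 5, 'variation': 0.3},
#  ('A', 'Group 2'): {'value': 6, 'variation': 0.2}}
```

This reproduces the Output table of the docstring example; the pandas version does the same renaming via `df.loc[...isin...]` and delegates the final reshape to its `pivot` helper.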
def POST_AUTH(self, courseid, taskid): # pylint: disable=arguments-differ """ Edit a task """ if not id_checker(taskid) or not id_checker(courseid): raise Exception("Invalid course/task id") course, __ = self.get_course_and_check_rights(courseid, allow_all_staff=False) data = web.input(task_file={}) # Delete task ? if "delete" in data: self.task_factory.delete_task(courseid, taskid) if data.get("wipe", False): self.wipe_task(courseid, taskid) raise web.seeother(self.app.get_homepath() + "/admin/"+courseid+"/tasks") # Else, parse content try: try: task_zip = data.get("task_file").file except: task_zip = None del data["task_file"] problems = self.dict_from_prefix("problem", data) limits = self.dict_from_prefix("limits", data) #Tags tags = self.dict_from_prefix("tags", data) if tags is None: tags = {} tags = OrderedDict(sorted(tags.items(), key=lambda item: item[0])) # Sort by key # Repair tags for k in tags: tags[k]["visible"] = ("visible" in tags[k]) # Since unchecked checkboxes are not present here, we manually add them to avoid later errors tags[k]["type"] = int(tags[k]["type"]) if not "id" in tags[k]: tags[k]["id"] = "" # Since textinput is disabled when the tag is organisational, the id field is missing. We add it to avoid KeyErrors if tags[k]["type"] == 2: tags[k]["id"] = "" # Force no id if organisational tag # Remove uncompleted tags (tags with no name or no id) for k in list(tags.keys()): if (tags[k]["id"] == "" and tags[k]["type"] != 2) or tags[k]["name"] == "": del tags[k] # Find duplicate ids. Return an error if some tags use the same id. for k in tags: if tags[k]["type"] != 2: # Ignore organisational tags since they have no id.
count = 0 id = str(tags[k]["id"]) if (" " in id): return json.dumps({"status": "error", "message": _("You can not use spaces in the tag id field.")}) if not id_checker(id): return json.dumps({"status": "error", "message": _("Invalid tag id: {}").format(id)}) for k2 in tags: if tags[k2]["type"] != 2 and tags[k2]["id"] == id: count = count+1 if count > 1: return json.dumps({"status": "error", "message": _("Some tags have the same id! The id of a tag must be unique.")}) data = {key: val for key, val in data.items() if not key.startswith("problem") and not key.startswith("limits") and not key.startswith("tags") and not key.startswith("/")} del data["@action"] # Determines the task filetype if data["@filetype"] not in self.task_factory.get_available_task_file_extensions(): return json.dumps({"status": "error", "message": _("Invalid file type: {}").format(str(data["@filetype"]))}) file_ext = data["@filetype"] del data["@filetype"] # Parse and order the problems (also deletes @order from the result) if problems is None: data["problems"] = OrderedDict([]) else: data["problems"] = OrderedDict([(key, self.parse_problem(val)) for key, val in sorted(iter(problems.items()), key=lambda x: int(x[1]['@order']))]) # Task limits data["limits"] = limits data["tags"] = OrderedDict(sorted(tags.items(), key=lambda x: x[1]['type'])) if "hard_time" in data["limits"] and data["limits"]["hard_time"] == "": del data["limits"]["hard_time"] # Weight try: data["weight"] = float(data["weight"]) except: return json.dumps({"status": "error", "message": _("Grade weight must be a floating-point number")}) # Groups if "groups" in data: data["groups"] = True if data["groups"] == "true" else False # Submision storage if "store_all" in data: try: stored_submissions = data["stored_submissions"] data["stored_submissions"] = 0 if data["store_all"] == "true" else int(stored_submissions) except: return json.dumps( {"status": "error", "message": _("The number of stored submission must be positive!")}) if 
data["store_all"] == "false" and data["stored_submissions"] <= 0: return json.dumps({"status": "error", "message": _("The number of stored submission must be positive!")}) del data['store_all'] # Submission limits if "submission_limit" in data: if data["submission_limit"] == "none": result = {"amount": -1, "period": -1} elif data["submission_limit"] == "hard": try: result = {"amount": int(data["submission_limit_hard"]), "period": -1} except: return json.dumps({"status": "error", "message": _("Invalid submission limit!")}) else: try: result = {"amount": int(data["submission_limit_soft_0"]), "period": int(data["submission_limit_soft_1"])} except: return json.dumps({"status": "error", "message": _("Invalid submission limit!")}) del data["submission_limit_hard"] del data["submission_limit_soft_0"] del data["submission_limit_soft_1"] data["submission_limit"] = result # Accessible if data["accessible"] == "custom": data["accessible"] = "{}/{}/{}".format(data["accessible_start"], data["accessible_soft_end"], data["accessible_end"]) elif data["accessible"] == "true": data["accessible"] = True else: data["accessible"] = False del data["accessible_start"] del data["accessible_end"] del data["accessible_soft_end"] # Checkboxes if data.get("responseIsHTML"): data["responseIsHTML"] = True # Network grading data["network_grading"] = "network_grading" in data except Exception as message: return json.dumps({"status": "error", "message": _("Your browser returned an invalid form ({})").format(message)}) # Get the course try: course = self.course_factory.get_course(courseid) except: return json.dumps({"status": "error", "message": _("Error while reading course's informations")}) # Get original data try: orig_data = self.task_factory.get_task_descriptor_content(courseid, taskid) data["order"] = orig_data["order"] except: pass task_fs = self.task_factory.get_task_fs(courseid, taskid) task_fs.ensure_exists() # Call plugins and return the first error plugin_results = 
self.plugin_manager.call_hook('task_editor_submit', course=course, taskid=taskid, task_data=data, task_fs=task_fs) # Retrieve the first non-null element error = next(filter(None, plugin_results), None) if error is not None: return error try: WebAppTask(course, taskid, data, task_fs, None, self.plugin_manager, self.task_factory.get_problem_types()) except Exception as message: return json.dumps({"status": "error", "message": _("Invalid data: {}").format(str(message))}) if task_zip: try: zipfile = ZipFile(task_zip) except Exception: return json.dumps({"status": "error", "message": _("Cannot read zip file. Files were not modified")}) with tempfile.TemporaryDirectory() as tmpdirname: try: zipfile.extractall(tmpdirname) except Exception: return json.dumps( {"status": "error", "message": _("There was a problem while extracting the zip archive. Some files may have been modified")}) task_fs.copy_to(tmpdirname) self.task_factory.delete_all_possible_task_files(courseid, taskid) self.task_factory.update_task_descriptor_content(courseid, taskid, data, force_extension=file_ext) course.update_all_tags_cache() return json.dumps({"status": "ok"})
Edit a task
def parse_tstv_summary(self):
    """ Create the HTML for the TsTv summary plot. """
    self.vcftools_tstv_summary = dict()
    for f in self.find_log_files('vcftools/tstv_summary', filehandles=True):
        d = {}
        for line in f['f'].readlines()[1:]:  # skip the header line (first row)
            key = line.split()[0]       # first column (MODEL) as key
            val = int(line.split()[1])  # second column (COUNT) as value
            d[key] = val
        self.vcftools_tstv_summary[f['s_name']] = d

    # Filter out ignored sample names
    self.vcftools_tstv_summary = self.ignore_samples(self.vcftools_tstv_summary)

    if len(self.vcftools_tstv_summary) == 0:
        return 0

    # Specifying the categories of the bargraph
    keys = ['AC', 'AG', 'AT', 'CG', 'CT', 'GT', 'Ts', 'Tv']

    pconfig = {
        'id': 'vcftools_tstv_summary',
        'title': 'VCFTools: TsTv Summary',
        'ylab': 'Counts',
    }

    self.add_section(
        name='TsTv Summary',
        anchor='vcftools-tstv-summary',
        description="Plot of `TSTV-SUMMARY` - count of different types of transition and transversion SNPs.",
        plot=bargraph.plot(self.vcftools_tstv_summary, keys, pconfig)
    )

    return len(self.vcftools_tstv_summary)
Create the HTML for the TsTv summary plot.
def accept_quality(accept, default=1):
    """Separates out the quality score from the accepted content_type"""
    quality = default
    if accept and ";" in accept:
        accept, rest = accept.split(";", 1)
        accept_quality = RE_ACCEPT_QUALITY.search(rest)
        if accept_quality:
            quality = float(accept_quality.groupdict().get('quality', quality).strip())
    return (quality, accept.strip())
Separates out the quality score from the accepted content_type
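`RE_ACCEPT_QUALITY` is defined elsewhere in hug's source; a self-contained sketch of the same parsing, with an assumed pattern for the regex, behaves like this:

```python
import re

# Assumed pattern -- hug defines its own RE_ACCEPT_QUALITY elsewhere.
RE_ACCEPT_QUALITY = re.compile(r'q=(?P<quality>[^;]+)')

def accept_quality(accept, default=1):
    """Separate the quality score from an accepted content_type."""
    quality = default
    if accept and ";" in accept:
        accept, rest = accept.split(";", 1)
        match = RE_ACCEPT_QUALITY.search(rest)
        if match:
            quality = float(match.groupdict().get('quality', quality).strip())
    return (quality, accept.strip())
```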
def build_span_analyzer(document, vec):
    """
    Return an analyzer and the preprocessed doc.

    Analyzer will yield pairs of spans and feature, where spans are pairs
    of indices into the preprocessed doc. The idea here is to do minimal
    preprocessing so that we can still recover the same features as sklearn
    vectorizers, but with spans, that will allow us to highlight features
    in preprocessed documents.

    Analyzers are adapted from VectorizerMixin from sklearn.
    """
    preprocessed_doc = vec.build_preprocessor()(vec.decode(document))
    analyzer = None
    if vec.analyzer == 'word' and vec.tokenizer is None:
        stop_words = vec.get_stop_words()
        tokenize = _build_tokenizer(vec)
        analyzer = lambda doc: _word_ngrams(vec, tokenize(doc), stop_words)
    elif vec.analyzer == 'char':
        preprocessed_doc = vec._white_spaces.sub(' ', preprocessed_doc)
        analyzer = lambda doc: _char_ngrams(vec, doc)
    elif vec.analyzer == 'char_wb':
        preprocessed_doc = vec._white_spaces.sub(' ', preprocessed_doc)
        analyzer = lambda doc: _char_wb_ngrams(vec, doc)
    return analyzer, preprocessed_doc
Return an analyzer and the preprocessed doc. Analyzer will yield pairs of spans and feature, where spans are pairs of indices into the preprocessed doc. The idea here is to do minimal preprocessing so that we can still recover the same features as sklearn vectorizers, but with spans, that will allow us to highlight features in preprocessed documents. Analyzers are adapted from VectorizerMixin from sklearn.
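The span idea — every feature paired with the (start, end) indices of the slice it came from — can be illustrated without sklearn's internals (this is a toy char-ngram analyzer, not eli5's actual `_char_ngrams`):

```python
def char_ngram_spans(text, n):
    # Yield ((start, end), ngram) pairs so each feature can be mapped back
    # to the exact slice of the preprocessed document it came from.
    return [((i, i + n), text[i:i + n]) for i in range(len(text) - n + 1)]
```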
def put(self, key, value, minutes):
    """
    Store an item in the cache for a given number of minutes.

    :param key: The cache key
    :type key: str

    :param value: The cache value
    :type value: mixed

    :param minutes: The lifetime in minutes of the cached value
    :type minutes: int
    """
    value = encode(str(self._expiration(minutes))) + encode(self.serialize(value))

    path = self._path(key)

    self._create_cache_directory(path)

    with open(path, 'wb') as fh:
        fh.write(value)
Store an item in the cache for a given number of minutes.

:param key: The cache key
:type key: str

:param value: The cache value
:type value: mixed

:param minutes: The lifetime in minutes of the cached value
:type minutes: int
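The store's file layout is a fixed-width expiration timestamp prefixed to the serialized payload. A self-contained sketch of that scheme (function names and the 10-digit header are illustrative assumptions, not the library's actual helpers):

```python
import os
import pickle
import tempfile
import time

def cache_put(directory, key, value, minutes):
    # Prefix a 10-digit Unix expiration timestamp to the pickled value,
    # mirroring the "timestamp header + payload" file layout above.
    expires_at = int(time.time()) + minutes * 60
    payload = str(expires_at).encode() + pickle.dumps(value)
    with open(os.path.join(directory, key), 'wb') as fh:
        fh.write(payload)

def cache_get(directory, key):
    # Return the cached value if the entry has not expired, else None.
    with open(os.path.join(directory, key), 'rb') as fh:
        raw = fh.read()
    if int(raw[:10]) < time.time():
        return None
    return pickle.loads(raw[10:])
```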
def can_create_objective_bank_with_record_types(self, objective_bank_record_types):
    """Tests if this user can create a single ``ObjectiveBank`` using the desired record types.

    While ``LearningManager.getObjectiveBankRecordTypes()`` can be used to
    examine which records are supported, this method tests which record(s)
    are required for creating a specific ``ObjectiveBank``. Providing an
    empty array tests if an ``ObjectiveBank`` can be created with no records.

    arg:    objective_bank_record_types (osid.type.Type[]): array of
            objective bank record types
    return: (boolean) - ``true`` if ``ObjectiveBank`` creation using the
            specified ``Types`` is supported, ``false`` otherwise
    raise:  NullArgument - ``objective_bank_record_types`` is ``null``
    *compliance: mandatory -- This method must be implemented.*

    """
    # Implemented from template for
    # osid.resource.BinAdminSession.can_create_bin_with_record_types
    # NOTE: It is expected that real authentication hints will be
    # handled in a service adapter above the pay grade of this impl.
    if self._catalog_session is not None:
        return self._catalog_session.can_create_catalog_with_record_types(catalog_record_types=objective_bank_record_types)
    return True
Tests if this user can create a single ``ObjectiveBank`` using the desired record types.

While ``LearningManager.getObjectiveBankRecordTypes()`` can be used to
examine which records are supported, this method tests which record(s)
are required for creating a specific ``ObjectiveBank``. Providing an
empty array tests if an ``ObjectiveBank`` can be created with no records.

arg:    objective_bank_record_types (osid.type.Type[]): array of
        objective bank record types
return: (boolean) - ``true`` if ``ObjectiveBank`` creation using the
        specified ``Types`` is supported, ``false`` otherwise
raise:  NullArgument - ``objective_bank_record_types`` is ``null``
*compliance: mandatory -- This method must be implemented.*
def show(self, format='png', as_data=False):
    '''Returns an Image object of the current surface. Used for displaying
    output in Jupyter notebooks. Adapted from the cairo-jupyter project.'''
    from io import BytesIO

    b = BytesIO()

    if format == 'png':
        from IPython.display import Image
        surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, self.WIDTH, self.HEIGHT)
        self.snapshot(surface)
        surface.write_to_png(b)
        b.seek(0)
        data = b.read()
        if as_data:
            return data
        else:
            return Image(data)
    elif format == 'svg':
        from IPython.display import SVG
        surface = cairo.SVGSurface(b, self.WIDTH, self.HEIGHT)
        surface.finish()
        b.seek(0)
        data = b.read()
        if as_data:
            return data
        else:
            return SVG(data)
Returns an Image object of the current surface. Used for displaying output in Jupyter notebooks. Adapted from the cairo-jupyter project.
def range_is_obj(rng, rdfclass):
    """ Test to see if the range for the class should be an object or a literal """
    if rng == 'rdfs_Literal':
        return False
    if hasattr(rdfclass, rng):
        mod_class = getattr(rdfclass, rng)
        for item in mod_class.cls_defs['rdf_type']:
            try:
                if issubclass(getattr(rdfclass, item), rdfclass.rdfs_Literal):
                    return False
            except AttributeError:
                pass
        if isinstance(mod_class, rdfclass.RdfClassMeta):
            return True
    return False
Test to see if the range for the class should be an object or a literal
def _translate_sext(self, oprnd1, oprnd2, oprnd3):
    """Return a formula representation of a SEXT instruction.
    """
    assert oprnd1.size and oprnd3.size

    op1_var = self._translate_src_oprnd(oprnd1)
    op3_var, op3_var_constrs = self._translate_dst_oprnd(oprnd3)

    if oprnd3.size > oprnd1.size:
        result = smtfunction.sign_extend(op1_var, op3_var.size)
    elif oprnd3.size < oprnd1.size:
        raise Exception("Operands size mismatch.")
    else:
        result = op1_var

    return [op3_var == result] + op3_var_constrs
Return a formula representation of a SEXT instruction.
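`smtfunction.sign_extend` widens a bit-vector by replicating its sign bit. On plain integers the same operation can be sketched like this:

```python
def sign_extend(value, src_bits, dst_bits):
    # Treat `value` as a src_bits-wide two's-complement integer and widen
    # it to dst_bits by replicating the sign bit (what SEXT does).
    sign_bit = 1 << (src_bits - 1)
    value &= (1 << src_bits) - 1            # keep only src_bits bits
    signed = (value ^ sign_bit) - sign_bit  # reinterpret as signed
    return signed & ((1 << dst_bits) - 1)   # mask back to dst_bits width
```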
def prune_tour(self, tour, cpus):
    """
    Test deleting each contig and check the delta_score; tour here must be
    an array of ints.
    """
    while True:
        tour_score, = self.evaluate_tour_M(tour)
        logging.debug("Starting score: {}".format(tour_score))
        active_sizes = self.active_sizes
        M = self.M
        args = []
        for i, t in enumerate(tour):
            stour = tour[:i] + tour[i + 1:]
            args.append((t, stour, tour_score, active_sizes, M))

        # Parallel run
        p = Pool(processes=cpus)
        results = list(p.imap(prune_tour_worker, args))
        assert len(tour) == len(results), \
            "Array size mismatch, tour({}) != results({})" \
            .format(len(tour), len(results))

        # Identify outliers
        active_contigs = self.active_contigs
        idx, log10deltas = zip(*results)
        lb, ub = outlier_cutoff(log10deltas)
        logging.debug("Log10(delta_score) ~ [{}, {}]".format(lb, ub))

        remove = set(active_contigs[x] for (x, d) in results if d < lb)
        self.active -= remove
        self.report_active()

        tig_to_idx = self.tig_to_idx
        tour = [active_contigs[x] for x in tour]
        tour = array.array('i', [tig_to_idx[x] for x in tour if x not in remove])

        if not remove:
            break

    self.tour = tour
    self.flip_all(tour)

    return tour
Test deleting each contig and check the delta_score; tour here must be an array of ints.
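`outlier_cutoff` returns a lower and upper bound on the log10 deltas; a plausible sketch uses Tukey's IQR fences (jcvi's actual rule may differ — this is an assumption, and `prune_tour` only consumes the lower bound `lb`):

```python
import statistics

def outlier_cutoff(values, k=1.5):
    # Tukey fences: points below Q1 - k*IQR (or above Q3 + k*IQR) are
    # treated as outliers relative to the bulk of the data.
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr
```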
def log_errors(f, self, *args, **kwargs):
    """decorator to log unhandled exceptions raised in a method.

    For use wrapping on_recv callbacks, so that exceptions
    do not cause the stream to be closed.
    """
    try:
        return f(self, *args, **kwargs)
    except Exception:
        self.log.error("Uncaught exception in %r" % f, exc_info=True)
decorator to log unhandled exceptions raised in a method. For use wrapping on_recv callbacks, so that exceptions do not cause the stream to be closed.
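The function above takes `f` and `self` explicitly because it is applied through a decorator helper; written as a plain standalone decorator, the same idea looks like this:

```python
import functools
import logging

def log_errors(f):
    # Swallow and log any exception raised by f, so a failing on_recv
    # callback cannot tear down the stream that invoked it.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception:
            logging.getLogger(__name__).error("Uncaught exception in %r", f, exc_info=True)
    return wrapper
```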
def _set_task_uuid(self, dependencies):
    """Adds universally unique user ids (UUID) to each task of the workflow.

    :param dependencies: The list of dependencies between tasks defining the computational graph
    :type dependencies: list(Dependency)
    :return: A dictionary mapping UUID to dependencies
    :rtype: dict(str: Dependency)
    """
    uuid_dict = {}
    for dep in dependencies:
        task = dep.task
        if task.private_task_config.uuid in uuid_dict:
            raise ValueError('EOWorkflow cannot execute the same instance of EOTask multiple times')

        task.private_task_config.uuid = self.id_gen.next()
        uuid_dict[task.private_task_config.uuid] = dep

    return uuid_dict
Adds universally unique user ids (UUID) to each task of the workflow.

:param dependencies: The list of dependencies between tasks defining the computational graph
:type dependencies: list(Dependency)
:return: A dictionary mapping UUID to dependencies
:rtype: dict(str: Dependency)
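A standalone sketch of the same bookkeeping — assign a fresh UUID per task and reject the same instance appearing twice (simplified: no `Dependency` wrapper or `private_task_config` here, and `uuid.uuid4()` stands in for the workflow's id generator):

```python
import uuid

def assign_uuids(tasks):
    # Map a fresh UUID to each task; seeing the same object twice means
    # the workflow reuses one task instance, which is rejected.
    seen = set()
    uuid_dict = {}
    for task in tasks:
        if id(task) in seen:
            raise ValueError("cannot execute the same task instance multiple times")
        seen.add(id(task))
        uuid_dict[str(uuid.uuid4())] = task
    return uuid_dict
```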
def send_keys(key_string):
    """ sends the text or keys to the active application using shell
    Note, that the imp module shows deprecation warning.
    Examples:
        shell.SendKeys("^a")       # CTRL+A
        shell.SendKeys("{DELETE}") # Delete key
        shell.SendKeys("hello this is a lot of text with a //")
    """
    try:
        shell = win32com.client.Dispatch("WScript.Shell")
        shell.SendKeys(key_string)
    except Exception as ex:
        print('error calling win32com.client.Dispatch (SendKeys): ' + str(ex))
sends the text or keys to the active application using shell
Note, that the imp module shows deprecation warning.
Examples:
    shell.SendKeys("^a")       # CTRL+A
    shell.SendKeys("{DELETE}") # Delete key
    shell.SendKeys("hello this is a lot of text with a //")
def download(self, resource_id):
    """Update the request URI to download the document for this resource.

    Args:
        resource_id (integer): The group id.
    """
    self.resource_id(str(resource_id))
    self._request_uri = '{}/download'.format(self._request_uri)
Update the request URI to download the document for this resource.

Args:
    resource_id (integer): The group id.
Below is the instruction that describes the task:
### Input:
Update the request URI to download the document for this resource.

Args:
    resource_id (integer): The group id.
### Response:
def download(self, resource_id):
    """Update the request URI to download the document for this resource.

    Args:
        resource_id (integer): The group id.
    """
    self.resource_id(str(resource_id))
    self._request_uri = '{}/download'.format(self._request_uri)
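The two-step URI construction in `download()` above is easy to sketch on its own. The `ResourceEndpoint` class below is a hypothetical stand-in for the real API-client object (whose `resource_id` helper is not shown in the row), not its actual implementation:

```python
class ResourceEndpoint:
    """Hypothetical minimal client showing the request-URI pattern."""

    def __init__(self, base_uri):
        self._request_uri = base_uri

    def resource_id(self, rid):
        # Append the resource-id segment, mirroring the assumed client helper.
        self._request_uri = '{}/{}'.format(self._request_uri, rid)

    def download(self, resource_id):
        # Same two steps as the original: id segment first, then /download.
        self.resource_id(str(resource_id))
        self._request_uri = '{}/download'.format(self._request_uri)
```

For example, `ResourceEndpoint('/v2/groups').download(123)` leaves the request URI as `/v2/groups/123/download`.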
def queryAll(self, *args, **kwargs):
    """
    Returns a :class:`Deferred` object which will have its callback invoked
    with a :class:`BatchedView` when the results are complete.

    Parameters follow conventions of :meth:`~couchbase.bucket.Bucket.query`.

    Example::

        d = cb.queryAll("beer", "brewery_beers")
        def on_all_rows(rows):
            for row in rows:
                print("Got row {0}".format(row))
        d.addCallback(on_all_rows)
    """
    if not self.connected:
        cb = lambda x: self.queryAll(*args, **kwargs)
        return self.connect().addCallback(cb)

    kwargs['itercls'] = BatchedView
    o = super(RawBucket, self).query(*args, **kwargs)
    o.start()
    return o._getDeferred()
Returns a :class:`Deferred` object which will have its callback invoked
with a :class:`BatchedView` when the results are complete.

Parameters follow conventions of :meth:`~couchbase.bucket.Bucket.query`.

Example::

    d = cb.queryAll("beer", "brewery_beers")
    def on_all_rows(rows):
        for row in rows:
            print("Got row {0}".format(row))
    d.addCallback(on_all_rows)
Below is the instruction that describes the task:
### Input:
Returns a :class:`Deferred` object which will have its callback invoked
with a :class:`BatchedView` when the results are complete.

Parameters follow conventions of :meth:`~couchbase.bucket.Bucket.query`.

Example::

    d = cb.queryAll("beer", "brewery_beers")
    def on_all_rows(rows):
        for row in rows:
            print("Got row {0}".format(row))
    d.addCallback(on_all_rows)
### Response:
def queryAll(self, *args, **kwargs):
    """
    Returns a :class:`Deferred` object which will have its callback invoked
    with a :class:`BatchedView` when the results are complete.

    Parameters follow conventions of :meth:`~couchbase.bucket.Bucket.query`.

    Example::

        d = cb.queryAll("beer", "brewery_beers")
        def on_all_rows(rows):
            for row in rows:
                print("Got row {0}".format(row))
        d.addCallback(on_all_rows)
    """
    if not self.connected:
        cb = lambda x: self.queryAll(*args, **kwargs)
        return self.connect().addCallback(cb)

    kwargs['itercls'] = BatchedView
    o = super(RawBucket, self).query(*args, **kwargs)
    o.start()
    return o._getDeferred()
def getInterval(self, alpha):
    """ Evaluate the interval corresponding to a C.L. of (1-alpha)%.

    Parameters
    ----------
    alpha : limit confidence level.
    """
    dlnl = twosided_cl_to_dlnl(1.0 - alpha)
    lo_lim = self.getDeltaLogLike(dlnl, upper=False)
    hi_lim = self.getDeltaLogLike(dlnl, upper=True)
    return (lo_lim, hi_lim)
Evaluate the interval corresponding to a C.L. of (1-alpha)%.

Parameters
----------
alpha : limit confidence level.
Below is the instruction that describes the task:
### Input:
Evaluate the interval corresponding to a C.L. of (1-alpha)%.

Parameters
----------
alpha : limit confidence level.
### Response:
def getInterval(self, alpha):
    """ Evaluate the interval corresponding to a C.L. of (1-alpha)%.

    Parameters
    ----------
    alpha : limit confidence level.
    """
    dlnl = twosided_cl_to_dlnl(1.0 - alpha)
    lo_lim = self.getDeltaLogLike(dlnl, upper=False)
    hi_lim = self.getDeltaLogLike(dlnl, upper=True)
    return (lo_lim, hi_lim)
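The interval logic above depends on `twosided_cl_to_dlnl` and `getDeltaLogLike`, neither of which is shown in the row. The sketch below assumes the usual definition — the delta-log-likelihood equals z²/2 for the two-sided normal quantile z (the actual library helper may differ) — and applies it to a parabolic log-likelihood, where the crossing points have a closed form:

```python
import math
from statistics import NormalDist


def twosided_cl_to_dlnl(cl):
    """Assumed definition: delta-log-likelihood z**2 / 2, where z is the
    two-sided normal quantile for confidence level cl."""
    z = NormalDist().inv_cdf(0.5 * (1.0 + cl))
    return 0.5 * z * z


def gaussian_interval(mu, sigma, alpha):
    """Closed-form analogue of getInterval for a parabolic log-likelihood
    lnL(x) = -(x - mu)**2 / (2 * sigma**2): the points where lnL has
    dropped by dlnl sit at mu +/- sigma * sqrt(2 * dlnl)."""
    dlnl = twosided_cl_to_dlnl(1.0 - alpha)
    half_width = sigma * math.sqrt(2.0 * dlnl)
    return (mu - half_width, mu + half_width)
```

With alpha = 0.05 this reproduces the familiar ±1.96σ interval.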
def get_series(self, series):
    """
    Returns a census series API handler.
    """
    if series == "acs1":
        return self.census.acs1dp
    elif series == "acs5":
        return self.census.acs5
    elif series == "sf1":
        return self.census.sf1
    elif series == "sf3":
        return self.census.sf3
    else:
        return None
Returns a census series API handler.
Below is the instruction that describes the task:
### Input:
Returns a census series API handler.
### Response:
def get_series(self, series):
    """
    Returns a census series API handler.
    """
    if series == "acs1":
        return self.census.acs1dp
    elif series == "acs5":
        return self.census.acs5
    elif series == "sf1":
        return self.census.sf1
    elif series == "sf3":
        return self.census.sf3
    else:
        return None
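The if/elif chain in `get_series` is equivalent to a dictionary lookup, since `dict.get` already returns `None` for unknown keys. A sketch with hypothetical placeholder handlers (in the real code these would be attributes of the census client, e.g. `self.census.acs1dp`):

```python
# Hypothetical placeholder handlers standing in for the census client's
# attributes (self.census.acs1dp, self.census.acs5, ...).
_SERIES_HANDLERS = {
    "acs1": "acs1dp-handler",
    "acs5": "acs5-handler",
    "sf1": "sf1-handler",
    "sf3": "sf3-handler",
}


def get_series(series):
    """Dict-based equivalent of the if/elif chain; dict.get already
    returns None for unknown series names."""
    return _SERIES_HANDLERS.get(series)
```

The dict form also makes the supported series names introspectable in one place.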
def as_text(self, max_rows=0, sep=" | "):
    """Format table as text."""
    if not max_rows or max_rows > self.num_rows:
        max_rows = self.num_rows
    omitted = max(0, self.num_rows - max_rows)
    labels = self._columns.keys()
    fmts = self._get_column_formatters(max_rows, False)
    rows = [[fmt(label, label=True) for fmt, label in zip(fmts, labels)]]
    for row in itertools.islice(self.rows, max_rows):
        rows.append([f(v, label=False) for v, f in zip(row, fmts)])
    lines = [sep.join(row) for row in rows]
    if omitted:
        lines.append('... ({} rows omitted)'.format(omitted))
    return '\n'.join([line.rstrip() for line in lines])
Format table as text.
Below is the instruction that describes the task:
### Input:
Format table as text.
### Response:
def as_text(self, max_rows=0, sep=" | "):
    """Format table as text."""
    if not max_rows or max_rows > self.num_rows:
        max_rows = self.num_rows
    omitted = max(0, self.num_rows - max_rows)
    labels = self._columns.keys()
    fmts = self._get_column_formatters(max_rows, False)
    rows = [[fmt(label, label=True) for fmt, label in zip(fmts, labels)]]
    for row in itertools.islice(self.rows, max_rows):
        rows.append([f(v, label=False) for v, f in zip(row, fmts)])
    lines = [sep.join(row) for row in rows]
    if omitted:
        lines.append('... ({} rows omitted)'.format(omitted))
    return '\n'.join([line.rstrip() for line in lines])
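The `as_text` method above follows a common pattern: a header row, up to `max_rows` data rows, and an "… rows omitted" trailer. A self-contained sketch of the same pattern on plain lists (not the real `Table` internals, which use per-column formatters):

```python
import itertools


def as_text(labels, columns, max_rows=0, sep=" | "):
    """Header row, up to max_rows data rows, and an omission trailer —
    the same shape as the Table.as_text method, on plain lists."""
    num_rows = len(columns[0]) if columns else 0
    if not max_rows or max_rows > num_rows:
        max_rows = num_rows
    omitted = max(0, num_rows - max_rows)
    rows = [list(labels)]
    # zip(*columns) turns column-major storage into row-major iteration,
    # and islice truncates without materialising the full row list.
    for row in itertools.islice(zip(*columns), max_rows):
        rows.append([str(v) for v in row])
    lines = [sep.join(row) for row in rows]
    if omitted:
        lines.append('... ({} rows omitted)'.format(omitted))
    return '\n'.join(line.rstrip() for line in lines)
```

For example, two columns of three values printed with `max_rows=2` yield a header line, two data lines, and a one-row-omitted trailer.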