Dataset columns:
    code      — string, lengths 75 to 104k
    docstring — string, lengths 1 to 46.9k
    text      — string, lengths 164 to 112k
def put_container(self, container, headers=None, query=None, cdn=False,
                  body=None):
    """
    PUTs the container and returns the results. This is usually done to
    create new containers and can also be used to set X-Container-Meta-xxx
    headers. Note that if the container already exists, any existing
    X-Container-Meta-xxx headers will remain untouched. To remove an
    X-Container-Meta-xxx header, send the header with an empty string as
    its value.

    :param container: The name of the container.
    :param headers: Additional headers to send with the request.
    :param query: Set to a dict of query values to send on the query
        string of the request.
    :param cdn: If set True, the CDN management interface will be used.
    :param body: Some container PUT requests, like the extract-archive
        bulk upload request, take a body.
    :returns: A tuple of (status, reason, headers, contents).

        :status: is an int for the HTTP status code.
        :reason: is the str for the HTTP status (ex: "Ok").
        :headers: is a dict with all lowercase keys of the HTTP headers;
            if a header has multiple values, it will be a list.
        :contents: is the str for the HTTP body.
    """
    path = self._container_path(container)
    return self.request(
        'PUT', path, body or '', headers, query=query, cdn=cdn)
def read_folder(folder):
    """
    Parameters
    ----------
    folder : str

    Returns
    -------
    list of HandwrittenData objects
    """
    hwr_objects = []
    for filepath in natsort.natsorted(glob.glob("%s/*.inkml" % folder)):
        tmp = inkml.read(filepath)
        for hwr in tmp.to_single_symbol_list():
            hwr_objects.append(hwr)
    logging.info("Done reading formulas")
    save_raw_pickle(hwr_objects)
    return hwr_objects
def qos_queue_scheduler_strict_priority_dwrr_traffic_class_last(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    qos = ET.SubElement(config, "qos",
                        xmlns="urn:brocade.com:mgmt:brocade-qos")
    queue = ET.SubElement(qos, "queue")
    scheduler = ET.SubElement(queue, "scheduler")
    strict_priority = ET.SubElement(scheduler, "strict-priority")
    dwrr_traffic_class_last = ET.SubElement(strict_priority,
                                            "dwrr-traffic-class-last")
    dwrr_traffic_class_last.text = kwargs.pop('dwrr_traffic_class_last')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
def _summarize_o_mutation_type(model):
    """
    This function creates the actual mutation io summary
    corresponding to the model
    """
    from nautilus.api.util import summarize_mutation_io
    # compute the appropriate name for the object
    object_type_name = get_model_string(model)
    # return a mutation io object
    return summarize_mutation_io(
        name=object_type_name,
        type=_summarize_object_type(model),
        required=False
    )
def _can_retry(self, batch, error):
    """
    We can retry a send if the error is transient and the
    number of attempts taken is fewer than the maximum allowed
    """
    return (batch.attempts < self.config['retries']
            and getattr(error, 'retriable', False))
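The retry predicate pairs an attempt counter with a transient-error flag read via getattr. A minimal standalone sketch of the same check (the Batch and RetriableError classes below are hypothetical stand-ins, not part of the library):

```python
class Batch:
    """Hypothetical stand-in for a producer batch tracking send attempts."""
    def __init__(self, attempts):
        self.attempts = attempts

class RetriableError(Exception):
    """Hypothetical transient error: carries the 'retriable' flag."""
    retriable = True

def can_retry(batch, error, max_retries):
    # Retry only transient errors, and only while attempts remain;
    # getattr defaults to False for errors without a 'retriable' attribute.
    return batch.attempts < max_retries and getattr(error, 'retriable', False)
```

The getattr default means any exception type that never declared retriability is treated as permanent, which keeps the predicate safe for unknown error classes.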
def get_merkle_root(block_representation, coin_symbol='btc', api_key=None):
    '''
    Takes a block_representation and returns the merkle root
    '''
    return get_block_overview(block_representation=block_representation,
                              coin_symbol=coin_symbol,
                              txn_limit=1,
                              api_key=api_key)['mrkl_root']
def save_array(store, arr, **kwargs):
    """Convenience function to save a NumPy array to the local file system,
    following a similar API to the NumPy save() function.

    Parameters
    ----------
    store : MutableMapping or string
        Store or path to directory in file system or name of zip file.
    arr : ndarray
        NumPy array with data to save.
    kwargs
        Passed through to :func:`create`, e.g., compressor.

    Examples
    --------
    Save an array to a directory on the file system (uses a
    :class:`DirectoryStore`)::

        >>> import zarr
        >>> import numpy as np
        >>> arr = np.arange(10000)
        >>> zarr.save_array('data/example.zarr', arr)
        >>> zarr.load('data/example.zarr')
        array([   0,    1,    2, ..., 9997, 9998, 9999])

    Save an array to a single file (uses a :class:`ZipStore`)::

        >>> zarr.save_array('data/example.zip', arr)
        >>> zarr.load('data/example.zip')
        array([   0,    1,    2, ..., 9997, 9998, 9999])

    """
    may_need_closing = isinstance(store, str)
    store = normalize_store_arg(store, clobber=True)
    try:
        _create_array(arr, store=store, overwrite=True, **kwargs)
    finally:
        if may_need_closing and hasattr(store, 'close'):
            # needed to ensure zip file records are written
            store.close()
def sign_extend(self, new_length):
    """
    Unary operation: SignExtend

    :param new_length: New length after sign-extension
    :return: A new StridedInterval
    """
    msb = self.extract(self.bits - 1, self.bits - 1).eval(2)
    if msb == [0]:
        # All positive numbers
        return self.zero_extend(new_length)
    if msb == [1]:
        # All negative numbers
        si = self.copy()
        si._bits = new_length
        mask = (2 ** new_length - 1) - (2 ** self.bits - 1)
        si._lower_bound |= mask
        si._upper_bound |= mask
    else:
        # Both positive numbers and negative numbers
        numbers = self._nsplit()
        # Since there are both positive and negative numbers, there must
        # be two bounds after nsplit
        # assert len(numbers) == 2
        all_resulting_intervals = list()
        assert len(numbers) > 0
        for n in numbers:
            a, b = n.lower_bound, n.upper_bound
            mask_a = 0
            mask_b = 0
            mask_n = ((1 << (new_length - n.bits)) - 1) << n.bits
            if StridedInterval._get_msb(a, n.bits) == 1:
                mask_a = mask_n
            if StridedInterval._get_msb(b, n.bits) == 1:
                mask_b = mask_n
            si_ = StridedInterval(bits=new_length, stride=n.stride,
                                  lower_bound=a | mask_a,
                                  upper_bound=b | mask_b)
            all_resulting_intervals.append(si_)
        si = StridedInterval.least_upper_bound(*all_resulting_intervals).normalize()

    si.uninitialized = self.uninitialized
    return si
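The negative-number branch above ORs a high-bit mask into the interval bounds. The underlying two's-complement arithmetic can be sketched on a plain integer (a simplified illustration of the masking step, not the StridedInterval implementation itself):

```python
def sign_extend_int(value, old_bits, new_bits):
    """Sign-extend a non-negative bit pattern from old_bits to new_bits."""
    # Test the most significant bit of the old width.
    msb = (value >> (old_bits - 1)) & 1
    if msb:
        # Fill the newly added high bits with ones, mirroring
        # (2**new_bits - 1) - (2**old_bits - 1) in the method above.
        mask = ((1 << new_bits) - 1) ^ ((1 << old_bits) - 1)
        value |= mask
    return value
```

Positive patterns (MSB clear) are left untouched, which is exactly why the method can delegate that case to zero_extend.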
def verify(self, signature):
    """Verifies a signature

    :raises InvalidJWSSignature: if the verification fails.
    """
    try:
        payload = self._payload()
        sigin = b'.'.join([self.protected.encode('utf-8'), payload])
        self.engine.verify(self.key, sigin, signature)
    except Exception as e:  # pylint: disable=broad-except
        raise InvalidJWSSignature('Verification failed', repr(e))
    return True
def main():
    """Handles external calling for this module

    Execute this python module and provide the args shown below to
    external call this module to send Slack messages with attachments!

    :return: None
    """
    log = logging.getLogger(mod_logger + '.main')
    parser = argparse.ArgumentParser(description='This Python module allows '
                                                 'sending Slack messages.')
    parser.add_argument('-u', '--url', help='Slack webhook URL', required=True)
    parser.add_argument('-t', '--text', help='Text of the message', required=True)
    parser.add_argument('-n', '--channel', help='Slack channel', required=True)
    parser.add_argument('-i', '--icon', help='URL for the Slack icon', required=False)
    parser.add_argument('-c', '--color', help='Color of the Slack post', required=False)
    parser.add_argument('-a', '--attachment', help='Text for the Slack Attachment', required=False)
    parser.add_argument('-p', '--pretext', help='Pretext for the Slack attachment', required=False)
    args = parser.parse_args()

    # Create the SlackMessage object
    try:
        slack_msg = SlackMessage(args.url, channel=args.channel,
                                 icon_url=args.icon, text=args.text)
    except ValueError as e:
        msg = 'Unable to create slack message\n{ex}'.format(ex=e)
        log.error(msg)
        print(msg)
        return

    # If provided, create the SlackAttachment object
    if args.attachment:
        try:
            slack_att = SlackAttachment(fallback=args.attachment,
                                        color=args.color,
                                        pretext=args.pretext,
                                        text=args.attachment)
        except ValueError:
            _, ex, trace = sys.exc_info()
            log.error('Unable to create slack attachment\n{e}'.format(e=str(ex)))
            return
        slack_msg.add_attachment(slack_att)

    # Send Slack message
    try:
        slack_msg.send()
    except (TypeError, ValueError, IOError):
        _, ex, trace = sys.exc_info()
        log.error('Unable to send Slack message\n{e}'.format(e=str(ex)))
        return
    log.debug('Your message has been Slacked successfully!')
def poweron_refresh(self):
    """Keep requesting all attributes until it works.

    Immediately after a power on event (POW1) the AVR is inconsistent
    with which attributes can be successfully queried. When we detect
    that power has just been turned on, we loop every second making a
    bulk query for every known attribute. This continues until we detect
    that values have been returned for at least one input name (this
    seems to be the laggiest of all the attributes)
    """
    if self._poweron_refresh_successful:
        return
    else:
        self.refresh_all()
        self._loop.call_later(2, self.poweron_refresh)
def save_lyrics(self, filename=None, extension='json', verbose=True,
                overwrite=None, binary_encoding=False):
    """Allows user to save song lyrics from Song object to a .json or .txt file."""
    extension = extension.lstrip(".")
    assert (extension == 'json') or (extension == 'txt'), "format_ must be JSON or TXT"

    # Determine the filename
    if filename:
        for ext in ["txt", "TXT", "json", "JSON"]:
            filename = filename.replace("." + ext, "")
        filename += "." + extension
    else:
        filename = "Lyrics_{}_{}.{}".format(self.artist.replace(" ", ""),
                                            self.title.replace(" ", ""),
                                            extension).lower()
    filename = self._sanitize_filename(filename)

    # Check if file already exists
    write_file = False
    if not os.path.isfile(filename):
        write_file = True
    elif overwrite:
        write_file = True
    else:
        if input("{} already exists. Overwrite?\n(y/n): ".format(filename)).lower() == 'y':
            write_file = True

    # Format lyrics as either .txt or .json
    if extension == 'json':
        lyrics_to_write = {'songs': [], 'artist': self.artist}
        lyrics_to_write['songs'].append(self.to_dict())
    else:
        lyrics_to_write = self.lyrics
    if binary_encoding:
        lyrics_to_write = lyrics_to_write.encode('utf8')

    # Write the lyrics to either a .json or .txt file
    if write_file:
        with open(filename, 'wb' if binary_encoding else 'w') as lyrics_file:
            if extension == 'json':
                json.dump(lyrics_to_write, lyrics_file)
            else:
                lyrics_file.write(lyrics_to_write)
        if verbose:
            print('Wrote {} to {}.'.format(self.title, filename))
    else:
        if verbose:
            print('Skipping file save.\n')
    return lyrics_to_write
def get_status(self, instance):
    """Retrieves the status of a field from cache.

    Fields in state 'error' and 'complete' will not retain the status
    after the call.
    """
    status_key, status = self._get_status(instance)
    if status['state'] in ['complete', 'error']:
        cache.delete(status_key)
    return status
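The docstring describes a read-and-clear pattern: terminal states are reported once, then evicted. That behaviour can be sketched with a plain dict standing in for the cache backend (the StatusCache class below is illustrative, not the actual field/cache API):

```python
class StatusCache:
    """Illustrative read-and-clear status store; a dict replaces the cache."""

    def __init__(self):
        self._store = {}

    def set(self, key, status):
        self._store[key] = status

    def get_status(self, key):
        status = self._store.get(key)
        # Terminal states are deleted on read, so they are reported only once;
        # in-progress states survive repeated reads.
        if status and status['state'] in ('complete', 'error'):
            del self._store[key]
        return status
```

Deleting on read means a second caller sees no stale 'complete' or 'error' entry, at the cost of the first reader being the only one to observe the terminal state.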
def parse_all(self):
    """Parse the __all__ definition in a module."""
    assert self.current.value == "__all__"
    self.consume(tk.NAME)
    if self.current.value != "=":
        raise AllError("Could not evaluate contents of __all__. ")
    self.consume(tk.OP)
    if self.current.value not in "([":
        raise AllError("Could not evaluate contents of __all__. ")
    self.consume(tk.OP)

    self.all = []
    all_content = "("
    while self.current.kind != tk.OP or self.current.value not in ")]":
        if self.current.kind in (tk.NL, tk.COMMENT):
            pass
        elif self.current.kind == tk.STRING or self.current.value == ",":
            all_content += self.current.value
        else:
            raise AllError(
                "Unexpected token kind in __all__: {!r}. ".format(
                    self.current.kind
                )
            )
        self.stream.move()
    self.consume(tk.OP)
    all_content += ")"

    try:
        self.all = eval(all_content, {})
    except BaseException as e:
        raise AllError(
            "Could not evaluate contents of __all__."
            "\bThe value was {}. The exception was:\n{}".format(all_content, e)
        )
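The parser above assembles STRING and comma tokens into a tuple literal, then evaluates it. The final evaluation step can be sketched with ast.literal_eval — a safer stand-in for the eval call in the original, since it only accepts Python literals (this is a sketch of the idea, not the parser's actual implementation):

```python
import ast

def eval_all_content(all_content):
    # all_content is a tuple literal assembled from the source tokens,
    # e.g. "('foo', 'bar',)". literal_eval rejects anything that is not
    # a plain literal, so arbitrary expressions cannot execute.
    return list(ast.literal_eval(all_content))
```

The trade-off: literal_eval cannot handle computed entries (string concatenation across names, comprehensions), which is why a tokenizer-driven collector like the one above is still needed to normalize the input first.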
def frame_to_sample(self, frame_index):
    """
    Return a tuple of sample indices: the first sample and the end
    (exclusive) of the frame with the given index.
    """
    start = frame_index * self.hop_size
    end = start + self.frame_size
    return start, end
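The frame-to-sample arithmetic is simple enough to state as a free function, with hop_size and frame_size passed explicitly rather than read from self (a sketch for illustration):

```python
def frame_to_sample(frame_index, hop_size, frame_size):
    # The start of each frame advances by hop_size per frame index;
    # the end is exclusive, so consecutive frames overlap when
    # hop_size < frame_size.
    start = frame_index * hop_size
    return start, start + frame_size
```

With hop_size=256 and frame_size=512, frame 2 spans samples [512, 1024) — overlapping frame 1, which spans [256, 768).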
def p_simple_command_element(p):
    '''simple_command_element : WORD
                              | ASSIGNMENT_WORD
                              | redirection'''
    if isinstance(p[1], ast.node):
        p[0] = [p[1]]
        return

    parserobj = p.context
    p[0] = [_expandword(parserobj, p.slice[1])]

    # change the word node to an assignment if necessary
    if p.slice[1].ttype == tokenizer.tokentype.ASSIGNMENT_WORD:
        p[0][0].kind = 'assignment'
def pmag_results_extract(res_file="pmag_results.txt", crit_file="",
                         spec_file="", age_file="", latex=False,
                         grade=False, WD="."):
    """
    Generate tab delimited output file(s) with result data.
    Save output files and return True if successful.

    Possible output files: Directions, Intensities, SiteNfo, Criteria,
    Specimens

    Optional Parameters (defaults are used if not specified)
    ----------
    res_file : name of pmag_results file (default is "pmag_results.txt")
    crit_file : name of criteria file (default is "pmag_criteria.txt")
    spec_file : name of specimen file (default is "pmag_specimens.txt")
    age_file : name of age file (default is "er_ages.txt")
    latex : boolean argument to output in LaTeX (default is False)
    WD : path to directory that contains input files and takes output
        (default is current directory, '.')
    """
    # format outfiles
    if latex:
        latex = 1
        file_type = '.tex'
    else:
        latex = 0
        file_type = '.txt'
    dir_path = os.path.realpath(WD)
    outfile = os.path.join(dir_path, 'Directions' + file_type)
    Ioutfile = os.path.join(dir_path, 'Intensities' + file_type)
    Soutfile = os.path.join(dir_path, 'SiteNfo' + file_type)
    Specout = os.path.join(dir_path, 'Specimens' + file_type)
    Critout = os.path.join(dir_path, 'Criteria' + file_type)
    # format infiles
    res_file = os.path.join(dir_path, res_file)
    if crit_file:
        crit_file = os.path.join(dir_path, crit_file)
    if spec_file:
        spec_file = os.path.join(dir_path, spec_file)
    else:
        grade = False
    # open output files
    f = open(outfile, 'w')
    sf = open(Soutfile, 'w')
    fI = open(Ioutfile, 'w')
    if crit_file:
        cr = open(Critout, 'w')
    # set up column headers
    Sites, file_type = pmag.magic_read(res_file)
    if crit_file:
        Crits, file_type = pmag.magic_read(crit_file)
    else:
        Crits = []
    SiteCols = ["Site", "Location", "Lat. (N)", "Long. (E)", "Age ",
                "Age sigma", "Units"]
    SiteKeys = ["er_site_names", "average_lat", "average_lon",
                "average_age", "average_age_sigma", "average_age_unit"]
    DirCols = ["Site", 'Comp.', "perc TC", "Dec.", "Inc.", "Nl", "Np",
               "k    ", "R", "a95", "PLat", "PLong"]
    DirKeys = ["er_site_names", "pole_comp_name", "tilt_correction",
               "average_dec", "average_inc", "average_n_lines",
               "average_n_planes", "average_k", "average_r",
               "average_alpha95", "vgp_lat", "vgp_lon"]
    IntCols = ["Site", "N", "B (uT)", "sigma", "sigma perc", "VADM",
               "VADM sigma"]
    IntKeys = ["er_site_names", "average_int_n", "average_int",
               "average_int_sigma", 'average_int_sigma_perc', "vadm",
               "vadm_sigma"]
    AllowedKeys = ['specimen_frac', 'specimen_scat', 'specimen_gap_max',
                   'measurement_step_min', 'measurement_step_max',
                   'measurement_step_unit', 'specimen_polarity',
                   'specimen_nrm', 'specimen_direction_type',
                   'specimen_comp_nmb', 'specimen_mad',
                   'specimen_alpha95', 'specimen_n',
                   'specimen_int_sigma', 'specimen_int_sigma_perc',
                   'specimen_int_rel_sigma',
                   'specimen_int_rel_sigma_perc', 'specimen_int_mad',
                   'specimen_int_n', 'specimen_w', 'specimen_q',
                   'specimen_f', 'specimen_fvds', 'specimen_b_sigma',
                   'specimen_b_beta', 'specimen_g', 'specimen_dang',
                   'specimen_md', 'specimen_ptrm', 'specimen_drat',
                   'specimen_drats', 'specimen_rsc',
                   'specimen_viscosity_index', 'specimen_magn_moment',
                   'specimen_magn_volume', 'specimen_magn_mass',
                   'specimen_int_ptrm_n', 'specimen_delta',
                   'specimen_theta', 'specimen_gamma', 'sample_polarity',
                   'sample_nrm', 'sample_direction_type',
                   'sample_comp_nmb', 'sample_sigma', 'sample_alpha95',
                   'sample_n', 'sample_n_lines', 'sample_n_planes',
                   'sample_k', 'sample_r', 'sample_tilt_correction',
                   'sample_int_sigma', 'sample_int_sigma_perc',
                   'sample_int_rel_sigma', 'sample_int_rel_sigma_perc',
                   'sample_int_n', 'sample_magn_moment',
                   'sample_magn_volume', 'sample_magn_mass',
                   'site_polarity', 'site_nrm', 'site_direction_type',
                   'site_comp_nmb', 'site_sigma', 'site_alpha95',
                   'site_n', 'site_n_lines', 'site_n_planes',
'site_k', 'site_r', 'site_tilt_correction', 'site_int_sigma', 'site_int_sigma_perc', 'site_int_rel_sigma', 'site_int_rel_sigma_perc', 'site_int_n', 'site_magn_moment', 'site_magn_volume', 'site_magn_mass', 'average_age_min', 'average_age_max', 'average_age_sigma', 'average_age_unit', 'average_sigma', 'average_alpha95', 'average_n', 'average_nn', 'average_k', 'average_r', 'average_int_sigma', 'average_int_rel_sigma', 'average_int_rel_sigma_perc', 'average_int_n', 'average_int_nn', 'vgp_dp', 'vgp_dm', 'vgp_sigma', 'vgp_alpha95', 'vgp_n', 'vdm_sigma', 'vdm_n', 'vadm_sigma', 'vadm_n'] if crit_file: crit = Crits[0] # get a list of useful keys for key in list(crit.keys()): if key not in AllowedKeys: del(crit[key]) for key in list(crit.keys()): if (not crit[key]) or (eval(crit[key]) > 1000) or (eval(crit[key]) == 0): # get rid of all blank or too big ones or too little ones del(crit[key]) CritKeys = list(crit.keys()) if spec_file: Specs, file_type = pmag.magic_read(spec_file) fsp = open(Specout, 'w') # including specimen intensities if desired SpecCols = ["Site", "Specimen", "B (uT)", "MAD", "Beta", "N", "Q", "DANG", "f-vds", "DRATS", "T (C)"] SpecKeys = ['er_site_name', 'er_specimen_name', 'specimen_int', 'specimen_int_mad', 'specimen_b_beta', 'specimen_int_n', 'specimen_q', 'specimen_dang', 'specimen_fvds', 'specimen_drats', 'trange'] Xtra = ['specimen_frac', 'specimen_scat', 'specimen_gmax'] if grade: SpecCols.append('Grade') SpecKeys.append('specimen_grade') for x in Xtra: # put in the new intensity keys if present if x in list(Specs[0].keys()): SpecKeys.append(x) newkey = "" for k in x.split('_')[1:]: newkey = newkey + k + '_' SpecCols.append(newkey.strip('_')) SpecCols.append('Corrections') SpecKeys.append('corrections') # these should be multiplied by 1e6 Micro = ['specimen_int', 'average_int', 'average_int_sigma'] Zeta = ['vadm', 'vadm_sigma'] # these should be multiplied by 1e21 # write out the header information for each output file if latex: # write out the 
latex header stuff sep = ' & ' end = '\\\\' f.write('\\documentclass{article}\n') f.write('\\usepackage[margin=1in]{geometry}\n') f.write('\\usepackage{longtable}\n') f.write('\\begin{document}\n') sf.write('\\documentclass{article}\n') sf.write('\\usepackage[margin=1in]{geometry}\n') sf.write('\\usepackage{longtable}\n') sf.write('\\begin{document}\n') fI.write('\\documentclass{article}\n') fI.write('\\usepackage[margin=1in]{geometry}\n') fI.write('\\usepackage{longtable}\n') fI.write('\\begin{document}\n') if crit_file: cr.write('\\documentclass{article}\n') cr.write('\\usepackage[margin=1in]{geometry}\n') cr.write('\\usepackage{longtable}\n') cr.write('\\begin{document}\n') if spec_file: fsp.write('\\documentclass{article}\n') fsp.write('\\usepackage[margin=1in]{geometry}\n') fsp.write('\\usepackage{longtable}\n') fsp.write('\\begin{document}\n') tabstring = '\\begin{longtable}{' fstring = tabstring for k in range(len(SiteCols)): fstring = fstring + 'r' sf.write(fstring + '}\n') sf.write('\hline\n') fstring = tabstring for k in range(len(DirCols)): fstring = fstring + 'r' f.write(fstring + '}\n') f.write('\hline\n') fstring = tabstring for k in range(len(IntCols)): fstring = fstring + 'r' fI.write(fstring + '}\n') fI.write('\hline\n') fstring = tabstring if crit_file: for k in range(len(CritKeys)): fstring = fstring + 'r' cr.write(fstring + '}\n') cr.write('\hline\n') if spec_file: fstring = tabstring for k in range(len(SpecCols)): fstring = fstring + 'r' fsp.write(fstring + '}\n') fsp.write('\hline\n') else: # just set the tab and line endings for tab delimited sep = ' \t ' end = '' # now write out the actual column headers Soutstring, Doutstring, Ioutstring, Spoutstring, Croutstring = "", "", "", "", "" for k in range(len(SiteCols)): Soutstring = Soutstring + SiteCols[k] + sep Soutstring = Soutstring.strip(sep) Soutstring = Soutstring + end + '\n' sf.write(Soutstring) for k in range(len(DirCols)): Doutstring = Doutstring + DirCols[k] + sep Doutstring = 
Doutstring.strip(sep) Doutstring = Doutstring + end + '\n' f.write(Doutstring) for k in range(len(IntCols)): Ioutstring = Ioutstring + IntCols[k] + sep Ioutstring = Ioutstring.strip(sep) Ioutstring = Ioutstring + end + '\n' fI.write(Ioutstring) if crit_file: for k in range(len(CritKeys)): Croutstring = Croutstring + CritKeys[k] + sep Croutstring = Croutstring.strip(sep) Croutstring = Croutstring + end + '\n' cr.write(Croutstring) if spec_file: for k in range(len(SpecCols)): Spoutstring = Spoutstring + SpecCols[k] + sep Spoutstring = Spoutstring.strip(sep) Spoutstring = Spoutstring + end + "\n" fsp.write(Spoutstring) if latex: # put in a horizontal line in latex file f.write('\hline\n') sf.write('\hline\n') fI.write('\hline\n') if crit_file: cr.write('\hline\n') if spec_file: fsp.write('\hline\n') # do criteria if crit_file: for crit in Crits: Croutstring = "" for key in CritKeys: Croutstring = Croutstring + crit[key] + sep Croutstring = Croutstring.strip(sep) + end cr.write(Croutstring + '\n') # do directions # get all results with VGPs VGPs = pmag.get_dictitem(Sites, 'vgp_lat', '', 'F') VGPs = pmag.get_dictitem(VGPs, 'data_type', 'i', 'T') # get site level stuff for site in VGPs: if len(site['er_site_names'].split(":")) == 1: if 'er_sample_names' not in list(site.keys()): site['er_sample_names'] = '' if 'pole_comp_name' not in list(site.keys()): site['pole_comp_name'] = "A" if 'average_nn' not in list(site.keys()) and 'average_n' in list(site.keys()): site['average_nn'] = site['average_n'] if 'average_n_lines' not in list(site.keys()): site['average_n_lines'] = site['average_nn'] if 'average_n_planes' not in list(site.keys()): site['average_n_planes'] = "" Soutstring, Doutstring = "", "" for key in SiteKeys: if key in list(site.keys()): Soutstring = Soutstring + site[key] + sep Soutstring = Soutstring.strip(sep) + end sf.write(Soutstring + '\n') for key in DirKeys: if key in list(site.keys()): Doutstring = Doutstring + site[key] + sep Doutstring = 
Doutstring.strip(sep) + end f.write(Doutstring + '\n') # now do intensities VADMs = pmag.get_dictitem(Sites, 'vadm', '', 'F') VADMs = pmag.get_dictitem(VADMs, 'data_type', 'i', 'T') for site in VADMs: # do results level stuff if site not in VGPs: Soutstring = "" for key in SiteKeys: if key in list(site.keys()): Soutstring = Soutstring + site[key] + sep else: Soutstring = Soutstring + " " + sep Soutstring = Soutstring.strip(sep) + end sf.write(Soutstring + '\n') if len(site['er_site_names'].split(":")) == 1 and site['data_type'] == 'i': if 'average_int_sigma_perc' not in list(site.keys()): site['average_int_sigma_perc'] = "0" if site["average_int_sigma"] == "": site["average_int_sigma"] = "0" if site["average_int_sigma_perc"] == "": site["average_int_sigma_perc"] = "0" if site["vadm"] == "": site["vadm"] = "0" if site["vadm_sigma"] == "": site["vadm_sigma"] = "0" for key in list(site.keys()): # reformat vadms, intensities if key in Micro: site[key] = '%7.1f' % (float(site[key]) * 1e6) if key in Zeta: site[key] = '%7.1f' % (float(site[key]) * 1e-21) outstring = "" for key in IntKeys: if key not in list(site.keys()): site[key] = "" outstring = outstring + site[key] + sep outstring = outstring.strip(sep) + end + '\n' fI.write(outstring) # VDMs=pmag.get_dictitem(Sites,'vdm','','F') # get non-blank VDMs # for site in VDMs: # do results level stuff # if len(site['er_site_names'].split(":"))==1: # if 'average_int_sigma_perc' not in site.keys():site['average_int_sigma_perc']="0" # if site["average_int_sigma"]=="":site["average_int_sigma"]="0" # if site["average_int_sigma_perc"]=="":site["average_int_sigma_perc"]="0" # if site["vadm"]=="":site["vadm"]="0" # if site["vadm_sigma"]=="":site["vadm_sigma"]="0" # for key in site.keys(): # reformat vadms, intensities # if key in Micro: site[key]='%7.1f'%(float(site[key])*1e6) # if key in Zeta: site[key]='%7.1f'%(float(site[key])*1e-21) # outstring="" # for key in IntKeys: # outstring=outstring+site[key]+sep # 
fI.write(outstring.strip(sep)+'\n') if spec_file: SpecsInts = pmag.get_dictitem(Specs, 'specimen_int', '', 'F') for spec in SpecsInts: spec['trange'] = '%i' % (int(float(spec['measurement_step_min']) - 273)) + \ '-' + '%i' % (int(float(spec['measurement_step_max']) - 273)) meths = spec['magic_method_codes'].split(':') corrections = '' for meth in meths: if 'DA' in meth: corrections = corrections + meth[3:] + ':' corrections = corrections.strip(':') if corrections.strip() == "": corrections = "None" spec['corrections'] = corrections outstring = "" for key in SpecKeys: if key in Micro: spec[key] = '%7.1f' % (float(spec[key]) * 1e6) if key in Zeta: spec[key] = '%7.1f' % (float(spec[key]) * 1e-21) outstring = outstring + spec[key] + sep fsp.write(outstring.strip(sep) + end + '\n') # if latex: # write out the tail stuff f.write('\hline\n') sf.write('\hline\n') fI.write('\hline\n') f.write('\end{longtable}\n') sf.write('\end{longtable}\n') fI.write('\end{longtable}\n') f.write('\end{document}\n') sf.write('\end{document}\n') fI.write('\end{document}\n') if spec_file: fsp.write('\hline\n') fsp.write('\end{longtable}\n') fsp.write('\end{document}\n') if crit_file: cr.write('\hline\n') cr.write('\end{longtable}\n') cr.write('\end{document}\n') f.close() sf.close() fI.close() print('data saved in: ', outfile, Ioutfile, Soutfile) outfiles = [outfile, Ioutfile, Soutfile] if spec_file: fsp.close() print('specimen data saved in: ', Specout) outfiles.append(Specout) if crit_file: cr.close() print('Selection criteria saved in: ', Critout) outfiles.append(Critout) return True, outfiles
Generate tab delimited output file(s) with result data. Save output files and return True if successful. Possible output files: Directions, Intensities, SiteNfo, Criteria, Specimens Optional Parameters (defaults are used if not specified) ---------- res_file : name of pmag_results file (default is "pmag_results.txt") crit_file : name of criteria file (default is "pmag_criteria.txt") spec_file : name of specimen file (default is "pmag_specimens.txt") age_file : name of age file (default is "er_ages.txt") latex : boolean argument to output in LaTeX (default is False) WD : path to directory that contains input files and takes output (default is current directory, '.')
Below is the instruction that describes the task: ### Input: Generate tab delimited output file(s) with result data. Save output files and return True if successful. Possible output files: Directions, Intensities, SiteNfo, Criteria, Specimens Optional Parameters (defaults are used if not specified) ---------- res_file : name of pmag_results file (default is "pmag_results.txt") crit_file : name of criteria file (default is "pmag_criteria.txt") spec_file : name of specimen file (default is "pmag_specimens.txt") age_file : name of age file (default is "er_ages.txt") latex : boolean argument to output in LaTeX (default is False) WD : path to directory that contains input files and takes output (default is current directory, '.') ### Response: def pmag_results_extract(res_file="pmag_results.txt", crit_file="", spec_file="", age_file="", latex=False, grade=False, WD="."): """ Generate tab delimited output file(s) with result data. Save output files and return True if successful. Possible output files: Directions, Intensities, SiteNfo, Criteria, Specimens Optional Parameters (defaults are used if not specified) ---------- res_file : name of pmag_results file (default is "pmag_results.txt") crit_file : name of criteria file (default is "pmag_criteria.txt") spec_file : name of specimen file (default is "pmag_specimens.txt") age_file : name of age file (default is "er_ages.txt") latex : boolean argument to output in LaTeX (default is False) WD : path to directory that contains input files and takes output (default is current directory, '.') """ # format outfiles if latex: latex = 1 file_type = '.tex' else: latex = 0 file_type = '.txt' dir_path = os.path.realpath(WD) outfile = os.path.join(dir_path, 'Directions' + file_type) Ioutfile = os.path.join(dir_path, 'Intensities' + file_type) Soutfile = os.path.join(dir_path, 'SiteNfo' + file_type) Specout = os.path.join(dir_path, 'Specimens' + file_type) Critout = os.path.join(dir_path, 'Criteria' + file_type) # format infiles
res_file = os.path.join(dir_path, res_file) if crit_file: crit_file = os.path.join(dir_path, crit_file) if spec_file: spec_file = os.path.join(dir_path, spec_file) else: grade = False # open output files f = open(outfile, 'w') sf = open(Soutfile, 'w') fI = open(Ioutfile, 'w') if crit_file: cr = open(Critout, 'w') # set up column headers Sites, file_type = pmag.magic_read(res_file) if crit_file: Crits, file_type = pmag.magic_read(crit_file) else: Crits = [] SiteCols = ["Site", "Location", "Lat. (N)", "Long. (E)", "Age ", "Age sigma", "Units"] SiteKeys = ["er_site_names", "average_lat", "average_lon", "average_age", "average_age_sigma", "average_age_unit"] DirCols = ["Site", 'Comp.', "perc TC", "Dec.", "Inc.", "Nl", "Np", "k ", "R", "a95", "PLat", "PLong"] DirKeys = ["er_site_names", "pole_comp_name", "tilt_correction", "average_dec", "average_inc", "average_n_lines", "average_n_planes", "average_k", "average_r", "average_alpha95", "vgp_lat", "vgp_lon"] IntCols = ["Site", "N", "B (uT)", "sigma", "sigma perc", "VADM", "VADM sigma"] IntKeys = ["er_site_names", "average_int_n", "average_int", "average_int_sigma", 'average_int_sigma_perc', "vadm", "vadm_sigma"] AllowedKeys = ['specimen_frac', 'specimen_scat', 'specimen_gap_max', 'measurement_step_min', 'measurement_step_max', 'measurement_step_unit', 'specimen_polarity', 'specimen_nrm', 'specimen_direction_type', 'specimen_comp_nmb', 'specimen_mad', 'specimen_alpha95', 'specimen_n', 'specimen_int_sigma', 'specimen_int_sigma_perc', 'specimen_int_rel_sigma', 'specimen_int_rel_sigma_perc', 'specimen_int_mad', 'specimen_int_n', 'specimen_w', 'specimen_q', 'specimen_f', 'specimen_fvds', 'specimen_b_sigma', 'specimen_b_beta', 'specimen_g', 'specimen_dang', 'specimen_md', 'specimen_ptrm', 'specimen_drat', 'specimen_drats', 'specimen_rsc', 'specimen_viscosity_index', 'specimen_magn_moment', 'specimen_magn_volume', 'specimen_magn_mass', 'specimen_int_ptrm_n', 'specimen_delta', 'specimen_theta', 'specimen_gamma', 
'sample_polarity', 'sample_nrm', 'sample_direction_type', 'sample_comp_nmb', 'sample_sigma', 'sample_alpha95', 'sample_n', 'sample_n_lines', 'sample_n_planes', 'sample_k', 'sample_r', 'sample_tilt_correction', 'sample_int_sigma', 'sample_int_sigma_perc', 'sample_int_rel_sigma', 'sample_int_rel_sigma_perc', 'sample_int_n', 'sample_magn_moment', 'sample_magn_volume', 'sample_magn_mass', 'site_polarity', 'site_nrm', 'site_direction_type', 'site_comp_nmb', 'site_sigma', 'site_alpha95', 'site_n', 'site_n_lines', 'site_n_planes', 'site_k', 'site_r', 'site_tilt_correction', 'site_int_sigma', 'site_int_sigma_perc', 'site_int_rel_sigma', 'site_int_rel_sigma_perc', 'site_int_n', 'site_magn_moment', 'site_magn_volume', 'site_magn_mass', 'average_age_min', 'average_age_max', 'average_age_sigma', 'average_age_unit', 'average_sigma', 'average_alpha95', 'average_n', 'average_nn', 'average_k', 'average_r', 'average_int_sigma', 'average_int_rel_sigma', 'average_int_rel_sigma_perc', 'average_int_n', 'average_int_nn', 'vgp_dp', 'vgp_dm', 'vgp_sigma', 'vgp_alpha95', 'vgp_n', 'vdm_sigma', 'vdm_n', 'vadm_sigma', 'vadm_n'] if crit_file: crit = Crits[0] # get a list of useful keys for key in list(crit.keys()): if key not in AllowedKeys: del(crit[key]) for key in list(crit.keys()): if (not crit[key]) or (eval(crit[key]) > 1000) or (eval(crit[key]) == 0): # get rid of all blank or too big ones or too little ones del(crit[key]) CritKeys = list(crit.keys()) if spec_file: Specs, file_type = pmag.magic_read(spec_file) fsp = open(Specout, 'w') # including specimen intensities if desired SpecCols = ["Site", "Specimen", "B (uT)", "MAD", "Beta", "N", "Q", "DANG", "f-vds", "DRATS", "T (C)"] SpecKeys = ['er_site_name', 'er_specimen_name', 'specimen_int', 'specimen_int_mad', 'specimen_b_beta', 'specimen_int_n', 'specimen_q', 'specimen_dang', 'specimen_fvds', 'specimen_drats', 'trange'] Xtra = ['specimen_frac', 'specimen_scat', 'specimen_gmax'] if grade: SpecCols.append('Grade') 
SpecKeys.append('specimen_grade') for x in Xtra: # put in the new intensity keys if present if x in list(Specs[0].keys()): SpecKeys.append(x) newkey = "" for k in x.split('_')[1:]: newkey = newkey + k + '_' SpecCols.append(newkey.strip('_')) SpecCols.append('Corrections') SpecKeys.append('corrections') # these should be multiplied by 1e6 Micro = ['specimen_int', 'average_int', 'average_int_sigma'] Zeta = ['vadm', 'vadm_sigma'] # these should be multiplied by 1e21 # write out the header information for each output file if latex: # write out the latex header stuff sep = ' & ' end = '\\\\' f.write('\\documentclass{article}\n') f.write('\\usepackage[margin=1in]{geometry}\n') f.write('\\usepackage{longtable}\n') f.write('\\begin{document}\n') sf.write('\\documentclass{article}\n') sf.write('\\usepackage[margin=1in]{geometry}\n') sf.write('\\usepackage{longtable}\n') sf.write('\\begin{document}\n') fI.write('\\documentclass{article}\n') fI.write('\\usepackage[margin=1in]{geometry}\n') fI.write('\\usepackage{longtable}\n') fI.write('\\begin{document}\n') if crit_file: cr.write('\\documentclass{article}\n') cr.write('\\usepackage[margin=1in]{geometry}\n') cr.write('\\usepackage{longtable}\n') cr.write('\\begin{document}\n') if spec_file: fsp.write('\\documentclass{article}\n') fsp.write('\\usepackage[margin=1in]{geometry}\n') fsp.write('\\usepackage{longtable}\n') fsp.write('\\begin{document}\n') tabstring = '\\begin{longtable}{' fstring = tabstring for k in range(len(SiteCols)): fstring = fstring + 'r' sf.write(fstring + '}\n') sf.write('\hline\n') fstring = tabstring for k in range(len(DirCols)): fstring = fstring + 'r' f.write(fstring + '}\n') f.write('\hline\n') fstring = tabstring for k in range(len(IntCols)): fstring = fstring + 'r' fI.write(fstring + '}\n') fI.write('\hline\n') fstring = tabstring if crit_file: for k in range(len(CritKeys)): fstring = fstring + 'r' cr.write(fstring + '}\n') cr.write('\hline\n') if spec_file: fstring = tabstring for k in 
range(len(SpecCols)): fstring = fstring + 'r' fsp.write(fstring + '}\n') fsp.write('\hline\n') else: # just set the tab and line endings for tab delimited sep = ' \t ' end = '' # now write out the actual column headers Soutstring, Doutstring, Ioutstring, Spoutstring, Croutstring = "", "", "", "", "" for k in range(len(SiteCols)): Soutstring = Soutstring + SiteCols[k] + sep Soutstring = Soutstring.strip(sep) Soutstring = Soutstring + end + '\n' sf.write(Soutstring) for k in range(len(DirCols)): Doutstring = Doutstring + DirCols[k] + sep Doutstring = Doutstring.strip(sep) Doutstring = Doutstring + end + '\n' f.write(Doutstring) for k in range(len(IntCols)): Ioutstring = Ioutstring + IntCols[k] + sep Ioutstring = Ioutstring.strip(sep) Ioutstring = Ioutstring + end + '\n' fI.write(Ioutstring) if crit_file: for k in range(len(CritKeys)): Croutstring = Croutstring + CritKeys[k] + sep Croutstring = Croutstring.strip(sep) Croutstring = Croutstring + end + '\n' cr.write(Croutstring) if spec_file: for k in range(len(SpecCols)): Spoutstring = Spoutstring + SpecCols[k] + sep Spoutstring = Spoutstring.strip(sep) Spoutstring = Spoutstring + end + "\n" fsp.write(Spoutstring) if latex: # put in a horizontal line in latex file f.write('\hline\n') sf.write('\hline\n') fI.write('\hline\n') if crit_file: cr.write('\hline\n') if spec_file: fsp.write('\hline\n') # do criteria if crit_file: for crit in Crits: Croutstring = "" for key in CritKeys: Croutstring = Croutstring + crit[key] + sep Croutstring = Croutstring.strip(sep) + end cr.write(Croutstring + '\n') # do directions # get all results with VGPs VGPs = pmag.get_dictitem(Sites, 'vgp_lat', '', 'F') VGPs = pmag.get_dictitem(VGPs, 'data_type', 'i', 'T') # get site level stuff for site in VGPs: if len(site['er_site_names'].split(":")) == 1: if 'er_sample_names' not in list(site.keys()): site['er_sample_names'] = '' if 'pole_comp_name' not in list(site.keys()): site['pole_comp_name'] = "A" if 'average_nn' not in list(site.keys()) and 
'average_n' in list(site.keys()): site['average_nn'] = site['average_n'] if 'average_n_lines' not in list(site.keys()): site['average_n_lines'] = site['average_nn'] if 'average_n_planes' not in list(site.keys()): site['average_n_planes'] = "" Soutstring, Doutstring = "", "" for key in SiteKeys: if key in list(site.keys()): Soutstring = Soutstring + site[key] + sep Soutstring = Soutstring.strip(sep) + end sf.write(Soutstring + '\n') for key in DirKeys: if key in list(site.keys()): Doutstring = Doutstring + site[key] + sep Doutstring = Doutstring.strip(sep) + end f.write(Doutstring + '\n') # now do intensities VADMs = pmag.get_dictitem(Sites, 'vadm', '', 'F') VADMs = pmag.get_dictitem(VADMs, 'data_type', 'i', 'T') for site in VADMs: # do results level stuff if site not in VGPs: Soutstring = "" for key in SiteKeys: if key in list(site.keys()): Soutstring = Soutstring + site[key] + sep else: Soutstring = Soutstring + " " + sep Soutstring = Soutstring.strip(sep) + end sf.write(Soutstring + '\n') if len(site['er_site_names'].split(":")) == 1 and site['data_type'] == 'i': if 'average_int_sigma_perc' not in list(site.keys()): site['average_int_sigma_perc'] = "0" if site["average_int_sigma"] == "": site["average_int_sigma"] = "0" if site["average_int_sigma_perc"] == "": site["average_int_sigma_perc"] = "0" if site["vadm"] == "": site["vadm"] = "0" if site["vadm_sigma"] == "": site["vadm_sigma"] = "0" for key in list(site.keys()): # reformat vadms, intensities if key in Micro: site[key] = '%7.1f' % (float(site[key]) * 1e6) if key in Zeta: site[key] = '%7.1f' % (float(site[key]) * 1e-21) outstring = "" for key in IntKeys: if key not in list(site.keys()): site[key] = "" outstring = outstring + site[key] + sep outstring = outstring.strip(sep) + end + '\n' fI.write(outstring) # VDMs=pmag.get_dictitem(Sites,'vdm','','F') # get non-blank VDMs # for site in VDMs: # do results level stuff # if len(site['er_site_names'].split(":"))==1: # if 'average_int_sigma_perc' not in 
site.keys():site['average_int_sigma_perc']="0" # if site["average_int_sigma"]=="":site["average_int_sigma"]="0" # if site["average_int_sigma_perc"]=="":site["average_int_sigma_perc"]="0" # if site["vadm"]=="":site["vadm"]="0" # if site["vadm_sigma"]=="":site["vadm_sigma"]="0" # for key in site.keys(): # reformat vadms, intensities # if key in Micro: site[key]='%7.1f'%(float(site[key])*1e6) # if key in Zeta: site[key]='%7.1f'%(float(site[key])*1e-21) # outstring="" # for key in IntKeys: # outstring=outstring+site[key]+sep # fI.write(outstring.strip(sep)+'\n') if spec_file: SpecsInts = pmag.get_dictitem(Specs, 'specimen_int', '', 'F') for spec in SpecsInts: spec['trange'] = '%i' % (int(float(spec['measurement_step_min']) - 273)) + \ '-' + '%i' % (int(float(spec['measurement_step_max']) - 273)) meths = spec['magic_method_codes'].split(':') corrections = '' for meth in meths: if 'DA' in meth: corrections = corrections + meth[3:] + ':' corrections = corrections.strip(':') if corrections.strip() == "": corrections = "None" spec['corrections'] = corrections outstring = "" for key in SpecKeys: if key in Micro: spec[key] = '%7.1f' % (float(spec[key]) * 1e6) if key in Zeta: spec[key] = '%7.1f' % (float(spec[key]) * 1e-21) outstring = outstring + spec[key] + sep fsp.write(outstring.strip(sep) + end + '\n') # if latex: # write out the tail stuff f.write('\hline\n') sf.write('\hline\n') fI.write('\hline\n') f.write('\end{longtable}\n') sf.write('\end{longtable}\n') fI.write('\end{longtable}\n') f.write('\end{document}\n') sf.write('\end{document}\n') fI.write('\end{document}\n') if spec_file: fsp.write('\hline\n') fsp.write('\end{longtable}\n') fsp.write('\end{document}\n') if crit_file: cr.write('\hline\n') cr.write('\end{longtable}\n') cr.write('\end{document}\n') f.close() sf.close() fI.close() print('data saved in: ', outfile, Ioutfile, Soutfile) outfiles = [outfile, Ioutfile, Soutfile] if spec_file: fsp.close() print('specimen data saved in: ', Specout) 
outfiles.append(Specout) if crit_file: cr.close() print('Selection criteria saved in: ', Critout) outfiles.append(Critout) return True, outfiles
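The `Micro`/`Zeta` unit scaling used throughout `pmag_results_extract` above can be exercised in isolation. This is an illustrative sketch only; the helper name `scale_value` is not from the source — the function itself applies the same `'%7.1f'` formatting inline:

```python
# Fields listed in Micro are stored in SI units and reported in micro-units
# (e.g. tesla -> microtesla); fields in Zeta (VADMs) are reported in 1e21 Am^2.
Micro = ['specimen_int', 'average_int', 'average_int_sigma']
Zeta = ['vadm', 'vadm_sigma']

def scale_value(key, value):
    """Format one result value the way the extractor does (hypothetical helper)."""
    if key in Micro:
        return '%7.1f' % (float(value) * 1e6)
    if key in Zeta:
        return '%7.1f' % (float(value) * 1e-21)
    return value

print(scale_value('average_int', '4.8e-05'))   # an intensity in tesla -> microtesla
print(scale_value('vadm', '8.2e22'))           # a VADM in Am^2 -> units of 1e21 Am^2
```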
def discard(self, val):
    """
    Remove the first occurrence of *val*.

    If *val* is not a member, does nothing.
    """
    _maxes = self._maxes

    if not _maxes:
        return

    key = self._key(val)
    pos = bisect_left(_maxes, key)

    if pos == len(_maxes):
        return

    _keys = self._keys
    _lists = self._lists
    idx = bisect_left(_keys[pos], key)
    len_keys = len(_keys)
    len_sublist = len(_keys[pos])

    while True:
        if _keys[pos][idx] != key:
            return
        if _lists[pos][idx] == val:
            self._delete(pos, idx)
            return
        idx += 1
        if idx == len_sublist:
            pos += 1
            if pos == len_keys:
                return
            len_sublist = len(_keys[pos])
            idx = 0

Remove the first occurrence of *val*. If *val* is not a member, does nothing.

Below is the instruction that describes the task:
### Input:
Remove the first occurrence of *val*. If *val* is not a member, does nothing.
### Response:
def discard(self, val):
    """
    Remove the first occurrence of *val*.

    If *val* is not a member, does nothing.
    """
    _maxes = self._maxes

    if not _maxes:
        return

    key = self._key(val)
    pos = bisect_left(_maxes, key)

    if pos == len(_maxes):
        return

    _keys = self._keys
    _lists = self._lists
    idx = bisect_left(_keys[pos], key)
    len_keys = len(_keys)
    len_sublist = len(_keys[pos])

    while True:
        if _keys[pos][idx] != key:
            return
        if _lists[pos][idx] == val:
            self._delete(pos, idx)
            return
        idx += 1
        if idx == len_sublist:
            pos += 1
            if pos == len_keys:
                return
            len_sublist = len(_keys[pos])
            idx = 0
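The key-then-value scan in `discard` above — bisect to the first item with a matching key, then walk forward through the run of equal keys looking for the first value that compares equal — can be sketched against a flat sorted list with the stdlib `bisect` module. The helper name `discard_first` and the flat (non-sublist) storage are assumptions for illustration; the original walks the same run across the sorted-list's internal sublists:

```python
from bisect import bisect_left

def discard_first(items, keys, val, key=len):
    """Remove the first item whose key equals key(val) and which equals val;
    do nothing if absent (flat analogue of the sorted-key-list discard)."""
    k = key(val)
    idx = bisect_left(keys, k)
    # scan forward through the run of equal keys, as discard() does per sublist
    while idx < len(keys) and keys[idx] == k:
        if items[idx] == val:
            del items[idx]
            del keys[idx]
            return
        idx += 1

words = ['a', 'to', 'of', 'ox', 'cat']   # sorted by len(), insertion-stable
keys = [len(w) for w in words]           # parallel key list: [1, 2, 2, 2, 3]
discard_first(words, keys, 'of')         # removes the first 'of' among len-2 items
discard_first(words, keys, 'zz')         # not a member: no-op
print(words)  # -> ['a', 'to', 'ox', 'cat']
```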
def deepcopy(self):
    """
    Create a deep copy of the Heatmaps object.

    Returns
    -------
    imgaug.HeatmapsOnImage
        Deep copy.
    """
    return HeatmapsOnImage(
        self.get_arr(),
        shape=self.shape,
        min_value=self.min_value,
        max_value=self.max_value)

Create a deep copy of the Heatmaps object.

Returns
-------
imgaug.HeatmapsOnImage
    Deep copy.

Below is the instruction that describes the task:
### Input:
Create a deep copy of the Heatmaps object.

Returns
-------
imgaug.HeatmapsOnImage
    Deep copy.
### Response:
def deepcopy(self):
    """
    Create a deep copy of the Heatmaps object.

    Returns
    -------
    imgaug.HeatmapsOnImage
        Deep copy.
    """
    return HeatmapsOnImage(
        self.get_arr(),
        shape=self.shape,
        min_value=self.min_value,
        max_value=self.max_value)
def _unassembled_reads2_out_file_name(self):
    """Checks if file name is set for reads2 output. Returns absolute path."""
    if self.Parameters['-2'].isOn():
        unassembled_reads2 = self._absolute(
            str(self.Parameters['-2'].Value))
    else:
        raise ValueError("No reads2 (flag -2) output path specified")
    return unassembled_reads2

Checks if file name is set for reads2 output. Returns absolute path.

Below is the instruction that describes the task:
### Input:
Checks if file name is set for reads2 output. Returns absolute path.
### Response:
def _unassembled_reads2_out_file_name(self):
    """Checks if file name is set for reads2 output. Returns absolute path."""
    if self.Parameters['-2'].isOn():
        unassembled_reads2 = self._absolute(
            str(self.Parameters['-2'].Value))
    else:
        raise ValueError("No reads2 (flag -2) output path specified")
    return unassembled_reads2
def matchlist_by_account( self, region, encrypted_account_id, queue=None, begin_time=None, end_time=None, begin_index=None, end_index=None, season=None, champion=None, ): """ Get matchlist for ranked games played on given account ID and platform ID and filtered using given filter parameters, if any A number of optional parameters are provided for filtering. It is up to the caller to ensure that the combination of filter parameters provided is valid for the requested account, otherwise, no matches may be returned. Note that if either beginIndex or endIndex are specified, then both must be specified and endIndex must be greater than beginIndex. If endTime is specified, but not beginTime, then beginTime is effectively the start of the account's match history. If beginTime is specified, but not endTime, then endTime is effectively the current time. Note that endTime should be greater than beginTime if both are specified, although there is no maximum limit on their range. :param string region: The region to execute this request on :param string encrypted_account_id: The account ID. :param Set[int] queue: Set of queue IDs for which to filtering matchlist. :param long begin_time: The begin time to use for filtering matchlist specified as epoch milliseconds. :param long end_time: The end time to use for filtering matchlist specified as epoch milliseconds. :param int begin_index: The begin index to use for filtering matchlist. :param int end_index: The end index to use for filtering matchlist. :param Set[int] season: Set of season IDs for which to filtering matchlist. :param Set[int] champion: Set of champion IDs for which to filtering matchlist. 
:returns: MatchlistDto """ url, query = MatchApiV4Urls.matchlist_by_account( region=region, encrypted_account_id=encrypted_account_id, queue=queue, beginTime=begin_time, endTime=end_time, beginIndex=begin_index, endIndex=end_index, season=season, champion=champion, ) return self._raw_request(self.matchlist_by_account.__name__, region, url, query)
Get matchlist for ranked games played on given account ID and platform ID and filtered using given filter parameters, if any A number of optional parameters are provided for filtering. It is up to the caller to ensure that the combination of filter parameters provided is valid for the requested account, otherwise, no matches may be returned. Note that if either beginIndex or endIndex are specified, then both must be specified and endIndex must be greater than beginIndex. If endTime is specified, but not beginTime, then beginTime is effectively the start of the account's match history. If beginTime is specified, but not endTime, then endTime is effectively the current time. Note that endTime should be greater than beginTime if both are specified, although there is no maximum limit on their range. :param string region: The region to execute this request on :param string encrypted_account_id: The account ID. :param Set[int] queue: Set of queue IDs for which to filtering matchlist. :param long begin_time: The begin time to use for filtering matchlist specified as epoch milliseconds. :param long end_time: The end time to use for filtering matchlist specified as epoch milliseconds. :param int begin_index: The begin index to use for filtering matchlist. :param int end_index: The end index to use for filtering matchlist. :param Set[int] season: Set of season IDs for which to filtering matchlist. :param Set[int] champion: Set of champion IDs for which to filtering matchlist. :returns: MatchlistDto
def iteritems(self):
    """
    Iterate through the property names and values of this CIM instance.

    Each iteration item is a tuple of the property name (in the original
    lexical case) and the property value.

    The order of properties is preserved.
    """
    for key, val in self.properties.iteritems():
        yield (key, val.value)
Iterate through the property names and values of this CIM instance. Each iteration item is a tuple of the property name (in the original lexical case) and the property value. The order of properties is preserved.
def get_jobs(self, project, **params):
    """
    Gets jobs from project, filtered by parameters

    :param project: project (repository name) to query data for
    :param params: keyword arguments to filter results
    """
    return self._get_json_list(self.JOBS_ENDPOINT, project, **params)
Gets jobs from project, filtered by parameters

:param project: project (repository name) to query data for
:param params: keyword arguments to filter results
def fastknn(self, data: ['SASdata', str] = None,
            display: str = None,
            displayout: str = None,
            id: str = None,
            input: [str, list, dict] = None,
            output: [str, bool, 'SASdata'] = None,
            procopts: str = None,
            stmtpassthrough: str = None,
            **kwargs: dict) -> 'SASresults':
    """
    Python method to call the FASTKNN procedure

    Documentation link:
    https://go.documentation.sas.com/?docsetId=casml&docsetTarget=casml_fastknn_toc.htm&docsetVersion=8.3&locale=en

    :param data: SASdata object or string. This parameter is required.
    :param display: The display variable can only be a string type.
    :param displayout: The displayout variable can only be a string type.
    :param id: The id variable can only be a string type.
    :param input: The input variable can be a string, list or dict type. It refers to the dependent, y, or label variable.
    :param output: The output variable can be a string, boolean or SASdata type. The member name for a boolean is "_output".
    :param procopts: The procopts variable is a generic option available for advanced use. It can only be a string type.
    :param stmtpassthrough: The stmtpassthrough variable is a generic option available for advanced use. It can only be a string type.
    :return: SAS Result Object
    """
Python method to call the FASTKNN procedure

Documentation link:
https://go.documentation.sas.com/?docsetId=casml&docsetTarget=casml_fastknn_toc.htm&docsetVersion=8.3&locale=en

:param data: SASdata object or string. This parameter is required.
:param display: The display variable can only be a string type.
:param displayout: The displayout variable can only be a string type.
:param id: The id variable can only be a string type.
:param input: The input variable can be a string, list or dict type. It refers to the dependent, y, or label variable.
:param output: The output variable can be a string, boolean or SASdata type. The member name for a boolean is "_output".
:param procopts: The procopts variable is a generic option available for advanced use. It can only be a string type.
:param stmtpassthrough: The stmtpassthrough variable is a generic option available for advanced use. It can only be a string type.
:return: SAS Result Object
def cnst_AT(self, X):
    r"""Compute :math:`A^T \mathbf{x}` where :math:`A \mathbf{x}` is a
    component of ADMM problem constraint. In this case
    :math:`A^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I)
    \mathbf{x}`.
    """
    return np.sum(self.cnst_A0T(X), axis=-1) + self.cnst_A1T(X)
Compute :math:`A^T \mathbf{x}` where :math:`A \mathbf{x}` is a component of ADMM problem constraint. In this case :math:`A^T \mathbf{x} = (\Gamma_0^T \;\; \Gamma_1^T \;\; \ldots \;\; I) \mathbf{x}`.
def companyDF(symbol, token='', version=''):
    '''Company reference data

    https://iexcloud.io/docs/api/#company

    Updates at 4am and 5am UTC every day

    Args:
        symbol (string); Ticker to request
        token (string); Access token
        version (string); API version

    Returns:
        DataFrame: result
    '''
    c = company(symbol, token, version)
    df = _companyToDF(c)
    return df
Company reference data

https://iexcloud.io/docs/api/#company

Updates at 4am and 5am UTC every day

Args:
    symbol (string); Ticker to request
    token (string); Access token
    version (string); API version

Returns:
    DataFrame: result
def list_contents(self):
    """List the contents of this directory

    :return: A LsInfo object that contains directories and files
    :rtype: :class:`~.LsInfo` or :class:`~.ErrorInfo`

    Here is an example usage::

        # let dirinfo be a DirectoryInfo object
        ldata = dirinfo.list_contents()
        if isinstance(ldata, ErrorInfo):
            # Do some error handling
            logger.warn("Error listing file info: (%s) %s",
                        ldata.errno, ldata.message)
        # It's of type LsInfo
        else:
            # Look at all the files
            for finfo in ldata.files:
                logger.info("Found file %s of size %s", finfo.path, finfo.size)
            # Look at all the directories
            for dinfo in ldata.directories:
                logger.info("Found directory %s of last modified %s",
                            dinfo.path, dinfo.last_modified)
    """
    target = DeviceTarget(self.device_id)
    return self._fssapi.list_files(target, self.path)[self.device_id]
List the contents of this directory

:return: A LsInfo object that contains directories and files
:rtype: :class:`~.LsInfo` or :class:`~.ErrorInfo`

Here is an example usage::

    # let dirinfo be a DirectoryInfo object
    ldata = dirinfo.list_contents()
    if isinstance(ldata, ErrorInfo):
        # Do some error handling
        logger.warn("Error listing file info: (%s) %s",
                    ldata.errno, ldata.message)
    # It's of type LsInfo
    else:
        # Look at all the files
        for finfo in ldata.files:
            logger.info("Found file %s of size %s", finfo.path, finfo.size)
        # Look at all the directories
        for dinfo in ldata.directories:
            logger.info("Found directory %s of last modified %s",
                        dinfo.path, dinfo.last_modified)
def _get_value_from_match(self, key, match):
    """
    Gets the value of the property in the given MatchObject.

    Args:
        key (str): Key of the property looked-up.
        match (MatchObject): The matched property.

    Return:
        The discovered value, as a boolean, float or string.
    """
    value = match.groups(1)[0]
    clean_value = str(value).lstrip().rstrip()
    if clean_value == 'true':
        self._log.info('Got value of "%s" as boolean true.', key)
        return True
    if clean_value == 'false':
        self._log.info('Got value of "%s" as boolean false.', key)
        return False
    try:
        float_value = float(clean_value)
        self._log.info('Got value of "%s" as float "%f".', key, float_value)
        return float_value
    except ValueError:
        self._log.info('Got value of "%s" as string "%s".', key, clean_value)
        return clean_value
Gets the value of the property in the given MatchObject.

Args:
    key (str): Key of the property looked-up.
    match (MatchObject): The matched property.

Return:
    The discovered value, as a boolean, float or string.
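The coercion order matters here: the literal strings 'true'/'false' are checked first, then a float parse, and only then does the value fall through as a plain string. A standalone sketch of that ladder (the helper name is illustrative, not part of the class):

```python
def coerce_scalar(raw):
    """Coerce a raw matched value to bool, float or str, in that priority order."""
    text = str(raw).strip()
    # Boolean literals win over everything else.
    if text == 'true':
        return True
    if text == 'false':
        return False
    # Anything float() accepts (ints included) becomes a float.
    try:
        return float(text)
    except ValueError:
        return text  # fall back to the cleaned string
```

Note that numeric-looking input such as '42' comes back as the float 42.0, not an int, matching the behavior of the method above.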
def _ions(self, f):
    """
    This is a generator that returns the mzs being measured during
    each time segment, one segment at a time.
    """
    outside_pos = f.tell()
    doff = find_offset(f, 4 * b'\xff' + 'HapsSearch'.encode('ascii'))
    # actual end of prev section is 34 bytes before, but assume 1 rec
    f.seek(doff - 62)
    # seek backwards to find the FFFFFFFF header
    while True:
        f.seek(f.tell() - 8)
        if f.read(4) == 4 * b'\xff':
            break
    f.seek(f.tell() + 64)
    nsegments = struct.unpack('<I', f.read(4))[0]
    for _ in range(nsegments):
        # first 32 bytes are segment name, rest are something else?
        f.seek(f.tell() + 96)
        nions = struct.unpack('<I', f.read(4))[0]
        ions = []
        for _ in range(nions):
            # TODO: check that itype is actually a SIM/full scan switch
            i1, i2, _, _, _, _, itype, _ = struct.unpack('<' + 8 * 'I', f.read(32))
            if itype == 0:
                # SIM
                ions.append(i1 / 100.)
            else:
                # full scan
                # TODO: this might be a little hacky?
                # ideally we would need to know n for this, e.g.:
                # ions += np.linspace(i1 / 100, i2 / 100, n).tolist()
                ions += np.arange(i1 / 100., i2 / 100. + 1, 1).tolist()
        # save the file position and load the position
        # that we were at before we started this code
        inside_pos = f.tell()
        f.seek(outside_pos)
        yield ions
        outside_pos = f.tell()
        f.seek(inside_pos)
    f.seek(outside_pos)
This is a generator that returns the mzs being measured during each time segment, one segment at a time.
def create(self, count):
    """Create a pattern of the specified length."""
    space, self.space = tee(self.space)
    limit = reduce(mul, map(len, self.sets)) * self.position
    logging.debug('limit: %s', limit)
    if limit >= count:
        return ''.join(islice(space, count))
    else:
        raise IndexError('{count} Overflows {sets}!'.format(
            count=count, sets=self.sets))
Create a pattern of the specified length.
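The `tee`/`islice` idiom in the method above duplicates the pattern iterator so that slicing a prefix does not consume the stored `self.space` stream. A minimal standalone sketch of that idiom (the function and the cyclic charset are illustrative):

```python
from itertools import cycle, islice, tee

def take_pattern(source, count):
    """Return the first `count` characters of `source` without consuming it."""
    # tee gives two independent iterators over the same underlying stream;
    # advancing `view` does not advance the returned `rest`.
    view, rest = tee(source)
    return ''.join(islice(view, count)), rest

pattern, stream = take_pattern(cycle('Aa0'), 5)  # pattern == 'Aa0Aa'
```

After the call, `stream` still starts at the beginning of the pattern space, which is what lets the method be called repeatedly against the same `self.space`.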
def generate(self, pattern=None):
    """
    Generates and returns a random name as a list of strings.
    """
    lst = self._lists[pattern]
    while True:
        result = lst[self._randrange(lst.length)]
        # 1. Check that there are no duplicates
        # 2. Check that there are no duplicate prefixes
        # 3. Check max slug length
        n = len(result)
        if (self._ensure_unique and len(set(result)) != n or
                self._check_prefix and
                len(set(x[:self._check_prefix] for x in result)) != n or
                self._max_slug_length and
                sum(len(x) for x in result) + n - 1 > self._max_slug_length):
            continue
        return result
Generates and returns a random name as a list of strings.
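The method is rejection sampling: draw a candidate, then reject it if any of the three checks fail. The acceptance predicate can be sketched on its own; `prefix_len` and `max_slug_length` are illustrative defaults, not the class's actual settings:

```python
def is_valid_name(words, prefix_len=3, max_slug_length=20):
    """Accept a candidate word list only if it passes the three rejection checks."""
    n = len(words)
    if len(set(words)) != n:                          # 1. no duplicate words
        return False
    if len(set(w[:prefix_len] for w in words)) != n:  # 2. no duplicate prefixes
        return False
    # 3. slug length: words joined by single-character separators
    return sum(len(w) for w in words) + n - 1 <= max_slug_length
```

The slug-length term `sum(len(w)) + n - 1` accounts for one separator (e.g. a hyphen) between each pair of adjacent words.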
def _connect(self):
    "Connects a socket to the server using options defined in `config`."
    self.socket = socket.socket()
    self.socket.connect((self.config['host'], self.config['port']))
    self.cmd("NICK %s" % self.config['nick'])
    self.cmd("USER %s %s bla :%s" % (self.config['ident'],
                                     self.config['host'],
                                     self.config['realname']))
Connects a socket to the server using options defined in `config`.
def dispatch_event(self, event_type, *args):
    """
    Internal event handling method.

    This method extends the behavior inherited from
    :py:meth:`pyglet.window.Window.dispatch_event()` by calling the
    various :py:meth:`handleEvent()` methods.

    By default, :py:meth:`Peng.handleEvent()`\ , :py:meth:`handleEvent()`
    and :py:meth:`Menu.handleEvent()` are called in this order to handle
    events.

    Note that some events may not be handled by all handlers during early
    startup.
    """
    super(PengWindow, self).dispatch_event(event_type, *args)
    try:
        p = self.peng
        m = self.menu
    except AttributeError:
        # To prevent early startup errors
        if hasattr(self, "peng") and self.peng.cfg["debug.events.logerr"]:
            print("Error:")
            traceback.print_exc()
        return
    p.handleEvent(event_type, args, self)
    self.handleEvent(event_type, args)
    m.handleEvent(event_type, args)
Internal event handling method. This method extends the behavior inherited from :py:meth:`pyglet.window.Window.dispatch_event()` by calling the various :py:meth:`handleEvent()` methods. By default, :py:meth:`Peng.handleEvent()`\ , :py:meth:`handleEvent()` and :py:meth:`Menu.handleEvent()` are called in this order to handle events. Note that some events may not be handled by all handlers during early startup.
def resolveFilenameConflicts(self, dialog=True):
    """Goes through list of DPs to make sure that their destination names
    do not clash. Applies new names. Returns True if some conflicts were
    resolved. If dialog is True, shows confirmation dialog."""
    resolved = self.wdplv.resolveFilenameConflicts()
    if resolved and dialog:
        QMessageBox.warning(self, "Filename conflicts",
                            """<P><NOBR>PURR has found duplicate destination filenames
                            among your data products.</NOBR> This is not allowed, so some
                            filenames have been adjusted to avoid name clashes. Please
                            review the changes before saving this entry.</P>""",
                            QMessageBox.Ok, 0)
    return resolved
Goes through list of DPs to make sure that their destination names do not clash. Applies new names. Returns True if some conflicts were resolved. If dialog is True, shows confirmation dialog.
def profile(profile_name):
    '''
    Activate specified profile

    CLI Example:

    .. code-block:: bash

        salt '*' tuned.profile virtual-guest
    '''
    # run tuned-adm with the profile specified
    result = __salt__['cmd.retcode']('tuned-adm profile {0}'.format(profile_name))
    if int(result) != 0:
        return False
    return '{0}'.format(profile_name)
Activate specified profile

CLI Example:

.. code-block:: bash

    salt '*' tuned.profile virtual-guest
def fit_transform(self, Z):
    """Fit LSI model to X and perform dimensionality reduction on X.

    Parameters
    ----------
    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        Training data.

    Returns
    -------
    X_new : array, shape (n_samples, n_components)
        Reduced version of X. This will always be a dense array.
    """
    X = Z[:, 'X'] if isinstance(Z, DictRDD) else Z
    check_rdd(X, (sp.spmatrix, np.ndarray))
    if self.algorithm == "em":
        X = X.persist()  # boosting iterative svm
        Sigma, V = svd_em(X, k=self.n_components, maxiter=self.n_iter,
                          tol=self.tol, compute_u=False,
                          seed=self.random_state)
        self.components_ = V
        X.unpersist()
        return self.transform(Z)
    else:
        # TODO: raise warning non distributed
        return super(SparkTruncatedSVD, self).fit_transform(X.tosparse())
Fit LSI model to X and perform dimensionality reduction on X.

Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training data.

Returns
-------
X_new : array, shape (n_samples, n_components)
    Reduced version of X. This will always be a dense array.
def get_generated_cols(X_original, X_transformed, to_transform):
    """
    Returns a list of the generated/transformed columns.

    Arguments:
        X_original: df
            the original (input) DataFrame.
        X_transformed: df
            the transformed (current) DataFrame.
        to_transform: [str]
            a list of columns that were transformed (as in the original
            DataFrame), commonly self.cols.

    Output:
        a list of columns that were transformed (as in the current
        DataFrame).
    """
    original_cols = list(X_original.columns)
    if len(to_transform) > 0:
        [original_cols.remove(c) for c in to_transform]
    current_cols = list(X_transformed.columns)
    if len(original_cols) > 0:
        [current_cols.remove(c) for c in original_cols]
    return current_cols
Returns a list of the generated/transformed columns.

Arguments:
    X_original: df
        the original (input) DataFrame.
    X_transformed: df
        the transformed (current) DataFrame.
    to_transform: [str]
        a list of columns that were transformed (as in the original
        DataFrame), commonly self.cols.

Output:
    a list of columns that were transformed (as in the current DataFrame).
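The bookkeeping above is a two-step set difference over column names: drop the transformed inputs from the original column list, then remove the surviving (untouched) columns from the current list, leaving only the generated ones. A plain-list sketch of the same logic, independent of pandas (the function name is illustrative):

```python
def generated_cols(original_cols, current_cols, transformed):
    """Order-preserving diff: columns in `current_cols` that were generated."""
    # Columns that existed before and were NOT transformed survive unchanged.
    untouched = [c for c in original_cols if c not in transformed]
    # Whatever remains in the current frame must have been generated.
    return [c for c in current_cols if c not in untouched]

generated_cols(['a', 'b', 'c'], ['a', 'b_0', 'b_1', 'c'], ['b'])
# returns ['b_0', 'b_1']
```

Here one-hot encoding column 'b' into 'b_0'/'b_1' leaves 'a' and 'c' untouched, so only the two new dummy columns are reported.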
def write_groovy_script_and_configs( filename, content, job_configs, view_configs=None): """Write out the groovy script and configs to file. This writes the reconfigure script to the file location and places the expanded configs in subdirectories 'view_configs' / 'job_configs' that the script can then access when run. """ with open(filename, 'w') as h: h.write(content) if view_configs: view_config_dir = os.path.join(os.path.dirname(filename), 'view_configs') if not os.path.isdir(view_config_dir): os.makedirs(view_config_dir) for config_name, config_body in view_configs.items(): config_filename = os.path.join(view_config_dir, config_name) with open(config_filename, 'w') as config_fh: config_fh.write(config_body) job_config_dir = os.path.join(os.path.dirname(filename), 'job_configs') if not os.path.isdir(job_config_dir): os.makedirs(job_config_dir) # prefix each config file with a serial number to maintain order format_str = '%0' + str(len(str(len(job_configs)))) + 'd' i = 0 for config_name, config_body in job_configs.items(): i += 1 config_filename = os.path.join( job_config_dir, format_str % i + ' ' + config_name) with open(config_filename, 'w') as config_fh: config_fh.write(config_body)
Write out the groovy script and configs to file. This writes the reconfigure script to the file location and places the expanded configs in subdirectories 'view_configs' / 'job_configs' that the script can then access when run.
Below is the instruction that describes the task: ### Input: Write out the groovy script and configs to file. This writes the reconfigure script to the file location and places the expanded configs in subdirectories 'view_configs' / 'job_configs' that the script can then access when run. ### Response: def write_groovy_script_and_configs( filename, content, job_configs, view_configs=None): """Write out the groovy script and configs to file. This writes the reconfigure script to the file location and places the expanded configs in subdirectories 'view_configs' / 'job_configs' that the script can then access when run. """ with open(filename, 'w') as h: h.write(content) if view_configs: view_config_dir = os.path.join(os.path.dirname(filename), 'view_configs') if not os.path.isdir(view_config_dir): os.makedirs(view_config_dir) for config_name, config_body in view_configs.items(): config_filename = os.path.join(view_config_dir, config_name) with open(config_filename, 'w') as config_fh: config_fh.write(config_body) job_config_dir = os.path.join(os.path.dirname(filename), 'job_configs') if not os.path.isdir(job_config_dir): os.makedirs(job_config_dir) # prefix each config file with a serial number to maintain order format_str = '%0' + str(len(str(len(job_configs)))) + 'd' i = 0 for config_name, config_body in job_configs.items(): i += 1 config_filename = os.path.join( job_config_dir, format_str % i + ' ' + config_name) with open(config_filename, 'w') as config_fh: config_fh.write(config_body)
def et2utc(et, formatStr, prec, lenout=_default_len_out): """ Convert an input time from ephemeris seconds past J2000 to Calendar, Day-of-Year, or Julian Date format, UTC. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/et2utc_c.html :param et: Input epoch, given in ephemeris seconds past J2000. :type et: float :param formatStr: Format of output epoch. :type formatStr: str :param prec: Digits of precision in fractional seconds or days. :type prec: int :param lenout: The length of the output string plus 1. :type lenout: int :return: Output time string in UTC :rtype: str """ et = ctypes.c_double(et) prec = ctypes.c_int(prec) lenout = ctypes.c_int(lenout) formatStr = stypes.stringToCharP(formatStr) utcstr = stypes.stringToCharP(lenout) libspice.et2utc_c(et, formatStr, prec, lenout, utcstr) return stypes.toPythonString(utcstr)
Convert an input time from ephemeris seconds past J2000 to Calendar, Day-of-Year, or Julian Date format, UTC. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/et2utc_c.html :param et: Input epoch, given in ephemeris seconds past J2000. :type et: float :param formatStr: Format of output epoch. :type formatStr: str :param prec: Digits of precision in fractional seconds or days. :type prec: int :param lenout: The length of the output string plus 1. :type lenout: int :return: Output time string in UTC :rtype: str
Below is the instruction that describes the task: ### Input: Convert an input time from ephemeris seconds past J2000 to Calendar, Day-of-Year, or Julian Date format, UTC. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/et2utc_c.html :param et: Input epoch, given in ephemeris seconds past J2000. :type et: float :param formatStr: Format of output epoch. :type formatStr: str :param prec: Digits of precision in fractional seconds or days. :type prec: int :param lenout: The length of the output string plus 1. :type lenout: int :return: Output time string in UTC :rtype: str ### Response: def et2utc(et, formatStr, prec, lenout=_default_len_out): """ Convert an input time from ephemeris seconds past J2000 to Calendar, Day-of-Year, or Julian Date format, UTC. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/et2utc_c.html :param et: Input epoch, given in ephemeris seconds past J2000. :type et: float :param formatStr: Format of output epoch. :type formatStr: str :param prec: Digits of precision in fractional seconds or days. :type prec: int :param lenout: The length of the output string plus 1. :type lenout: int :return: Output time string in UTC :rtype: str """ et = ctypes.c_double(et) prec = ctypes.c_int(prec) lenout = ctypes.c_int(lenout) formatStr = stypes.stringToCharP(formatStr) utcstr = stypes.stringToCharP(lenout) libspice.et2utc_c(et, formatStr, prec, lenout, utcstr) return stypes.toPythonString(utcstr)
def _check_region_for_parsing(number, default_region): """Checks to see that the region code used is valid, or if it is not valid, that the number to parse starts with a + symbol so that we can attempt to infer the region from the number. Returns False if it cannot use the region provided and the region cannot be inferred. """ if not _is_valid_region_code(default_region): # If the number is None or empty, we can't infer the region. if number is None or len(number) == 0: return False match = _PLUS_CHARS_PATTERN.match(number) if match is None: return False return True
Checks to see that the region code used is valid, or if it is not valid, that the number to parse starts with a + symbol so that we can attempt to infer the region from the number. Returns False if it cannot use the region provided and the region cannot be inferred.
Below is the instruction that describes the task: ### Input: Checks to see that the region code used is valid, or if it is not valid, that the number to parse starts with a + symbol so that we can attempt to infer the region from the number. Returns False if it cannot use the region provided and the region cannot be inferred. ### Response: def _check_region_for_parsing(number, default_region): """Checks to see that the region code used is valid, or if it is not valid, that the number to parse starts with a + symbol so that we can attempt to infer the region from the number. Returns False if it cannot use the region provided and the region cannot be inferred. """ if not _is_valid_region_code(default_region): # If the number is None or empty, we can't infer the region. if number is None or len(number) == 0: return False match = _PLUS_CHARS_PATTERN.match(number) if match is None: return False return True
def start(self, request, application, extra_roles=None): """ Continue the state machine at first state. """ # Get the authentication of the current user roles = self._get_roles_for_request(request, application) if extra_roles is not None: roles.update(extra_roles) # Ensure current user is authenticated. If user isn't applicant, # leader, delegate or admin, they probably shouldn't be here. if 'is_authorised' not in roles: return HttpResponseForbidden('<h1>Access Denied</h1>') # Go to first state. return self._next(request, application, roles, self._first_state)
Continue the state machine at first state.
Below is the instruction that describes the task: ### Input: Continue the state machine at first state. ### Response: def start(self, request, application, extra_roles=None): """ Continue the state machine at first state. """ # Get the authentication of the current user roles = self._get_roles_for_request(request, application) if extra_roles is not None: roles.update(extra_roles) # Ensure current user is authenticated. If user isn't applicant, # leader, delegate or admin, they probably shouldn't be here. if 'is_authorised' not in roles: return HttpResponseForbidden('<h1>Access Denied</h1>') # Go to first state. return self._next(request, application, roles, self._first_state)
def migrator(state): """Tweaks will be lost for Cleverbot and its conversations.""" for tweak in ('tweak1', 'tweak2', 'tweak3'): del state[0][tweak] for convo in state[1]: if tweak in convo: del convo[tweak] return state
Tweaks will be lost for Cleverbot and its conversations.
Below is the instruction that describes the task: ### Input: Tweaks will be lost for Cleverbot and its conversations. ### Response: def migrator(state): """Tweaks will be lost for Cleverbot and its conversations.""" for tweak in ('tweak1', 'tweak2', 'tweak3'): del state[0][tweak] for convo in state[1]: if tweak in convo: del convo[tweak] return state
def formfield_for_foreignkey_helper(inline, *args, **kwargs): """ The implementation for ``RelatedContentInline.formfield_for_foreignkey`` This takes all of the ``args`` and ``kwargs`` from the call to ``formfield_for_foreignkey`` and operates on this. It returns the updated ``args`` and ``kwargs`` to be passed on to ``super``. This is solely an implementation detail as it's easier to test a function than to provide all of the expectations that the ``GenericTabularInline`` has. """ db_field = args[0] if db_field.name != "related_type": return args, kwargs initial_filter = getattr(settings, RELATED_TYPE_INITIAL_FILTER, False) if "initial" not in kwargs and initial_filter: # TODO: handle gracefully if unable to load and in non-debug initial = RelatedType.objects.get(**initial_filter).pk kwargs["initial"] = initial return args, kwargs
The implementation for ``RelatedContentInline.formfield_for_foreignkey`` This takes all of the ``args`` and ``kwargs`` from the call to ``formfield_for_foreignkey`` and operates on this. It returns the updated ``args`` and ``kwargs`` to be passed on to ``super``. This is solely an implementation detail as it's easier to test a function than to provide all of the expectations that the ``GenericTabularInline`` has.
Below is the instruction that describes the task: ### Input: The implementation for ``RelatedContentInline.formfield_for_foreignkey`` This takes all of the ``args`` and ``kwargs`` from the call to ``formfield_for_foreignkey`` and operates on this. It returns the updated ``args`` and ``kwargs`` to be passed on to ``super``. This is solely an implementation detail as it's easier to test a function than to provide all of the expectations that the ``GenericTabularInline`` has. ### Response: def formfield_for_foreignkey_helper(inline, *args, **kwargs): """ The implementation for ``RelatedContentInline.formfield_for_foreignkey`` This takes all of the ``args`` and ``kwargs`` from the call to ``formfield_for_foreignkey`` and operates on this. It returns the updated ``args`` and ``kwargs`` to be passed on to ``super``. This is solely an implementation detail as it's easier to test a function than to provide all of the expectations that the ``GenericTabularInline`` has. """ db_field = args[0] if db_field.name != "related_type": return args, kwargs initial_filter = getattr(settings, RELATED_TYPE_INITIAL_FILTER, False) if "initial" not in kwargs and initial_filter: # TODO: handle gracefully if unable to load and in non-debug initial = RelatedType.objects.get(**initial_filter).pk kwargs["initial"] = initial return args, kwargs
def _validate_sections(cls, sections): """Validates sections types and uniqueness.""" names = [] for section in sections: if not hasattr(section, 'name'): raise ConfigurationError('`sections` attribute requires a list of Section') name = section.name if name in names: raise ConfigurationError('`%s` section name must be unique' % name) names.append(name)
Validates sections types and uniqueness.
Below is the instruction that describes the task: ### Input: Validates sections types and uniqueness. ### Response: def _validate_sections(cls, sections): """Validates sections types and uniqueness.""" names = [] for section in sections: if not hasattr(section, 'name'): raise ConfigurationError('`sections` attribute requires a list of Section') name = section.name if name in names: raise ConfigurationError('`%s` section name must be unique' % name) names.append(name)
def _get_firmware_update_element(self): """Get the url for firmware update :returns: firmware update url :raises: Missing resource error on missing url """ fw_update_action = self._actions.update_firmware if not fw_update_action: raise (sushy.exceptions. MissingActionError(action='#UpdateService.SimpleUpdate', resource=self._path)) return fw_update_action
Get the url for firmware update :returns: firmware update url :raises: Missing resource error on missing url
Below is the instruction that describes the task: ### Input: Get the url for firmware update :returns: firmware update url :raises: Missing resource error on missing url ### Response: def _get_firmware_update_element(self): """Get the url for firmware update :returns: firmware update url :raises: Missing resource error on missing url """ fw_update_action = self._actions.update_firmware if not fw_update_action: raise (sushy.exceptions. MissingActionError(action='#UpdateService.SimpleUpdate', resource=self._path)) return fw_update_action
def hangup_all_calls(self): """REST Hangup All Live Calls Helper """ path = '/' + self.api_version + '/HangupAllCalls/' method = 'POST' return self.request(path, method)
REST Hangup All Live Calls Helper
Below is the instruction that describes the task: ### Input: REST Hangup All Live Calls Helper ### Response: def hangup_all_calls(self): """REST Hangup All Live Calls Helper """ path = '/' + self.api_version + '/HangupAllCalls/' method = 'POST' return self.request(path, method)
def HexEscape(self, string, match, **unused_kwargs): """Converts a hex escaped string.""" logging.debug('HexEscape matched {0:s}.'.format(string)) hex_string = match.group(1) try: hex_string = binascii.unhexlify(hex_string) hex_string = codecs.decode(hex_string, 'utf-8') self.string += hex_string except (TypeError, binascii.Error): raise errors.ParseError('Invalid hex escape {0!s}.'.format(hex_string))
Converts a hex escaped string.
Below is the instruction that describes the task: ### Input: Converts a hex escaped string. ### Response: def HexEscape(self, string, match, **unused_kwargs): """Converts a hex escaped string.""" logging.debug('HexEscape matched {0:s}.'.format(string)) hex_string = match.group(1) try: hex_string = binascii.unhexlify(hex_string) hex_string = codecs.decode(hex_string, 'utf-8') self.string += hex_string except (TypeError, binascii.Error): raise errors.ParseError('Invalid hex escape {0!s}.'.format(hex_string))
def warning(self, text): """ Posts a warning message adding a timestamp and logging level to it for both file and console handlers. Logger uses a redraw rate because of console flickering. That means it will not draw new messages or progress at the very time they are being logged but their timestamp will be captured at the right time. Logger will redraw at a given time period AND when new messages or progress are logged. If you still want to force redraw immediately (may produce flickering) then call 'flush' method. :param text: The text to log into file and console. """ self.queue.put(dill.dumps(LogMessageCommand(text=text, level=logging.WARNING)))
Posts a warning message adding a timestamp and logging level to it for both file and console handlers. Logger uses a redraw rate because of console flickering. That means it will not draw new messages or progress at the very time they are being logged but their timestamp will be captured at the right time. Logger will redraw at a given time period AND when new messages or progress are logged. If you still want to force redraw immediately (may produce flickering) then call 'flush' method. :param text: The text to log into file and console.
Below is the instruction that describes the task: ### Input: Posts a warning message adding a timestamp and logging level to it for both file and console handlers. Logger uses a redraw rate because of console flickering. That means it will not draw new messages or progress at the very time they are being logged but their timestamp will be captured at the right time. Logger will redraw at a given time period AND when new messages or progress are logged. If you still want to force redraw immediately (may produce flickering) then call 'flush' method. :param text: The text to log into file and console. ### Response: def warning(self, text): """ Posts a warning message adding a timestamp and logging level to it for both file and console handlers. Logger uses a redraw rate because of console flickering. That means it will not draw new messages or progress at the very time they are being logged but their timestamp will be captured at the right time. Logger will redraw at a given time period AND when new messages or progress are logged. If you still want to force redraw immediately (may produce flickering) then call 'flush' method. :param text: The text to log into file and console. """ self.queue.put(dill.dumps(LogMessageCommand(text=text, level=logging.WARNING)))
def update_rds_databases(self): """Update list of RDS Databases for the account / region Returns: `None` """ self.log.info('Updating RDS Databases for {} / {}'.format( self.account, self.region )) # All RDS resources are polled via a Lambda collector in a central account rds_collector_account = AWSAccount.get(self.rds_collector_account) rds_session = get_aws_session(rds_collector_account) # Existing RDS resources come from database existing_rds_dbs = RDSInstance.get_all(self.account, self.region) try: # Special session pinned to a single account for Lambda invocation so we # don't have to manage lambdas in every account & region lambda_client = rds_session.client('lambda', region_name=self.rds_collector_region) # The AWS Config Lambda will collect all the non-compliant resources for all regions # within the account input_payload = json.dumps({"account_id": self.account.account_number, "region": self.region, "role": self.rds_role, "config_rule_name": self.rds_config_rule_name }).encode('utf-8') response = lambda_client.invoke(FunctionName=self.rds_function_name, InvocationType='RequestResponse', Payload=input_payload ) response_payload = json.loads(response['Payload'].read().decode('utf-8')) if response_payload['success']: rds_dbs = response_payload['data'] if rds_dbs: for db_instance in rds_dbs: tags = {t['Key']: t['Value'] for t in db_instance['tags'] or {}} properties = { 'tags': tags, 'metrics': None, 'engine': db_instance['engine'], 'creation_date': db_instance['creation_date'] } if db_instance['resource_name'] in existing_rds_dbs: rds = existing_rds_dbs[db_instance['resource_name']] if rds.update(db_instance, properties): self.log.debug('Change detected for RDS instance {}/{} ' .format(db_instance['resource_name'], properties)) else: RDSInstance.create( db_instance['resource_name'], account_id=self.account.account_id, location=db_instance['region'], properties=properties, tags=tags ) # Removal of RDS instances rk = set() erk = set() for database in rds_dbs: rk.add(database['resource_name']) for existing in existing_rds_dbs.keys(): erk.add(existing) for resource_id in erk - rk: db.session.delete(existing_rds_dbs[resource_id].resource) self.log.debug('Removed RDS instances {}/{}'.format( self.account.account_name, resource_id )) db.session.commit() else: self.log.error('RDS Lambda Execution Failed / {} / {} / {}'. format(self.account.account_name, self.region, response_payload)) except Exception as e: self.log.exception('There was a problem during RDS collection for {}/{}/{}'.format( self.account.account_name, self.region, e )) db.session.rollback()
Update list of RDS Databases for the account / region Returns: `None`
Below is the instruction that describes the task: ### Input: Update list of RDS Databases for the account / region Returns: `None` ### Response: def update_rds_databases(self): """Update list of RDS Databases for the account / region Returns: `None` """ self.log.info('Updating RDS Databases for {} / {}'.format( self.account, self.region )) # All RDS resources are polled via a Lambda collector in a central account rds_collector_account = AWSAccount.get(self.rds_collector_account) rds_session = get_aws_session(rds_collector_account) # Existing RDS resources come from database existing_rds_dbs = RDSInstance.get_all(self.account, self.region) try: # Special session pinned to a single account for Lambda invocation so we # don't have to manage lambdas in every account & region lambda_client = rds_session.client('lambda', region_name=self.rds_collector_region) # The AWS Config Lambda will collect all the non-compliant resources for all regions # within the account input_payload = json.dumps({"account_id": self.account.account_number, "region": self.region, "role": self.rds_role, "config_rule_name": self.rds_config_rule_name }).encode('utf-8') response = lambda_client.invoke(FunctionName=self.rds_function_name, InvocationType='RequestResponse', Payload=input_payload ) response_payload = json.loads(response['Payload'].read().decode('utf-8')) if response_payload['success']: rds_dbs = response_payload['data'] if rds_dbs: for db_instance in rds_dbs: tags = {t['Key']: t['Value'] for t in db_instance['tags'] or {}} properties = { 'tags': tags, 'metrics': None, 'engine': db_instance['engine'], 'creation_date': db_instance['creation_date'] } if db_instance['resource_name'] in existing_rds_dbs: rds = existing_rds_dbs[db_instance['resource_name']] if rds.update(db_instance, properties): self.log.debug('Change detected for RDS instance {}/{} ' .format(db_instance['resource_name'], properties)) else: RDSInstance.create( db_instance['resource_name'], account_id=self.account.account_id, location=db_instance['region'], properties=properties, tags=tags ) # Removal of RDS instances rk = set() erk = set() for database in rds_dbs: rk.add(database['resource_name']) for existing in existing_rds_dbs.keys(): erk.add(existing) for resource_id in erk - rk: db.session.delete(existing_rds_dbs[resource_id].resource) self.log.debug('Removed RDS instances {}/{}'.format( self.account.account_name, resource_id )) db.session.commit() else: self.log.error('RDS Lambda Execution Failed / {} / {} / {}'. format(self.account.account_name, self.region, response_payload)) except Exception as e: self.log.exception('There was a problem during RDS collection for {}/{}/{}'.format( self.account.account_name, self.region, e )) db.session.rollback()
def func_str(func, args=[], kwargs={}, type_aliases=[], packed=False, packkw=None, truncate=False): """ string representation of function definition Returns: str: a representation of func with args, kwargs, and type_aliases Args: func (function): args (list): argument values (default = []) kwargs (dict): kwargs values (default = {}) type_aliases (list): (default = []) packed (bool): (default = False) packkw (None): (default = None) Returns: str: func_str CommandLine: python -m utool.util_str --exec-func_str Example: >>> # ENABLE_DOCTEST >>> from utool.util_str import * # NOQA >>> func = byte_str >>> args = [1024, 'MB'] >>> kwargs = dict(precision=2) >>> type_aliases = [] >>> packed = False >>> packkw = None >>> _str = func_str(func, args, kwargs, type_aliases, packed, packkw) >>> result = _str >>> print(result) byte_str(1024, 'MB', precision=2) """ import utool as ut # if truncate: # truncatekw = {'maxlen': 20} # else: truncatekw = {} argrepr_list = ([] if args is None else ut.get_itemstr_list(args, nl=False, truncate=truncate, truncatekw=truncatekw)) kwrepr_list = ([] if kwargs is None else ut.dict_itemstr_list(kwargs, explicit=True, nl=False, truncate=truncate, truncatekw=truncatekw)) repr_list = argrepr_list + kwrepr_list argskwargs_str = ', '.join(repr_list) _str = '%s(%s)' % (meta_util_six.get_funcname(func), argskwargs_str) if packed: packkw_ = dict(textwidth=80, nlprefix=' ', break_words=False) if packkw is not None: # merge in the caller-supplied pack options (was a no-op self-update bug) packkw_.update(packkw) _str = packstr(_str, **packkw_) return _str
string representation of function definition Returns: str: a representation of func with args, kwargs, and type_aliases Args: func (function): args (list): argument values (default = []) kwargs (dict): kwargs values (default = {}) type_aliases (list): (default = []) packed (bool): (default = False) packkw (None): (default = None) Returns: str: func_str CommandLine: python -m utool.util_str --exec-func_str Example: >>> # ENABLE_DOCTEST >>> from utool.util_str import * # NOQA >>> func = byte_str >>> args = [1024, 'MB'] >>> kwargs = dict(precision=2) >>> type_aliases = [] >>> packed = False >>> packkw = None >>> _str = func_str(func, args, kwargs, type_aliases, packed, packkw) >>> result = _str >>> print(result) byte_str(1024, 'MB', precision=2)
Below is the instruction that describes the task: ### Input: string representation of function definition Returns: str: a representation of func with args, kwargs, and type_aliases Args: func (function): args (list): argument values (default = []) kwargs (dict): kwargs values (default = {}) type_aliases (list): (default = []) packed (bool): (default = False) packkw (None): (default = None) Returns: str: func_str CommandLine: python -m utool.util_str --exec-func_str Example: >>> # ENABLE_DOCTEST >>> from utool.util_str import * # NOQA >>> func = byte_str >>> args = [1024, 'MB'] >>> kwargs = dict(precision=2) >>> type_aliases = [] >>> packed = False >>> packkw = None >>> _str = func_str(func, args, kwargs, type_aliases, packed, packkw) >>> result = _str >>> print(result) byte_str(1024, 'MB', precision=2) ### Response: def func_str(func, args=[], kwargs={}, type_aliases=[], packed=False, packkw=None, truncate=False): """ string representation of function definition Returns: str: a representation of func with args, kwargs, and type_aliases Args: func (function): args (list): argument values (default = []) kwargs (dict): kwargs values (default = {}) type_aliases (list): (default = []) packed (bool): (default = False) packkw (None): (default = None) Returns: str: func_str CommandLine: python -m utool.util_str --exec-func_str Example: >>> # ENABLE_DOCTEST >>> from utool.util_str import * # NOQA >>> func = byte_str >>> args = [1024, 'MB'] >>> kwargs = dict(precision=2) >>> type_aliases = [] >>> packed = False >>> packkw = None >>> _str = func_str(func, args, kwargs, type_aliases, packed, packkw) >>> result = _str >>> print(result) byte_str(1024, 'MB', precision=2) """ import utool as ut # if truncate: # truncatekw = {'maxlen': 20} # else: truncatekw = {} argrepr_list = ([] if args is None else ut.get_itemstr_list(args, nl=False, truncate=truncate, truncatekw=truncatekw)) kwrepr_list = ([] if kwargs is None else ut.dict_itemstr_list(kwargs, explicit=True, nl=False, truncate=truncate, truncatekw=truncatekw)) repr_list = argrepr_list + kwrepr_list argskwargs_str = ', '.join(repr_list) _str = '%s(%s)' % (meta_util_six.get_funcname(func), argskwargs_str) if packed: packkw_ = dict(textwidth=80, nlprefix=' ', break_words=False) if packkw is not None: packkw_.update(packkw) _str = packstr(_str, **packkw_) return _str
def generate_username(self, user_class): """ Generate a new username for a user """ m = getattr(user_class, 'generate_username', None) if m: return m() else: max_length = user_class._meta.get_field( self.username_field).max_length return uuid.uuid4().hex[:max_length]
Generate a new username for a user
Below is the instruction that describes the task: ### Input: Generate a new username for a user ### Response: def generate_username(self, user_class): """ Generate a new username for a user """ m = getattr(user_class, 'generate_username', None) if m: return m() else: max_length = user_class._meta.get_field( self.username_field).max_length return uuid.uuid4().hex[:max_length]
async def async_start( cls, middleware: typing.Union[typing.Iterable, Middleware] = None, loop=None, after_start=None, before_stop=None, **kwargs): """ Start an async spider :param middleware: customize middleware or a list of middleware :param loop: :param after_start: hook :param before_stop: hook :return: """ loop = loop or asyncio.get_event_loop() spider_ins = cls(middleware=middleware, loop=loop, is_async_start=True) await spider_ins._start( after_start=after_start, before_stop=before_stop)
Start an async spider :param middleware: customize middleware or a list of middleware :param loop: :param after_start: hook :param before_stop: hook :return:
Below is the instruction that describes the task: ### Input: Start an async spider :param middleware: customize middleware or a list of middleware :param loop: :param after_start: hook :param before_stop: hook :return: ### Response: async def async_start( cls, middleware: typing.Union[typing.Iterable, Middleware] = None, loop=None, after_start=None, before_stop=None, **kwargs): """ Start an async spider :param middleware: customize middleware or a list of middleware :param loop: :param after_start: hook :param before_stop: hook :return: """ loop = loop or asyncio.get_event_loop() spider_ins = cls(middleware=middleware, loop=loop, is_async_start=True) await spider_ins._start( after_start=after_start, before_stop=before_stop)
def _dict_from_lines(lines, key_nums, sep=None): """ Helper function to parse formatted text structured like: value1 value2 ... sep key1, key2 ... key_nums is a list giving the number of keys for each line. 0 if line should be skipped. sep is a string denoting the character that separates the keys from the value (None if no separator is present). Returns: dict{key1 : value1, key2 : value2, ...} Raises: ValueError if parsing fails. """ if is_string(lines): lines = [lines] if not isinstance(key_nums, collections.abc.Iterable): key_nums = [key_nums] if len(lines) != len(key_nums): err_msg = "lines = %s\n key_num = %s" % (str(lines), str(key_nums)) raise ValueError(err_msg) kwargs = Namespace() for (i, nk) in enumerate(key_nums): if nk == 0: continue line = lines[i] tokens = [t.strip() for t in line.split()] values, keys = tokens[:nk], "".join(tokens[nk:]) # Sanitize keys: In some case we might get strings in the form: foo[,bar] # (str.replace returns a new string, so the result must be reassigned) keys = keys.replace("[", "").replace("]", "") keys = keys.split(",") if sep is not None: check = keys[0][0] if check != sep: raise ValueError("Expecting separator %s, got %s" % (sep, check)) keys[0] = keys[0][1:] if len(values) != len(keys): msg = "line: %s\n len(keys) != len(value)\nkeys: %s\n values: %s" % (line, keys, values) raise ValueError(msg) kwargs.update(zip(keys, values)) return kwargs
Helper function to parse formatted text structured like: value1 value2 ... sep key1, key2 ... key_nums is a list giving the number of keys for each line. 0 if line should be skipped. sep is a string denoting the character that separates the keys from the value (None if no separator is present). Returns: dict{key1 : value1, key2 : value2, ...} Raises: ValueError if parsing fails.
Below is the instruction that describes the task: ### Input: Helper function to parse formatted text structured like: value1 value2 ... sep key1, key2 ... key_nums is a list giving the number of keys for each line. 0 if line should be skipped. sep is a string denoting the character that separates the keys from the value (None if no separator is present). Returns: dict{key1 : value1, key2 : value2, ...} Raises: ValueError if parsing fails. ### Response: def _dict_from_lines(lines, key_nums, sep=None): """ Helper function to parse formatted text structured like: value1 value2 ... sep key1, key2 ... key_nums is a list giving the number of keys for each line. 0 if line should be skipped. sep is a string denoting the character that separates the keys from the value (None if no separator is present). Returns: dict{key1 : value1, key2 : value2, ...} Raises: ValueError if parsing fails. """ if is_string(lines): lines = [lines] if not isinstance(key_nums, collections.abc.Iterable): key_nums = [key_nums] if len(lines) != len(key_nums): err_msg = "lines = %s\n key_num = %s" % (str(lines), str(key_nums)) raise ValueError(err_msg) kwargs = Namespace() for (i, nk) in enumerate(key_nums): if nk == 0: continue line = lines[i] tokens = [t.strip() for t in line.split()] values, keys = tokens[:nk], "".join(tokens[nk:]) # Sanitize keys: In some case we might get strings in the form: foo[,bar] keys = keys.replace("[", "").replace("]", "") keys = keys.split(",") if sep is not None: check = keys[0][0] if check != sep: raise ValueError("Expecting separator %s, got %s" % (sep, check)) keys[0] = keys[0][1:] if len(values) != len(keys): msg = "line: %s\n len(keys) != len(value)\nkeys: %s\n values: %s" % (line, keys, values) raise ValueError(msg) kwargs.update(zip(keys, values)) return kwargs
def new(self, bootstrap_with=None, use_timer=False, with_proof=False):
    """
    Actual constructor of the solver.
    """
    if not self.lingeling:
        self.lingeling = pysolvers.lingeling_new()

        if bootstrap_with:
            for clause in bootstrap_with:
                self.add_clause(clause)

        self.use_timer = use_timer
        self.call_time = 0.0  # time spent for the last call to oracle
        self.accu_time = 0.0  # time accumulated for all calls to oracle

        if with_proof:
            self.prfile = tempfile.TemporaryFile()
            pysolvers.lingeling_tracepr(self.lingeling, self.prfile)
Actual constructor of the solver.
def page(self, status=values.unset, date_created_after=values.unset,
         date_created_before=values.unset, room_sid=values.unset,
         page_token=values.unset, page_number=values.unset,
         page_size=values.unset):
    """
    Retrieve a single page of CompositionInstance records from the API.
    Request is executed immediately

    :param CompositionInstance.Status status: Only show Compositions with the given status.
    :param datetime date_created_after: Only show Compositions created on or after this ISO8601 date-time with timezone.
    :param datetime date_created_before: Only show Compositions created before this ISO8601 date-time with timezone.
    :param unicode room_sid: Only show Compositions with the given Room SID.
    :param str page_token: PageToken provided by the API
    :param int page_number: Page Number, this value is simply for client state
    :param int page_size: Number of records to return, defaults to 50

    :returns: Page of CompositionInstance
    :rtype: twilio.rest.video.v1.composition.CompositionPage
    """
    params = values.of({
        'Status': status,
        'DateCreatedAfter': serialize.iso8601_datetime(date_created_after),
        'DateCreatedBefore': serialize.iso8601_datetime(date_created_before),
        'RoomSid': room_sid,
        'PageToken': page_token,
        'Page': page_number,
        'PageSize': page_size,
    })

    response = self._version.page(
        'GET',
        self._uri,
        params=params,
    )

    return CompositionPage(self._version, response, self._solution)
Retrieve a single page of CompositionInstance records from the API. Request is executed immediately :param CompositionInstance.Status status: Only show Compositions with the given status. :param datetime date_created_after: Only show Compositions created on or after this ISO8601 date-time with timezone. :param datetime date_created_before: Only show Compositions created before this ISO8601 date-time with timezone. :param unicode room_sid: Only show Compositions with the given Room SID. :param str page_token: PageToken provided by the API :param int page_number: Page Number, this value is simply for client state :param int page_size: Number of records to return, defaults to 50 :returns: Page of CompositionInstance :rtype: twilio.rest.video.v1.composition.CompositionPage
def round_value(val, unc=None, unc_down=None, method="publication"):
    """
    Rounds a number *val* with a single symmetric uncertainty *unc* or asymmetric
    uncertainties *unc* (interpreted as *up*) and *unc_down*, and calculates the
    orders of their magnitudes. They both can be a float or a list of floats for
    simultaneous evaluation. When *val* is a :py:class:`Number` instance, its
    combined uncertainty is used instead.

    Returns a 3-tuple containing:

    - The string representation of the central value.
    - The string representations of the uncertainties in a list. For the symmetric
      case, this list contains only one element.
    - The decimal magnitude.

    Examples:

    .. code-block:: python

        round_value(1.23, 0.456)           # -> ("123", ["46"], -2)
        round_value(1.23, 0.456, 0.987)    # -> ("123", ["46", "99"], -2)
        round_value(1.23, [0.456, 0.312])  # -> ("123", [["456", "312"]], -3)

        vals = np.array([1.23, 4.56])
        uncs = np.array([0.45678, 0.078])
        round_value(vals, uncs)  # -> (["1230", "4560"], [["457", "78"]], -3)
    """
    if isinstance(val, Number):
        unc, unc_down = val.get_uncertainty()
        val = val.nominal
    elif unc is None:
        raise ValueError("unc must be set when val is not a Number instance")

    # prepare unc values
    asym = unc_down is not None
    unc_up = unc
    if not asym:
        unc_down = unc_up

    if not is_numpy(val):
        # treat as lists for simultaneous rounding when not numpy arrays
        passed_list = isinstance(unc_up, (list, tuple)) or isinstance(unc_down, (list, tuple))
        unc_up = make_list(unc_up)
        unc_down = make_list(unc_down)

        # sanity checks
        if len(unc_up) != len(unc_down):
            raise ValueError("uncertainties should have same length when passed as lists")
        elif any(unc < 0 for unc in unc_up):
            raise ValueError("up uncertainties must be positive: {}".format(unc_up))
        elif any(unc < 0 for unc in unc_down):
            raise ValueError("down uncertainties must be positive: {}".format(unc_down))

        # to determine the precision, use the uncertainty with the smallest magnitude
        ref_mag = min(round_uncertainty(u, method=method)[1] for u in unc_up + unc_down)

        # convert the uncertainty and central value to match the reference magnitude
        scale = 1. / 10.**ref_mag
        val_str = match_precision(scale * val, "1")
        up_strs = [match_precision(scale * u, "1") for u in unc_up]
        down_strs = [match_precision(scale * u, "1") for u in unc_down]

        if passed_list:
            return (val_str, [up_strs, down_strs] if asym else [up_strs], ref_mag)
        else:
            return (val_str, [up_strs[0], down_strs[0]] if asym else [up_strs[0]], ref_mag)

    else:
        # sanity checks
        if (unc_up < 0).any():
            raise ValueError("up uncertainties must be positive: {}".format(unc_up))
        elif (unc_down < 0).any():
            raise ValueError("down uncertainties must be positive: {}".format(unc_down))

        # to determine the precision, use the uncertainty with the smallest magnitude
        ref_mag_up = round_uncertainty(unc_up, method=method)[1]
        ref_mag_down = round_uncertainty(unc_down, method=method)[1]
        ref_mag = min(ref_mag_up.min(), ref_mag_down.min())

        scale = 1. / 10.**ref_mag
        val_str = match_precision(scale * val, "1")
        up_str = match_precision(scale * unc_up, "1")
        down_str = match_precision(scale * unc_down, "1")

        return (val_str, [up_str, down_str] if asym else [up_str], ref_mag)
Rounds a number *val* with a single symmetric uncertainty *unc* or asymmetric uncertainties *unc* (interpreted as *up*) and *unc_down*, and calculates the orders of their magnitudes. They both can be a float or a list of floats for simultaneous evaluation. When *val* is a :py:class:`Number` instance, its combined uncertainty is used instead. Returns a 3-tuple containing: - The string representation of the central value. - The string representations of the uncertainties in a list. For the symmetric case, this list contains only one element. - The decimal magnitude. Examples: .. code-block:: python round_value(1.23, 0.456) # -> ("123", ["46"], -2) round_value(1.23, 0.456, 0.987) # -> ("123", ["46", "99"], -2) round_value(1.23, [0.456, 0.312]) # -> ("123", [["456", "312"]], -3) vals = np.array([1.23, 4.56]) uncs = np.array([0.45678, 0.078]) round_value(vals, uncs) # -> (["1230", "4560"], [["457", "78"]], -3)
def items(self):
    "Returns a list of (key, value) pairs as 2-tuples."
    return (list(self._pb.IntMap.items()) +
            list(self._pb.StringMap.items()) +
            list(self._pb.FloatMap.items()) +
            list(self._pb.BoolMap.items()))
Returns a list of (key, value) pairs as 2-tuples.
def str2midi(note_string):
    """
    Given a note string name (e.g. "Bb4"), returns its MIDI pitch number.
    """
    if note_string == "?":
        return nan
    data = note_string.strip().lower()
    name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
    accident2delta = {"b": -1, "#": 1, "x": 2}
    accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
    octave_delta = int(data[len(accidents) + 1:]) - 4
    return (MIDI_A4 +
            name2delta[data[0]] +                          # Name
            sum(accident2delta[ac] for ac in accidents) +  # Accident
            12 * octave_delta                              # Octave
           )
Given a note string name (e.g. "Bb4"), returns its MIDI pitch number.
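A self-contained sketch of the same pitch arithmetic, with `MIDI_A4 = 69` hard-coded so it runs standalone (the original pulls this constant and `it`/`nan` from module globals):

```python
import itertools as it

MIDI_A4 = 69  # MIDI number of A4 (the usual convention)

def str2midi(note_string):
    """Convert a note name like "Bb4" to its MIDI pitch number."""
    data = note_string.strip().lower()
    name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
    accident2delta = {"b": -1, "#": 1, "x": 2}
    accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
    octave_delta = int(data[len(accidents) + 1:]) - 4
    return (MIDI_A4
            + name2delta[data[0]]                          # letter name
            + sum(accident2delta[ac] for ac in accidents)  # accidentals
            + 12 * octave_delta)                           # octave shift
```

For example, `str2midi("Bb4")` gives 70: A4 (69) plus the B delta (+2) minus one flat.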
def list_storage_accounts_rg(access_token, subscription_id, rgname):
    '''List the storage accounts in the specified resource group.

    Args:
        access_token (str): A valid Azure authentication token.
        subscription_id (str): Azure subscription id.
        rgname (str): Azure resource group name.

    Returns:
        HTTP response. JSON body list of storage accounts.
    '''
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourcegroups/', rgname,
                        '/providers/Microsoft.Storage/storageAccounts',
                        '?api-version=', STORAGE_API])
    return do_get(endpoint, access_token)
List the storage accounts in the specified resource group. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. Returns: HTTP response. JSON body list of storage accounts.
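The endpoint assembly is plain string concatenation; a standalone sketch (the base URL and API version here are illustrative placeholders, not the real `get_rm_endpoint()`/`STORAGE_API` values):

```python
def build_storage_accounts_endpoint(base, subscription_id, rgname, api_version):
    """Assemble the ARM URL for listing storage accounts in a resource group."""
    return ''.join([base,
                    '/subscriptions/', subscription_id,
                    '/resourcegroups/', rgname,
                    '/providers/Microsoft.Storage/storageAccounts',
                    '?api-version=', api_version])
```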
def add_bindings(self, g: Graph) -> "PrefixLibrary":
    """
    Add bindings in the library to the graph

    :param g: graph to add prefixes to
    :return: PrefixLibrary object
    """
    for prefix, namespace in self:
        g.bind(prefix.lower(), namespace)
    return self
Add bindings in the library to the graph :param g: graph to add prefixes to :return: PrefixLibrary object
def id_exists(ids, mods, test=None, queue=False, **kwargs):
    '''
    Tests for the existence of a specific ID or list of IDs within the
    specified SLS file(s). Similar to :py:func:`state.sls_exists
    <salt.modules.state.sls_exists>`, returns True or False. The default
    environment is ``base``, use ``saltenv`` to specify a different
    environment.

    .. versionadded:: 2019.2.0

    saltenv
        Specify a salt fileserver environment from which to look for the SLS
        files specified in the ``mods`` argument

    CLI Example:

    .. code-block:: bash

        salt '*' state.id_exists create_myfile,update_template filestate saltenv=dev
    '''
    ids = salt.utils.args.split_input(ids)
    ids = set(ids)
    sls_ids = set(x['__id__'] for x in show_low_sls(mods, test=test, queue=queue, **kwargs))
    return ids.issubset(sls_ids)
Tests for the existence of a specific ID or list of IDs within the specified SLS file(s). Similar to :py:func:`state.sls_exists <salt.modules.state.sls_exists>`, returns True or False. The default environment is ``base``, use ``saltenv`` to specify a different environment. .. versionadded:: 2019.2.0 saltenv Specify a salt fileserver environment from which to look for the SLS files specified in the ``mods`` argument CLI Example: .. code-block:: bash salt '*' state.id_exists create_myfile,update_template filestate saltenv=dev
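The membership test at the end reduces to a set-subset check; a minimal standalone illustration (sample IDs are made up):

```python
def ids_exist(wanted_ids, declared_ids):
    """True iff every wanted ID appears among the declared SLS IDs."""
    return set(wanted_ids).issubset(set(declared_ids))
```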
def _urlencode(items):
    """A Unicode-safe URLencoder."""
    try:
        return urllib.urlencode(items)
    except UnicodeEncodeError:
        return urllib.urlencode([(k, smart_str(v)) for k, v in items])
A Unicode-safe URLencoder.
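The fallback above targets Python 2's `urllib.urlencode`, where non-ASCII values can raise `UnicodeEncodeError`. Under Python 3 the equivalent lives in `urllib.parse` and percent-encodes Unicode as UTF-8 by default, so no fallback is needed; a sketch:

```python
from urllib.parse import urlencode

def unicode_safe_urlencode(items):
    """Python 3 sketch: urlencode handles Unicode directly (UTF-8 by default)."""
    return urlencode([(k, str(v)) for k, v in items])
```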
def parse_stream(response):
    """
    take stream from docker-py lib and display it to the user.
    this also builds a stream list and returns it.
    """
    stream_data = []
    stream = stdout
    for data in response:
        if data:
            try:
                data = data.decode('utf-8')
            except AttributeError as e:
                logger.exception("Unable to parse stream, Attribute Error Raised: {0}".format(e))
                stream.write(data)
                continue

            try:
                normalized_data = normalize_keys(json.loads(data))
            except ValueError:
                stream.write(data)
                continue
            except TypeError:
                stream.write(data)
                continue

            if 'progress' in normalized_data:
                stream_data.append(normalized_data)
                _display_progress(normalized_data, stream)
            elif 'error' in normalized_data:
                _display_error(normalized_data, stream)
            elif 'status' in normalized_data:
                stream_data.append(normalized_data)
                _display_status(normalized_data, stream)
            elif 'stream' in normalized_data:
                stream_data.append(normalized_data)
                _display_stream(normalized_data, stream)
            else:
                stream.write(data)
            stream.flush()
    return stream_data
take stream from docker-py lib and display it to the user. this also builds a stream list and returns it.
def _optimal_orientation_from_detector(detector_name, tc):
    """ Low-level function to be called from _optimal_dec_from_detector
    and _optimal_ra_from_detector """
    d = Detector(detector_name)
    ra, dec = d.optimal_orientation(tc)
    return ra, dec
Low-level function to be called from _optimal_dec_from_detector and _optimal_ra_from_detector
def validate_format(self, obj, pointer=None):
    """
    ================= ================
    Expected draft04  Alias of
    ----------------- ----------------
    date-time         rfc3339.datetime
    email             email
    hostname          hostname
    ipv4              ipv4
    ipv6              ipv6
    uri               uri
    ================= ================
    """
    if 'format' in self.attrs:
        substituted = {
            'date-time': 'rfc3339.datetime',
            'email': 'email',
            'hostname': 'hostname',
            'ipv4': 'ipv4',
            'ipv6': 'ipv6',
            'uri': 'uri',
        }.get(self.attrs['format'], self.attrs['format'])
        logger.debug('use %s', substituted)
        try:
            return self.formats[substituted](obj)
        except ValidationError as error:
            logger.error(error)
            self.fail('Forbidden value', obj, pointer)
    return obj
================= ============ Expected draft04 Alias of ----------------- ------------ date-time rfc3339.datetime email email hostname hostname ipv4 ipv4 ipv6 ipv6 uri uri ================= ============
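The alias table boils down to a `dict.get` with the original name as the fallback; a standalone sketch of just that substitution step:

```python
# Draft-04 format name -> internal validator key (from the table above)
FORMAT_ALIASES = {
    'date-time': 'rfc3339.datetime',
    'email': 'email',
    'hostname': 'hostname',
    'ipv4': 'ipv4',
    'ipv6': 'ipv6',
    'uri': 'uri',
}

def resolve_format(name):
    """Map a draft-04 format name to its validator key; unknown names pass through."""
    return FORMAT_ALIASES.get(name, name)
```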
def on_key_down(self, event):
    """
    If user does command v, re-size window in case pasting
    has changed the content size.
    """
    keycode = event.GetKeyCode()
    meta_down = event.MetaDown() or event.GetCmdDown()
    if keycode == 86 and meta_down:
        # treat it as if it were a wx.EVT_TEXT_SIZE
        self.do_fit(event)
If user does command v, re-size window in case pasting has changed the content size.
def fit_transform(self, Z, **fit_params):
    """Fit all the transforms one after the other and transform the data,
    then use fit_transform on transformed data using the final estimator."""
    Zt, fit_params = self._pre_transform(Z, **fit_params)
    if hasattr(self.steps[-1][-1], 'fit_transform'):
        return self.steps[-1][-1].fit_transform(Zt, **fit_params)
    else:
        return self.steps[-1][-1].fit(Zt, **fit_params).transform(Zt)
Fit all the transforms one after the other and transform the data, then use fit_transform on transformed data using the final estimator.
def get_manifest_list(image, registry, insecure=False, dockercfg_path=None):
    """Return manifest list for image.

    :param image: ImageName, the remote image to inspect
    :param registry: str, URI for registry, if URI schema is not provided,
                     https:// will be used
    :param insecure: bool, when True registry's cert is not verified
    :param dockercfg_path: str, dirname of .dockercfg location
    :return: response, or None, with manifest list
    """
    version = 'v2_list'
    registry_session = RegistrySession(registry, insecure=insecure,
                                       dockercfg_path=dockercfg_path)
    response, _ = get_manifest(image, registry_session, version)
    return response
Return manifest list for image. :param image: ImageName, the remote image to inspect :param registry: str, URI for registry, if URI schema is not provided, https:// will be used :param insecure: bool, when True registry's cert is not verified :param dockercfg_path: str, dirname of .dockercfg location :return: response, or None, with manifest list
def _init_formats(self):
    """ Initialise default formats. """
    theme = self._color_scheme
    # normal message format
    fmt = QtGui.QTextCharFormat()
    fmt.setForeground(theme.foreground)
    fmt.setBackground(theme.background)
    self._formats[OutputFormat.NormalMessageFormat] = fmt
    # error message
    fmt = QtGui.QTextCharFormat()
    fmt.setForeground(theme.error)
    fmt.setBackground(theme.background)
    self._formats[OutputFormat.ErrorMessageFormat] = fmt
    # debug message
    fmt = QtGui.QTextCharFormat()
    fmt.setForeground(theme.custom)
    fmt.setBackground(theme.background)
    self._formats[OutputFormat.CustomFormat] = fmt
Initialise default formats.
def _compute_handshake(self):
    """Compute the authentication handshake value.

    :return: the computed hash value.
    :returntype: `str`"""
    return hashlib.sha1(to_utf8(self.stream_id) + to_utf8(self.secret)).hexdigest()
Compute the authentication handshake value. :return: the computed hash value. :returntype: `str`
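The handshake is SHA-1 over the concatenated stream ID and secret, hex-encoded. A standalone sketch with a stand-in `to_utf8` helper (the names and sample values are illustrative):

```python
import hashlib

def to_utf8(s):
    """Stand-in for the original helper: encode str to UTF-8 bytes, pass bytes through."""
    return s.encode('utf-8') if isinstance(s, str) else s

def compute_handshake(stream_id, secret):
    """SHA-1(stream_id + secret) as a lowercase hex digest."""
    return hashlib.sha1(to_utf8(stream_id) + to_utf8(secret)).hexdigest()
```

The digest is deterministic, so both sides of the connection can derive and compare the same 40-character value.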
def get_env_short(env):
    """
    Given an env, return <env_short> if env is valid

    Args:
        env: an environment, such as "prod", "staging", "proto<N>", "mgmt.<account_alias>"

    Returns:
        the shortname of the env, such as "prod", "staging", "proto", "mgmt"

    Raises:
        ValueError if env is misformatted or doesn't name a known environment
    """
    env_valid(env)
    if env.find(".") > -1:
        env_short, ext = env.split(".")
    else:
        env_short = env.strip(".0123456789")
    return env_short
Given an env, return <env_short> if env is valid

Args:
    env: an environment, such as "prod", "staging", "proto<N>", "mgmt.<account_alias>"

Returns:
    the shortname of the env, such as "prod", "staging", "proto", "mgmt"

Raises:
    ValueError if env is misformatted or doesn't name a known environment
def get_most_recent_update_time(self):
    """
    Indicates most recent update of the instance, assumption based on:
    - if currentWorkflow exists, its startedAt time is most recent update.
    - else max of workflowHistory startedAt is most recent update.
    """
    def parse_time(t):
        if t:
            return time.gmtime(t / 1000)
        return None

    try:
        max_wf_started_at = max([i.get('startedAt') for i in self.workflowHistory])
        return parse_time(max_wf_started_at)
    except ValueError:
        return None
Indicates most recent update of the instance, assumption based on:
- if currentWorkflow exists, its startedAt time is most recent update.
- else max of workflowHistory startedAt is most recent update.
def localize(self, lng: str) -> str:
    """
    Evaluate the given string with respect to the locale defined by ``lng``.

    If no string is available in the currently active language, this will
    give you the string in the system's default language. If this is
    unavailable as well, it will give you the string in the first language
    available.

    :param lng: A locale code, e.g. ``de``. If you specify a code including
        a country or region like ``de-AT``, exact matches will be used
        preferably, but if only a ``de`` or ``de-AT`` translation exists,
        this might be returned as well.
    """
    if self.data is None:
        return ""
    if isinstance(self.data, dict):
        firstpart = lng.split('-')[0]
        similar = [l for l in self.data.keys()
                   if (l.startswith(firstpart + "-") or firstpart == l) and l != lng]
        if self.data.get(lng):
            return self.data[lng]
        elif self.data.get(firstpart):
            return self.data[firstpart]
        elif similar and any([self.data.get(s) for s in similar]):
            for s in similar:
                if self.data.get(s):
                    return self.data.get(s)
        elif self.data.get(settings.LANGUAGE_CODE):
            return self.data[settings.LANGUAGE_CODE]
        elif len(self.data):
            return list(self.data.items())[0][1]
        else:
            return ""
    else:
        return str(self.data)
Evaluate the given string with respect to the locale defined by ``lng``.

If no string is available in the currently active language, this will
give you the string in the system's default language. If this is
unavailable as well, it will give you the string in the first language
available.

:param lng: A locale code, e.g. ``de``. If you specify a code including
    a country or region like ``de-AT``, exact matches will be used
    preferably, but if only a ``de`` or ``de-AT`` translation exists,
    this might be returned as well.
def get_changed_vars(section: SoS_Step):
    '''changed vars are variables that are "shared" and therefore "provides" to others
    '''
    if 'shared' not in section.options:
        return set()
    changed_vars = set()
    svars = section.options['shared']
    if isinstance(svars, str):
        changed_vars.add(svars)
        svars = {svars: svars}
    elif isinstance(svars, Sequence):
        for item in svars:
            if isinstance(item, str):
                changed_vars.add(item)
            elif isinstance(item, Mapping):
                changed_vars |= set(item.keys())
            else:
                raise ValueError(
                    f'Option shared should be a string, a mapping of expression, or list of string or mappings. {svars} provided'
                )
    elif isinstance(svars, Mapping):
        changed_vars |= set(svars.keys())
    else:
        raise ValueError(
            f'Option shared should be a string, a mapping of expression, or list of string or mappings. {svars} provided'
        )
    return changed_vars
changed vars are variables that are "shared" and therefore "provides" to others
def selenium_retry(target=None, retry=True):
    """Decorator to turn on automatic retries of flaky selenium failures.

    Decorate a robotframework library class to turn on retries for all
    selenium calls from that library:

        @selenium_retry
        class MyLibrary(object):

            # Decorate a method to turn it back off for that method
            @selenium_retry(False)
            def some_keyword(self):
                self.selenium.click_button('foo')

    Or turn it off by default but turn it on for some methods (the
    class-level decorator is still required):

        @selenium_retry(False)
        class MyLibrary(object):

            @selenium_retry(True)
            def some_keyword(self):
                self.selenium.click_button('foo')

    """
    if isinstance(target, bool):
        # Decorator was called with a single boolean argument
        retry = target
        target = None

    def decorate(target):
        if isinstance(target, type):
            cls = target
            # Metaclass time.
            # We're going to generate a new subclass that:
            # a) mixes in RetryingSeleniumLibraryMixin
            # b) sets the initial value of `retry_selenium`
            return type(
                cls.__name__,
                (cls, RetryingSeleniumLibraryMixin),
                {"retry_selenium": retry, "__doc__": cls.__doc__},
            )

        func = target

        @functools.wraps(func)
        def run_with_retry(self, *args, **kwargs):
            # Set the retry setting and run the original function.
            old_retry = self.retry_selenium
            self.retry = retry
            try:
                return func(self, *args, **kwargs)
            finally:
                # Restore the previous value
                self.retry_selenium = old_retry
                set_pdb_trace()

        run_with_retry.is_selenium_retry_decorator = True
        return run_with_retry

    if target is None:
        # Decorator is being used with arguments
        return decorate
    else:
        # Decorator was used without arguments
        return decorate(target)
Decorator to turn on automatic retries of flaky selenium failures.

Decorate a robotframework library class to turn on retries for all
selenium calls from that library:

    @selenium_retry
    class MyLibrary(object):

        # Decorate a method to turn it back off for that method
        @selenium_retry(False)
        def some_keyword(self):
            self.selenium.click_button('foo')

Or turn it off by default but turn it on for some methods (the
class-level decorator is still required):

    @selenium_retry(False)
    class MyLibrary(object):

        @selenium_retry(True)
        def some_keyword(self):
            self.selenium.click_button('foo')
def branches(self):
    """Return a list of branches for given repository

    :return: [str]
    """
    # get all remote branches
    refs = filter(lambda l: isinstance(l, git.RemoteReference), self.repo.references)
    # filter out HEAD branch
    refs = filter(lambda l: l.name != "origin/HEAD", refs)
    # filter out all branches not starting with 'origin/'
    refs = filter(lambda l: l.name.startswith("origin/"), refs)
    for ref in refs:
        self.refs[ref.name[7:]] = ref  # remove 'origin/' prefix
    return map(lambda l: l.name[7:], refs)
Return a list of branches for given repository

:return: [str]
def resource_urls(request):
    """Global values to pass to templates"""
    url_parsed = urlparse(settings.SEARCH_URL)
    defaults = dict(
        APP_NAME=__description__,
        APP_VERSION=__version__,
        SITE_URL=settings.SITE_URL.rstrip('/'),
        SEARCH_TYPE=settings.SEARCH_TYPE,
        SEARCH_URL=settings.SEARCH_URL,
        SEARCH_IP='%s://%s:%s' % (url_parsed.scheme, url_parsed.hostname, url_parsed.port)
    )
    return defaults
Global values to pass to templates
def list_users():
    '''
    Return a list of all users on Windows

    Returns:
        list: A list of all users on the system

    CLI Example:

    .. code-block:: bash

        salt '*' user.list_users
    '''
    res = 0
    user_list = []
    dowhile = True
    try:
        while res or dowhile:
            dowhile = False
            (users, _, res) = win32net.NetUserEnum(
                None,
                0,
                win32netcon.FILTER_NORMAL_ACCOUNT,
                res,
                win32netcon.MAX_PREFERRED_LENGTH
            )
            for user in users:
                user_list.append(user['name'])
        return user_list
    except win32net.error:
        pass
Return a list of all users on Windows

Returns:
    list: A list of all users on the system

CLI Example:

.. code-block:: bash

    salt '*' user.list_users
def record_set_properties(object_id, input_params={}, always_retry=True, **kwargs):
    """
    Invokes the /record-xxxx/setProperties API method.

    For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Properties#API-method%3A-%2Fclass-xxxx%2FsetProperties
    """
    return DXHTTPRequest('/%s/setProperties' % object_id, input_params,
                         always_retry=always_retry, **kwargs)
Invokes the /record-xxxx/setProperties API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Properties#API-method%3A-%2Fclass-xxxx%2FsetProperties
def _get_service_exec():
    '''
    Returns the path to the sysv service manager (either update-rc.d or chkconfig)
    '''
    contextkey = 'systemd._get_service_exec'
    if contextkey not in __context__:
        executables = ('update-rc.d', 'chkconfig')
        for executable in executables:
            service_exec = salt.utils.path.which(executable)
            if service_exec is not None:
                break
        else:
            raise CommandExecutionError(
                'Unable to find sysv service manager (tried {0})'.format(
                    ', '.join(executables)
                )
            )
        __context__[contextkey] = service_exec
    return __context__[contextkey]
Returns the path to the sysv service manager (either update-rc.d or chkconfig)
def get_all_policies(self, as_group=None, policy_names=None,
                     max_records=None, next_token=None):
    """
    Returns descriptions of what each policy does. This action supports
    pagination. If the response includes a token, there are more records
    available. To get the additional records, repeat the request with the
    response token as the NextToken parameter. If no group name or list of
    policy names are provided, all available policies are returned.

    :type as_name: str
    :param as_name: The name of the
        :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for.

    :type names: list
    :param names: List of policy names which should be searched for.

    :type max_records: int
    :param max_records: Maximum amount of groups to return.
    """
    params = {}
    if as_group:
        params['AutoScalingGroupName'] = as_group
    if policy_names:
        self.build_list_params(params, policy_names, 'PolicyNames')
    if max_records:
        params['MaxRecords'] = max_records
    if next_token:
        params['NextToken'] = next_token
    return self.get_list('DescribePolicies', params, [('member', ScalingPolicy)])
Returns descriptions of what each policy does. This action supports
pagination. If the response includes a token, there are more records
available. To get the additional records, repeat the request with the
response token as the NextToken parameter. If no group name or list of
policy names are provided, all available policies are returned.

:type as_name: str
:param as_name: The name of the
    :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for.

:type names: list
:param names: List of policy names which should be searched for.

:type max_records: int
:param max_records: Maximum amount of groups to return.
def _parse_byte_data(self, byte_data):
    """Extract the values from byte string."""
    self.data_type = b''.join(unpack('cccc', byte_data[:4])).decode()
    self.run = unpack('>i', byte_data[4:8])[0]
    self.udp_sequence = unpack('>i', byte_data[8:12])[0]
    self.timestamp, self.ns_ticks = unpack('>II', byte_data[12:20])
    self.dom_id = unpack('>i', byte_data[20:24])[0]
    dom_status_bits = unpack('>I', byte_data[24:28])[0]
    self.dom_status = "{0:032b}".format(dom_status_bits)
    self.human_readable_timestamp = datetime.datetime.fromtimestamp(
        int(self.timestamp), UTC_TZ
    ).strftime('%Y-%m-%d %H:%M:%S')
Extract the values from byte string.
def super_mro(self):
    """Get the MRO which will be used to lookup attributes in this super."""
    if not isinstance(self.mro_pointer, scoped_nodes.ClassDef):
        raise exceptions.SuperError(
            "The first argument to super must be a subtype of "
            "type, not {mro_pointer}.",
            super_=self,
        )

    if isinstance(self.type, scoped_nodes.ClassDef):
        # `super(type, type)`, most likely in a class method.
        self._class_based = True
        mro_type = self.type
    else:
        mro_type = getattr(self.type, "_proxied", None)
        if not isinstance(mro_type, (bases.Instance, scoped_nodes.ClassDef)):
            raise exceptions.SuperError(
                "The second argument to super must be an "
                "instance or subtype of type, not {type}.",
                super_=self,
            )

    if not mro_type.newstyle:
        raise exceptions.SuperError(
            "Unable to call super on old-style classes.", super_=self
        )

    mro = mro_type.mro()
    if self.mro_pointer not in mro:
        raise exceptions.SuperError(
            "The second argument to super must be an "
            "instance or subtype of type, not {type}.",
            super_=self,
        )

    index = mro.index(self.mro_pointer)
    return mro[index + 1:]
Get the MRO which will be used to lookup attributes in this super.
def export(self):
    """
    Generate a NIDM-Results export.
    """
    try:
        if not os.path.isdir(self.export_dir):
            os.mkdir(self.export_dir)

        # Initialise main bundle
        self._create_bundle(self.version)

        self.add_object(self.software)

        # Add model fitting steps
        if not isinstance(self.model_fittings, list):
            self.model_fittings = list(self.model_fittings.values())

        for model_fitting in self.model_fittings:
            # Design Matrix
            # model_fitting.activity.used(model_fitting.design_matrix)
            self.bundle.used(model_fitting.activity.id,
                             model_fitting.design_matrix.id)
            self.add_object(model_fitting.design_matrix)
            # *** Export visualisation of the design matrix
            self.add_object(model_fitting.design_matrix.image)
            if model_fitting.design_matrix.image.file is not None:
                self.add_object(model_fitting.design_matrix.image.file)
            if model_fitting.design_matrix.hrf_models is not None:
                # drift model
                self.add_object(model_fitting.design_matrix.drift_model)

            if self.version['major'] > 1 or \
                    (self.version['major'] == 1 and
                     self.version['minor'] >= 3):
                # Machine
                # model_fitting.data.wasAttributedTo(model_fitting.machine)
                self.bundle.wasAttributedTo(model_fitting.data.id,
                                            model_fitting.machine.id)
                self.add_object(model_fitting.machine)

                # Imaged subject or group(s)
                for sub in model_fitting.subjects:
                    self.add_object(sub)
                    # model_fitting.data.wasAttributedTo(sub)
                    self.bundle.wasAttributedTo(model_fitting.data.id, sub.id)

            # Data
            # model_fitting.activity.used(model_fitting.data)
            self.bundle.used(model_fitting.activity.id, model_fitting.data.id)
            self.add_object(model_fitting.data)

            # Error Model
            # model_fitting.activity.used(model_fitting.error_model)
            self.bundle.used(model_fitting.activity.id,
                             model_fitting.error_model.id)
            self.add_object(model_fitting.error_model)

            # Parameter Estimate Maps
            for param_estimate in model_fitting.param_estimates:
                # param_estimate.wasGeneratedBy(model_fitting.activity)
                self.bundle.wasGeneratedBy(param_estimate.id,
                                           model_fitting.activity.id)
                self.add_object(param_estimate)
                self.add_object(param_estimate.coord_space)
                self.add_object(param_estimate.file)
                if param_estimate.derfrom is not None:
                    self.bundle.wasDerivedFrom(param_estimate.id,
                                               param_estimate.derfrom.id)
                    self.add_object(param_estimate.derfrom)
                    self.add_object(param_estimate.derfrom.file,
                                    export_file=False)

            # Residual Mean Squares Map
            # model_fitting.rms_map.wasGeneratedBy(model_fitting.activity)
            self.add_object(model_fitting.rms_map)
            self.bundle.wasGeneratedBy(model_fitting.rms_map.id,
                                       model_fitting.activity.id)
            self.add_object(model_fitting.rms_map.coord_space)
            self.add_object(model_fitting.rms_map.file)
            if model_fitting.rms_map.derfrom is not None:
                self.bundle.wasDerivedFrom(
                    model_fitting.rms_map.id,
                    model_fitting.rms_map.derfrom.id)
                self.add_object(model_fitting.rms_map.derfrom)
                self.add_object(model_fitting.rms_map.derfrom.file,
                                export_file=False)

            # Resels per Voxel Map
            if model_fitting.rpv_map is not None:
                self.add_object(model_fitting.rpv_map)
                self.bundle.wasGeneratedBy(model_fitting.rpv_map.id,
                                           model_fitting.activity.id)
                self.add_object(model_fitting.rpv_map.coord_space)
                self.add_object(model_fitting.rpv_map.file)
                if model_fitting.rpv_map.inf_id is not None:
                    self.bundle.used(model_fitting.rpv_map.inf_id,
                                     model_fitting.rpv_map.id)
                if model_fitting.rpv_map.derfrom is not None:
                    self.bundle.wasDerivedFrom(
                        model_fitting.rpv_map.id,
                        model_fitting.rpv_map.derfrom.id)
                    self.add_object(model_fitting.rpv_map.derfrom)
                    self.add_object(model_fitting.rpv_map.derfrom.file,
                                    export_file=False)

            # Mask
            # model_fitting.mask_map.wasGeneratedBy(model_fitting.activity)
            self.bundle.wasGeneratedBy(model_fitting.mask_map.id,
                                       model_fitting.activity.id)
            self.add_object(model_fitting.mask_map)
            if model_fitting.mask_map.derfrom is not None:
                self.bundle.wasDerivedFrom(
                    model_fitting.mask_map.id,
                    model_fitting.mask_map.derfrom.id)
                self.add_object(model_fitting.mask_map.derfrom)
                self.add_object(model_fitting.mask_map.derfrom.file,
                                export_file=False)

            # Create coordinate space export
            self.add_object(model_fitting.mask_map.coord_space)

            # Create "Mask map" entity
            self.add_object(model_fitting.mask_map.file)

            # Grand Mean map
            # model_fitting.grand_mean_map.wasGeneratedBy(model_fitting.activity)
            self.bundle.wasGeneratedBy(model_fitting.grand_mean_map.id,
                                       model_fitting.activity.id)
            self.add_object(model_fitting.grand_mean_map)
            # Coordinate space entity
            self.add_object(model_fitting.grand_mean_map.coord_space)
            # Grand Mean Map entity
            self.add_object(model_fitting.grand_mean_map.file)

            # Model Parameters Estimation activity
            self.add_object(model_fitting.activity)
            self.bundle.wasAssociatedWith(model_fitting.activity.id,
                                          self.software.id)
            # model_fitting.activity.wasAssociatedWith(self.software)
            # self.add_object(model_fitting)

        # Add contrast estimation steps
        analysis_masks = dict()
        for (model_fitting_id, pe_ids), contrasts in list(
                self.contrasts.items()):
            for contrast in contrasts:
                model_fitting = self._get_model_fitting(model_fitting_id)
                # for contrast in contrasts:
                # contrast.estimation.used(model_fitting.rms_map)
                self.bundle.used(contrast.estimation.id,
                                 model_fitting.rms_map.id)
                # contrast.estimation.used(model_fitting.mask_map)
                self.bundle.used(contrast.estimation.id,
                                 model_fitting.mask_map.id)
                analysis_masks[contrast.estimation.id] = \
                    model_fitting.mask_map.id
                self.bundle.used(contrast.estimation.id,
                                 contrast.weights.id)
                self.bundle.used(contrast.estimation.id,
                                 model_fitting.design_matrix.id)
                # contrast.estimation.wasAssociatedWith(self.software)
                self.bundle.wasAssociatedWith(contrast.estimation.id,
                                              self.software.id)

                for pe_id in pe_ids:
                    # contrast.estimation.used(pe_id)
                    self.bundle.used(contrast.estimation.id, pe_id)

                # Create estimation activity
                self.add_object(contrast.estimation)

                # Create contrast weights
                self.add_object(contrast.weights)

                if contrast.contrast_map is not None:
                    # Create contrast Map
                    # contrast.contrast_map.wasGeneratedBy(contrast.estimation)
                    self.bundle.wasGeneratedBy(contrast.contrast_map.id,
                                               contrast.estimation.id)
                    self.add_object(contrast.contrast_map)
                    self.add_object(contrast.contrast_map.coord_space)
                    # Copy contrast map in export directory
                    self.add_object(contrast.contrast_map.file)
                    if contrast.contrast_map.derfrom is not None:
                        self.bundle.wasDerivedFrom(
                            contrast.contrast_map.id,
                            contrast.contrast_map.derfrom.id)
                        self.add_object(contrast.contrast_map.derfrom)
                        self.add_object(contrast.contrast_map.derfrom.file,
                                        export_file=False)

                # Create Std Err. Map (T-tests) or Explained Mean Sq. Map
                # (F-tests)
                # contrast.stderr_or_expl_mean_sq_map.wasGeneratedBy
                # (contrast.estimation)
                stderr_explmeansq_map = (
                    contrast.stderr_or_expl_mean_sq_map)
                self.bundle.wasGeneratedBy(
                    stderr_explmeansq_map.id,
                    contrast.estimation.id)
                self.add_object(stderr_explmeansq_map)
                self.add_object(
                    stderr_explmeansq_map.coord_space)
                if isinstance(stderr_explmeansq_map,
                              ContrastStdErrMap) and \
                        stderr_explmeansq_map.contrast_var:
                    self.add_object(
                        stderr_explmeansq_map.contrast_var)
                    if stderr_explmeansq_map.var_coord_space:
                        self.add_object(
                            stderr_explmeansq_map.var_coord_space)
                    if stderr_explmeansq_map.contrast_var.coord_space:
                        self.add_object(
                            stderr_explmeansq_map.contrast_var.coord_space)
                    self.add_object(
                        stderr_explmeansq_map.contrast_var.file,
                        export_file=False)
                    self.bundle.wasDerivedFrom(
                        stderr_explmeansq_map.id,
                        stderr_explmeansq_map.contrast_var.id)
                self.add_object(stderr_explmeansq_map.file)

                # Create Statistic Map
                # contrast.stat_map.wasGeneratedBy(contrast.estimation)
                self.bundle.wasGeneratedBy(contrast.stat_map.id,
                                           contrast.estimation.id)
                self.add_object(contrast.stat_map)
                self.add_object(contrast.stat_map.coord_space)
                # Copy Statistical map in export directory
                self.add_object(contrast.stat_map.file)
                if contrast.stat_map.derfrom is not None:
                    self.bundle.wasDerivedFrom(
                        contrast.stat_map.id,
                        contrast.stat_map.derfrom.id)
                    self.add_object(contrast.stat_map.derfrom)
                    self.add_object(contrast.stat_map.derfrom.file,
                                    export_file=False)

                # Create Z Statistic Map
                if contrast.z_stat_map:
                    # contrast.z_stat_map.wasGeneratedBy(contrast.estimation)
                    self.bundle.wasGeneratedBy(contrast.z_stat_map.id,
                                               contrast.estimation.id)
                    self.add_object(contrast.z_stat_map)
                    self.add_object(contrast.z_stat_map.coord_space)
                    # Copy Statistical map in export directory
                    self.add_object(contrast.z_stat_map.file)

                # self.add_object(contrast)

        # Add inference steps
        for contrast_id, inferences in list(self.inferences.items()):
            contrast = self._get_contrast(contrast_id)
            for inference in inferences:
                if contrast.z_stat_map:
                    used_id = contrast.z_stat_map.id
                else:
                    used_id = contrast.stat_map.id
                # inference.inference_act.used(used_id)
                self.bundle.used(inference.inference_act.id, used_id)
                # inference.inference_act.wasAssociatedWith(self.software)
                self.bundle.wasAssociatedWith(inference.inference_act.id,
                                              self.software.id)
                # self.add_object(inference)

                # Excursion set
                # inference.excursion_set.wasGeneratedBy(inference.inference_act)
                self.bundle.wasGeneratedBy(inference.excursion_set.id,
                                           inference.inference_act.id)
                self.add_object(inference.excursion_set)
                self.add_object(inference.excursion_set.coord_space)
                if inference.excursion_set.visu is not None:
                    self.add_object(inference.excursion_set.visu)
                    if inference.excursion_set.visu.file is not None:
                        self.add_object(inference.excursion_set.visu.file)
                # Copy "Excursion set map" file in export directory
                self.add_object(inference.excursion_set.file)
                if inference.excursion_set.clust_map is not None:
                    self.add_object(inference.excursion_set.clust_map)
                    self.add_object(inference.excursion_set.clust_map.file)
                    self.add_object(
                        inference.excursion_set.clust_map.coord_space)
                if inference.excursion_set.mip is not None:
                    self.add_object(inference.excursion_set.mip)
                    self.add_object(inference.excursion_set.mip.file)

                # Height threshold
                if inference.height_thresh.equiv_thresh is not None:
                    for equiv in inference.height_thresh.equiv_thresh:
                        self.add_object(equiv)
                self.add_object(inference.height_thresh)

                # Extent threshold
                if inference.extent_thresh.equiv_thresh is not
None: for equiv in inference.extent_thresh.equiv_thresh: self.add_object(equiv) self.add_object(inference.extent_thresh) # Display Mask (potentially more than 1) if inference.disp_mask: for mask in inference.disp_mask: # inference.inference_act.used(mask) self.bundle.used(inference.inference_act.id, mask.id) self.add_object(mask) # Create coordinate space entity self.add_object(mask.coord_space) # Create "Display Mask Map" entity self.add_object(mask.file) if mask.derfrom is not None: self.bundle.wasDerivedFrom(mask.id, mask.derfrom.id) self.add_object(mask.derfrom) self.add_object(mask.derfrom.file, export_file=False) # Search Space self.bundle.wasGeneratedBy(inference.search_space.id, inference.inference_act.id) # inference.search_space.wasGeneratedBy(inference.inference_act) self.add_object(inference.search_space) self.add_object(inference.search_space.coord_space) # Copy "Mask map" in export directory self.add_object(inference.search_space.file) # Peak Definition if inference.peak_criteria: # inference.inference_act.used(inference.peak_criteria) self.bundle.used(inference.inference_act.id, inference.peak_criteria.id) self.add_object(inference.peak_criteria) # Cluster Definition if inference.cluster_criteria: # inference.inference_act.used(inference.cluster_criteria) self.bundle.used(inference.inference_act.id, inference.cluster_criteria.id) self.add_object(inference.cluster_criteria) if inference.clusters: # Clusters and peaks for cluster in inference.clusters: # cluster.wasDerivedFrom(inference.excursion_set) self.bundle.wasDerivedFrom( cluster.id, inference.excursion_set.id) self.add_object(cluster) for peak in cluster.peaks: self.bundle.wasDerivedFrom(peak.id, cluster.id) self.add_object(peak) self.add_object(peak.coordinate) if cluster.cog is not None: self.bundle.wasDerivedFrom(cluster.cog.id, cluster.id) self.add_object(cluster.cog) self.add_object(cluster.cog.coordinate) # Inference activity # 
inference.inference_act.wasAssociatedWith(inference.software_id) # inference.inference_act.used(inference.height_thresh) self.bundle.used(inference.inference_act.id, inference.height_thresh.id) # inference.inference_act.used(inference.extent_thresh) self.bundle.used(inference.inference_act.id, inference.extent_thresh.id) self.bundle.used(inference.inference_act.id, analysis_masks[contrast.estimation.id]) self.add_object(inference.inference_act) # Write-out prov file self.save_prov_to_files() return self.out_dir except Exception: self.cleanup() raise
Generate a NIDM-Results export.
Below is the the instruction that describes the task: ### Input: Generate a NIDM-Results export. ### Response: def export(self): """ Generate a NIDM-Results export. """ try: if not os.path.isdir(self.export_dir): os.mkdir(self.export_dir) # Initialise main bundle self._create_bundle(self.version) self.add_object(self.software) # Add model fitting steps if not isinstance(self.model_fittings, list): self.model_fittings = list(self.model_fittings.values()) for model_fitting in self.model_fittings: # Design Matrix # model_fitting.activity.used(model_fitting.design_matrix) self.bundle.used(model_fitting.activity.id, model_fitting.design_matrix.id) self.add_object(model_fitting.design_matrix) # *** Export visualisation of the design matrix self.add_object(model_fitting.design_matrix.image) if model_fitting.design_matrix.image.file is not None: self.add_object(model_fitting.design_matrix.image.file) if model_fitting.design_matrix.hrf_models is not None: # drift model self.add_object(model_fitting.design_matrix.drift_model) if self.version['major'] > 1 or \ (self.version['major'] == 1 and self.version['minor'] >= 3): # Machine # model_fitting.data.wasAttributedTo(model_fitting.machine) self.bundle.wasAttributedTo(model_fitting.data.id, model_fitting.machine.id) self.add_object(model_fitting.machine) # Imaged subject or group(s) for sub in model_fitting.subjects: self.add_object(sub) # model_fitting.data.wasAttributedTo(sub) self.bundle.wasAttributedTo(model_fitting.data.id, sub.id) # Data # model_fitting.activity.used(model_fitting.data) self.bundle.used(model_fitting.activity.id, model_fitting.data.id) self.add_object(model_fitting.data) # Error Model # model_fitting.activity.used(model_fitting.error_model) self.bundle.used(model_fitting.activity.id, model_fitting.error_model.id) self.add_object(model_fitting.error_model) # Parameter Estimate Maps for param_estimate in model_fitting.param_estimates: # param_estimate.wasGeneratedBy(model_fitting.activity) 
self.bundle.wasGeneratedBy(param_estimate.id, model_fitting.activity.id) self.add_object(param_estimate) self.add_object(param_estimate.coord_space) self.add_object(param_estimate.file) if param_estimate.derfrom is not None: self.bundle.wasDerivedFrom(param_estimate.id, param_estimate.derfrom.id) self.add_object(param_estimate.derfrom) self.add_object(param_estimate.derfrom.file, export_file=False) # Residual Mean Squares Map # model_fitting.rms_map.wasGeneratedBy(model_fitting.activity) self.add_object(model_fitting.rms_map) self.bundle.wasGeneratedBy(model_fitting.rms_map.id, model_fitting.activity.id) self.add_object(model_fitting.rms_map.coord_space) self.add_object(model_fitting.rms_map.file) if model_fitting.rms_map.derfrom is not None: self.bundle.wasDerivedFrom( model_fitting.rms_map.id, model_fitting.rms_map.derfrom.id) self.add_object(model_fitting.rms_map.derfrom) self.add_object(model_fitting.rms_map.derfrom.file, export_file=False) # Resels per Voxel Map if model_fitting.rpv_map is not None: self.add_object(model_fitting.rpv_map) self.bundle.wasGeneratedBy(model_fitting.rpv_map.id, model_fitting.activity.id) self.add_object(model_fitting.rpv_map.coord_space) self.add_object(model_fitting.rpv_map.file) if model_fitting.rpv_map.inf_id is not None: self.bundle.used(model_fitting.rpv_map.inf_id, model_fitting.rpv_map.id) if model_fitting.rpv_map.derfrom is not None: self.bundle.wasDerivedFrom( model_fitting.rpv_map.id, model_fitting.rpv_map.derfrom.id) self.add_object(model_fitting.rpv_map.derfrom) self.add_object(model_fitting.rpv_map.derfrom.file, export_file=False) # Mask # model_fitting.mask_map.wasGeneratedBy(model_fitting.activity) self.bundle.wasGeneratedBy(model_fitting.mask_map.id, model_fitting.activity.id) self.add_object(model_fitting.mask_map) if model_fitting.mask_map.derfrom is not None: self.bundle.wasDerivedFrom( model_fitting.mask_map.id, model_fitting.mask_map.derfrom.id) self.add_object(model_fitting.mask_map.derfrom) 
self.add_object(model_fitting.mask_map.derfrom.file, export_file=False) # Create coordinate space export self.add_object(model_fitting.mask_map.coord_space) # Create "Mask map" entity self.add_object(model_fitting.mask_map.file) # Grand Mean map # model_fitting.grand_mean_map.wasGeneratedBy(model_fitting.activity) self.bundle.wasGeneratedBy(model_fitting.grand_mean_map.id, model_fitting.activity.id) self.add_object(model_fitting.grand_mean_map) # Coordinate space entity self.add_object(model_fitting.grand_mean_map.coord_space) # Grand Mean Map entity self.add_object(model_fitting.grand_mean_map.file) # Model Parameters Estimation activity self.add_object(model_fitting.activity) self.bundle.wasAssociatedWith(model_fitting.activity.id, self.software.id) # model_fitting.activity.wasAssociatedWith(self.software) # self.add_object(model_fitting) # Add contrast estimation steps analysis_masks = dict() for (model_fitting_id, pe_ids), contrasts in list( self.contrasts.items()): for contrast in contrasts: model_fitting = self._get_model_fitting(model_fitting_id) # for contrast in contrasts: # contrast.estimation.used(model_fitting.rms_map) self.bundle.used(contrast.estimation.id, model_fitting.rms_map.id) # contrast.estimation.used(model_fitting.mask_map) self.bundle.used(contrast.estimation.id, model_fitting.mask_map.id) analysis_masks[contrast.estimation.id] = \ model_fitting.mask_map.id self.bundle.used(contrast.estimation.id, contrast.weights.id) self.bundle.used(contrast.estimation.id, model_fitting.design_matrix.id) # contrast.estimation.wasAssociatedWith(self.software) self.bundle.wasAssociatedWith(contrast.estimation.id, self.software.id) for pe_id in pe_ids: # contrast.estimation.used(pe_id) self.bundle.used(contrast.estimation.id, pe_id) # Create estimation activity self.add_object(contrast.estimation) # Create contrast weights self.add_object(contrast.weights) if contrast.contrast_map is not None: # Create contrast Map # 
contrast.contrast_map.wasGeneratedBy(contrast.estimation) self.bundle.wasGeneratedBy(contrast.contrast_map.id, contrast.estimation.id) self.add_object(contrast.contrast_map) self.add_object(contrast.contrast_map.coord_space) # Copy contrast map in export directory self.add_object(contrast.contrast_map.file) if contrast.contrast_map.derfrom is not None: self.bundle.wasDerivedFrom( contrast.contrast_map.id, contrast.contrast_map.derfrom.id) self.add_object(contrast.contrast_map.derfrom) self.add_object(contrast.contrast_map.derfrom.file, export_file=False) # Create Std Err. Map (T-tests) or Explained Mean Sq. Map # (F-tests) # contrast.stderr_or_expl_mean_sq_map.wasGeneratedBy # (contrast.estimation) stderr_explmeansq_map = ( contrast.stderr_or_expl_mean_sq_map) self.bundle.wasGeneratedBy( stderr_explmeansq_map.id, contrast.estimation.id) self.add_object(stderr_explmeansq_map) self.add_object( stderr_explmeansq_map.coord_space) if isinstance(stderr_explmeansq_map, ContrastStdErrMap) and \ stderr_explmeansq_map.contrast_var: self.add_object( stderr_explmeansq_map.contrast_var) if stderr_explmeansq_map.var_coord_space: self.add_object( stderr_explmeansq_map.var_coord_space) if stderr_explmeansq_map.contrast_var.coord_space: self.add_object( stderr_explmeansq_map.contrast_var.coord_space) self.add_object( stderr_explmeansq_map.contrast_var.file, export_file=False) self.bundle.wasDerivedFrom( stderr_explmeansq_map.id, stderr_explmeansq_map.contrast_var.id) self.add_object(stderr_explmeansq_map.file) # Create Statistic Map # contrast.stat_map.wasGeneratedBy(contrast.estimation) self.bundle.wasGeneratedBy(contrast.stat_map.id, contrast.estimation.id) self.add_object(contrast.stat_map) self.add_object(contrast.stat_map.coord_space) # Copy Statistical map in export directory self.add_object(contrast.stat_map.file) if contrast.stat_map.derfrom is not None: self.bundle.wasDerivedFrom( contrast.stat_map.id, contrast.stat_map.derfrom.id) 
self.add_object(contrast.stat_map.derfrom) self.add_object(contrast.stat_map.derfrom.file, export_file=False) # Create Z Statistic Map if contrast.z_stat_map: # contrast.z_stat_map.wasGeneratedBy(contrast.estimation) self.bundle.wasGeneratedBy(contrast.z_stat_map.id, contrast.estimation.id) self.add_object(contrast.z_stat_map) self.add_object(contrast.z_stat_map.coord_space) # Copy Statistical map in export directory self.add_object(contrast.z_stat_map.file) # self.add_object(contrast) # Add inference steps for contrast_id, inferences in list(self.inferences.items()): contrast = self._get_contrast(contrast_id) for inference in inferences: if contrast.z_stat_map: used_id = contrast.z_stat_map.id else: used_id = contrast.stat_map.id # inference.inference_act.used(used_id) self.bundle.used(inference.inference_act.id, used_id) # inference.inference_act.wasAssociatedWith(self.software) self.bundle.wasAssociatedWith(inference.inference_act.id, self.software.id) # self.add_object(inference) # Excursion set # inference.excursion_set.wasGeneratedBy(inference.inference_act) self.bundle.wasGeneratedBy(inference.excursion_set.id, inference.inference_act.id) self.add_object(inference.excursion_set) self.add_object(inference.excursion_set.coord_space) if inference.excursion_set.visu is not None: self.add_object(inference.excursion_set.visu) if inference.excursion_set.visu.file is not None: self.add_object(inference.excursion_set.visu.file) # Copy "Excursion set map" file in export directory self.add_object(inference.excursion_set.file) if inference.excursion_set.clust_map is not None: self.add_object(inference.excursion_set.clust_map) self.add_object(inference.excursion_set.clust_map.file) self.add_object( inference.excursion_set.clust_map.coord_space) if inference.excursion_set.mip is not None: self.add_object(inference.excursion_set.mip) self.add_object(inference.excursion_set.mip.file) # Height threshold if inference.height_thresh.equiv_thresh is not None: for equiv in 
inference.height_thresh.equiv_thresh: self.add_object(equiv) self.add_object(inference.height_thresh) # Extent threshold if inference.extent_thresh.equiv_thresh is not None: for equiv in inference.extent_thresh.equiv_thresh: self.add_object(equiv) self.add_object(inference.extent_thresh) # Display Mask (potentially more than 1) if inference.disp_mask: for mask in inference.disp_mask: # inference.inference_act.used(mask) self.bundle.used(inference.inference_act.id, mask.id) self.add_object(mask) # Create coordinate space entity self.add_object(mask.coord_space) # Create "Display Mask Map" entity self.add_object(mask.file) if mask.derfrom is not None: self.bundle.wasDerivedFrom(mask.id, mask.derfrom.id) self.add_object(mask.derfrom) self.add_object(mask.derfrom.file, export_file=False) # Search Space self.bundle.wasGeneratedBy(inference.search_space.id, inference.inference_act.id) # inference.search_space.wasGeneratedBy(inference.inference_act) self.add_object(inference.search_space) self.add_object(inference.search_space.coord_space) # Copy "Mask map" in export directory self.add_object(inference.search_space.file) # Peak Definition if inference.peak_criteria: # inference.inference_act.used(inference.peak_criteria) self.bundle.used(inference.inference_act.id, inference.peak_criteria.id) self.add_object(inference.peak_criteria) # Cluster Definition if inference.cluster_criteria: # inference.inference_act.used(inference.cluster_criteria) self.bundle.used(inference.inference_act.id, inference.cluster_criteria.id) self.add_object(inference.cluster_criteria) if inference.clusters: # Clusters and peaks for cluster in inference.clusters: # cluster.wasDerivedFrom(inference.excursion_set) self.bundle.wasDerivedFrom( cluster.id, inference.excursion_set.id) self.add_object(cluster) for peak in cluster.peaks: self.bundle.wasDerivedFrom(peak.id, cluster.id) self.add_object(peak) self.add_object(peak.coordinate) if cluster.cog is not None: 
self.bundle.wasDerivedFrom(cluster.cog.id, cluster.id) self.add_object(cluster.cog) self.add_object(cluster.cog.coordinate) # Inference activity # inference.inference_act.wasAssociatedWith(inference.software_id) # inference.inference_act.used(inference.height_thresh) self.bundle.used(inference.inference_act.id, inference.height_thresh.id) # inference.inference_act.used(inference.extent_thresh) self.bundle.used(inference.inference_act.id, inference.extent_thresh.id) self.bundle.used(inference.inference_act.id, analysis_masks[contrast.estimation.id]) self.add_object(inference.inference_act) # Write-out prov file self.save_prov_to_files() return self.out_dir except Exception: self.cleanup() raise
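The export() record above is dominated by one pattern: every PROV relation (used, wasGeneratedBy, wasDerivedFrom, wasAssociatedWith) is registered on a bundle as a pair of identifiers. A minimal sketch of that pattern; the Bundle class and the "niiri:" ids here are illustrative stand-ins, not the real prov/nidmresults API.

```python
# Minimal sketch of the provenance bookkeeping pattern that dominates
# export(): every PROV relation is recorded on a bundle as a pair of
# identifiers. Class and id names are illustrative, not the real API.

class Bundle:
    def __init__(self):
        self.records = []

    def used(self, activity_id, entity_id):
        # "activity used entity", e.g. contrast estimation used the mask map
        self.records.append(("used", activity_id, entity_id))

    def wasGeneratedBy(self, entity_id, activity_id):
        # "entity was generated by activity", e.g. contrast map by estimation
        self.records.append(("wasGeneratedBy", entity_id, activity_id))

bundle = Bundle()
bundle.used("niiri:estimation", "niiri:mask_map")
bundle.wasGeneratedBy("niiri:contrast_map", "niiri:estimation")
```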
def add(self, uuid): """ Adds a key to the HyperLogLog """ if uuid: # Computing the hash try: x = hash64(uuid) except UnicodeEncodeError: x = hash64(uuid.encode('ascii', 'ignore')) # Finding the register to update by using the first b bits as an index j = x & ((1 << self.b) - 1) # Remove those b bits w = x >> self.b # Find the first 0 in the remaining bit pattern self.M[j] = max(self.M[j], self._get_rho(w, self.bitcount_arr))
Adds a key to the HyperLogLog
Below is the instruction that describes the task: ### Input: Adds a key to the HyperLogLog ### Response: def add(self, uuid): """ Adds a key to the HyperLogLog """ if uuid: # Computing the hash try: x = hash64(uuid) except UnicodeEncodeError: x = hash64(uuid.encode('ascii', 'ignore')) # Finding the register to update by using the first b bits as an index j = x & ((1 << self.b) - 1) # Remove those b bits w = x >> self.b # Find the first 0 in the remaining bit pattern self.M[j] = max(self.M[j], self._get_rho(w, self.bitcount_arr))
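The bit manipulation inside add() is compact; a self-contained sketch of the same split follows. Treating rho as "trailing zeros + 1" is an assumption about _get_rho, whose definition is not shown in the record above.

```python
# Sketch of the index/pattern split performed by HyperLogLog.add(): the
# low b bits of the 64-bit hash pick a register j, and the remaining bits
# w are scanned for the position of their first set bit (rho). The rho
# convention here (trailing zeros + 1) is an assumption about _get_rho.

def split_hash(x, b):
    j = x & ((1 << b) - 1)   # register index: the low b bits
    w = x >> b               # remaining bit pattern
    return j, w

def rho(w):
    # 1-based position of the least-significant set bit; 0 if w == 0.
    if w == 0:
        return 0
    r = 1
    while w & 1 == 0:
        w >>= 1
        r += 1
    return r

j, w = split_hash(0b10110110, 4)   # j = 0b0110 = 6, w = 0b1011 = 11
```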
def raise_for_response(self, responses): """ Constructs appropriate exception from list of responses and raises it. """ exception_messages = [self.client.format_exception_message(response) for response in responses] if len(exception_messages) == 1: message = exception_messages[0] else: message = "[%s]" % ", ".join(exception_messages) raise PostmarkerException(message)
Constructs appropriate exception from list of responses and raises it.
Below is the instruction that describes the task: ### Input: Constructs appropriate exception from list of responses and raises it. ### Response: def raise_for_response(self, responses): """ Constructs appropriate exception from list of responses and raises it. """ exception_messages = [self.client.format_exception_message(response) for response in responses] if len(exception_messages) == 1: message = exception_messages[0] else: message = "[%s]" % ", ".join(exception_messages) raise PostmarkerException(message)
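The formatting rule inside raise_for_response() is easy to exercise in isolation: one message is raised verbatim, several are joined and wrapped in brackets.

```python
# The message-formatting rule of raise_for_response(), isolated: a single
# message passes through unchanged, multiple messages are comma-joined
# inside brackets.
def format_messages(exception_messages):
    if len(exception_messages) == 1:
        return exception_messages[0]
    return "[%s]" % ", ".join(exception_messages)
```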
def com_google_fonts_check_name_license(ttFont, license): """Check copyright namerecords match license file.""" from fontbakery.constants import PLACEHOLDER_LICENSING_TEXT failed = False placeholder = PLACEHOLDER_LICENSING_TEXT[license] entry_found = False for i, nameRecord in enumerate(ttFont["name"].names): if nameRecord.nameID == NameID.LICENSE_DESCRIPTION: entry_found = True value = nameRecord.toUnicode() if value != placeholder: failed = True yield FAIL, Message("wrong", \ ("License file {} exists but" " NameID {} (LICENSE DESCRIPTION) value" " on platform {} ({})" " is not specified for that." " Value was: \"{}\"" " Must be changed to \"{}\"" "").format(license, NameID.LICENSE_DESCRIPTION, nameRecord.platformID, PlatformID(nameRecord.platformID).name, value, placeholder)) if not entry_found: yield FAIL, Message("missing", \ ("Font lacks NameID {} " "(LICENSE DESCRIPTION). A proper licensing entry" " must be set.").format(NameID.LICENSE_DESCRIPTION)) elif not failed: yield PASS, "Licensing entry on name table is correctly set."
Check copyright namerecords match license file.
Below is the instruction that describes the task: ### Input: Check copyright namerecords match license file. ### Response: def com_google_fonts_check_name_license(ttFont, license): """Check copyright namerecords match license file.""" from fontbakery.constants import PLACEHOLDER_LICENSING_TEXT failed = False placeholder = PLACEHOLDER_LICENSING_TEXT[license] entry_found = False for i, nameRecord in enumerate(ttFont["name"].names): if nameRecord.nameID == NameID.LICENSE_DESCRIPTION: entry_found = True value = nameRecord.toUnicode() if value != placeholder: failed = True yield FAIL, Message("wrong", \ ("License file {} exists but" " NameID {} (LICENSE DESCRIPTION) value" " on platform {} ({})" " is not specified for that." " Value was: \"{}\"" " Must be changed to \"{}\"" "").format(license, NameID.LICENSE_DESCRIPTION, nameRecord.platformID, PlatformID(nameRecord.platformID).name, value, placeholder)) if not entry_found: yield FAIL, Message("missing", \ ("Font lacks NameID {} " "(LICENSE DESCRIPTION). A proper licensing entry" " must be set.").format(NameID.LICENSE_DESCRIPTION)) elif not failed: yield PASS, "Licensing entry on name table is correctly set."
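Stripped of fontTools specifics, the check's decision logic reduces to: every LICENSE DESCRIPTION name record must equal the placeholder text for the detected license; no such record at all is its own failure. A simplified sketch under that reading; the (nameID, value) pair representation is an illustration, not the fontTools name-table API (13 is the OpenType nameID for the license description).

```python
# Simplified sketch of the license check's decision logic. name_records is
# a list of (nameID, value) pairs standing in for the fontTools name table.
LICENSE_DESCRIPTION = 13  # OpenType nameID for the license description

def check_license_records(name_records, placeholder):
    results = []
    entry_found = False
    for name_id, value in name_records:
        if name_id == LICENSE_DESCRIPTION:
            entry_found = True
            if value != placeholder:
                results.append(("FAIL", "wrong"))
    if not entry_found:
        results.append(("FAIL", "missing"))
    elif not results:
        results.append(("PASS", "ok"))
    return results
```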
def render_dot(self, code, options, format, prefix='graphviz'): # type: (nodes.NodeVisitor, unicode, Dict, unicode, unicode) -> Tuple[unicode, unicode] """Render graphviz code into a PNG or PDF output file.""" graphviz_dot = options.get('graphviz_dot', self.builder.config.graphviz_dot) hashkey = (code + str(options) + str(graphviz_dot) + str(self.builder.config.graphviz_dot_args)).encode('utf-8') fname = '%s-%s.%s' % (prefix, sha1(hashkey).hexdigest(), format) relfn = posixpath.join(self.builder.imgpath, fname) outfn = path.join(self.builder.outdir, self.builder.imagedir, fname) if path.isfile(outfn): return relfn, outfn if (hasattr(self.builder, '_graphviz_warned_dot') and self.builder._graphviz_warned_dot.get(graphviz_dot)): return None, None ensuredir(path.dirname(outfn)) # graphviz expects UTF-8 by default if isinstance(code, text_type): code = code.encode('utf-8') dot_args = [graphviz_dot] dot_args.extend(self.builder.config.graphviz_dot_args) dot_args.extend(['-T' + format, '-o' + outfn]) if format == 'png': dot_args.extend(['-Tcmapx', '-o%s.map' % outfn]) try: p = Popen(dot_args, stdout=PIPE, stdin=PIPE, stderr=PIPE) except OSError as err: if err.errno != ENOENT: # No such file or directory raise logger.warning(__('dot command %r cannot be run (needed for graphviz ' 'output), check the graphviz_dot setting'), graphviz_dot) if not hasattr(self.builder, '_graphviz_warned_dot'): self.builder._graphviz_warned_dot = {} self.builder._graphviz_warned_dot[graphviz_dot] = True return None, None try: # Graphviz may close standard input when an error occurs, # resulting in a broken pipe on communicate() stdout, stderr = p.communicate(code) except (OSError, IOError) as err: if err.errno not in (EPIPE, EINVAL): raise # in this case, read the standard output and standard error streams # directly, to get the error message(s) stdout, stderr = p.stdout.read(), p.stderr.read() p.wait() if p.returncode != 0: raise GraphvizError(__('dot exited with error:\n[stderr]\n%s\n' 
'[stdout]\n%s') % (stderr, stdout)) if not path.isfile(outfn): raise GraphvizError(__('dot did not produce an output file:\n[stderr]\n%s\n' '[stdout]\n%s') % (stderr, stdout)) return relfn, outfn
Render graphviz code into a PNG or PDF output file.
Below is the the instruction that describes the task: ### Input: Render graphviz code into a PNG or PDF output file. ### Response: def render_dot(self, code, options, format, prefix='graphviz'): # type: (nodes.NodeVisitor, unicode, Dict, unicode, unicode) -> Tuple[unicode, unicode] """Render graphviz code into a PNG or PDF output file.""" graphviz_dot = options.get('graphviz_dot', self.builder.config.graphviz_dot) hashkey = (code + str(options) + str(graphviz_dot) + str(self.builder.config.graphviz_dot_args)).encode('utf-8') fname = '%s-%s.%s' % (prefix, sha1(hashkey).hexdigest(), format) relfn = posixpath.join(self.builder.imgpath, fname) outfn = path.join(self.builder.outdir, self.builder.imagedir, fname) if path.isfile(outfn): return relfn, outfn if (hasattr(self.builder, '_graphviz_warned_dot') and self.builder._graphviz_warned_dot.get(graphviz_dot)): return None, None ensuredir(path.dirname(outfn)) # graphviz expects UTF-8 by default if isinstance(code, text_type): code = code.encode('utf-8') dot_args = [graphviz_dot] dot_args.extend(self.builder.config.graphviz_dot_args) dot_args.extend(['-T' + format, '-o' + outfn]) if format == 'png': dot_args.extend(['-Tcmapx', '-o%s.map' % outfn]) try: p = Popen(dot_args, stdout=PIPE, stdin=PIPE, stderr=PIPE) except OSError as err: if err.errno != ENOENT: # No such file or directory raise logger.warning(__('dot command %r cannot be run (needed for graphviz ' 'output), check the graphviz_dot setting'), graphviz_dot) if not hasattr(self.builder, '_graphviz_warned_dot'): self.builder._graphviz_warned_dot = {} self.builder._graphviz_warned_dot[graphviz_dot] = True return None, None try: # Graphviz may close standard input when an error occurs, # resulting in a broken pipe on communicate() stdout, stderr = p.communicate(code) except (OSError, IOError) as err: if err.errno not in (EPIPE, EINVAL): raise # in this case, read the standard output and standard error streams # directly, to get the error message(s) stdout, stderr = 
p.stdout.read(), p.stderr.read() p.wait() if p.returncode != 0: raise GraphvizError(__('dot exited with error:\n[stderr]\n%s\n' '[stdout]\n%s') % (stderr, stdout)) if not path.isfile(outfn): raise GraphvizError(__('dot did not produce an output file:\n[stderr]\n%s\n' '[stdout]\n%s') % (stderr, stdout)) return relfn, outfn
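render_dot() avoids re-running dot by keying the output file on a SHA-1 of everything that affects rendering: the graph source, the options, the dot binary, and the extra arguments. A sketch of that naming scheme (the helper name is illustrative):

```python
# Sketch of the caching scheme in render_dot(): the output filename embeds
# a SHA-1 over the graph source and every option that affects rendering,
# so an unchanged graph maps to an already-rendered file on disk.
from hashlib import sha1

def output_name(code, options, graphviz_dot, dot_args, fmt, prefix="graphviz"):
    hashkey = (code + str(options) + str(graphviz_dot)
               + str(dot_args)).encode("utf-8")
    return "%s-%s.%s" % (prefix, sha1(hashkey).hexdigest(), fmt)

fname = output_name("digraph { a -> b }", {}, "dot", [], "png")
```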
def regression_tikhonov(G, y, M, tau=0): r"""Solve a regression problem on graph via Tikhonov minimization. The function solves .. math:: \operatorname*{arg min}_x \| M x - y \|_2^2 + \tau \ x^T L x if :math:`\tau > 0`, and .. math:: \operatorname*{arg min}_x x^T L x \ \text{ s. t. } \ y = M x otherwise. Parameters ---------- G : :class:`pygsp.graphs.Graph` y : array, length G.n_vertices Measurements. M : array of boolean, length G.n_vertices Masking vector. tau : float Regularization parameter. Returns ------- x : array, length G.n_vertices Recovered values :math:`x`. Examples -------- >>> from pygsp import graphs, filters, learning >>> import matplotlib.pyplot as plt >>> >>> G = graphs.Sensor(N=100, seed=42) >>> G.estimate_lmax() Create a smooth ground truth signal: >>> filt = lambda x: 1 / (1 + 10*x) >>> filt = filters.Filter(G, filt) >>> rs = np.random.RandomState(42) >>> signal = filt.analyze(rs.normal(size=G.n_vertices)) Construct a measurement signal from a binary mask: >>> mask = rs.uniform(0, 1, G.n_vertices) > 0.5 >>> measures = signal.copy() >>> measures[~mask] = np.nan Solve the regression problem by reconstructing the signal: >>> recovery = learning.regression_tikhonov(G, measures, mask, tau=0) Plot the results: >>> fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(10, 3)) >>> limits = [signal.min(), signal.max()] >>> _ = G.plot_signal(signal, ax=ax1, limits=limits, title='Ground truth') >>> _ = G.plot_signal(measures, ax=ax2, limits=limits, title='Measures') >>> _ = G.plot_signal(recovery, ax=ax3, limits=limits, title='Recovery') >>> _ = fig.tight_layout() """ if tau > 0: y[M == False] = 0 if sparse.issparse(G.L): def Op(x): return (M * x.T).T + tau * (G.L.dot(x)) LinearOp = sparse.linalg.LinearOperator([G.N, G.N], Op) if y.ndim > 1: sol = np.empty(shape=y.shape) res = np.empty(shape=y.shape[1]) for i in range(y.shape[1]): sol[:, i], res[i] = sparse.linalg.cg( LinearOp, y[:, i]) else: sol, res = sparse.linalg.cg(LinearOp, y) # TODO: do 
something with the residual... return sol else: # Creating this matrix may be problematic in term of memory. # Consider using an operator instead... if type(G.L).__module__ == np.__name__: LinearOp = np.diag(M*1) + tau * G.L return np.linalg.solve(LinearOp, M * y) else: if np.prod(M.shape) != G.n_vertices: raise ValueError("M should be of size [G.n_vertices,]") indl = M indu = (M == False) Luu = G.L[indu, :][:, indu] Wul = - G.L[indu, :][:, indl] if sparse.issparse(G.L): sol_part = sparse.linalg.spsolve(Luu, Wul.dot(y[indl])) else: sol_part = np.linalg.solve(Luu, np.matmul(Wul, y[indl])) sol = y.copy() sol[indu] = sol_part return sol
r"""Solve a regression problem on graph via Tikhonov minimization. The function solves .. math:: \operatorname*{arg min}_x \| M x - y \|_2^2 + \tau \ x^T L x if :math:`\tau > 0`, and .. math:: \operatorname*{arg min}_x x^T L x \ \text{ s. t. } \ y = M x otherwise. Parameters ---------- G : :class:`pygsp.graphs.Graph` y : array, length G.n_vertices Measurements. M : array of boolean, length G.n_vertices Masking vector. tau : float Regularization parameter. Returns ------- x : array, length G.n_vertices Recovered values :math:`x`. Examples -------- >>> from pygsp import graphs, filters, learning >>> import matplotlib.pyplot as plt >>> >>> G = graphs.Sensor(N=100, seed=42) >>> G.estimate_lmax() Create a smooth ground truth signal: >>> filt = lambda x: 1 / (1 + 10*x) >>> filt = filters.Filter(G, filt) >>> rs = np.random.RandomState(42) >>> signal = filt.analyze(rs.normal(size=G.n_vertices)) Construct a measurement signal from a binary mask: >>> mask = rs.uniform(0, 1, G.n_vertices) > 0.5 >>> measures = signal.copy() >>> measures[~mask] = np.nan Solve the regression problem by reconstructing the signal: >>> recovery = learning.regression_tikhonov(G, measures, mask, tau=0) Plot the results: >>> fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(10, 3)) >>> limits = [signal.min(), signal.max()] >>> _ = G.plot_signal(signal, ax=ax1, limits=limits, title='Ground truth') >>> _ = G.plot_signal(measures, ax=ax2, limits=limits, title='Measures') >>> _ = G.plot_signal(recovery, ax=ax3, limits=limits, title='Recovery') >>> _ = fig.tight_layout()
Below is the the instruction that describes the task: ### Input: r"""Solve a regression problem on graph via Tikhonov minimization. The function solves .. math:: \operatorname*{arg min}_x \| M x - y \|_2^2 + \tau \ x^T L x if :math:`\tau > 0`, and .. math:: \operatorname*{arg min}_x x^T L x \ \text{ s. t. } \ y = M x otherwise. Parameters ---------- G : :class:`pygsp.graphs.Graph` y : array, length G.n_vertices Measurements. M : array of boolean, length G.n_vertices Masking vector. tau : float Regularization parameter. Returns ------- x : array, length G.n_vertices Recovered values :math:`x`. Examples -------- >>> from pygsp import graphs, filters, learning >>> import matplotlib.pyplot as plt >>> >>> G = graphs.Sensor(N=100, seed=42) >>> G.estimate_lmax() Create a smooth ground truth signal: >>> filt = lambda x: 1 / (1 + 10*x) >>> filt = filters.Filter(G, filt) >>> rs = np.random.RandomState(42) >>> signal = filt.analyze(rs.normal(size=G.n_vertices)) Construct a measurement signal from a binary mask: >>> mask = rs.uniform(0, 1, G.n_vertices) > 0.5 >>> measures = signal.copy() >>> measures[~mask] = np.nan Solve the regression problem by reconstructing the signal: >>> recovery = learning.regression_tikhonov(G, measures, mask, tau=0) Plot the results: >>> fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(10, 3)) >>> limits = [signal.min(), signal.max()] >>> _ = G.plot_signal(signal, ax=ax1, limits=limits, title='Ground truth') >>> _ = G.plot_signal(measures, ax=ax2, limits=limits, title='Measures') >>> _ = G.plot_signal(recovery, ax=ax3, limits=limits, title='Recovery') >>> _ = fig.tight_layout() ### Response: def regression_tikhonov(G, y, M, tau=0): r"""Solve a regression problem on graph via Tikhonov minimization. The function solves .. math:: \operatorname*{arg min}_x \| M x - y \|_2^2 + \tau \ x^T L x if :math:`\tau > 0`, and .. math:: \operatorname*{arg min}_x x^T L x \ \text{ s. t. } \ y = M x otherwise. 
Parameters ---------- G : :class:`pygsp.graphs.Graph` y : array, length G.n_vertices Measurements. M : array of boolean, length G.n_vertices Masking vector. tau : float Regularization parameter. Returns ------- x : array, length G.n_vertices Recovered values :math:`x`. Examples -------- >>> from pygsp import graphs, filters, learning >>> import matplotlib.pyplot as plt >>> >>> G = graphs.Sensor(N=100, seed=42) >>> G.estimate_lmax() Create a smooth ground truth signal: >>> filt = lambda x: 1 / (1 + 10*x) >>> filt = filters.Filter(G, filt) >>> rs = np.random.RandomState(42) >>> signal = filt.analyze(rs.normal(size=G.n_vertices)) Construct a measurement signal from a binary mask: >>> mask = rs.uniform(0, 1, G.n_vertices) > 0.5 >>> measures = signal.copy() >>> measures[~mask] = np.nan Solve the regression problem by reconstructing the signal: >>> recovery = learning.regression_tikhonov(G, measures, mask, tau=0) Plot the results: >>> fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(10, 3)) >>> limits = [signal.min(), signal.max()] >>> _ = G.plot_signal(signal, ax=ax1, limits=limits, title='Ground truth') >>> _ = G.plot_signal(measures, ax=ax2, limits=limits, title='Measures') >>> _ = G.plot_signal(recovery, ax=ax3, limits=limits, title='Recovery') >>> _ = fig.tight_layout() """ if tau > 0: y[M == False] = 0 if sparse.issparse(G.L): def Op(x): return (M * x.T).T + tau * (G.L.dot(x)) LinearOp = sparse.linalg.LinearOperator([G.N, G.N], Op) if y.ndim > 1: sol = np.empty(shape=y.shape) res = np.empty(shape=y.shape[1]) for i in range(y.shape[1]): sol[:, i], res[i] = sparse.linalg.cg( LinearOp, y[:, i]) else: sol, res = sparse.linalg.cg(LinearOp, y) # TODO: do something with the residual... return sol else: # Creating this matrix may be problematic in term of memory. # Consider using an operator instead... 
if type(G.L).__module__ == np.__name__: LinearOp = np.diag(M*1) + tau * G.L return np.linalg.solve(LinearOp, M * y) else: if np.prod(M.shape) != G.n_vertices: raise ValueError("M should be of size [G.n_vertices,]") indl = M indu = (M == False) Luu = G.L[indu, :][:, indu] Wul = - G.L[indu, :][:, indl] if sparse.issparse(G.L): sol_part = sparse.linalg.spsolve(Luu, Wul.dot(y[indl])) else: sol_part = np.linalg.solve(Luu, np.matmul(Wul, y[indl])) sol = y.copy() sol[indu] = sol_part return sol
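The `tau = 0` branch of `regression_tikhonov` above reduces to harmonic interpolation: the unknown vertices solve `Luu x_u = Wul y_l`. Below is a minimal pure-Python sketch of that step on a 3-node path graph, avoiding the scipy/numpy dependencies of the real function; the helper name is made up for illustration.

```python
# Sketch of the tau = 0 branch of regression_tikhonov on a 3-node path
# graph. The graph Laplacian is
#   L = [[ 1, -1,  0],
#        [-1,  2, -1],
#        [ 0, -1,  1]]
# With nodes 0 and 2 measured and node 1 unknown, the constrained problem
#   argmin_x x^T L x  s.t.  y = M x
# reduces to solving  Luu * x_u = Wul @ y_l,  i.e.  2 * x1 = y0 + y2.

def tikhonov_interpolate_path3(y0, y2):
    """Recover the middle value of a 3-node path from its endpoints."""
    L_uu = 2.0          # Laplacian restricted to the unknown node
    w_ul = (1.0, 1.0)   # -L[unknown, labelled]: edge weights to neighbours
    rhs = w_ul[0] * y0 + w_ul[1] * y2
    return rhs / L_uu

recovered = tikhonov_interpolate_path3(1.0, 3.0)
# The harmonic solution on a path is linear interpolation: (1 + 3) / 2 = 2.
```

On larger graphs the same system is what `sparse.linalg.spsolve(Luu, Wul.dot(y[indl]))` solves in the function above.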
def ycoord(self): """The y coordinate :class:`xarray.Variable`""" return self.decoder.get_y(self.data, coords=self.data.coords)
The y coordinate :class:`xarray.Variable`
Below is the instruction that describes the task:
### Input:
The y coordinate :class:`xarray.Variable`
### Response:
def ycoord(self):
    """The y coordinate :class:`xarray.Variable`"""
    return self.decoder.get_y(self.data, coords=self.data.coords)

def update_job(JobId=None, RoleARN=None, Notification=None, Resources=None, AddressId=None, ShippingOption=None, Description=None, SnowballCapacityPreference=None, ForwardingAddressId=None): """ While a job's JobState value is New , you can update some of the information associated with a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. See also: AWS API Documentation Examples This action allows you to update certain parameters for a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. Expected Output: :example: response = client.update_job( JobId='string', RoleARN='string', Notification={ 'SnsTopicARN': 'string', 'JobStatesToNotify': [ 'New'|'PreparingAppliance'|'PreparingShipment'|'InTransitToCustomer'|'WithCustomer'|'InTransitToAWS'|'WithAWS'|'InProgress'|'Complete'|'Cancelled'|'Listing'|'Pending', ], 'NotifyAll': True|False }, Resources={ 'S3Resources': [ { 'BucketArn': 'string', 'KeyRange': { 'BeginMarker': 'string', 'EndMarker': 'string' } }, ], 'LambdaResources': [ { 'LambdaArn': 'string', 'EventTriggers': [ { 'EventResourceARN': 'string' }, ] }, ] }, AddressId='string', ShippingOption='SECOND_DAY'|'NEXT_DAY'|'EXPRESS'|'STANDARD', Description='string', SnowballCapacityPreference='T50'|'T80'|'T100'|'NoPreference', ForwardingAddressId='string' ) :type JobId: string :param JobId: [REQUIRED] The job ID of the job that you want to update, for example JID123e4567-e89b-12d3-a456-426655440000 . :type RoleARN: string :param RoleARN: The new role Amazon Resource Name (ARN) that you want to associate with this job. To create a role ARN, use the CreateRole AWS Identity and Access Management (IAM) API action. :type Notification: dict :param Notification: The new or updated Notification object. SnsTopicARN (string) --The new SNS TopicArn that you want to associate with this job. 
You can create Amazon Resource Names (ARNs) for topics by using the CreateTopic Amazon SNS API action. You can subscribe email addresses to an Amazon SNS topic through the AWS Management Console, or by using the Subscribe AWS Simple Notification Service (SNS) API action. JobStatesToNotify (list) --The list of job states that will trigger a notification for this job. (string) -- NotifyAll (boolean) --Any change in job state will trigger a notification for this job. :type Resources: dict :param Resources: The updated S3Resource object (for a single Amazon S3 bucket or key range), or the updated JobResource object (for multiple buckets or key ranges). S3Resources (list) --An array of S3Resource objects. (dict) --Each S3Resource object represents an Amazon S3 bucket that your transferred data will be exported from or imported into. For export jobs, this object can have an optional KeyRange value. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BucketArn (string) --The Amazon Resource Name (ARN) of an Amazon S3 bucket. KeyRange (dict) --For export jobs, you can provide an optional KeyRange within a specific Amazon S3 bucket. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BeginMarker (string) --The key that starts an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. EndMarker (string) --The key that ends an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. LambdaResources (list) --The Python-language Lambda functions for this job. (dict) --Identifies LambdaArn (string) --An Amazon Resource Name (ARN) that represents an AWS Lambda function to be triggered by PUT object actions on the associated local Amazon S3 resource. 
EventTriggers (list) --The array of ARNs for S3Resource objects to trigger the LambdaResource objects associated with this job. (dict) --The container for the EventTriggerDefinition$EventResourceARN . EventResourceARN (string) --The Amazon Resource Name (ARN) for any local Amazon S3 resource that is an AWS Lambda function's event trigger associated with this job. :type AddressId: string :param AddressId: The ID of the updated Address object. :type ShippingOption: string :param ShippingOption: The updated shipping option value of this job's ShippingDetails object. :type Description: string :param Description: The updated description of this job's JobMetadata object. :type SnowballCapacityPreference: string :param SnowballCapacityPreference: The updated SnowballCapacityPreference of this job's JobMetadata object. The 50 TB Snowballs are only available in the US regions. :type ForwardingAddressId: string :param ForwardingAddressId: The updated ID for the forwarding address for a job. This field is not supported in most regions. :rtype: dict :return: {} :returns: (dict) -- """ pass
While a job's JobState value is New , you can update some of the information associated with a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. See also: AWS API Documentation Examples This action allows you to update certain parameters for a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. Expected Output: :example: response = client.update_job( JobId='string', RoleARN='string', Notification={ 'SnsTopicARN': 'string', 'JobStatesToNotify': [ 'New'|'PreparingAppliance'|'PreparingShipment'|'InTransitToCustomer'|'WithCustomer'|'InTransitToAWS'|'WithAWS'|'InProgress'|'Complete'|'Cancelled'|'Listing'|'Pending', ], 'NotifyAll': True|False }, Resources={ 'S3Resources': [ { 'BucketArn': 'string', 'KeyRange': { 'BeginMarker': 'string', 'EndMarker': 'string' } }, ], 'LambdaResources': [ { 'LambdaArn': 'string', 'EventTriggers': [ { 'EventResourceARN': 'string' }, ] }, ] }, AddressId='string', ShippingOption='SECOND_DAY'|'NEXT_DAY'|'EXPRESS'|'STANDARD', Description='string', SnowballCapacityPreference='T50'|'T80'|'T100'|'NoPreference', ForwardingAddressId='string' ) :type JobId: string :param JobId: [REQUIRED] The job ID of the job that you want to update, for example JID123e4567-e89b-12d3-a456-426655440000 . :type RoleARN: string :param RoleARN: The new role Amazon Resource Name (ARN) that you want to associate with this job. To create a role ARN, use the CreateRole AWS Identity and Access Management (IAM) API action. :type Notification: dict :param Notification: The new or updated Notification object. SnsTopicARN (string) --The new SNS TopicArn that you want to associate with this job. You can create Amazon Resource Names (ARNs) for topics by using the CreateTopic Amazon SNS API action. 
You can subscribe email addresses to an Amazon SNS topic through the AWS Management Console, or by using the Subscribe AWS Simple Notification Service (SNS) API action. JobStatesToNotify (list) --The list of job states that will trigger a notification for this job. (string) -- NotifyAll (boolean) --Any change in job state will trigger a notification for this job. :type Resources: dict :param Resources: The updated S3Resource object (for a single Amazon S3 bucket or key range), or the updated JobResource object (for multiple buckets or key ranges). S3Resources (list) --An array of S3Resource objects. (dict) --Each S3Resource object represents an Amazon S3 bucket that your transferred data will be exported from or imported into. For export jobs, this object can have an optional KeyRange value. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BucketArn (string) --The Amazon Resource Name (ARN) of an Amazon S3 bucket. KeyRange (dict) --For export jobs, you can provide an optional KeyRange within a specific Amazon S3 bucket. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BeginMarker (string) --The key that starts an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. EndMarker (string) --The key that ends an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. LambdaResources (list) --The Python-language Lambda functions for this job. (dict) --Identifies LambdaArn (string) --An Amazon Resource Name (ARN) that represents an AWS Lambda function to be triggered by PUT object actions on the associated local Amazon S3 resource. EventTriggers (list) --The array of ARNs for S3Resource objects to trigger the LambdaResource objects associated with this job. 
(dict) --The container for the EventTriggerDefinition$EventResourceARN . EventResourceARN (string) --The Amazon Resource Name (ARN) for any local Amazon S3 resource that is an AWS Lambda function's event trigger associated with this job. :type AddressId: string :param AddressId: The ID of the updated Address object. :type ShippingOption: string :param ShippingOption: The updated shipping option value of this job's ShippingDetails object. :type Description: string :param Description: The updated description of this job's JobMetadata object. :type SnowballCapacityPreference: string :param SnowballCapacityPreference: The updated SnowballCapacityPreference of this job's JobMetadata object. The 50 TB Snowballs are only available in the US regions. :type ForwardingAddressId: string :param ForwardingAddressId: The updated ID for the forwarding address for a job. This field is not supported in most regions. :rtype: dict :return: {} :returns: (dict) --
Below is the the instruction that describes the task: ### Input: While a job's JobState value is New , you can update some of the information associated with a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. See also: AWS API Documentation Examples This action allows you to update certain parameters for a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. Expected Output: :example: response = client.update_job( JobId='string', RoleARN='string', Notification={ 'SnsTopicARN': 'string', 'JobStatesToNotify': [ 'New'|'PreparingAppliance'|'PreparingShipment'|'InTransitToCustomer'|'WithCustomer'|'InTransitToAWS'|'WithAWS'|'InProgress'|'Complete'|'Cancelled'|'Listing'|'Pending', ], 'NotifyAll': True|False }, Resources={ 'S3Resources': [ { 'BucketArn': 'string', 'KeyRange': { 'BeginMarker': 'string', 'EndMarker': 'string' } }, ], 'LambdaResources': [ { 'LambdaArn': 'string', 'EventTriggers': [ { 'EventResourceARN': 'string' }, ] }, ] }, AddressId='string', ShippingOption='SECOND_DAY'|'NEXT_DAY'|'EXPRESS'|'STANDARD', Description='string', SnowballCapacityPreference='T50'|'T80'|'T100'|'NoPreference', ForwardingAddressId='string' ) :type JobId: string :param JobId: [REQUIRED] The job ID of the job that you want to update, for example JID123e4567-e89b-12d3-a456-426655440000 . :type RoleARN: string :param RoleARN: The new role Amazon Resource Name (ARN) that you want to associate with this job. To create a role ARN, use the CreateRole AWS Identity and Access Management (IAM) API action. :type Notification: dict :param Notification: The new or updated Notification object. SnsTopicARN (string) --The new SNS TopicArn that you want to associate with this job. You can create Amazon Resource Names (ARNs) for topics by using the CreateTopic Amazon SNS API action. 
You can subscribe email addresses to an Amazon SNS topic through the AWS Management Console, or by using the Subscribe AWS Simple Notification Service (SNS) API action. JobStatesToNotify (list) --The list of job states that will trigger a notification for this job. (string) -- NotifyAll (boolean) --Any change in job state will trigger a notification for this job. :type Resources: dict :param Resources: The updated S3Resource object (for a single Amazon S3 bucket or key range), or the updated JobResource object (for multiple buckets or key ranges). S3Resources (list) --An array of S3Resource objects. (dict) --Each S3Resource object represents an Amazon S3 bucket that your transferred data will be exported from or imported into. For export jobs, this object can have an optional KeyRange value. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BucketArn (string) --The Amazon Resource Name (ARN) of an Amazon S3 bucket. KeyRange (dict) --For export jobs, you can provide an optional KeyRange within a specific Amazon S3 bucket. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BeginMarker (string) --The key that starts an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. EndMarker (string) --The key that ends an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. LambdaResources (list) --The Python-language Lambda functions for this job. (dict) --Identifies LambdaArn (string) --An Amazon Resource Name (ARN) that represents an AWS Lambda function to be triggered by PUT object actions on the associated local Amazon S3 resource. EventTriggers (list) --The array of ARNs for S3Resource objects to trigger the LambdaResource objects associated with this job. 
(dict) --The container for the EventTriggerDefinition$EventResourceARN . EventResourceARN (string) --The Amazon Resource Name (ARN) for any local Amazon S3 resource that is an AWS Lambda function's event trigger associated with this job. :type AddressId: string :param AddressId: The ID of the updated Address object. :type ShippingOption: string :param ShippingOption: The updated shipping option value of this job's ShippingDetails object. :type Description: string :param Description: The updated description of this job's JobMetadata object. :type SnowballCapacityPreference: string :param SnowballCapacityPreference: The updated SnowballCapacityPreference of this job's JobMetadata object. The 50 TB Snowballs are only available in the US regions. :type ForwardingAddressId: string :param ForwardingAddressId: The updated ID for the forwarding address for a job. This field is not supported in most regions. :rtype: dict :return: {} :returns: (dict) -- ### Response: def update_job(JobId=None, RoleARN=None, Notification=None, Resources=None, AddressId=None, ShippingOption=None, Description=None, SnowballCapacityPreference=None, ForwardingAddressId=None): """ While a job's JobState value is New , you can update some of the information associated with a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. See also: AWS API Documentation Examples This action allows you to update certain parameters for a job. Once the job changes to a different job state, usually within 60 minutes of the job being created, this action is no longer available. 
Expected Output: :example: response = client.update_job( JobId='string', RoleARN='string', Notification={ 'SnsTopicARN': 'string', 'JobStatesToNotify': [ 'New'|'PreparingAppliance'|'PreparingShipment'|'InTransitToCustomer'|'WithCustomer'|'InTransitToAWS'|'WithAWS'|'InProgress'|'Complete'|'Cancelled'|'Listing'|'Pending', ], 'NotifyAll': True|False }, Resources={ 'S3Resources': [ { 'BucketArn': 'string', 'KeyRange': { 'BeginMarker': 'string', 'EndMarker': 'string' } }, ], 'LambdaResources': [ { 'LambdaArn': 'string', 'EventTriggers': [ { 'EventResourceARN': 'string' }, ] }, ] }, AddressId='string', ShippingOption='SECOND_DAY'|'NEXT_DAY'|'EXPRESS'|'STANDARD', Description='string', SnowballCapacityPreference='T50'|'T80'|'T100'|'NoPreference', ForwardingAddressId='string' ) :type JobId: string :param JobId: [REQUIRED] The job ID of the job that you want to update, for example JID123e4567-e89b-12d3-a456-426655440000 . :type RoleARN: string :param RoleARN: The new role Amazon Resource Name (ARN) that you want to associate with this job. To create a role ARN, use the CreateRole AWS Identity and Access Management (IAM) API action. :type Notification: dict :param Notification: The new or updated Notification object. SnsTopicARN (string) --The new SNS TopicArn that you want to associate with this job. You can create Amazon Resource Names (ARNs) for topics by using the CreateTopic Amazon SNS API action. You can subscribe email addresses to an Amazon SNS topic through the AWS Management Console, or by using the Subscribe AWS Simple Notification Service (SNS) API action. JobStatesToNotify (list) --The list of job states that will trigger a notification for this job. (string) -- NotifyAll (boolean) --Any change in job state will trigger a notification for this job. :type Resources: dict :param Resources: The updated S3Resource object (for a single Amazon S3 bucket or key range), or the updated JobResource object (for multiple buckets or key ranges). 
S3Resources (list) --An array of S3Resource objects. (dict) --Each S3Resource object represents an Amazon S3 bucket that your transferred data will be exported from or imported into. For export jobs, this object can have an optional KeyRange value. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BucketArn (string) --The Amazon Resource Name (ARN) of an Amazon S3 bucket. KeyRange (dict) --For export jobs, you can provide an optional KeyRange within a specific Amazon S3 bucket. The length of the range is defined at job creation, and has either an inclusive BeginMarker , an inclusive EndMarker , or both. Ranges are UTF-8 binary sorted. BeginMarker (string) --The key that starts an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. EndMarker (string) --The key that ends an optional key range for an export job. Ranges are inclusive and UTF-8 binary sorted. LambdaResources (list) --The Python-language Lambda functions for this job. (dict) --Identifies LambdaArn (string) --An Amazon Resource Name (ARN) that represents an AWS Lambda function to be triggered by PUT object actions on the associated local Amazon S3 resource. EventTriggers (list) --The array of ARNs for S3Resource objects to trigger the LambdaResource objects associated with this job. (dict) --The container for the EventTriggerDefinition$EventResourceARN . EventResourceARN (string) --The Amazon Resource Name (ARN) for any local Amazon S3 resource that is an AWS Lambda function's event trigger associated with this job. :type AddressId: string :param AddressId: The ID of the updated Address object. :type ShippingOption: string :param ShippingOption: The updated shipping option value of this job's ShippingDetails object. :type Description: string :param Description: The updated description of this job's JobMetadata object. 
:type SnowballCapacityPreference: string :param SnowballCapacityPreference: The updated SnowballCapacityPreference of this job's JobMetadata object. The 50 TB Snowballs are only available in the US regions. :type ForwardingAddressId: string :param ForwardingAddressId: The updated ID for the forwarding address for a job. This field is not supported in most regions. :rtype: dict :return: {} :returns: (dict) -- """ pass
def enforce_csrf(self, request): """ Enforce CSRF validation for session based authentication. """ reason = CSRFCheck().process_view(request, None, (), {}) if reason: # CSRF failed, bail with explicit error message raise exceptions.PermissionDenied('CSRF Failed: %s' % reason)
Enforce CSRF validation for session based authentication.
Below is the instruction that describes the task:
### Input:
Enforce CSRF validation for session based authentication.
### Response:
def enforce_csrf(self, request):
    """
    Enforce CSRF validation for session based authentication.
    """
    reason = CSRFCheck().process_view(request, None, (), {})
    if reason:
        # CSRF failed, bail with explicit error message
        raise exceptions.PermissionDenied('CSRF Failed: %s' % reason)
def _condition_as_sql(self, qn, connection): ''' Return sql for condition. ''' def escape(value): if isinstance(value, bool): value = str(int(value)) if isinstance(value, six.string_types): # Escape params used with LIKE if '%' in value: value = value.replace('%', '%%') # Escape single quotes if "'" in value: value = value.replace("'", "''") # Add single quote to text values value = "'" + value + "'" return value sql, param = self.condition.query.where.as_sql(qn, connection) param = map(escape, param) return sql % tuple(param)
Return sql for condition.
Below is the instruction that describes the task:
### Input:
Return sql for condition.
### Response:
def _condition_as_sql(self, qn, connection):
    '''
    Return sql for condition.
    '''
    def escape(value):
        if isinstance(value, bool):
            value = str(int(value))
        if isinstance(value, six.string_types):
            # Escape params used with LIKE
            if '%' in value:
                value = value.replace('%', '%%')
            # Escape single quotes
            if "'" in value:
                value = value.replace("'", "''")
            # Add single quote to text values
            value = "'" + value + "'"
        return value

    sql, param = self.condition.query.where.as_sql(qn, connection)
    param = map(escape, param)

    return sql % tuple(param)
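The inner `escape` helper above handles three cases: booleans become `'0'`/`'1'`, strings get their `%` and `'` characters escaped and are quoted, and everything else passes through unchanged. A standalone copy so the behaviour is visible directly — note it substitutes plain `str` for `six.string_types`, i.e. it assumes Python 3:

```python
def escape(value):
    # Booleans first: bool is converted to the string '0' or '1',
    # which then falls through to the string branch below and gets quoted.
    if isinstance(value, bool):
        value = str(int(value))
    if isinstance(value, str):  # six.string_types in the original
        if '%' in value:                  # escape LIKE wildcards for %-interpolation
            value = value.replace('%', '%%')
        if "'" in value:                  # escape embedded single quotes
            value = value.replace("'", "''")
        value = "'" + value + "'"         # quote text values
    return value

escape(True)        # -> "'1'"
escape("O'Brien")   # -> "'O''Brien'"
escape("50%")       # -> "'50%%'"
escape(7)           # -> 7 (non-strings pass through unchanged)
```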
def __parse_identities(self, json): """Parse identities using Eclipse format. The Eclipse identities format is a JSON document under the "commiters" key. The document should follow the next schema: { 'committers' : { 'john': { 'affiliations': { '1': { 'active': '2001-01-01', 'inactive': null, 'name': 'Organization 1' } }, 'email': [ 'john@example.com' ], 'first': 'John', 'id': 'john', 'last': 'Doe', 'primary': 'john.doe@example.com' } } } :parse json: JSON object to parse :raise InvalidFormatError: raised when the format of the JSON is not valid. """ try: for committer in json['committers'].values(): name = self.__encode(committer['first'] + ' ' + committer['last']) email = self.__encode(committer['primary']) username = self.__encode(committer['id']) uuid = username uid = UniqueIdentity(uuid=uuid) identity = Identity(name=name, email=email, username=username, source=self.source, uuid=uuid) uid.identities.append(identity) if 'email' in committer: for alt_email in committer['email']: alt_email = self.__encode(alt_email) if alt_email == email: continue identity = Identity(name=name, email=alt_email, username=username, source=self.source, uuid=uuid) uid.identities.append(identity) if 'affiliations' in committer: enrollments = self.__parse_affiliations_json(committer['affiliations'], uuid) for rol in enrollments: uid.enrollments.append(rol) self._identities[uuid] = uid except KeyError as e: msg = "invalid json format. Attribute %s not found" % e.args raise InvalidFormatError(cause=msg)
Parse identities using Eclipse format. The Eclipse identities format is a JSON document under the "commiters" key. The document should follow the next schema: { 'committers' : { 'john': { 'affiliations': { '1': { 'active': '2001-01-01', 'inactive': null, 'name': 'Organization 1' } }, 'email': [ 'john@example.com' ], 'first': 'John', 'id': 'john', 'last': 'Doe', 'primary': 'john.doe@example.com' } } } :parse json: JSON object to parse :raise InvalidFormatError: raised when the format of the JSON is not valid.
Below is the the instruction that describes the task: ### Input: Parse identities using Eclipse format. The Eclipse identities format is a JSON document under the "commiters" key. The document should follow the next schema: { 'committers' : { 'john': { 'affiliations': { '1': { 'active': '2001-01-01', 'inactive': null, 'name': 'Organization 1' } }, 'email': [ 'john@example.com' ], 'first': 'John', 'id': 'john', 'last': 'Doe', 'primary': 'john.doe@example.com' } } } :parse json: JSON object to parse :raise InvalidFormatError: raised when the format of the JSON is not valid. ### Response: def __parse_identities(self, json): """Parse identities using Eclipse format. The Eclipse identities format is a JSON document under the "commiters" key. The document should follow the next schema: { 'committers' : { 'john': { 'affiliations': { '1': { 'active': '2001-01-01', 'inactive': null, 'name': 'Organization 1' } }, 'email': [ 'john@example.com' ], 'first': 'John', 'id': 'john', 'last': 'Doe', 'primary': 'john.doe@example.com' } } } :parse json: JSON object to parse :raise InvalidFormatError: raised when the format of the JSON is not valid. 
""" try: for committer in json['committers'].values(): name = self.__encode(committer['first'] + ' ' + committer['last']) email = self.__encode(committer['primary']) username = self.__encode(committer['id']) uuid = username uid = UniqueIdentity(uuid=uuid) identity = Identity(name=name, email=email, username=username, source=self.source, uuid=uuid) uid.identities.append(identity) if 'email' in committer: for alt_email in committer['email']: alt_email = self.__encode(alt_email) if alt_email == email: continue identity = Identity(name=name, email=alt_email, username=username, source=self.source, uuid=uuid) uid.identities.append(identity) if 'affiliations' in committer: enrollments = self.__parse_affiliations_json(committer['affiliations'], uuid) for rol in enrollments: uid.enrollments.append(rol) self._identities[uuid] = uid except KeyError as e: msg = "invalid json format. Attribute %s not found" % e.args raise InvalidFormatError(cause=msg)
def transcodeImage(self, media, height, width, opacity=100, saturation=100):
    """ Returns the URL for a transcoded image from the specified media object.
        Returns None if no media specified (needed if user tries to pass thumb
        or art directly).

        Parameters:
            height (int): Height to transcode the image to.
            width (int): Width to transcode the image to.
            opacity (int): Opacity of the resulting image (possibly deprecated).
            saturation (int): Saturation of the resulting image.
    """
    if media:
        transcode_url = '/photo/:/transcode?height=%s&width=%s&opacity=%s&saturation=%s&url=%s' % (
            height, width, opacity, saturation, media)
        return self.url(transcode_url, includeToken=True)
Returns the URL for a transcoded image from the specified media object.
Returns None if no media specified (needed if user tries to pass thumb or art directly).

Parameters:
    height (int): Height to transcode the image to.
    width (int): Width to transcode the image to.
    opacity (int): Opacity of the resulting image (possibly deprecated).
    saturation (int): Saturation of the resulting image.
Below is the instruction that describes the task:
### Input:
Returns the URL for a transcoded image from the specified media object.
Returns None if no media specified (needed if user tries to pass thumb or art directly).

Parameters:
    height (int): Height to transcode the image to.
    width (int): Width to transcode the image to.
    opacity (int): Opacity of the resulting image (possibly deprecated).
    saturation (int): Saturation of the resulting image.
### Response:
def transcodeImage(self, media, height, width, opacity=100, saturation=100):
    """ Returns the URL for a transcoded image from the specified media object.
        Returns None if no media specified (needed if user tries to pass thumb
        or art directly).

        Parameters:
            height (int): Height to transcode the image to.
            width (int): Width to transcode the image to.
            opacity (int): Opacity of the resulting image (possibly deprecated).
            saturation (int): Saturation of the resulting image.
    """
    if media:
        transcode_url = '/photo/:/transcode?height=%s&width=%s&opacity=%s&saturation=%s&url=%s' % (
            height, width, opacity, saturation, media)
        return self.url(transcode_url, includeToken=True)
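The URL construction in `transcodeImage` is plain `%`-formatting of the transcode endpoint. A sketch that reduces the method to that step, with the server base URL and token handling (which the real method delegates to `self.url()`) left out; the helper name is invented for illustration:

```python
# Build the relative transcode URL the way transcodeImage above does,
# returning None when no media path is given.
def transcode_image_url(media, height, width, opacity=100, saturation=100):
    if not media:
        return None   # mirrors the implicit None for missing media
    return ('/photo/:/transcode?height=%s&width=%s'
            '&opacity=%s&saturation=%s&url=%s'
            % (height, width, opacity, saturation, media))

transcode_image_url('/library/metadata/1/thumb', 100, 100)
# -> '/photo/:/transcode?height=100&width=100&opacity=100&saturation=100&url=/library/metadata/1/thumb'
```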
def create_server(initialize=True): """Create a server""" with provider() as p: host_string = p.create_server() if initialize: env.host_string = host_string initialize_server()
Create a server
Below is the instruction that describes the task:
### Input:
Create a server
### Response:
def create_server(initialize=True):
    """Create a server"""
    with provider() as p:
        host_string = p.create_server()
    if initialize:
        env.host_string = host_string
        initialize_server()
def accordions(self, form):
    """
    return the children of the given form in a dict allowing to render
    them in accordions with a grid layout

    :param form: the form object
    """
    fixed = []
    accordions = OrderedDict()
    for child in form.children:
        section = getattr(child.schema, 'section', '')
        if not section:
            fixed.append(child)
        else:
            if section not in accordions.keys():
                accordions[section] = {
                    'tag_id': random_tag_id(),
                    'children': [],
                    'name': section,
                    "error": False,
                }
            if child.error:
                accordions[section]['error'] = True
            accordions[section]['children'].append(child)

    grids = getattr(self, "grids", {})
    named_grids = getattr(self, 'named_grids', {})
    if grids != {}:
        method = self._childgroup
    elif named_grids != {}:
        method = self._childgroup_by_name
        grids = named_grids
    else:
        warnings.warn(u"Missing both grids and named_grids argument")

    for accordion in accordions.values():
        name = accordion['name']
        grid = grids.get(name)
        if grid is not None:
            children = accordion.pop('children')
            accordion['rows'] = method(children, grid)
    return fixed, accordions
return the children of the given form in a dict allowing them to be rendered in accordions with a grid layout :param form: the form object
Below is the instruction that describes the task: ### Input: return the children of the given form in a dict allowing them to be rendered in accordions with a grid layout :param form: the form object ### Response: def accordions(self, form): """ return the children of the given form in a dict allowing them to be rendered in accordions with a grid layout :param form: the form object """ fixed = [] accordions = OrderedDict() for child in form.children: section = getattr(child.schema, 'section', '') if not section: fixed.append(child) else: if section not in accordions.keys(): accordions[section] = { 'tag_id': random_tag_id(), 'children': [], 'name': section, "error": False, } if child.error: accordions[section]['error'] = True accordions[section]['children'].append(child) grids = getattr(self, "grids", {}) named_grids = getattr(self, 'named_grids', {}) if grids != {}: method = self._childgroup elif named_grids != {}: method = self._childgroup_by_name grids = named_grids else: warnings.warn(u"Missing both grids and named_grids argument") for accordion in accordions.values(): name = accordion['name'] grid = grids.get(name) if grid is not None: children = accordion.pop('children') accordion['rows'] = method(children, grid) return fixed, accordions
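The core of the `accordions` record above is a section-grouping pass: children with no section go into a flat `fixed` list, and the rest are bucketed by section name in insertion order. This is a minimal sketch of just that pass, using plain dicts in place of form children; the child names and section label below are illustrative, not from the source.

```python
from collections import OrderedDict

# Minimal sketch of the section-grouping pass from `accordions` above.
def group_by_section(children):
    """Split children into a flat list and per-section buckets."""
    fixed, sections = [], OrderedDict()
    for child in children:
        section = child.get('section', '')
        if not section:
            fixed.append(child)          # no section -> rendered outside accordions
        else:
            sections.setdefault(section, []).append(child)
    return fixed, sections

# Illustrative children (placeholders, not from the source).
children = [{'name': 'a'},
            {'name': 'b', 'section': 'Dates'},
            {'name': 'c', 'section': 'Dates'}]
fixed, sections = group_by_section(children)
```

The original additionally tracks per-section error flags, tag ids, and grid rows; those depend on the form library and are left out of the sketch.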
def add(self, name, interface, inputs=None, outputs=None, requirements=None, wall_time=None, annotations=None, **kwargs): """ Adds a processing Node to the pipeline Parameters ---------- name : str Name for the node interface : nipype.Interface The interface to use for the node inputs : dict[str, (str, FileFormat) | (Node, str)] Connections from inputs of the pipeline and outputs of other nodes to inputs of node. The keys of the dictionary are the field names and the values are 2-tuples containing either the name of the data spec and the data format it is expected in for pipeline inputs or the sending Node and the name of an output of the sending Node. Note that pipeline inputs can be specified outside this method using the 'connect_input' method and connections between nodes with the 'connect' method. outputs : dict[str, (str, FileFormat)] Connections to outputs of the pipeline from fields of the interface. The keys of the dictionary are the names of the data specs that will be written to and the values are the interface field name and the data format it is produced in. Note that output connections can also be specified using the 'connect_output' method. requirements : list(Requirement) List of required packages needed for the node to run (default: []) wall_time : float Time required to execute the node in minutes (default: 1) mem_gb : int Required memory for the node in GB n_procs : int Preferred number of threads to run the node on (default: 1) annotations : dict[str, *] Additional annotations to add to the node, which may be used by the Processor node to optimise execution (e.g. 'gpu': True) iterfield : str Name of field to be passed an iterable to iterate over. If present, a MapNode will be created instead of a regular node joinsource : str Name of iterator field to join. Typically one of the implicit iterators (i.e. Study.SUBJECT_ID or Study.VISIT_ID) to join over the subjects and/or visits joinfield : str Name of field to pass the joined list when creating a JoinNode Returns ------- node : Node The Node object that has been added to the pipeline """ if annotations is None: annotations = {} if requirements is None: requirements = [] if wall_time is None: wall_time = self.study.processor.default_wall_time if 'mem_gb' not in kwargs or kwargs['mem_gb'] is None: kwargs['mem_gb'] = self.study.processor.default_mem_gb if 'iterfield' in kwargs: if 'joinfield' in kwargs or 'joinsource' in kwargs: raise ArcanaDesignError( "Cannot provide both joinsource and iterfield when " "attempting to add '{}' node to {}" .format(name, self._error_msg_loc)) node_cls = self.study.environment.node_types['map'] elif 'joinsource' in kwargs or 'joinfield' in kwargs: if not ('joinfield' in kwargs and 'joinsource' in kwargs): raise ArcanaDesignError( "Both joinsource and joinfield kwargs are required to " "create a JoinNode (see '{}' in {})".format(name, self._error_msg_loc)) joinsource = kwargs['joinsource'] if joinsource in self.study.ITERFIELDS: self._iterator_joins.add(joinsource) node_cls = self.study.environment.node_types['join'] # Prepend name of pipeline of joinsource to match name of nodes kwargs['joinsource'] = '{}_{}'.format(self.name, joinsource) else: node_cls = self.study.environment.node_types['base'] # Create node node = node_cls(self.study.environment, interface, name="{}_{}".format(self._name, name), requirements=requirements, wall_time=wall_time, annotations=annotations, **kwargs) # Ensure node is added to workflow self._workflow.add_nodes([node]) # Connect inputs, outputs and internal connections if inputs is not None: assert isinstance(inputs, dict) for node_input, connect_from in inputs.items(): if isinstance(connect_from[0], basestring): input_spec, input_format = connect_from self.connect_input(input_spec, node, node_input, input_format) else: conn_node, conn_field = connect_from self.connect(conn_node, conn_field, node, node_input) if outputs is not None: assert isinstance(outputs, dict) for output_spec, (node_output, output_format) in outputs.items(): self.connect_output(output_spec, node, node_output, output_format) return node
Adds a processing Node to the pipeline Parameters ---------- name : str Name for the node interface : nipype.Interface The interface to use for the node inputs : dict[str, (str, FileFormat) | (Node, str)] Connections from inputs of the pipeline and outputs of other nodes to inputs of node. The keys of the dictionary are the field names and the values are 2-tuples containing either the name of the data spec and the data format it is expected in for pipeline inputs or the sending Node and the name of an output of the sending Node. Note that pipeline inputs can be specified outside this method using the 'connect_input' method and connections between nodes with the 'connect' method. outputs : dict[str, (str, FileFormat)] Connections to outputs of the pipeline from fields of the interface. The keys of the dictionary are the names of the data specs that will be written to and the values are the interface field name and the data format it is produced in. Note that output connections can also be specified using the 'connect_output' method. requirements : list(Requirement) List of required packages needed for the node to run (default: []) wall_time : float Time required to execute the node in minutes (default: 1) mem_gb : int Required memory for the node in GB n_procs : int Preferred number of threads to run the node on (default: 1) annotations : dict[str, *] Additional annotations to add to the node, which may be used by the Processor node to optimise execution (e.g. 'gpu': True) iterfield : str Name of field to be passed an iterable to iterate over. If present, a MapNode will be created instead of a regular node joinsource : str Name of iterator field to join. Typically one of the implicit iterators (i.e. Study.SUBJECT_ID or Study.VISIT_ID) to join over the subjects and/or visits joinfield : str Name of field to pass the joined list when creating a JoinNode Returns ------- node : Node The Node object that has been added to the pipeline
Below is the instruction that describes the task: ### Input: Adds a processing Node to the pipeline Parameters ---------- name : str Name for the node interface : nipype.Interface The interface to use for the node inputs : dict[str, (str, FileFormat) | (Node, str)] Connections from inputs of the pipeline and outputs of other nodes to inputs of node. The keys of the dictionary are the field names and the values are 2-tuples containing either the name of the data spec and the data format it is expected in for pipeline inputs or the sending Node and the name of an output of the sending Node. Note that pipeline inputs can be specified outside this method using the 'connect_input' method and connections between nodes with the 'connect' method. outputs : dict[str, (str, FileFormat)] Connections to outputs of the pipeline from fields of the interface. The keys of the dictionary are the names of the data specs that will be written to and the values are the interface field name and the data format it is produced in. Note that output connections can also be specified using the 'connect_output' method. requirements : list(Requirement) List of required packages needed for the node to run (default: []) wall_time : float Time required to execute the node in minutes (default: 1) mem_gb : int Required memory for the node in GB n_procs : int Preferred number of threads to run the node on (default: 1) annotations : dict[str, *] Additional annotations to add to the node, which may be used by the Processor node to optimise execution (e.g. 'gpu': True) iterfield : str Name of field to be passed an iterable to iterate over. If present, a MapNode will be created instead of a regular node joinsource : str Name of iterator field to join. Typically one of the implicit iterators (i.e. Study.SUBJECT_ID or Study.VISIT_ID) to join over the subjects and/or visits joinfield : str Name of field to pass the joined list when creating a JoinNode Returns ------- node : Node The Node object that has been added to the pipeline ### Response: def add(self, name, interface, inputs=None, outputs=None, requirements=None, wall_time=None, annotations=None, **kwargs): """ Adds a processing Node to the pipeline Parameters ---------- name : str Name for the node interface : nipype.Interface The interface to use for the node inputs : dict[str, (str, FileFormat) | (Node, str)] Connections from inputs of the pipeline and outputs of other nodes to inputs of node. The keys of the dictionary are the field names and the values are 2-tuples containing either the name of the data spec and the data format it is expected in for pipeline inputs or the sending Node and the name of an output of the sending Node. Note that pipeline inputs can be specified outside this method using the 'connect_input' method and connections between nodes with the 'connect' method. outputs : dict[str, (str, FileFormat)] Connections to outputs of the pipeline from fields of the interface. The keys of the dictionary are the names of the data specs that will be written to and the values are the interface field name and the data format it is produced in. Note that output connections can also be specified using the 'connect_output' method. requirements : list(Requirement) List of required packages needed for the node to run (default: []) wall_time : float Time required to execute the node in minutes (default: 1) mem_gb : int Required memory for the node in GB n_procs : int Preferred number of threads to run the node on (default: 1) annotations : dict[str, *] Additional annotations to add to the node, which may be used by the Processor node to optimise execution (e.g. 'gpu': True) iterfield : str Name of field to be passed an iterable to iterate over. If present, a MapNode will be created instead of a regular node joinsource : str Name of iterator field to join. Typically one of the implicit iterators (i.e. Study.SUBJECT_ID or Study.VISIT_ID) to join over the subjects and/or visits joinfield : str Name of field to pass the joined list when creating a JoinNode Returns ------- node : Node The Node object that has been added to the pipeline """ if annotations is None: annotations = {} if requirements is None: requirements = [] if wall_time is None: wall_time = self.study.processor.default_wall_time if 'mem_gb' not in kwargs or kwargs['mem_gb'] is None: kwargs['mem_gb'] = self.study.processor.default_mem_gb if 'iterfield' in kwargs: if 'joinfield' in kwargs or 'joinsource' in kwargs: raise ArcanaDesignError( "Cannot provide both joinsource and iterfield when " "attempting to add '{}' node to {}" .format(name, self._error_msg_loc)) node_cls = self.study.environment.node_types['map'] elif 'joinsource' in kwargs or 'joinfield' in kwargs: if not ('joinfield' in kwargs and 'joinsource' in kwargs): raise ArcanaDesignError( "Both joinsource and joinfield kwargs are required to " "create a JoinNode (see '{}' in {})".format(name, self._error_msg_loc)) joinsource = kwargs['joinsource'] if joinsource in self.study.ITERFIELDS: self._iterator_joins.add(joinsource) node_cls = self.study.environment.node_types['join'] # Prepend name of pipeline of joinsource to match name of nodes kwargs['joinsource'] = '{}_{}'.format(self.name, joinsource) else: node_cls = self.study.environment.node_types['base'] # Create node node = node_cls(self.study.environment, interface, name="{}_{}".format(self._name, name), requirements=requirements, wall_time=wall_time, annotations=annotations, **kwargs) # Ensure node is added to workflow self._workflow.add_nodes([node]) # Connect inputs, outputs and internal connections if inputs is not None: assert isinstance(inputs, dict) for node_input, connect_from in inputs.items(): if isinstance(connect_from[0], basestring): input_spec, input_format = connect_from self.connect_input(input_spec, node, node_input, input_format) else: conn_node, conn_field = connect_from self.connect(conn_node, conn_field, node, node_input) if outputs is not None: assert isinstance(outputs, dict) for output_spec, (node_output, output_format) in outputs.items(): self.connect_output(output_spec, node, node_output, output_format) return node
def create_DOM_node_from_dict(d, name, parent_node): """ Dumps dict data to an ``xml.etree.ElementTree.SubElement`` DOM subtree object and attaches it to the specified DOM parent node. The created subtree object is named after the specified name. If the supplied dict is ``None`` no DOM node is created for it; likewise, no DOM subnodes are generated for ``None`` values found inside the dict :param d: the input dictionary :type d: dict :param name: the name for the DOM subtree to be created :type name: str :param parent_node: the parent DOM node the newly created subtree must be attached to :type parent_node: ``xml.etree.ElementTree.Element`` or derivative objects :returns: ``xml.etree.ElementTree.Element`` object """ if d is not None: root_dict_node = ET.SubElement(parent_node, name) for key, value in d.items(): if value is not None: node = ET.SubElement(root_dict_node, key) node.text = str(value) return root_dict_node
Dumps dict data to an ``xml.etree.ElementTree.SubElement`` DOM subtree object and attaches it to the specified DOM parent node. The created subtree object is named after the specified name. If the supplied dict is ``None`` no DOM node is created for it; likewise, no DOM subnodes are generated for ``None`` values found inside the dict :param d: the input dictionary :type d: dict :param name: the name for the DOM subtree to be created :type name: str :param parent_node: the parent DOM node the newly created subtree must be attached to :type parent_node: ``xml.etree.ElementTree.Element`` or derivative objects :returns: ``xml.etree.ElementTree.Element`` object
Below is the instruction that describes the task: ### Input: Dumps dict data to an ``xml.etree.ElementTree.SubElement`` DOM subtree object and attaches it to the specified DOM parent node. The created subtree object is named after the specified name. If the supplied dict is ``None`` no DOM node is created for it; likewise, no DOM subnodes are generated for ``None`` values found inside the dict :param d: the input dictionary :type d: dict :param name: the name for the DOM subtree to be created :type name: str :param parent_node: the parent DOM node the newly created subtree must be attached to :type parent_node: ``xml.etree.ElementTree.Element`` or derivative objects :returns: ``xml.etree.ElementTree.Element`` object ### Response: def create_DOM_node_from_dict(d, name, parent_node): """ Dumps dict data to an ``xml.etree.ElementTree.SubElement`` DOM subtree object and attaches it to the specified DOM parent node. The created subtree object is named after the specified name. If the supplied dict is ``None`` no DOM node is created for it; likewise, no DOM subnodes are generated for ``None`` values found inside the dict :param d: the input dictionary :type d: dict :param name: the name for the DOM subtree to be created :type name: str :param parent_node: the parent DOM node the newly created subtree must be attached to :type parent_node: ``xml.etree.ElementTree.Element`` or derivative objects :returns: ``xml.etree.ElementTree.Element`` object """ if d is not None: root_dict_node = ET.SubElement(parent_node, name) for key, value in d.items(): if value is not None: node = ET.SubElement(root_dict_node, key) node.text = str(value) return root_dict_node
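The record above uses only the standard library, so its behavior can be verified end to end: ``None`` values inside the dict are skipped, and everything else is stringified into child elements. This is a self-contained run of that logic; the element and key names below (`weather`, `observation`, `temp`, `humidity`) are illustrative placeholders, not from the source.

```python
import xml.etree.ElementTree as ET

# Self-contained run of the create_DOM_node_from_dict logic above.
def create_DOM_node_from_dict(d, name, parent_node):
    """Attach a subtree named `name` under `parent_node`, one child per non-None key."""
    if d is not None:
        root_dict_node = ET.SubElement(parent_node, name)
        for key, value in d.items():
            if value is not None:
                node = ET.SubElement(root_dict_node, key)
                node.text = str(value)
        return root_dict_node

# Illustrative element/key names (placeholders, not from the source).
root = ET.Element('weather')
create_DOM_node_from_dict({'temp': 21, 'humidity': None}, 'observation', root)
xml = ET.tostring(root, encoding='unicode')
```

Note that `ET.SubElement` returns a plain `Element`, and that calling the function with `d=None` attaches nothing and returns `None` (the function falls off the end of the `if`).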
def script_template(state, host, template_filename, chdir=None, **data): ''' Generate, upload and execute a local script template on the remote host. + template_filename: local script template filename + chdir: directory to cd into before executing the script ''' temp_file = state.get_temp_filename(template_filename) yield files.template(state, host, template_filename, temp_file, **data) yield chmod(temp_file, '+x') if chdir: yield 'cd {0} && {1}'.format(chdir, temp_file) else: yield temp_file
Generate, upload and execute a local script template on the remote host. + template_filename: local script template filename + chdir: directory to cd into before executing the script
Below is the instruction that describes the task: ### Input: Generate, upload and execute a local script template on the remote host. + template_filename: local script template filename + chdir: directory to cd into before executing the script ### Response: def script_template(state, host, template_filename, chdir=None, **data): ''' Generate, upload and execute a local script template on the remote host. + template_filename: local script template filename + chdir: directory to cd into before executing the script ''' temp_file = state.get_temp_filename(template_filename) yield files.template(state, host, template_filename, temp_file, **data) yield chmod(temp_file, '+x') if chdir: yield 'cd {0} && {1}'.format(chdir, temp_file) else: yield temp_file