Columns: code — string (lengths 75 to 104k) · docstring — string (lengths 1 to 46.9k) · text — string (lengths 164 to 112k)
def decompress_messages(self, offmsgs):
    """Decompress pre-defined compressed fields for each message.

    Msgs should be unpacked before this step.
    """
    for offmsg in offmsgs:
        yield offmsg.message.key, self.decompress_fun(offmsg.message.value)
def add_file_to_archive(self, name: str) -> None:
    """
    Any class in its ``from_params`` method can request that some of its
    input files be added to the archive by calling this method.

    For example, if some class ``A`` had an ``input_file`` parameter, it could call

    ```
    params.add_file_to_archive("input_file")
    ```

    which would store the supplied value for ``input_file`` at the key
    ``previous.history.and.then.input_file``. The ``files_to_archive`` dict
    is shared with child instances via the ``_check_is_dict`` method, so that
    the final mapping can be retrieved from the top-level ``Params`` object.

    NOTE: You must call ``add_file_to_archive`` before you ``pop()``
    the parameter, because the ``Params`` instance looks up the value
    of the filename inside itself.

    If the ``loading_from_archive`` flag is True, this will be a no-op.
    """
    if not self.loading_from_archive:
        self.files_to_archive[f"{self.history}{name}"] = cached_path(self.get(name))
def stroke(self, *args):
    '''Set a stroke color, applying it to new paths.

    :param args: color in supported format
    '''
    if args is not None:
        self._canvas.strokecolor = self.color(*args)
    return self._canvas.strokecolor
def support_zrangebylex(self):
    """
    Returns True if zrangebylex is available.
    Checks are done in the client library (redis-py) AND the redis server.
    Result is cached, so done only one time.
    """
    if not hasattr(self, '_support_zrangebylex'):
        try:
            self._support_zrangebylex = self.redis_version >= (2, 8, 9) \
                and hasattr(self.connection, 'zrangebylex')
        except Exception:
            self._support_zrangebylex = False
    return self._support_zrangebylex
def get_all_pipelines(self):
    '''Return all pipelines as a list

    Returns:
        List[PipelineDefinition]:
    '''
    pipelines = list(map(self.get_pipeline, self.pipeline_dict.keys()))
    # This does uniqueness check
    self._construct_solid_defs(pipelines)
    return pipelines
def data_to_binary(self):
    """
    :return: bytes
    """
    return bytes([
        COMMAND_CODE,
        self.channels_to_byte([self.channel]),
        self.timeout,
        self.status,
        self.led_status,
        self.blind_position,
        self.locked_inhibit_forced,
        self.alarm_auto_mode_selection
    ])
def list(cls, args):  # pylint: disable=unused-argument
    """List all installed NApps and inform whether they are enabled."""
    mgr = NAppsManager()

    # Add status
    napps = [napp + ('[ie]',) for napp in mgr.get_enabled()]
    napps += [napp + ('[i-]',) for napp in mgr.get_disabled()]

    # Sort, add description and reorder columns
    napps.sort()
    napps_ordered = []
    for user, name, status in napps:
        description = mgr.get_description(user, name)
        version = mgr.get_version(user, name)
        napp_id = f'{user}/{name}'
        if version:
            napp_id += f':{version}'
        napps_ordered.append((status, napp_id, description))

    cls.print_napps(napps_ordered)
def align_to_other(self, other, mapping, self_root_pair, other_root_pair=None):
    '''root atoms are atoms which all other unmapped atoms will be mapped off of'''
    if other_root_pair is None:
        other_root_pair = self_root_pair
    assert len(self_root_pair) == len(other_root_pair)

    unmoved_atom_names = []
    new_coords = [None for x in range(len(self_root_pair))]
    for atom in self.names:
        if atom in self_root_pair:
            i = self_root_pair.index(atom)
            assert new_coords[i] is None
            new_coords[i] = self.get_coords_for_name(atom)
        if atom in mapping:
            other_atom = mapping[atom]
            self.set_coords_for_name(atom, other.get_coords_for_name(other_atom))
        else:
            unmoved_atom_names.append(atom)

    # Move unmoved coordinates after all other atoms have been moved
    # (so that references will have been moved already)
    if None in new_coords:
        print(new_coords)
    assert None not in new_coords
    ref_coords = [other.get_coords_for_name(x) for x in other_root_pair]

    # Calculate translation and rotation matrices
    U, new_centroid, ref_centroid = calc_rotation_translation_matrices(ref_coords, new_coords)
    for atom in unmoved_atom_names:
        original_coord = self.get_coords_for_name(atom)
        self.set_coords_for_name(atom, rotate_and_translate_coord(original_coord, U, new_centroid, ref_centroid))

    self.chain = other.chain
def create(streamIds, **kwargs):
    """
    Creates and loads data into a Confluence, which is a collection of
    River Streams.

    :param streamIds: (list) Each data id in this list is a list of strings:
                      1. river name
                      2. stream name
                      3. field name
    :param kwargs: Passed into Confluence constructor
    :return: (Confluence)
    """
    print("Creating Confluence for the following RiverStreams:"
          "\n\t%s" % ",\n\t".join([":".join(row) for row in streamIds]))
    confluence = Confluence(streamIds, **kwargs)
    confluence.load()
    return confluence
def marginalize(self, variables, inplace=True):
    """
    Modifies the distribution with marginalized values.

    Parameters
    ----------
    variables: iterator over any hashable object.
        List of variables over which marginalization is to be done.

    inplace: boolean
        If inplace=True it will modify the distribution itself, else
        would return a new distribution.

    Returns
    -------
    GaussianDistribution or None :
        if inplace=True (default) returns None
        if inplace=False return a new GaussianDistribution instance

    Examples
    --------
    >>> import numpy as np
    >>> from pgmpy.factors.distributions import GaussianDistribution as GD
    >>> dis = GD(variables=['x1', 'x2', 'x3'],
    ...          mean=[1, -3, 4],
    ...          cov=[[4, 2, -2],
    ...               [2, 5, -5],
    ...               [-2, -5, 8]])
    >>> dis.variables
    ['x1', 'x2', 'x3']
    >>> dis.mean
    array([[ 1],
           [-3],
           [ 4]])
    >>> dis.covariance
    array([[ 4,  2, -2],
           [ 2,  5, -5],
           [-2, -5,  8]])
    >>> dis.marginalize(['x3'])
    >>> dis.variables
    ['x1', 'x2']
    >>> dis.mean
    array([[ 1.],
           [-3.]])
    >>> dis.covariance
    array([[4., 2.],
           [2., 5.]])
    """
    if not isinstance(variables, list):
        raise TypeError("variables: Expected type list or array-like, "
                        "got type {var_type}".format(var_type=type(variables)))

    phi = self if inplace else self.copy()

    index_to_keep = [self.variables.index(var) for var in self.variables
                     if var not in variables]

    phi.variables = [phi.variables[index] for index in index_to_keep]
    phi.mean = phi.mean[index_to_keep]
    phi.covariance = phi.covariance[np.ix_(index_to_keep, index_to_keep)]
    phi._precision_matrix = None

    if not inplace:
        return phi
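The index-selection core of the marginalization above can be sketched with plain numpy (no pgmpy), using the numbers from the doctest; the variable names here are illustrative, not from the source.

```python
import numpy as np

# Marginalize out 'x3' by keeping only the surviving row/column indices.
variables = ['x1', 'x2', 'x3']
mean = np.array([[1.], [-3.], [4.]])
cov = np.array([[4., 2., -2.],
                [2., 5., -5.],
                [-2., -5., 8.]])

keep = [i for i, v in enumerate(variables) if v not in ['x3']]
mean_m = mean[keep]                      # rows for x1, x2
cov_m = cov[np.ix_(keep, keep)]          # 2x2 covariance over x1, x2
```

`np.ix_` builds the open mesh so the same index list selects both rows and columns of the covariance matrix in one step.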
def copy_subrange_of_file(input_file, file_start, file_end, output_filehandle):
    """Copies the range (in bytes) between file_start and file_end to the
    given output file handle.
    """
    with open(input_file, 'rb') as fileHandle:
        fileHandle.seek(file_start)
        data = fileHandle.read(file_end - file_start)
        assert len(data) == file_end - file_start
        output_filehandle.write(data)
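A standalone round-trip sketch of the byte-range copy (my variant: the source is opened in binary mode so seek/read offsets are exact byte offsets in Python 3, and the output is an in-memory buffer):

```python
import io
import os
import tempfile

def copy_subrange_of_file(input_file, file_start, file_end, output_filehandle):
    # Binary mode so seek/read offsets are byte offsets.
    with open(input_file, 'rb') as file_handle:
        file_handle.seek(file_start)
        data = file_handle.read(file_end - file_start)
        assert len(data) == file_end - file_start
        output_filehandle.write(data)

# Demo: copy bytes 6..11 ("world") out of a temporary source file.
with tempfile.NamedTemporaryFile(delete=False) as src:
    src.write(b'hello world')
out = io.BytesIO()
copy_subrange_of_file(src.name, 6, 11, out)
os.unlink(src.name)
print(out.getvalue())  # b'world'
```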
def parse_config_for_selected_keys(content, keys):
    """Parse a config from a magic cell body for selected config keys.

    For example, if 'content' is:
        config_item1: value1
        config_item2: value2
        config_item3: value3
    and 'keys' are: [config_item1, config_item3]

    The results will be a tuple of
    1. The parsed config items (dict): {config_item1: value1, config_item3: value3}
    2. The remaining content (string): config_item2: value2

    Args:
        content: the input content. A string. It has to be a yaml or JSON string.
        keys: a list of keys to retrieve from content. Note that it only checks
            top level keys in the dict.

    Returns:
        A tuple. First is the parsed config including only selected keys.
        Second is the remaining content.

    Raises:
        Exception if the content is not a valid yaml or JSON string.
    """
    config_items = {key: None for key in keys}
    if not content:
        return config_items, content

    stripped = content.strip()
    if len(stripped) == 0:
        return {}, None
    elif stripped[0] == '{':
        config = json.loads(content)
    else:
        config = yaml.load(content)

    if not isinstance(config, dict):
        raise ValueError('Invalid config.')

    for key in keys:
        config_items[key] = config.pop(key, None)

    if not config:
        return config_items, None

    if stripped[0] == '{':
        content_out = json.dumps(config, indent=4)
    else:
        content_out = yaml.dump(config, default_flow_style=False)

    return config_items, content_out
def _which_ip_protocol(element):
    """
    Validate the protocol addresses for the element. Most elements can
    have an IPv4 or IPv6 address assigned on the same element. This
    allows elements to be validated and placed on the right network.

    :return: boolean tuple
    :rtype: tuple(ipv4, ipv6)
    """
    try:
        if element.typeof in ('host', 'router'):
            return getattr(element, 'address', False), getattr(element, 'ipv6_address', False)
        elif element.typeof == 'netlink':
            gateway = element.gateway
            if gateway.typeof == 'router':
                return getattr(gateway, 'address', False), getattr(gateway, 'ipv6_address', False)
            # It's an engine, return true
        elif element.typeof == 'network':
            return getattr(element, 'ipv4_network', False), getattr(element, 'ipv6_network', False)
    except AttributeError:
        pass
    # Always return true so that the calling function assumes the element
    # is valid for the routing node. This could fail when submitting but
    # we don't want to prevent adding elements yet since this could change
    return True, True
def node_coord_in_direction(tile_id, direction):
    """
    Returns the node coordinate in the given direction at the given tile identifier.

    :param tile_id: tile identifier, int
    :param direction: direction, str
    :return: node coord, int
    """
    tile_coord = tile_id_to_coord(tile_id)
    for node_coord in nodes_touching_tile(tile_id):
        if tile_node_offset_to_direction(node_coord - tile_coord) == direction:
            return node_coord
    raise ValueError('No node found in direction={} at tile_id={}'.format(
        direction, tile_id
    ))
def check(cls):
    """Test to see if the running host is a RHEL installation.

    Checks for the presence of the "Red Hat Enterprise Linux" release
    string at the beginning of the NAME field in the `/etc/os-release`
    file and returns ``True`` if it is found, and ``False`` otherwise.

    :returns: ``True`` if the host is running RHEL or ``False`` otherwise.
    """
    if not os.path.exists(OS_RELEASE):
        return False

    with open(OS_RELEASE, "r") as f:
        for line in f:
            if line.startswith("NAME"):
                (name, value) = line.split("=")
                value = value.strip("\"'")
                if value.startswith(cls.distro):
                    return True
    return False
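The NAME-field match can be sketched against an in-memory string instead of `/etc/os-release`; the function name here is mine, and the extra `.strip()` (to drop the trailing newline before quote-stripping) is my assumption, not in the original.

```python
import io

def name_matches(os_release_text, distro):
    """Check whether the NAME field of os-release text starts with distro."""
    for line in io.StringIO(os_release_text):
        if line.startswith("NAME"):
            name, value = line.split("=")
            # Drop the newline first, then the surrounding quotes.
            value = value.strip().strip("\"'")
            if value.startswith(distro):
                return True
    return False

sample = 'NAME="Red Hat Enterprise Linux Server"\nVERSION="7.9"\n'
print(name_matches(sample, "Red Hat Enterprise Linux"))  # True
```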
def computeSlop(self, step, divisor):
    """Compute the slop that would result from step and divisor.

    Return the slop, or None if this combination can't cover the full
    range. See chooseStep() for the definition of "slop".
    """
    bottom = step * math.floor(self.minValue / float(step) + EPSILON)
    top = bottom + step * divisor
    if top >= self.maxValue - EPSILON * step:
        return max(top - self.maxValue, self.minValue - bottom)
    else:
        return None
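A standalone sketch of the same arithmetic, with the instance attributes lifted into parameters and `EPSILON` assumed to be 1e-7 (its value is not given in the source):

```python
import math

EPSILON = 1e-7  # tolerance; value assumed for this sketch

def compute_slop(min_value, max_value, step, divisor):
    """How far the tick range [bottom, top] overshoots [min_value, max_value],
    or None if step * divisor cannot cover the full range."""
    bottom = step * math.floor(min_value / float(step) + EPSILON)
    top = bottom + step * divisor
    if top >= max_value - EPSILON * step:
        return max(top - max_value, min_value - bottom)
    return None

# A 0..10 range with step 5 and 2 divisions fits exactly (slop 0);
# a single division of 5 falls short and yields None.
print(compute_slop(0.0, 10.0, 5.0, 2))  # 0.0
print(compute_slop(0.0, 10.0, 5.0, 1))  # None
```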
def p_generate_if_woelse(self, p):
    'generate_if : IF LPAREN cond RPAREN gif_true_item'
    p[0] = IfStatement(p[3], p[5], None, lineno=p.lineno(1))
    p.set_lineno(0, p.lineno(1))
def addToTimeInv(self, *params):
    '''
    Adds any number of parameters to time_inv for this instance.

    Parameters
    ----------
    params : string
        Any number of strings naming attributes to be added to time_inv

    Returns
    -------
    None
    '''
    for param in params:
        if param not in self.time_inv:
            self.time_inv.append(param)
def cli(ctx, organism="", sequence=""): """Get the features for an organism / sequence Output: A standard apollo feature dictionary ({"features": [{...}]}) """ return ctx.gi.annotations.get_features(organism=organism, sequence=sequence)
Get the features for an organism / sequence Output: A standard apollo feature dictionary ({"features": [{...}]})
Below is the instruction that describes the task: ### Input: Get the features for an organism / sequence Output: A standard apollo feature dictionary ({"features": [{...}]}) ### Response: def cli(ctx, organism="", sequence=""): """Get the features for an organism / sequence Output: A standard apollo feature dictionary ({"features": [{...}]}) """ return ctx.gi.annotations.get_features(organism=organism, sequence=sequence)
def serialize(cls, obj, buf, lineLength, validate): """ Apple's Address Book is *really* weird with images, it expects base64 data to have very specific whitespace. It seems Address Book can handle PHOTO if it's not wrapped, so don't wrap it. """ if wacky_apple_photo_serialize: lineLength = REALLY_LARGE VCardTextBehavior.serialize(obj, buf, lineLength, validate)
Apple's Address Book is *really* weird with images, it expects base64 data to have very specific whitespace. It seems Address Book can handle PHOTO if it's not wrapped, so don't wrap it.
Below is the instruction that describes the task: ### Input: Apple's Address Book is *really* weird with images, it expects base64 data to have very specific whitespace. It seems Address Book can handle PHOTO if it's not wrapped, so don't wrap it. ### Response: def serialize(cls, obj, buf, lineLength, validate): """ Apple's Address Book is *really* weird with images, it expects base64 data to have very specific whitespace. It seems Address Book can handle PHOTO if it's not wrapped, so don't wrap it. """ if wacky_apple_photo_serialize: lineLength = REALLY_LARGE VCardTextBehavior.serialize(obj, buf, lineLength, validate)
def Size(self): """ Get the total size in bytes of the object. Returns: int: size. """ s = super(Block, self).Size() + GetVarSize(self.Transactions) return s
Get the total size in bytes of the object. Returns: int: size.
Below is the instruction that describes the task: ### Input: Get the total size in bytes of the object. Returns: int: size. ### Response: def Size(self): """ Get the total size in bytes of the object. Returns: int: size. """ s = super(Block, self).Size() + GetVarSize(self.Transactions) return s
def list_engines_by_priority(engines=None): """ Return a list of engines supported sorted by each priority. """ if engines is None: engines = ENGINES return sorted(engines, key=operator.methodcaller("priority"))
Return a list of engines supported sorted by each priority.
Below is the instruction that describes the task: ### Input: Return a list of engines supported sorted by each priority. ### Response: def list_engines_by_priority(engines=None): """ Return a list of engines supported sorted by each priority. """ if engines is None: engines = ENGINES return sorted(engines, key=operator.methodcaller("priority"))
def _queue_edge_tiles(self, dx, dy): """ Queue edge tiles and clear edge areas on buffer if needed :param dx: Edge along X axis to enqueue :param dy: Edge along Y axis to enqueue :return: None """ v = self._tile_view tw, th = self.data.tile_size self._tile_queue = iter([]) def append(rect): self._tile_queue = chain(self._tile_queue, self.data.get_tile_images_by_rect(rect)) # TODO: optimize so fill is only used when map is smaller than buffer self._clear_surface(self._buffer, ((rect[0] - v.left) * tw, (rect[1] - v.top) * th, rect[2] * tw, rect[3] * th)) if dx > 0: # right side append((v.right - 1, v.top, dx, v.height)) elif dx < 0: # left side append((v.left, v.top, -dx, v.height)) if dy > 0: # bottom side append((v.left, v.bottom - 1, v.width, dy)) elif dy < 0: # top side append((v.left, v.top, v.width, -dy))
Queue edge tiles and clear edge areas on buffer if needed :param dx: Edge along X axis to enqueue :param dy: Edge along Y axis to enqueue :return: None
Below is the instruction that describes the task: ### Input: Queue edge tiles and clear edge areas on buffer if needed :param dx: Edge along X axis to enqueue :param dy: Edge along Y axis to enqueue :return: None ### Response: def _queue_edge_tiles(self, dx, dy): """ Queue edge tiles and clear edge areas on buffer if needed :param dx: Edge along X axis to enqueue :param dy: Edge along Y axis to enqueue :return: None """ v = self._tile_view tw, th = self.data.tile_size self._tile_queue = iter([]) def append(rect): self._tile_queue = chain(self._tile_queue, self.data.get_tile_images_by_rect(rect)) # TODO: optimize so fill is only used when map is smaller than buffer self._clear_surface(self._buffer, ((rect[0] - v.left) * tw, (rect[1] - v.top) * th, rect[2] * tw, rect[3] * th)) if dx > 0: # right side append((v.right - 1, v.top, dx, v.height)) elif dx < 0: # left side append((v.left, v.top, -dx, v.height)) if dy > 0: # bottom side append((v.left, v.bottom - 1, v.width, dy)) elif dy < 0: # top side append((v.left, v.top, v.width, -dy))
def get_heat_kernel(network_id): """Return the identifier of a heat kernel calculated for a given network. Parameters ---------- network_id : str The UUID of the network in NDEx. Returns ------- kernel_id : str The identifier of the heat kernel calculated for the given network. """ url = ndex_relevance + '/%s/generate_ndex_heat_kernel' % network_id res = ndex_client.send_request(url, {}, is_json=True, use_get=True) if res is None: logger.error('Could not get heat kernel for network %s.' % network_id) return None kernel_id = res.get('kernel_id') if kernel_id is None: logger.error('Could not get heat kernel for network %s.' % network_id) return None return kernel_id
Return the identifier of a heat kernel calculated for a given network. Parameters ---------- network_id : str The UUID of the network in NDEx. Returns ------- kernel_id : str The identifier of the heat kernel calculated for the given network.
Below is the instruction that describes the task: ### Input: Return the identifier of a heat kernel calculated for a given network. Parameters ---------- network_id : str The UUID of the network in NDEx. Returns ------- kernel_id : str The identifier of the heat kernel calculated for the given network. ### Response: def get_heat_kernel(network_id): """Return the identifier of a heat kernel calculated for a given network. Parameters ---------- network_id : str The UUID of the network in NDEx. Returns ------- kernel_id : str The identifier of the heat kernel calculated for the given network. """ url = ndex_relevance + '/%s/generate_ndex_heat_kernel' % network_id res = ndex_client.send_request(url, {}, is_json=True, use_get=True) if res is None: logger.error('Could not get heat kernel for network %s.' % network_id) return None kernel_id = res.get('kernel_id') if kernel_id is None: logger.error('Could not get heat kernel for network %s.' % network_id) return None return kernel_id
def _CreateSingleValueCondition(self, value, operator): """Creates a single-value condition with the provided value and operator.""" if isinstance(value, str) or isinstance(value, unicode): value = '"%s"' % value return '%s %s %s' % (self._field, operator, value)
Creates a single-value condition with the provided value and operator.
Below is the instruction that describes the task: ### Input: Creates a single-value condition with the provided value and operator. ### Response: def _CreateSingleValueCondition(self, value, operator): """Creates a single-value condition with the provided value and operator.""" if isinstance(value, str) or isinstance(value, unicode): value = '"%s"' % value return '%s %s %s' % (self._field, operator, value)
def onSelectRow(self, event): """ Highlight or unhighlight a row for possible deletion. """ grid = self.grid row = event.Row default = (255, 255, 255, 255) highlight = (191, 216, 216, 255) cell_color = grid.GetCellBackgroundColour(row, 0) attr = wx.grid.GridCellAttr() if cell_color == default: attr.SetBackgroundColour(highlight) self.selected_rows.add(row) else: attr.SetBackgroundColour(default) try: self.selected_rows.remove(row) except KeyError: pass if self.selected_rows and self.deleteRowButton: self.deleteRowButton.Enable() else: self.deleteRowButton.Disable() grid.SetRowAttr(row, attr) grid.Refresh()
Highlight or unhighlight a row for possible deletion.
Below is the instruction that describes the task: ### Input: Highlight or unhighlight a row for possible deletion. ### Response: def onSelectRow(self, event): """ Highlight or unhighlight a row for possible deletion. """ grid = self.grid row = event.Row default = (255, 255, 255, 255) highlight = (191, 216, 216, 255) cell_color = grid.GetCellBackgroundColour(row, 0) attr = wx.grid.GridCellAttr() if cell_color == default: attr.SetBackgroundColour(highlight) self.selected_rows.add(row) else: attr.SetBackgroundColour(default) try: self.selected_rows.remove(row) except KeyError: pass if self.selected_rows and self.deleteRowButton: self.deleteRowButton.Enable() else: self.deleteRowButton.Disable() grid.SetRowAttr(row, attr) grid.Refresh()
def pad(self, data, block_size): """ :meth:`.WBlockPadding.pad` method implementation """ padding_symbol = self.padding_symbol() blocks_count = (len(data) // block_size) if (len(data) % block_size) != 0: blocks_count += 1 total_length = blocks_count * block_size return self._fill(data, total_length, padding_symbol)
:meth:`.WBlockPadding.pad` method implementation
Below is the instruction that describes the task: ### Input: :meth:`.WBlockPadding.pad` method implementation ### Response: def pad(self, data, block_size): """ :meth:`.WBlockPadding.pad` method implementation """ padding_symbol = self.padding_symbol() blocks_count = (len(data) // block_size) if (len(data) % block_size) != 0: blocks_count += 1 total_length = blocks_count * block_size return self._fill(data, total_length, padding_symbol)
def _to_unit_base(self, base_unit, values, unit, from_unit): """Return values in a given unit given the input from_unit.""" self._is_numeric(values) namespace = {'self': self, 'values': values} if not from_unit == base_unit: self.is_unit_acceptable(from_unit, True) statement = '[self._{}_to_{}(val) for val in values]'.format( self._clean(from_unit), self._clean(base_unit)) values = eval(statement, namespace) namespace['values'] = values if not unit == base_unit: self.is_unit_acceptable(unit, True) statement = '[self._{}_to_{}(val) for val in values]'.format( self._clean(base_unit), self._clean(unit)) values = eval(statement, namespace) return values
Return values in a given unit given the input from_unit.
Below is the instruction that describes the task: ### Input: Return values in a given unit given the input from_unit. ### Response: def _to_unit_base(self, base_unit, values, unit, from_unit): """Return values in a given unit given the input from_unit.""" self._is_numeric(values) namespace = {'self': self, 'values': values} if not from_unit == base_unit: self.is_unit_acceptable(from_unit, True) statement = '[self._{}_to_{}(val) for val in values]'.format( self._clean(from_unit), self._clean(base_unit)) values = eval(statement, namespace) namespace['values'] = values if not unit == base_unit: self.is_unit_acceptable(unit, True) statement = '[self._{}_to_{}(val) for val in values]'.format( self._clean(base_unit), self._clean(unit)) values = eval(statement, namespace) return values
def _convert_and_assert_per_example_weights_compatible( input_, per_example_weights, dtype): """Converts per_example_weights to a tensor and validates the shape.""" per_example_weights = tf.convert_to_tensor( per_example_weights, name='per_example_weights', dtype=dtype) if input_.get_shape().ndims: expected_length = input_.get_shape().dims[0] message = ('per_example_weights must have rank 1 and length %s, but was: %s' % (expected_length, per_example_weights.get_shape())) else: expected_length = None message = ('per_example_weights must have rank 1 and length equal to the ' 'first dimension of inputs (unknown), but was: %s' % per_example_weights.get_shape()) if per_example_weights.get_shape().ndims not in (1, None): raise ValueError(message) if not per_example_weights.get_shape().is_compatible_with((expected_length,)): raise ValueError(message) return per_example_weights
Converts per_example_weights to a tensor and validates the shape.
Below is the instruction that describes the task: ### Input: Converts per_example_weights to a tensor and validates the shape. ### Response: def _convert_and_assert_per_example_weights_compatible( input_, per_example_weights, dtype): """Converts per_example_weights to a tensor and validates the shape.""" per_example_weights = tf.convert_to_tensor( per_example_weights, name='per_example_weights', dtype=dtype) if input_.get_shape().ndims: expected_length = input_.get_shape().dims[0] message = ('per_example_weights must have rank 1 and length %s, but was: %s' % (expected_length, per_example_weights.get_shape())) else: expected_length = None message = ('per_example_weights must have rank 1 and length equal to the ' 'first dimension of inputs (unknown), but was: %s' % per_example_weights.get_shape()) if per_example_weights.get_shape().ndims not in (1, None): raise ValueError(message) if not per_example_weights.get_shape().is_compatible_with((expected_length,)): raise ValueError(message) return per_example_weights
def loop_until_timeout_or_true(timeout_s, function, sleep_s=1): # pylint: disable=invalid-name """Loops until the specified function returns True or a timeout is reached. Note: The function may return anything which evaluates to implicit True. This function will loop calling it as long as it continues to return something which evaluates to False. We ensure this method is called at least once regardless of timeout. Args: timeout_s: The number of seconds to wait until a timeout condition is reached. As a convenience, this accepts None to mean never timeout. Can also be passed a PolledTimeout object instead of an integer. function: The function to call each iteration. sleep_s: The number of seconds to wait after calling the function. Returns: Whatever the function returned last. """ return loop_until_timeout_or_valid(timeout_s, function, lambda x: x, sleep_s)
Loops until the specified function returns True or a timeout is reached. Note: The function may return anything which evaluates to implicit True. This function will loop calling it as long as it continues to return something which evaluates to False. We ensure this method is called at least once regardless of timeout. Args: timeout_s: The number of seconds to wait until a timeout condition is reached. As a convenience, this accepts None to mean never timeout. Can also be passed a PolledTimeout object instead of an integer. function: The function to call each iteration. sleep_s: The number of seconds to wait after calling the function. Returns: Whatever the function returned last.
Below is the instruction that describes the task: ### Input: Loops until the specified function returns True or a timeout is reached. Note: The function may return anything which evaluates to implicit True. This function will loop calling it as long as it continues to return something which evaluates to False. We ensure this method is called at least once regardless of timeout. Args: timeout_s: The number of seconds to wait until a timeout condition is reached. As a convenience, this accepts None to mean never timeout. Can also be passed a PolledTimeout object instead of an integer. function: The function to call each iteration. sleep_s: The number of seconds to wait after calling the function. Returns: Whatever the function returned last. ### Response: def loop_until_timeout_or_true(timeout_s, function, sleep_s=1): # pylint: disable=invalid-name """Loops until the specified function returns True or a timeout is reached. Note: The function may return anything which evaluates to implicit True. This function will loop calling it as long as it continues to return something which evaluates to False. We ensure this method is called at least once regardless of timeout. Args: timeout_s: The number of seconds to wait until a timeout condition is reached. As a convenience, this accepts None to mean never timeout. Can also be passed a PolledTimeout object instead of an integer. function: The function to call each iteration. sleep_s: The number of seconds to wait after calling the function. Returns: Whatever the function returned last. """ return loop_until_timeout_or_valid(timeout_s, function, lambda x: x, sleep_s)
def read_utf8_string(self, length): """ Reads a UTF-8 string from the stream. @rtype: C{unicode} """ s = struct.unpack("%s%ds" % (self.endian, length), self.read(length))[0] return s.decode('utf-8')
Reads a UTF-8 string from the stream. @rtype: C{unicode}
Below is the instruction that describes the task: ### Input: Reads a UTF-8 string from the stream. @rtype: C{unicode} ### Response: def read_utf8_string(self, length): """ Reads a UTF-8 string from the stream. @rtype: C{unicode} """ s = struct.unpack("%s%ds" % (self.endian, length), self.read(length))[0] return s.decode('utf-8')
def timestamp_to_datetime(response): "Converts a unix timestamp to a Python datetime object" if not response: return None try: response = int(response) except ValueError: return None return datetime.datetime.fromtimestamp(response)
Converts a unix timestamp to a Python datetime object
Below is the instruction that describes the task: ### Input: Converts a unix timestamp to a Python datetime object ### Response: def timestamp_to_datetime(response): "Converts a unix timestamp to a Python datetime object" if not response: return None try: response = int(response) except ValueError: return None return datetime.datetime.fromtimestamp(response)
def listDatasets(self, dataset="", parent_dataset="", is_dataset_valid=1, release_version="", pset_hash="", app_name="", output_module_label="", global_tag="", processing_version=0, acquisition_era_name="", run_num=-1, physics_group_name="", logical_file_name="", primary_ds_name="", primary_ds_type="", processed_ds_name='', data_tier_name="", dataset_access_type="VALID", prep_id='', create_by="", last_modified_by="", min_cdate='0', max_cdate='0', min_ldate='0', max_ldate='0', cdate='0', ldate='0', detail=False, dataset_id=-1): """ API to list dataset(s) in DBS * You can use ANY combination of these parameters in this API * In absence of parameters, all valid datasets known to the DBS instance will be returned :param dataset: Full dataset (path) of the dataset. :type dataset: str :param parent_dataset: Full dataset (path) of the dataset :type parent_dataset: str :param release_version: cmssw version :type release_version: str :param pset_hash: pset hash :type pset_hash: str :param app_name: Application name (generally it is cmsRun) :type app_name: str :param output_module_label: output_module_label :type output_module_label: str :param global_tag: global_tag :type global_tag: str :param processing_version: Processing Version :type processing_version: str :param acquisition_era_name: Acquisition Era :type acquisition_era_name: str :param run_num: Specify a specific run number or range. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. run_num=1 is not allowed. 
:type run_num: int,list,str :param physics_group_name: List only dataset having physics_group_name attribute :type physics_group_name: str :param logical_file_name: List dataset containing the logical_file_name :type logical_file_name: str :param primary_ds_name: Primary Dataset Name :type primary_ds_name: str :param primary_ds_type: Primary Dataset Type (Type of data, MC/DATA) :type primary_ds_type: str :param processed_ds_name: List datasets having this processed dataset name :type processed_ds_name: str :param data_tier_name: Data Tier :type data_tier_name: str :param dataset_access_type: Dataset Access Type ( PRODUCTION, DEPRECATED etc.) :type dataset_access_type: str :param prep_id: prep_id :type prep_id: str :param create_by: Creator of the dataset :type create_by: str :param last_modified_by: Last modifier of the dataset :type last_modified_by: str :param min_cdate: Lower limit for the creation date (unixtime) (Optional) :type min_cdate: int, str :param max_cdate: Upper limit for the creation date (unixtime) (Optional) :type max_cdate: int, str :param min_ldate: Lower limit for the last modification date (unixtime) (Optional) :type min_ldate: int, str :param max_ldate: Upper limit for the last modification date (unixtime) (Optional) :type max_ldate: int, str :param cdate: creation date (unixtime) (Optional) :type cdate: int, str :param ldate: last modification date (unixtime) (Optional) :type ldate: int, str :param detail: List all details of a dataset :type detail: bool :param dataset_id: dataset table primary key used by CMS Computing Analytics. :type dataset_id: int, long, str :returns: List of dictionaries containing the following keys (dataset). If the detail option is used. 
The dictionary contain the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type) :rtype: list of dicts """ dataset = dataset.replace("*", "%") parent_dataset = parent_dataset.replace("*", "%") release_version = release_version.replace("*", "%") pset_hash = pset_hash.replace("*", "%") app_name = app_name.replace("*", "%") output_module_label = output_module_label.replace("*", "%") global_tag = global_tag.replace("*", "%") logical_file_name = logical_file_name.replace("*", "%") physics_group_name = physics_group_name.replace("*", "%") primary_ds_name = primary_ds_name.replace("*", "%") primary_ds_type = primary_ds_type.replace("*", "%") data_tier_name = data_tier_name.replace("*", "%") dataset_access_type = dataset_access_type.replace("*", "%") processed_ds_name = processed_ds_name.replace("*", "%") acquisition_era_name = acquisition_era_name.replace("*", "%") #processing_version = processing_version.replace("*", "%") #create_by and last_modified_by have be full spelled, no wildcard will allowed. #We got them from request head so they can be either HN account name or DN. #This is depended on how an user's account is set up. # # In the next release we will require dataset has no wildcard in it. # DBS will reject wildcard search with dataset name with listDatasets call. # One should seperate the dataset into primary , process and datatier if any wildcard. # YG Oct 26, 2016 # Some of users were overwhiled by the API change. So we split the wildcarded dataset in the server instead of by the client. # YG Dec. 9 2016 # # run_num=1 caused full table scan and CERN DBS reported some of the queries ran more than 50 hours # We will disbale all the run_num=1 calls in DBS. Run_num=1 will be OK when logical_file_name is given. # YG Jan. 
15 2019 # if (run_num != -1 and logical_file_name ==''): for r in parseRunRange(run_num): if isinstance(r, basestring) or isinstance(r, int) or isinstance(r, long): if r == 1 or r == '1': dbsExceptionHandler("dbsException-invalid-input", "Run_num=1 is not a valid input.", self.logger.exception) elif isinstance(r, run_tuple): if r[0] == r[1]: dbsExceptionHandler('dbsException-invalid-input', "DBS run range must be apart at least by 1.", self.logger.exception) elif r[0] <= 1 <= r[1]: dbsExceptionHandler("dbsException-invalid-input", "Run_num=1 is not a valid input.", self.logger.exception) if( dataset and ( dataset == "/%/%/%" or dataset== "/%" or dataset == "/%/%" ) ): dataset='' elif( dataset and ( dataset.find('%') != -1 ) ) : junk, primary_ds_name, processed_ds_name, data_tier_name = dataset.split('/') dataset = '' if ( primary_ds_name == '%' ): primary_ds_name = '' if( processed_ds_name == '%' ): processed_ds_name = '' if ( data_tier_name == '%' ): data_tier_name = '' try: dataset_id = int(dataset_id) except: dbsExceptionHandler("dbsException-invalid-input2", "Invalid Input for dataset_id that has to be an int.", self.logger.exception, 'dataset_id has to be an int.') if create_by.find('*')!=-1 or create_by.find('%')!=-1 or last_modified_by.find('*')!=-1\ or last_modified_by.find('%')!=-1: dbsExceptionHandler("dbsException-invalid-input2", "Invalid Input for create_by or last_modified_by.\ No wildcard allowed.", self.logger.exception, 'No wildcards allowed for create_by or last_modified_by') try: if isinstance(min_cdate, basestring) and ('*' in min_cdate or '%' in min_cdate): min_cdate = 0 else: try: min_cdate = int(min_cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for min_cdate") if isinstance(max_cdate, basestring) and ('*' in max_cdate or '%' in max_cdate): max_cdate = 0 else: try: max_cdate = int(max_cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for max_cdate") if isinstance(min_ldate, 
basestring) and ('*' in min_ldate or '%' in min_ldate): min_ldate = 0 else: try: min_ldate = int(min_ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for min_ldate") if isinstance(max_ldate, basestring) and ('*' in max_ldate or '%' in max_ldate): max_ldate = 0 else: try: max_ldate = int(max_ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for max_ldate") if isinstance(cdate, basestring) and ('*' in cdate or '%' in cdate): cdate = 0 else: try: cdate = int(cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for cdate") if isinstance(ldate, basestring) and ('*' in ldate or '%' in ldate): ldate = 0 else: try: ldate = int(ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for ldate") except dbsException as de: dbsExceptionHandler(de.eCode, de.message, self.logger.exception, de.serverError) except Exception as ex: sError = "DBSReaderModel/listDatasets. %s \n. Exception trace: \n %s" \ % (ex, traceback.format_exc()) dbsExceptionHandler('dbsException-server-error', dbsExceptionCode['dbsException-server-error'], self.logger.exception, sError) detail = detail in (True, 1, "True", "1", 'true') try: return self.dbsDataset.listDatasets(dataset, parent_dataset, is_dataset_valid, release_version, pset_hash, app_name, output_module_label, global_tag, processing_version, acquisition_era_name, run_num, physics_group_name, logical_file_name, primary_ds_name, primary_ds_type, processed_ds_name, data_tier_name, dataset_access_type, prep_id, create_by, last_modified_by, min_cdate, max_cdate, min_ldate, max_ldate, cdate, ldate, detail, dataset_id) except dbsException as de: dbsExceptionHandler(de.eCode, de.message, self.logger.exception, de.serverError) except Exception as ex: sError = "DBSReaderModel/listdatasets. 
%s.\n Exception trace: \n %s" % (ex, traceback.format_exc()) dbsExceptionHandler('dbsException-server-error', dbsExceptionCode['dbsException-server-error'], self.logger.exception, sError)
API to list dataset(s) in DBS * You can use ANY combination of these parameters in this API * In absence of parameters, all valid datasets known to the DBS instance will be returned :param dataset: Full dataset (path) of the dataset. :type dataset: str :param parent_dataset: Full dataset (path) of the dataset :type parent_dataset: str :param release_version: cmssw version :type release_version: str :param pset_hash: pset hash :type pset_hash: str :param app_name: Application name (generally it is cmsRun) :type app_name: str :param output_module_label: output_module_label :type output_module_label: str :param global_tag: global_tag :type global_tag: str :param processing_version: Processing Version :type processing_version: str :param acquisition_era_name: Acquisition Era :type acquisition_era_name: str :param run_num: Specify a specific run number or range. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. run_num=1 is not allowed. :type run_num: int,list,str :param physics_group_name: List only dataset having physics_group_name attribute :type physics_group_name: str :param logical_file_name: List dataset containing the logical_file_name :type logical_file_name: str :param primary_ds_name: Primary Dataset Name :type primary_ds_name: str :param primary_ds_type: Primary Dataset Type (Type of data, MC/DATA) :type primary_ds_type: str :param processed_ds_name: List datasets having this processed dataset name :type processed_ds_name: str :param data_tier_name: Data Tier :type data_tier_name: str :param dataset_access_type: Dataset Access Type ( PRODUCTION, DEPRECATED etc.) 
:type dataset_access_type: str :param prep_id: prep_id :type prep_id: str :param create_by: Creator of the dataset :type create_by: str :param last_modified_by: Last modifier of the dataset :type last_modified_by: str :param min_cdate: Lower limit for the creation date (unixtime) (Optional) :type min_cdate: int, str :param max_cdate: Upper limit for the creation date (unixtime) (Optional) :type max_cdate: int, str :param min_ldate: Lower limit for the last modification date (unixtime) (Optional) :type min_ldate: int, str :param max_ldate: Upper limit for the last modification date (unixtime) (Optional) :type max_ldate: int, str :param cdate: creation date (unixtime) (Optional) :type cdate: int, str :param ldate: last modification date (unixtime) (Optional) :type ldate: int, str :param detail: List all details of a dataset :type detail: bool :param dataset_id: dataset table primary key used by CMS Computing Analytics. :type dataset_id: int, long, str :returns: List of dictionaries containing the following keys (dataset). If the detail option is used. The dictionary contain the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type) :rtype: list of dicts
Below is the instruction that describes the task: ### Input: API to list dataset(s) in DBS * You can use ANY combination of these parameters in this API * In absence of parameters, all valid datasets known to the DBS instance will be returned :param dataset: Full dataset (path) of the dataset. :type dataset: str :param parent_dataset: Full dataset (path) of the dataset :type parent_dataset: str :param release_version: cmssw version :type release_version: str :param pset_hash: pset hash :type pset_hash: str :param app_name: Application name (generally it is cmsRun) :type app_name: str :param output_module_label: output_module_label :type output_module_label: str :param global_tag: global_tag :type global_tag: str :param processing_version: Processing Version :type processing_version: str :param acquisition_era_name: Acquisition Era :type acquisition_era_name: str :param run_num: Specify a specific run number or range. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. run_num=1 is not allowed. :type run_num: int,list,str :param physics_group_name: List only dataset having physics_group_name attribute :type physics_group_name: str :param logical_file_name: List dataset containing the logical_file_name :type logical_file_name: str :param primary_ds_name: Primary Dataset Name :type primary_ds_name: str :param primary_ds_type: Primary Dataset Type (Type of data, MC/DATA) :type primary_ds_type: str :param processed_ds_name: List datasets having this processed dataset name :type processed_ds_name: str :param data_tier_name: Data Tier :type data_tier_name: str :param dataset_access_type: Dataset Access Type ( PRODUCTION, DEPRECATED etc.) 
:type dataset_access_type: str :param prep_id: prep_id :type prep_id: str :param create_by: Creator of the dataset :type create_by: str :param last_modified_by: Last modifier of the dataset :type last_modified_by: str :param min_cdate: Lower limit for the creation date (unixtime) (Optional) :type min_cdate: int, str :param max_cdate: Upper limit for the creation date (unixtime) (Optional) :type max_cdate: int, str :param min_ldate: Lower limit for the last modification date (unixtime) (Optional) :type min_ldate: int, str :param max_ldate: Upper limit for the last modification date (unixtime) (Optional) :type max_ldate: int, str :param cdate: creation date (unixtime) (Optional) :type cdate: int, str :param ldate: last modification date (unixtime) (Optional) :type ldate: int, str :param detail: List all details of a dataset :type detail: bool :param dataset_id: dataset table primary key used by CMS Computing Analytics. :type dataset_id: int, long, str :returns: List of dictionaries containing the following keys (dataset). If the detail option is used. 
The dictionary contain the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type) :rtype: list of dicts ### Response: def listDatasets(self, dataset="", parent_dataset="", is_dataset_valid=1, release_version="", pset_hash="", app_name="", output_module_label="", global_tag="", processing_version=0, acquisition_era_name="", run_num=-1, physics_group_name="", logical_file_name="", primary_ds_name="", primary_ds_type="", processed_ds_name='', data_tier_name="", dataset_access_type="VALID", prep_id='', create_by="", last_modified_by="", min_cdate='0', max_cdate='0', min_ldate='0', max_ldate='0', cdate='0', ldate='0', detail=False, dataset_id=-1): """ API to list dataset(s) in DBS * You can use ANY combination of these parameters in this API * In absence of parameters, all valid datasets known to the DBS instance will be returned :param dataset: Full dataset (path) of the dataset. :type dataset: str :param parent_dataset: Full dataset (path) of the dataset :type parent_dataset: str :param release_version: cmssw version :type release_version: str :param pset_hash: pset hash :type pset_hash: str :param app_name: Application name (generally it is cmsRun) :type app_name: str :param output_module_label: output_module_label :type output_module_label: str :param global_tag: global_tag :type global_tag: str :param processing_version: Processing Version :type processing_version: str :param acquisition_era_name: Acquisition Era :type acquisition_era_name: str :param run_num: Specify a specific run number or range. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. run_num=1 is not allowed. 
:type run_num: int,list,str :param physics_group_name: List only dataset having physics_group_name attribute :type physics_group_name: str :param logical_file_name: List dataset containing the logical_file_name :type logical_file_name: str :param primary_ds_name: Primary Dataset Name :type primary_ds_name: str :param primary_ds_type: Primary Dataset Type (Type of data, MC/DATA) :type primary_ds_type: str :param processed_ds_name: List datasets having this processed dataset name :type processed_ds_name: str :param data_tier_name: Data Tier :type data_tier_name: str :param dataset_access_type: Dataset Access Type ( PRODUCTION, DEPRECATED etc.) :type dataset_access_type: str :param prep_id: prep_id :type prep_id: str :param create_by: Creator of the dataset :type create_by: str :param last_modified_by: Last modifier of the dataset :type last_modified_by: str :param min_cdate: Lower limit for the creation date (unixtime) (Optional) :type min_cdate: int, str :param max_cdate: Upper limit for the creation date (unixtime) (Optional) :type max_cdate: int, str :param min_ldate: Lower limit for the last modification date (unixtime) (Optional) :type min_ldate: int, str :param max_ldate: Upper limit for the last modification date (unixtime) (Optional) :type max_ldate: int, str :param cdate: creation date (unixtime) (Optional) :type cdate: int, str :param ldate: last modification date (unixtime) (Optional) :type ldate: int, str :param detail: List all details of a dataset :type detail: bool :param dataset_id: dataset table primary key used by CMS Computing Analytics. :type dataset_id: int, long, str :returns: List of dictionaries containing the following keys (dataset). If the detail option is used. 
The dictionary contain the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type) :rtype: list of dicts """ dataset = dataset.replace("*", "%") parent_dataset = parent_dataset.replace("*", "%") release_version = release_version.replace("*", "%") pset_hash = pset_hash.replace("*", "%") app_name = app_name.replace("*", "%") output_module_label = output_module_label.replace("*", "%") global_tag = global_tag.replace("*", "%") logical_file_name = logical_file_name.replace("*", "%") physics_group_name = physics_group_name.replace("*", "%") primary_ds_name = primary_ds_name.replace("*", "%") primary_ds_type = primary_ds_type.replace("*", "%") data_tier_name = data_tier_name.replace("*", "%") dataset_access_type = dataset_access_type.replace("*", "%") processed_ds_name = processed_ds_name.replace("*", "%") acquisition_era_name = acquisition_era_name.replace("*", "%") #processing_version = processing_version.replace("*", "%") #create_by and last_modified_by have be full spelled, no wildcard will allowed. #We got them from request head so they can be either HN account name or DN. #This is depended on how an user's account is set up. # # In the next release we will require dataset has no wildcard in it. # DBS will reject wildcard search with dataset name with listDatasets call. # One should seperate the dataset into primary , process and datatier if any wildcard. # YG Oct 26, 2016 # Some of users were overwhiled by the API change. So we split the wildcarded dataset in the server instead of by the client. # YG Dec. 9 2016 # # run_num=1 caused full table scan and CERN DBS reported some of the queries ran more than 50 hours # We will disbale all the run_num=1 calls in DBS. Run_num=1 will be OK when logical_file_name is given. # YG Jan. 
15 2019 # if (run_num != -1 and logical_file_name ==''): for r in parseRunRange(run_num): if isinstance(r, basestring) or isinstance(r, int) or isinstance(r, long): if r == 1 or r == '1': dbsExceptionHandler("dbsException-invalid-input", "Run_num=1 is not a valid input.", self.logger.exception) elif isinstance(r, run_tuple): if r[0] == r[1]: dbsExceptionHandler('dbsException-invalid-input', "DBS run range must be apart at least by 1.", self.logger.exception) elif r[0] <= 1 <= r[1]: dbsExceptionHandler("dbsException-invalid-input", "Run_num=1 is not a valid input.", self.logger.exception) if( dataset and ( dataset == "/%/%/%" or dataset== "/%" or dataset == "/%/%" ) ): dataset='' elif( dataset and ( dataset.find('%') != -1 ) ) : junk, primary_ds_name, processed_ds_name, data_tier_name = dataset.split('/') dataset = '' if ( primary_ds_name == '%' ): primary_ds_name = '' if( processed_ds_name == '%' ): processed_ds_name = '' if ( data_tier_name == '%' ): data_tier_name = '' try: dataset_id = int(dataset_id) except: dbsExceptionHandler("dbsException-invalid-input2", "Invalid Input for dataset_id that has to be an int.", self.logger.exception, 'dataset_id has to be an int.') if create_by.find('*')!=-1 or create_by.find('%')!=-1 or last_modified_by.find('*')!=-1\ or last_modified_by.find('%')!=-1: dbsExceptionHandler("dbsException-invalid-input2", "Invalid Input for create_by or last_modified_by.\ No wildcard allowed.", self.logger.exception, 'No wildcards allowed for create_by or last_modified_by') try: if isinstance(min_cdate, basestring) and ('*' in min_cdate or '%' in min_cdate): min_cdate = 0 else: try: min_cdate = int(min_cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for min_cdate") if isinstance(max_cdate, basestring) and ('*' in max_cdate or '%' in max_cdate): max_cdate = 0 else: try: max_cdate = int(max_cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for max_cdate") if isinstance(min_ldate, 
basestring) and ('*' in min_ldate or '%' in min_ldate): min_ldate = 0 else: try: min_ldate = int(min_ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for min_ldate") if isinstance(max_ldate, basestring) and ('*' in max_ldate or '%' in max_ldate): max_ldate = 0 else: try: max_ldate = int(max_ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for max_ldate") if isinstance(cdate, basestring) and ('*' in cdate or '%' in cdate): cdate = 0 else: try: cdate = int(cdate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for cdate") if isinstance(ldate, basestring) and ('*' in ldate or '%' in ldate): ldate = 0 else: try: ldate = int(ldate) except: dbsExceptionHandler("dbsException-invalid-input", "invalid input for ldate") except dbsException as de: dbsExceptionHandler(de.eCode, de.message, self.logger.exception, de.serverError) except Exception as ex: sError = "DBSReaderModel/listDatasets. %s \n. Exception trace: \n %s" \ % (ex, traceback.format_exc()) dbsExceptionHandler('dbsException-server-error', dbsExceptionCode['dbsException-server-error'], self.logger.exception, sError) detail = detail in (True, 1, "True", "1", 'true') try: return self.dbsDataset.listDatasets(dataset, parent_dataset, is_dataset_valid, release_version, pset_hash, app_name, output_module_label, global_tag, processing_version, acquisition_era_name, run_num, physics_group_name, logical_file_name, primary_ds_name, primary_ds_type, processed_ds_name, data_tier_name, dataset_access_type, prep_id, create_by, last_modified_by, min_cdate, max_cdate, min_ldate, max_ldate, cdate, ldate, detail, dataset_id) except dbsException as de: dbsExceptionHandler(de.eCode, de.message, self.logger.exception, de.serverError) except Exception as ex: sError = "DBSReaderModel/listdatasets. 
%s.\n Exception trace: \n %s" % (ex, traceback.format_exc()) dbsExceptionHandler('dbsException-server-error', dbsExceptionCode['dbsException-server-error'], self.logger.exception, sError)
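The listDatasets record above first maps shell-style `*` wildcards to SQL-style `%`, then splits a wildcarded `/primary/processed/tier` path into separate fields, treating a bare `%` component as unconstrained. A minimal standalone sketch of that splitting step, assuming a full three-slash path as input (the function name is illustrative, not part of DBS):

```python
def split_wildcard_dataset(dataset):
    """Split a wildcarded '/primary/processed/tier' path into its three
    components, mapping a bare '%' wildcard to an empty (unconstrained)
    field, mirroring the server-side splitting in listDatasets above."""
    # Normalize the shell-style '*' wildcard to the SQL-style '%' first.
    dataset = dataset.replace("*", "%")
    # A fully wildcarded path constrains nothing.
    if dataset in ("/%", "/%/%", "/%/%/%"):
        return "", "", ""
    # Leading '/' yields an empty first element on split.
    _, primary, processed, tier = dataset.split("/")
    return (
        "" if primary == "%" else primary,
        "" if processed == "%" else processed,
        "" if tier == "%" else tier,
    )
```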
def joint_sfs_folded_scaled(ac1, ac2, n1=None, n2=None): """Compute the joint folded site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations. Parameters ---------- ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. n1, n2 : int, optional The total number of chromosomes called in each population. Returns ------- joint_sfs_folded_scaled : ndarray, int, shape (n1//2 + 1, n2//2 + 1) Array where the (i, j)th element is the scaled frequency of variant sites with a minor allele count of i in the first population and j in the second population. """ # noqa # check inputs ac1, n1 = _check_ac_n(ac1, n1) ac2, n2 = _check_ac_n(ac2, n2) # compute site frequency spectrum s = joint_sfs_folded(ac1, ac2, n1=n1, n2=n2) # apply scaling s = scale_joint_sfs_folded(s, n1, n2) return s
Compute the joint folded site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations. Parameters ---------- ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. n1, n2 : int, optional The total number of chromosomes called in each population. Returns ------- joint_sfs_folded_scaled : ndarray, int, shape (n1//2 + 1, n2//2 + 1) Array where the (i, j)th element is the scaled frequency of variant sites with a minor allele count of i in the first population and j in the second population.
Below is the the instruction that describes the task: ### Input: Compute the joint folded site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations. Parameters ---------- ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. n1, n2 : int, optional The total number of chromosomes called in each population. Returns ------- joint_sfs_folded_scaled : ndarray, int, shape (n1//2 + 1, n2//2 + 1) Array where the (i, j)th element is the scaled frequency of variant sites with a minor allele count of i in the first population and j in the second population. ### Response: def joint_sfs_folded_scaled(ac1, ac2, n1=None, n2=None): """Compute the joint folded site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations. Parameters ---------- ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. n1, n2 : int, optional The total number of chromosomes called in each population. Returns ------- joint_sfs_folded_scaled : ndarray, int, shape (n1//2 + 1, n2//2 + 1) Array where the (i, j)th element is the scaled frequency of variant sites with a minor allele count of i in the first population and j in the second population. """ # noqa # check inputs ac1, n1 = _check_ac_n(ac1, n1) ac2, n2 = _check_ac_n(ac2, n2) # compute site frequency spectrum s = joint_sfs_folded(ac1, ac2, n1=n1, n2=n2) # apply scaling s = scale_joint_sfs_folded(s, n1, n2) return s
def create_document(self, data, throw_on_exists=False): """ Creates a new document in the remote and locally cached database, using the data provided. If an _id is included in the data then depending on that _id either a :class:`~cloudant.document.Document` or a :class:`~cloudant.design_document.DesignDocument` object will be added to the locally cached database and returned by this method. :param dict data: Dictionary of document JSON data, containing _id. :param bool throw_on_exists: Optional flag dictating whether to raise an exception if the document already exists in the database. :returns: A :class:`~cloudant.document.Document` or :class:`~cloudant.design_document.DesignDocument` instance corresponding to the new document in the database. """ docid = data.get('_id', None) doc = None if docid and docid.startswith('_design/'): doc = DesignDocument(self, docid) else: doc = Document(self, docid) doc.update(data) try: doc.create() except HTTPError as error: if error.response.status_code == 409: if throw_on_exists: raise CloudantDatabaseException(409, docid) else: raise super(CouchDatabase, self).__setitem__(doc['_id'], doc) return doc
Creates a new document in the remote and locally cached database, using the data provided. If an _id is included in the data then depending on that _id either a :class:`~cloudant.document.Document` or a :class:`~cloudant.design_document.DesignDocument` object will be added to the locally cached database and returned by this method. :param dict data: Dictionary of document JSON data, containing _id. :param bool throw_on_exists: Optional flag dictating whether to raise an exception if the document already exists in the database. :returns: A :class:`~cloudant.document.Document` or :class:`~cloudant.design_document.DesignDocument` instance corresponding to the new document in the database.
Below is the instruction that describes the task: ### Input: Creates a new document in the remote and locally cached database, using the data provided. If an _id is included in the data then depending on that _id either a :class:`~cloudant.document.Document` or a :class:`~cloudant.design_document.DesignDocument` object will be added to the locally cached database and returned by this method. :param dict data: Dictionary of document JSON data, containing _id. :param bool throw_on_exists: Optional flag dictating whether to raise an exception if the document already exists in the database. :returns: A :class:`~cloudant.document.Document` or :class:`~cloudant.design_document.DesignDocument` instance corresponding to the new document in the database. ### Response: def create_document(self, data, throw_on_exists=False): """ Creates a new document in the remote and locally cached database, using the data provided. If an _id is included in the data then depending on that _id either a :class:`~cloudant.document.Document` or a :class:`~cloudant.design_document.DesignDocument` object will be added to the locally cached database and returned by this method. :param dict data: Dictionary of document JSON data, containing _id. :param bool throw_on_exists: Optional flag dictating whether to raise an exception if the document already exists in the database. :returns: A :class:`~cloudant.document.Document` or :class:`~cloudant.design_document.DesignDocument` instance corresponding to the new document in the database. """ docid = data.get('_id', None) doc = None if docid and docid.startswith('_design/'): doc = DesignDocument(self, docid) else: doc = Document(self, docid) doc.update(data) try: doc.create() except HTTPError as error: if error.response.status_code == 409: if throw_on_exists: raise CloudantDatabaseException(409, docid) else: raise super(CouchDatabase, self).__setitem__(doc['_id'], doc) return doc
def increase_by_changes(self, changes_amount, ratio): """Increase version by amount of changes :param changes_amount: Number of changes done :param ratio: Ratio changes :return: Increases version accordingly to changes """ increases = round(changes_amount * ratio) return self.increase(int(increases))
Increase version by amount of changes :param changes_amount: Number of changes done :param ratio: Ratio changes :return: Increases version accordingly to changes
Below is the instruction that describes the task: ### Input: Increase version by amount of changes :param changes_amount: Number of changes done :param ratio: Ratio changes :return: Increases version accordingly to changes ### Response: def increase_by_changes(self, changes_amount, ratio): """Increase version by amount of changes :param changes_amount: Number of changes done :param ratio: Ratio changes :return: Increases version accordingly to changes """ increases = round(changes_amount * ratio) return self.increase(int(increases))
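The conversion in increase_by_changes above boils down to `int(round(changes * ratio))`. One subtlety worth keeping in mind: Python 3's built-in `round` uses banker's rounding, so exact halves round to the nearest even integer. A hedged one-line sketch (the name is illustrative):

```python
def increases_for_changes(changes_amount, ratio):
    """Compute how many version increases a batch of changes yields,
    following the round-then-truncate logic of increase_by_changes above.
    Note: Python 3's round() is banker's rounding, so round(2.5) == 2."""
    return int(round(changes_amount * ratio))
```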
def get_conn(): ''' Return a conn object for the passed VM data ''' driver = get_driver(Provider.GCE) provider = get_configured_provider() project = config.get_cloud_config_value('project', provider, __opts__) email = config.get_cloud_config_value( 'service_account_email_address', provider, __opts__) private_key = config.get_cloud_config_value( 'service_account_private_key', provider, __opts__) gce = driver(email, private_key, project=project) gce.connection.user_agent_append('{0}/{1}'.format(_UA_PRODUCT, _UA_VERSION)) return gce
Return a conn object for the passed VM data
Below is the instruction that describes the task: ### Input: Return a conn object for the passed VM data ### Response: def get_conn(): ''' Return a conn object for the passed VM data ''' driver = get_driver(Provider.GCE) provider = get_configured_provider() project = config.get_cloud_config_value('project', provider, __opts__) email = config.get_cloud_config_value( 'service_account_email_address', provider, __opts__) private_key = config.get_cloud_config_value( 'service_account_private_key', provider, __opts__) gce = driver(email, private_key, project=project) gce.connection.user_agent_append('{0}/{1}'.format(_UA_PRODUCT, _UA_VERSION)) return gce
def create(self, friendly_name, code_length=values.unset, lookup_enabled=values.unset, skip_sms_to_landlines=values.unset, dtmf_input_required=values.unset, tts_name=values.unset, psd2_enabled=values.unset): """ Create a new ServiceInstance :param unicode friendly_name: A string to describe the verification service :param unicode code_length: The length of the verification code to generate :param bool lookup_enabled: Whether to perform a lookup with each verification :param bool skip_sms_to_landlines: Whether to skip sending SMS verifications to landlines :param bool dtmf_input_required: Whether to ask the user to press a number before delivering the verify code in a phone call :param unicode tts_name: The name of an alternative text-to-speech service to use in phone calls :param bool psd2_enabled: Whether to pass PSD2 transaction parameters when starting a verification :returns: Newly created ServiceInstance :rtype: twilio.rest.verify.v2.service.ServiceInstance """ data = values.of({ 'FriendlyName': friendly_name, 'CodeLength': code_length, 'LookupEnabled': lookup_enabled, 'SkipSmsToLandlines': skip_sms_to_landlines, 'DtmfInputRequired': dtmf_input_required, 'TtsName': tts_name, 'Psd2Enabled': psd2_enabled, }) payload = self._version.create( 'POST', self._uri, data=data, ) return ServiceInstance(self._version, payload, )
Create a new ServiceInstance :param unicode friendly_name: A string to describe the verification service :param unicode code_length: The length of the verification code to generate :param bool lookup_enabled: Whether to perform a lookup with each verification :param bool skip_sms_to_landlines: Whether to skip sending SMS verifications to landlines :param bool dtmf_input_required: Whether to ask the user to press a number before delivering the verify code in a phone call :param unicode tts_name: The name of an alternative text-to-speech service to use in phone calls :param bool psd2_enabled: Whether to pass PSD2 transaction parameters when starting a verification :returns: Newly created ServiceInstance :rtype: twilio.rest.verify.v2.service.ServiceInstance
Below is the the instruction that describes the task: ### Input: Create a new ServiceInstance :param unicode friendly_name: A string to describe the verification service :param unicode code_length: The length of the verification code to generate :param bool lookup_enabled: Whether to perform a lookup with each verification :param bool skip_sms_to_landlines: Whether to skip sending SMS verifications to landlines :param bool dtmf_input_required: Whether to ask the user to press a number before delivering the verify code in a phone call :param unicode tts_name: The name of an alternative text-to-speech service to use in phone calls :param bool psd2_enabled: Whether to pass PSD2 transaction parameters when starting a verification :returns: Newly created ServiceInstance :rtype: twilio.rest.verify.v2.service.ServiceInstance ### Response: def create(self, friendly_name, code_length=values.unset, lookup_enabled=values.unset, skip_sms_to_landlines=values.unset, dtmf_input_required=values.unset, tts_name=values.unset, psd2_enabled=values.unset): """ Create a new ServiceInstance :param unicode friendly_name: A string to describe the verification service :param unicode code_length: The length of the verification code to generate :param bool lookup_enabled: Whether to perform a lookup with each verification :param bool skip_sms_to_landlines: Whether to skip sending SMS verifications to landlines :param bool dtmf_input_required: Whether to ask the user to press a number before delivering the verify code in a phone call :param unicode tts_name: The name of an alternative text-to-speech service to use in phone calls :param bool psd2_enabled: Whether to pass PSD2 transaction parameters when starting a verification :returns: Newly created ServiceInstance :rtype: twilio.rest.verify.v2.service.ServiceInstance """ data = values.of({ 'FriendlyName': friendly_name, 'CodeLength': code_length, 'LookupEnabled': lookup_enabled, 'SkipSmsToLandlines': skip_sms_to_landlines, 
'DtmfInputRequired': dtmf_input_required, 'TtsName': tts_name, 'Psd2Enabled': psd2_enabled, }) payload = self._version.create( 'POST', self._uri, data=data, ) return ServiceInstance(self._version, payload, )
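The create() record above relies on `values.of` to drop every parameter still set to the `values.unset` sentinel, so optional arguments the caller never supplied are not sent in the POST body. A minimal sketch of that filtering step — `UNSET` here stands in for Twilio's sentinel and this is not the real implementation:

```python
# Stand-in for twilio's values.unset sentinel (assumption, not the real object).
UNSET = object()

def of(params):
    """Drop any entries still set to the unset sentinel, mirroring the
    values.of() step used by create() above before POSTing."""
    return {k: v for k, v in params.items() if v is not UNSET}
```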
def finish(self, blueprint, documents): """Finish a list of pre-assembled documents""" # Reset the blueprint blueprint.reset() # Finish the documents finished = [] for document in documents: finished.append(blueprint.finish(document)) return finished
Finish a list of pre-assembled documents
Below is the instruction that describes the task: ### Input: Finish a list of pre-assembled documents ### Response: def finish(self, blueprint, documents): """Finish a list of pre-assembled documents""" # Reset the blueprint blueprint.reset() # Finish the documents finished = [] for document in documents: finished.append(blueprint.finish(document)) return finished
def solve(self): """Run the ACE calculational loop.""" self._initialize() while self._outer_error_is_decreasing() and self._outer_iters < MAX_OUTERS: print('* Starting outer iteration {0:03d}. Current err = {1:12.5E}' ''.format(self._outer_iters, self._last_outer_error)) self._iterate_to_update_x_transforms() self._update_y_transform() self._outer_iters += 1
Run the ACE calculational loop.
Below is the instruction that describes the task: ### Input: Run the ACE calculational loop. ### Response: def solve(self): """Run the ACE calculational loop.""" self._initialize() while self._outer_error_is_decreasing() and self._outer_iters < MAX_OUTERS: print('* Starting outer iteration {0:03d}. Current err = {1:12.5E}' ''.format(self._outer_iters, self._last_outer_error)) self._iterate_to_update_x_transforms() self._update_y_transform() self._outer_iters += 1
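The solve() record above iterates outer passes while the error keeps decreasing, up to a hard cap. A self-contained sketch of that loop shape, with the per-iteration work abstracted into a `step` callable that returns the current error (names and the exact stopping test are illustrative, not ACE internals):

```python
def run_until_stalled(step, max_iters=100):
    """Run step() repeatedly while its reported error keeps decreasing,
    stopping at the first non-improving iteration or at max_iters.
    Returns (iterations run, best error seen)."""
    last_err = float("inf")
    iters = 0
    while iters < max_iters:
        err = step()
        iters += 1
        if err >= last_err:  # stopped improving
            break
        last_err = err
    return iters, last_err
```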
def show_list(self, the_list, cur_p=''): ''' List of the user collections. ''' current_page_num = int(cur_p) if cur_p else 1 current_page_num = 1 if current_page_num < 1 else current_page_num num_of_cat = MCollect.count_of_user(self.userinfo.uid) page_num = int(num_of_cat / CMS_CFG['list_num']) + 1 kwd = {'current_page': current_page_num} self.render('misc/collect/list.html', recs_collect=MCollect.query_pager_by_all(self.userinfo.uid, current_page_num).objects(), pager=tools.gen_pager_purecss('/collect/{0}'.format(the_list), page_num, current_page_num), userinfo=self.userinfo, cfg=CMS_CFG, kwd=kwd)
List of the user collections.
Below is the instruction that describes the task: ### Input: List of the user collections. ### Response: def show_list(self, the_list, cur_p=''): ''' List of the user collections. ''' current_page_num = int(cur_p) if cur_p else 1 current_page_num = 1 if current_page_num < 1 else current_page_num num_of_cat = MCollect.count_of_user(self.userinfo.uid) page_num = int(num_of_cat / CMS_CFG['list_num']) + 1 kwd = {'current_page': current_page_num} self.render('misc/collect/list.html', recs_collect=MCollect.query_pager_by_all(self.userinfo.uid, current_page_num).objects(), pager=tools.gen_pager_purecss('/collect/{0}'.format(the_list), page_num, current_page_num), userinfo=self.userinfo, cfg=CMS_CFG, kwd=kwd)
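One detail in show_list above: `page_num = int(num_of_cat / CMS_CFG['list_num']) + 1` yields a spare empty page whenever the count is an exact multiple of the page size (20 items at 10 per page gives 3 pages). A ceiling-division sketch that avoids this, offered as an illustration rather than a patch to the codebase:

```python
def page_count(total_items, per_page):
    """Ceiling-division page count; avoids the spare empty page that
    int(total / per_page) + 1 produces when total is an exact multiple.
    -(-a // b) is the usual integer ceiling-division idiom."""
    return max(1, -(-total_items // per_page))
```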
def operations(self): """Instance depends on the API version: * 2018-03-31: :class:`Operations<azure.mgmt.containerservice.v2018_03_31.operations.Operations>` * 2018-08-01-preview: :class:`Operations<azure.mgmt.containerservice.v2018_08_01_preview.operations.Operations>` * 2019-02-01: :class:`Operations<azure.mgmt.containerservice.v2019_02_01.operations.Operations>` """ api_version = self._get_api_version('operations') if api_version == '2018-03-31': from .v2018_03_31.operations import Operations as OperationClass elif api_version == '2018-08-01-preview': from .v2018_08_01_preview.operations import Operations as OperationClass elif api_version == '2019-02-01': from .v2019_02_01.operations import Operations as OperationClass else: raise NotImplementedError("APIVersion {} is not available".format(api_version)) return OperationClass(self._client, self.config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
Instance depends on the API version: * 2018-03-31: :class:`Operations<azure.mgmt.containerservice.v2018_03_31.operations.Operations>` * 2018-08-01-preview: :class:`Operations<azure.mgmt.containerservice.v2018_08_01_preview.operations.Operations>` * 2019-02-01: :class:`Operations<azure.mgmt.containerservice.v2019_02_01.operations.Operations>`
Below is the instruction that describes the task: ### Input: Instance depends on the API version: * 2018-03-31: :class:`Operations<azure.mgmt.containerservice.v2018_03_31.operations.Operations>` * 2018-08-01-preview: :class:`Operations<azure.mgmt.containerservice.v2018_08_01_preview.operations.Operations>` * 2019-02-01: :class:`Operations<azure.mgmt.containerservice.v2019_02_01.operations.Operations>` ### Response: def operations(self): """Instance depends on the API version: * 2018-03-31: :class:`Operations<azure.mgmt.containerservice.v2018_03_31.operations.Operations>` * 2018-08-01-preview: :class:`Operations<azure.mgmt.containerservice.v2018_08_01_preview.operations.Operations>` * 2019-02-01: :class:`Operations<azure.mgmt.containerservice.v2019_02_01.operations.Operations>` """ api_version = self._get_api_version('operations') if api_version == '2018-03-31': from .v2018_03_31.operations import Operations as OperationClass elif api_version == '2018-08-01-preview': from .v2018_08_01_preview.operations import Operations as OperationClass elif api_version == '2019-02-01': from .v2019_02_01.operations import Operations as OperationClass else: raise NotImplementedError("APIVersion {} is not available".format(api_version)) return OperationClass(self._client, self.config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
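The operations() record above selects an implementation module with an if/elif chain over the API version string. The same dispatch can be sketched as a dictionary lookup; the version strings below come from the docstring, while the returned values are placeholder strings rather than the real Azure classes:

```python
def select_operation_class(api_version):
    """Version-to-implementation dispatch as a dict lookup, sketching the
    if/elif chain in operations() above. Values are placeholders."""
    registry = {
        '2018-03-31': 'v2018_03_31.Operations',
        '2018-08-01-preview': 'v2018_08_01_preview.Operations',
        '2019-02-01': 'v2019_02_01.Operations',
    }
    try:
        return registry[api_version]
    except KeyError:
        # Same failure mode as the original chain's else branch.
        raise NotImplementedError(
            "APIVersion {} is not available".format(api_version))
```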
def backend_notification(self, event=None, parameters=None): """The Alignak backend raises an event to the Alignak arbiter ----- Possible events are: - creation, for a realm or an host creation - deletion, for a realm or an host deletion Calls the reload configuration function if event is creation or deletion Else, nothing for the moment! In case of any error, this function returns an object containing some properties: '_status': 'ERR' because of the error `_message`: some more explanations about the error The `_status` field is 'OK' with an according `_message` to explain what the Arbiter will do depending upon the notification. :return: dict """ # request_parameters = cherrypy.request.json # event = request_parameters.get('event', event) # parameters = request_parameters.get('parameters', parameters) if event is None: data = cherrypy.request.json event = data.get('event', None) if parameters is None: data = cherrypy.request.json parameters = data.get('parameters', None) logger.warning("I got a backend notification: %s / %s", event, parameters) # For a configuration reload event... if event in ['creation', 'deletion']: # If I'm the master, ignore the command and raise a log if not self.app.is_master: message = u"I received a request to reload the monitored configuration. " \ u"I am not the Master arbiter, I ignore and continue to run." logger.warning(message) return {'_status': u'ERR', '_message': message} message = "I received a request to reload the monitored configuration." if self.app.loading_configuration: message += "I am still reloading the monitored configuration ;)" logger.warning(message) self.app.need_config_reload = True return {'_status': u'OK', '_message': message} return {'_status': u'OK', '_message': u"No action to do"}
The Alignak backend raises an event to the Alignak arbiter ----- Possible events are: - creation, for a realm or an host creation - deletion, for a realm or an host deletion Calls the reload configuration function if event is creation or deletion Else, nothing for the moment! In case of any error, this function returns an object containing some properties: '_status': 'ERR' because of the error `_message`: some more explanations about the error The `_status` field is 'OK' with an according `_message` to explain what the Arbiter will do depending upon the notification. :return: dict
Below is the the instruction that describes the task: ### Input: The Alignak backend raises an event to the Alignak arbiter ----- Possible events are: - creation, for a realm or an host creation - deletion, for a realm or an host deletion Calls the reload configuration function if event is creation or deletion Else, nothing for the moment! In case of any error, this function returns an object containing some properties: '_status': 'ERR' because of the error `_message`: some more explanations about the error The `_status` field is 'OK' with an according `_message` to explain what the Arbiter will do depending upon the notification. :return: dict ### Response: def backend_notification(self, event=None, parameters=None): """The Alignak backend raises an event to the Alignak arbiter ----- Possible events are: - creation, for a realm or an host creation - deletion, for a realm or an host deletion Calls the reload configuration function if event is creation or deletion Else, nothing for the moment! In case of any error, this function returns an object containing some properties: '_status': 'ERR' because of the error `_message`: some more explanations about the error The `_status` field is 'OK' with an according `_message` to explain what the Arbiter will do depending upon the notification. :return: dict """ # request_parameters = cherrypy.request.json # event = request_parameters.get('event', event) # parameters = request_parameters.get('parameters', parameters) if event is None: data = cherrypy.request.json event = data.get('event', None) if parameters is None: data = cherrypy.request.json parameters = data.get('parameters', None) logger.warning("I got a backend notification: %s / %s", event, parameters) # For a configuration reload event... if event in ['creation', 'deletion']: # If I'm the master, ignore the command and raise a log if not self.app.is_master: message = u"I received a request to reload the monitored configuration. " \ u"I am not the Master arbiter, I ignore and continue to run." logger.warning(message) return {'_status': u'ERR', '_message': message} message = "I received a request to reload the monitored configuration." if self.app.loading_configuration: message += "I am still reloading the monitored configuration ;)" logger.warning(message) self.app.need_config_reload = True return {'_status': u'OK', '_message': message} return {'_status': u'OK', '_message': u"No action to do"}
def encompasses(self, span): """ Returns true if the given span fits inside this one """ if isinstance(span, list): return [sp for sp in span if self._encompasses(sp)] return self._encompasses(span)
Returns true if the given span fits inside this one
Below is the the instruction that describes the task: ### Input: Returns true if the given span fits inside this one ### Response: def encompasses(self, span): """ Returns true if the given span fits inside this one """ if isinstance(span, list): return [sp for sp in span if self._encompasses(sp)] return self._encompasses(span)
def add_bool_option(self, *args, **kwargs): """ Add a boolean option. @keyword help: Option description. """ dest = [o for o in args if o.startswith("--")][0].replace("--", "").replace("-", "_") self.parser.add_option(dest=dest, action="store_true", default=False, help=kwargs['help'], *args)
Add a boolean option. @keyword help: Option description.
Below is the the instruction that describes the task: ### Input: Add a boolean option. @keyword help: Option description. ### Response: def add_bool_option(self, *args, **kwargs): """ Add a boolean option. @keyword help: Option description. """ dest = [o for o in args if o.startswith("--")][0].replace("--", "").replace("-", "_") self.parser.add_option(dest=dest, action="store_true", default=False, help=kwargs['help'], *args)
def sample_lonlat(self, n): """ Sample 2D distribution of points in lon, lat """ # From http://en.wikipedia.org/wiki/Ellipse#General_parametric_form # However, Martin et al. (2009) use PA theta "from North to East" # Definition of phi (position angle) is offset by pi/4 # Definition of t (eccentric anamoly) remains the same (x,y-frame usual) # In the end, everything is trouble because we use glon, glat... radius = self.sample_radius(n) a = radius; b = self.jacobian * radius t = 2. * np.pi * np.random.rand(n) cost,sint = np.cos(t),np.sin(t) phi = np.pi/2. - np.deg2rad(self.theta) cosphi,sinphi = np.cos(phi),np.sin(phi) x = a*cost*cosphi - b*sint*sinphi y = a*cost*sinphi + b*sint*cosphi if self.projector is None: logger.debug("Creating AITOFF projector for sampling") projector = Projector(self.lon,self.lat,'ait') else: projector = self.projector lon, lat = projector.imageToSphere(x, y) return lon, lat
Sample 2D distribution of points in lon, lat
Below is the the instruction that describes the task: ### Input: Sample 2D distribution of points in lon, lat ### Response: def sample_lonlat(self, n): """ Sample 2D distribution of points in lon, lat """ # From http://en.wikipedia.org/wiki/Ellipse#General_parametric_form # However, Martin et al. (2009) use PA theta "from North to East" # Definition of phi (position angle) is offset by pi/4 # Definition of t (eccentric anamoly) remains the same (x,y-frame usual) # In the end, everything is trouble because we use glon, glat... radius = self.sample_radius(n) a = radius; b = self.jacobian * radius t = 2. * np.pi * np.random.rand(n) cost,sint = np.cos(t),np.sin(t) phi = np.pi/2. - np.deg2rad(self.theta) cosphi,sinphi = np.cos(phi),np.sin(phi) x = a*cost*cosphi - b*sint*sinphi y = a*cost*sinphi + b*sint*cosphi if self.projector is None: logger.debug("Creating AITOFF projector for sampling") projector = Projector(self.lon,self.lat,'ait') else: projector = self.projector lon, lat = projector.imageToSphere(x, y) return lon, lat
def destroy(name, call=None): ''' destroy a machine by name :param name: name given to the machine :param call: call value in this case is 'action' :return: array of booleans , true if successfully stopped and true if successfully removed CLI Example: .. code-block:: bash salt-cloud -d vm_name ''' if call == 'function': raise SaltCloudSystemExit( 'The destroy action must be called with -d, --destroy, ' '-a or --action.' ) __utils__['cloud.fire_event']( 'event', 'destroying instance', 'salt/cloud/{0}/destroying'.format(name), args={'name': name}, sock_dir=__opts__['sock_dir'], transport=__opts__['transport'] ) datacenter_id = get_datacenter_id() conn = get_conn() node = get_node(conn, name) attached_volumes = None delete_volumes = config.get_cloud_config_value( 'delete_volumes', get_configured_provider(), __opts__, search_global=False ) # Get volumes before the server is deleted attached_volumes = conn.get_attached_volumes( datacenter_id=datacenter_id, server_id=node['id'] ) conn.delete_server(datacenter_id=datacenter_id, server_id=node['id']) # The server is deleted and now is safe to delete the volumes if delete_volumes: for vol in attached_volumes['items']: log.debug('Deleting volume %s', vol['id']) conn.delete_volume( datacenter_id=datacenter_id, volume_id=vol['id'] ) log.debug('Deleted volume %s', vol['id']) __utils__['cloud.fire_event']( 'event', 'destroyed instance', 'salt/cloud/{0}/destroyed'.format(name), args={'name': name}, sock_dir=__opts__['sock_dir'], transport=__opts__['transport'] ) if __opts__.get('update_cachedir', False) is True: __utils__['cloud.delete_minion_cachedir']( name, __active_provider_name__.split(':')[0], __opts__ ) return True
destroy a machine by name :param name: name given to the machine :param call: call value in this case is 'action' :return: array of booleans , true if successfully stopped and true if successfully removed CLI Example: .. code-block:: bash salt-cloud -d vm_name
Below is the the instruction that describes the task: ### Input: destroy a machine by name :param name: name given to the machine :param call: call value in this case is 'action' :return: array of booleans , true if successfully stopped and true if successfully removed CLI Example: .. code-block:: bash salt-cloud -d vm_name ### Response: def destroy(name, call=None): ''' destroy a machine by name :param name: name given to the machine :param call: call value in this case is 'action' :return: array of booleans , true if successfully stopped and true if successfully removed CLI Example: .. code-block:: bash salt-cloud -d vm_name ''' if call == 'function': raise SaltCloudSystemExit( 'The destroy action must be called with -d, --destroy, ' '-a or --action.' ) __utils__['cloud.fire_event']( 'event', 'destroying instance', 'salt/cloud/{0}/destroying'.format(name), args={'name': name}, sock_dir=__opts__['sock_dir'], transport=__opts__['transport'] ) datacenter_id = get_datacenter_id() conn = get_conn() node = get_node(conn, name) attached_volumes = None delete_volumes = config.get_cloud_config_value( 'delete_volumes', get_configured_provider(), __opts__, search_global=False ) # Get volumes before the server is deleted attached_volumes = conn.get_attached_volumes( datacenter_id=datacenter_id, server_id=node['id'] ) conn.delete_server(datacenter_id=datacenter_id, server_id=node['id']) # The server is deleted and now is safe to delete the volumes if delete_volumes: for vol in attached_volumes['items']: log.debug('Deleting volume %s', vol['id']) conn.delete_volume( datacenter_id=datacenter_id, volume_id=vol['id'] ) log.debug('Deleted volume %s', vol['id']) __utils__['cloud.fire_event']( 'event', 'destroyed instance', 'salt/cloud/{0}/destroyed'.format(name), args={'name': name}, sock_dir=__opts__['sock_dir'], transport=__opts__['transport'] ) if __opts__.get('update_cachedir', False) is True: __utils__['cloud.delete_minion_cachedir']( name, __active_provider_name__.split(':')[0], __opts__ ) return True
def put_sync(self, **kwargs): ''' PUT: puts data into the Firebase. Requires the 'point' parameter as a keyworded argument. ''' self.amust(("point", "data"), kwargs) response = requests.put(self.url_correct(kwargs["point"], kwargs.get("auth", self.__auth)), data=json.dumps(kwargs["data"])) self.catch_error(response) return response.content
PUT: puts data into the Firebase. Requires the 'point' parameter as a keyworded argument.
Below is the the instruction that describes the task: ### Input: PUT: puts data into the Firebase. Requires the 'point' parameter as a keyworded argument. ### Response: def put_sync(self, **kwargs): ''' PUT: puts data into the Firebase. Requires the 'point' parameter as a keyworded argument. ''' self.amust(("point", "data"), kwargs) response = requests.put(self.url_correct(kwargs["point"], kwargs.get("auth", self.__auth)), data=json.dumps(kwargs["data"])) self.catch_error(response) return response.content
def configure_audit_decorator(graph): """ Configure the audit decorator. Example Usage: @graph.audit def login(username, password): ... """ include_request_body = int(graph.config.audit.include_request_body) include_response_body = int(graph.config.audit.include_response_body) include_path = strtobool(graph.config.audit.include_path) include_query_string = strtobool(graph.config.audit.include_query_string) def _audit(func): @wraps(func) def wrapper(*args, **kwargs): options = AuditOptions( include_request_body=include_request_body, include_response_body=include_response_body, include_path=include_path, include_query_string=include_query_string, ) return _audit_request(options, func, graph.request_context, *args, **kwargs) return wrapper return _audit
Configure the audit decorator. Example Usage: @graph.audit def login(username, password): ...
Below is the the instruction that describes the task: ### Input: Configure the audit decorator. Example Usage: @graph.audit def login(username, password): ... ### Response: def configure_audit_decorator(graph): """ Configure the audit decorator. Example Usage: @graph.audit def login(username, password): ... """ include_request_body = int(graph.config.audit.include_request_body) include_response_body = int(graph.config.audit.include_response_body) include_path = strtobool(graph.config.audit.include_path) include_query_string = strtobool(graph.config.audit.include_query_string) def _audit(func): @wraps(func) def wrapper(*args, **kwargs): options = AuditOptions( include_request_body=include_request_body, include_response_body=include_response_body, include_path=include_path, include_query_string=include_query_string, ) return _audit_request(options, func, graph.request_context, *args, **kwargs) return wrapper return _audit
def tnr(y, z): """True negative rate `tn / (tn + fp)` """ tp, tn, fp, fn = contingency_table(y, z) return tn / (tn + fp)
True negative rate `tn / (tn + fp)`
Below is the the instruction that describes the task: ### Input: True negative rate `tn / (tn + fp)` ### Response: def tnr(y, z): """True negative rate `tn / (tn + fp)` """ tp, tn, fp, fn = contingency_table(y, z) return tn / (tn + fp)
def _flush(self): """ Flush metadata to the backing file :return: """ with open(self.metadata_file, 'w') as f: json.dump(self.metadata, f)
Flush metadata to the backing file :return:
Below is the the instruction that describes the task: ### Input: Flush metadata to the backing file :return: ### Response: def _flush(self): """ Flush metadata to the backing file :return: """ with open(self.metadata_file, 'w') as f: json.dump(self.metadata, f)
def _scan_smaller(self, seq, threshold=''): """ m._scan_smaller(seq, threshold='') -- Internal utility function for performing sequence scans The sequence is smaller than the PSSM. Are there good matches to regions of the PSSM? """ ll = self.ll #Shortcut for Log-likelihood matrix matches = [] endpoints = [] scores = [] w = self.width oseq = seq seq = seq.upper() for offset in range(self.width-len(seq)+1): #Check if +/-1 needed maximum = 0 for i in range(len(seq)): maximum = maximum + max(ll[i+offset].values()) if not threshold: threshold = 0.8 * maximum total_f = 0 total_r = 0 for i in range(len(seq)): total_f = total_f + ll[i+offset ][ seq[i] ] total_r = total_r + ll[w-(i+offset)-1][revcomp[seq[i]]] if 0: print "\t\t%s vs %s: F=%6.2f R=%6.2f %6.2f %4.2f"%(oseq, self.oneletter[offset:offset+len(seq)], total_f, total_r, maximum, max([total_f,total_r])/self.maxscore) if total_f > threshold and total_f > total_r: endpoints.append( (offset,offset+self.width-1) ) scores.append(total_f) matches.append(oseq[offset:offset+self.width]) elif total_r > threshold: endpoints.append( (offset,offset+self.width-1) ) scores.append(total_r) matches.append(oseq[offset:offset+self.width]) return(matches,endpoints,scores)
m._scan_smaller(seq, threshold='') -- Internal utility function for performing sequence scans The sequence is smaller than the PSSM. Are there good matches to regions of the PSSM?
Below is the the instruction that describes the task: ### Input: m._scan_smaller(seq, threshold='') -- Internal utility function for performing sequence scans The sequence is smaller than the PSSM. Are there good matches to regions of the PSSM? ### Response: def _scan_smaller(self, seq, threshold=''): """ m._scan_smaller(seq, threshold='') -- Internal utility function for performing sequence scans The sequence is smaller than the PSSM. Are there good matches to regions of the PSSM? """ ll = self.ll #Shortcut for Log-likelihood matrix matches = [] endpoints = [] scores = [] w = self.width oseq = seq seq = seq.upper() for offset in range(self.width-len(seq)+1): #Check if +/-1 needed maximum = 0 for i in range(len(seq)): maximum = maximum + max(ll[i+offset].values()) if not threshold: threshold = 0.8 * maximum total_f = 0 total_r = 0 for i in range(len(seq)): total_f = total_f + ll[i+offset ][ seq[i] ] total_r = total_r + ll[w-(i+offset)-1][revcomp[seq[i]]] if 0: print "\t\t%s vs %s: F=%6.2f R=%6.2f %6.2f %4.2f"%(oseq, self.oneletter[offset:offset+len(seq)], total_f, total_r, maximum, max([total_f,total_r])/self.maxscore) if total_f > threshold and total_f > total_r: endpoints.append( (offset,offset+self.width-1) ) scores.append(total_f) matches.append(oseq[offset:offset+self.width]) elif total_r > threshold: endpoints.append( (offset,offset+self.width-1) ) scores.append(total_r) matches.append(oseq[offset:offset+self.width]) return(matches,endpoints,scores)
def is_logon(self, verify=False): """ Return a boolean indicating whether the session is currently logged on to the HMC. By default, this method checks whether there is a session-id set and considers that sufficient for determining that the session is logged on. The `verify` parameter can be used to verify the validity of a session-id that is already set, by issuing a dummy operation ("Get Console Properties") to the HMC. Parameters: verify (bool): If a session-id is already set, verify its validity. """ if self._session_id is None: return False if verify: try: self.get('/api/console', logon_required=True) except ServerAuthError: return False return True
Return a boolean indicating whether the session is currently logged on to the HMC. By default, this method checks whether there is a session-id set and considers that sufficient for determining that the session is logged on. The `verify` parameter can be used to verify the validity of a session-id that is already set, by issuing a dummy operation ("Get Console Properties") to the HMC. Parameters: verify (bool): If a session-id is already set, verify its validity.
Below is the the instruction that describes the task: ### Input: Return a boolean indicating whether the session is currently logged on to the HMC. By default, this method checks whether there is a session-id set and considers that sufficient for determining that the session is logged on. The `verify` parameter can be used to verify the validity of a session-id that is already set, by issuing a dummy operation ("Get Console Properties") to the HMC. Parameters: verify (bool): If a session-id is already set, verify its validity. ### Response: def is_logon(self, verify=False): """ Return a boolean indicating whether the session is currently logged on to the HMC. By default, this method checks whether there is a session-id set and considers that sufficient for determining that the session is logged on. The `verify` parameter can be used to verify the validity of a session-id that is already set, by issuing a dummy operation ("Get Console Properties") to the HMC. Parameters: verify (bool): If a session-id is already set, verify its validity. """ if self._session_id is None: return False if verify: try: self.get('/api/console', logon_required=True) except ServerAuthError: return False return True
def _schedule(self): '''Schedule check function.''' if self._running: _logger.debug('Schedule check function.') self._call_later_handle = self._event_loop.call_later( self._timeout, self._check)
Schedule check function.
Below is the the instruction that describes the task: ### Input: Schedule check function. ### Response: def _schedule(self): '''Schedule check function.''' if self._running: _logger.debug('Schedule check function.') self._call_later_handle = self._event_loop.call_later( self._timeout, self._check)
def make_and_return_path_from_path_and_folder_names(path, folder_names): """ For a given path, create a directory structure composed of a set of folders and return the path to the \ inner-most folder. For example, if path='/path/to/folders', and folder_names=['folder1', 'folder2'], the directory created will be '/path/to/folders/folder1/folder2/' and the returned path will be '/path/to/folders/folder1/folder2/'. If the folders already exist, routine continues as normal. Parameters ---------- path : str The path where the directories are created. folder_names : [str] The names of the folders which are created in the path directory. Returns ------- path A string specifying the path to the inner-most folder created. Examples -------- path = '/path/to/folders' path = make_and_return_path(path=path, folder_names=['folder1', 'folder2']. """ for folder_name in folder_names: path += folder_name + '/' try: os.makedirs(path) except FileExistsError: pass return path
For a given path, create a directory structure composed of a set of folders and return the path to the \ inner-most folder. For example, if path='/path/to/folders', and folder_names=['folder1', 'folder2'], the directory created will be '/path/to/folders/folder1/folder2/' and the returned path will be '/path/to/folders/folder1/folder2/'. If the folders already exist, routine continues as normal. Parameters ---------- path : str The path where the directories are created. folder_names : [str] The names of the folders which are created in the path directory. Returns ------- path A string specifying the path to the inner-most folder created. Examples -------- path = '/path/to/folders' path = make_and_return_path(path=path, folder_names=['folder1', 'folder2'].
Below is the the instruction that describes the task: ### Input: For a given path, create a directory structure composed of a set of folders and return the path to the \ inner-most folder. For example, if path='/path/to/folders', and folder_names=['folder1', 'folder2'], the directory created will be '/path/to/folders/folder1/folder2/' and the returned path will be '/path/to/folders/folder1/folder2/'. If the folders already exist, routine continues as normal. Parameters ---------- path : str The path where the directories are created. folder_names : [str] The names of the folders which are created in the path directory. Returns ------- path A string specifying the path to the inner-most folder created. Examples -------- path = '/path/to/folders' path = make_and_return_path(path=path, folder_names=['folder1', 'folder2']. ### Response: def make_and_return_path_from_path_and_folder_names(path, folder_names): """ For a given path, create a directory structure composed of a set of folders and return the path to the \ inner-most folder. For example, if path='/path/to/folders', and folder_names=['folder1', 'folder2'], the directory created will be '/path/to/folders/folder1/folder2/' and the returned path will be '/path/to/folders/folder1/folder2/'. If the folders already exist, routine continues as normal. Parameters ---------- path : str The path where the directories are created. folder_names : [str] The names of the folders which are created in the path directory. Returns ------- path A string specifying the path to the inner-most folder created. Examples -------- path = '/path/to/folders' path = make_and_return_path(path=path, folder_names=['folder1', 'folder2']. """ for folder_name in folder_names: path += folder_name + '/' try: os.makedirs(path) except FileExistsError: pass return path
def block(broker): """Path: /sys/block directories starting with . or ram or dm- or loop""" remove = (".", "ram", "dm-", "loop") tmp = "/dev/%s" return[(tmp % f) for f in os.listdir("/sys/block") if not f.startswith(remove)]
Path: /sys/block directories starting with . or ram or dm- or loop
Below is the the instruction that describes the task: ### Input: Path: /sys/block directories starting with . or ram or dm- or loop ### Response: def block(broker): """Path: /sys/block directories starting with . or ram or dm- or loop""" remove = (".", "ram", "dm-", "loop") tmp = "/dev/%s" return[(tmp % f) for f in os.listdir("/sys/block") if not f.startswith(remove)]
def get_text_for_html(html_content): ''' Take the HTML content (from, for example, an email) and construct a simple plain text version of that content (for example, for inclusion in a multipart email message). ''' soup = BeautifulSoup(html_content) # kill all script and style elements for script in soup(["script", "style"]): script.extract() # rip it out # Replace all links with HREF with the link text and the href in brackets for a in soup.findAll('a', href=True): a.replaceWith('%s <%s>' % (a.string, a.get('href'))) # get text text = soup.get_text() # break into lines and remove leading and trailing space on each lines = (line.strip() for line in text.splitlines()) # break multi-headlines into a line each chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) # drop blank lines text = '\n'.join(chunk for chunk in chunks if chunk) return text
Take the HTML content (from, for example, an email) and construct a simple plain text version of that content (for example, for inclusion in a multipart email message).
Below is the the instruction that describes the task: ### Input: Take the HTML content (from, for example, an email) and construct a simple plain text version of that content (for example, for inclusion in a multipart email message). ### Response: def get_text_for_html(html_content): ''' Take the HTML content (from, for example, an email) and construct a simple plain text version of that content (for example, for inclusion in a multipart email message). ''' soup = BeautifulSoup(html_content) # kill all script and style elements for script in soup(["script", "style"]): script.extract() # rip it out # Replace all links with HREF with the link text and the href in brackets for a in soup.findAll('a', href=True): a.replaceWith('%s <%s>' % (a.string, a.get('href'))) # get text text = soup.get_text() # break into lines and remove leading and trailing space on each lines = (line.strip() for line in text.splitlines()) # break multi-headlines into a line each chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) # drop blank lines text = '\n'.join(chunk for chunk in chunks if chunk) return text
def small_doc(obj, indent="", max_width=80): """ Finds a useful small doc representation of an object. Parameters ---------- obj : Any object, which the documentation representation should be taken from. indent : Result indentation string to be insert in front of all lines. max_width : Each line of the result may have at most this length. Returns ------- For classes, modules, functions, methods, properties and StrategyDict instances, returns the first paragraph in the doctring of the given object, as a list of strings, stripped at right and with indent at left. For other inputs, it will use themselves cast to string as their docstring. """ if not getattr(obj, "__doc__", False): data = [el.strip() for el in str(obj).splitlines()] if len(data) == 1: if data[0].startswith("<audiolazy.lazy_"): # Instance data = data[0].split("0x", -1)[0] + "0x...>" # Hide its address else: data = "".join(["``", data[0], "``"]) else: data = " ".join(data) # No docstring elif (not obj.__doc__) or (obj.__doc__.strip() == ""): data = "\ * * * * ...no docstring... * * * * \ " # Docstring else: data = (el.strip() for el in obj.__doc__.strip().splitlines()) data = " ".join(it.takewhile(lambda el: el != "", data)) # Ensure max_width (word wrap) max_width -= len(indent) result = [] for word in data.split(): if len(word) <= max_width: if result: if len(result[-1]) + len(word) + 1 <= max_width: word = " ".join([result.pop(), word]) result.append(word) else: result = [word] else: # Splits big words result.extend("".join(w) for w in blocks(word, max_width, padval="")) # Apply indentation and finishes return [indent + el for el in result]
Finds a useful small doc representation of an object. Parameters ---------- obj : Any object, which the documentation representation should be taken from. indent : Result indentation string to be insert in front of all lines. max_width : Each line of the result may have at most this length. Returns ------- For classes, modules, functions, methods, properties and StrategyDict instances, returns the first paragraph in the doctring of the given object, as a list of strings, stripped at right and with indent at left. For other inputs, it will use themselves cast to string as their docstring.
Below is the the instruction that describes the task: ### Input: Finds a useful small doc representation of an object. Parameters ---------- obj : Any object, which the documentation representation should be taken from. indent : Result indentation string to be insert in front of all lines. max_width : Each line of the result may have at most this length. Returns ------- For classes, modules, functions, methods, properties and StrategyDict instances, returns the first paragraph in the doctring of the given object, as a list of strings, stripped at right and with indent at left. For other inputs, it will use themselves cast to string as their docstring. ### Response: def small_doc(obj, indent="", max_width=80): """ Finds a useful small doc representation of an object. Parameters ---------- obj : Any object, which the documentation representation should be taken from. indent : Result indentation string to be insert in front of all lines. max_width : Each line of the result may have at most this length. Returns ------- For classes, modules, functions, methods, properties and StrategyDict instances, returns the first paragraph in the doctring of the given object, as a list of strings, stripped at right and with indent at left. For other inputs, it will use themselves cast to string as their docstring. """ if not getattr(obj, "__doc__", False): data = [el.strip() for el in str(obj).splitlines()] if len(data) == 1: if data[0].startswith("<audiolazy.lazy_"): # Instance data = data[0].split("0x", -1)[0] + "0x...>" # Hide its address else: data = "".join(["``", data[0], "``"]) else: data = " ".join(data) # No docstring elif (not obj.__doc__) or (obj.__doc__.strip() == ""): data = "\ * * * * ...no docstring... * * * * \ " # Docstring else: data = (el.strip() for el in obj.__doc__.strip().splitlines()) data = " ".join(it.takewhile(lambda el: el != "", data)) # Ensure max_width (word wrap) max_width -= len(indent) result = [] for word in data.split(): if len(word) <= max_width: if result: if len(result[-1]) + len(word) + 1 <= max_width: word = " ".join([result.pop(), word]) result.append(word) else: result = [word] else: # Splits big words result.extend("".join(w) for w in blocks(word, max_width, padval="")) # Apply indentation and finishes return [indent + el for el in result]
def add_genelist(self, list_id, gene_ids, case_obj=None): """Create a new gene list and optionally link to cases.""" new_genelist = GeneList(list_id=list_id) new_genelist.gene_ids = gene_ids if case_obj: new_genelist.cases.append(case_obj) self.session.add(new_genelist) self.save() return new_genelist
Create a new gene list and optionally link to cases.
Below is the the instruction that describes the task: ### Input: Create a new gene list and optionally link to cases. ### Response: def add_genelist(self, list_id, gene_ids, case_obj=None): """Create a new gene list and optionally link to cases.""" new_genelist = GeneList(list_id=list_id) new_genelist.gene_ids = gene_ids if case_obj: new_genelist.cases.append(case_obj) self.session.add(new_genelist) self.save() return new_genelist
def _lockstep_fcn(values): """ Wrapper to ensure that all processes execute together """ numrequired, fcn, args = values with _process_lock: _numdone.value += 1 # yep this is an ugly busy loop, do something better please # when we care about the performance of this call and not just the # guarantee it provides (ok... maybe never) while 1: if _numdone.value == numrequired: return fcn(args)
Wrapper to ensure that all processes execute together
Below is the the instruction that describes the task: ### Input: Wrapper to ensure that all processes execute together ### Response: def _lockstep_fcn(values): """ Wrapper to ensure that all processes execute together """ numrequired, fcn, args = values with _process_lock: _numdone.value += 1 # yep this is an ugly busy loop, do something better please # when we care about the performance of this call and not just the # guarantee it provides (ok... maybe never) while 1: if _numdone.value == numrequired: return fcn(args)
def init_app(self, app, entry_point_group='invenio_queues.queues'): """Flask application initialization.""" self.init_config(app) app.extensions['invenio-queues'] = _InvenioQueuesState( app, app.config['QUEUES_CONNECTION_POOL'], entry_point_group=entry_point_group ) return app
Flask application initialization.
Below is the the instruction that describes the task: ### Input: Flask application initialization. ### Response: def init_app(self, app, entry_point_group='invenio_queues.queues'): """Flask application initialization.""" self.init_config(app) app.extensions['invenio-queues'] = _InvenioQueuesState( app, app.config['QUEUES_CONNECTION_POOL'], entry_point_group=entry_point_group ) return app
def get_dropout(x, rate=0.0, init=True): """Dropout x with dropout_rate = rate. Apply zero dropout during init or prediction time. Args: x: 4-D Tensor, shape=(NHWC). rate: Dropout rate. init: Initialization. Returns: x: activations after dropout. """ if init or rate == 0: return x return tf.layers.dropout(x, rate=rate, training=True)
Dropout x with dropout_rate = rate. Apply zero dropout during init or prediction time. Args: x: 4-D Tensor, shape=(NHWC). rate: Dropout rate. init: Initialization. Returns: x: activations after dropout.
Below is the the instruction that describes the task: ### Input: Dropout x with dropout_rate = rate. Apply zero dropout during init or prediction time. Args: x: 4-D Tensor, shape=(NHWC). rate: Dropout rate. init: Initialization. Returns: x: activations after dropout. ### Response: def get_dropout(x, rate=0.0, init=True): """Dropout x with dropout_rate = rate. Apply zero dropout during init or prediction time. Args: x: 4-D Tensor, shape=(NHWC). rate: Dropout rate. init: Initialization. Returns: x: activations after dropout. """ if init or rate == 0: return x return tf.layers.dropout(x, rate=rate, training=True)
async def serviceQueues(self, limit=None) -> int: """ Service at most `limit` messages from the inBox. :param limit: the maximum number of messages to service :return: the number of messages successfully processed """ return await self.inBoxRouter.handleAll(self.filterMsgs(self.inBox), limit)
Service at most `limit` messages from the inBox. :param limit: the maximum number of messages to service :return: the number of messages successfully processed
Below is the instruction that describes the task: ### Input: Service at most `limit` messages from the inBox. :param limit: the maximum number of messages to service :return: the number of messages successfully processed ### Response: async def serviceQueues(self, limit=None) -> int: """ Service at most `limit` messages from the inBox. :param limit: the maximum number of messages to service :return: the number of messages successfully processed """ return await self.inBoxRouter.handleAll(self.filterMsgs(self.inBox), limit)
def add_subtrack(self, subtrack): """ Add a child :class:`Track`. """ self.add_child(subtrack) self.subtracks.append(subtrack)
Add a child :class:`Track`.
Below is the instruction that describes the task: ### Input: Add a child :class:`Track`. ### Response: def add_subtrack(self, subtrack): """ Add a child :class:`Track`. """ self.add_child(subtrack) self.subtracks.append(subtrack)
def ProcessMessage(self, message): """Processes a stats response from the client.""" self.ProcessResponse(message.source.Basename(), message.payload)
Processes a stats response from the client.
Below is the instruction that describes the task: ### Input: Processes a stats response from the client. ### Response: def ProcessMessage(self, message): """Processes a stats response from the client.""" self.ProcessResponse(message.source.Basename(), message.payload)
def append(self, item): """ Try to add an item to this element. If the item is of the wrong type, and if this element has a sub-type, then try to create such a sub-type and insert the item into that, instead. This happens recursively, so (in python-markup): L [ u'Foo' ] actually creates: L [ LE [ P [ T [ u'Foo' ] ] ] ] If that doesn't work, raise a TypeError. """ okay = True if not isinstance(item, self.contentType): if hasattr(self.contentType, 'contentType'): try: item = self.contentType(content=[item]) except TypeError: okay = False else: okay = False if not okay: raise TypeError("Wrong content type for %s: %s (%s)" % ( self.__class__.__name__, repr(type(item)), repr(item))) self.content.append(item)
Try to add an item to this element. If the item is of the wrong type, and if this element has a sub-type, then try to create such a sub-type and insert the item into that, instead. This happens recursively, so (in python-markup): L [ u'Foo' ] actually creates: L [ LE [ P [ T [ u'Foo' ] ] ] ] If that doesn't work, raise a TypeError.
Below is the instruction that describes the task: ### Input: Try to add an item to this element. If the item is of the wrong type, and if this element has a sub-type, then try to create such a sub-type and insert the item into that, instead. This happens recursively, so (in python-markup): L [ u'Foo' ] actually creates: L [ LE [ P [ T [ u'Foo' ] ] ] ] If that doesn't work, raise a TypeError. ### Response: def append(self, item): """ Try to add an item to this element. If the item is of the wrong type, and if this element has a sub-type, then try to create such a sub-type and insert the item into that, instead. This happens recursively, so (in python-markup): L [ u'Foo' ] actually creates: L [ LE [ P [ T [ u'Foo' ] ] ] ] If that doesn't work, raise a TypeError. """ okay = True if not isinstance(item, self.contentType): if hasattr(self.contentType, 'contentType'): try: item = self.contentType(content=[item]) except TypeError: okay = False else: okay = False if not okay: raise TypeError("Wrong content type for %s: %s (%s)" % ( self.__class__.__name__, repr(type(item)), repr(item))) self.content.append(item)
def printHelp(script): """Print Help Prints out the arguments needed to run the script Returns: None """ print 'Reconsider cli script copyright 2016 OuroborosCoding' print '' print script + ' --source=localhost:28015 --destination=somedomain.com:28015' print script + ' --destination=somedomain.com:28015 --dbs=production,staging' print '' print 'Usage:' print ' --source=[string] A RethinkDB connection string representing the' print ' source host. Defaults to "localhost:28015"' print ' --destination=[string] A RethinkDB connection string representing the' print ' destination host. Required.' print ' --db A single DB name, or a comma separated list of' print ' databases on the source which will be copied to the' print ' destination. Defaults to all DBs on the source' print ' host' print ' --verbose Will print out what\'s happening during the clone' print ' --help Prints this message' print '' print 'A RethinkDB connection string is defined as: host[:port[:user[:password]]]' print 'Valid: localhost, localhost:28015, localhost:28015:root:asdf' print 'Invalid: localhost:root, 28015, root:asdf, localhost:root:asdf'
Print Help Prints out the arguments needed to run the script Returns: None
Below is the instruction that describes the task: ### Input: Print Help Prints out the arguments needed to run the script Returns: None ### Response: def printHelp(script): """Print Help Prints out the arguments needed to run the script Returns: None """ print 'Reconsider cli script copyright 2016 OuroborosCoding' print '' print script + ' --source=localhost:28015 --destination=somedomain.com:28015' print script + ' --destination=somedomain.com:28015 --dbs=production,staging' print '' print 'Usage:' print ' --source=[string] A RethinkDB connection string representing the' print ' source host. Defaults to "localhost:28015"' print ' --destination=[string] A RethinkDB connection string representing the' print ' destination host. Required.' print ' --db A single DB name, or a comma separated list of' print ' databases on the source which will be copied to the' print ' destination. Defaults to all DBs on the source' print ' host' print ' --verbose Will print out what\'s happening during the clone' print ' --help Prints this message' print '' print 'A RethinkDB connection string is defined as: host[:port[:user[:password]]]' print 'Valid: localhost, localhost:28015, localhost:28015:root:asdf' print 'Invalid: localhost:root, 28015, root:asdf, localhost:root:asdf'
def links(self) -> _Links: """All found links on page, in as–is form. Only works for Atom feeds.""" return list(set(x.text for x in self.xpath('//link')))
All found links on page, in as–is form. Only works for Atom feeds.
Below is the instruction that describes the task: ### Input: All found links on page, in as–is form. Only works for Atom feeds. ### Response: def links(self) -> _Links: """All found links on page, in as–is form. Only works for Atom feeds.""" return list(set(x.text for x in self.xpath('//link')))
def find_event_with_outgoing_edges(self, event_name, desired_relations): """Gets a list of event nodes with the specified event_name and outgoing edges annotated with each of the specified relations. Parameters ---------- event_name : str Look for event nodes with this name desired_relations : list[str] Look for event nodes with outgoing edges annotated with each of these relations Returns ------- event_nodes : list[str] Event nodes that fit the desired criteria """ G = self.G desired_relations = set(desired_relations) desired_event_nodes = [] for node in G.node.keys(): if G.node[node]['is_event'] and G.node[node]['type'] == event_name: has_relations = [G.edges[node, edge[1]]['relation'] for edge in G.edges(node)] has_relations = set(has_relations) # Did the outgoing edges from this node have all of the # desired relations? if desired_relations.issubset(has_relations): desired_event_nodes.append(node) return desired_event_nodes
Gets a list of event nodes with the specified event_name and outgoing edges annotated with each of the specified relations. Parameters ---------- event_name : str Look for event nodes with this name desired_relations : list[str] Look for event nodes with outgoing edges annotated with each of these relations Returns ------- event_nodes : list[str] Event nodes that fit the desired criteria
Below is the instruction that describes the task: ### Input: Gets a list of event nodes with the specified event_name and outgoing edges annotated with each of the specified relations. Parameters ---------- event_name : str Look for event nodes with this name desired_relations : list[str] Look for event nodes with outgoing edges annotated with each of these relations Returns ------- event_nodes : list[str] Event nodes that fit the desired criteria ### Response: def find_event_with_outgoing_edges(self, event_name, desired_relations): """Gets a list of event nodes with the specified event_name and outgoing edges annotated with each of the specified relations. Parameters ---------- event_name : str Look for event nodes with this name desired_relations : list[str] Look for event nodes with outgoing edges annotated with each of these relations Returns ------- event_nodes : list[str] Event nodes that fit the desired criteria """ G = self.G desired_relations = set(desired_relations) desired_event_nodes = [] for node in G.node.keys(): if G.node[node]['is_event'] and G.node[node]['type'] == event_name: has_relations = [G.edges[node, edge[1]]['relation'] for edge in G.edges(node)] has_relations = set(has_relations) # Did the outgoing edges from this node have all of the # desired relations? if desired_relations.issubset(has_relations): desired_event_nodes.append(node) return desired_event_nodes
def usn_v4_record(header, record): """Extracts USN V4 record information.""" length, major_version, minor_version = header fields = V4_RECORD.unpack_from(record, RECORD_HEADER.size) raise NotImplementedError('Not implemented')
Extracts USN V4 record information.
Below is the instruction that describes the task: ### Input: Extracts USN V4 record information. ### Response: def usn_v4_record(header, record): """Extracts USN V4 record information.""" length, major_version, minor_version = header fields = V4_RECORD.unpack_from(record, RECORD_HEADER.size) raise NotImplementedError('Not implemented')
def __remove_dir(self, ftp, remote_path): """ Helper function to perform delete operation on the remote server :param ftp: SFTP handle to perform delete operation(s) :param remote_path: Remote path to remove """ # Iterate over the remote path and perform remove operations files = ftp.listdir(remote_path) for filename in files: # Attempt to remove the file (if exception then path is directory) path = remote_path + self.separator + filename try: ftp.remove(path) except IOError: self.__remove_dir(ftp, path) # Remove the original directory requested ftp.rmdir(remote_path)
Helper function to perform delete operation on the remote server :param ftp: SFTP handle to perform delete operation(s) :param remote_path: Remote path to remove
Below is the instruction that describes the task: ### Input: Helper function to perform delete operation on the remote server :param ftp: SFTP handle to perform delete operation(s) :param remote_path: Remote path to remove ### Response: def __remove_dir(self, ftp, remote_path): """ Helper function to perform delete operation on the remote server :param ftp: SFTP handle to perform delete operation(s) :param remote_path: Remote path to remove """ # Iterate over the remote path and perform remove operations files = ftp.listdir(remote_path) for filename in files: # Attempt to remove the file (if exception then path is directory) path = remote_path + self.separator + filename try: ftp.remove(path) except IOError: self.__remove_dir(ftp, path) # Remove the original directory requested ftp.rmdir(remote_path)
def GET(self, courseid, taskid, isLTI): """ GET request """ username = self.user_manager.session_username() # Fetch the course try: course = self.course_factory.get_course(courseid) except exceptions.CourseNotFoundException as ex: raise web.notfound(str(ex)) if isLTI and not self.user_manager.course_is_user_registered(course): self.user_manager.course_register_user(course, force=True) if not self.user_manager.course_is_open_to_user(course, username, isLTI): return self.template_helper.get_renderer().course_unavailable() # Fetch the task try: tasks = OrderedDict((tid, t) for tid, t in course.get_tasks().items() if self.user_manager.task_is_visible_by_user(t, username, isLTI)) task = tasks[taskid] except exceptions.TaskNotFoundException as ex: raise web.notfound(str(ex)) if not self.user_manager.task_is_visible_by_user(task, username, isLTI): return self.template_helper.get_renderer().task_unavailable() # Compute previous and next taskid keys = list(tasks.keys()) index = keys.index(taskid) previous_taskid = keys[index - 1] if index > 0 else None next_taskid = keys[index + 1] if index < len(keys) - 1 else None self.user_manager.user_saw_task(username, courseid, taskid) is_staff = self.user_manager.has_staff_rights_on_course(course, username) userinput = web.input() if "submissionid" in userinput and "questionid" in userinput: # Download a previously submitted file submission = self.submission_manager.get_submission(userinput["submissionid"], user_check=not is_staff) if submission is None: raise web.notfound() sinput = self.submission_manager.get_input_from_submission(submission, True) if userinput["questionid"] not in sinput: raise web.notfound() if isinstance(sinput[userinput["questionid"]], dict): # File uploaded previously mimetypes.init() mime_type = mimetypes.guess_type(urllib.request.pathname2url(sinput[userinput["questionid"]]['filename'])) web.header('Content-Type', mime_type[0]) return sinput[userinput["questionid"]]['value'] else: # Other file, download it as text web.header('Content-Type', 'text/plain') return sinput[userinput["questionid"]] else: # Generate random inputs and save it into db random.seed(str(username if username is not None else "") + taskid + courseid + str( time.time() if task.regenerate_input_random() else "")) random_input_list = [random.random() for i in range(task.get_number_input_random())] user_task = self.database.user_tasks.find_one_and_update( { "courseid": task.get_course_id(), "taskid": task.get_id(), "username": self.user_manager.session_username() }, { "$set": {"random": random_input_list} }, return_document=ReturnDocument.AFTER ) submissionid = user_task.get('submissionid', None) eval_submission = self.database.submissions.find_one({'_id': ObjectId(submissionid)}) if submissionid else None students = [self.user_manager.session_username()] if task.is_group_task() and not self.user_manager.has_admin_rights_on_course(course, username): group = self.database.aggregations.find_one( {"courseid": task.get_course_id(), "groups.students": self.user_manager.session_username()}, {"groups": {"$elemMatch": {"students": self.user_manager.session_username()}}}) if group is not None and len(group["groups"]) > 0: students = group["groups"][0]["students"] # we don't care for the other case, as the student won't be able to submit. submissions = self.submission_manager.get_user_submissions(task) if self.user_manager.session_logged_in() else [] user_info = self.database.users.find_one({"username": username}) # Display the task itself return self.template_helper.get_renderer().task(user_info, course, task, submissions, students, eval_submission, user_task, previous_taskid, next_taskid, self.webterm_link, random_input_list)
GET request
Below is the instruction that describes the task: ### Input: GET request ### Response: def GET(self, courseid, taskid, isLTI): """ GET request """ username = self.user_manager.session_username() # Fetch the course try: course = self.course_factory.get_course(courseid) except exceptions.CourseNotFoundException as ex: raise web.notfound(str(ex)) if isLTI and not self.user_manager.course_is_user_registered(course): self.user_manager.course_register_user(course, force=True) if not self.user_manager.course_is_open_to_user(course, username, isLTI): return self.template_helper.get_renderer().course_unavailable() # Fetch the task try: tasks = OrderedDict((tid, t) for tid, t in course.get_tasks().items() if self.user_manager.task_is_visible_by_user(t, username, isLTI)) task = tasks[taskid] except exceptions.TaskNotFoundException as ex: raise web.notfound(str(ex)) if not self.user_manager.task_is_visible_by_user(task, username, isLTI): return self.template_helper.get_renderer().task_unavailable() # Compute previous and next taskid keys = list(tasks.keys()) index = keys.index(taskid) previous_taskid = keys[index - 1] if index > 0 else None next_taskid = keys[index + 1] if index < len(keys) - 1 else None self.user_manager.user_saw_task(username, courseid, taskid) is_staff = self.user_manager.has_staff_rights_on_course(course, username) userinput = web.input() if "submissionid" in userinput and "questionid" in userinput: # Download a previously submitted file submission = self.submission_manager.get_submission(userinput["submissionid"], user_check=not is_staff) if submission is None: raise web.notfound() sinput = self.submission_manager.get_input_from_submission(submission, True) if userinput["questionid"] not in sinput: raise web.notfound() if isinstance(sinput[userinput["questionid"]], dict): # File uploaded previously mimetypes.init() mime_type = mimetypes.guess_type(urllib.request.pathname2url(sinput[userinput["questionid"]]['filename'])) web.header('Content-Type', mime_type[0]) return sinput[userinput["questionid"]]['value'] else: # Other file, download it as text web.header('Content-Type', 'text/plain') return sinput[userinput["questionid"]] else: # Generate random inputs and save it into db random.seed(str(username if username is not None else "") + taskid + courseid + str( time.time() if task.regenerate_input_random() else "")) random_input_list = [random.random() for i in range(task.get_number_input_random())] user_task = self.database.user_tasks.find_one_and_update( { "courseid": task.get_course_id(), "taskid": task.get_id(), "username": self.user_manager.session_username() }, { "$set": {"random": random_input_list} }, return_document=ReturnDocument.AFTER ) submissionid = user_task.get('submissionid', None) eval_submission = self.database.submissions.find_one({'_id': ObjectId(submissionid)}) if submissionid else None students = [self.user_manager.session_username()] if task.is_group_task() and not self.user_manager.has_admin_rights_on_course(course, username): group = self.database.aggregations.find_one( {"courseid": task.get_course_id(), "groups.students": self.user_manager.session_username()}, {"groups": {"$elemMatch": {"students": self.user_manager.session_username()}}}) if group is not None and len(group["groups"]) > 0: students = group["groups"][0]["students"] # we don't care for the other case, as the student won't be able to submit. submissions = self.submission_manager.get_user_submissions(task) if self.user_manager.session_logged_in() else [] user_info = self.database.users.find_one({"username": username}) # Display the task itself return self.template_helper.get_renderer().task(user_info, course, task, submissions, students, eval_submission, user_task, previous_taskid, next_taskid, self.webterm_link, random_input_list)
def retrieve_element_or_default(self, location, default=None): """ Args: location: default: Returns: """ loc_descriptor = self._get_location_descriptor(location) # find node node = None try: node = self._get_node(loc_descriptor) except Exception as e: return default return node.get_element()
Args: location: default: Returns:
Below is the instruction that describes the task: ### Input: Args: location: default: Returns: ### Response: def retrieve_element_or_default(self, location, default=None): """ Args: location: default: Returns: """ loc_descriptor = self._get_location_descriptor(location) # find node node = None try: node = self._get_node(loc_descriptor) except Exception as e: return default return node.get_element()
def get_record(self, record): """ Reads a dom xml element in oaidc format and returns the bibrecord object """ self.document = record rec = create_record() language = self._get_language() if language and language != 'en': record_add_field(rec, '041', subfields=[('a', language)]) publisher = self._get_publisher() date = self._get_date() if publisher and date: record_add_field(rec, '260', subfields=[('b', publisher), ('c', date)]) elif publisher: record_add_field(rec, '260', subfields=[('b', publisher)]) elif date: record_add_field(rec, '260', subfields=[('c', date)]) title = self._get_title() if title: record_add_field(rec, '245', subfields=[('a', title)]) record_copyright = self._get_copyright() if record_copyright: record_add_field(rec, '540', subfields=[('a', record_copyright)]) subject = self._get_subject() if subject: record_add_field(rec, '650', ind1='1', ind2='7', subfields=[('a', subject), ('2', 'PoS')]) authors = self._get_authors() first_author = True for author in authors: subfields = [('a', author[0])] for affiliation in author[1]: subfields.append(('v', affiliation)) if first_author: record_add_field(rec, '100', subfields=subfields) first_author = False else: record_add_field(rec, '700', subfields=subfields) identifier = self.get_identifier() conference = identifier.split(':')[2] conference = conference.split('/')[0] contribution = identifier.split(':')[2] contribution = contribution.split('/')[1] record_add_field(rec, '773', subfields=[('p', 'PoS'), ('v', conference.replace(' ', '')), ('c', contribution), ('y', date[:4])]) record_add_field(rec, '980', subfields=[('a', 'ConferencePaper')]) record_add_field(rec, '980', subfields=[('a', 'HEP')]) return rec
Reads a dom xml element in oaidc format and returns the bibrecord object
Below is the instruction that describes the task: ### Input: Reads a dom xml element in oaidc format and returns the bibrecord object ### Response: def get_record(self, record): """ Reads a dom xml element in oaidc format and returns the bibrecord object """ self.document = record rec = create_record() language = self._get_language() if language and language != 'en': record_add_field(rec, '041', subfields=[('a', language)]) publisher = self._get_publisher() date = self._get_date() if publisher and date: record_add_field(rec, '260', subfields=[('b', publisher), ('c', date)]) elif publisher: record_add_field(rec, '260', subfields=[('b', publisher)]) elif date: record_add_field(rec, '260', subfields=[('c', date)]) title = self._get_title() if title: record_add_field(rec, '245', subfields=[('a', title)]) record_copyright = self._get_copyright() if record_copyright: record_add_field(rec, '540', subfields=[('a', record_copyright)]) subject = self._get_subject() if subject: record_add_field(rec, '650', ind1='1', ind2='7', subfields=[('a', subject), ('2', 'PoS')]) authors = self._get_authors() first_author = True for author in authors: subfields = [('a', author[0])] for affiliation in author[1]: subfields.append(('v', affiliation)) if first_author: record_add_field(rec, '100', subfields=subfields) first_author = False else: record_add_field(rec, '700', subfields=subfields) identifier = self.get_identifier() conference = identifier.split(':')[2] conference = conference.split('/')[0] contribution = identifier.split(':')[2] contribution = contribution.split('/')[1] record_add_field(rec, '773', subfields=[('p', 'PoS'), ('v', conference.replace(' ', '')), ('c', contribution), ('y', date[:4])]) record_add_field(rec, '980', subfields=[('a', 'ConferencePaper')]) record_add_field(rec, '980', subfields=[('a', 'HEP')]) return rec
def QDS_StockDayWarpper(func): """ Daily-line QDS decorator """ def warpper(*args, **kwargs): data = func(*args, **kwargs) if isinstance(data.index, pd.MultiIndex): return QA_DataStruct_Stock_day(data) else: return QA_DataStruct_Stock_day( data.assign(date=pd.to_datetime(data.date) ).set_index(['date', 'code'], drop=False), dtype='stock_day' ) return warpper
Daily-line QDS decorator
Below is the instruction that describes the task: ### Input: Daily-line QDS decorator ### Response: def QDS_StockDayWarpper(func): """ Daily-line QDS decorator """ def warpper(*args, **kwargs): data = func(*args, **kwargs) if isinstance(data.index, pd.MultiIndex): return QA_DataStruct_Stock_day(data) else: return QA_DataStruct_Stock_day( data.assign(date=pd.to_datetime(data.date) ).set_index(['date', 'code'], drop=False), dtype='stock_day' ) return warpper
def loop_tk(kernel): """Start a kernel with the Tk event loop.""" import Tkinter doi = kernel.do_one_iteration # Tk uses milliseconds poll_interval = int(1000*kernel._poll_interval) # For Tkinter, we create a Tk object and call its withdraw method. class Timer(object): def __init__(self, func): self.app = Tkinter.Tk() self.app.withdraw() self.func = func def on_timer(self): self.func() self.app.after(poll_interval, self.on_timer) def start(self): self.on_timer() # Call it once to get things going. self.app.mainloop() kernel.timer = Timer(doi) kernel.timer.start()
Start a kernel with the Tk event loop.
Below is the instruction that describes the task: ### Input: Start a kernel with the Tk event loop. ### Response: def loop_tk(kernel): """Start a kernel with the Tk event loop.""" import Tkinter doi = kernel.do_one_iteration # Tk uses milliseconds poll_interval = int(1000*kernel._poll_interval) # For Tkinter, we create a Tk object and call its withdraw method. class Timer(object): def __init__(self, func): self.app = Tkinter.Tk() self.app.withdraw() self.func = func def on_timer(self): self.func() self.app.after(poll_interval, self.on_timer) def start(self): self.on_timer() # Call it once to get things going. self.app.mainloop() kernel.timer = Timer(doi) kernel.timer.start()
def moving_average(self, data, days): """ Calculate the moving average :rtype: sequence, oldest→newest """ result = [] data = data[:] for dummy in range(len(data) - int(days) + 1): result.append(round(sum(data[-days:]) / days, 2)) data.pop() result.reverse() return result
Calculate the moving average :rtype: sequence, oldest→newest
Below is the instruction that describes the task: ### Input: Calculate the moving average :rtype: sequence, oldest→newest ### Response: def moving_average(self, data, days): """ Calculate the moving average :rtype: sequence, oldest→newest """ result = [] data = data[:] for dummy in range(len(data) - int(days) + 1): result.append(round(sum(data[-days:]) / days, 2)) data.pop() result.reverse() return result
def interval_timer(interval, func, *args, **kwargs): '''Interval timer function. Taken from: http://stackoverflow.com/questions/22498038/improvement-on-interval-python/22498708 ''' stopped = Event() def loop(): while not stopped.wait(interval): # the first call is after interval func(*args, **kwargs) Thread(name='IntervalTimerThread', target=loop).start() return stopped.set
Interval timer function. Taken from: http://stackoverflow.com/questions/22498038/improvement-on-interval-python/22498708
Below is the instruction that describes the task: ### Input: Interval timer function. Taken from: http://stackoverflow.com/questions/22498038/improvement-on-interval-python/22498708 ### Response: def interval_timer(interval, func, *args, **kwargs): '''Interval timer function. Taken from: http://stackoverflow.com/questions/22498038/improvement-on-interval-python/22498708 ''' stopped = Event() def loop(): while not stopped.wait(interval): # the first call is after interval func(*args, **kwargs) Thread(name='IntervalTimerThread', target=loop).start() return stopped.set
def download_bhavcopy(self, d): """returns bhavcopy as csv file.""" # ex_url = "https://www.nseindia.com/content/historical/EQUITIES/2011/NOV/cm08NOV2011bhav.csv.zip" url = self.get_bhavcopy_url(d) filename = self.get_bhavcopy_filename(d) # response = requests.get(url, headers=self.headers) response = self.opener.open(Request(url, None, self.headers)) zip_file_handle = io.BytesIO(response.read()) zf = zipfile.ZipFile(zip_file_handle) try: result = zf.read(filename) except KeyError: result = zf.read(zf.filelist[0].filename) return result
returns bhavcopy as csv file.
Below is the instruction that describes the task: ### Input: returns bhavcopy as csv file. ### Response: def download_bhavcopy(self, d): """returns bhavcopy as csv file.""" # ex_url = "https://www.nseindia.com/content/historical/EQUITIES/2011/NOV/cm08NOV2011bhav.csv.zip" url = self.get_bhavcopy_url(d) filename = self.get_bhavcopy_filename(d) # response = requests.get(url, headers=self.headers) response = self.opener.open(Request(url, None, self.headers)) zip_file_handle = io.BytesIO(response.read()) zf = zipfile.ZipFile(zip_file_handle) try: result = zf.read(filename) except KeyError: result = zf.read(zf.filelist[0].filename) return result
def reorder(self, dst_order, arr, src_order=None): """Reorder the output array to match that needed by the viewer.""" if dst_order is None: dst_order = self.viewer.rgb_order if src_order is None: src_order = self.rgb_order if src_order != dst_order: arr = trcalc.reorder_image(dst_order, arr, src_order) return arr
Reorder the output array to match that needed by the viewer.
Below is the instruction that describes the task: ### Input: Reorder the output array to match that needed by the viewer. ### Response: def reorder(self, dst_order, arr, src_order=None): """Reorder the output array to match that needed by the viewer.""" if dst_order is None: dst_order = self.viewer.rgb_order if src_order is None: src_order = self.rgb_order if src_order != dst_order: arr = trcalc.reorder_image(dst_order, arr, src_order) return arr
def fromTFExample(bytestr): """Deserializes a TFExample from a byte string""" example = tf.train.Example() example.ParseFromString(bytestr) return example
Deserializes a TFExample from a byte string
Below is the instruction that describes the task: ### Input: Deserializes a TFExample from a byte string ### Response: def fromTFExample(bytestr): """Deserializes a TFExample from a byte string""" example = tf.train.Example() example.ParseFromString(bytestr) return example
def read_and_save_data(info_df, raw_dir, sep=";", force_raw=False, force_cellpy=False, export_cycles=False, shifted_cycles=False, export_raw=True, export_ica=False, save=True, use_cellpy_stat_file=False, parent_level="CellpyData", last_cycle=None, ): """Reads and saves cell data defined by the info-DataFrame. The function iterates through the ``info_df`` and loads data from the runs. It saves individual data for each run (if selected), as well as returns a list of ``cellpy`` summary DataFrames, a list of the indexes (one for each run; same as used as index in the ``info_df``), as well as a list with indexes of runs (cells) where an error was encountered during loading. Args: use_cellpy_stat_file: use the stat file to perform the calculations. info_df: pandas.DataFrame with information about the runs. raw_dir: path to location where you want to save raw data. sep: delimiter to use when exporting to csv. force_raw: load raw data even though cellpy-file is up-to-date. force_cellpy: load cellpy files even though cellpy-file is not up-to-date. export_cycles: set to True for exporting cycles to csv. shifted_cycles: set to True for exporting the cycles with a cumulated shift. export_raw: set to True for exporting raw data to csv. export_ica: set to True for calculating and exporting dQ/dV to csv. save: set to False to prevent saving a cellpy-file. parent_level: optional, should use "cellpydata" for older hdf5-files and default for newer ones. Returns: frames (list of cellpy summary DataFrames), keys (list of indexes), errors (list of indexes that encountered errors). """ no_export = False do_export_dqdv = export_ica keys = [] frames = [] number_of_runs = len(info_df) counter = 0 errors = [] for indx, row in info_df.iterrows(): counter += 1 h_txt = "[" + counter * "|" + (number_of_runs - counter) * "." + "]" l_txt = "starting to process file # %i (index=%s)" % (counter, indx) logger.debug(l_txt) print(h_txt) if not row.raw_file_names and not force_cellpy: logger.info("File(s) not found!") logger.info(indx) logger.debug("File(s) not found for index=%s" % indx) errors.append(indx) continue else: logger.info(f"Processing {indx}") cell_data = cellreader.CellpyData() if not force_cellpy: logger.info("setting cycle mode (%s)..." % row.cell_type) cell_data.set_cycle_mode(row.cell_type) logger.info("loading cell") if not force_cellpy: logger.info("not forcing") try: cell_data.loadcell(raw_files=row.raw_file_names, cellpy_file=row.cellpy_file_names, mass=row.masses, summary_on_raw=True, force_raw=force_raw, use_cellpy_stat_file=use_cellpy_stat_file) except Exception as e: logger.debug('Failed to load: ' + str(e)) errors.append("loadcell:" + str(indx)) continue else: logger.info("forcing") try: cell_data.load(row.cellpy_file_names, parent_level=parent_level) except Exception as e: logger.info(f"Critical exception encountered {type(e)} " "- skipping this file") logger.debug('Failed to load. Error-message: ' + str(e)) errors.append("load:" + str(indx)) continue if not cell_data.check(): logger.info("...not loaded...") logger.debug("Did not pass check(). Could not load cell!") errors.append("check:" + str(indx)) continue logger.info("...loaded successfully...") keys.append(indx) summary_tmp = cell_data.dataset.dfsummary logger.info("Trying to get summary_data") if summary_tmp is None: logger.info("No existing summary made - running make_summary") cell_data.make_summary(find_end_voltage=True, find_ir=True) if summary_tmp.index.name == b"Cycle_Index": logger.debug("Strange: 'Cycle_Index' is a byte-string") summary_tmp.index.name = 'Cycle_Index' if not summary_tmp.index.name == "Cycle_Index": logger.debug("Setting index to Cycle_Index") # check if it is a byte-string if b"Cycle_Index" in summary_tmp.columns: logger.debug("Seems to be a byte-string in the column-headers") summary_tmp.rename(columns={b"Cycle_Index": 'Cycle_Index'}, inplace=True) summary_tmp.set_index("Cycle_Index", inplace=True) frames.append(summary_tmp) if save: if not row.fixed: logger.info("saving cell to %s" % row.cellpy_file_names) cell_data.ensure_step_table = True cell_data.save(row.cellpy_file_names) else: logger.debug("saving cell skipped (set to 'fixed' in info_df)") if no_export: continue if export_raw: logger.info("exporting csv") cell_data.to_csv(raw_dir, sep=sep, cycles=export_cycles, shifted=shifted_cycles, raw=export_raw, last_cycle=last_cycle) if do_export_dqdv: logger.info("exporting dqdv") try: export_dqdv(cell_data, savedir=raw_dir, sep=sep, last_cycle=last_cycle) except Exception as e: logging.error("Could not make/export dq/dv data") logger.debug("Failed to make/export " "dq/dv data (%s): %s" % (indx, str(e))) errors.append("ica:" + str(indx)) if len(errors) > 0: logger.error("Finished with errors!") logger.debug(errors) else: logger.info("Finished") return frames, keys, errors
Reads and saves cell data defined by the info-DataFrame. The function iterates through the ``info_df`` and loads data from the runs. It saves individual data for each run (if selected), as well as returns a list of ``cellpy`` summary DataFrames, a list of the indexes (one for each run; same as used as index in the ``info_df``), as well as a list with indexes of runs (cells) where an error was encountered during loading. Args: use_cellpy_stat_file: use the stat file to perform the calculations. info_df: pandas.DataFrame with information about the runs. raw_dir: path to location where you want to save raw data. sep: delimiter to use when exporting to csv. force_raw: load raw data even though cellpy-file is up-to-date. force_cellpy: load cellpy files even though cellpy-file is not up-to-date. export_cycles: set to True for exporting cycles to csv. shifted_cycles: set to True for exporting the cycles with a cumulated shift. export_raw: set to True for exporting raw data to csv. export_ica: set to True for calculating and exporting dQ/dV to csv. save: set to False to prevent saving a cellpy-file. parent_level: optional, should use "cellpydata" for older hdf5-files and default for newer ones. Returns: frames (list of cellpy summary DataFrames), keys (list of indexes), errors (list of indexes that encountered errors).
Below is the instruction that describes the task: ### Input: Reads and saves cell data defined by the info-DataFrame. The function iterates through the ``info_df`` and loads data from the runs. It saves individual data for each run (if selected), as well as returns a list of ``cellpy`` summary DataFrames, a list of the indexes (one for each run; same as used as index in the ``info_df``), as well as a list with indexes of runs (cells) where an error was encountered during loading. Args: use_cellpy_stat_file: use the stat file to perform the calculations. info_df: pandas.DataFrame with information about the runs. raw_dir: path to location where you want to save raw data. sep: delimiter to use when exporting to csv. force_raw: load raw data even though cellpy-file is up-to-date. force_cellpy: load cellpy files even though cellpy-file is not up-to-date. export_cycles: set to True for exporting cycles to csv. shifted_cycles: set to True for exporting the cycles with a cumulated shift. export_raw: set to True for exporting raw data to csv. export_ica: set to True for calculating and exporting dQ/dV to csv. save: set to False to prevent saving a cellpy-file. parent_level: optional, should use "cellpydata" for older hdf5-files and default for newer ones. Returns: frames (list of cellpy summary DataFrames), keys (list of indexes), errors (list of indexes that encountered errors). ### Response: def read_and_save_data(info_df, raw_dir, sep=";", force_raw=False, force_cellpy=False, export_cycles=False, shifted_cycles=False, export_raw=True, export_ica=False, save=True, use_cellpy_stat_file=False, parent_level="CellpyData", last_cycle=None, ): """Reads and saves cell data defined by the info-DataFrame. The function iterates through the ``info_df`` and loads data from the runs.
It saves individual data for each run (if selected), as well as returns a list of ``cellpy`` summary DataFrames, a list of the indexes (one for each run; same as used as index in the ``info_df``), as well as a list with indexes of runs (cells) where an error was encountered during loading. Args: use_cellpy_stat_file: use the stat file to perform the calculations. info_df: pandas.DataFrame with information about the runs. raw_dir: path to location where you want to save raw data. sep: delimiter to use when exporting to csv. force_raw: load raw data even though cellpy-file is up-to-date. force_cellpy: load cellpy files even though cellpy-file is not up-to-date. export_cycles: set to True for exporting cycles to csv. shifted_cycles: set to True for exporting the cycles with a cumulated shift. export_raw: set to True for exporting raw data to csv. export_ica: set to True for calculating and exporting dQ/dV to csv. save: set to False to prevent saving a cellpy-file. parent_level: optional, should use "cellpydata" for older hdf5-files and default for newer ones. Returns: frames (list of cellpy summary DataFrames), keys (list of indexes), errors (list of indexes that encountered errors). """ no_export = False do_export_dqdv = export_ica keys = [] frames = [] number_of_runs = len(info_df) counter = 0 errors = [] for indx, row in info_df.iterrows(): counter += 1 h_txt = "[" + counter * "|" + (number_of_runs - counter) * "." + "]" l_txt = "starting to process file # %i (index=%s)" % (counter, indx) logger.debug(l_txt) print(h_txt) if not row.raw_file_names and not force_cellpy: logger.info("File(s) not found!") logger.info(indx) logger.debug("File(s) not found for index=%s" % indx) errors.append(indx) continue else: logger.info(f"Processing {indx}") cell_data = cellreader.CellpyData() if not force_cellpy: logger.info("setting cycle mode (%s)..."
% row.cell_type) cell_data.set_cycle_mode(row.cell_type) logger.info("loading cell") if not force_cellpy: logger.info("not forcing") try: cell_data.loadcell(raw_files=row.raw_file_names, cellpy_file=row.cellpy_file_names, mass=row.masses, summary_on_raw=True, force_raw=force_raw, use_cellpy_stat_file=use_cellpy_stat_file) except Exception as e: logger.debug('Failed to load: ' + str(e)) errors.append("loadcell:" + str(indx)) continue else: logger.info("forcing") try: cell_data.load(row.cellpy_file_names, parent_level=parent_level) except Exception as e: logger.info(f"Critical exception encountered {type(e)} " "- skipping this file") logger.debug('Failed to load. Error-message: ' + str(e)) errors.append("load:" + str(indx)) continue if not cell_data.check(): logger.info("...not loaded...") logger.debug("Did not pass check(). Could not load cell!") errors.append("check:" + str(indx)) continue logger.info("...loaded successfully...") keys.append(indx) summary_tmp = cell_data.dataset.dfsummary logger.info("Trying to get summary_data") if summary_tmp is None: logger.info("No existing summary made - running make_summary") cell_data.make_summary(find_end_voltage=True, find_ir=True) if summary_tmp.index.name == b"Cycle_Index": logger.debug("Strange: 'Cycle_Index' is a byte-string") summary_tmp.index.name = 'Cycle_Index' if not summary_tmp.index.name == "Cycle_Index": logger.debug("Setting index to Cycle_Index") # check if it is a byte-string if b"Cycle_Index" in summary_tmp.columns: logger.debug("Seems to be a byte-string in the column-headers") summary_tmp.rename(columns={b"Cycle_Index": 'Cycle_Index'}, inplace=True) summary_tmp.set_index("Cycle_Index", inplace=True) frames.append(summary_tmp) if save: if not row.fixed: logger.info("saving cell to %s" % row.cellpy_file_names) cell_data.ensure_step_table = True cell_data.save(row.cellpy_file_names) else: logger.debug("saving cell skipped (set to 'fixed' in info_df)") if no_export: continue if export_raw: 
logger.info("exporting csv") cell_data.to_csv(raw_dir, sep=sep, cycles=export_cycles, shifted=shifted_cycles, raw=export_raw, last_cycle=last_cycle) if do_export_dqdv: logger.info("exporting dqdv") try: export_dqdv(cell_data, savedir=raw_dir, sep=sep, last_cycle=last_cycle) except Exception as e: logging.error("Could not make/export dq/dv data") logger.debug("Failed to make/export " "dq/dv data (%s): %s" % (indx, str(e))) errors.append("ica:" + str(indx)) if len(errors) > 0: logger.error("Finished with errors!") logger.debug(errors) else: logger.info("Finished") return frames, keys, errors
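The row-skipping bookkeeping above is easy to exercise on its own. In this sketch the two-row ``info_df`` and all of its values are hypothetical stand-ins for a real journal frame; only the guard and error-collection logic mirror the function:

```python
import pandas as pd

# Hypothetical minimal info-DataFrame; the column names mirror the ones
# read_and_save_data accesses via attribute lookup on each row.
info_df = pd.DataFrame(
    {
        "raw_file_names": [["cell_01.res"], []],   # run_02 has no raw files
        "cellpy_file_names": ["cell_01.h5", "cell_02.h5"],
        "masses": [0.42, 0.37],
    },
    index=["run_01", "run_02"],
)

errors = []
for counter, (indx, row) in enumerate(info_df.iterrows(), start=1):
    # same progress marker as h_txt above: one '|' per finished run
    h_txt = "[" + counter * "|" + (len(info_df) - counter) * "." + "]"
    if not row.raw_file_names:                     # same guard as the function
        errors.append(indx)
        continue
    print(h_txt, "would load", row.raw_file_names, "with mass", row.masses)

print(errors)  # ['run_02']
```

Just as in the function, the run without raw files is recorded in ``errors`` and skipped rather than aborting the whole batch.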
def _process_response(cls, response): """ Examine the response and raise an error if something is off """ if len(response) != 1: raise BadResponseError("Malformed response: {}".format(response)) stats = list(itervalues(response))[0] if not len(stats): raise BadResponseError("Malformed response for host: {}".format(stats)) return stats
Examine the response and raise an error if something is off
Below is the instruction that describes the task: ### Input: Examine the response and raise an error if something is off ### Response: def _process_response(cls, response): """ Examine the response and raise an error if something is off """ if len(response) != 1: raise BadResponseError("Malformed response: {}".format(response)) stats = list(itervalues(response))[0] if not len(stats): raise BadResponseError("Malformed response for host: {}".format(stats)) return stats
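The two validation guards above are easy to exercise with plain dicts; this sketch swaps six's ``itervalues`` for ``dict.values()`` and supplies a stand-in ``BadResponseError``:

```python
class BadResponseError(Exception):
    """Stand-in for the exception class used above."""

def process_response(response):
    """Mirror of the checks above: the response must map exactly one
    host to a non-empty stats mapping."""
    if len(response) != 1:
        raise BadResponseError("Malformed response: {}".format(response))
    stats = list(response.values())[0]   # plain dict.values() instead of six.itervalues
    if not len(stats):
        raise BadResponseError("Malformed response for host: {}".format(stats))
    return stats

print(process_response({"host-a": {"uptime": 99}}))  # {'uptime': 99}
```

Empty responses, multi-host responses, and a host with empty stats all raise ``BadResponseError``.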
def try_pull_image_from_registry(self, image_name, image_tag): """ Tries to pull an image with the tag ``image_tag`` from registry set by ``use_registry_name``. After the image is pulled, it's tagged with ``image_name``:``image_tag`` so lookup can be made locally next time. :return: A :class:`Image <docker.models.images.Image>` instance if the image exists, ``None`` otherwise. :rtype: Optional[docker.models.images.Image] """ try: image = self.client.images.pull(self.use_registry_name, image_tag) except (docker.errors.ImageNotFound, docker.errors.NotFound): # the image doesn't exist logger.info("Tried to pull %s:%s from a registry, not found", self.use_registry_name, image_tag) return None logger.info("Pulled %s:%s from registry, tagged %s:%s", self.use_registry_name, image_tag, image_name, image_tag) # the name and tag are different on the repo, let's tag it with local name so exists checks run smoothly image.tag(image_name, image_tag) return image
Tries to pull an image with the tag ``image_tag`` from registry set by ``use_registry_name``. After the image is pulled, it's tagged with ``image_name``:``image_tag`` so lookup can be made locally next time. :return: A :class:`Image <docker.models.images.Image>` instance if the image exists, ``None`` otherwise. :rtype: Optional[docker.models.images.Image]
Below is the instruction that describes the task: ### Input: Tries to pull an image with the tag ``image_tag`` from registry set by ``use_registry_name``. After the image is pulled, it's tagged with ``image_name``:``image_tag`` so lookup can be made locally next time. :return: A :class:`Image <docker.models.images.Image>` instance if the image exists, ``None`` otherwise. :rtype: Optional[docker.models.images.Image] ### Response: def try_pull_image_from_registry(self, image_name, image_tag): """ Tries to pull an image with the tag ``image_tag`` from registry set by ``use_registry_name``. After the image is pulled, it's tagged with ``image_name``:``image_tag`` so lookup can be made locally next time. :return: A :class:`Image <docker.models.images.Image>` instance if the image exists, ``None`` otherwise. :rtype: Optional[docker.models.images.Image] """ try: image = self.client.images.pull(self.use_registry_name, image_tag) except (docker.errors.ImageNotFound, docker.errors.NotFound): # the image doesn't exist logger.info("Tried to pull %s:%s from a registry, not found", self.use_registry_name, image_tag) return None logger.info("Pulled %s:%s from registry, tagged %s:%s", self.use_registry_name, image_tag, image_name, image_tag) # the name and tag are different on the repo, let's tag it with local name so exists checks run smoothly image.tag(image_name, image_tag) return image
def transaction_start(self, name): """ start a transaction; this will increment the transaction semaphore and pass it to _transaction_start() """ if not name: raise ValueError("Transaction name cannot be empty") #uid = id(self) self.transaction_count += 1 logger.debug("{}. Start transaction {}".format(self.transaction_count, name)) if self.transaction_count == 1: self._transaction_start() else: self._transaction_started(name) return self.transaction_count
start a transaction; this will increment the transaction semaphore and pass it to _transaction_start()
Below is the instruction that describes the task: ### Input: start a transaction; this will increment the transaction semaphore and pass it to _transaction_start() ### Response: def transaction_start(self, name): """ start a transaction; this will increment the transaction semaphore and pass it to _transaction_start() """ if not name: raise ValueError("Transaction name cannot be empty") #uid = id(self) self.transaction_count += 1 logger.debug("{}. Start transaction {}".format(self.transaction_count, name)) if self.transaction_count == 1: self._transaction_start() else: self._transaction_started(name) return self.transaction_count
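A minimal sketch of the nesting rule above — only the outermost ``transaction_start`` fires the real start hook — with the hooks recording what happened (the ``calls`` list is an illustration, not part of the original class):

```python
class TransactionMixin:
    def __init__(self):
        self.transaction_count = 0
        self.calls = []                 # records which hook fired, for illustration

    def _transaction_start(self):       # real work: only on the outermost start
        self.calls.append("start")

    def _transaction_started(self, name):   # nested starts are only notified
        self.calls.append("nested:" + name)

    def transaction_start(self, name):
        if not name:
            raise ValueError("Transaction name cannot be empty")
        self.transaction_count += 1
        if self.transaction_count == 1:
            self._transaction_start()
        else:
            self._transaction_started(name)
        return self.transaction_count

tx = TransactionMixin()
tx.transaction_start("load")    # outermost -> _transaction_start()
tx.transaction_start("update")  # nested    -> _transaction_started("update")
print(tx.calls)  # ['start', 'nested:update']
```

The returned count doubles as a nesting depth, which a matching ``transaction_end`` could decrement.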
def close(self): """ Close the node process. """ if self._closed: return False log.info("{module}: '{name}' [{id}]: is closing".format(module=self.manager.module_name, name=self.name, id=self.id)) if self._console: self._manager.port_manager.release_tcp_port(self._console, self._project) self._console = None if self._wrap_console: self._manager.port_manager.release_tcp_port(self._internal_console_port, self._project) self._internal_console_port = None if self._aux: self._manager.port_manager.release_tcp_port(self._aux, self._project) self._aux = None self._closed = True return True
Close the node process.
Below is the instruction that describes the task: ### Input: Close the node process. ### Response: def close(self): """ Close the node process. """ if self._closed: return False log.info("{module}: '{name}' [{id}]: is closing".format(module=self.manager.module_name, name=self.name, id=self.id)) if self._console: self._manager.port_manager.release_tcp_port(self._console, self._project) self._console = None if self._wrap_console: self._manager.port_manager.release_tcp_port(self._internal_console_port, self._project) self._internal_console_port = None if self._aux: self._manager.port_manager.release_tcp_port(self._aux, self._project) self._aux = None self._closed = True return True
def __taint_store(self, instr): """Taint STM instruction. """ # Get memory address. op2_val = self.__emu.read_operand(instr.operands[2]) # Get taint information. op0_size = instr.operands[0].size op0_taint = self.get_operand_taint(instr.operands[0]) # Propagate taint. self.set_memory_taint(op2_val, op0_size // 8, op0_taint)
Taint STM instruction.
Below is the instruction that describes the task: ### Input: Taint STM instruction. ### Response: def __taint_store(self, instr): """Taint STM instruction. """ # Get memory address. op2_val = self.__emu.read_operand(instr.operands[2]) # Get taint information. op0_size = instr.operands[0].size op0_taint = self.get_operand_taint(instr.operands[0]) # Propagate taint. self.set_memory_taint(op2_val, op0_size // 8, op0_taint)
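The propagation step above marks ``op0_size // 8`` bytes of memory with the source operand's taint. A library-free sketch with a plain dict as the taint map (the dict-based map is an assumption made for illustration; the real engine keeps its own structures):

```python
def set_memory_taint(taint_map, address, size_bytes, tainted):
    """Mark `size_bytes` bytes starting at `address` with the taint value."""
    for offset in range(size_bytes):
        taint_map[address + offset] = tainted

taint_map = {}
# a 32-bit store (32 // 8 == 4 bytes) of a tainted register to 0x1000
set_memory_taint(taint_map, 0x1000, 32 // 8, True)
print(sorted(taint_map))  # [4096, 4097, 4098, 4099]
```

A later load from any of those four addresses would pick the taint back up, which is how taint flows through memory.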
def add_reaction_constraints(model, reactions, Constraint): """ Add the stoichiometric coefficients as constraints. Parameters ---------- model : optlang.Model The transposed stoichiometric matrix representation. reactions : iterable Container of `cobra.Reaction` instances. Constraint : optlang.Constraint The constraint class for the specific interface. """ constraints = [] for rxn in reactions: expression = add( [c * model.variables[m.id] for m, c in rxn.metabolites.items()]) constraints.append(Constraint(expression, lb=0, ub=0, name=rxn.id)) model.add(constraints)
Add the stoichiometric coefficients as constraints. Parameters ---------- model : optlang.Model The transposed stoichiometric matrix representation. reactions : iterable Container of `cobra.Reaction` instances. Constraint : optlang.Constraint The constraint class for the specific interface.
Below is the instruction that describes the task: ### Input: Add the stoichiometric coefficients as constraints. Parameters ---------- model : optlang.Model The transposed stoichiometric matrix representation. reactions : iterable Container of `cobra.Reaction` instances. Constraint : optlang.Constraint The constraint class for the specific interface. ### Response: def add_reaction_constraints(model, reactions, Constraint): """ Add the stoichiometric coefficients as constraints. Parameters ---------- model : optlang.Model The transposed stoichiometric matrix representation. reactions : iterable Container of `cobra.Reaction` instances. Constraint : optlang.Constraint The constraint class for the specific interface. """ constraints = [] for rxn in reactions: expression = add( [c * model.variables[m.id] for m, c in rxn.metabolites.items()]) constraints.append(Constraint(expression, lb=0, ub=0, name=rxn.id)) model.add(constraints)
def check(call_fct): """ Decorator for optionable __call__ method. It checks the given option values """ # wrap the method @wraps(call_fct) def checked_call(self, *args, **kwargs): self.set_options_values(kwargs, parse=False, strict=True) options_values = self.get_options_values(hidden=True) return call_fct(self, *args, **options_values) # add a flag on the new method to indicate that it is 'checked' checked_call._checked = True checked_call._no_check = call_fct return checked_call
Decorator for optionable __call__ method. It checks the given option values
Below is the instruction that describes the task: ### Input: Decorator for optionable __call__ method. It checks the given option values ### Response: def check(call_fct): """ Decorator for optionable __call__ method. It checks the given option values """ # wrap the method @wraps(call_fct) def checked_call(self, *args, **kwargs): self.set_options_values(kwargs, parse=False, strict=True) options_values = self.get_options_values(hidden=True) return call_fct(self, *args, **options_values) # add a flag on the new method to indicate that it is 'checked' checked_call._checked = True checked_call._no_check = call_fct return checked_call
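A runnable sketch of the decorator against a minimal hypothetical optionable class. ``Scaler`` and its two option methods are invented for illustration, and the real ``set_options_values``/``get_options_values`` take extra arguments (``parse``, ``strict``, ``hidden``) that are dropped here:

```python
from functools import wraps

def check(call_fct):
    @wraps(call_fct)
    def checked_call(self, *args, **kwargs):
        self.set_options_values(kwargs)          # validate/store the options
        options_values = self.get_options_values()
        return call_fct(self, *args, **options_values)
    checked_call._checked = True
    checked_call._no_check = call_fct            # keep the raw method reachable
    return checked_call

class Scaler:
    def __init__(self):
        self._options = {"factor": 2}            # declared option + default

    def set_options_values(self, values):
        for key, value in values.items():
            if key not in self._options:
                raise ValueError("unknown option: %s" % key)
            self._options[key] = value

    def get_options_values(self):
        return dict(self._options)

    @check
    def __call__(self, x, factor=1):
        return x * factor

scaler = Scaler()
print(scaler(10))                # default factor=2 -> 20
print(scaler(10, factor=3))      # -> 30
print(scaler.__call__._checked)  # True
```

Unknown keyword options are rejected by ``set_options_values`` before the wrapped method ever runs, which is the point of the decorator.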
def wait_for_edge(channel, trigger, timeout=-1): """ This function is designed to block execution of your program until an edge is detected. :param channel: the channel based on the numbering system you have specified (:py:attr:`GPIO.BOARD`, :py:attr:`GPIO.BCM` or :py:attr:`GPIO.SUNXI`). :param trigger: The event to detect, one of: :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. :param timeout: (optional) TODO In other words, the polling example above that waits for a button press could be rewritten as: .. code:: python GPIO.wait_for_edge(channel, GPIO.RISING) Note that you can detect edges of type :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. The advantage of doing it this way is that it uses a negligible amount of CPU, so there is plenty left for other tasks. If you only want to wait for a certain length of time, you can use the timeout parameter: .. code:: python # wait for up to 5 seconds for a rising edge (timeout is in milliseconds) channel = GPIO.wait_for_edge(channel, GPIO.RISING, timeout=5000) if channel is None: print('Timeout occurred') else: print('Edge detected on channel', channel) """ _check_configured(channel, direction=IN) pin = get_gpio_pin(_mode, channel) if event.blocking_wait_for_edge(pin, trigger, timeout) is not None: return channel
This function is designed to block execution of your program until an edge is detected. :param channel: the channel based on the numbering system you have specified (:py:attr:`GPIO.BOARD`, :py:attr:`GPIO.BCM` or :py:attr:`GPIO.SUNXI`). :param trigger: The event to detect, one of: :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. :param timeout: (optional) TODO In other words, the polling example above that waits for a button press could be rewritten as: .. code:: python GPIO.wait_for_edge(channel, GPIO.RISING) Note that you can detect edges of type :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. The advantage of doing it this way is that it uses a negligible amount of CPU, so there is plenty left for other tasks. If you only want to wait for a certain length of time, you can use the timeout parameter: .. code:: python # wait for up to 5 seconds for a rising edge (timeout is in milliseconds) channel = GPIO.wait_for_edge(channel, GPIO.RISING, timeout=5000) if channel is None: print('Timeout occurred') else: print('Edge detected on channel', channel)
Below is the instruction that describes the task: ### Input: This function is designed to block execution of your program until an edge is detected. :param channel: the channel based on the numbering system you have specified (:py:attr:`GPIO.BOARD`, :py:attr:`GPIO.BCM` or :py:attr:`GPIO.SUNXI`). :param trigger: The event to detect, one of: :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. :param timeout: (optional) TODO In other words, the polling example above that waits for a button press could be rewritten as: .. code:: python GPIO.wait_for_edge(channel, GPIO.RISING) Note that you can detect edges of type :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. The advantage of doing it this way is that it uses a negligible amount of CPU, so there is plenty left for other tasks. If you only want to wait for a certain length of time, you can use the timeout parameter: .. code:: python # wait for up to 5 seconds for a rising edge (timeout is in milliseconds) channel = GPIO.wait_for_edge(channel, GPIO.RISING, timeout=5000) if channel is None: print('Timeout occurred') else: print('Edge detected on channel', channel) ### Response: def wait_for_edge(channel, trigger, timeout=-1): """ This function is designed to block execution of your program until an edge is detected. :param channel: the channel based on the numbering system you have specified (:py:attr:`GPIO.BOARD`, :py:attr:`GPIO.BCM` or :py:attr:`GPIO.SUNXI`). :param trigger: The event to detect, one of: :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`. :param timeout: (optional) TODO In other words, the polling example above that waits for a button press could be rewritten as: .. code:: python GPIO.wait_for_edge(channel, GPIO.RISING) Note that you can detect edges of type :py:attr:`GPIO.RISING`, :py:attr:`GPIO.FALLING` or :py:attr:`GPIO.BOTH`.
The advantage of doing it this way is that it uses a negligible amount of CPU, so there is plenty left for other tasks. If you only want to wait for a certain length of time, you can use the timeout parameter: .. code:: python # wait for up to 5 seconds for a rising edge (timeout is in milliseconds) channel = GPIO.wait_for_edge(channel, GPIO.RISING, timeout=5000) if channel is None: print('Timeout occurred') else: print('Edge detected on channel', channel) """ _check_configured(channel, direction=IN) pin = get_gpio_pin(_mode, channel) if event.blocking_wait_for_edge(pin, trigger, timeout) is not None: return channel
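The blocking-with-timeout behaviour can be reproduced without GPIO hardware; in this sketch a ``threading.Event`` stands in for the pin edge, and the millisecond convention (negative means wait forever) follows the signature above:

```python
import threading

def wait_for_edge_sw(event, timeout_ms=-1):
    """Block until `event` fires; a negative timeout waits forever,
    otherwise give up after timeout_ms milliseconds (returns False)."""
    timeout = None if timeout_ms < 0 else timeout_ms / 1000.0
    return event.wait(timeout)

edge = threading.Event()
threading.Timer(0.05, edge.set).start()   # simulate an edge after ~50 ms
print(wait_for_edge_sw(edge, timeout_ms=5000))             # True: edge in time
print(wait_for_edge_sw(threading.Event(), timeout_ms=10))  # False: timed out
```

Like the real function, the caller distinguishes "edge detected" from "timed out" purely by the return value, with near-zero CPU use while waiting.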
def indices(self, fit): """return the set of indices to be reevaluated for noise measurement. Given the first values are the earliest, this is a useful policy also with a time changing objective. """ ## meta_parameters.noise_reeval_multiplier == 1.0 lam_reev = 1.0 * (self.lam_reeval if self.lam_reeval else 2 + len(fit) / 20) lam_reev = int(lam_reev) + ((lam_reev % 1) > np.random.rand()) ## meta_parameters.noise_choose_reeval == 1 choice = 1 if choice == 1: # take n_first first and reev - n_first best of the remaining n_first = lam_reev - lam_reev // 2 sort_idx = np.argsort(array(fit, copy=False)[n_first:]) + n_first return np.array(list(range(0, n_first)) + list(sort_idx[0:lam_reev - n_first]), copy=False) elif choice == 2: idx_sorted = np.argsort(array(fit, copy=False)) # take lam_reev equally spaced, starting with best linsp = np.linspace(0, len(fit) - len(fit) / lam_reev, lam_reev) return idx_sorted[[int(i) for i in linsp]] # take the ``lam_reeval`` best from the first ``2 * lam_reeval + 2`` values. elif choice == 3: return np.argsort(array(fit, copy=False)[:2 * (lam_reev + 1)])[:lam_reev] else: raise ValueError('unrecognized choice value %d for noise reev' % choice)
return the set of indices to be reevaluated for noise measurement. Given the first values are the earliest, this is a useful policy also with a time changing objective.
Below is the instruction that describes the task: ### Input: return the set of indices to be reevaluated for noise measurement. Given the first values are the earliest, this is a useful policy also with a time changing objective. ### Response: def indices(self, fit): """return the set of indices to be reevaluated for noise measurement. Given the first values are the earliest, this is a useful policy also with a time changing objective. """ ## meta_parameters.noise_reeval_multiplier == 1.0 lam_reev = 1.0 * (self.lam_reeval if self.lam_reeval else 2 + len(fit) / 20) lam_reev = int(lam_reev) + ((lam_reev % 1) > np.random.rand()) ## meta_parameters.noise_choose_reeval == 1 choice = 1 if choice == 1: # take n_first first and reev - n_first best of the remaining n_first = lam_reev - lam_reev // 2 sort_idx = np.argsort(array(fit, copy=False)[n_first:]) + n_first return np.array(list(range(0, n_first)) + list(sort_idx[0:lam_reev - n_first]), copy=False) elif choice == 2: idx_sorted = np.argsort(array(fit, copy=False)) # take lam_reev equally spaced, starting with best linsp = np.linspace(0, len(fit) - len(fit) / lam_reev, lam_reev) return idx_sorted[[int(i) for i in linsp]] # take the ``lam_reeval`` best from the first ``2 * lam_reeval + 2`` values. elif choice == 3: return np.argsort(array(fit, copy=False)[:2 * (lam_reev + 1)])[:lam_reev] else: raise ValueError('unrecognized choice value %d for noise reev' % choice)
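The ``choice == 1`` policy above — keep the first ``n_first`` (earliest) solutions and add the best of the remainder — extracted into a standalone function:

```python
import numpy as np

def reeval_indices(fit, lam_reev):
    """choice == 1 above: the first n_first indices plus the
    (lam_reev - n_first) best-fitness indices of the rest."""
    fit = np.asarray(fit)
    n_first = lam_reev - lam_reev // 2
    sort_idx = np.argsort(fit[n_first:]) + n_first
    return np.array(list(range(n_first)) + list(sort_idx[:lam_reev - n_first]))

fit = [5.0, 3.0, 9.0, 1.0, 7.0, 0.5]
print(reeval_indices(fit, 4))  # [0 1 5 3]: 0, 1 kept; 5, 3 are best of the rest
```

Mixing the earliest solutions with the current best is what makes the policy robust when the objective drifts over time.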
def information_gain(reference_beats, estimated_beats, bins=41): """Get the information gain - K-L divergence of the beat error histogram to a uniform histogram Examples -------- >>> reference_beats = mir_eval.io.load_events('reference.txt') >>> reference_beats = mir_eval.beat.trim_beats(reference_beats) >>> estimated_beats = mir_eval.io.load_events('estimated.txt') >>> estimated_beats = mir_eval.beat.trim_beats(estimated_beats) >>> information_gain = mir_eval.beat.information_gain(reference_beats, estimated_beats) Parameters ---------- reference_beats : np.ndarray reference beat times, in seconds estimated_beats : np.ndarray query beat times, in seconds bins : int Number of bins in the beat error histogram (Default value = 41) Returns ------- information_gain_score : float Entropy of beat error histogram """ validate(reference_beats, estimated_beats) # If an even number of bins is provided, # there will be no bin centered at zero, so warn the user. if not bins % 2: warnings.warn("bins parameter is even, " "so there will not be a bin centered at zero.") # Warn when only one beat is provided for either estimated or reference, # report a warning if reference_beats.size == 1: warnings.warn("Only one reference beat was provided, so beat intervals" " cannot be computed.") if estimated_beats.size == 1: warnings.warn("Only one estimated beat was provided, so beat intervals" " cannot be computed.") # When estimated or reference beats have <= 1 beats, can't compute the # metric, so return 0 if estimated_beats.size <= 1 or reference_beats.size <= 1: return 0. 
# Get entropy for reference beats->estimated beats # and estimated beats->reference beats forward_entropy = _get_entropy(reference_beats, estimated_beats, bins) backward_entropy = _get_entropy(estimated_beats, reference_beats, bins) # Pick the larger of the entropies norm = np.log2(bins) if forward_entropy > backward_entropy: # Note that the beat evaluation toolbox does not normalize information_gain_score = (norm - forward_entropy)/norm else: information_gain_score = (norm - backward_entropy)/norm return information_gain_score
Get the information gain - K-L divergence of the beat error histogram to a uniform histogram Examples -------- >>> reference_beats = mir_eval.io.load_events('reference.txt') >>> reference_beats = mir_eval.beat.trim_beats(reference_beats) >>> estimated_beats = mir_eval.io.load_events('estimated.txt') >>> estimated_beats = mir_eval.beat.trim_beats(estimated_beats) >>> information_gain = mir_eval.beat.information_gain(reference_beats, estimated_beats) Parameters ---------- reference_beats : np.ndarray reference beat times, in seconds estimated_beats : np.ndarray query beat times, in seconds bins : int Number of bins in the beat error histogram (Default value = 41) Returns ------- information_gain_score : float Entropy of beat error histogram
Below is the instruction that describes the task: ### Input: Get the information gain - K-L divergence of the beat error histogram to a uniform histogram Examples -------- >>> reference_beats = mir_eval.io.load_events('reference.txt') >>> reference_beats = mir_eval.beat.trim_beats(reference_beats) >>> estimated_beats = mir_eval.io.load_events('estimated.txt') >>> estimated_beats = mir_eval.beat.trim_beats(estimated_beats) >>> information_gain = mir_eval.beat.information_gain(reference_beats, estimated_beats) Parameters ---------- reference_beats : np.ndarray reference beat times, in seconds estimated_beats : np.ndarray query beat times, in seconds bins : int Number of bins in the beat error histogram (Default value = 41) Returns ------- information_gain_score : float Entropy of beat error histogram ### Response: def information_gain(reference_beats, estimated_beats, bins=41): """Get the information gain - K-L divergence of the beat error histogram to a uniform histogram Examples -------- >>> reference_beats = mir_eval.io.load_events('reference.txt') >>> reference_beats = mir_eval.beat.trim_beats(reference_beats) >>> estimated_beats = mir_eval.io.load_events('estimated.txt') >>> estimated_beats = mir_eval.beat.trim_beats(estimated_beats) >>> information_gain = mir_eval.beat.information_gain(reference_beats, estimated_beats) Parameters ---------- reference_beats : np.ndarray reference beat times, in seconds estimated_beats : np.ndarray query beat times, in seconds bins : int Number of bins in the beat error histogram (Default value = 41) Returns ------- information_gain_score : float Entropy of beat error histogram """ validate(reference_beats, estimated_beats) # If an even number of bins is provided, # there will be no bin centered at zero, so warn the user.
if not bins % 2: warnings.warn("bins parameter is even, " "so there will not be a bin centered at zero.") # Warn when only one beat is provided for either estimated or reference, # report a warning if reference_beats.size == 1: warnings.warn("Only one reference beat was provided, so beat intervals" " cannot be computed.") if estimated_beats.size == 1: warnings.warn("Only one estimated beat was provided, so beat intervals" " cannot be computed.") # When estimated or reference beats have <= 1 beats, can't compute the # metric, so return 0 if estimated_beats.size <= 1 or reference_beats.size <= 1: return 0. # Get entropy for reference beats->estimated beats # and estimated beats->reference beats forward_entropy = _get_entropy(reference_beats, estimated_beats, bins) backward_entropy = _get_entropy(estimated_beats, reference_beats, bins) # Pick the larger of the entropies norm = np.log2(bins) if forward_entropy > backward_entropy: # Note that the beat evaluation toolbox does not normalize information_gain_score = (norm - forward_entropy)/norm else: information_gain_score = (norm - backward_entropy)/norm return information_gain_score
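The normalization at the end — ``norm = log2(bins)``, score ``(norm - H) / norm`` using the larger of the two entropies — can be reproduced with a simple stand-in for ``_get_entropy`` (the ``[-0.5, 0.5)`` error range and the histogram entropy below are assumptions about that helper):

```python
import numpy as np

def entropy_of_errors(errors, bins=41):
    """Hypothetical stand-in for _get_entropy: entropy in bits of a
    histogram of beat errors over [-0.5, 0.5)."""
    counts, _ = np.histogram(errors, bins=bins, range=(-0.5, 0.5))
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log2(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))

def information_gain_score(forward_errors, backward_errors, bins=41):
    """The final step above: normalize by log2(bins) and take the
    direction with the larger entropy (i.e. the smaller gain)."""
    norm = np.log2(bins)
    entropy = max(entropy_of_errors(forward_errors, bins),
                  entropy_of_errors(backward_errors, bins))
    return (norm - entropy) / norm

# perfectly matching beats -> all errors zero -> zero entropy -> score 1.0
print(information_gain_score([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # 1.0
```

A peaked error histogram (consistent tracking) scores near 1, while errors spread uniformly over the bins push the entropy toward ``log2(bins)`` and the score toward 0.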
def fill_n_todo(self): """ Calculate and record the number of edge pixels left to do on each tile """ left = self.left right = self.right top = self.top bottom = self.bottom for i in xrange(self.n_chunks): self.n_todo.ravel()[i] = np.sum([left.ravel()[i].n_todo, right.ravel()[i].n_todo, top.ravel()[i].n_todo, bottom.ravel()[i].n_todo])
Calculate and record the number of edge pixels left to do on each tile
Below is the instruction that describes the task: ### Input: Calculate and record the number of edge pixels left to do on each tile ### Response: def fill_n_todo(self): """ Calculate and record the number of edge pixels left to do on each tile """ left = self.left right = self.right top = self.top bottom = self.bottom for i in xrange(self.n_chunks): self.n_todo.ravel()[i] = np.sum([left.ravel()[i].n_todo, right.ravel()[i].n_todo, top.ravel()[i].n_todo, bottom.ravel()[i].n_todo])
def _load(self): """ Load editable settings from the database and return them as a dict. Delete any settings from the database that are no longer registered, and emit a warning if there are settings that are defined in both settings.py and the database. """ from yacms.conf.models import Setting removed_settings = [] conflicting_settings = [] new_cache = {} for setting_obj in Setting.objects.all(): # Check that the Setting object corresponds to a setting that has # been declared in code using ``register_setting()``. If not, add # it to a list of items to be deleted from the database later. try: setting = registry[setting_obj.name] except KeyError: removed_settings.append(setting_obj.name) continue # Convert a string from the database to the correct Python type. setting_value = self._to_python(setting, setting_obj.value) # If a setting is defined both in the database and in settings.py, # raise a warning and use the value defined in settings.py. if hasattr(django_settings, setting["name"]): if setting_value != setting["default"]: conflicting_settings.append(setting_obj.name) continue # If nothing went wrong, use the value from the database! new_cache[setting["name"]] = setting_value if removed_settings: Setting.objects.filter(name__in=removed_settings).delete() if conflicting_settings: warn("These settings are defined in both settings.py and " "the database: %s. The settings.py values will be used." % ", ".join(conflicting_settings)) return new_cache
Load editable settings from the database and return them as a dict. Delete any settings from the database that are no longer registered, and emit a warning if there are settings that are defined in both settings.py and the database.
Below is the instruction that describes the task: ### Input: Load editable settings from the database and return them as a dict. Delete any settings from the database that are no longer registered, and emit a warning if there are settings that are defined in both settings.py and the database. ### Response: def _load(self): """ Load editable settings from the database and return them as a dict. Delete any settings from the database that are no longer registered, and emit a warning if there are settings that are defined in both settings.py and the database. """ from yacms.conf.models import Setting removed_settings = [] conflicting_settings = [] new_cache = {} for setting_obj in Setting.objects.all(): # Check that the Setting object corresponds to a setting that has # been declared in code using ``register_setting()``. If not, add # it to a list of items to be deleted from the database later. try: setting = registry[setting_obj.name] except KeyError: removed_settings.append(setting_obj.name) continue # Convert a string from the database to the correct Python type. setting_value = self._to_python(setting, setting_obj.value) # If a setting is defined both in the database and in settings.py, # raise a warning and use the value defined in settings.py. if hasattr(django_settings, setting["name"]): if setting_value != setting["default"]: conflicting_settings.append(setting_obj.name) continue # If nothing went wrong, use the value from the database! new_cache[setting["name"]] = setting_value if removed_settings: Setting.objects.filter(name__in=removed_settings).delete() if conflicting_settings: warn("These settings are defined in both settings.py and " "the database: %s. The settings.py values will be used." % ", ".join(conflicting_settings)) return new_cache
def fit_radius_from_potentials(z, SampleFreq, Damping, HistBins=100, show_fig=False): """ Fits the dynamical potential to the Steady State Potential by varying the Radius. z : ndarray Position data SampleFreq : float frequency at which the position data was sampled Damping : float value of damping (in radians/second) HistBins : int number of values at which to evaluate the steady state potential / perform the fitting to the dynamical potential Returns ------- Radius : float Radius of the nanoparticle RadiusError : float One Standard Deviation Error in the Radius from the Fit (doesn't take into account possible error in damping) fig : matplotlib.figure.Figure object figure showing fitted dynamical potential and stationary potential ax : matplotlib.axes.Axes object axes for above figure """ dt = 1/SampleFreq boltzmann=Boltzmann temp=300 # why halved?? density=1800 SteadyStatePotnl = list(steady_state_potential(z, HistBins=HistBins)) yoffset=min(SteadyStatePotnl[1]) SteadyStatePotnl[1] -= yoffset SpringPotnlFunc = dynamical_potential(z, dt) SpringPotnl = SpringPotnlFunc(z) kBT_Gamma = temp*boltzmann*1/Damping DynamicPotentialFunc = make_dynamical_potential_func(kBT_Gamma, density, SpringPotnlFunc) FitSoln = _curve_fit(DynamicPotentialFunc, SteadyStatePotnl[0], SteadyStatePotnl[1], p0 = 50) print(FitSoln) popt, pcov = FitSoln perr = _np.sqrt(_np.diag(pcov)) Radius, RadiusError = popt[0], perr[0] mass=((4/3)*pi*((Radius*10**-9)**3))*density yfit=(kBT_Gamma/mass) Y = yfit*SpringPotnl fig, ax = _plt.subplots() ax.plot(SteadyStatePotnl[0], SteadyStatePotnl[1], 'bo', label="Steady State Potential") _plt.plot(z,Y, 'r-', label="Dynamical Potential") ax.legend(loc='best') ax.set_ylabel('U ($k_{B} T $ Joules)') ax.set_xlabel('Distance (mV)') _plt.tight_layout() if show_fig == True: _plt.show() return Radius*1e-9, RadiusError*1e-9, fig, ax
Fits the dynamical potential to the Steady State Potential by varying the Radius.

z : ndarray
    Position data
SampleFreq : float
    frequency at which the position data was sampled
Damping : float
    value of damping (in radians/second)
HistBins : int
    number of values at which to evaluate the steady state potential /
    perform the fitting to the dynamical potential

Returns
-------
Radius : float
    Radius of the nanoparticle
RadiusError : float
    One Standard Deviation Error in the Radius from the Fit
    (doesn't take into account possible error in damping)
fig : matplotlib.figure.Figure object
    figure showing fitted dynamical potential and stationary potential
ax : matplotlib.axes.Axes object
    axes for above figure
Below is the instruction that describes the task: ### Input: Fits the dynamical potential to the Steady State Potential by varying the Radius. z : ndarray Position data SampleFreq : float frequency at which the position data was sampled Damping : float value of damping (in radians/second) HistBins : int number of values at which to evaluate the steady state potential / perform the fitting to the dynamical potential Returns ------- Radius : float Radius of the nanoparticle RadiusError : float One Standard Deviation Error in the Radius from the Fit (doesn't take into account possible error in damping) fig : matplotlib.figure.Figure object figure showing fitted dynamical potential and stationary potential ax : matplotlib.axes.Axes object axes for above figure ### Response: def fit_radius_from_potentials(z, SampleFreq, Damping, HistBins=100, show_fig=False): """ Fits the dynamical potential to the Steady State Potential by varying the Radius. z : ndarray Position data SampleFreq : float frequency at which the position data was sampled Damping : float value of damping (in radians/second) HistBins : int number of values at which to evaluate the steady state potential / perform the fitting to the dynamical potential Returns ------- Radius : float Radius of the nanoparticle RadiusError : float One Standard Deviation Error in the Radius from the Fit (doesn't take into account possible error in damping) fig : matplotlib.figure.Figure object figure showing fitted dynamical potential and stationary potential ax : matplotlib.axes.Axes object axes for above figure """ dt = 1/SampleFreq boltzmann=Boltzmann temp=300 # why halved?? 
density=1800 SteadyStatePotnl = list(steady_state_potential(z, HistBins=HistBins)) yoffset=min(SteadyStatePotnl[1]) SteadyStatePotnl[1] -= yoffset SpringPotnlFunc = dynamical_potential(z, dt) SpringPotnl = SpringPotnlFunc(z) kBT_Gamma = temp*boltzmann*1/Damping DynamicPotentialFunc = make_dynamical_potential_func(kBT_Gamma, density, SpringPotnlFunc) FitSoln = _curve_fit(DynamicPotentialFunc, SteadyStatePotnl[0], SteadyStatePotnl[1], p0 = 50) print(FitSoln) popt, pcov = FitSoln perr = _np.sqrt(_np.diag(pcov)) Radius, RadiusError = popt[0], perr[0] mass=((4/3)*pi*((Radius*10**-9)**3))*density yfit=(kBT_Gamma/mass) Y = yfit*SpringPotnl fig, ax = _plt.subplots() ax.plot(SteadyStatePotnl[0], SteadyStatePotnl[1], 'bo', label="Steady State Potential") _plt.plot(z,Y, 'r-', label="Dynamical Potential") ax.legend(loc='best') ax.set_ylabel('U ($k_{B} T $ Joules)') ax.set_xlabel('Distance (mV)') _plt.tight_layout() if show_fig == True: _plt.show() return Radius*1e-9, RadiusError*1e-9, fig, ax
def grid_stack_for_simulation(cls, shape, pixel_scale, psf_shape, sub_grid_size=2): """Setup a grid-stack of grid_stack for simulating an image of a strong lens, whereby the grid's use \ padded-grid_stack to ensure that the PSF blurring in the simulation routine (*ccd.PrepatoryImage.simulate*) \ is not degraded due to edge effects. Parameters ----------- shape : (int, int) The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack. pixel_scale : float The size of each pixel in arc seconds. psf_shape : (int, int) The shape of the PSF used in the analysis, which defines how much the grid's must be masked to mitigate \ edge effects. sub_grid_size : int The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size). """ return cls.padded_grid_stack_from_mask_sub_grid_size_and_psf_shape(mask=msk.Mask(array=np.full(shape, False), pixel_scale=pixel_scale), sub_grid_size=sub_grid_size, psf_shape=psf_shape)
Setup a grid-stack of grid_stack for simulating an image of a strong lens, whereby the grid's use \
padded-grid_stack to ensure that the PSF blurring in the simulation routine (*ccd.PrepatoryImage.simulate*) \
is not degraded due to edge effects.

Parameters
-----------
shape : (int, int)
    The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack.
pixel_scale : float
    The size of each pixel in arc seconds.
psf_shape : (int, int)
    The shape of the PSF used in the analysis, which defines how much the grid's must be masked to mitigate \
    edge effects.
sub_grid_size : int
    The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size).
Below is the instruction that describes the task: ### Input: Setup a grid-stack of grid_stack for simulating an image of a strong lens, whereby the grid's use \ padded-grid_stack to ensure that the PSF blurring in the simulation routine (*ccd.PrepatoryImage.simulate*) \ is not degraded due to edge effects. Parameters ----------- shape : (int, int) The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack. pixel_scale : float The size of each pixel in arc seconds. psf_shape : (int, int) The shape of the PSF used in the analysis, which defines how much the grid's must be masked to mitigate \ edge effects. sub_grid_size : int The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size). ### Response: def grid_stack_for_simulation(cls, shape, pixel_scale, psf_shape, sub_grid_size=2): """Setup a grid-stack of grid_stack for simulating an image of a strong lens, whereby the grid's use \ padded-grid_stack to ensure that the PSF blurring in the simulation routine (*ccd.PrepatoryImage.simulate*) \ is not degraded due to edge effects. Parameters ----------- shape : (int, int) The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack. pixel_scale : float The size of each pixel in arc seconds. psf_shape : (int, int) The shape of the PSF used in the analysis, which defines how much the grid's must be masked to mitigate \ edge effects. sub_grid_size : int The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size). """ return cls.padded_grid_stack_from_mask_sub_grid_size_and_psf_shape(mask=msk.Mask(array=np.full(shape, False), pixel_scale=pixel_scale), sub_grid_size=sub_grid_size, psf_shape=psf_shape)
def listFileParentsByLumi(self, block_name='', logical_file_name=[]): """ required parameter: block_name returns: [{child_parent_id_list: [(cid1, pid1), (cid2, pid2), ... (cidn, pidn)]}] """ #self.logger.debug("lfn %s, block_name %s" % (logical_file_name, block_name)) if not block_name: dbsExceptionHandler('dbsException-invalid-input', \ "Child block_name is required for fileparents/listFileParentsByLumi api", self.logger.exception ) with self.dbi.connection() as conn: sqlresult = self.fileparentbylumi.execute(conn, block_name, logical_file_name) return [{"child_parent_id_list":sqlresult}]
required parameter: block_name
returns: [{child_parent_id_list: [(cid1, pid1), (cid2, pid2), ... (cidn, pidn)]}]
Below is the instruction that describes the task: ### Input: required parameter: block_name returns: [{child_parent_id_list: [(cid1, pid1), (cid2, pid2), ... (cidn, pidn)]}] ### Response: def listFileParentsByLumi(self, block_name='', logical_file_name=[]): """ required parameter: block_name returns: [{child_parent_id_list: [(cid1, pid1), (cid2, pid2), ... (cidn, pidn)]}] """ #self.logger.debug("lfn %s, block_name %s" % (logical_file_name, block_name)) if not block_name: dbsExceptionHandler('dbsException-invalid-input', \ "Child block_name is required for fileparents/listFileParentsByLumi api", self.logger.exception ) with self.dbi.connection() as conn: sqlresult = self.fileparentbylumi.execute(conn, block_name, logical_file_name) return [{"child_parent_id_list":sqlresult}]
def _libvirt_creds(): ''' Returns the user and group that the disk images should be owned by ''' g_cmd = 'grep ^\\s*group /etc/libvirt/qemu.conf' u_cmd = 'grep ^\\s*user /etc/libvirt/qemu.conf' try: stdout = subprocess.Popen(g_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0] group = salt.utils.stringutils.to_str(stdout).split('"')[1] except IndexError: group = 'root' try: stdout = subprocess.Popen(u_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0] user = salt.utils.stringutils.to_str(stdout).split('"')[1] except IndexError: user = 'root' return {'user': user, 'group': group}
Returns the user and group that the disk images should be owned by
Below is the instruction that describes the task: ### Input: Returns the user and group that the disk images should be owned by ### Response: def _libvirt_creds(): ''' Returns the user and group that the disk images should be owned by ''' g_cmd = 'grep ^\\s*group /etc/libvirt/qemu.conf' u_cmd = 'grep ^\\s*user /etc/libvirt/qemu.conf' try: stdout = subprocess.Popen(g_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0] group = salt.utils.stringutils.to_str(stdout).split('"')[1] except IndexError: group = 'root' try: stdout = subprocess.Popen(u_cmd, shell=True, stdout=subprocess.PIPE).communicate()[0] user = salt.utils.stringutils.to_str(stdout).split('"')[1] except IndexError: user = 'root' return {'user': user, 'group': group}
def parse_comments_for_file(filename): """ Return a list of all parsed comments in a file. Mostly for testing & interactive use. """ return [parse_comment(strip_stars(comment), next_line) for comment, next_line in get_doc_comments(read_file(filename))]
Return a list of all parsed comments in a file. Mostly for testing & interactive use.
Below is the instruction that describes the task: ### Input: Return a list of all parsed comments in a file. Mostly for testing & interactive use. ### Response: def parse_comments_for_file(filename): """ Return a list of all parsed comments in a file. Mostly for testing & interactive use. """ return [parse_comment(strip_stars(comment), next_line) for comment, next_line in get_doc_comments(read_file(filename))]
def get_one_ping_per_client(pings): """ Returns a single ping for each client in the RDD. THIS METHOD IS NOT RECOMMENDED: The ping to be returned is essentially selected at random. It is also expensive as it requires data to be shuffled around. It should be run only after extracting a subset with get_pings_properties. """ if isinstance(pings.first(), binary_type): pings = pings.map(lambda p: json.loads(p.decode('utf-8'))) filtered = pings.filter(lambda p: "clientID" in p or "clientId" in p) if not filtered: raise ValueError("Missing clientID/clientId attribute.") if "clientID" in filtered.first(): client_id = "clientID" # v2 else: client_id = "clientId" # v4 return filtered.map(lambda p: (p[client_id], p)) \ .reduceByKey(lambda p1, p2: p1) \ .map(lambda p: p[1])
Returns a single ping for each client in the RDD. THIS METHOD IS NOT RECOMMENDED: The ping to be returned is essentially selected at random. It is also expensive as it requires data to be shuffled around. It should be run only after extracting a subset with get_pings_properties.
Below is the instruction that describes the task: ### Input: Returns a single ping for each client in the RDD. THIS METHOD IS NOT RECOMMENDED: The ping to be returned is essentially selected at random. It is also expensive as it requires data to be shuffled around. It should be run only after extracting a subset with get_pings_properties. ### Response: def get_one_ping_per_client(pings): """ Returns a single ping for each client in the RDD. THIS METHOD IS NOT RECOMMENDED: The ping to be returned is essentially selected at random. It is also expensive as it requires data to be shuffled around. It should be run only after extracting a subset with get_pings_properties. """ if isinstance(pings.first(), binary_type): pings = pings.map(lambda p: json.loads(p.decode('utf-8'))) filtered = pings.filter(lambda p: "clientID" in p or "clientId" in p) if not filtered: raise ValueError("Missing clientID/clientId attribute.") if "clientID" in filtered.first(): client_id = "clientID" # v2 else: client_id = "clientId" # v4 return filtered.map(lambda p: (p[client_id], p)) \ .reduceByKey(lambda p1, p2: p1) \ .map(lambda p: p[1])