def count_features_type(features):
    """
    Counts three different types of features (float, integer, binary).

    :param features: pandas.DataFrame
        A dataset in a pandas DataFrame
    :returns: a tuple (binary, integer, float)
    """
    counter = {k.name: v for k, v in
               features.columns.to_series().groupby(features.dtypes)}
    binary = 0
    if 'int64' in counter:
        binary = len(set(features.loc[:, (features <= 1).all(axis=0)].columns.values) &
                     set(features.loc[:, (features >= 0).all(axis=0)].columns.values) &
                     set(counter['int64']))
    return (binary,
            len(counter['int64']) - binary if 'int64' in counter else 0,
            len(counter['float64']) if 'float64' in counter else 0)
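A plain-Python sketch of the same counting logic, assuming each column is given as a list of values. This `count_types` helper is hypothetical (the original operates on a pandas DataFrame); it shows the rule: an integer column is "binary" when every value lies in [0, 1].

```python
def count_types(columns):
    """columns: dict mapping column name -> list of values.

    Hypothetical stand-in for the DataFrame logic above; mixed-type
    columns are simply not counted.
    """
    binary = integer = floats = 0
    for vals in columns.values():
        if all(isinstance(v, int) for v in vals):
            if all(0 <= v <= 1 for v in vals):
                binary += 1    # int column with only 0/1 values
            else:
                integer += 1
        elif all(isinstance(v, float) for v in vals):
            floats += 1
    return binary, integer, floats
```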
def format_last_online(last_online):
    """
    Return the upper limit in seconds that a profile may have been online.
    If last_online is an int, return that int. Otherwise if last_online is
    a str, convert the string into an int.

    Returns
    -------
    int
    """
    if isinstance(last_online, str):
        if last_online.lower() in ('day', 'today'):
            last_online_int = 86400  # 3600 * 24
        elif last_online.lower() == 'week':
            last_online_int = 604800  # 3600 * 24 * 7
        elif last_online.lower() == 'month':
            last_online_int = 2678400  # 3600 * 24 * 31
        elif last_online.lower() == 'year':
            last_online_int = 31536000  # 86400 * 365
        elif last_online.lower() == 'decade':
            last_online_int = 315360000  # 86400 * 365 * 10
        else:  # Defaults any other strings to last hour
            last_online_int = 3600
    else:
        last_online_int = last_online
    return last_online_int
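The if/elif ladder above is a fixed keyword-to-seconds mapping, so a table-driven variant is worth sketching (this `format_last_online_table` is a hypothetical rewrite, not from the source):

```python
# Keyword -> seconds table; unknown strings fall back to one hour,
# matching the else branch of the original.
_LAST_ONLINE_SECONDS = {
    'day': 86400, 'today': 86400,
    'week': 604800,
    'month': 2678400,
    'year': 31536000,
    'decade': 315360000,
}

def format_last_online_table(last_online):
    if isinstance(last_online, str):
        return _LAST_ONLINE_SECONDS.get(last_online.lower(), 3600)
    return last_online  # ints pass through unchanged
```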
def MethodName(self, name, separator='_'):
    """Generate a valid method name from name."""
    if name is None:
        return None
    name = Names.__ToCamel(name, separator=separator)
    return Names.CleanName(name)
def put_nowait(self, item, size=None):
    """Equivalent of ``put(item, False)``."""
    # Don't mark this method as a switchpoint, as put() will never switch
    # if block is False.
    return self.put(item, False, size=size)
def log_to_stream(level=None, fmt=None, datefmt=None):
    """
    Send log messages to the console.

    Parameters
    ----------
    level : int, optional
        An optional logging level that will apply only to this stream
        handler.
    fmt : str, optional
        An optional format string that will be used for the log messages.
    datefmt : str, optional
        An optional format string for formatting dates in the log messages.
    """
    handler = logging.StreamHandler()
    if level is not None:
        # Apply the level to this handler only, as the docstring promises;
        # the original discarded the level argument.
        handler.setLevel(level)
    _add_log_handler(handler, fmt=fmt, datefmt=datefmt, propagate=False)
def get(self, id):
    """Get an object by id

    Args:
        id (int): Object id

    Returns:
        Object: Object with specified id
        None: If object not found
    """
    for obj in self.model.db:
        if obj["id"] == id:
            return self._cast_model(obj)
    return None
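The lookup is a linear scan over a list of dicts; stripped of the `self.model` and `_cast_model` plumbing, it can be exercised standalone (this `get_by_id` is an illustrative extraction, not the source method):

```python
def get_by_id(db, id):
    # Linear scan: first dict whose "id" key matches wins; O(n) per lookup.
    for obj in db:
        if obj["id"] == id:
            return obj
    return None
```

For frequent lookups, indexing the rows by id in a dict would turn this into O(1), at the cost of keeping the index in sync.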
def set_state(self, light_id, **kwargs):
    '''
    Sets state on the light, can be used like this:

    .. code-block:: python

        set_state(1, xy=[1, 2])
    '''
    light = self.get_light(light_id)
    url = '/api/%s/lights/%s/state' % (self.username, light.light_id)
    response = self.make_request('PUT', url, kwargs)
    setting_count = len(kwargs)
    success_count = 0
    for data in response:
        if 'success' in data:
            success_count += 1
    return success_count == setting_count
def get_values(feature, properties):
    """Returns all values of the given feature specified by the given
    property set.
    """
    if feature[0] != '<':
        feature = '<' + feature + '>'
    result = []
    for p in properties:
        if get_grist(p) == feature:
            result.append(replace_grist(p, ''))
    return result
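A self-contained sketch of the surrounding "grist" convention this function assumes (Boost.Build-style properties like `<toolset>gcc`, where the angle-bracketed prefix is the grist). The `get_grist` and `replace_grist` helpers below are minimal assumptions about those functions, not their real implementations:

```python
def get_grist(p):
    # '<toolset>gcc' -> '<toolset>'; ungristed strings yield ''.
    return p[:p.find('>') + 1] if p.startswith('<') else ''

def replace_grist(p, grist):
    # Swap the grist prefix for a new one; '' strips it entirely.
    return grist + p[p.find('>') + 1:] if p.startswith('<') else grist + p

def get_values(feature, properties):
    if feature[0] != '<':
        feature = '<' + feature + '>'
    return [replace_grist(p, '') for p in properties if get_grist(p) == feature]
```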
def reorient_image(input_image, output_image):
    """
    Change the orientation of the Image data in order to be in LAS space.
    x will represent the coronal plane, y the sagittal and z the axial plane.
    x increases from Right (R) to Left (L), y from Posterior (P) to
    Anterior (A) and z from Inferior (I) to Superior (S).

    :returns: The output image in nibabel form
    :param output_image: filepath to the nibabel image
    :param input_image: filepath to the nibabel image
    """
    # Use the imageVolume module to find which coordinate corresponds to
    # each plane and get the image data in RAS orientation
    image = load(input_image)

    # 4d images have a different conversion than 3d
    if image.nifti_data.squeeze().ndim == 4:
        new_image = _reorient_4d(image)
    elif image.nifti_data.squeeze().ndim == 3:
        new_image = _reorient_3d(image)
    else:
        raise Exception('Only 3d and 4d images are supported')

    affine = image.nifti.affine
    # Create a new affine header by changing the order of the columns of the
    # input image header; the last column with the origin depends on the
    # origin of the original image, the size and the direction of x, y, z.
    # Flipping of the data may be needed based on x_inverted, y_inverted,
    # z_inverted.
    new_affine = numpy.eye(4)
    new_affine[:, 0] = affine[:, image.sagittal_orientation.normal_component]
    new_affine[:, 1] = affine[:, image.coronal_orientation.normal_component]
    new_affine[:, 2] = affine[:, image.axial_orientation.normal_component]
    point = [0, 0, 0, 1]
    # If the orientation of coordinates is inverted, then the origin of the
    # "new" image would correspond to the last voxel of the original image.
    # First find which point is the origin in image coordinates, then
    # transform it to world coordinates.
    if not image.axial_orientation.x_inverted:
        new_affine[:, 0] = -new_affine[:, 0]
        point[image.sagittal_orientation.normal_component] = image.dimensions[
            image.sagittal_orientation.normal_component] - 1
    if image.axial_orientation.y_inverted:
        new_affine[:, 1] = -new_affine[:, 1]
        point[image.coronal_orientation.normal_component] = image.dimensions[
            image.coronal_orientation.normal_component] - 1
    if image.coronal_orientation.y_inverted:
        new_affine[:, 2] = -new_affine[:, 2]
        point[image.axial_orientation.normal_component] = image.dimensions[
            image.axial_orientation.normal_component] - 1
    new_affine[:, 3] = numpy.dot(affine, point)
    nibabel.nifti1.Nifti1Image(new_image, new_affine).to_filename(output_image)
def get_crystal_field_spin(self, coordination: str = "oct",
                           spin_config: str = "high"):
    """
    Calculate the crystal field spin based on coordination and spin
    configuration. Only works for transition metal species.

    Args:
        coordination (str): Only oct and tet are supported at the moment.
        spin_config (str): Supported keywords are "high" or "low".

    Returns:
        Crystal field spin in Bohr magneton.

    Raises:
        AttributeError if species is not a valid transition metal or has
            an invalid oxidation state.
        ValueError if invalid coordination or spin_config.
    """
    if coordination not in ("oct", "tet") or \
            spin_config not in ("high", "low"):
        raise ValueError("Invalid coordination or spin config.")
    elec = self.full_electronic_structure
    if len(elec) < 4 or elec[-1][1] != "s" or elec[-2][1] != "d":
        raise AttributeError(
            "Invalid element {} for crystal field calculation.".format(
                self.symbol))
    nelectrons = elec[-1][2] + elec[-2][2] - self.oxi_state
    if nelectrons < 0 or nelectrons > 10:
        raise AttributeError(
            "Invalid oxidation state {} for element {}".format(
                self.oxi_state, self.symbol))
    if spin_config == "high":
        return nelectrons if nelectrons <= 5 else 10 - nelectrons
    elif spin_config == "low":
        if coordination == "oct":
            if nelectrons <= 3:
                return nelectrons
            elif nelectrons <= 6:
                return 6 - nelectrons
            elif nelectrons <= 8:
                return nelectrons - 6
            else:
                return 10 - nelectrons
        elif coordination == "tet":
            if nelectrons <= 2:
                return nelectrons
            elif nelectrons <= 4:
                return 4 - nelectrons
            elif nelectrons <= 7:
                return nelectrons - 4
            else:
                return 10 - nelectrons
def create_schema(self, schema):
    """Create specified schema if it does not already exist"""
    if schema not in self.schemas:
        sql = "CREATE SCHEMA " + schema
        self.execute(sql)
def decompress(data, compression, width, height, depth, version=1):
    """Decompress raw data.

    :param data: compressed data bytes.
    :param compression: compression type,
        see :py:class:`~psd_tools.constants.Compression`.
    :param width: width.
    :param height: height.
    :param depth: bit depth of the pixel.
    :param version: psd file version.
    :return: decompressed data bytes.
    """
    length = width * height * depth // 8
    if compression == Compression.RAW:
        result = data[:length]
    elif compression == Compression.PACK_BITS:
        result = decode_packbits(data, height, version)
    elif compression == Compression.ZIP:
        result = zlib.decompress(data)
    else:
        # Remaining case: ZIP with prediction -- inflate, then undo the
        # per-row delta encoding.
        decompressed = zlib.decompress(data)
        result = decode_prediction(decompressed, width, height, depth)
    assert len(result) == length, 'len=%d, expected=%d' % (len(result), length)
    return result
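Whatever the branch, the output must be exactly `width * height * depth // 8` bytes (depth is in bits per pixel). A toy round-trip of the plain-ZIP branch, with made-up dimensions, illustrates the size invariant:

```python
import zlib

def expected_length(width, height, depth):
    # Same size the decompressor asserts: pixels * bits-per-pixel / 8.
    return width * height * depth // 8

# 4x2 image at 8 bits per pixel -> 8 raw bytes; zlib must round-trip them.
raw = bytes(range(expected_length(4, 2, 8)))
assert zlib.decompress(zlib.compress(raw)) == raw
```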
def _set_summary(self, v, load=False):
    """
    Setter method for summary, mapped from YANG variable
    /mpls_state/summary (container).
    If this variable is read-only (config: false) in the source YANG file,
    then _set_summary is considered as a private method. Backends looking
    to populate this variable should do so via calling
    thisObj._set_summary() directly.

    YANG Description: MPLS Summary
    """
    if hasattr(v, "_utype"):
        v = v._utype(v)
    try:
        t = YANGDynClass(
            v, base=summary.summary, is_container='container',
            presence=False, yang_name="summary", rest_name="summary",
            parent=self, path_helper=self._path_helper,
            extmethods=self._extmethods, register_paths=True,
            extensions={u'tailf-common': {u'callpoint': u'mpls-summary',
                                          u'cli-suppress-show-path': None}},
            namespace='urn:brocade.com:mgmt:brocade-mpls-operational',
            defining_module='brocade-mpls-operational',
            yang_type='container', is_config=False)
    except (TypeError, ValueError):
        raise ValueError({
            'error-string': """summary must be of a type compatible with container""",
            'defined-type': "container",
            'generated-type': """YANGDynClass(base=summary.summary, is_container='container', presence=False, yang_name="summary", rest_name="summary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-summary', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
        })
    self.__summary = t
    if hasattr(self, '_set'):
        self._set()
def authenticationResponse():
    """AUTHENTICATION RESPONSE Section 9.2.3"""
    a = TpPd(pd=0x5)
    b = MessageType(mesType=0x14)  # 00010100
    c = AuthenticationParameterSRES()
    packet = a / b / c
    return packet
def get_size(vm_):
    '''
    Return the VM's size object
    '''
    vm_size = config.get_cloud_config_value('size', vm_, __opts__)
    sizes = avail_sizes()
    if not vm_size:
        return sizes['Small Instance']
    for size in sizes:
        combinations = (six.text_type(sizes[size]['id']), six.text_type(size))
        if vm_size and six.text_type(vm_size) in combinations:
            return sizes[size]
    raise SaltCloudNotFound(
        'The specified size, \'{0}\', could not be found.'.format(vm_size)
    )
def generate_subplots(self):
    """
    Generates the subplots for the number of given models.
    """
    _, axes = plt.subplots(len(self.models), sharex=True, sharey=True)
    return axes
def _minimal_export_traces(self, outdir=None, analytes=None, samples=None,
                           subset='All_Analyses'):
    """
    Used for exporting minimal dataset. DON'T USE.
    """
    if analytes is None:
        analytes = self.analytes
    elif isinstance(analytes, str):
        analytes = [analytes]

    if samples is not None:
        subset = self.make_subset(samples)
    samples = self._get_samples(subset)

    focus_stage = 'rawdata'
    if not os.path.isdir(outdir):
        os.mkdir(outdir)

    for s in samples:
        d = self.data[s].data[focus_stage]
        out = Bunch()
        for a in analytes:
            out[a] = d[a]
        out = pd.DataFrame(out, index=self.data[s].Time)
        out.index.name = 'Time'
        d = dateutil.parser.parse(self.data[s].meta['date'])
        header = ['# Minimal Reproduction Dataset Exported from LATOOLS on %s' %
                  (time.strftime('%Y:%m:%d %H:%M:%S')),
                  "# Analysis described in '../analysis.lalog'",
                  '# Run latools.reproduce to import analysis.',
                  '#',
                  '# Sample: %s' % (s),
                  '# Analysis Time: ' + d.strftime('%Y-%m-%d %H:%M:%S')]
        header = '\n'.join(header) + '\n'
        csv = out.to_csv()
        with open('%s/%s.csv' % (outdir, s), 'w') as f:
            f.write(header)
            f.write(csv)
    return
def trim_display_field(self, value, max_length):
    """Return a value for display; if longer than max length, use ellipsis."""
    if not value:
        return ''
    if len(value) > max_length:
        return value[:max_length - 3] + '...'
    return value
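The truncation reserves three characters for the ellipsis, so the result never exceeds `max_length`. A standalone copy (with `self` dropped) makes that easy to check:

```python
def trim_display_field(value, max_length):
    # Empty or falsy values come back as ''; long values keep their first
    # max_length - 3 characters plus '...'.
    if not value:
        return ''
    if len(value) > max_length:
        return value[:max_length - 3] + '...'
    return value
```

Note that for `max_length < 3` the slice bound goes negative, so the output may not shrink as expected; callers should use sensible limits.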
def rule_match(component, cmd): '''see if one rule component matches''' if component == cmd: return True expanded = rule_expand(component, cmd) if cmd in expanded: return True return False
see if one rule component matches
Below is the instruction that describes the task:
### Input:
see if one rule component matches
### Response:
def rule_match(component, cmd):
    '''see if one rule component matches'''
    if component == cmd:
        return True
    expanded = rule_expand(component, cmd)
    if cmd in expanded:
        return True
    return False
def write(self): """Write sequences predicted to be Rabs as a fasta file. :return: Number of written sequences :rtype: int """ rabs = [x.seqrecord for x in self.gproteins.values() if x.is_rab()] return SeqIO.write(rabs, self.tmpfname + '.phase2', 'fasta')
Write sequences predicted to be Rabs as a fasta file. :return: Number of written sequences :rtype: int
Below is the instruction that describes the task:
### Input:
Write sequences predicted to be Rabs as a fasta file.

        :return: Number of written sequences
        :rtype: int
### Response:
def write(self):
        """Write sequences predicted to be Rabs as a fasta file.

        :return: Number of written sequences
        :rtype: int
        """
        rabs = [x.seqrecord for x in self.gproteins.values() if x.is_rab()]
        return SeqIO.write(rabs, self.tmpfname + '.phase2', 'fasta')
def basic_transform(val): '''A basic transform for strings and integers.''' if isinstance(val, int): return struct.pack('>i', val) else: return safe_lower_utf8(val)
A basic transform for strings and integers.
Below is the instruction that describes the task:
### Input:
A basic transform for strings and integers.
### Response:
def basic_transform(val):
    '''A basic transform for strings and integers.'''
    if isinstance(val, int):
        return struct.pack('>i', val)
    else:
        return safe_lower_utf8(val)
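The integer branch above is a big-endian signed 32-bit pack. `safe_lower_utf8` is defined elsewhere in that codebase; the stand-in below assumes it lowercases and UTF-8-encodes its argument — that matches the name, but it is an assumption.

```python
import struct

def basic_transform(val):
    if isinstance(val, int):
        # Big-endian signed 32-bit encoding, as in the original.
        return struct.pack('>i', val)
    # Stand-in for safe_lower_utf8 (assumed behaviour: lowercase + UTF-8).
    return val.lower().encode('utf-8')

print(basic_transform(1))      # b'\x00\x00\x00\x01'
print(basic_transform('ABC'))  # b'abc'
```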
def Clone(self, tools=[], toolpath=None, parse_flags = None, **kw):
        """Return a copy of a construction Environment.  The
        copy is like a Python "deep copy"--that is, independent copies are
        made recursively of each object--except that a reference is copied
        when an object is not deep-copyable (like a function).  There are no
        references to any mutable objects in the original Environment.
        """

        builders = self._dict.get('BUILDERS', {})

        clone = copy.copy(self)
        # BUILDERS is not safe to do a simple copy
        clone._dict = semi_deepcopy_dict(self._dict, ['BUILDERS'])
        clone._dict['BUILDERS'] = BuilderDict(builders, clone)

        # Check the methods added via AddMethod() and re-bind them to
        # the cloned environment.  Only do this if the attribute hasn't
        # been overwritten by the user explicitly and still points to
        # the added method.
        clone.added_methods = []
        for mw in self.added_methods:
            if mw == getattr(self, mw.name):
                clone.added_methods.append(mw.clone(clone))

        clone._memo = {}

        # Apply passed-in variables before the tools
        # so the tools can use the new variables
        kw = copy_non_reserved_keywords(kw)
        new = {}
        for key, value in kw.items():
            new[key] = SCons.Subst.scons_subst_once(value, self, key)
        clone.Replace(**new)

        apply_tools(clone, tools, toolpath)

        # apply them again in case the tools overwrote them
        clone.Replace(**new)

        # Finally, apply any flags to be merged in
        if parse_flags: clone.MergeFlags(parse_flags)

        if SCons.Debug.track_instances: logInstanceCreation(self, 'Environment.EnvironmentClone')
        return clone
Return a copy of a construction Environment.  The
        copy is like a Python "deep copy"--that is, independent copies are
        made recursively of each object--except that a reference is copied
        when an object is not deep-copyable (like a function).  There are no
        references to any mutable objects in the original Environment.
Below is the instruction that describes the task:
### Input:
Return a copy of a construction Environment.  The
        copy is like a Python "deep copy"--that is, independent copies are
        made recursively of each object--except that a reference is copied
        when an object is not deep-copyable (like a function).  There are no
        references to any mutable objects in the original Environment.
### Response:
def Clone(self, tools=[], toolpath=None, parse_flags = None, **kw):
        """Return a copy of a construction Environment.  The
        copy is like a Python "deep copy"--that is, independent copies are
        made recursively of each object--except that a reference is copied
        when an object is not deep-copyable (like a function).  There are no
        references to any mutable objects in the original Environment.
        """

        builders = self._dict.get('BUILDERS', {})

        clone = copy.copy(self)
        # BUILDERS is not safe to do a simple copy
        clone._dict = semi_deepcopy_dict(self._dict, ['BUILDERS'])
        clone._dict['BUILDERS'] = BuilderDict(builders, clone)

        # Check the methods added via AddMethod() and re-bind them to
        # the cloned environment.  Only do this if the attribute hasn't
        # been overwritten by the user explicitly and still points to
        # the added method.
        clone.added_methods = []
        for mw in self.added_methods:
            if mw == getattr(self, mw.name):
                clone.added_methods.append(mw.clone(clone))

        clone._memo = {}

        # Apply passed-in variables before the tools
        # so the tools can use the new variables
        kw = copy_non_reserved_keywords(kw)
        new = {}
        for key, value in kw.items():
            new[key] = SCons.Subst.scons_subst_once(value, self, key)
        clone.Replace(**new)

        apply_tools(clone, tools, toolpath)

        # apply them again in case the tools overwrote them
        clone.Replace(**new)

        # Finally, apply any flags to be merged in
        if parse_flags: clone.MergeFlags(parse_flags)

        if SCons.Debug.track_instances: logInstanceCreation(self, 'Environment.EnvironmentClone')
        return clone
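The key trick in `Clone` is `semi_deepcopy_dict`, which deep-copies every entry of the environment dict except the keys it is told to skip (`BUILDERS`, which is rebuilt afterwards). A minimal sketch of that helper, under the assumption that a skipped key is copied by reference:

```python
import copy

def semi_deepcopy_dict(d, skip_keys=()):
    # Deep-copy each value except those under skip_keys, which stay shared.
    return {k: (v if k in skip_keys else copy.deepcopy(v)) for k, v in d.items()}

env = {'CFLAGS': ['-O2'], 'BUILDERS': {'Program': object()}}
clone = semi_deepcopy_dict(env, skip_keys=['BUILDERS'])
clone['CFLAGS'].append('-g')

print(env['CFLAGS'])                         # mutations of the clone don't leak back
print(clone['BUILDERS'] is env['BUILDERS'])  # the skipped key is the same object
```

This is why `Clone` immediately replaces the shared `BUILDERS` entry with a fresh `BuilderDict` bound to the clone.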
def t_PARBREAK(self, token): ur'\n{2,}' token.lexer.lineno += len(token.value) return token
ur'\n{2,}'
Below is the instruction that describes the task:
### Input:
ur'\n{2,}'
### Response:
def t_PARBREAK(self, token):
        ur'\n{2,}'
        token.lexer.lineno += len(token.value)
        return token
def to_gsea_path(graph: BELGraph, path: str) -> None: """Write the genes/gene products to a GRP file at the given path for use with GSEA gene set enrichment analysis.""" with open(path, 'w') as file: to_gsea(graph, file)
Write the genes/gene products to a GRP file at the given path for use with GSEA gene set enrichment analysis.
Below is the instruction that describes the task:
### Input:
Write the genes/gene products to a GRP file at the given path for use with GSEA gene set enrichment analysis.
### Response:
def to_gsea_path(graph: BELGraph, path: str) -> None:
    """Write the genes/gene products to a GRP file at the given path for use with GSEA gene set enrichment analysis."""
    with open(path, 'w') as file:
        to_gsea(graph, file)
def parse_xml_to_obj(self, xml_file, check_version=True, check_root=True, encoding=None): """Creates a STIX binding object from the supplied xml file. Args: xml_file: A filename/path or a file-like object representing a STIX instance document check_version: Inspect the version before parsing. check_root: Inspect the root element before parsing. encoding: The character encoding of the input `xml_file`. Raises: .UnknownVersionError: If `check_version` is ``True`` and `xml_file` does not contain STIX version information. .UnsupportedVersionError: If `check_version` is ``False`` and `xml_file` contains an unsupported STIX version. .UnsupportedRootElement: If `check_root` is ``True`` and `xml_file` contains an invalid root element. """ root = get_etree_root(xml_file, encoding=encoding) if check_root: self._check_root_tag(root) if check_version: self._check_version(root) entity_class = self.get_entity_class(root.tag) entity_obj = entity_class._binding_class.factory() entity_obj.build(root) return entity_obj
Creates a STIX binding object from the supplied xml file. Args: xml_file: A filename/path or a file-like object representing a STIX instance document check_version: Inspect the version before parsing. check_root: Inspect the root element before parsing. encoding: The character encoding of the input `xml_file`. Raises: .UnknownVersionError: If `check_version` is ``True`` and `xml_file` does not contain STIX version information. .UnsupportedVersionError: If `check_version` is ``False`` and `xml_file` contains an unsupported STIX version. .UnsupportedRootElement: If `check_root` is ``True`` and `xml_file` contains an invalid root element.
Below is the instruction that describes the task:
### Input:
Creates a STIX binding object from the supplied xml file.

        Args:
            xml_file: A filename/path or a file-like object representing a STIX
                instance document
            check_version: Inspect the version before parsing.
            check_root: Inspect the root element before parsing.
            encoding: The character encoding of the input `xml_file`.

        Raises:
            .UnknownVersionError: If `check_version` is ``True`` and
                `xml_file` does not contain STIX version information.
            .UnsupportedVersionError: If `check_version` is ``False`` and
                `xml_file` contains an unsupported STIX version.
            .UnsupportedRootElement: If `check_root` is ``True`` and
                `xml_file` contains an invalid root element.
### Response:
def parse_xml_to_obj(self, xml_file, check_version=True, check_root=True, encoding=None):
        """Creates a STIX binding object from the supplied xml file.

        Args:
            xml_file: A filename/path or a file-like object representing a STIX
                instance document
            check_version: Inspect the version before parsing.
            check_root: Inspect the root element before parsing.
            encoding: The character encoding of the input `xml_file`.

        Raises:
            .UnknownVersionError: If `check_version` is ``True`` and
                `xml_file` does not contain STIX version information.
            .UnsupportedVersionError: If `check_version` is ``False`` and
                `xml_file` contains an unsupported STIX version.
            .UnsupportedRootElement: If `check_root` is ``True`` and
                `xml_file` contains an invalid root element.
        """
        root = get_etree_root(xml_file, encoding=encoding)

        if check_root:
            self._check_root_tag(root)

        if check_version:
            self._check_version(root)

        entity_class = self.get_entity_class(root.tag)
        entity_obj = entity_class._binding_class.factory()
        entity_obj.build(root)

        return entity_obj
def getParamLabels(self): """ Parameters: ---------------------------------------------------------------------- retval: a dictionary of model parameter labels. For each entry the key is the name of the parameter and the value is the value chosen for it. """ params = self.__unwrapParams() # Hypersearch v2 stores the flattened parameter settings in "particleState" if "particleState" in params: retval = dict() queue = [(pair, retval) for pair in params["particleState"]["varStates"].iteritems()] while len(queue) > 0: pair, output = queue.pop() k, v = pair if ("position" in v and "bestPosition" in v and "velocity" in v): output[k] = v["position"] else: if k not in output: output[k] = dict() queue.extend((pair, output[k]) for pair in v.iteritems()) return retval
Parameters: ---------------------------------------------------------------------- retval: a dictionary of model parameter labels. For each entry the key is the name of the parameter and the value is the value chosen for it.
Below is the instruction that describes the task:
### Input:
Parameters:
    ----------------------------------------------------------------------
    retval:   a dictionary of model parameter labels. For each entry
              the key is the name of the parameter and the value is the value
              chosen for it.
### Response:
def getParamLabels(self):
    """
    Parameters:
    ----------------------------------------------------------------------
    retval:   a dictionary of model parameter labels. For each entry
              the key is the name of the parameter and the value is the value
              chosen for it.
    """
    params = self.__unwrapParams()

    # Hypersearch v2 stores the flattened parameter settings in "particleState"
    if "particleState" in params:
      retval = dict()
      queue = [(pair, retval) for pair in
               params["particleState"]["varStates"].iteritems()]
      while len(queue) > 0:
        pair, output = queue.pop()
        k, v = pair
        if ("position" in v and "bestPosition" in v and "velocity" in v):
          output[k] = v["position"]
        else:
          if k not in output:
            output[k] = dict()
          queue.extend((pair, output[k]) for pair in v.iteritems())

      return retval
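The traversal in `getParamLabels` can be reproduced on its own: it walks the nested `varStates` mapping with an explicit queue and keeps only the `position` of leaves that carry full particle-swarm state. A standalone Python 3 sketch (the sample data is made up):

```python
def flatten_var_states(var_states):
    result = {}
    queue = [(pair, result) for pair in var_states.items()]
    while queue:
        (k, v), output = queue.pop()
        if 'position' in v and 'bestPosition' in v and 'velocity' in v:
            # A particle-state leaf: keep only its current position.
            output[k] = v['position']
        else:
            # An intermediate mapping: push its children onto the queue.
            output.setdefault(k, {})
            queue.extend((pair, output[k]) for pair in v.items())
    return result

states = {'alpha': {'position': 0.5, 'bestPosition': 0.4, 'velocity': 0.1},
          'nested': {'beta': {'position': 2, 'bestPosition': 2, 'velocity': 0}}}
print(flatten_var_states(states) == {'alpha': 0.5, 'nested': {'beta': 2}})  # True
```

The explicit queue avoids recursion while still rebuilding the nested shape of the input in the output dict.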
def add_key(self, key_id, key): """ :: POST /:login/keys :param key_id: label for the new key :type key_id: :py:class:`basestring` :param key: the full SSH RSA public key :type key: :py:class:`str` Uploads a public key to be added to the account's credentials. """ data = {'name': str(key_id), 'key': str(key)} j, _ = self.request('POST', '/keys', data=data) return j
:: POST /:login/keys :param key_id: label for the new key :type key_id: :py:class:`basestring` :param key: the full SSH RSA public key :type key: :py:class:`str` Uploads a public key to be added to the account's credentials.
Below is the instruction that describes the task:
### Input:
::

            POST /:login/keys

        :param key_id: label for the new key
        :type key_id: :py:class:`basestring`

        :param key: the full SSH RSA public key
        :type key: :py:class:`str`

        Uploads a public key to be added to the account's credentials.
### Response:
def add_key(self, key_id, key):
        """
        ::

            POST /:login/keys

        :param key_id: label for the new key
        :type key_id: :py:class:`basestring`

        :param key: the full SSH RSA public key
        :type key: :py:class:`str`

        Uploads a public key to be added to the account's credentials.
        """
        data = {'name': str(key_id), 'key': str(key)}
        j, _ = self.request('POST', '/keys', data=data)
        return j
def name_to_hex(name, spec=u'css3'): """ Convert a color name to a normalized hexadecimal color value. The optional keyword argument ``spec`` determines which specification's list of color names will be used; valid values are ``html4``, ``css2``, ``css21`` and ``css3``, and the default is ``css3``. When no color of that name exists in the given specification, ``ValueError`` is raised. """ if spec not in SUPPORTED_SPECIFICATIONS: raise ValueError(SPECIFICATION_ERROR_TEMPLATE.format(spec=spec)) normalized = name.lower() hex_value = {u'css2': CSS2_NAMES_TO_HEX, u'css21': CSS21_NAMES_TO_HEX, u'css3': CSS3_NAMES_TO_HEX, u'html4': HTML4_NAMES_TO_HEX}[spec].get(normalized) if hex_value is None: raise ValueError( u"'{name}' is not defined as a named color in {spec}".format( name=name, spec=spec ) ) return hex_value
Convert a color name to a normalized hexadecimal color value. The optional keyword argument ``spec`` determines which specification's list of color names will be used; valid values are ``html4``, ``css2``, ``css21`` and ``css3``, and the default is ``css3``. When no color of that name exists in the given specification, ``ValueError`` is raised.
Below is the instruction that describes the task:
### Input:
Convert a color name to a normalized hexadecimal color value.

    The optional keyword argument ``spec`` determines which
    specification's list of color names will be used; valid values are
    ``html4``, ``css2``, ``css21`` and ``css3``, and the default is
    ``css3``.

    When no color of that name exists in the given specification,
    ``ValueError`` is raised.
### Response:
def name_to_hex(name, spec=u'css3'):
    """
    Convert a color name to a normalized hexadecimal color value.

    The optional keyword argument ``spec`` determines which
    specification's list of color names will be used; valid values are
    ``html4``, ``css2``, ``css21`` and ``css3``, and the default is
    ``css3``.

    When no color of that name exists in the given specification,
    ``ValueError`` is raised.

    """
    if spec not in SUPPORTED_SPECIFICATIONS:
        raise ValueError(SPECIFICATION_ERROR_TEMPLATE.format(spec=spec))
    normalized = name.lower()
    hex_value = {u'css2': CSS2_NAMES_TO_HEX,
                 u'css21': CSS21_NAMES_TO_HEX,
                 u'css3': CSS3_NAMES_TO_HEX,
                 u'html4': HTML4_NAMES_TO_HEX}[spec].get(normalized)
    if hex_value is None:
        raise ValueError(
            u"'{name}' is not defined as a named color in {spec}".format(
                name=name, spec=spec
            )
        )
    return hex_value
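The lookup logic above is separable from the full color tables. The sketch below replaces the real `CSS3_NAMES_TO_HEX`/`HTML4_NAMES_TO_HEX` mappings with tiny stand-ins (an assumption for illustration — the actual module ships the complete HTML4/CSS2/CSS2.1/CSS3 tables) and keeps the case-insensitive lookup and the two `ValueError` paths:

```python
SUPPORTED_SPECIFICATIONS = ('html4', 'css2', 'css21', 'css3')
# Tiny stand-ins for the real name tables.
CSS3_NAMES_TO_HEX = {'white': '#ffffff', 'rebeccapurple': '#663399'}
HTML4_NAMES_TO_HEX = {'white': '#ffffff'}

def name_to_hex(name, spec='css3'):
    if spec not in SUPPORTED_SPECIFICATIONS:
        raise ValueError('unsupported specification: %r' % spec)
    tables = {'css3': CSS3_NAMES_TO_HEX, 'html4': HTML4_NAMES_TO_HEX}
    hex_value = tables.get(spec, {}).get(name.lower())
    if hex_value is None:
        raise ValueError("'%s' is not defined as a named color in %s" % (name, spec))
    return hex_value

print(name_to_hex('White'))                  # lookup is case-insensitive
print(name_to_hex('rebeccapurple', 'css3'))
```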
def lint(filename, lines, config): """Lints a file. Args: filename: string: filename to lint. lines: list[int]|None: list of lines that we want to capture. If None, then all lines will be captured. config: dict[string: linter]: mapping from extension to a linter function. Returns: dict: if there were errors running the command then the field 'error' will have the reasons in a list. if the lint process was skipped, then a field 'skipped' will be set with the reasons. Otherwise, the field 'comments' will have the messages. """ _, ext = os.path.splitext(filename) if ext in config: output = collections.defaultdict(list) for linter in config[ext]: linter_output = linter(filename, lines) for category, values in linter_output[filename].items(): output[category].extend(values) if 'comments' in output: output['comments'] = sorted( output['comments'], key=lambda x: (x.get('line', -1), x.get('column', -1))) return {filename: dict(output)} else: return { filename: { 'skipped': [ 'no linter is defined or enabled for files' ' with extension "%s"' % ext ] } }
Lints a file. Args: filename: string: filename to lint. lines: list[int]|None: list of lines that we want to capture. If None, then all lines will be captured. config: dict[string: linter]: mapping from extension to a linter function. Returns: dict: if there were errors running the command then the field 'error' will have the reasons in a list. if the lint process was skipped, then a field 'skipped' will be set with the reasons. Otherwise, the field 'comments' will have the messages.
Below is the instruction that describes the task:
### Input:
Lints a file.

    Args:
      filename: string: filename to lint.
      lines: list[int]|None: list of lines that we want to capture. If None,
        then all lines will be captured.
      config: dict[string: linter]: mapping from extension to a linter
        function.

    Returns: dict: if there were errors running the command then the field
      'error' will have the reasons in a list. if the lint process was skipped,
      then a field 'skipped' will be set with the reasons. Otherwise, the field
      'comments' will have the messages.
### Response:
def lint(filename, lines, config):
    """Lints a file.

    Args:
      filename: string: filename to lint.
      lines: list[int]|None: list of lines that we want to capture. If None,
        then all lines will be captured.
      config: dict[string: linter]: mapping from extension to a linter
        function.

    Returns: dict: if there were errors running the command then the field
      'error' will have the reasons in a list. if the lint process was skipped,
      then a field 'skipped' will be set with the reasons. Otherwise, the field
      'comments' will have the messages.
    """
    _, ext = os.path.splitext(filename)
    if ext in config:
        output = collections.defaultdict(list)
        for linter in config[ext]:
            linter_output = linter(filename, lines)
            for category, values in linter_output[filename].items():
                output[category].extend(values)
        if 'comments' in output:
            output['comments'] = sorted(
                output['comments'],
                key=lambda x: (x.get('line', -1), x.get('column', -1)))
        return {filename: dict(output)}
    else:
        return {
            filename: {
                'skipped': [
                    'no linter is defined or enabled for files'
                    ' with extension "%s"' % ext
                ]
            }
        }
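The merge-and-sort step in the middle of `lint` can be isolated: comments from several linters are pooled per category, then ordered by `(line, column)`, with `-1` as the default so positionless comments sort first. A sketch with made-up linter output:

```python
import collections

def merge_linter_output(outputs):
    merged = collections.defaultdict(list)
    for out in outputs:
        for category, values in out.items():
            merged[category].extend(values)
    if 'comments' in merged:
        # Positionless comments default to (-1, -1) and therefore sort first.
        merged['comments'].sort(
            key=lambda x: (x.get('line', -1), x.get('column', -1)))
    return dict(merged)

a = {'comments': [{'line': 3, 'column': 1, 'message': 'late'}]}
b = {'comments': [{'line': 1, 'column': 2, 'message': 'early'}]}
merged = merge_linter_output([a, b])
print([c['message'] for c in merged['comments']])  # ['early', 'late']
```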
def create_service(self, name, flavor_id, domains, origins, restrictions=None, caching=None): """Create a new CDN service. Arguments: name: The name of the service. flavor_id: The ID of the flavor to use for this service. domains: A list of dictionaries, each of which has a required key "domain" and optional key "protocol" (the default protocol is http). origins: A list of dictionaries, each of which has a required key "origin" which is the URL or IP address to pull origin content from. Optional keys include "port" to use a port other than the default of 80, and "ssl" to enable SSL, which is disabled by default. caching: An optional """ return self._services_manager.create(name, flavor_id, domains, origins, restrictions, caching)
Create a new CDN service. Arguments: name: The name of the service. flavor_id: The ID of the flavor to use for this service. domains: A list of dictionaries, each of which has a required key "domain" and optional key "protocol" (the default protocol is http). origins: A list of dictionaries, each of which has a required key "origin" which is the URL or IP address to pull origin content from. Optional keys include "port" to use a port other than the default of 80, and "ssl" to enable SSL, which is disabled by default. caching: An optional
Below is the instruction that describes the task:
### Input:
Create a new CDN service. Arguments:

            name: The name of the service.
            flavor_id: The ID of the flavor to use for this service.
            domains: A list of dictionaries, each of which has a required
                key "domain" and optional key "protocol" (the default
                protocol is http).
            origins: A list of dictionaries, each of which has a required
                key "origin" which is the URL or IP address to pull origin
                content from. Optional keys include "port" to use a port
                other than the default of 80, and "ssl" to enable SSL,
                which is disabled by default.
            caching: An optional
### Response:
def create_service(self, name, flavor_id, domains, origins,
            restrictions=None, caching=None):
        """Create a new CDN service. Arguments:

            name: The name of the service.
            flavor_id: The ID of the flavor to use for this service.
            domains: A list of dictionaries, each of which has a required
                key "domain" and optional key "protocol" (the default
                protocol is http).
            origins: A list of dictionaries, each of which has a required
                key "origin" which is the URL or IP address to pull origin
                content from. Optional keys include "port" to use a port
                other than the default of 80, and "ssl" to enable SSL,
                which is disabled by default.
            caching: An optional
        """
        return self._services_manager.create(name, flavor_id, domains,
                origins, restrictions, caching)
def decode_offset_fetch_response(cls, data): """ Decode bytes to an OffsetFetchResponse :param bytes data: bytes to decode """ ((correlation_id,), cur) = relative_unpack('>i', data, 0) ((num_topics,), cur) = relative_unpack('>i', data, cur) for _i in range(num_topics): (topic, cur) = read_short_ascii(data, cur) ((num_partitions,), cur) = relative_unpack('>i', data, cur) for _i in range(num_partitions): ((partition, offset), cur) = relative_unpack('>iq', data, cur) (metadata, cur) = read_short_bytes(data, cur) ((error,), cur) = relative_unpack('>h', data, cur) yield OffsetFetchResponse(topic, partition, offset, metadata, error)
Decode bytes to an OffsetFetchResponse :param bytes data: bytes to decode
Below is the instruction that describes the task:
### Input:
Decode bytes to an OffsetFetchResponse

        :param bytes data: bytes to decode
### Response:
def decode_offset_fetch_response(cls, data):
        """
        Decode bytes to an OffsetFetchResponse

        :param bytes data: bytes to decode
        """
        ((correlation_id,), cur) = relative_unpack('>i', data, 0)
        ((num_topics,), cur) = relative_unpack('>i', data, cur)

        for _i in range(num_topics):
            (topic, cur) = read_short_ascii(data, cur)
            ((num_partitions,), cur) = relative_unpack('>i', data, cur)

            for _i in range(num_partitions):
                ((partition, offset), cur) = relative_unpack('>iq', data, cur)
                (metadata, cur) = read_short_bytes(data, cur)
                ((error,), cur) = relative_unpack('>h', data, cur)

                yield OffsetFetchResponse(topic, partition, offset,
                                          metadata, error)
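`relative_unpack`, `read_short_ascii`, and `read_short_bytes` are helpers from elsewhere in that codebase. The sketch below assumes `relative_unpack` is a cursor-carrying wrapper around `struct.unpack` — that matches how the decoder threads `cur` through every call, but the implementation shown is an assumption:

```python
import struct

def relative_unpack(fmt, data, cur):
    # Unpack fmt at offset cur and return (values, new_offset).
    size = struct.calcsize(fmt)
    return struct.unpack(fmt, data[cur:cur + size]), cur + size

# Hand-built payload: a correlation id followed by one (partition, offset) pair.
payload = struct.pack('>i', 7) + struct.pack('>iq', 2, 100)
((correlation_id,), cur) = relative_unpack('>i', payload, 0)
((partition, offset), cur) = relative_unpack('>iq', payload, cur)
print(correlation_id, partition, offset)  # 7 2 100
```

Returning the new offset alongside the values lets the caller chain unpacks without computing sizes by hand.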
def edit_config_input_edit_content_url_url(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") edit_config = ET.Element("edit_config") config = edit_config input = ET.SubElement(edit_config, "input") edit_content = ET.SubElement(input, "edit-content") url = ET.SubElement(edit_content, "url") url = ET.SubElement(url, "url") url.text = kwargs.pop('url') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task:
### Input:
Auto Generated Code
### Response:
def edit_config_input_edit_content_url_url(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        edit_config = ET.Element("edit_config")
        config = edit_config
        input = ET.SubElement(edit_config, "input")
        edit_content = ET.SubElement(input, "edit-content")
        url = ET.SubElement(edit_content, "url")
        url = ET.SubElement(url, "url")
        url.text = kwargs.pop('url')

        callback = kwargs.pop('callback', self._callback)
        return callback(config)
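The generated method builds its NETCONF payload element by element with `xml.etree`. The same chain can be reproduced standalone; note the original rebinds `config` to `edit_config`, whereas this sketch attaches `edit_config` under `config` so the full serialised tree is visible end to end (an adjustment for illustration). The URL value is made up.

```python
import xml.etree.ElementTree as ET

config = ET.Element('config')
edit_config = ET.SubElement(config, 'edit_config')
input_el = ET.SubElement(edit_config, 'input')
edit_content = ET.SubElement(input_el, 'edit-content')
url_wrapper = ET.SubElement(edit_content, 'url')
url = ET.SubElement(url_wrapper, 'url')
url.text = 'ftp://host/startup-config'  # hypothetical URL

# SubElement both creates the child and appends it to its parent,
# which is what makes this nesting style read top-down.
print(ET.tostring(config).decode())
```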
def _join_inlines(self, soup): """Unwraps inline elements defined in self._inline_tags. """ elements = soup.find_all(True) for elem in elements: if self._inline(elem): elem.unwrap() return soup
Unwraps inline elements defined in self._inline_tags.
Below is the instruction that describes the task:
### Input:
Unwraps inline elements defined in self._inline_tags.
### Response:
def _join_inlines(self, soup):
        """Unwraps inline elements defined in self._inline_tags.
        """
        elements = soup.find_all(True)
        for elem in elements:
            if self._inline(elem):
                elem.unwrap()
        return soup
def geweke(self, chain=None, first=0.1, last=0.5, threshold=0.05):
        """ Runs the Geweke diagnostic on the supplied chains.

        Parameters
        ----------
        chain : int|str, optional
            Which chain to run the diagnostic on. By default, this is `None`,
            which will run the diagnostic on all chains. You can also supply
            an integer (the chain index) or a string, for the chain name (if you
            set one).
        first : float, optional
            The amount of the start of the chain to use
        last : float, optional
            The end amount of the chain to use
        threshold : float, optional
            The p-value to use when testing for normality.

        Returns
        -------
        bool
            whether or not the chains pass the test
        """
        if chain is None:
            return np.all([self.geweke(k, threshold=threshold) for k in range(len(self.parent.chains))])

        index = self.parent._get_chain(chain)
        assert len(index) == 1, "Please specify only one chain, have %d chains" % len(index)
        chain = self.parent.chains[index[0]]

        num_walkers = chain.walkers
        assert num_walkers is not None and num_walkers > 0, \
            "You need to specify the number of walkers to use the Geweke diagnostic."
        name = chain.name
        data = chain.chain
        chains = np.split(data, num_walkers)

        n = 1.0 * chains[0].shape[0]
        n_start = int(np.floor(first * n))
        n_end = int(np.floor((1 - last) * n))
        mean_start = np.array([np.mean(c[:n_start, i]) for c in chains for i in range(c.shape[1])])
        var_start = np.array([self._spec(c[:n_start, i]) / c[:n_start, i].size for c in chains for i in range(c.shape[1])])
        mean_end = np.array([np.mean(c[n_end:, i]) for c in chains for i in range(c.shape[1])])
        var_end = np.array([self._spec(c[n_end:, i]) / c[n_end:, i].size for c in chains for i in range(c.shape[1])])
        zs = (mean_start - mean_end) / (np.sqrt(var_start + var_end))
        _, pvalue = normaltest(zs)
        print("Geweke Statistic for chain %s has p-value %e" % (name, pvalue))
        return pvalue > threshold
Runs the Geweke diagnostic on the supplied chains.

        Parameters
        ----------
        chain : int|str, optional
            Which chain to run the diagnostic on. By default, this is `None`,
            which will run the diagnostic on all chains. You can also supply
            an integer (the chain index) or a string, for the chain name (if you
            set one).
        first : float, optional
            The amount of the start of the chain to use
        last : float, optional
            The end amount of the chain to use
        threshold : float, optional
            The p-value to use when testing for normality.

        Returns
        -------
        bool
            whether or not the chains pass the test
Below is the instruction that describes the task:
### Input:
Runs the Geweke diagnostic on the supplied chains.

        Parameters
        ----------
        chain : int|str, optional
            Which chain to run the diagnostic on. By default, this is `None`,
            which will run the diagnostic on all chains. You can also supply
            an integer (the chain index) or a string, for the chain name (if you
            set one).
        first : float, optional
            The amount of the start of the chain to use
        last : float, optional
            The end amount of the chain to use
        threshold : float, optional
            The p-value to use when testing for normality.

        Returns
        -------
        bool
            whether or not the chains pass the test
### Response:
def geweke(self, chain=None, first=0.1, last=0.5, threshold=0.05):
        """ Runs the Geweke diagnostic on the supplied chains.

        Parameters
        ----------
        chain : int|str, optional
            Which chain to run the diagnostic on. By default, this is `None`,
            which will run the diagnostic on all chains. You can also supply
            an integer (the chain index) or a string, for the chain name (if you
            set one).
        first : float, optional
            The amount of the start of the chain to use
        last : float, optional
            The end amount of the chain to use
        threshold : float, optional
            The p-value to use when testing for normality.

        Returns
        -------
        bool
            whether or not the chains pass the test
        """
        if chain is None:
            return np.all([self.geweke(k, threshold=threshold) for k in range(len(self.parent.chains))])

        index = self.parent._get_chain(chain)
        assert len(index) == 1, "Please specify only one chain, have %d chains" % len(index)
        chain = self.parent.chains[index[0]]

        num_walkers = chain.walkers
        assert num_walkers is not None and num_walkers > 0, \
            "You need to specify the number of walkers to use the Geweke diagnostic."
name = chain.name
        data = chain.chain
        chains = np.split(data, num_walkers)

        n = 1.0 * chains[0].shape[0]
        n_start = int(np.floor(first * n))
        n_end = int(np.floor((1 - last) * n))
        mean_start = np.array([np.mean(c[:n_start, i]) for c in chains for i in range(c.shape[1])])
        var_start = np.array([self._spec(c[:n_start, i]) / c[:n_start, i].size for c in chains for i in range(c.shape[1])])
        mean_end = np.array([np.mean(c[n_end:, i]) for c in chains for i in range(c.shape[1])])
        var_end = np.array([self._spec(c[n_end:, i]) / c[n_end:, i].size for c in chains for i in range(c.shape[1])])
        zs = (mean_start - mean_end) / (np.sqrt(var_start + var_end))
        _, pvalue = normaltest(zs)
        print("Geweke Statistic for chain %s has p-value %e" % (name, pvalue))
        return pvalue > threshold
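Stripped of spectral-density estimation and the normality test, the core of the Geweke diagnostic is a z-score comparing the mean of the early segment of a chain with the mean of the late segment. A naive-variance sketch for a single 1-D chain (the real code uses `self._spec` for spectral-density variance estimates and scipy's `normaltest` across walkers; the naive `pvariance`-based estimate here is an assumption for illustration):

```python
import statistics

def geweke_z(chain, first=0.1, last=0.5):
    n = len(chain)
    start = chain[:int(first * n)]
    end = chain[int((1 - last) * n):]
    # Naive variance-of-the-mean estimates stand in for spectral densities.
    var = (statistics.pvariance(start) / len(start)
           + statistics.pvariance(end) / len(end))
    return (statistics.mean(start) - statistics.mean(end)) / var ** 0.5

# A stationary chain: both segments share the same mean, so z stays near 0.
stationary = [float(i % 2) for i in range(1000)]
print(abs(geweke_z(stationary)) < 2)

# A drifting chain: the early and late means differ, so |z| is large.
drifting = [i / 1000 + i % 2 for i in range(1000)]
print(abs(geweke_z(drifting)) > 2)
```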
def get_block(self, x,y,z, coord=False): """Return the id of the block at x, y, z.""" """ Laid out like: (0,0,0), (0,1,0), (0,2,0) ... (0,127,0), (0,0,1), (0,1,1), (0,2,1) ... (0,127,1), (0,0,2) ... (0,127,15), (1,0,0), (1,1,0) ... (15,127,15) :: blocks = [] for x in range(15): for z in range(15): for y in range(127): blocks.append(Block(x,y,z)) """ offset = y + z*128 + x*128*16 if (coord == False) else coord[1] + coord[2]*128 + coord[0]*128*16 return self.blocksList[offset]
Return the id of the block at x, y, z.
Below is the instruction that describes the task:
### Input:
Return the id of the block at x, y, z.
### Response:
def get_block(self, x,y,z, coord=False):
        """Return the id of the block at x, y, z."""
        """
        Laid out like:
        (0,0,0), (0,1,0), (0,2,0) ... (0,127,0), (0,0,1), (0,1,1), (0,2,1) ... (0,127,1), (0,0,2) ... (0,127,15), (1,0,0), (1,1,0) ... (15,127,15)

        ::

            blocks = []
            for x in range(15):
              for z in range(15):
                for y in range(127):
                  blocks.append(Block(x,y,z))
        """

        offset = y + z*128 + x*128*16 if (coord == False) else coord[1] + coord[2]*128 + coord[0]*128*16
        return self.blocksList[offset]
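The flat index arithmetic is worth checking by hand: y varies fastest, then z, then x, over a 16 (x) by 128 (y) by 16 (z) chunk, so each x-slab spans 128 * 16 = 2048 entries.

```python
def block_offset(x, y, z):
    # y varies fastest, then z, then x: 128 * 16 = 2048 blocks per x-slab.
    return y + z * 128 + x * 128 * 16

assert block_offset(0, 0, 0) == 0
assert block_offset(0, 127, 0) == 127   # last y in the first column
assert block_offset(0, 0, 1) == 128     # the next z starts right after
assert block_offset(1, 0, 0) == 2048    # the next x starts a slab later
print(block_offset(15, 127, 15))        # 32767, the final index
```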
def centerOnItems(self, items = None): """ Centers on the given items, if no items are supplied, then all items will be centered on. :param items | [<QGraphicsItem>, ..] """ if not items: rect = self.scene().visibleItemsBoundingRect() if not rect.width(): rect = self.scene().sceneRect() self.centerOn(rect.center()) else: self.centerOn(self.scene().calculateBoundingRect(items).center())
Centers on the given items, if no items are supplied, then all items will be centered on. :param items | [<QGraphicsItem>, ..]
Below is the instruction that describes the task:
### Input:
Centers on the given items, if no items are supplied, then all
        items will be centered on.

        :param      items | [<QGraphicsItem>, ..]
### Response:
def centerOnItems(self, items = None):
        """
        Centers on the given items, if no items are supplied, then all
        items will be centered on.

        :param      items | [<QGraphicsItem>, ..]
        """
        if not items:
            rect = self.scene().visibleItemsBoundingRect()

            if not rect.width():
                rect = self.scene().sceneRect()

            self.centerOn(rect.center())
        else:
            self.centerOn(self.scene().calculateBoundingRect(items).center())
def load_configuration(): """Load the configuration""" (belbio_conf_fp, belbio_secrets_fp) = get_belbio_conf_files() log.info(f"Using conf: {belbio_conf_fp} and secrets files: {belbio_secrets_fp} ") config = {} if belbio_conf_fp: with open(belbio_conf_fp, "r") as f: config = yaml.load(f, Loader=yaml.SafeLoader) config["source_files"] = {} config["source_files"]["conf"] = belbio_conf_fp if belbio_secrets_fp: with open(belbio_secrets_fp, "r") as f: secrets = yaml.load(f, Loader=yaml.SafeLoader) config["secrets"] = copy.deepcopy(secrets) if "source_files" in config: config["source_files"]["secrets"] = belbio_secrets_fp get_versions(config) # TODO - needs to be completed # add_environment_vars(config) return config
Load the configuration
Below is the instruction that describes the task: ### Input: Load the configuration ### Response: def load_configuration(): """Load the configuration""" (belbio_conf_fp, belbio_secrets_fp) = get_belbio_conf_files() log.info(f"Using conf: {belbio_conf_fp} and secrets files: {belbio_secrets_fp} ") config = {} if belbio_conf_fp: with open(belbio_conf_fp, "r") as f: config = yaml.load(f, Loader=yaml.SafeLoader) config["source_files"] = {} config["source_files"]["conf"] = belbio_conf_fp if belbio_secrets_fp: with open(belbio_secrets_fp, "r") as f: secrets = yaml.load(f, Loader=yaml.SafeLoader) config["secrets"] = copy.deepcopy(secrets) if "source_files" in config: config["source_files"]["secrets"] = belbio_secrets_fp get_versions(config) # TODO - needs to be completed # add_environment_vars(config) return config
def version_from_xml_filename(filename): "extract the numeric version from the xml filename" try: filename_parts = filename.split(os.sep)[-1].split('-') except AttributeError: return None if len(filename_parts) == 3: try: return int(filename_parts[-1].lstrip('v').rstrip('.xml')) except ValueError: return None else: return None
extract the numeric version from the xml filename
Below is the instruction that describes the task: ### Input: extract the numeric version from the xml filename ### Response: def version_from_xml_filename(filename): "extract the numeric version from the xml filename" try: filename_parts = filename.split(os.sep)[-1].split('-') except AttributeError: return None if len(filename_parts) == 3: try: return int(filename_parts[-1].lstrip('v').rstrip('.xml')) except ValueError: return None else: return None
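The record above parses version suffixes like `v1.xml` with `lstrip`/`rstrip` (which strip character sets, not literal prefixes). A standalone sketch with an illustrative eLife-style filename (the filename is an assumption, not from the source):

```python
import os

def version_from_xml_filename(filename):
    """Extract the numeric version from an article XML filename."""
    try:
        filename_parts = filename.split(os.sep)[-1].split('-')
    except AttributeError:
        # e.g. filename is None
        return None
    if len(filename_parts) == 3:
        try:
            # "v12.xml" -> lstrip drops leading "v" chars, rstrip drops ".", "x", "m", "l"
            return int(filename_parts[-1].lstrip('v').rstrip('.xml'))
        except ValueError:
            return None
    return None

version = version_from_xml_filename('elife-09560-v1.xml')  # hypothetical filename
```

Note that `rstrip('.xml')` works here only because the remaining digits are not in the stripped character set.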
def trust(self, scope, vk): """Start trusting a particular key for given scope.""" self.data['verifiers'].append({'scope': scope, 'vk': vk}) return self
Start trusting a particular key for given scope.
Below is the instruction that describes the task: ### Input: Start trusting a particular key for given scope. ### Response: def trust(self, scope, vk): """Start trusting a particular key for given scope.""" self.data['verifiers'].append({'scope': scope, 'vk': vk}) return self
def ParseOptions(cls, options, configuration_object): """Parses and validates options. Args: options (argparse.Namespace): parser options. configuration_object (CLITool): object to be configured by the argument helper. Raises: BadConfigObject: when the configuration object is of the wrong type. BadConfigOption: if the collection file does not exist. """ if not isinstance(configuration_object, tools.CLITool): raise errors.BadConfigObject( 'Configuration object is not an instance of CLITool') filter_file = cls._ParseStringOption(options, 'file_filter') # Search the data location for the filter file. if filter_file and not os.path.isfile(filter_file): data_location = getattr(configuration_object, '_data_location', None) if data_location: filter_file_basename = os.path.basename(filter_file) filter_file_path = os.path.join(data_location, filter_file_basename) if os.path.isfile(filter_file_path): filter_file = filter_file_path if filter_file and not os.path.isfile(filter_file): raise errors.BadConfigOption( 'No such collection filter file: {0:s}.'.format(filter_file)) setattr(configuration_object, '_filter_file', filter_file)
Parses and validates options. Args: options (argparse.Namespace): parser options. configuration_object (CLITool): object to be configured by the argument helper. Raises: BadConfigObject: when the configuration object is of the wrong type. BadConfigOption: if the collection file does not exist.
Below is the instruction that describes the task: ### Input: Parses and validates options. Args: options (argparse.Namespace): parser options. configuration_object (CLITool): object to be configured by the argument helper. Raises: BadConfigObject: when the configuration object is of the wrong type. BadConfigOption: if the collection file does not exist. ### Response: def ParseOptions(cls, options, configuration_object): """Parses and validates options. Args: options (argparse.Namespace): parser options. configuration_object (CLITool): object to be configured by the argument helper. Raises: BadConfigObject: when the configuration object is of the wrong type. BadConfigOption: if the collection file does not exist. """ if not isinstance(configuration_object, tools.CLITool): raise errors.BadConfigObject( 'Configuration object is not an instance of CLITool') filter_file = cls._ParseStringOption(options, 'file_filter') # Search the data location for the filter file. if filter_file and not os.path.isfile(filter_file): data_location = getattr(configuration_object, '_data_location', None) if data_location: filter_file_basename = os.path.basename(filter_file) filter_file_path = os.path.join(data_location, filter_file_basename) if os.path.isfile(filter_file_path): filter_file = filter_file_path if filter_file and not os.path.isfile(filter_file): raise errors.BadConfigOption( 'No such collection filter file: {0:s}.'.format(filter_file)) setattr(configuration_object, '_filter_file', filter_file)
def location_based_search(self, lng, lat, distance, unit="miles", attribute_map=None, page=0, limit=50): """Search based on location and other attribute filters :param long lng: Longitude parameter :param long lat: Latitude parameter :param int distance: The radius of the query :param str unit: The unit of measure for the query, defaults to miles :param dict attribute_map: Additional attributes to apply to the location bases query :param int page: The page to return :param int limit: Number of results per page :returns: List of objects :rtype: list """ #Determine what type of radian conversion you want base on a unit of measure if unit == "miles": distance = float(distance/69) else: distance = float(distance/111.045) #Start with geospatial query query = { "loc" : { "$within": { "$center" : [[lng, lat], distance]} } } #Allow querying additional attributes if attribute_map: query = dict(query.items() + attribute_map.items()) results = yield self.find(query, page=page, limit=limit) raise Return(self._list_cursor_to_json(results))
Search based on location and other attribute filters :param long lng: Longitude parameter :param long lat: Latitude parameter :param int distance: The radius of the query :param str unit: The unit of measure for the query, defaults to miles :param dict attribute_map: Additional attributes to apply to the location bases query :param int page: The page to return :param int limit: Number of results per page :returns: List of objects :rtype: list
Below is the instruction that describes the task: ### Input: Search based on location and other attribute filters :param long lng: Longitude parameter :param long lat: Latitude parameter :param int distance: The radius of the query :param str unit: The unit of measure for the query, defaults to miles :param dict attribute_map: Additional attributes to apply to the location bases query :param int page: The page to return :param int limit: Number of results per page :returns: List of objects :rtype: list ### Response: def location_based_search(self, lng, lat, distance, unit="miles", attribute_map=None, page=0, limit=50): """Search based on location and other attribute filters :param long lng: Longitude parameter :param long lat: Latitude parameter :param int distance: The radius of the query :param str unit: The unit of measure for the query, defaults to miles :param dict attribute_map: Additional attributes to apply to the location bases query :param int page: The page to return :param int limit: Number of results per page :returns: List of objects :rtype: list """ #Determine what type of radian conversion you want base on a unit of measure if unit == "miles": distance = float(distance/69) else: distance = float(distance/111.045) #Start with geospatial query query = { "loc" : { "$within": { "$center" : [[lng, lat], distance]} } } #Allow querying additional attributes if attribute_map: query = dict(query.items() + attribute_map.items()) results = yield self.find(query, page=page, limit=limit) raise Return(self._list_cursor_to_json(results))
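The radius conversion in the record above divides distance by degrees-per-unit of latitude (roughly 69 miles or 111.045 km per degree). Isolated as a sketch, with the function name as an illustrative label:

```python
def radius_in_degrees(distance, unit='miles'):
    """Approximate a distance as degrees of latitude for a $center geo query."""
    # ~69 miles or ~111.045 km per degree of latitude
    if unit == 'miles':
        return float(distance / 69)
    return float(distance / 111.045)

# a 69-mile radius is roughly one degree
deg = radius_in_degrees(69, 'miles')
```

Under Python 2 integer division, `distance/69` with int arguments would truncate before the `float()` call, which is worth keeping in mind when reading the original.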
def create_token(key, payload): """Auth token generator payload should be a json encodable data structure """ token = hmac.new(key) token.update(json.dumps(payload)) return token.hexdigest()
Auth token generator payload should be a json encodable data structure
Below is the instruction that describes the task: ### Input: Auth token generator payload should be a json encodable data structure ### Response: def create_token(key, payload): """Auth token generator payload should be a json encodable data structure """ token = hmac.new(key) token.update(json.dumps(payload)) return token.hexdigest()
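The `create_token` record above is Python 2 era: `hmac.new` on a `str` with an implicit MD5 digest. A Python 3 sketch of the same idea, with an explicit digest and bytes throughout (the key and payload below are made up for illustration):

```python
import hashlib
import hmac
import json

def create_token(key: bytes, payload) -> str:
    """Auth token generator; payload should be a JSON-encodable data structure."""
    # Python 3 requires a bytes key and an explicit digestmod
    mac = hmac.new(key, digestmod=hashlib.sha256)
    # sort_keys makes the token stable across dict orderings
    mac.update(json.dumps(payload, sort_keys=True).encode('utf-8'))
    return mac.hexdigest()

token = create_token(b'secret-key', {'user': 'alice'})  # hypothetical inputs
```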
def help_for_command(command): """Get the help text (signature + docstring) for a command (function).""" help_text = pydoc.text.document(command) # remove backspaces return re.subn('.\\x08', '', help_text)[0]
Get the help text (signature + docstring) for a command (function).
Below is the instruction that describes the task: ### Input: Get the help text (signature + docstring) for a command (function). ### Response: def help_for_command(command): """Get the help text (signature + docstring) for a command (function).""" help_text = pydoc.text.document(command) # remove backspaces return re.subn('.\\x08', '', help_text)[0]
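The backspace stripping in the record above exists because `pydoc.text.document` renders bold as terminal overstrike (`X\bX`); the regex drops the overstruck character. A runnable sketch with a throwaway function:

```python
import pydoc
import re

def help_for_command(command):
    """Get the help text (signature + docstring) for a command (function)."""
    help_text = pydoc.text.document(command)
    # pydoc emits "X\bX" overstrike bold; remove each char+backspace pair
    return re.subn('.\\x08', '', help_text)[0]

def greet(name):
    """Say hello to name."""

text = help_for_command(greet)
```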
def set_info_handler(codec, handler, data=None): """Wraps openjp2 library function opj_set_info_handler. Set the info handler use by openjpeg. Parameters ---------- codec : CODEC_TYPE Codec initialized by create_compress function. handler : python function The callback function to be used. user_data : anything User/client data. Raises ------ RuntimeError If the OpenJPEG library routine opj_set_info_handler fails. """ OPENJP2.opj_set_info_handler.argtypes = [CODEC_TYPE, ctypes.c_void_p, ctypes.c_void_p] OPENJP2.opj_set_info_handler.restype = check_error OPENJP2.opj_set_info_handler(codec, handler, data)
Wraps openjp2 library function opj_set_info_handler. Set the info handler use by openjpeg. Parameters ---------- codec : CODEC_TYPE Codec initialized by create_compress function. handler : python function The callback function to be used. user_data : anything User/client data. Raises ------ RuntimeError If the OpenJPEG library routine opj_set_info_handler fails.
Below is the instruction that describes the task: ### Input: Wraps openjp2 library function opj_set_info_handler. Set the info handler use by openjpeg. Parameters ---------- codec : CODEC_TYPE Codec initialized by create_compress function. handler : python function The callback function to be used. user_data : anything User/client data. Raises ------ RuntimeError If the OpenJPEG library routine opj_set_info_handler fails. ### Response: def set_info_handler(codec, handler, data=None): """Wraps openjp2 library function opj_set_info_handler. Set the info handler use by openjpeg. Parameters ---------- codec : CODEC_TYPE Codec initialized by create_compress function. handler : python function The callback function to be used. user_data : anything User/client data. Raises ------ RuntimeError If the OpenJPEG library routine opj_set_info_handler fails. """ OPENJP2.opj_set_info_handler.argtypes = [CODEC_TYPE, ctypes.c_void_p, ctypes.c_void_p] OPENJP2.opj_set_info_handler.restype = check_error OPENJP2.opj_set_info_handler(codec, handler, data)
def train_df(self, df): """ Train scale from a dataframe """ aesthetics = sorted(set(self.aesthetics) & set(df.columns)) for ae in aesthetics: self.train(df[ae])
Train scale from a dataframe
Below is the instruction that describes the task: ### Input: Train scale from a dataframe ### Response: def train_df(self, df): """ Train scale from a dataframe """ aesthetics = sorted(set(self.aesthetics) & set(df.columns)) for ae in aesthetics: self.train(df[ae])
def removeItem(self, index): """Alias for removeComponent""" self._stim.removeComponent(index.row(), index.column())
Alias for removeComponent
Below is the instruction that describes the task: ### Input: Alias for removeComponent ### Response: def removeItem(self, index): """Alias for removeComponent""" self._stim.removeComponent(index.row(), index.column())
def extract_stack(f=None, limit=None): """equivalent to traceback.extract_stack(), but also works with psyco """ if f is not None: raise RuntimeError("Timba.utils.extract_stack: f has to be None, don't ask why") # normally we can just use the traceback.extract_stack() function and # cut out the last frame (which is just ourselves). However, under psyco # this seems to return an empty list, so we use sys._getframe() instead lim = limit if lim is not None: lim += 1 tb = traceback.extract_stack(None, lim) if tb: return tb[:-1]; # skip current frame # else presumably running under psyco return nonportable_extract_stack(f, limit)
equivalent to traceback.extract_stack(), but also works with psyco
Below is the instruction that describes the task: ### Input: equivalent to traceback.extract_stack(), but also works with psyco ### Response: def extract_stack(f=None, limit=None): """equivalent to traceback.extract_stack(), but also works with psyco """ if f is not None: raise RuntimeError("Timba.utils.extract_stack: f has to be None, don't ask why") # normally we can just use the traceback.extract_stack() function and # cut out the last frame (which is just ourselves). However, under psyco # this seems to return an empty list, so we use sys._getframe() instead lim = limit if lim is not None: lim += 1 tb = traceback.extract_stack(None, lim) if tb: return tb[:-1]; # skip current frame # else presumably running under psyco return nonportable_extract_stack(f, limit)
def rev_reg_id2cred_def_id_tag(rr_id: str) -> (str, str): """ Given a revocation registry identifier, return its corresponding credential definition identifier and (stringified int) tag. Raise BadIdentifier if input is not a revocation registry identifier. :param rr_id: revocation registry identifier :return: credential definition identifier and tag """ if ok_rev_reg_id(rr_id): return ( ':'.join(rr_id.split(':')[2:-2]), # rev reg id comprises (prefixes):<cred_def_id>:(suffixes) str(rr_id.split(':')[-1]) # tag is last token ) raise BadIdentifier('Bad revocation registry identifier {}'.format(rr_id))
Given a revocation registry identifier, return its corresponding credential definition identifier and (stringified int) tag. Raise BadIdentifier if input is not a revocation registry identifier. :param rr_id: revocation registry identifier :return: credential definition identifier and tag
Below is the instruction that describes the task: ### Input: Given a revocation registry identifier, return its corresponding credential definition identifier and (stringified int) tag. Raise BadIdentifier if input is not a revocation registry identifier. :param rr_id: revocation registry identifier :return: credential definition identifier and tag ### Response: def rev_reg_id2cred_def_id_tag(rr_id: str) -> (str, str): """ Given a revocation registry identifier, return its corresponding credential definition identifier and (stringified int) tag. Raise BadIdentifier if input is not a revocation registry identifier. :param rr_id: revocation registry identifier :return: credential definition identifier and tag """ if ok_rev_reg_id(rr_id): return ( ':'.join(rr_id.split(':')[2:-2]), # rev reg id comprises (prefixes):<cred_def_id>:(suffixes) str(rr_id.split(':')[-1]) # tag is last token ) raise BadIdentifier('Bad revocation registry identifier {}'.format(rr_id))
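The token arithmetic in the record above can be checked without the `ok_rev_reg_id` validator. The identifier below is invented but follows the `<did>:4:<cred_def_id>:CL_ACCUM:<tag>` shape the slicing assumes, where the embedded cred def id is itself `<did>:3:CL:<seq_no>:<cd_tag>`:

```python
# Hypothetical revocation registry identifier (not from the source)
rr_id = 'V4SGRU86Z58d6TV7PBUe6f:4:V4SGRU86Z58d6TV7PBUe6f:3:CL:20:tag:CL_ACCUM:1'

# drop the "<did>:4:" prefix and the ":CL_ACCUM:<tag>" suffix
cred_def_id = ':'.join(rr_id.split(':')[2:-2])
# the last token is the (stringified) tag
tag = str(rr_id.split(':')[-1])
```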
def clean(self): """Return a copy of this Text instance with invalid characters removed.""" return Text(self.__text_cleaner.clean(self[TEXT]), **self.__kwargs)
Return a copy of this Text instance with invalid characters removed.
Below is the instruction that describes the task: ### Input: Return a copy of this Text instance with invalid characters removed. ### Response: def clean(self): """Return a copy of this Text instance with invalid characters removed.""" return Text(self.__text_cleaner.clean(self[TEXT]), **self.__kwargs)
def __get_logged_in_id(self): """ Fetch the logged in users ID, with caching. ID is reset on calls to log_in. """ if self.__logged_in_id == None: self.__logged_in_id = self.account_verify_credentials().id return self.__logged_in_id
Fetch the logged in users ID, with caching. ID is reset on calls to log_in.
Below is the instruction that describes the task: ### Input: Fetch the logged in users ID, with caching. ID is reset on calls to log_in. ### Response: def __get_logged_in_id(self): """ Fetch the logged in users ID, with caching. ID is reset on calls to log_in. """ if self.__logged_in_id == None: self.__logged_in_id = self.account_verify_credentials().id return self.__logged_in_id
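The record above is a lazy-caching pattern: fetch the ID once, then serve it from the cache until a re-login resets it. In isolation (the class and the stand-in for `account_verify_credentials` are invented for illustration):

```python
class Client:
    def __init__(self):
        self._logged_in_id = None
        self.calls = 0  # instrumentation to show the remote call runs once

    def _verify_credentials_id(self):
        # stand-in for a network call returning the account id
        self.calls += 1
        return 42

    def get_logged_in_id(self):
        # fetch once, then serve from cache; log_in would reset the cache
        if self._logged_in_id is None:
            self._logged_in_id = self._verify_credentials_id()
        return self._logged_in_id

c = Client()
c.get_logged_in_id()
c.get_logged_in_id()
```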
def read(self, size=None): """Reads a byte string from the file-like object at the current offset. The function will read a byte string of the specified size or all of the remaining data if no size was specified. Args: size (Optional[int]): number of bytes to read, where None is all remaining data. Returns: bytes: data read. Raises: IOError: if the read failed. OSError: if the read failed. """ if not self._is_open: raise IOError('Not opened.') if self._current_offset < 0: raise IOError( 'Invalid current offset: {0:d} value less than zero.'.format( self._current_offset)) if self._decoded_stream_size is None: self._decoded_stream_size = self._GetDecodedStreamSize() if self._decoded_stream_size < 0: raise IOError('Invalid decoded stream size.') if self._current_offset >= self._decoded_stream_size: return b'' if self._realign_offset: self._AlignDecodedDataOffset(self._current_offset) self._realign_offset = False if size is None: size = self._decoded_stream_size if self._current_offset + size > self._decoded_stream_size: size = self._decoded_stream_size - self._current_offset decoded_data = b'' if size == 0: return decoded_data while size > self._decoded_data_size: decoded_data = b''.join([ decoded_data, self._decoded_data[self._decoded_data_offset:]]) remaining_decoded_data_size = ( self._decoded_data_size - self._decoded_data_offset) self._current_offset += remaining_decoded_data_size size -= remaining_decoded_data_size if self._current_offset >= self._decoded_stream_size: break read_count = self._ReadEncodedData(self._ENCODED_DATA_BUFFER_SIZE) self._decoded_data_offset = 0 if read_count == 0: break if size > 0: slice_start_offset = self._decoded_data_offset slice_end_offset = slice_start_offset + size decoded_data = b''.join([ decoded_data, self._decoded_data[slice_start_offset:slice_end_offset]]) self._decoded_data_offset += size self._current_offset += size return decoded_data
Reads a byte string from the file-like object at the current offset. The function will read a byte string of the specified size or all of the remaining data if no size was specified. Args: size (Optional[int]): number of bytes to read, where None is all remaining data. Returns: bytes: data read. Raises: IOError: if the read failed. OSError: if the read failed.
Below is the instruction that describes the task: ### Input: Reads a byte string from the file-like object at the current offset. The function will read a byte string of the specified size or all of the remaining data if no size was specified. Args: size (Optional[int]): number of bytes to read, where None is all remaining data. Returns: bytes: data read. Raises: IOError: if the read failed. OSError: if the read failed. ### Response: def read(self, size=None): """Reads a byte string from the file-like object at the current offset. The function will read a byte string of the specified size or all of the remaining data if no size was specified. Args: size (Optional[int]): number of bytes to read, where None is all remaining data. Returns: bytes: data read. Raises: IOError: if the read failed. OSError: if the read failed. """ if not self._is_open: raise IOError('Not opened.') if self._current_offset < 0: raise IOError( 'Invalid current offset: {0:d} value less than zero.'.format( self._current_offset)) if self._decoded_stream_size is None: self._decoded_stream_size = self._GetDecodedStreamSize() if self._decoded_stream_size < 0: raise IOError('Invalid decoded stream size.') if self._current_offset >= self._decoded_stream_size: return b'' if self._realign_offset: self._AlignDecodedDataOffset(self._current_offset) self._realign_offset = False if size is None: size = self._decoded_stream_size if self._current_offset + size > self._decoded_stream_size: size = self._decoded_stream_size - self._current_offset decoded_data = b'' if size == 0: return decoded_data while size > self._decoded_data_size: decoded_data = b''.join([ decoded_data, self._decoded_data[self._decoded_data_offset:]]) remaining_decoded_data_size = ( self._decoded_data_size - self._decoded_data_offset) self._current_offset += remaining_decoded_data_size size -= remaining_decoded_data_size if self._current_offset >= self._decoded_stream_size: break read_count = self._ReadEncodedData(self._ENCODED_DATA_BUFFER_SIZE) self._decoded_data_offset = 0 if read_count == 0: break if size > 0: slice_start_offset = self._decoded_data_offset slice_end_offset = slice_start_offset + size decoded_data = b''.join([ decoded_data, self._decoded_data[slice_start_offset:slice_end_offset]]) self._decoded_data_offset += size self._current_offset += size return decoded_data
def run_migrations_offline(): """ Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ url = context.config.get_main_option("sqlalchemy.url") context.configure(url=url, compare_server_default=True) with context.begin_transaction(): context.run_migrations()
Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output.
Below is the instruction that describes the task: ### Input: Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. ### Response: def run_migrations_offline(): """ Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ url = context.config.get_main_option("sqlalchemy.url") context.configure(url=url, compare_server_default=True) with context.begin_transaction(): context.run_migrations()
def cumulate(self, axis): """Returns new histogram with all data cumulated along axis.""" axis = self.get_axis_number(axis) return Histdd.from_histogram(np.cumsum(self.histogram, axis=axis), bin_edges=self.bin_edges, axis_names=self.axis_names)
Returns new histogram with all data cumulated along axis.
Below is the instruction that describes the task: ### Input: Returns new histogram with all data cumulated along axis. ### Response: def cumulate(self, axis): """Returns new histogram with all data cumulated along axis.""" axis = self.get_axis_number(axis) return Histdd.from_histogram(np.cumsum(self.histogram, axis=axis), bin_edges=self.bin_edges, axis_names=self.axis_names)
def send(self, api, force_send): """ Send this item using api. :param api: D4S2Api sends messages to D4S2 :param force_send: bool should we send even if the item already exists """ item_id = self.get_existing_item_id(api) if not item_id: item_id = self.create_item_returning_id(api) api.send_item(self.destination, item_id, force_send) else: if force_send: api.send_item(self.destination, item_id, force_send) else: item_type = D4S2Api.DEST_TO_NAME.get(self.destination, "Item") msg = "{} already sent. Run with --resend argument to resend." raise D4S2Error(msg.format(item_type), warning=True)
Send this item using api. :param api: D4S2Api sends messages to D4S2 :param force_send: bool should we send even if the item already exists
Below is the instruction that describes the task: ### Input: Send this item using api. :param api: D4S2Api sends messages to D4S2 :param force_send: bool should we send even if the item already exists ### Response: def send(self, api, force_send): """ Send this item using api. :param api: D4S2Api sends messages to D4S2 :param force_send: bool should we send even if the item already exists """ item_id = self.get_existing_item_id(api) if not item_id: item_id = self.create_item_returning_id(api) api.send_item(self.destination, item_id, force_send) else: if force_send: api.send_item(self.destination, item_id, force_send) else: item_type = D4S2Api.DEST_TO_NAME.get(self.destination, "Item") msg = "{} already sent. Run with --resend argument to resend." raise D4S2Error(msg.format(item_type), warning=True)
def update_badge_users_count(self): """ Denormalizes ``Badge.users.count()`` into ``Bagdes.users_count`` field. """ logger.debug('→ Badge %s: syncing users count...', self.slug) badge, updated = self.badge, False if not badge: logger.debug( '✘ Badge %s: does not exist in the database (run badgify_sync badges)', self.slug) return (self.slug, updated) old_value, new_value = badge.users_count, badge.users.count() if old_value != new_value: badge = Badge.objects.get(slug=self.slug) badge.users_count = new_value badge.save() updated = True if updated: logger.debug('✓ Badge %s: updated users count (from %d to %d)', self.slug, old_value, new_value) else: logger.debug('✓ Badge %s: users count up-to-date (%d)', self.slug, new_value) return (badge, updated)
Denormalizes ``Badge.users.count()`` into ``Bagdes.users_count`` field.
Below is the instruction that describes the task: ### Input: Denormalizes ``Badge.users.count()`` into ``Bagdes.users_count`` field. ### Response: def update_badge_users_count(self): """ Denormalizes ``Badge.users.count()`` into ``Bagdes.users_count`` field. """ logger.debug('→ Badge %s: syncing users count...', self.slug) badge, updated = self.badge, False if not badge: logger.debug( '✘ Badge %s: does not exist in the database (run badgify_sync badges)', self.slug) return (self.slug, updated) old_value, new_value = badge.users_count, badge.users.count() if old_value != new_value: badge = Badge.objects.get(slug=self.slug) badge.users_count = new_value badge.save() updated = True if updated: logger.debug('✓ Badge %s: updated users count (from %d to %d)', self.slug, old_value, new_value) else: logger.debug('✓ Badge %s: users count up-to-date (%d)', self.slug, new_value) return (badge, updated)
def convert_readme_to_rst(): """ Attempt to convert a README.md file into README.rst """ project_files = os.listdir('.') for filename in project_files: if filename.lower() == 'readme': raise ProjectError( 'found {} in project directory...'.format(filename) + 'not sure what to do with it, refusing to convert' ) elif filename.lower() == 'readme.rst': raise ProjectError( 'found {} in project directory...'.format(filename) + 'refusing to overwrite' ) for filename in project_files: if filename.lower() == 'readme.md': rst_filename = 'README.rst' logger.info('converting {} to {}'.format(filename, rst_filename)) try: rst_content = pypandoc.convert(filename, 'rst') with open('README.rst', 'w') as rst_file: rst_file.write(rst_content) return except OSError as e: raise ProjectError( 'could not convert readme to rst due to pypandoc error:' + os.linesep + str(e) ) raise ProjectError('could not find any README.md file to convert')
Attempt to convert a README.md file into README.rst
Below is the instruction that describes the task: ### Input: Attempt to convert a README.md file into README.rst ### Response: def convert_readme_to_rst(): """ Attempt to convert a README.md file into README.rst """ project_files = os.listdir('.') for filename in project_files: if filename.lower() == 'readme': raise ProjectError( 'found {} in project directory...'.format(filename) + 'not sure what to do with it, refusing to convert' ) elif filename.lower() == 'readme.rst': raise ProjectError( 'found {} in project directory...'.format(filename) + 'refusing to overwrite' ) for filename in project_files: if filename.lower() == 'readme.md': rst_filename = 'README.rst' logger.info('converting {} to {}'.format(filename, rst_filename)) try: rst_content = pypandoc.convert(filename, 'rst') with open('README.rst', 'w') as rst_file: rst_file.write(rst_content) return except OSError as e: raise ProjectError( 'could not convert readme to rst due to pypandoc error:' + os.linesep + str(e) ) raise ProjectError('could not find any README.md file to convert')
def remove(text, what, count=None, strip=False): ''' Like ``replace``, where ``new`` replacement is an empty string. ''' return replace(text, what, '', count=count, strip=strip)
Like ``replace``, where ``new`` replacement is an empty string.
Below is the instruction that describes the task: ### Input: Like ``replace``, where ``new`` replacement is an empty string. ### Response: def remove(text, what, count=None, strip=False): ''' Like ``replace``, where ``new`` replacement is an empty string. ''' return replace(text, what, '', count=count, strip=strip)
def dbRestore(self, db_value, context=None): """ Converts a stored database value to Python. :param py_value: <variant> :param context: <orb.Context> :return: <variant> """ if db_value is None: return None elif isinstance(db_value, (str, unicode)): return self.valueFromString(db_value, context=context) else: return super(AbstractDatetimeColumn, self).dbRestore(db_value, context=context)
Converts a stored database value to Python. :param py_value: <variant> :param context: <orb.Context> :return: <variant>
Below is the instruction that describes the task: ### Input: Converts a stored database value to Python. :param py_value: <variant> :param context: <orb.Context> :return: <variant> ### Response: def dbRestore(self, db_value, context=None): """ Converts a stored database value to Python. :param py_value: <variant> :param context: <orb.Context> :return: <variant> """ if db_value is None: return None elif isinstance(db_value, (str, unicode)): return self.valueFromString(db_value, context=context) else: return super(AbstractDatetimeColumn, self).dbRestore(db_value, context=context)
def underlying_symbol(self): """ [str] Underlying symbol code. For all futures contracts other than index futures (IH, IF, IC), this field is currently 'null' (futures only) """ try: return self.__dict__["underlying_symbol"] except (KeyError, ValueError): raise AttributeError( "Instrument(order_book_id={}) has no attribute 'underlying_symbol' ".format(self.order_book_id) )
[str] Underlying symbol code. For all futures contracts other than index futures (IH, IF, IC), this field is currently 'null' (futures only)
Below is the instruction that describes the task: ### Input: [str] Underlying symbol code. For all futures contracts other than index futures (IH, IF, IC), this field is currently 'null' (futures only) ### Response: def underlying_symbol(self): """ [str] Underlying symbol code. For all futures contracts other than index futures (IH, IF, IC), this field is currently 'null' (futures only) """ try: return self.__dict__["underlying_symbol"] except (KeyError, ValueError): raise AttributeError( "Instrument(order_book_id={}) has no attribute 'underlying_symbol' ".format(self.order_book_id) )
def process_file(path): """ Open a single labeled image at path and get needed information, return as a dictionary""" info = dict() with fits.open(path) as hdu: head = hdu[0].header data = hdu[0].data labels = {theme: value for value, theme in list(hdu[1].data)} info['filename'] = os.path.basename(path) info['trainer'] = head['expert'] info['date-label'] = dateparser.parse(head['date-lab']) info['date-observation'] = dateparser.parse(head['date-end']) for theme in themes: info[theme + "_count"] = np.sum(data == labels[theme]) return info
Open a single labeled image at path and get needed information, return as a dictionary
Below is the the instruction that describes the task: ### Input: Open a single labeled image at path and get needed information, return as a dictionary ### Response: def process_file(path): """ Open a single labeled image at path and get needed information, return as a dictionary""" info = dict() with fits.open(path) as hdu: head = hdu[0].header data = hdu[0].data labels = {theme: value for value, theme in list(hdu[1].data)} info['filename'] = os.path.basename(path) info['trainer'] = head['expert'] info['date-label'] = dateparser.parse(head['date-lab']) info['date-observation'] = dateparser.parse(head['date-end']) for theme in themes: info[theme + "_count"] = np.sum(data == labels[theme]) return info
def expr2dimacssat(ex): """Convert an expression into an equivalent DIMACS SAT string.""" if not ex.simple: raise ValueError("expected ex to be simplified") litmap, nvars = ex.encode_inputs() formula = _expr2sat(ex, litmap) if 'xor' in formula: if '=' in formula: fmt = 'satex' else: fmt = 'satx' elif '=' in formula: fmt = 'sate' else: fmt = 'sat' return "p {} {}\n{}".format(fmt, nvars, formula)
Convert an expression into an equivalent DIMACS SAT string.
Below is the the instruction that describes the task: ### Input: Convert an expression into an equivalent DIMACS SAT string. ### Response: def expr2dimacssat(ex): """Convert an expression into an equivalent DIMACS SAT string.""" if not ex.simple: raise ValueError("expected ex to be simplified") litmap, nvars = ex.encode_inputs() formula = _expr2sat(ex, litmap) if 'xor' in formula: if '=' in formula: fmt = 'satex' else: fmt = 'satx' elif '=' in formula: fmt = 'sate' else: fmt = 'sat' return "p {} {}\n{}".format(fmt, nvars, formula)
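A standalone sketch of the format-prefix selection used by `expr2dimacssat` above — the dialect depends only on which operators survive in the serialized formula (the `dimacs_fmt` helper name is ours, not pyeda's):

```python
def dimacs_fmt(formula):
    """Pick the DIMACS SAT dialect prefix for a serialized formula.

    'xor' alone -> satx, '=' alone -> sate, both -> satex, neither -> sat.
    Mirrors the branch logic in expr2dimacssat.
    """
    if 'xor' in formula:
        return 'satex' if '=' in formula else 'satx'
    if '=' in formula:
        return 'sate'
    return 'sat'
```

Given the prefix, the header line is then `"p {fmt} {nvars}"` followed by the formula body.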
def write(self, data): """ write data on the OUT endpoint associated to the HID interface """ # report_size = 64 # if self.ep_out: # report_size = self.ep_out.wMaxPacketSize # # for _ in range(report_size - len(data)): # data.append(0) self.read_sem.release() if not self.ep_out: bmRequestType = 0x21 #Host to device request of type Class of Recipient Interface bmRequest = 0x09 #Set_REPORT (HID class-specific request for transferring data over EP0) wValue = 0x200 #Issuing an OUT report wIndex = self.intf_number #mBed Board interface number for HID self.dev.ctrl_transfer(bmRequestType, bmRequest, wValue, wIndex, data) return #raise ValueError('EP_OUT endpoint is NULL') self.ep_out.write(data) #logging.debug('sent: %s', data) return
write data on the OUT endpoint associated to the HID interface
Below is the the instruction that describes the task: ### Input: write data on the OUT endpoint associated to the HID interface ### Response: def write(self, data): """ write data on the OUT endpoint associated to the HID interface """ # report_size = 64 # if self.ep_out: # report_size = self.ep_out.wMaxPacketSize # # for _ in range(report_size - len(data)): # data.append(0) self.read_sem.release() if not self.ep_out: bmRequestType = 0x21 #Host to device request of type Class of Recipient Interface bmRequest = 0x09 #Set_REPORT (HID class-specific request for transferring data over EP0) wValue = 0x200 #Issuing an OUT report wIndex = self.intf_number #mBed Board interface number for HID self.dev.ctrl_transfer(bmRequestType, bmRequest, wValue, wIndex, data) return #raise ValueError('EP_OUT endpoint is NULL') self.ep_out.write(data) #logging.debug('sent: %s', data) return
def contains_point(self, x, y): """ :param x: x coordinate of a point :param y: y coordinate of a point :returns: True if the point (x, y) is on the curve, False otherwise """ if x is None and y is None: return True return (y * y - (x * x * x + self._a * x + self._b)) % self._p == 0
:param x: x coordinate of a point :param y: y coordinate of a point :returns: True if the point (x, y) is on the curve, False otherwise
Below is the the instruction that describes the task: ### Input: :param x: x coordinate of a point :param y: y coordinate of a point :returns: True if the point (x, y) is on the curve, False otherwise ### Response: def contains_point(self, x, y): """ :param x: x coordinate of a point :param y: y coordinate of a point :returns: True if the point (x, y) is on the curve, False otherwise """ if x is None and y is None: return True return (y * y - (x * x * x + self._a * x + self._b)) % self._p == 0
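The membership test above checks the short Weierstrass equation y² = x³ + a·x + b (mod p), treating (None, None) as the point at infinity. A minimal self-contained curve class using the same check (the curve parameters below are illustrative):

```python
class Curve:
    """Short Weierstrass curve y^2 = x^3 + a*x + b over GF(p)."""

    def __init__(self, p, a, b):
        self._p, self._a, self._b = p, a, b

    def contains_point(self, x, y):
        # (None, None) represents the point at infinity, on every curve.
        if x is None and y is None:
            return True
        return (y * y - (x * x * x + self._a * x + self._b)) % self._p == 0


# y^2 = x^3 + 2x + 3 over GF(97); (3, 6) satisfies 36 = 27 + 6 + 3.
curve = Curve(p=97, a=2, b=3)
```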
def _list_object_parts(self, bucket_name, object_name, upload_id): """ List all parts. :param bucket_name: Bucket name to list parts for. :param object_name: Object name to list parts for. :param upload_id: Upload id of the previously uploaded object name. """ is_valid_bucket_name(bucket_name) is_non_empty_string(object_name) is_non_empty_string(upload_id) query = { 'uploadId': upload_id, 'max-parts': '1000' } is_truncated = True part_number_marker = '' while is_truncated: if part_number_marker: query['part-number-marker'] = str(part_number_marker) response = self._url_open('GET', bucket_name=bucket_name, object_name=object_name, query=query) parts, is_truncated, part_number_marker = parse_list_parts( response.data, bucket_name=bucket_name, object_name=object_name, upload_id=upload_id ) for part in parts: yield part
List all parts. :param bucket_name: Bucket name to list parts for. :param object_name: Object name to list parts for. :param upload_id: Upload id of the previously uploaded object name.
Below is the the instruction that describes the task: ### Input: List all parts. :param bucket_name: Bucket name to list parts for. :param object_name: Object name to list parts for. :param upload_id: Upload id of the previously uploaded object name. ### Response: def _list_object_parts(self, bucket_name, object_name, upload_id): """ List all parts. :param bucket_name: Bucket name to list parts for. :param object_name: Object name to list parts for. :param upload_id: Upload id of the previously uploaded object name. """ is_valid_bucket_name(bucket_name) is_non_empty_string(object_name) is_non_empty_string(upload_id) query = { 'uploadId': upload_id, 'max-parts': '1000' } is_truncated = True part_number_marker = '' while is_truncated: if part_number_marker: query['part-number-marker'] = str(part_number_marker) response = self._url_open('GET', bucket_name=bucket_name, object_name=object_name, query=query) parts, is_truncated, part_number_marker = parse_list_parts( response.data, bucket_name=bucket_name, object_name=object_name, upload_id=upload_id ) for part in parts: yield part
def append(self, element): """Append new element to the SVG figure""" try: self.root.append(element.root) except AttributeError: self.root.append(GroupElement(element).root)
Append new element to the SVG figure
Below is the the instruction that describes the task: ### Input: Append new element to the SVG figure ### Response: def append(self, element): """Append new element to the SVG figure""" try: self.root.append(element.root) except AttributeError: self.root.append(GroupElement(element).root)
def get(self, section, key, default=NO_DEFAULT_VALUE): """ Get config value with data type transformation (from str) :param str section: Section to get config for. :param str key: Key to get config for. :param default: Default value for key if key was not found. :return: Value for the section/key or `default` if set and key does not exist. If no default is set, then return None. """ self._read_sources() if (section, key) in self._dot_keys: section, key = self._dot_keys[(section, key)] try: value = self._parser.get(section, key) except Exception: if default == NO_DEFAULT_VALUE: return None else: return default return self._typed_value(value)
Get config value with data type transformation (from str) :param str section: Section to get config for. :param str key: Key to get config for. :param default: Default value for key if key was not found. :return: Value for the section/key or `default` if set and key does not exist. If no default is set, then return None.
Below is the instruction that describes the task: ### Input: Get config value with data type transformation (from str) :param str section: Section to get config for. :param str key: Key to get config for. :param default: Default value for key if key was not found. :return: Value for the section/key or `default` if set and key does not exist. If no default is set, then return None. ### Response: def get(self, section, key, default=NO_DEFAULT_VALUE): """ Get config value with data type transformation (from str) :param str section: Section to get config for. :param str key: Key to get config for. :param default: Default value for key if key was not found. :return: Value for the section/key or `default` if set and key does not exist. If no default is set, then return None. """ self._read_sources() if (section, key) in self._dot_keys: section, key = self._dot_keys[(section, key)] try: value = self._parser.get(section, key) except Exception: if default == NO_DEFAULT_VALUE: return None else: return default return self._typed_value(value)
def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(type): def __new__(cls, name, this_bases, d): return meta(name, bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {})
Create a base class with a metaclass.
Below is the the instruction that describes the task: ### Input: Create a base class with a metaclass. ### Response: def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(type): def __new__(cls, name, this_bases, d): return meta(name, bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {})
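The two-step trick above — a throwaway metaclass that swaps in the real one at the first class statement — can be exercised end to end. `with_metaclass` is reproduced verbatim from the record; `Registry` and `Plugin` are illustrative names, not part of the original:

```python
def with_metaclass(meta, *bases):
    """Create a base class with a metaclass (as defined above)."""
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)

        @classmethod
        def __prepare__(cls, name, this_bases):
            return meta.__prepare__(name, bases)

    return type.__new__(metaclass, 'temporary_class', (), {})


class Registry(type):
    # Illustrative metaclass: records every class it creates.
    created = []

    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        Registry.created.append(name)
        return cls


# The class statement sees only the dummy base; the real metaclass
# (Registry) is substituted when Plugin itself is created.
class Plugin(with_metaclass(Registry)):
    pass
```

Note that the dummy `temporary_class` never reaches `Registry.__new__` (it is built with a direct `type.__new__` call), so only `Plugin` is registered.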
def get_defined_srms(srm_file): """ Returns list of SRMS defined in the SRM database """ srms = read_table(srm_file) return np.asanyarray(srms.index.unique())
Returns list of SRMS defined in the SRM database
Below is the the instruction that describes the task: ### Input: Returns list of SRMS defined in the SRM database ### Response: def get_defined_srms(srm_file): """ Returns list of SRMS defined in the SRM database """ srms = read_table(srm_file) return np.asanyarray(srms.index.unique())
def hash_key(self): """ Returns the canonical hash of a mail. """ if self.conf.message_id: message_id = self.message.get('Message-Id') if message_id: return message_id.strip() logger.error( "No Message-ID in {}: {}".format(self.path, self.header_text)) raise MissingMessageID return hashlib.sha224(self.canonical_headers).hexdigest()
Returns the canonical hash of a mail.
Below is the the instruction that describes the task: ### Input: Returns the canonical hash of a mail. ### Response: def hash_key(self): """ Returns the canonical hash of a mail. """ if self.conf.message_id: message_id = self.message.get('Message-Id') if message_id: return message_id.strip() logger.error( "No Message-ID in {}: {}".format(self.path, self.header_text)) raise MissingMessageID return hashlib.sha224(self.canonical_headers).hexdigest()
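The fallback branch above keys a mail by the SHA-224 hex digest of its canonical headers. A minimal illustration of just that hashing step (`fallback_hash_key` and the header bytes are made up for demonstration):

```python
import hashlib


def fallback_hash_key(canonical_headers):
    """SHA-224 digest used when no usable Message-ID is present."""
    return hashlib.sha224(canonical_headers).hexdigest()


# Deterministic: the same canonical headers always yield the same key.
key = fallback_hash_key(b"from: a@example.com\nsubject: hi\n")
```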
def create_context_plot(ra, dec, name="Your object"): """Creates a K2FootprintPlot showing a given position in context with respect to the campaigns.""" plot = K2FootprintPlot() plot.plot_galactic() plot.plot_ecliptic() for c in range(0, 20): plot.plot_campaign_outline(c, facecolor="#666666") # for c in [11, 12, 13, 14, 15, 16]: # plot.plot_campaign_outline(c, facecolor="green") plot.ax.scatter(ra, dec, marker='x', s=250, lw=3, color="red", zorder=500) plot.ax.text(ra, dec - 2, name, ha="center", va="top", color="red", fontsize=20, fontweight='bold', zorder=501) return plot
Creates a K2FootprintPlot showing a given position in context with respect to the campaigns.
Below is the the instruction that describes the task: ### Input: Creates a K2FootprintPlot showing a given position in context with respect to the campaigns. ### Response: def create_context_plot(ra, dec, name="Your object"): """Creates a K2FootprintPlot showing a given position in context with respect to the campaigns.""" plot = K2FootprintPlot() plot.plot_galactic() plot.plot_ecliptic() for c in range(0, 20): plot.plot_campaign_outline(c, facecolor="#666666") # for c in [11, 12, 13, 14, 15, 16]: # plot.plot_campaign_outline(c, facecolor="green") plot.ax.scatter(ra, dec, marker='x', s=250, lw=3, color="red", zorder=500) plot.ax.text(ra, dec - 2, name, ha="center", va="top", color="red", fontsize=20, fontweight='bold', zorder=501) return plot
def blend(self, blend_recipe, join_base, join_blend): """Blend a recipe into the base recipe. This performs an inner join of the blend_recipe to the base recipe's SQL. """ assert isinstance(blend_recipe, Recipe) self.blend_recipes.append(blend_recipe) self.blend_types.append('inner') self.blend_criteria.append((join_base, join_blend)) self.dirty = True return self.recipe
Blend a recipe into the base recipe. This performs an inner join of the blend_recipe to the base recipe's SQL.
Below is the the instruction that describes the task: ### Input: Blend a recipe into the base recipe. This performs an inner join of the blend_recipe to the base recipe's SQL. ### Response: def blend(self, blend_recipe, join_base, join_blend): """Blend a recipe into the base recipe. This performs an inner join of the blend_recipe to the base recipe's SQL. """ assert isinstance(blend_recipe, Recipe) self.blend_recipes.append(blend_recipe) self.blend_types.append('inner') self.blend_criteria.append((join_base, join_blend)) self.dirty = True return self.recipe
def _get_storage_resource(self): """Gets the SmartStorage resource if exists. :raises: IloCommandNotSupportedError if the resource SmartStorage doesn't exist. :returns the tuple of SmartStorage URI, Headers and settings. """ system = self._get_host_details() if ('links' in system['Oem']['Hp'] and 'SmartStorage' in system['Oem']['Hp']['links']): # Get the SmartStorage URI and Settings storage_uri = system['Oem']['Hp']['links']['SmartStorage']['href'] status, headers, storage_settings = self._rest_get(storage_uri) if status >= 300: msg = self._get_extended_error(storage_settings) raise exception.IloError(msg) return headers, storage_uri, storage_settings else: msg = ('"links/SmartStorage" section in ComputerSystem/Oem/Hp' ' does not exist') raise exception.IloCommandNotSupportedError(msg)
Gets the SmartStorage resource if exists. :raises: IloCommandNotSupportedError if the resource SmartStorage doesn't exist. :returns the tuple of SmartStorage URI, Headers and settings.
Below is the the instruction that describes the task: ### Input: Gets the SmartStorage resource if exists. :raises: IloCommandNotSupportedError if the resource SmartStorage doesn't exist. :returns the tuple of SmartStorage URI, Headers and settings. ### Response: def _get_storage_resource(self): """Gets the SmartStorage resource if exists. :raises: IloCommandNotSupportedError if the resource SmartStorage doesn't exist. :returns the tuple of SmartStorage URI, Headers and settings. """ system = self._get_host_details() if ('links' in system['Oem']['Hp'] and 'SmartStorage' in system['Oem']['Hp']['links']): # Get the SmartStorage URI and Settings storage_uri = system['Oem']['Hp']['links']['SmartStorage']['href'] status, headers, storage_settings = self._rest_get(storage_uri) if status >= 300: msg = self._get_extended_error(storage_settings) raise exception.IloError(msg) return headers, storage_uri, storage_settings else: msg = ('"links/SmartStorage" section in ComputerSystem/Oem/Hp' ' does not exist') raise exception.IloCommandNotSupportedError(msg)
def get_frame(self, outdata, frames, timedata, status): """ Callback function for the audio stream. Don't use directly. """ if not self.keep_listening: raise sd.CallbackStop try: chunk = self.queue.get_nowait() # Check if the chunk contains the expected number of frames # The callback function raises a ValueError otherwise. if chunk.shape[0] == frames: outdata[:] = chunk else: outdata.fill(0) except Empty: outdata.fill(0)
Callback function for the audio stream. Don't use directly.
Below is the the instruction that describes the task: ### Input: Callback function for the audio stream. Don't use directly. ### Response: def get_frame(self, outdata, frames, timedata, status): """ Callback function for the audio stream. Don't use directly. """ if not self.keep_listening: raise sd.CallbackStop try: chunk = self.queue.get_nowait() # Check if the chunk contains the expected number of frames # The callback function raises a ValueError otherwise. if chunk.shape[0] == frames: outdata[:] = chunk else: outdata.fill(0) except Empty: outdata.fill(0)
def _convert_entity_to_json(source): ''' Converts an entity object to json to send. The entity format is: { "Address":"Mountain View", "Age":23, "AmountDue":200.23, "CustomerCode@odata.type":"Edm.Guid", "CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833", "CustomerSince@odata.type":"Edm.DateTime", "CustomerSince":"2008-07-10T00:00:00", "IsActive":true, "NumberOfOrders@odata.type":"Edm.Int64", "NumberOfOrders":"255", "PartitionKey":"mypartitionkey", "RowKey":"myrowkey" } ''' properties = {} # set properties type for types we know if value has no type info. # if value has type info, then set the type to value.type for name, value in source.items(): mtype = '' if isinstance(value, EntityProperty): conv = _EDM_TO_ENTITY_CONVERSIONS.get(value.type) if conv is None: raise TypeError( _ERROR_TYPE_NOT_SUPPORTED.format(value.type)) mtype, value = conv(value.value) else: conv = _PYTHON_TO_ENTITY_CONVERSIONS.get(type(value)) if conv is None and sys.version_info >= (3,) and value is None: conv = _to_entity_none if conv is None: raise TypeError( _ERROR_CANNOT_SERIALIZE_VALUE_TO_ENTITY.format( type(value).__name__)) mtype, value = conv(value) # form the property node properties[name] = value if mtype: properties[name + '@odata.type'] = mtype # generate the entity_body return dumps(properties)
Converts an entity object to json to send. The entity format is: { "Address":"Mountain View", "Age":23, "AmountDue":200.23, "CustomerCode@odata.type":"Edm.Guid", "CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833", "CustomerSince@odata.type":"Edm.DateTime", "CustomerSince":"2008-07-10T00:00:00", "IsActive":true, "NumberOfOrders@odata.type":"Edm.Int64", "NumberOfOrders":"255", "PartitionKey":"mypartitionkey", "RowKey":"myrowkey" }
Below is the the instruction that describes the task: ### Input: Converts an entity object to json to send. The entity format is: { "Address":"Mountain View", "Age":23, "AmountDue":200.23, "CustomerCode@odata.type":"Edm.Guid", "CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833", "CustomerSince@odata.type":"Edm.DateTime", "CustomerSince":"2008-07-10T00:00:00", "IsActive":true, "NumberOfOrders@odata.type":"Edm.Int64", "NumberOfOrders":"255", "PartitionKey":"mypartitionkey", "RowKey":"myrowkey" } ### Response: def _convert_entity_to_json(source): ''' Converts an entity object to json to send. The entity format is: { "Address":"Mountain View", "Age":23, "AmountDue":200.23, "CustomerCode@odata.type":"Edm.Guid", "CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833", "CustomerSince@odata.type":"Edm.DateTime", "CustomerSince":"2008-07-10T00:00:00", "IsActive":true, "NumberOfOrders@odata.type":"Edm.Int64", "NumberOfOrders":"255", "PartitionKey":"mypartitionkey", "RowKey":"myrowkey" } ''' properties = {} # set properties type for types we know if value has no type info. # if value has type info, then set the type to value.type for name, value in source.items(): mtype = '' if isinstance(value, EntityProperty): conv = _EDM_TO_ENTITY_CONVERSIONS.get(value.type) if conv is None: raise TypeError( _ERROR_TYPE_NOT_SUPPORTED.format(value.type)) mtype, value = conv(value.value) else: conv = _PYTHON_TO_ENTITY_CONVERSIONS.get(type(value)) if conv is None and sys.version_info >= (3,) and value is None: conv = _to_entity_none if conv is None: raise TypeError( _ERROR_CANNOT_SERIALIZE_VALUE_TO_ENTITY.format( type(value).__name__)) mtype, value = conv(value) # form the property node properties[name] = value if mtype: properties[name + '@odata.type'] = mtype # generate the entity_body return dumps(properties)
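The converter above writes a companion `<name>@odata.type` property whenever a value needs an explicit EDM type annotation. A greatly simplified sketch of that tagging — only a hypothetical subset of types is handled here, and the real per-type conversion rules are more involved:

```python
import json

# str and bool need no annotation; int is serialized as a string tagged
# Edm.Int64, matching the "NumberOfOrders" example in the docstring.
_TYPE_TAGS = {int: 'Edm.Int64'}


def entity_to_json(entity):
    props = {}
    for name, value in entity.items():
        tag = _TYPE_TAGS.get(type(value))  # exact type: bool is not tagged
        props[name] = str(value) if tag == 'Edm.Int64' else value
        if tag:
            props[name + '@odata.type'] = tag
    return json.dumps(props, sort_keys=True)


body = entity_to_json({'PartitionKey': 'pk', 'IsActive': True,
                       'NumberOfOrders': 255})
```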
def add_filter_by_extension(self, extensions, filter_type=DefaultFilterType): """ Add a files filter by extensions to this iterator. :param extensions: single extension or list of extensions to filter by. for example: ["py", "js", "cpp", ...] """ self.add_filter(FilterExtension(extensions), filter_type) return self
Add a files filter by extensions to this iterator. :param extensions: single extension or list of extensions to filter by. for example: ["py", "js", "cpp", ...]
Below is the the instruction that describes the task: ### Input: Add a files filter by extensions to this iterator. :param extensions: single extension or list of extensions to filter by. for example: ["py", "js", "cpp", ...] ### Response: def add_filter_by_extension(self, extensions, filter_type=DefaultFilterType): """ Add a files filter by extensions to this iterator. :param extensions: single extension or list of extensions to filter by. for example: ["py", "js", "cpp", ...] """ self.add_filter(FilterExtension(extensions), filter_type) return self
def is_magic(s): """Check whether given string is a __magic__ Python identifier. :return: Whether ``s`` is a __magic__ Python identifier """ if not is_identifier(s): return False return len(s) > 4 and s.startswith('__') and s.endswith('__')
Check whether given string is a __magic__ Python identifier. :return: Whether ``s`` is a __magic__ Python identifier
Below is the the instruction that describes the task: ### Input: Check whether given string is a __magic__ Python identifier. :return: Whether ``s`` is a __magic__ Python identifier ### Response: def is_magic(s): """Check whether given string is a __magic__ Python identifier. :return: Whether ``s`` is a __magic__ Python identifier """ if not is_identifier(s): return False return len(s) > 4 and s.startswith('__') and s.endswith('__')
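`is_magic` relies on an `is_identifier` helper that is not shown in this record; a self-contained sketch with `str.isidentifier` standing in for it:

```python
def is_identifier(s):
    # Stand-in for the helper assumed above; its real implementation
    # is not part of this record.
    return isinstance(s, str) and s.isidentifier()


def is_magic(s):
    """Check whether s is a __magic__ Python identifier."""
    if not is_identifier(s):
        return False
    # Must be longer than the four underscores themselves.
    return len(s) > 4 and s.startswith('__') and s.endswith('__')
```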
def max_variance_genes(data, nbins=5, frac=0.2): """ This function identifies the genes that have the max variance across a number of bins sorted by mean. Args: data (array): genes x cells nbins (int): number of bins to sort genes by mean expression level. Default: 5. frac (float): fraction of genes to return per bin - between 0 and 1. Default: 0.2 Returns: list of gene indices (list of ints) """ # TODO: profile, make more efficient for large matrices # 8000 cells: 0.325 seconds # top time: sparse.csc_tocsr, csc_matvec, astype, copy, mul_scalar # 73233 cells: 5.347 seconds, 4.762 s in sparse_var # csc_tocsr: 1.736 s # copy: 1.028 s # astype: 0.999 s # there is almost certainly something superlinear in this method # maybe it's to_csr? indices = [] if sparse.issparse(data): means, var = sparse_mean_var(data) else: means = data.mean(1) var = data.var(1) mean_indices = means.argsort() n_elements = int(data.shape[0]/nbins) frac_elements = int(n_elements*frac) for i in range(nbins): bin_i = mean_indices[i*n_elements : (i+1)*n_elements] if i==nbins-1: bin_i = mean_indices[i*n_elements :] var_i = var[bin_i] var_sorted = var_i.argsort() top_var_indices = var_sorted[len(bin_i) - frac_elements:] ind = bin_i[top_var_indices] # filter out genes with zero variance ind = [index for index in ind if var[index]>0] indices.extend(ind) return indices
This function identifies the genes that have the max variance across a number of bins sorted by mean. Args: data (array): genes x cells nbins (int): number of bins to sort genes by mean expression level. Default: 5. frac (float): fraction of genes to return per bin - between 0 and 1. Default: 0.2 Returns: list of gene indices (list of ints)
Below is the instruction that describes the task: ### Input: This function identifies the genes that have the max variance across a number of bins sorted by mean. Args: data (array): genes x cells nbins (int): number of bins to sort genes by mean expression level. Default: 5. frac (float): fraction of genes to return per bin - between 0 and 1. Default: 0.2 Returns: list of gene indices (list of ints) ### Response: def max_variance_genes(data, nbins=5, frac=0.2): """ This function identifies the genes that have the max variance across a number of bins sorted by mean. Args: data (array): genes x cells nbins (int): number of bins to sort genes by mean expression level. Default: 5. frac (float): fraction of genes to return per bin - between 0 and 1. Default: 0.2 Returns: list of gene indices (list of ints) """ # TODO: profile, make more efficient for large matrices # 8000 cells: 0.325 seconds # top time: sparse.csc_tocsr, csc_matvec, astype, copy, mul_scalar # 73233 cells: 5.347 seconds, 4.762 s in sparse_var # csc_tocsr: 1.736 s # copy: 1.028 s # astype: 0.999 s # there is almost certainly something superlinear in this method # maybe it's to_csr? indices = [] if sparse.issparse(data): means, var = sparse_mean_var(data) else: means = data.mean(1) var = data.var(1) mean_indices = means.argsort() n_elements = int(data.shape[0]/nbins) frac_elements = int(n_elements*frac) for i in range(nbins): bin_i = mean_indices[i*n_elements : (i+1)*n_elements] if i==nbins-1: bin_i = mean_indices[i*n_elements :] var_i = var[bin_i] var_sorted = var_i.argsort() top_var_indices = var_sorted[len(bin_i) - frac_elements:] ind = bin_i[top_var_indices] # filter out genes with zero variance ind = [index for index in ind if var[index]>0] indices.extend(ind) return indices
def enumeration(*values, **kwargs): ''' Create an |Enumeration| object from a sequence of values. Call ``enumeration`` with a sequence of (unique) strings to create an Enumeration object: .. code-block:: python #: Specify the horizontal alignment for rendering text TextAlign = enumeration("left", "right", "center") Args: values (str) : string enumeration values, passed as positional arguments The order of arguments is the order of the enumeration, and the first element will be considered the default value when used to create |Enum| properties. Keyword Args: case_sensitive (bool, optional) : Whether validation should consider case or not (default: True) quote (bool, optional): Whther values should be quoted in the string representations (default: False) Raises: ValueError if values empty, if any value is not a string or not unique Returns: Enumeration ''' if not (values and all(isinstance(value, string_types) and value for value in values)): raise ValueError("expected a non-empty sequence of strings, got %s" % values) if len(values) != len(set(values)): raise ValueError("enumeration items must be unique, got %s" % values) attrs = {value: value for value in values} attrs.update({ "_values": list(values), "_default": values[0], "_case_sensitive": kwargs.get("case_sensitive", True), "_quote": kwargs.get("quote", False), }) return type(str("Enumeration"), (Enumeration,), attrs)()
Create an |Enumeration| object from a sequence of values. Call ``enumeration`` with a sequence of (unique) strings to create an Enumeration object: .. code-block:: python #: Specify the horizontal alignment for rendering text TextAlign = enumeration("left", "right", "center") Args: values (str) : string enumeration values, passed as positional arguments The order of arguments is the order of the enumeration, and the first element will be considered the default value when used to create |Enum| properties. Keyword Args: case_sensitive (bool, optional) : Whether validation should consider case or not (default: True) quote (bool, optional): Whether values should be quoted in the string representations (default: False) Raises: ValueError if values empty, if any value is not a string or not unique Returns: Enumeration
Below is the instruction that describes the task: ### Input: Create an |Enumeration| object from a sequence of values. Call ``enumeration`` with a sequence of (unique) strings to create an Enumeration object: .. code-block:: python #: Specify the horizontal alignment for rendering text TextAlign = enumeration("left", "right", "center") Args: values (str) : string enumeration values, passed as positional arguments The order of arguments is the order of the enumeration, and the first element will be considered the default value when used to create |Enum| properties. Keyword Args: case_sensitive (bool, optional) : Whether validation should consider case or not (default: True) quote (bool, optional): Whether values should be quoted in the string representations (default: False) Raises: ValueError if values empty, if any value is not a string or not unique Returns: Enumeration ### Response: def enumeration(*values, **kwargs): ''' Create an |Enumeration| object from a sequence of values. Call ``enumeration`` with a sequence of (unique) strings to create an Enumeration object: .. code-block:: python #: Specify the horizontal alignment for rendering text TextAlign = enumeration("left", "right", "center") Args: values (str) : string enumeration values, passed as positional arguments The order of arguments is the order of the enumeration, and the first element will be considered the default value when used to create |Enum| properties. Keyword Args: case_sensitive (bool, optional) : Whether validation should consider case or not (default: True) quote (bool, optional): Whether values should be quoted in the string representations (default: False) Raises: ValueError if values empty, if any value is not a string or not unique Returns: Enumeration ''' if not (values and all(isinstance(value, string_types) and value for value in values)): raise ValueError("expected a non-empty sequence of strings, got %s" % values) if len(values) != len(set(values)): raise ValueError("enumeration items must be unique, got %s" % values) attrs = {value: value for value in values} attrs.update({ "_values": list(values), "_default": values[0], "_case_sensitive": kwargs.get("case_sensitive", True), "_quote": kwargs.get("quote", False), }) return type(str("Enumeration"), (Enumeration,), attrs)()
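A condensed, runnable sketch of the `enumeration` factory described above — the real bokeh `Enumeration` base class is replaced by a minimal stand-in that only supports membership tests, and `quote` handling is omitted:

```python
class Enumeration:
    # Minimal stand-in for bokeh's Enumeration base class.
    def __contains__(self, value):
        if self._case_sensitive:
            return value in self._values
        return value.lower() in (v.lower() for v in self._values)


def enumeration(*values, **kwargs):
    if not (values and all(isinstance(v, str) and v for v in values)):
        raise ValueError("expected a non-empty sequence of strings, got %s" % (values,))
    if len(values) != len(set(values)):
        raise ValueError("enumeration items must be unique, got %s" % (values,))
    attrs = {value: value for value in values}
    attrs.update({
        "_values": list(values),
        "_default": values[0],       # first element is the default
        "_case_sensitive": kwargs.get("case_sensitive", True),
    })
    return type("Enumeration", (Enumeration,), attrs)()


TextAlign = enumeration("left", "right", "center")
Anchor = enumeration("top", "bottom", case_sensitive=False)
```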
def nb_to_q_nums(nb) -> list: """ Gets question numbers from each cell in the notebook """ def q_num(cell): assert cell.metadata.tags return first(filter(lambda t: 'q' in t, cell.metadata.tags)) return [q_num(cell) for cell in nb['cells']]
Gets question numbers from each cell in the notebook
Below is the the instruction that describes the task: ### Input: Gets question numbers from each cell in the notebook ### Response: def nb_to_q_nums(nb) -> list: """ Gets question numbers from each cell in the notebook """ def q_num(cell): assert cell.metadata.tags return first(filter(lambda t: 'q' in t, cell.metadata.tags)) return [q_num(cell) for cell in nb['cells']]
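A self-contained sketch of `nb_to_q_nums`: notebook cells become plain dicts rather than notebook nodes, and the external `first` helper is replaced by `next()` (which raises instead of returning None when no tag matches):

```python
def q_num(cell):
    tags = cell['metadata']['tags']
    assert tags
    # First tag containing the substring 'q'.
    return next(t for t in tags if 'q' in t)


def nb_to_q_nums(nb):
    return [q_num(cell) for cell in nb['cells']]


nb = {'cells': [
    {'metadata': {'tags': ['q1', 'graded']}},
    {'metadata': {'tags': ['setup', 'q2']}},
]}
```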
def snip_string(string, max_len=20, snip_string='...', snip_point=1.0): """ Snips a string so that it is no longer than max_len, replacing deleted characters with the snip_string. The snip is done at snip_point, which is a fraction between 0 and 1, indicating relatively where along the string to snip. snip_point of 0.5 would be the middle. >>> snip_string('this is long', 8) 'this ...' >>> snip_string('this is long', 8, snip_point=0.5) 'th...ong' >>> snip_string('this is long', 12) 'this is long' >>> snip_string('this is long', 8, '~') 'this is~' >>> snip_string('this is long', 8, '~', 0.5) 'thi~long' """ if len(string) <= max_len: new_string = string else: visible_len = (max_len - len(snip_string)) start_len = int(visible_len*snip_point) end_len = visible_len-start_len new_string = string[0:start_len]+ snip_string if end_len > 0: new_string += string[-end_len:] return new_string
Snips a string so that it is no longer than max_len, replacing deleted characters with the snip_string. The snip is done at snip_point, which is a fraction between 0 and 1, indicating relatively where along the string to snip. snip_point of 0.5 would be the middle. >>> snip_string('this is long', 8) 'this ...' >>> snip_string('this is long', 8, snip_point=0.5) 'th...ong' >>> snip_string('this is long', 12) 'this is long' >>> snip_string('this is long', 8, '~') 'this is~' >>> snip_string('this is long', 8, '~', 0.5) 'thi~long'
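The snipping arithmetic above can be checked with a standalone re-derivation (a hypothetical sketch with shortened names, not the dataset's own module):

```python
def snip(s, max_len=20, marker='...', point=1.0):
    """Shorten s to max_len, replacing cut characters with marker.

    point (0..1) picks where along the string the cut happens.
    """
    if len(s) <= max_len:
        return s
    visible = max_len - len(marker)   # characters of s that survive
    start = int(visible * point)      # kept from the front
    end = visible - start             # kept from the back
    out = s[:start] + marker
    if end > 0:
        out += s[-end:]
    return out
```

With `point=1.0` the cut falls at the end (`'this ...'`); with `point=0.5` it falls in the middle (`'th...ong'`), matching the doctests.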
def get_sequence_value(node): """Convert an element with DataType Sequence to a DataFrame. Note this may be a naive implementation as I assume that bulk data is always a table """ assert node.datatype() == 15 data = defaultdict(list) cols = [] for i in range(node.numValues()): row = node.getValue(i) if i == 0: # Get the ordered cols and assume they are constant cols = [str(row.getElement(_).name()) for _ in range(row.numElements())] for cidx in range(row.numElements()): col = row.getElement(cidx) data[str(col.name())].append(XmlHelper.as_value(col)) return pd.DataFrame(data, columns=cols)
Convert an element with DataType Sequence to a DataFrame. Note this may be a naive implementation as I assume that bulk data is always a table
def register(): """Plugin registration.""" if not plim: logger.warning('`slim` failed to load dependency `plim`. ' '`slim` plugin not loaded.') return if not mako: logger.warning('`slim` failed to load dependency `mako`. ' '`slim` plugin not loaded.') return if not bs: logger.warning('`slim` failed to load dependency `BeautifulSoup4`. ' '`slim` plugin not loaded.') return if not minify: logger.warning('`slim` failed to load dependency `htmlmin`. ' '`slim` plugin not loaded.') return signals.get_writer.connect(get_writer)
Plugin registration.
def client(self): """Get the DB client associated with the task (open a new one if necessary) Returns: ozelot.client.Client: DB client """ if self._client is None: self._client = client.get_client() return self._client
Get the DB client associated with the task (open a new one if necessary) Returns: ozelot.client.Client: DB client
def add_scale(self, factor, encoding=None, chunk_size=None, info=None): """ Generate a new downsample scale for the info file and return an updated dictionary. You'll still need to call self.commit_info() to make it permanent. Required: factor: int (x,y,z), e.g. (2,2,1) would represent a reduction of 2x in x and y Optional: encoding: force new layer to e.g. jpeg or compressed_segmentation chunk_size: force new layer to new chunk size Returns: info dict """ if not info: info = self.info # e.g. {"encoding": "raw", "chunk_sizes": [[64, 64, 64]], "key": "4_4_40", # "resolution": [4, 4, 40], "voxel_offset": [0, 0, 0], # "size": [2048, 2048, 256]} fullres = info['scales'][0] # If the voxel_offset is not divisible by the ratio, # zooming out will slightly shift the data. # Imagine the offset is 10 # the mip 1 will have an offset of 5 # the mip 2 will have an offset of 2 instead of 2.5 # meaning that it will be half a pixel to the left if not chunk_size: chunk_size = lib.find_closest_divisor(fullres['chunk_sizes'][0], closest_to=[64,64,64]) if encoding is None: encoding = fullres['encoding'] newscale = { u"encoding": encoding, u"chunk_sizes": [ list(map(int, chunk_size)) ], u"resolution": list(map(int, Vec(*fullres['resolution']) * factor )), u"voxel_offset": downscale(fullres['voxel_offset'], factor, np.floor), u"size": downscale(fullres['size'], factor, np.ceil), } if newscale['encoding'] == 'compressed_segmentation': if 'compressed_segmentation_block_size' in fullres: newscale['compressed_segmentation_block_size'] = fullres['compressed_segmentation_block_size'] else: newscale['compressed_segmentation_block_size'] = (8,8,8) newscale[u'key'] = str("_".join([ str(res) for res in newscale['resolution']])) new_res = np.array(newscale['resolution'], dtype=int) preexisting = False for index, scale in enumerate(info['scales']): res = np.array(scale['resolution'], dtype=int) if np.array_equal(new_res, res): preexisting = True info['scales'][index] = newscale break if not preexisting: info['scales'].append(newscale) return newscale
Generate a new downsample scale for the info file and return an updated dictionary. You'll still need to call self.commit_info() to make it permanent. Required: factor: int (x,y,z), e.g. (2,2,1) would represent a reduction of 2x in x and y Optional: encoding: force new layer to e.g. jpeg or compressed_segmentation chunk_size: force new layer to new chunk size Returns: info dict
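The floor/ceil downscaling described in the comments (offset 10 becomes 5 at mip 1, then 2 instead of 2.5 at mip 2) can be sketched as follows; `downscale` is re-derived here from its uses above, since its actual definition is outside this excerpt:

```python
import math

def downscale(vec, factor, fn):
    # element-wise divide by the factor, then round with fn (floor or ceil)
    return [int(fn(v / f)) for v, f in zip(vec, factor)]

# voxel_offset uses floor; size uses ceil (as in add_scale above)
offset = [10, 10, 0]
for mip in range(2):
    offset = downscale(offset, (2, 2, 1), math.floor)
# offset is now [2, 2, 0] -- half a pixel to the left of the true 2.5
```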
def ls(types, as_json): # pylint: disable=invalid-name """List all available sensors""" sensors = W1ThermSensor.get_available_sensors(types) if as_json: data = [ {"id": i, "hwid": s.id, "type": s.type_name} for i, s in enumerate(sensors, 1) ] click.echo(json.dumps(data, indent=4, sort_keys=True)) else: click.echo( "Found {0} sensors:".format(click.style(str(len(sensors)), bold=True)) ) for i, sensor in enumerate(sensors, 1): click.echo( " {0}. HWID: {1} Type: {2}".format( click.style(str(i), bold=True), click.style(sensor.id, bold=True), click.style(sensor.type_name, bold=True), ) )
List all available sensors
def set_direct(self, address_value_dict): """Called in the context manager's set method to either overwrite the value for an address, or create a new future and immediately set a value in the future. Args: address_value_dict (dict of str:bytes): The unique full addresses with bytes to set at that address. Raises: AuthorizationException """ with self._lock: for address, value in address_value_dict.items(): self._validate_write(address) if address in self._state: self._state[address].set_result(result=value) else: fut = _ContextFuture(address=address) self._state[address] = fut fut.set_result(result=value)
Called in the context manager's set method to either overwrite the value for an address, or create a new future and immediately set a value in the future. Args: address_value_dict (dict of str:bytes): The unique full addresses with bytes to set at that address. Raises: AuthorizationException
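The pattern in `set_direct` — a lock-guarded dict of per-address futures, creating a future on first write and overwriting its result otherwise — in a minimal standalone sketch (hypothetical `_Future`/`Context` names; the real `_ContextFuture` and `_validate_write` live outside this excerpt):

```python
import threading

class _Future:
    """Tiny stand-in for _ContextFuture: just stores a result."""
    def __init__(self, address):
        self.address = address
        self.result = None

    def set_result(self, result):
        self.result = result

class Context:
    def __init__(self):
        self._lock = threading.Lock()
        self._state = {}

    def set_direct(self, address_value_dict):
        # overwrite an existing future's value, or create-and-set a new one
        with self._lock:
            for address, value in address_value_dict.items():
                if address not in self._state:
                    self._state[address] = _Future(address)
                self._state[address].set_result(result=value)
```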
def destroy(ctx, app, expire_hit, sandbox): """Tear down an experiment server.""" if expire_hit: ctx.invoke(expire, app=app, sandbox=sandbox, exit=False) HerokuApp(app).destroy()
Tear down an experiment server.
def query_cumulative_flags(ifo, segment_names, start_time, end_time, source='any', server="segments.ligo.org", veto_definer=None, bounds=None, padding=None, override_ifos=None, cache=False): """Return the times where any flag is active Parameters ---------- ifo: string or dict The interferometer to query (H1, L1). If a dict, an element for each flag name must be provided. segment_name: list of strings The status flag to query from LOSC. start_time: int The starting gps time to begin querying from LOSC end_time: int The end gps time of the query source: str, Optional Choice between "GWOSC" or "dqsegdb". If dqsegdb, the server option may also be given. The default is to try GWOSC first then try dqsegdb. server: str, Optional The server path. Only used with dqsegdb atm. veto_definer: str, Optional The path to a veto definer to define groups of flags which themselves define a set of segments. bounds: dict, Optional Dict containing start end tuples keyed by the flag name which indicated places which should have a distinct time period to be active. padding: dict, Optional Dict keyed by the flag name. Each element is a tuple (start_pad, end_pad) which indicates how to change the segment boundaries. override_ifos: dict, Optional A dict keyed by flag_name to override the ifo option on a per flag basis. Returns --------- segments: glue.segments.segmentlist List of segments """ total_segs = segmentlist([]) for flag_name in segment_names: ifo_name = ifo if override_ifos is not None and flag_name in override_ifos: ifo_name = override_ifos[flag_name] segs = query_flag(ifo_name, flag_name, start_time, end_time, source=source, server=server, veto_definer=veto_definer, cache=cache) if padding and flag_name in padding: s, e = padding[flag_name] segs2 = segmentlist([]) for seg in segs: segs2.append(segment(seg[0] + s, seg[1] + e)) segs = segs2 if bounds is not None and flag_name in bounds: s, e = bounds[flag_name] valid = segmentlist([segment([s, e])]) segs = (segs & valid).coalesce() total_segs = (total_segs + segs).coalesce() return total_segs
Return the times where any flag is active Parameters ---------- ifo: string or dict The interferometer to query (H1, L1). If a dict, an element for each flag name must be provided. segment_name: list of strings The status flag to query from LOSC. start_time: int The starting gps time to begin querying from LOSC end_time: int The end gps time of the query source: str, Optional Choice between "GWOSC" or "dqsegdb". If dqsegdb, the server option may also be given. The default is to try GWOSC first then try dqsegdb. server: str, Optional The server path. Only used with dqsegdb atm. veto_definer: str, Optional The path to a veto definer to define groups of flags which themselves define a set of segments. bounds: dict, Optional Dict containing start end tuples keyed by the flag name which indicated places which should have a distinct time period to be active. padding: dict, Optional Dict keyed by the flag name. Each element is a tuple (start_pad, end_pad) which indicates how to change the segment boundaries. override_ifos: dict, Optional A dict keyed by flag_name to override the ifo option on a per flag basis. Returns --------- segments: glue.segments.segmentlist List of segments
def url(self, name): """Return URL of resource""" scheme = 'http' path = self._prepend_name_prefix(name) query = '' fragment = '' url_tuple = (scheme, self.netloc, path, query, fragment) return urllib.parse.urlunsplit(url_tuple)
Return URL of resource
def dist(self, src, tar): """Return the NCD between two strings using zlib compression. Parameters ---------- src : str Source string for comparison tar : str Target string for comparison Returns ------- float Compression distance Examples -------- >>> cmp = NCDzlib() >>> cmp.dist('cat', 'hat') 0.3333333333333333 >>> cmp.dist('Niall', 'Neil') 0.45454545454545453 >>> cmp.dist('aluminum', 'Catalan') 0.5714285714285714 >>> cmp.dist('ATCG', 'TAGC') 0.4 """ if src == tar: return 0.0 src = src.encode('utf-8') tar = tar.encode('utf-8') self._compressor.compress(src) src_comp = self._compressor.flush(zlib.Z_FULL_FLUSH) self._compressor.compress(tar) tar_comp = self._compressor.flush(zlib.Z_FULL_FLUSH) self._compressor.compress(src + tar) concat_comp = self._compressor.flush(zlib.Z_FULL_FLUSH) self._compressor.compress(tar + src) concat_comp2 = self._compressor.flush(zlib.Z_FULL_FLUSH) return ( min(len(concat_comp), len(concat_comp2)) - min(len(src_comp), len(tar_comp)) ) / max(len(src_comp), len(tar_comp))
Return the NCD between two strings using zlib compression. Parameters ---------- src : str Source string for comparison tar : str Target string for comparison Returns ------- float Compression distance Examples -------- >>> cmp = NCDzlib() >>> cmp.dist('cat', 'hat') 0.3333333333333333 >>> cmp.dist('Niall', 'Neil') 0.45454545454545453 >>> cmp.dist('aluminum', 'Catalan') 0.5714285714285714 >>> cmp.dist('ATCG', 'TAGC') 0.4
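The normalized compression distance formula above can be sketched with one-shot `zlib.compress` calls (a simplification of the streaming `Z_FULL_FLUSH` compressor the class keeps, so exact values differ slightly from the doctests):

```python
import zlib

def ncd(src, tar):
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # taking the smaller of the two concatenation orders, as above
    if src == tar:
        return 0.0
    x, y = src.encode('utf-8'), tar.encode('utf-8')
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = min(len(zlib.compress(x + y)), len(zlib.compress(y + x)))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For very short strings the fixed zlib header dominates, which is why the class in the dataset reuses one compressor with full flushes instead.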
def virtual_network_absent(name, resource_group, connection_auth=None): ''' .. versionadded:: 2019.2.0 Ensure a virtual network does not exist in the resource group. :param name: Name of the virtual network. :param resource_group: The resource group assigned to the virtual network. :param connection_auth: A dict with subscription and authentication parameters to be used in connecting to the Azure Resource Manager API. ''' ret = { 'name': name, 'result': False, 'comment': '', 'changes': {} } if not isinstance(connection_auth, dict): ret['comment'] = 'Connection information must be specified via connection_auth dictionary!' return ret vnet = __salt__['azurearm_network.virtual_network_get']( name, resource_group, azurearm_log_level='info', **connection_auth ) if 'error' in vnet: ret['result'] = True ret['comment'] = 'Virtual network {0} was not found.'.format(name) return ret elif __opts__['test']: ret['comment'] = 'Virtual network {0} would be deleted.'.format(name) ret['result'] = None ret['changes'] = { 'old': vnet, 'new': {}, } return ret deleted = __salt__['azurearm_network.virtual_network_delete'](name, resource_group, **connection_auth) if deleted: ret['result'] = True ret['comment'] = 'Virtual network {0} has been deleted.'.format(name) ret['changes'] = { 'old': vnet, 'new': {} } return ret ret['comment'] = 'Failed to delete virtual network {0}!'.format(name) return ret
.. versionadded:: 2019.2.0 Ensure a virtual network does not exist in the resource group. :param name: Name of the virtual network. :param resource_group: The resource group assigned to the virtual network. :param connection_auth: A dict with subscription and authentication parameters to be used in connecting to the Azure Resource Manager API.
def get_resps(self, source_tree): """Returns a sorted list of all resps in `source_tree`, and the elements that bear @resp attributes. :param source_tree: XML tree of source document :type source_tree: `etree._ElementTree` :rtype: `tuple` of `lists` """ resps = set() bearers = source_tree.xpath('//*[@resp]', namespaces=constants.NAMESPACES) for bearer in bearers: for resp in resp_splitter.split(bearer.get('resp')): if resp: resps.add(tuple(resp.split('|', maxsplit=1))) return sorted(resps), bearers
Returns a sorted list of all resps in `source_tree`, and the elements that bear @resp attributes. :param source_tree: XML tree of source document :type source_tree: `etree._ElementTree` :rtype: `tuple` of `lists`
def to_genotypes(self, ploidy, copy=False): """Reshape a haplotype array to view it as genotypes by restoring the ploidy dimension. Parameters ---------- ploidy : int The sample ploidy. copy : bool, optional If True, make a copy of data. Returns ------- g : ndarray, int, shape (n_variants, n_samples, ploidy) Genotype array (sharing same underlying buffer). Examples -------- >>> import allel >>> h = allel.HaplotypeArray([[0, 0, 0, 1], ... [0, 1, 1, 1], ... [0, 2, -1, -1]], dtype='i1') >>> h.to_genotypes(ploidy=2) <GenotypeArray shape=(3, 2, 2) dtype=int8> 0/0 0/1 0/1 1/1 0/2 ./. """ # check ploidy is compatible if (self.shape[1] % ploidy) > 0: raise ValueError('incompatible ploidy') # reshape newshape = (self.shape[0], -1, ploidy) data = self.reshape(newshape) # wrap g = GenotypeArray(data, copy=copy) return g
Reshape a haplotype array to view it as genotypes by restoring the ploidy dimension. Parameters ---------- ploidy : int The sample ploidy. copy : bool, optional If True, make a copy of data. Returns ------- g : ndarray, int, shape (n_variants, n_samples, ploidy) Genotype array (sharing same underlying buffer). Examples -------- >>> import allel >>> h = allel.HaplotypeArray([[0, 0, 0, 1], ... [0, 1, 1, 1], ... [0, 2, -1, -1]], dtype='i1') >>> h.to_genotypes(ploidy=2) <GenotypeArray shape=(3, 2, 2) dtype=int8> 0/0 0/1 0/1 1/1 0/2 ./.
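The core of `to_genotypes` is a single reshape that restores the ploidy axis; with plain NumPy (setting the `GenotypeArray` wrapper aside):

```python
import numpy as np

ploidy = 2
# (n_variants, n_haplotypes), matching the doctest above
h = np.array([[0, 0, 0, 1],
              [0, 1, 1, 1],
              [0, 2, -1, -1]], dtype='i1')

# check ploidy is compatible, as the method does
if h.shape[1] % ploidy:
    raise ValueError('incompatible ploidy')

# -1 lets NumPy infer n_samples = n_haplotypes // ploidy
g = h.reshape(h.shape[0], -1, ploidy)   # (n_variants, n_samples, ploidy)
```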
def connect(self, listener, pass_signal=False): """ Connect a new listener to this signal :param listener: The listener (callable) to add :param pass_signal: An optional argument that controls if the signal object is explicitly passed to this listener when it is being fired. If enabled, a ``signal=`` keyword argument is passed to the listener function. :returns: None The listener will be called whenever :meth:`fire()` or :meth:`__call__()` are called. The listener is appended to the list of listeners. Duplicates are not checked and if a listener is added twice it gets called twice. """ info = listenerinfo(listener, pass_signal) self._listeners.append(info) _logger.debug("connect %r to %r", str(listener), self._name) # Track listeners in the instances only if inspect.ismethod(listener): listener_object = listener.__self__ # Ensure that the instance has __listeners__ property if not hasattr(listener_object, "__listeners__"): listener_object.__listeners__ = collections.defaultdict(list) # Append the signals a listener is connected to listener_object.__listeners__[listener].append(self)
Connect a new listener to this signal :param listener: The listener (callable) to add :param pass_signal: An optional argument that controls if the signal object is explicitly passed to this listener when it is being fired. If enabled, a ``signal=`` keyword argument is passed to the listener function. :returns: None The listener will be called whenever :meth:`fire()` or :meth:`__call__()` are called. The listener is appended to the list of listeners. Duplicates are not checked and if a listener is added twice it gets called twice.
Below is the instruction that describes the task: ### Input: Connect a new listener to this signal :param listener: The listener (callable) to add :param pass_signal: An optional argument that controls if the signal object is explicitly passed to this listener when it is being fired. If enabled, a ``signal=`` keyword argument is passed to the listener function. :returns: None The listener will be called whenever :meth:`fire()` or :meth:`__call__()` are called. The listener is appended to the list of listeners. Duplicates are not checked and if a listener is added twice it gets called twice. ### Response: def connect(self, listener, pass_signal=False): """ Connect a new listener to this signal :param listener: The listener (callable) to add :param pass_signal: An optional argument that controls if the signal object is explicitly passed to this listener when it is being fired. If enabled, a ``signal=`` keyword argument is passed to the listener function. :returns: None The listener will be called whenever :meth:`fire()` or :meth:`__call__()` are called. The listener is appended to the list of listeners. Duplicates are not checked and if a listener is added twice it gets called twice. """ info = listenerinfo(listener, pass_signal) self._listeners.append(info) _logger.debug("connect %r to %r", str(listener), self._name) # Track listeners in the instances only if inspect.ismethod(listener): listener_object = listener.__self__ # Ensure that the instance has __listeners__ property if not hasattr(listener_object, "__listeners__"): listener_object.__listeners__ = collections.defaultdict(list) # Append the signals a listener is connected to listener_object.__listeners__[listener].append(self)
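The connect/fire contract in the record above can be sketched with a toy class. `MiniSignal` is a hypothetical reduction (the real class also logs connections and tracks bound-method listeners via `__listeners__`); it only shows how `pass_signal` changes the call:

```python
import collections

listenerinfo = collections.namedtuple('listenerinfo', 'listener pass_signal')

class MiniSignal:
    def __init__(self, name):
        self._name = name
        self._listeners = []

    def connect(self, listener, pass_signal=False):
        # Duplicates are not checked: connecting twice means firing twice.
        self._listeners.append(listenerinfo(listener, pass_signal))

    def fire(self, *args, **kwargs):
        for info in self._listeners:
            if info.pass_signal:
                # Pass the signal object as a signal= keyword argument.
                info.listener(*args, signal=self, **kwargs)
            else:
                info.listener(*args, **kwargs)

sig = MiniSignal('on-change')
seen = []
sig.connect(seen.append)                                  # plain listener
sig.connect(lambda v, signal: seen.append(signal._name),  # wants the signal
            pass_signal=True)
sig.fire('hello')
```

After `fire('hello')` the first listener records the payload and the second records the signal's name, because `pass_signal=True` made the signal object available.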
def PathList(self, pathlist): """ Returns the cached _PathList object for the specified pathlist, creating and caching a new object as necessary. """ pathlist = self._PathList_key(pathlist) try: memo_dict = self._memo['PathList'] except KeyError: memo_dict = {} self._memo['PathList'] = memo_dict else: try: return memo_dict[pathlist] except KeyError: pass result = _PathList(pathlist) memo_dict[pathlist] = result return result
Returns the cached _PathList object for the specified pathlist, creating and caching a new object as necessary.
Below is the instruction that describes the task: ### Input: Returns the cached _PathList object for the specified pathlist, creating and caching a new object as necessary. ### Response: def PathList(self, pathlist): """ Returns the cached _PathList object for the specified pathlist, creating and caching a new object as necessary. """ pathlist = self._PathList_key(pathlist) try: memo_dict = self._memo['PathList'] except KeyError: memo_dict = {} self._memo['PathList'] = memo_dict else: try: return memo_dict[pathlist] except KeyError: pass result = _PathList(pathlist) memo_dict[pathlist] = result return result
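The caching pattern in the record above — an outer `_memo` dict keyed by method name, an inner dict keyed by a hashable form of the argument — is generic. This sketch replaces the SCons internals `_PathList_key` and `_PathList` with stand-ins (a tuple key and a plain list):

```python
class MemoExample:
    def __init__(self):
        self._memo = {}

    def path_list(self, pathlist):
        key = tuple(pathlist)          # stand-in for _PathList_key()
        try:
            memo_dict = self._memo['PathList']
        except KeyError:
            # First call: create the per-method cache.
            memo_dict = {}
            self._memo['PathList'] = memo_dict
        else:
            try:
                return memo_dict[key]  # cache hit
            except KeyError:
                pass
        result = list(key)             # stand-in for constructing _PathList
        memo_dict[key] = result
        return result

m = MemoExample()
a = m.path_list(['/usr/bin', '/opt/bin'])
b = m.path_list(['/usr/bin', '/opt/bin'])
```

Repeated calls with an equal pathlist return the identical cached object (`a is b`), which is the point of the memoization.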
def command_long_encode(self, target_system, target_component, command, confirmation, param1, param2, param3, param4, param5, param6, param7): ''' Send a command with up to seven parameters to the MAV target_system : System which should execute the command (uint8_t) target_component : Component which should execute the command, 0 for all components (uint8_t) command : Command ID, as defined by MAV_CMD enum. (uint16_t) confirmation : 0: First transmission of this command. 1-255: Confirmation transmissions (e.g. for kill command) (uint8_t) param1 : Parameter 1, as defined by MAV_CMD enum. (float) param2 : Parameter 2, as defined by MAV_CMD enum. (float) param3 : Parameter 3, as defined by MAV_CMD enum. (float) param4 : Parameter 4, as defined by MAV_CMD enum. (float) param5 : Parameter 5, as defined by MAV_CMD enum. (float) param6 : Parameter 6, as defined by MAV_CMD enum. (float) param7 : Parameter 7, as defined by MAV_CMD enum. (float) ''' return MAVLink_command_long_message(target_system, target_component, command, confirmation, param1, param2, param3, param4, param5, param6, param7)
Send a command with up to seven parameters to the MAV target_system : System which should execute the command (uint8_t) target_component : Component which should execute the command, 0 for all components (uint8_t) command : Command ID, as defined by MAV_CMD enum. (uint16_t) confirmation : 0: First transmission of this command. 1-255: Confirmation transmissions (e.g. for kill command) (uint8_t) param1 : Parameter 1, as defined by MAV_CMD enum. (float) param2 : Parameter 2, as defined by MAV_CMD enum. (float) param3 : Parameter 3, as defined by MAV_CMD enum. (float) param4 : Parameter 4, as defined by MAV_CMD enum. (float) param5 : Parameter 5, as defined by MAV_CMD enum. (float) param6 : Parameter 6, as defined by MAV_CMD enum. (float) param7 : Parameter 7, as defined by MAV_CMD enum. (float)
Below is the instruction that describes the task: ### Input: Send a command with up to seven parameters to the MAV target_system : System which should execute the command (uint8_t) target_component : Component which should execute the command, 0 for all components (uint8_t) command : Command ID, as defined by MAV_CMD enum. (uint16_t) confirmation : 0: First transmission of this command. 1-255: Confirmation transmissions (e.g. for kill command) (uint8_t) param1 : Parameter 1, as defined by MAV_CMD enum. (float) param2 : Parameter 2, as defined by MAV_CMD enum. (float) param3 : Parameter 3, as defined by MAV_CMD enum. (float) param4 : Parameter 4, as defined by MAV_CMD enum. (float) param5 : Parameter 5, as defined by MAV_CMD enum. (float) param6 : Parameter 6, as defined by MAV_CMD enum. (float) param7 : Parameter 7, as defined by MAV_CMD enum. (float) ### Response: def command_long_encode(self, target_system, target_component, command, confirmation, param1, param2, param3, param4, param5, param6, param7): ''' Send a command with up to seven parameters to the MAV target_system : System which should execute the command (uint8_t) target_component : Component which should execute the command, 0 for all components (uint8_t) command : Command ID, as defined by MAV_CMD enum. (uint16_t) confirmation : 0: First transmission of this command. 1-255: Confirmation transmissions (e.g. for kill command) (uint8_t) param1 : Parameter 1, as defined by MAV_CMD enum. (float) param2 : Parameter 2, as defined by MAV_CMD enum. (float) param3 : Parameter 3, as defined by MAV_CMD enum. (float) param4 : Parameter 4, as defined by MAV_CMD enum. (float) param5 : Parameter 5, as defined by MAV_CMD enum. (float) param6 : Parameter 6, as defined by MAV_CMD enum. (float) param7 : Parameter 7, as defined by MAV_CMD enum. (float) ''' return MAVLink_command_long_message(target_system, target_component, command, confirmation, param1, param2, param3, param4, param5, param6, param7)
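Without pymavlink installed, the field layout of a COMMAND_LONG message can still be illustrated. This hedged sketch models the eleven fields with a namedtuple (`CommandLong` is hypothetical; the value 400 for `MAV_CMD_COMPONENT_ARM_DISARM` comes from the MAVLink common dialect, where `param1=1` requests arming):

```python
import collections

# The eleven fields of COMMAND_LONG, in the order used above.
CommandLong = collections.namedtuple('CommandLong', [
    'target_system', 'target_component', 'command', 'confirmation',
    'param1', 'param2', 'param3', 'param4', 'param5', 'param6', 'param7'])

# Arm the autopilot on system 1, component 1:
# command 400 = MAV_CMD_COMPONENT_ARM_DISARM, param1 = 1 -> arm.
msg = CommandLong(target_system=1, target_component=1, command=400,
                  confirmation=0, param1=1, param2=0, param3=0, param4=0,
                  param5=0, param6=0, param7=0)
```

The real `command_long_encode` packs exactly these fields into a `MAVLink_command_long_message` for transmission.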
def add_field(self, key, value, field): ''':meth:`add_field` must be used to add a field to an existing instance of Model. This method is required so that serialization of the data is possible. Data on existing fields (defined in the class) can be reassigned without using this method. ''' self._extra[key] = field setattr(self, key, value)
:meth:`add_field` must be used to add a field to an existing instance of Model. This method is required so that serialization of the data is possible. Data on existing fields (defined in the class) can be reassigned without using this method.
Below is the instruction that describes the task: ### Input: :meth:`add_field` must be used to add a field to an existing instance of Model. This method is required so that serialization of the data is possible. Data on existing fields (defined in the class) can be reassigned without using this method. ### Response: def add_field(self, key, value, field): ''':meth:`add_field` must be used to add a field to an existing instance of Model. This method is required so that serialization of the data is possible. Data on existing fields (defined in the class) can be reassigned without using this method. ''' self._extra[key] = field setattr(self, key, value)
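A stripped-down model shows why runtime fields must be registered rather than just set with `setattr`: the serializer only sees registered fields. `MiniModel` and the `'StringField'` marker are hypothetical stand-ins for the library's Model and field classes:

```python
class MiniModel:
    def __init__(self):
        self._extra = {}   # field objects added after class definition

    def add_field(self, key, value, field):
        self._extra[key] = field   # register so serialization can find it
        setattr(self, key, value)  # expose the value as a normal attribute

    def to_dict(self):
        # Serialization sees only registered fields.
        return {k: getattr(self, k) for k in self._extra}

m = MiniModel()
m.add_field('nickname', 'zed', field='StringField')
m.plain = 'ignored'   # set without add_field: invisible to to_dict()
```

The attribute set directly is readable on the instance but never reaches the serialized output, which is exactly the behavior the docstring warns about.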
def covariance(left, right, where=None, how='sample'): """ Compute covariance of two numeric arrays Parameters ---------- how : {'sample', 'pop'}, default 'sample' Returns ------- cov : double scalar """ expr = ops.Covariance(left, right, how, where).to_expr() return expr
Compute covariance of two numeric arrays Parameters ---------- how : {'sample', 'pop'}, default 'sample' Returns ------- cov : double scalar
Below is the instruction that describes the task: ### Input: Compute covariance of two numeric arrays Parameters ---------- how : {'sample', 'pop'}, default 'sample' Returns ------- cov : double scalar ### Response: def covariance(left, right, where=None, how='sample'): """ Compute covariance of two numeric arrays Parameters ---------- how : {'sample', 'pop'}, default 'sample' Returns ------- cov : double scalar """ expr = ops.Covariance(left, right, how, where).to_expr() return expr
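The ibis function above only builds a deferred expression; the backend computes the number. For reference, here is a plain-Python version of the underlying math with the same `how='sample'`/`'pop'` switch (divide by n − 1 versus n):

```python
def covariance(xs, ys, how='sample'):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if how == 'sample':
        return sxy / (n - 1)   # unbiased sample covariance
    if how == 'pop':
        return sxy / n         # population covariance
    raise ValueError(how)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # ys = 2 * xs, so covariance is positive
```

For this data the sum of cross-deviations is 10, giving 10/3 for `'sample'` and 2.5 for `'pop'`.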
async def xgroup_create(self, name: str, group: str, stream_id='$') -> bool: """ [NOTICE] Not officially released yet XGROUP is used in order to create, destroy and manage consumer groups. :param name: name of the stream :param group: name of the consumer group :param stream_id: If we provide $ as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify 0 instead the consumer group will consume all the messages in the stream history to start with. Of course, you can specify any other valid ID """ return await self.execute_command('XGROUP CREATE', name, group, stream_id)
[NOTICE] Not officially released yet XGROUP is used in order to create, destroy and manage consumer groups. :param name: name of the stream :param group: name of the consumer group :param stream_id: If we provide $ as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify 0 instead the consumer group will consume all the messages in the stream history to start with. Of course, you can specify any other valid ID
Below is the instruction that describes the task: ### Input: [NOTICE] Not officially released yet XGROUP is used in order to create, destroy and manage consumer groups. :param name: name of the stream :param group: name of the consumer group :param stream_id: If we provide $ as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify 0 instead the consumer group will consume all the messages in the stream history to start with. Of course, you can specify any other valid ID ### Response: async def xgroup_create(self, name: str, group: str, stream_id='$') -> bool: """ [NOTICE] Not officially released yet XGROUP is used in order to create, destroy and manage consumer groups. :param name: name of the stream :param group: name of the consumer group :param stream_id: If we provide $ as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify 0 instead the consumer group will consume all the messages in the stream history to start with. Of course, you can specify any other valid ID """ return await self.execute_command('XGROUP CREATE', name, group, stream_id)
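The difference between `stream_id='$'` and `stream_id='0'` is easiest to see in a toy model with no Redis server involved (`MiniStream` is hypothetical and only covers these two cases): a group created at `'$'` starts its cursor after the existing entries, one created at `'0'` starts at the beginning.

```python
class MiniStream:
    def __init__(self, entries):
        self.entries = list(entries)   # (stream_id, payload), oldest first
        self.groups = {}               # group name -> next index to deliver

    def xgroup_create(self, group, stream_id='$'):
        if stream_id == '$':
            self.groups[group] = len(self.entries)  # only future messages
        elif stream_id == '0':
            self.groups[group] = 0                  # full history
        else:
            raise ValueError('only $ and 0 are modeled here')

    def xreadgroup(self, group):
        start = self.groups[group]
        self.groups[group] = len(self.entries)
        return [payload for _id, payload in self.entries[start:]]

s = MiniStream([('1-0', 'a'), ('2-0', 'b')])
s.xgroup_create('history', stream_id='0')
s.xgroup_create('live', stream_id='$')
s.entries.append(('3-0', 'c'))   # a new message arrives after group creation
```

Reading from each group then shows the contrast: `history` receives the backlog plus the new message, while `live` receives only the message that arrived after it was created.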
def remove_useless_start_nucleotides(self): '''Removes duplicated nucleotides at the start of REF and ALT. But always leaves at least one nucleotide in each of REF and ALT. eg if variant is at position 42, REF=GCTGA, ALT=GCA, then sets position=43, REF=CTGA, ALT=CA. Assumes only one ALT, and does nothing if there is >1 ALT''' if len(self.REF) == 1 or len(self.ALT) != 1: return i = 0 while i < len(self.REF) and i < len(self.ALT[0]) and self.REF[i] == self.ALT[0][i]: i += 1 if i > 0: self.REF = self.REF[i - 1:] self.ALT = [self.ALT[0][i - 1:]] self.POS += i - 1
Removes duplicated nucleotides at the start of REF and ALT. But always leaves at least one nucleotide in each of REF and ALT. eg if variant is at position 42, REF=GCTGA, ALT=GCA, then sets position=43, REF=CTGA, ALT=CA. Assumes only one ALT, and does nothing if there is >1 ALT
Below is the instruction that describes the task: ### Input: Removes duplicated nucleotides at the start of REF and ALT. But always leaves at least one nucleotide in each of REF and ALT. eg if variant is at position 42, REF=GCTGA, ALT=GCA, then sets position=43, REF=CTGA, ALT=CA. Assumes only one ALT, and does nothing if there is >1 ALT ### Response: def remove_useless_start_nucleotides(self): '''Removes duplicated nucleotides at the start of REF and ALT. But always leaves at least one nucleotide in each of REF and ALT. eg if variant is at position 42, REF=GCTGA, ALT=GCA, then sets position=43, REF=CTGA, ALT=CA. Assumes only one ALT, and does nothing if there is >1 ALT''' if len(self.REF) == 1 or len(self.ALT) != 1: return i = 0 while i < len(self.REF) and i < len(self.ALT[0]) and self.REF[i] == self.ALT[0][i]: i += 1 if i > 0: self.REF = self.REF[i - 1:] self.ALT = [self.ALT[0][i - 1:]] self.POS += i - 1
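The trimming logic above can be exercised as a pure function (`trim_common_start` is a hypothetical stand-alone version of the method). Note that it deliberately stops at `i - 1` so one shared base always remains and neither REF nor ALT becomes empty, while POS moves right by the number of bases dropped:

```python
def trim_common_start(pos, ref, alts):
    if len(ref) == 1 or len(alts) != 1:
        return pos, ref, alts          # nothing to do, or more than one ALT
    alt = alts[0]
    i = 0
    while i < len(ref) and i < len(alt) and ref[i] == alt[i]:
        i += 1
    if i > 0:
        # Keep one matching base (index i - 1) so neither allele is empty.
        return pos + i - 1, ref[i - 1:], [alt[i - 1:]]
    return pos, ref, alts

pos, ref, alts = trim_common_start(42, 'GCTGA', ['GCA'])
```

For REF=GCTGA vs ALT=GCA the common prefix is GC (i = 2), so one base (G) is dropped, the shared C is kept, and the position advances by one.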