def _parse_reply(request):
    """Tries to parse reply from NIOS.

    Raises exception with content if reply is not in json format
    """
    try:
        return jsonutils.loads(request.content)
    except ValueError:
        raise ib_ex.InfobloxConnectionError(reason=request.content)
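The parse-or-raise pattern above can be sketched with only the standard library. This is a minimal stand-alone version: `jsonutils` is replaced by the stdlib `json` module, and `InfobloxConnectionError` is a hypothetical stand-in class, since the real ones live in the project's own packages.

```python
import json


class InfobloxConnectionError(Exception):
    """Hypothetical stand-in for the project's own exception class."""


def parse_reply(content):
    # Try to decode the raw reply; on failure, surface the raw content
    # so the caller can see what the server actually sent.
    try:
        return json.loads(content)
    except ValueError:
        raise InfobloxConnectionError(content)
```

Raising with the raw content preserved makes non-JSON error pages (e.g. an HTML proxy error) visible in the traceback instead of a bare decode error.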
def secret_from_transfer_task(
        transfer_task: Optional[TransferTask],
        secrethash: SecretHash,
) -> Optional[Secret]:
    """Return the secret for the transfer, None on EMPTY_SECRET."""
    assert isinstance(transfer_task, InitiatorTask)

    transfer_state = transfer_task.manager_state.initiator_transfers[secrethash]

    if transfer_state is None:
        return None

    return transfer_state.transfer_description.secret
def fix_e225(self, result):
    """Fix missing whitespace around operator."""
    target = self.source[result['line'] - 1]
    offset = result['column'] - 1
    fixed = target[:offset] + ' ' + target[offset:]

    # Only proceed if non-whitespace characters match.
    # And make sure we don't break the indentation.
    if (
        fixed.replace(' ', '') == target.replace(' ', '') and
        _get_indentation(fixed) == _get_indentation(target)
    ):
        self.source[result['line'] - 1] = fixed
        error_code = result.get('id', 0)

        try:
            ts = generate_tokens(fixed)
        except (SyntaxError, tokenize.TokenError):
            return

        if not check_syntax(fixed.lstrip()):
            return

        errors = list(
            pycodestyle.missing_whitespace_around_operator(fixed, ts))
        for e in reversed(errors):
            if error_code != e[1].split()[0]:
                continue
            offset = e[0][1]
            fixed = fixed[:offset] + ' ' + fixed[offset:]

        self.source[result['line'] - 1] = fixed
    else:
        return []
def normframe(I: np.ndarray, Clim: tuple) -> np.ndarray:
    """
    inputs:
    -------
    I: 2-D Numpy array of grayscale image data
    Clim: length-2 tuple or numpy 1-D array specifying the lowest and highest
          expected values in the grayscale image
    """
    Vmin = Clim[0]
    Vmax = Clim[1]
    # stretch to [0, 1]
    return (I.astype(np.float32).clip(Vmin, Vmax) - Vmin) / (Vmax - Vmin)
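A quick demonstration of the clip-and-stretch normalization: values below `Vmin` map to 0, values above `Vmax` saturate at 1, and everything between scales linearly. The function body is reproduced here so the example is self-contained.

```python
import numpy as np


def normframe(I, Clim):
    Vmin, Vmax = Clim
    # clip to the expected range, then stretch linearly onto [0, 1]
    return (I.astype(np.float32).clip(Vmin, Vmax) - Vmin) / (Vmax - Vmin)


frame = np.array([[0, 50], [100, 200]])
out = normframe(frame, (0, 100))
# 200 saturates at the top of the [0, 100] range, so it maps to 1.0
```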
def encode(self, object_):
    """
    Encodes an object.

    Args:
        object_ (object): Object to encode.

    Returns:
        object: Encoding of the object.
    """
    if self.enforce_reversible:
        self.enforce_reversible = False
        if self.decode(self.encode(object_)) != object_:
            raise ValueError('Encoding is not reversible for "%s"' % object_)
        self.enforce_reversible = True

    return object_
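The interesting part of the method above is the round-trip guard: the flag is cleared before the check so the recursive `self.encode(...)` call does not re-enter the guard. A minimal sketch, using a hypothetical identity codec purely for illustration:

```python
class ReversibleEncoder:
    """Hypothetical minimal codec illustrating the round-trip guard."""

    def __init__(self, enforce_reversible=True):
        self.enforce_reversible = enforce_reversible

    def decode(self, encoded):
        return encoded  # identity decode, for illustration only

    def encode(self, object_):
        if self.enforce_reversible:
            # Clear the flag first so the nested encode() call below
            # does not recurse back into this check.
            self.enforce_reversible = False
            if self.decode(self.encode(object_)) != object_:
                raise ValueError('Encoding is not reversible for "%s"' % object_)
            self.enforce_reversible = True
        return object_
```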
def __get_posix_path(self):
    """Return the path with / as the path separator, regardless of platform."""
    if os_sep_is_slash:
        return self
    else:
        entry = self.get()
        r = entry.get_path().replace(OS_SEP, '/')
        return SCons.Subst.SpecialAttrWrapper(r, entry.name + "_posix")
def document(self, wrapper):
    """
    Get the document root. For I{document/literal}, this is the name of the
    wrapper element qualified by the schema's target namespace.

    @param wrapper: The method name.
    @type wrapper: L{xsd.sxbase.SchemaObject}
    @return: A root element.
    @rtype: L{Element}
    """
    tag = wrapper[1].name
    ns = wrapper[1].namespace("ns0")
    return Element(tag, ns=ns)
def get_mim_phenotypes(genemap_lines):
    """Get a dictionary with phenotypes

    Use the mim numbers for phenotypes as keys and phenotype information as
    values.

    Args:
        genemap_lines(iterable(str))

    Returns:
        phenotypes_found(dict): A dictionary with mim_numbers as keys and
        dictionaries with phenotype information as values.

        {
            'description': str,     # Description of the phenotype
            'hgnc_symbols': set(),  # Associated hgnc symbols
            'inheritance': set(),   # Associated inheritance patterns
            'mim_number': int,      # mim number of phenotype
        }
    """
    # Set with all omim numbers that are phenotypes
    # Parsed from mim2gene.txt
    phenotype_mims = set()

    phenotypes_found = {}

    # Genemap is a file with one entry per gene.
    # Each line holds a lot of information; in particular it has information
    # about the phenotypes that a gene is associated with.
    # From this source we collect inheritance patterns and what hgnc symbols
    # a phenotype is associated with.
    for entry in parse_genemap2(genemap_lines):
        hgnc_symbol = entry['hgnc_symbol']
        for phenotype in entry['phenotypes']:
            mim_nr = phenotype['mim_number']
            if mim_nr in phenotypes_found:
                phenotype_entry = phenotypes_found[mim_nr]
                phenotype_entry['inheritance'] = phenotype_entry['inheritance'].union(phenotype['inheritance'])
                phenotype_entry['hgnc_symbols'].add(hgnc_symbol)
            else:
                phenotype['hgnc_symbols'] = set([hgnc_symbol])
                phenotypes_found[mim_nr] = phenotype

    return phenotypes_found
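The core of the function above is a merge-by-key aggregation: the first time a mim number is seen its phenotype dict is stored as-is, and subsequent hits union the inheritance sets and add the gene symbol. A stand-alone sketch of just that aggregation, with hypothetical sample data standing in for `parse_genemap2` output:

```python
def merge_phenotypes(entries):
    """Group (symbol, phenotype) pairs by mim_number, unioning the
    inheritance sets and collecting the associated symbols."""
    found = {}
    for symbol, phenotype in entries:
        mim_nr = phenotype['mim_number']
        if mim_nr in found:
            # Seen before: merge inheritance patterns and add the symbol.
            entry = found[mim_nr]
            entry['inheritance'] |= phenotype['inheritance']
            entry['hgnc_symbols'].add(symbol)
        else:
            # First sighting: the phenotype dict becomes the stored entry.
            phenotype['hgnc_symbols'] = {symbol}
            found[mim_nr] = phenotype
    return found


# Hypothetical sample rows, for illustration only
entries = [
    ('A', {'mim_number': 100, 'inheritance': {'AD'}}),
    ('B', {'mim_number': 100, 'inheritance': {'AR'}}),
    ('C', {'mim_number': 200, 'inheritance': set()}),
]
found = merge_phenotypes(entries)
```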
def __get_managed_files_dpkg(self):
    '''
    Get a list of all system files, belonging to the Debian package manager.
    '''
    dirs = set()
    links = set()
    files = set()

    for pkg_name in salt.utils.stringutils.to_str(
            self._syscall("dpkg-query", None, None, '-Wf', '${binary:Package}\\n')[0]).split(os.linesep):
        pkg_name = pkg_name.strip()
        if not pkg_name:
            continue
        for resource in salt.utils.stringutils.to_str(
                self._syscall("dpkg", None, None, '-L', pkg_name)[0]).split(os.linesep):
            resource = resource.strip()
            if not resource or resource in ['/', './', '.']:
                continue
            if os.path.isdir(resource):
                dirs.add(resource)
            elif os.path.islink(resource):
                links.add(resource)
            elif os.path.isfile(resource):
                files.add(resource)

    return sorted(files), sorted(dirs), sorted(links)
def chord_length_distribution(im, bins=None, log=False, voxel_size=1,
                              normalization='count'):
    r"""
    Determines the distribution of chord lengths in an image containing
    chords.

    Parameters
    ----------
    im : ND-image
        An image with chords drawn in the pore space, as produced by
        ``apply_chords`` or ``apply_chords_3d``.  ``im`` can be either
        boolean, in which case each chord will be identified using
        ``scipy.ndimage.label``, or numerical values in which case it is
        assumed that chords have already been identified and labeled.  In
        both cases, the size of each chord will be computed as the number
        of voxels belonging to each labelled region.
    bins : scalar or array_like
        If a scalar is given it is interpreted as the number of bins to use,
        and if an array is given they are used as the bins directly.
    log : Boolean
        If True, the logarithm of the chord lengths will be used, which can
        make the data more clear.
    normalization : string
        Indicates how to normalize the bin heights.  Options are:

        *'count' or 'number'* - (default) This simply counts the number of
        chords in each bin in the normal sense of a histogram.  This is the
        rigorous definition according to Torquato [1].

        *'length'* - This multiplies the number of chords in each bin by the
        chord length (i.e. bin size).  The normalization scheme accounts for
        the fact that long chords are less frequent than shorter chords,
        thus giving a more balanced distribution.
    voxel_size : scalar
        The size of a voxel side in preferred units.  The default is 1, so
        the user can apply the scaling to the returned results after the
        fact.

    Returns
    -------
    result : named_tuple
        A tuple containing the following elements, which can be retrieved by
        attribute name:

        *L* or *logL* - chord length, equivalent to ``bin_centers``

        *pdf* - probability density function

        *cdf* - cumulative density function

        *relfreq* - relative frequency of chords in each bin.  The sum of
        all bin heights is 1.0.  For the cumulative relative frequency, use
        *cdf* which is already normalized to 1.

        *bin_centers* - the center point of each bin

        *bin_edges* - locations of bin divisions, including 1 more value
        than the number of bins

        *bin_widths* - useful for passing to the ``width`` argument of
        ``matplotlib.pyplot.bar``

    References
    ----------
    [1] Torquato, S. Random Heterogeneous Materials: Microstructure and
    Macroscopic Properties. Springer, New York (2002) - See page 45 & 292
    """
    x = chord_counts(im)
    if bins is None:
        bins = sp.array(range(0, x.max() + 2)) * voxel_size
    x = x * voxel_size
    if log:
        x = sp.log10(x)
    if normalization == 'length':
        h = list(sp.histogram(x, bins=bins, density=False))
        h[0] = h[0] * (h[1][1:] + h[1][:-1]) / 2  # Scale bin heights by length
        h[0] = h[0] / h[0].sum() / (h[1][1:] - h[1][:-1])  # Normalize h[0] manually
    elif normalization in ['number', 'count']:
        h = sp.histogram(x, bins=bins, density=True)
    else:
        raise Exception('Unsupported normalization:', normalization)
    h = _parse_histogram(h)
    cld = namedtuple('chord_length_distribution',
                     (log * 'log' + 'L', 'pdf', 'cdf', 'relfreq',
                      'bin_centers', 'bin_edges', 'bin_widths'))
    return cld(h.bin_centers, h.pdf, h.cdf, h.relfreq,
               h.bin_centers, h.bin_edges, h.bin_widths)
def loggerLevel(self, logger):
    """
    Returns the level for the given logger.

    :param      logger | <str>

    :return     <int>
    """
    try:
        return self._loggerLevels[logger]
    except KeyError:
        items = sorted(self._loggerLevels.items())
        for key, lvl in items:
            if logger.startswith(key):
                return lvl

    return logging.NOTSET
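The lookup above falls back from an exact name to any registered prefix, mimicking the logger hierarchy. A stand-alone sketch of the same prefix-matching fallback (the function and sample levels dict here are illustrative, not the original class):

```python
import logging


def logger_level(levels, logger):
    # Exact match first; otherwise fall back to a registered ancestor
    # whose name is a prefix of the requested logger.
    try:
        return levels[logger]
    except KeyError:
        for key, lvl in sorted(levels.items()):
            if logger.startswith(key):
                return lvl
    return logging.NOTSET


levels = {'myapp': logging.INFO, 'myapp.db': logging.DEBUG}
```

Note that `str.startswith` matches character prefixes, not dotted components, so `'myapp2'` would also pick up `'myapp'`'s level; the original code has the same behavior.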
def _receive_message(self, message):
    """ Incoming message callback

    Calls Gateway.onReceive event hook

    Providers are required to:

    * Cast phone numbers to digits-only
    * Support both ASCII and Unicode messages
    * Populate `message.msgid` and `message.meta` fields
    * If this method fails with an exception, the provider is required to
      respond with an error to the service

    :type message: IncomingMessage
    :param message: The received message
    :rtype: IncomingMessage
    """
    # Populate fields
    message.provider = self.name

    # Fire the event hook
    self.gateway.onReceive(message)

    # Finish
    return message
def windowed_iterable(self):
    """Yield only the items inside the current window."""
    # Seek to offset
    effective_offset = max(0, self.item_view.iterable_index)
    for i, item in enumerate(self.iterable):
        if i < effective_offset:
            continue
        elif i >= (effective_offset + self.item_view.iterable_fetch_size):
            return
        yield item
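The same windowing technique works on any iterable, including ones that cannot be sliced (generators, streams): skip items before the offset, stop once past the window. A stand-alone sketch with the view state replaced by plain parameters:

```python
def windowed(iterable, offset, fetch_size):
    """Yield at most fetch_size items starting at offset, lazily."""
    # Negative offsets are clamped to the start, as in the original.
    effective_offset = max(0, offset)
    for i, item in enumerate(iterable):
        if i < effective_offset:
            continue  # still seeking to the window start
        if i >= effective_offset + fetch_size:
            return    # past the window; stop consuming the iterable
        yield item
```

Because the generator returns as soon as the window ends, an infinite or expensive source is only consumed up to the last windowed item.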
def __update_membership(self):
    """!
    @brief Update membership for each point in line with current cluster centers.
    """
    data_difference = numpy.zeros((len(self.__centers), len(self.__data)))

    for i in range(len(self.__centers)):
        data_difference[i] = numpy.sum(numpy.square(self.__data - self.__centers[i]), axis=1)

    for i in range(len(self.__data)):
        for j in range(len(self.__centers)):
            divider = sum([pow(data_difference[j][i] / data_difference[k][i], self.__degree)
                           for k in range(len(self.__centers))
                           if data_difference[k][i] != 0.0])

            if divider != 0.0:
                self.__membership[i][j] = 1.0 / divider
            else:
                self.__membership[i][j] = 1.0
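For a single point, the inner update computes u_j = 1 / Σ_k (d_j / d_k)^degree over its squared distances d to each center, falling back to 1.0 when the point sits exactly on a center. A pure-Python sketch of one row of that update (a simplification of the class method, for illustration):

```python
def fuzzy_memberships(distances, degree=2.0):
    """Membership of one point in each cluster, from its squared
    distances to the centers (one row of the update above)."""
    memberships = []
    for d_j in distances:
        # Sum the distance ratios over all centers the point is not on.
        divider = sum((d_j / d_k) ** degree for d_k in distances if d_k != 0.0)
        # divider == 0 means the point coincides with this center.
        memberships.append(1.0 / divider if divider != 0.0 else 1.0)
    return memberships
```

A point equidistant from two centers gets membership 0.5 in each; as one distance shrinks relative to the other, that cluster's membership grows toward 1.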
def relpath_posix(recwalk_result, pardir, fromwinpath=False):
    '''Helper function to convert all paths to relative posix-like paths (to ease comparison).'''
    return recwalk_result[0], path2unix(
        os.path.join(os.path.relpath(recwalk_result[0], pardir), recwalk_result[1]),
        nojoin=True, fromwinpath=fromwinpath)
def addItem(self, item):
    """
    Adds the item to the scene and redraws the item.

    :param      item | <QGraphicsItem>
    """
    result = super(XCalendarScene, self).addItem(item)
    if isinstance(item, XCalendarItem):
        item.rebuild()
    return result
def precision_matrix(self):
    """
    Returns the precision matrix of the distribution.

    Precision is defined as the inverse of the variance. This method returns
    the inverse matrix of the covariance.

    Examples
    --------
    >>> import numpy as np
    >>> from pgmpy.factors.distributions import GaussianDistribution as GD
    >>> dis = GD(variables=['x1', 'x2', 'x3'],
    ...          mean=[1, -3, 4],
    ...          cov=[[4, 2, -2],
    ...               [2, 5, -5],
    ...               [-2, -5, 8]])
    >>> dis.precision_matrix
    array([[ 0.3125    , -0.125     ,  0.        ],
           [-0.125     ,  0.58333333,  0.33333333],
           [ 0.        ,  0.33333333,  0.33333333]])
    """
    if self._precision_matrix is None:
        self._precision_matrix = np.linalg.inv(self.covariance)
    return self._precision_matrix
Returns the precision matrix of the distribution. Precision is defined as the inverse of the variance. This method returns the inverse matrix of the covariance. Examples -------- >>> import numpy as np >>> from pgmpy.factors.distributions import GaussianDistribution as GD >>> dis = GD(variables=['x1', 'x2', 'x3'], ... mean=[1, -3, 4], ... cov=[[4, 2, -2], ... [2, 5, -5], ... [-2, -5, 8]])) >>> dis.precision_matrix array([[ 0.3125 , -0.125 , 0. ], [-0.125 , 0.58333333, 0.33333333], [ 0. , 0.33333333, 0.33333333]])
Below is the instruction that describes the task: ### Input: Returns the precision matrix of the distribution. Precision is defined as the inverse of the variance. This method returns the inverse matrix of the covariance. Examples -------- >>> import numpy as np >>> from pgmpy.factors.distributions import GaussianDistribution as GD >>> dis = GD(variables=['x1', 'x2', 'x3'], ... mean=[1, -3, 4], ... cov=[[4, 2, -2], ... [2, 5, -5], ... [-2, -5, 8]]) >>> dis.precision_matrix array([[ 0.3125 , -0.125 , 0. ], [-0.125 , 0.58333333, 0.33333333], [ 0. , 0.33333333, 0.33333333]]) ### Response: def precision_matrix(self): """ Returns the precision matrix of the distribution. Precision is defined as the inverse of the variance. This method returns the inverse matrix of the covariance. Examples -------- >>> import numpy as np >>> from pgmpy.factors.distributions import GaussianDistribution as GD >>> dis = GD(variables=['x1', 'x2', 'x3'], ... mean=[1, -3, 4], ... cov=[[4, 2, -2], ... [2, 5, -5], ... [-2, -5, 8]]) >>> dis.precision_matrix array([[ 0.3125 , -0.125 , 0. ], [-0.125 , 0.58333333, 0.33333333], [ 0. , 0.33333333, 0.33333333]]) """ if self._precision_matrix is None: self._precision_matrix = np.linalg.inv(self.covariance) return self._precision_matrix
def new(self, key=None, data=None, content_type='application/json', encoded_data=None): """A shortcut for manually instantiating a new :class:`~riak.riak_object.RiakObject` or a new :class:`~riak.datatypes.Datatype`, based on the presence and value of the :attr:`datatype <BucketType.datatype>` bucket property. When the bucket contains a :class:`~riak.datatypes.Datatype`, all arguments are ignored except ``key``, otherwise they are used to initialize the :class:`~riak.riak_object.RiakObject`. :param key: Name of the key. Leaving this to be None (default) will make Riak generate the key on store. :type key: str :param data: The data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.data <riak.riak_object.RiakObject.data>`. :type data: object :param content_type: The media type of the data stored in the :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.content_type <riak.riak_object.RiakObject.content_type>`. :type content_type: str :param encoded_data: The encoded data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.encoded_data <riak.riak_object.RiakObject.encoded_data>`. :type encoded_data: str :rtype: :class:`~riak.riak_object.RiakObject` or :class:`~riak.datatypes.Datatype` """ from riak import RiakObject if self.bucket_type.datatype: return TYPES[self.bucket_type.datatype](bucket=self, key=key) if PY2: try: if isinstance(data, string_types): data = data.encode('ascii') except UnicodeError: raise TypeError('Unicode data values are not supported.') obj = RiakObject(self._client, self, key) obj.content_type = content_type if data is not None: obj.data = data if encoded_data is not None: obj.encoded_data = encoded_data return obj
A shortcut for manually instantiating a new :class:`~riak.riak_object.RiakObject` or a new :class:`~riak.datatypes.Datatype`, based on the presence and value of the :attr:`datatype <BucketType.datatype>` bucket property. When the bucket contains a :class:`~riak.datatypes.Datatype`, all arguments are ignored except ``key``, otherwise they are used to initialize the :class:`~riak.riak_object.RiakObject`. :param key: Name of the key. Leaving this to be None (default) will make Riak generate the key on store. :type key: str :param data: The data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.data <riak.riak_object.RiakObject.data>`. :type data: object :param content_type: The media type of the data stored in the :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.content_type <riak.riak_object.RiakObject.content_type>`. :type content_type: str :param encoded_data: The encoded data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.encoded_data <riak.riak_object.RiakObject.encoded_data>`. :type encoded_data: str :rtype: :class:`~riak.riak_object.RiakObject` or :class:`~riak.datatypes.Datatype`
Below is the instruction that describes the task: ### Input: A shortcut for manually instantiating a new :class:`~riak.riak_object.RiakObject` or a new :class:`~riak.datatypes.Datatype`, based on the presence and value of the :attr:`datatype <BucketType.datatype>` bucket property. When the bucket contains a :class:`~riak.datatypes.Datatype`, all arguments are ignored except ``key``, otherwise they are used to initialize the :class:`~riak.riak_object.RiakObject`. :param key: Name of the key. Leaving this to be None (default) will make Riak generate the key on store. :type key: str :param data: The data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.data <riak.riak_object.RiakObject.data>`. :type data: object :param content_type: The media type of the data stored in the :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.content_type <riak.riak_object.RiakObject.content_type>`. :type content_type: str :param encoded_data: The encoded data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.encoded_data <riak.riak_object.RiakObject.encoded_data>`. :type encoded_data: str :rtype: :class:`~riak.riak_object.RiakObject` or :class:`~riak.datatypes.Datatype` ### Response: def new(self, key=None, data=None, content_type='application/json', encoded_data=None): """A shortcut for manually instantiating a new :class:`~riak.riak_object.RiakObject` or a new :class:`~riak.datatypes.Datatype`, based on the presence and value of the :attr:`datatype <BucketType.datatype>` bucket property. When the bucket contains a :class:`~riak.datatypes.Datatype`, all arguments are ignored except ``key``, otherwise they are used to initialize the :class:`~riak.riak_object.RiakObject`. :param key: Name of the key. Leaving this to be None (default) will make Riak generate the key on store. 
:type key: str :param data: The data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.data <riak.riak_object.RiakObject.data>`. :type data: object :param content_type: The media type of the data stored in the :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.content_type <riak.riak_object.RiakObject.content_type>`. :type content_type: str :param encoded_data: The encoded data to store in a :class:`~riak.riak_object.RiakObject`, see :attr:`RiakObject.encoded_data <riak.riak_object.RiakObject.encoded_data>`. :type encoded_data: str :rtype: :class:`~riak.riak_object.RiakObject` or :class:`~riak.datatypes.Datatype` """ from riak import RiakObject if self.bucket_type.datatype: return TYPES[self.bucket_type.datatype](bucket=self, key=key) if PY2: try: if isinstance(data, string_types): data = data.encode('ascii') except UnicodeError: raise TypeError('Unicode data values are not supported.') obj = RiakObject(self._client, self, key) obj.content_type = content_type if data is not None: obj.data = data if encoded_data is not None: obj.encoded_data = encoded_data return obj
def get_fields(Model, parent_field="", model_stack=None, stack_limit=2, excludes=['permissions', 'comment', 'content_type']): """ Given a Model, return a list of lists of strings with important stuff: ... ['test_user__user__customuser', 'customuser', 'User', 'RelatedObject'] ['test_user__unique_id', 'unique_id', 'TestUser', 'CharField'] ['test_user__confirmed', 'confirmed', 'TestUser', 'BooleanField'] ... """ out_fields = [] if model_stack is None: model_stack = [] # github.com/omab/python-social-auth/commit/d8637cec02422374e4102231488481170dc51057 if isinstance(Model, basestring): app_label, model_name = Model.split('.') Model = models.get_model(app_label, model_name) fields = Model._meta.fields + Model._meta.many_to_many + Model._meta.get_all_related_objects() model_stack.append(Model) # do a variety of checks to ensure recursion isnt being redundant stop_recursion = False if len(model_stack) > stack_limit: # rudimentary CustomUser->User->CustomUser->User detection if model_stack[-3] == model_stack[-1]: stop_recursion = True # stack depth shouldn't exceed x if len(model_stack) > 5: stop_recursion = True # we've hit a point where we are repeating models if len(set(model_stack)) != len(model_stack): stop_recursion = True if stop_recursion: return [] # give empty list for "extend" for field in fields: field_name = field.name if isinstance(field, RelatedObject): field_name = field.field.related_query_name() if parent_field: full_field = "__".join([parent_field, field_name]) else: full_field = field_name if len([True for exclude in excludes if (exclude in full_field)]): continue # add to the list out_fields.append([full_field, field_name, Model, field.__class__]) if not stop_recursion and \ (isinstance(field, ForeignKey) or isinstance(field, OneToOneField) or \ isinstance(field, RelatedObject) or isinstance(field, ManyToManyField)): if isinstance(field, RelatedObject): RelModel = field.model #field_names.extend(get_fields(RelModel, full_field, True)) else: RelModel = 
field.related.parent_model out_fields.extend(get_fields(RelModel, full_field, list(model_stack))) return out_fields
Given a Model, return a list of lists of strings with important stuff: ... ['test_user__user__customuser', 'customuser', 'User', 'RelatedObject'] ['test_user__unique_id', 'unique_id', 'TestUser', 'CharField'] ['test_user__confirmed', 'confirmed', 'TestUser', 'BooleanField'] ...
Below is the instruction that describes the task: ### Input: Given a Model, return a list of lists of strings with important stuff: ... ['test_user__user__customuser', 'customuser', 'User', 'RelatedObject'] ['test_user__unique_id', 'unique_id', 'TestUser', 'CharField'] ['test_user__confirmed', 'confirmed', 'TestUser', 'BooleanField'] ... ### Response: def get_fields(Model, parent_field="", model_stack=None, stack_limit=2, excludes=['permissions', 'comment', 'content_type']): """ Given a Model, return a list of lists of strings with important stuff: ... ['test_user__user__customuser', 'customuser', 'User', 'RelatedObject'] ['test_user__unique_id', 'unique_id', 'TestUser', 'CharField'] ['test_user__confirmed', 'confirmed', 'TestUser', 'BooleanField'] ... """ out_fields = [] if model_stack is None: model_stack = [] # github.com/omab/python-social-auth/commit/d8637cec02422374e4102231488481170dc51057 if isinstance(Model, basestring): app_label, model_name = Model.split('.') Model = models.get_model(app_label, model_name) fields = Model._meta.fields + Model._meta.many_to_many + Model._meta.get_all_related_objects() model_stack.append(Model) # do a variety of checks to ensure recursion isnt being redundant stop_recursion = False if len(model_stack) > stack_limit: # rudimentary CustomUser->User->CustomUser->User detection if model_stack[-3] == model_stack[-1]: stop_recursion = True # stack depth shouldn't exceed x if len(model_stack) > 5: stop_recursion = True # we've hit a point where we are repeating models if len(set(model_stack)) != len(model_stack): stop_recursion = True if stop_recursion: return [] # give empty list for "extend" for field in fields: field_name = field.name if isinstance(field, RelatedObject): field_name = field.field.related_query_name() if parent_field: full_field = "__".join([parent_field, field_name]) else: full_field = field_name if len([True for exclude in excludes if (exclude in full_field)]): continue # add to the list 
out_fields.append([full_field, field_name, Model, field.__class__]) if not stop_recursion and \ (isinstance(field, ForeignKey) or isinstance(field, OneToOneField) or \ isinstance(field, RelatedObject) or isinstance(field, ManyToManyField)): if isinstance(field, RelatedObject): RelModel = field.model #field_names.extend(get_fields(RelModel, full_field, True)) else: RelModel = field.related.parent_model out_fields.extend(get_fields(RelModel, full_field, list(model_stack))) return out_fields
def weekdays(self): """ Returns the number of weekdays this item has. :return <int> """ if self.itemStyle() == self.ItemStyle.Group: out = 0 for i in range(self.childCount()): out += self.child(i).weekdays() return out else: dstart = self.dateStart().toPyDate() dend = self.dateEnd().toPyDate() return projex.dates.weekdays(dstart, dend)
Returns the number of weekdays this item has. :return <int>
Below is the instruction that describes the task: ### Input: Returns the number of weekdays this item has. :return <int> ### Response: def weekdays(self): """ Returns the number of weekdays this item has. :return <int> """ if self.itemStyle() == self.ItemStyle.Group: out = 0 for i in range(self.childCount()): out += self.child(i).weekdays() return out else: dstart = self.dateStart().toPyDate() dend = self.dateEnd().toPyDate() return projex.dates.weekdays(dstart, dend)
def _resolve_path(obj, path=None): """Resolve django-like path eg. object2__object3 for object Args: obj: The object the view is displaying. path (str, optional): Description Returns: An object at end of resolved path """ if path: for attr_name in path.split('__'): obj = getattr(obj, attr_name) return obj
Resolve django-like path eg. object2__object3 for object Args: obj: The object the view is displaying. path (str, optional): Description Returns: An object at end of resolved path
Below is the instruction that describes the task: ### Input: Resolve django-like path eg. object2__object3 for object Args: obj: The object the view is displaying. path (str, optional): Description Returns: An object at end of resolved path ### Response: def _resolve_path(obj, path=None): """Resolve django-like path eg. object2__object3 for object Args: obj: The object the view is displaying. path (str, optional): Description Returns: An object at end of resolved path """ if path: for attr_name in path.split('__'): obj = getattr(obj, attr_name) return obj
def get_performance_signatures(self, project, **params): ''' Gets a set of performance signatures associated with a project and time range ''' results = self._get_json(self.PERFORMANCE_SIGNATURES_ENDPOINT, project, **params) return PerformanceSignatureCollection(results)
Gets a set of performance signatures associated with a project and time range
Below is the instruction that describes the task: ### Input: Gets a set of performance signatures associated with a project and time range ### Response: def get_performance_signatures(self, project, **params): ''' Gets a set of performance signatures associated with a project and time range ''' results = self._get_json(self.PERFORMANCE_SIGNATURES_ENDPOINT, project, **params) return PerformanceSignatureCollection(results)
def _read_all_attribute_info(self): """Read all attribute properties, g, r, and z attributes""" num = copy.deepcopy(self._num_attrs) fname = copy.deepcopy(self.fname) out = fortran_cdf.inquire_all_attr(fname, num, len(fname)) status = out[0] names = out[1].astype('U') scopes = out[2] max_gentries = out[3] max_rentries = out[4] max_zentries = out[5] attr_nums = out[6] global_attrs_info = {} var_attrs_info = {} if status == 0: for name, scope, gentry, rentry, zentry, num in zip(names, scopes, max_gentries, max_rentries, max_zentries, attr_nums): name = ''.join(name) name = name.rstrip() nug = {} nug['scope'] = scope nug['max_gentry'] = gentry nug['max_rentry'] = rentry nug['max_zentry'] = zentry nug['attr_num'] = num flag = (gentry == 0) & (rentry == 0) & (zentry == 0) if not flag: if scope == 1: global_attrs_info[name] = nug elif scope == 2: var_attrs_info[name] = nug self.global_attrs_info = global_attrs_info self.var_attrs_info = var_attrs_info else: raise IOError(fortran_cdf.statusreporter(status))
Read all attribute properties, g, r, and z attributes
Below is the instruction that describes the task: ### Input: Read all attribute properties, g, r, and z attributes ### Response: def _read_all_attribute_info(self): """Read all attribute properties, g, r, and z attributes""" num = copy.deepcopy(self._num_attrs) fname = copy.deepcopy(self.fname) out = fortran_cdf.inquire_all_attr(fname, num, len(fname)) status = out[0] names = out[1].astype('U') scopes = out[2] max_gentries = out[3] max_rentries = out[4] max_zentries = out[5] attr_nums = out[6] global_attrs_info = {} var_attrs_info = {} if status == 0: for name, scope, gentry, rentry, zentry, num in zip(names, scopes, max_gentries, max_rentries, max_zentries, attr_nums): name = ''.join(name) name = name.rstrip() nug = {} nug['scope'] = scope nug['max_gentry'] = gentry nug['max_rentry'] = rentry nug['max_zentry'] = zentry nug['attr_num'] = num flag = (gentry == 0) & (rentry == 0) & (zentry == 0) if not flag: if scope == 1: global_attrs_info[name] = nug elif scope == 2: var_attrs_info[name] = nug self.global_attrs_info = global_attrs_info self.var_attrs_info = var_attrs_info else: raise IOError(fortran_cdf.statusreporter(status))
def attr_membership(attr_val, value_set, attr_type=basestring, modifier_fn=lambda x: x): """ Helper function passed to netCDF4.Dataset.get_attributes_by_value Checks that `attr_val` exists, has the same type as `attr_type`, and is contained in `value_set` attr_val: The value of the attribute being checked attr_type: A type object that the `attr_val` is expected to have the same type as. If the type is not the same, a warning is issued and the code attempts to cast `attr_val` to the expected type. value_set: The set against which membership for `attr_val` is tested modifier_fn: A function to apply to attr_val prior to applying the set membership test """ if attr_val is None: return False if not isinstance(attr_val, attr_type): warnings.warn("Attribute is of type {}, {} expected. " "Attempting to cast to expected type.".format(type(attr_val), attr_type)) try: # if the expected type is basestring, try casting to unicode type # since basestring can't be instantiated if attr_type is basestring: new_attr_val = six.text_type(attr_val) else: new_attr_val = attr_type(attr_val) # catch casting errors except (ValueError, UnicodeEncodeError) as e: warnings.warn("Could not cast to type {}".format(attr_type)) return False else: new_attr_val = attr_val try: is_in_set = modifier_fn(new_attr_val) in value_set except Exception as e: warnings.warn('Could not apply modifier function {} to value: ' ' {}'.format(modifier_fn, e.msg)) return False return is_in_set
Helper function passed to netCDF4.Dataset.get_attributes_by_value Checks that `attr_val` exists, has the same type as `attr_type`, and is contained in `value_set` attr_val: The value of the attribute being checked attr_type: A type object that the `attr_val` is expected to have the same type as. If the type is not the same, a warning is issued and the code attempts to cast `attr_val` to the expected type. value_set: The set against which membership for `attr_val` is tested modifier_fn: A function to apply to attr_val prior to applying the set membership test
Below is the instruction that describes the task: ### Input: Helper function passed to netCDF4.Dataset.get_attributes_by_value Checks that `attr_val` exists, has the same type as `attr_type`, and is contained in `value_set` attr_val: The value of the attribute being checked attr_type: A type object that the `attr_val` is expected to have the same type as. If the type is not the same, a warning is issued and the code attempts to cast `attr_val` to the expected type. value_set: The set against which membership for `attr_val` is tested modifier_fn: A function to apply to attr_val prior to applying the set membership test ### Response: def attr_membership(attr_val, value_set, attr_type=basestring, modifier_fn=lambda x: x): """ Helper function passed to netCDF4.Dataset.get_attributes_by_value Checks that `attr_val` exists, has the same type as `attr_type`, and is contained in `value_set` attr_val: The value of the attribute being checked attr_type: A type object that the `attr_val` is expected to have the same type as. If the type is not the same, a warning is issued and the code attempts to cast `attr_val` to the expected type. value_set: The set against which membership for `attr_val` is tested modifier_fn: A function to apply to attr_val prior to applying the set membership test """ if attr_val is None: return False if not isinstance(attr_val, attr_type): warnings.warn("Attribute is of type {}, {} expected. 
" "Attempting to cast to expected type.".format(type(attr_val), attr_type)) try: # if the expected type is basestring, try casting to unicode type # since basestring can't be instantiated if attr_type is basestring: new_attr_val = six.text_type(attr_val) else: new_attr_val = attr_type(attr_val) # catch casting errors except (ValueError, UnicodeEncodeError) as e: warnings.warn("Could not cast to type {}".format(attr_type)) return False else: new_attr_val = attr_val try: is_in_set = modifier_fn(new_attr_val) in value_set except Exception as e: warnings.warn('Could not apply modifier function {} to value: ' ' {}'.format(modifier_fn, e.msg)) return False return is_in_set
def player_move(board): '''Shows the board to the player on the console and asks them to make a move.''' print(board, end='\n\n') x, y = input('Enter move (e.g. 2b): ') print() return int(x) - 1, ord(y) - ord('a')
Shows the board to the player on the console and asks them to make a move.
Below is the instruction that describes the task: ### Input: Shows the board to the player on the console and asks them to make a move. ### Response: def player_move(board): '''Shows the board to the player on the console and asks them to make a move.''' print(board, end='\n\n') x, y = input('Enter move (e.g. 2b): ') print() return int(x) - 1, ord(y) - ord('a')
def approximate_spectral_radius(A, tol=0.01, maxiter=15, restart=5, symmetric=None, initial_guess=None, return_vector=False): """Approximate the spectral radius of a matrix. Parameters ---------- A : {dense or sparse matrix} E.g. csr_matrix, csc_matrix, ndarray, etc. tol : {scalar} Relative tolerance of approximation, i.e., the error divided by the approximate spectral radius is compared to tol. maxiter : {integer} Maximum number of iterations to perform restart : {integer} Number of restarted Arnoldi processes. For example, a value of 0 will run Arnoldi once, for maxiter iterations, and a value of 1 will restart Arnoldi once, using the maximal eigenvector from the first Arnoldi process as the initial guess. symmetric : {boolean} True - if A is symmetric Lanczos iteration is used (more efficient) False - if A is non-symmetric Arnoldi iteration is used (less efficient) initial_guess : {array|None} If n x 1 array, then use as initial guess for Arnoldi/Lanczos. If None, then use a random initial guess. return_vector : {boolean} True - return an approximate dominant eigenvector, in addition to the spectral radius. False - Do not return the approximate dominant eigenvector Returns ------- An approximation to the spectral radius of A, and if return_vector=True, then also return the approximate dominant eigenvector Notes ----- The spectral radius is approximated by looking at the Ritz eigenvalues. Arnoldi iteration (or Lanczos) is used to project the matrix A onto a Krylov subspace: H = Q* A Q. The eigenvalues of H (i.e. the Ritz eigenvalues) should represent the eigenvalues of A in the sense that the minimum and maximum values are usually well matched (for the symmetric case it is true since the eigenvalues are real). References ---------- .. [1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. "Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide", SIAM, Philadelphia, 2000. 
Examples -------- >>> from pyamg.util.linalg import approximate_spectral_radius >>> import numpy as np >>> from scipy.linalg import eigvals, norm >>> A = np.array([[1.,0.],[0.,1.]]) >>> print approximate_spectral_radius(A,maxiter=3) 1.0 >>> print max([norm(x) for x in eigvals(A)]) 1.0 """ if not hasattr(A, 'rho') or return_vector: # somehow more restart causes a nonsymmetric case to fail...look at # this what about A.dtype=int? convert somehow? # The use of the restart vector v0 requires that the full Krylov # subspace V be stored. So, set symmetric to False. symmetric = False if maxiter < 1: raise ValueError('expected maxiter > 0') if restart < 0: raise ValueError('expected restart >= 0') if A.dtype == int: raise ValueError('expected A to be float (complex or real)') if A.shape[0] != A.shape[1]: raise ValueError('expected square A') if initial_guess is None: v0 = sp.rand(A.shape[1], 1) if A.dtype == complex: v0 = v0 + 1.0j * sp.rand(A.shape[1], 1) else: if initial_guess.shape[0] != A.shape[0]: raise ValueError('initial_guess and A must have same shape') if (len(initial_guess.shape) > 1) and (initial_guess.shape[1] > 1): raise ValueError('initial_guess must be an (n,1) or\ (n,) vector') v0 = initial_guess.reshape(-1, 1) v0 = np.array(v0, dtype=A.dtype) for j in range(restart+1): [evect, ev, H, V, breakdown_flag] =\ _approximate_eigenvalues(A, tol, maxiter, symmetric, initial_guess=v0) # Calculate error in dominant eigenvector nvecs = ev.shape[0] max_index = np.abs(ev).argmax() error = H[nvecs, nvecs-1]*evect[-1, max_index] # error is a fast way of calculating the following line # error2 = ( A - ev[max_index]*sp.mat( # sp.eye(A.shape[0],A.shape[1])) )*\ # ( sp.mat(sp.hstack(V[:-1]))*\ # evect[:,max_index].reshape(-1,1) ) # print str(error) + " " + str(sp.linalg.norm(e2)) if (np.abs(error)/np.abs(ev[max_index]) < tol) or\ breakdown_flag: # halt if below relative tolerance v0 = np.dot(np.hstack(V[:-1]), evect[:, max_index].reshape(-1, 1)) break else: v0 = 
np.dot(np.hstack(V[:-1]), evect[:, max_index].reshape(-1, 1)) # end j-loop rho = np.abs(ev[max_index]) if sparse.isspmatrix(A): A.rho = rho if return_vector: return (rho, v0) else: return rho else: return A.rho
Approximate the spectral radius of a matrix. Parameters ---------- A : {dense or sparse matrix} E.g. csr_matrix, csc_matrix, ndarray, etc. tol : {scalar} Relative tolerance of approximation, i.e., the error divided by the approximate spectral radius is compared to tol. maxiter : {integer} Maximum number of iterations to perform restart : {integer} Number of restarted Arnoldi processes. For example, a value of 0 will run Arnoldi once, for maxiter iterations, and a value of 1 will restart Arnoldi once, using the maximal eigenvector from the first Arnoldi process as the initial guess. symmetric : {boolean} True - if A is symmetric Lanczos iteration is used (more efficient) False - if A is non-symmetric Arnoldi iteration is used (less efficient) initial_guess : {array|None} If n x 1 array, then use as initial guess for Arnoldi/Lanczos. If None, then use a random initial guess. return_vector : {boolean} True - return an approximate dominant eigenvector, in addition to the spectral radius. False - Do not return the approximate dominant eigenvector Returns ------- An approximation to the spectral radius of A, and if return_vector=True, then also return the approximate dominant eigenvector Notes ----- The spectral radius is approximated by looking at the Ritz eigenvalues. Arnoldi iteration (or Lanczos) is used to project the matrix A onto a Krylov subspace: H = Q* A Q. The eigenvalues of H (i.e. the Ritz eigenvalues) should represent the eigenvalues of A in the sense that the minimum and maximum values are usually well matched (for the symmetric case it is true since the eigenvalues are real). References ---------- .. [1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. "Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide", SIAM, Philadelphia, 2000. 
Examples -------- >>> from pyamg.util.linalg import approximate_spectral_radius >>> import numpy as np >>> from scipy.linalg import eigvals, norm >>> A = np.array([[1.,0.],[0.,1.]]) >>> print approximate_spectral_radius(A,maxiter=3) 1.0 >>> print max([norm(x) for x in eigvals(A)]) 1.0
Below is the instruction that describes the task: ### Input: Approximate the spectral radius of a matrix. Parameters ---------- A : {dense or sparse matrix} E.g. csr_matrix, csc_matrix, ndarray, etc. tol : {scalar} Relative tolerance of approximation, i.e., the error divided by the approximate spectral radius is compared to tol. maxiter : {integer} Maximum number of iterations to perform restart : {integer} Number of restarted Arnoldi processes. For example, a value of 0 will run Arnoldi once, for maxiter iterations, and a value of 1 will restart Arnoldi once, using the maximal eigenvector from the first Arnoldi process as the initial guess. symmetric : {boolean} True - if A is symmetric Lanczos iteration is used (more efficient) False - if A is non-symmetric Arnoldi iteration is used (less efficient) initial_guess : {array|None} If n x 1 array, then use as initial guess for Arnoldi/Lanczos. If None, then use a random initial guess. return_vector : {boolean} True - return an approximate dominant eigenvector, in addition to the spectral radius. False - Do not return the approximate dominant eigenvector Returns ------- An approximation to the spectral radius of A, and if return_vector=True, then also return the approximate dominant eigenvector Notes ----- The spectral radius is approximated by looking at the Ritz eigenvalues. Arnoldi iteration (or Lanczos) is used to project the matrix A onto a Krylov subspace: H = Q* A Q. The eigenvalues of H (i.e. the Ritz eigenvalues) should represent the eigenvalues of A in the sense that the minimum and maximum values are usually well matched (for the symmetric case it is true since the eigenvalues are real). References ---------- .. [1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. "Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide", SIAM, Philadelphia, 2000. 
Examples -------- >>> from pyamg.util.linalg import approximate_spectral_radius >>> import numpy as np >>> from scipy.linalg import eigvals, norm >>> A = np.array([[1.,0.],[0.,1.]]) >>> print approximate_spectral_radius(A,maxiter=3) 1.0 >>> print max([norm(x) for x in eigvals(A)]) 1.0 ### Response: def approximate_spectral_radius(A, tol=0.01, maxiter=15, restart=5, symmetric=None, initial_guess=None, return_vector=False): """Approximate the spectral radius of a matrix. Parameters ---------- A : {dense or sparse matrix} E.g. csr_matrix, csc_matrix, ndarray, etc. tol : {scalar} Relative tolerance of approximation, i.e., the error divided by the approximate spectral radius is compared to tol. maxiter : {integer} Maximum number of iterations to perform restart : {integer} Number of restarted Arnoldi processes. For example, a value of 0 will run Arnoldi once, for maxiter iterations, and a value of 1 will restart Arnoldi once, using the maximal eigenvector from the first Arnoldi process as the initial guess. symmetric : {boolean} True - if A is symmetric Lanczos iteration is used (more efficient) False - if A is non-symmetric Arnoldi iteration is used (less efficient) initial_guess : {array|None} If n x 1 array, then use as initial guess for Arnoldi/Lanczos. If None, then use a random initial guess. return_vector : {boolean} True - return an approximate dominant eigenvector, in addition to the spectral radius. False - Do not return the approximate dominant eigenvector Returns ------- An approximation to the spectral radius of A, and if return_vector=True, then also return the approximate dominant eigenvector Notes ----- The spectral radius is approximated by looking at the Ritz eigenvalues. Arnoldi iteration (or Lanczos) is used to project the matrix A onto a Krylov subspace: H = Q* A Q. The eigenvalues of H (i.e. 
the Ritz eigenvalues) should represent the eigenvalues of A in the sense that the minimum and maximum values are usually well matched (for the symmetric case it is true since the eigenvalues are real). References ---------- .. [1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. "Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide", SIAM, Philadelphia, 2000. Examples -------- >>> from pyamg.util.linalg import approximate_spectral_radius >>> import numpy as np >>> from scipy.linalg import eigvals, norm >>> A = np.array([[1.,0.],[0.,1.]]) >>> print approximate_spectral_radius(A,maxiter=3) 1.0 >>> print max([norm(x) for x in eigvals(A)]) 1.0 """ if not hasattr(A, 'rho') or return_vector: # somehow more restart causes a nonsymmetric case to fail...look at # this what about A.dtype=int? convert somehow? # The use of the restart vector v0 requires that the full Krylov # subspace V be stored. So, set symmetric to False. symmetric = False if maxiter < 1: raise ValueError('expected maxiter > 0') if restart < 0: raise ValueError('expected restart >= 0') if A.dtype == int: raise ValueError('expected A to be float (complex or real)') if A.shape[0] != A.shape[1]: raise ValueError('expected square A') if initial_guess is None: v0 = sp.rand(A.shape[1], 1) if A.dtype == complex: v0 = v0 + 1.0j * sp.rand(A.shape[1], 1) else: if initial_guess.shape[0] != A.shape[0]: raise ValueError('initial_guess and A must have same shape') if (len(initial_guess.shape) > 1) and (initial_guess.shape[1] > 1): raise ValueError('initial_guess must be an (n,1) or\ (n,) vector') v0 = initial_guess.reshape(-1, 1) v0 = np.array(v0, dtype=A.dtype) for j in range(restart+1): [evect, ev, H, V, breakdown_flag] =\ _approximate_eigenvalues(A, tol, maxiter, symmetric, initial_guess=v0) # Calculate error in dominant eigenvector nvecs = ev.shape[0] max_index = np.abs(ev).argmax() error = H[nvecs, nvecs-1]*evect[-1, max_index] # error is a fast way of calculating 
the following line # error2 = ( A - ev[max_index]*sp.mat( # sp.eye(A.shape[0],A.shape[1])) )*\ # ( sp.mat(sp.hstack(V[:-1]))*\ # evect[:,max_index].reshape(-1,1) ) # print str(error) + " " + str(sp.linalg.norm(e2)) if (np.abs(error)/np.abs(ev[max_index]) < tol) or\ breakdown_flag: # halt if below relative tolerance v0 = np.dot(np.hstack(V[:-1]), evect[:, max_index].reshape(-1, 1)) break else: v0 = np.dot(np.hstack(V[:-1]), evect[:, max_index].reshape(-1, 1)) # end j-loop rho = np.abs(ev[max_index]) if sparse.isspmatrix(A): A.rho = rho if return_vector: return (rho, v0) else: return rho else: return A.rho
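The pyamg routine above projects A onto a Krylov subspace via Arnoldi/Lanczos; as a hypothetical illustration of the underlying idea only, here is a plain-Python power iteration on a small dense matrix (the function name and the simple Rayleigh-quotient estimate are mine, not pyamg's, and this sketch lacks all of the routine's robustness features):

```python
import math

def power_iteration_radius(A, iters=100):
    """Estimate the spectral radius of a small dense matrix A
    (given as a list of rows) by repeated matrix-vector products.
    Only a sketch of the idea; the real routine uses a projected
    Krylov subspace with restarts and tolerance checks."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            return 0.0
        v = [x / norm for x in w]
    # Rayleigh-quotient-style estimate |v^T A v| with unit v
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return abs(sum(v[i] * Av[i] for i in range(n)))

# For the 2x2 identity the spectral radius is 1, matching the doctest above
print(power_iteration_radius([[1.0, 0.0], [0.0, 1.0]]))  # ≈ 1.0
```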
def _convert_boolean_config_value(config, name, default=True): """Convert the named field to bool. The current value should be one of the strings "yes" or "no". It will be replaced with its boolean counterpart. If the field is not present in the config object, the default value is used. :param config: the config section where to set the option :type config: configobj.ConfigObj :param name: the name of the option to convert :type name: str :param default: the default value to use if the option was not previously set :type default: bool :returns: None """ if name not in config: config[name] = default elif config[name] == "yes": config[name] = True elif config[name] == "no": config[name] = False else: raise ValueError("Error in config file\nInvalid value for %s " "parameter\nPossible values: yes, no" % name)
Convert the named field to bool. The current value should be one of the strings "yes" or "no". It will be replaced with its boolean counterpart. If the field is not present in the config object, the default value is used. :param config: the config section where to set the option :type config: configobj.ConfigObj :param name: the name of the option to convert :type name: str :param default: the default value to use if the option was not previously set :type default: bool :returns: None
Below is the the instruction that describes the task: ### Input: Convert the named field to bool. The current value should be one of the strings "yes" or "no". It will be replaced with its boolean counterpart. If the field is not present in the config object, the default value is used. :param config: the config section where to set the option :type config: configobj.ConfigObj :param name: the name of the option to convert :type name: str :param default: the default value to use if the option was not previously set :type default: bool :returns: None ### Response: def _convert_boolean_config_value(config, name, default=True): """Convert the named field to bool. The current value should be one of the strings "yes" or "no". It will be replaced with its boolean counterpart. If the field is not present in the config object, the default value is used. :param config: the config section where to set the option :type config: configobj.ConfigObj :param name: the name of the option to convert :type name: str :param default: the default value to use if the option was not previously set :type default: bool :returns: None """ if name not in config: config[name] = default elif config[name] == "yes": config[name] = True elif config[name] == "no": config[name] = False else: raise ValueError("Error in config file\nInvalid value for %s " "parameter\nPossible values: yes, no" % name)
def wait(self, wait_time=0): """ Returns a :class:`~retask.task.Task` object from the queue. Returns ``False`` if it times out. :arg wait_time: Time in seconds to wait, default is infinite. :return: :class:`~retask.task.Task` object from the queue or False if it times out. .. doctest:: >>> from retask import Queue >>> q = Queue('test') >>> q.connect() True >>> task = q.wait() >>> print task.data {u'name': u'kushal'} .. note:: This is a blocking call, you can specify the wait_time argument for timeout. """ if not self.connected: raise ConnectionError('Queue is not connected') data = self.rdb.brpop(self._name, wait_time) if data: task = Task() task.__dict__ = json.loads(data[1]) return task else: return False
Returns a :class:`~retask.task.Task` object from the queue. Returns ``False`` if it times out. :arg wait_time: Time in seconds to wait, default is infinite. :return: :class:`~retask.task.Task` object from the queue or False if it times out. .. doctest:: >>> from retask import Queue >>> q = Queue('test') >>> q.connect() True >>> task = q.wait() >>> print task.data {u'name': u'kushal'} .. note:: This is a blocking call, you can specify the wait_time argument for timeout.
Below is the the instruction that describes the task: ### Input: Returns a :class:`~retask.task.Task` object from the queue. Returns ``False`` if it times out. :arg wait_time: Time in seconds to wait, default is infinite. :return: :class:`~retask.task.Task` object from the queue or False if it times out. .. doctest:: >>> from retask import Queue >>> q = Queue('test') >>> q.connect() True >>> task = q.wait() >>> print task.data {u'name': u'kushal'} .. note:: This is a blocking call, you can specify the wait_time argument for timeout. ### Response: def wait(self, wait_time=0): """ Returns a :class:`~retask.task.Task` object from the queue. Returns ``False`` if it times out. :arg wait_time: Time in seconds to wait, default is infinite. :return: :class:`~retask.task.Task` object from the queue or False if it times out. .. doctest:: >>> from retask import Queue >>> q = Queue('test') >>> q.connect() True >>> task = q.wait() >>> print task.data {u'name': u'kushal'} .. note:: This is a blocking call, you can specify the wait_time argument for timeout. """ if not self.connected: raise ConnectionError('Queue is not connected') data = self.rdb.brpop(self._name, wait_time) if data: task = Task() task.__dict__ = json.loads(data[1]) return task else: return False
def _get_site_type_dummy_variables(self, sites): """ Get site type dummy variables, three different site classes, based on the shear wave velocity intervals in the uppermost 30 m, Vs30, according to the NEHRP: class A-B: Vs30 > 760 m/s class C: Vs30 = 360 − 760 m/s class D: Vs30 < 360 m/s """ S = np.zeros(len(sites.vs30)) SS = np.zeros(len(sites.vs30)) # Class D; Vs30 < 360 m/s. idx = (sites.vs30 < 360.0) SS[idx] = 1.0 # Class C; 360 m/s <= Vs30 < 760 m/s (NEHRP). idx = (sites.vs30 >= 360.0) & (sites.vs30 < 760) S[idx] = 1.0 return S, SS
Get site type dummy variables, three different site classes, based on the shear wave velocity intervals in the uppermost 30 m, Vs30, according to the NEHRP: class A-B: Vs30 > 760 m/s class C: Vs30 = 360 − 760 m/s class D: Vs30 < 360 m/s
Below is the the instruction that describes the task: ### Input: Get site type dummy variables, three different site classes, based on the shear wave velocity intervals in the uppermost 30 m, Vs30, according to the NEHRP: class A-B: Vs30 > 760 m/s class C: Vs30 = 360 − 760 m/s class D: Vs30 < 360 m/s ### Response: def _get_site_type_dummy_variables(self, sites): """ Get site type dummy variables, three different site classes, based on the shear wave velocity intervals in the uppermost 30 m, Vs30, according to the NEHRP: class A-B: Vs30 > 760 m/s class C: Vs30 = 360 − 760 m/s class D: Vs30 < 360 m/s """ S = np.zeros(len(sites.vs30)) SS = np.zeros(len(sites.vs30)) # Class D; Vs30 < 360 m/s. idx = (sites.vs30 < 360.0) SS[idx] = 1.0 # Class C; 360 m/s <= Vs30 < 760 m/s (NEHRP). idx = (sites.vs30 >= 360.0) & (sites.vs30 < 760) S[idx] = 1.0 return S, SS
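The dummy-variable logic above can be sketched without numpy; this pure-Python version (name mine) shows which NEHRP class each flag marks, per the docstring's intervals:

```python
def site_class_flags(vs30_values):
    """Pure-Python sketch of the dummy-variable logic: S marks
    NEHRP class C (360 <= Vs30 < 760 m/s), SS marks class D
    (Vs30 < 360 m/s); class A-B (Vs30 >= 760) gets neither flag."""
    S = [1.0 if 360.0 <= v < 760.0 else 0.0 for v in vs30_values]
    SS = [1.0 if v < 360.0 else 0.0 for v in vs30_values]
    return S, SS

S, SS = site_class_flags([200.0, 500.0, 800.0])
print(S, SS)  # [0.0, 1.0, 0.0] [1.0, 0.0, 0.0]
```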
def unpack(endian, fmt, data): """Unpack a byte string to the given format. If the byte string contains more bytes than required for the given format, the function returns a tuple of values. """ if fmt == 's': # read data as an array of chars val = struct.unpack(''.join([endian, str(len(data)), 's']), data)[0] else: # read a number of values num = len(data) // struct.calcsize(fmt) val = struct.unpack(''.join([endian, str(num), fmt]), data) if len(val) == 1: val = val[0] return val
Unpack a byte string to the given format. If the byte string contains more bytes than required for the given format, the function returns a tuple of values.
Below is the the instruction that describes the task: ### Input: Unpack a byte string to the given format. If the byte string contains more bytes than required for the given format, the function returns a tuple of values. ### Response: def unpack(endian, fmt, data): """Unpack a byte string to the given format. If the byte string contains more bytes than required for the given format, the function returns a tuple of values. """ if fmt == 's': # read data as an array of chars val = struct.unpack(''.join([endian, str(len(data)), 's']), data)[0] else: # read a number of values num = len(data) // struct.calcsize(fmt) val = struct.unpack(''.join([endian, str(num), fmt]), data) if len(val) == 1: val = val[0] return val
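The helper above wraps `struct.unpack`; this self-contained snippet shows the two branches it distinguishes (reading one byte string with 's' vs. a run of numeric values sized via `calcsize`):

```python
import struct

# Numeric branch: compute how many values fit, then unpack them all.
data = struct.pack('<3i', 1, 2, 3)          # three little-endian int32s
num = len(data) // struct.calcsize('i')     # 12 bytes / 4 -> 3 values
values = struct.unpack('<' + str(num) + 'i', data)
print(values)  # (1, 2, 3)

# 's' branch: the whole buffer is read as a single byte string.
text = struct.unpack('<5s', b'hello')[0]
print(text)  # b'hello'
```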
def parse_requirements(path): """Rudimentary parser for the `requirements.txt` file We just want to separate regular packages from links to pass them to the `install_requires` and `dependency_links` params of the `setup()` function properly. """ try: requirements = map(str.strip, local_file(path).splitlines()) except IOError: raise RuntimeError("Couldn't find the `requirements.txt' file :(") links = [] pkgs = [] for req in requirements: if not req: continue if 'http:' in req or 'https:' in req: links.append(req) name, version = re.findall(r"#egg=([^\-]+)-(.+$)", req)[0] pkgs.append('{0}=={1}'.format(name, version)) else: pkgs.append(req) return pkgs, links
Rudimentary parser for the `requirements.txt` file We just want to separate regular packages from links to pass them to the `install_requires` and `dependency_links` params of the `setup()` function properly.
Below is the the instruction that describes the task: ### Input: Rudimentary parser for the `requirements.txt` file We just want to separate regular packages from links to pass them to the `install_requires` and `dependency_links` params of the `setup()` function properly. ### Response: def parse_requirements(path): """Rudimentary parser for the `requirements.txt` file We just want to separate regular packages from links to pass them to the `install_requires` and `dependency_links` params of the `setup()` function properly. """ try: requirements = map(str.strip, local_file(path).splitlines()) except IOError: raise RuntimeError("Couldn't find the `requirements.txt' file :(") links = [] pkgs = [] for req in requirements: if not req: continue if 'http:' in req or 'https:' in req: links.append(req) name, version = re.findall(r"#egg=([^\-]+)-(.+$)", req)[0] pkgs.append('{0}=={1}'.format(name, version)) else: pkgs.append(req) return pkgs, links
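The link branch above extracts a package name and version from a pip-style `#egg=<name>-<version>` fragment; this standalone snippet shows that regex on a sample URL (the URL is made up for illustration), using a raw string so the pattern needs no backslash escapes:

```python
import re

req = "https://example.com/repo/archive.tar.gz#egg=mypkg-1.2.3"
# Group 1 grabs everything up to the first hyphen (the name),
# group 2 takes the rest of the line (the version).
name, version = re.findall(r"#egg=([^\-]+)-(.+$)", req)[0]
print(name, version)  # mypkg 1.2.3
```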
def unquote(s): """Unquote the indicated string.""" # Ignore the left- and rightmost chars (which should be quotes). # Use the Python engine to decode the escape sequence i, N = 1, len(s) - 1 ret = [] while i < N: if s[i] == '\\' and i < N - 1: ret.append(UNQUOTE_MAP.get(s[i+1], s[i+1])) i += 2 else: ret.append(s[i]) i += 1 return ''.join(ret)
Unquote the indicated string.
Below is the the instruction that describes the task: ### Input: Unquote the indicated string. ### Response: def unquote(s): """Unquote the indicated string.""" # Ignore the left- and rightmost chars (which should be quotes). # Use the Python engine to decode the escape sequence i, N = 1, len(s) - 1 ret = [] while i < N: if s[i] == '\\' and i < N - 1: ret.append(UNQUOTE_MAP.get(s[i+1], s[i+1])) i += 2 else: ret.append(s[i]) i += 1 return ''.join(ret)
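A self-contained sketch of the helper above; `UNQUOTE_MAP` is not shown in the original, so the mapping here is an assumption with only a few typical entries:

```python
# Assumed escape table -- the original module defines its own UNQUOTE_MAP.
UNQUOTE_MAP = {'n': '\n', 't': '\t', '"': '"', '\\': '\\'}

def unquote(s):
    # Skip the surrounding quote characters and expand \x escapes.
    i, N = 1, len(s) - 1
    ret = []
    while i < N:
        if s[i] == '\\' and i < N - 1:
            ret.append(UNQUOTE_MAP.get(s[i + 1], s[i + 1]))
            i += 2
        else:
            ret.append(s[i])
            i += 1
    return ''.join(ret)

print(repr(unquote('"a\\nb"')))  # 'a\nb' -- the escape becomes a real newline
```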
def from_protobuf(cls, msg): """Create an instance from a protobuf message.""" if not isinstance(msg, cls._protobuf_cls): raise TypeError("Expected message of type " "%r" % cls._protobuf_cls.__name__) kwargs = {k: getattr(msg, k) for k in cls._get_params()} return cls(**kwargs)
Create an instance from a protobuf message.
Below is the the instruction that describes the task: ### Input: Create an instance from a protobuf message. ### Response: def from_protobuf(cls, msg): """Create an instance from a protobuf message.""" if not isinstance(msg, cls._protobuf_cls): raise TypeError("Expected message of type " "%r" % cls._protobuf_cls.__name__) kwargs = {k: getattr(msg, k) for k in cls._get_params()} return cls(**kwargs)
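The classmethod above reads named fields off a generated protobuf message; this sketch substitutes stand-in classes (`FakeMsg` and `Model` are hypothetical, not from any real protobuf schema) to show the flow:

```python
class FakeMsg:
    # Stand-in for a generated protobuf message with two fields.
    def __init__(self, name, count):
        self.name = name
        self.count = count

class Model:
    _protobuf_cls = FakeMsg

    def __init__(self, name, count):
        self.name = name
        self.count = count

    @classmethod
    def _get_params(cls):
        return ['name', 'count']

    @classmethod
    def from_protobuf(cls, msg):
        if not isinstance(msg, cls._protobuf_cls):
            raise TypeError("Expected message of type "
                            "%r" % cls._protobuf_cls.__name__)
        kwargs = {k: getattr(msg, k) for k in cls._get_params()}
        return cls(**kwargs)

m = Model.from_protobuf(FakeMsg('job', 3))
print(m.name, m.count)  # job 3
```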
def kwinsert(clas,pool_or_cursor,**kwargs): "kwargs version of insert" returning = kwargs.pop('returning',None) fields,vals = zip(*kwargs.items()) # note: don't do SpecialField resolution here; clas.insert takes care of it return clas.insert(pool_or_cursor,fields,vals,returning=returning)
kwargs version of insert
Below is the the instruction that describes the task: ### Input: kwargs version of insert ### Response: def kwinsert(clas,pool_or_cursor,**kwargs): "kwargs version of insert" returning = kwargs.pop('returning',None) fields,vals = zip(*kwargs.items()) # note: don't do SpecialField resolution here; clas.insert takes care of it return clas.insert(pool_or_cursor,fields,vals,returning=returning)
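The kwargs-splitting trick in `kwinsert` can be shown in isolation: `zip(*kwargs.items())` turns a dict into parallel field/value tuples (order follows dict insertion order), after `pop` has pulled out the special `returning` key:

```python
def split_kwargs(**kwargs):
    # Mirrors the first two lines of kwinsert above, without the DB call.
    returning = kwargs.pop('returning', None)
    fields, vals = zip(*kwargs.items())
    return fields, vals, returning

fields, vals, returning = split_kwargs(name='ada', age=36, returning='id')
print(fields, vals, returning)  # ('name', 'age') ('ada', 36) id
```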
async def _async_wait_for_process( future_process: Any, out: Optional[Union[TeeCapture, IO[str]]] = sys.stdout, err: Optional[Union[TeeCapture, IO[str]]] = sys.stderr ) -> CommandOutput: """Awaits the creation and completion of an asynchronous process. Args: future_process: The eventually created process. out: Where to write stuff emitted by the process' stdout. err: Where to write stuff emitted by the process' stderr. Returns: A (captured output, captured error output, return code) triplet. """ process = await future_process future_output = _async_forward(process.stdout, out) future_err_output = _async_forward(process.stderr, err) output, err_output = await asyncio.gather(future_output, future_err_output) await process.wait() return CommandOutput(output, err_output, process.returncode)
Awaits the creation and completion of an asynchronous process. Args: future_process: The eventually created process. out: Where to write stuff emitted by the process' stdout. err: Where to write stuff emitted by the process' stderr. Returns: A (captured output, captured error output, return code) triplet.
Below is the the instruction that describes the task: ### Input: Awaits the creation and completion of an asynchronous process. Args: future_process: The eventually created process. out: Where to write stuff emitted by the process' stdout. err: Where to write stuff emitted by the process' stderr. Returns: A (captured output, captured error output, return code) triplet. ### Response: async def _async_wait_for_process( future_process: Any, out: Optional[Union[TeeCapture, IO[str]]] = sys.stdout, err: Optional[Union[TeeCapture, IO[str]]] = sys.stderr ) -> CommandOutput: """Awaits the creation and completion of an asynchronous process. Args: future_process: The eventually created process. out: Where to write stuff emitted by the process' stdout. err: Where to write stuff emitted by the process' stderr. Returns: A (captured output, captured error output, return code) triplet. """ process = await future_process future_output = _async_forward(process.stdout, out) future_err_output = _async_forward(process.stderr, err) output, err_output = await asyncio.gather(future_output, future_err_output) await process.wait() return CommandOutput(output, err_output, process.returncode)
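The helper above forwards stdout and stderr concurrently with `asyncio.gather`; the same pattern with two toy coroutines (no subprocess needed) -- note `gather` returns results in argument order, not completion order:

```python
import asyncio

async def produce(label, delay):
    await asyncio.sleep(delay)
    return label

async def main():
    # Like gathering future_output and future_err_output above.
    out, err = await asyncio.gather(produce('out', 0.01),
                                    produce('err', 0.02))
    return out, err

print(asyncio.run(main()))  # ('out', 'err')
```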
def linkify(text, attrs={}): """ Convert URL-like and email-like strings into links. """ def separate_parentheses(s): start = re_find(r'^\(*', s) end = re_find(r'\)*$', s) n = min(len(start), len(end)) if n: return s[:n], s[n:-n], s[-n:] else: return '', s, '' def link_repl(url, proto='http://'): opening, url, closing = separate_parentheses(url) punct = re_find(punct_re, url) if punct: url = url[:-len(punct)] if re.search(proto_re, url): href = url else: href = proto + url href = escape_url(href) repl = u'{0!s}<a href="{1!s}"{2!s}>{3!s}</a>{4!s}{5!s}' return repl.format(opening, href, attrs_text, url, punct, closing) def repl(match): matches = match.groupdict() if matches['url']: return link_repl(matches['url']) else: return link_repl(matches['email'], proto='mailto:') # Prepare attrs attr = ' {0!s}="{1!s}"' attrs_text = ''.join(starmap(attr.format, attrs.items())) # Make replaces return re.sub(combined_re, repl, force_unicode(text))
Convert URL-like and email-like strings into links.
Below is the the instruction that describes the task: ### Input: Convert URL-like and email-like strings into links. ### Response: def linkify(text, attrs={}): """ Convert URL-like and email-like strings into links. """ def separate_parentheses(s): start = re_find(r'^\(*', s) end = re_find(r'\)*$', s) n = min(len(start), len(end)) if n: return s[:n], s[n:-n], s[-n:] else: return '', s, '' def link_repl(url, proto='http://'): opening, url, closing = separate_parentheses(url) punct = re_find(punct_re, url) if punct: url = url[:-len(punct)] if re.search(proto_re, url): href = url else: href = proto + url href = escape_url(href) repl = u'{0!s}<a href="{1!s}"{2!s}>{3!s}</a>{4!s}{5!s}' return repl.format(opening, href, attrs_text, url, punct, closing) def repl(match): matches = match.groupdict() if matches['url']: return link_repl(matches['url']) else: return link_repl(matches['email'], proto='mailto:') # Prepare attrs attr = ' {0!s}="{1!s}"' attrs_text = ''.join(starmap(attr.format, attrs.items())) # Make replaces return re.sub(combined_re, repl, force_unicode(text))
def remove_all(self, *tagnames): """ Remove all child elements whose tagname (e.g. 'a:p') appears in *tagnames*. """ for tagname in tagnames: matching = self.findall(qn(tagname)) for child in matching: self.remove(child)
Remove all child elements whose tagname (e.g. 'a:p') appears in *tagnames*.
Below is the the instruction that describes the task: ### Input: Remove all child elements whose tagname (e.g. 'a:p') appears in *tagnames*. ### Response: def remove_all(self, *tagnames): """ Remove all child elements whose tagname (e.g. 'a:p') appears in *tagnames*. """ for tagname in tagnames: matching = self.findall(qn(tagname)) for child in matching: self.remove(child)
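The same idea as `remove_all` above, sketched with the stdlib `xml.etree.ElementTree` and plain tag names (the original resolves namespaced names like 'a:p' through its `qn()` helper, which is not reproduced here):

```python
import xml.etree.ElementTree as ET

def remove_all(parent, *tagnames):
    # findall returns a list, so removing while iterating is safe.
    for tagname in tagnames:
        for child in parent.findall(tagname):
            parent.remove(child)

root = ET.fromstring('<r><p>1</p><p>2</p><b>3</b></r>')
remove_all(root, 'p')
print([c.tag for c in root])  # ['b']
```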
def return_value(self): """ Returns the value for this expectation or raises the proper exception. """ if self._raises: # Handle exceptions if inspect.isclass(self._raises): raise self._raises() else: raise self._raises else: if isinstance(self._returns, tuple): return tuple([x.value if isinstance(x, Variable) else x for x in self._returns]) return self._returns.value if isinstance(self._returns, Variable) \ else self._returns
Returns the value for this expectation or raises the proper exception.
Below is the the instruction that describes the task: ### Input: Returns the value for this expectation or raises the proper exception. ### Response: def return_value(self): """ Returns the value for this expectation or raises the proper exception. """ if self._raises: # Handle exceptions if inspect.isclass(self._raises): raise self._raises() else: raise self._raises else: if isinstance(self._returns, tuple): return tuple([x.value if isinstance(x, Variable) else x for x in self._returns]) return self._returns.value if isinstance(self._returns, Variable) \ else self._returns
def abs_area(max): """ Point area palette (continuous), with area proportional to value. Parameters ---------- max : float A number representing the maximum size Returns ------- out : function Palette function that takes a sequence of values in the range ``[0, 1]`` and returns values in the range ``[0, max]``. Examples -------- >>> x = np.arange(0, .8, .1)**2 >>> palette = abs_area(5) >>> palette(x) array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5]) Compared to :func:`area_pal`, :func:`abs_area` will handle values in the range ``[-1, 0]`` without returning ``np.nan``. And values whose absolute value is greater than 1 will be clipped to the maximum. """ def abs_area_palette(x): return rescale(np.sqrt(np.abs(x)), to=(0, max), _from=(0, 1)) return abs_area_palette
Point area palette (continuous), with area proportional to value. Parameters ---------- max : float A number representing the maximum size Returns ------- out : function Palette function that takes a sequence of values in the range ``[0, 1]`` and returns values in the range ``[0, max]``. Examples -------- >>> x = np.arange(0, .8, .1)**2 >>> palette = abs_area(5) >>> palette(x) array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5]) Compared to :func:`area_pal`, :func:`abs_area` will handle values in the range ``[-1, 0]`` without returning ``np.nan``. And values whose absolute value is greater than 1 will be clipped to the maximum.
Below is the the instruction that describes the task: ### Input: Point area palette (continuous), with area proportional to value. Parameters ---------- max : float A number representing the maximum size Returns ------- out : function Palette function that takes a sequence of values in the range ``[0, 1]`` and returns values in the range ``[0, max]``. Examples -------- >>> x = np.arange(0, .8, .1)**2 >>> palette = abs_area(5) >>> palette(x) array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5]) Compared to :func:`area_pal`, :func:`abs_area` will handle values in the range ``[-1, 0]`` without returning ``np.nan``. And values whose absolute value is greater than 1 will be clipped to the maximum. ### Response: def abs_area(max): """ Point area palette (continuous), with area proportional to value. Parameters ---------- max : float A number representing the maximum size Returns ------- out : function Palette function that takes a sequence of values in the range ``[0, 1]`` and returns values in the range ``[0, max]``. Examples -------- >>> x = np.arange(0, .8, .1)**2 >>> palette = abs_area(5) >>> palette(x) array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5]) Compared to :func:`area_pal`, :func:`abs_area` will handle values in the range ``[-1, 0]`` without returning ``np.nan``. And values whose absolute value is greater than 1 will be clipped to the maximum. """ def abs_area_palette(x): return rescale(np.sqrt(np.abs(x)), to=(0, max), _from=(0, 1)) return abs_area_palette
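A scalar, numpy-free sketch of `abs_area` (function name mine): area grows with sqrt(|x|), rescaled so |x| = 1 maps to the requested maximum; on the [0, 1] input range the `rescale` call reduces to plain multiplication, and larger magnitudes are clipped to the maximum as the docstring describes:

```python
import math

def abs_area_scalar(max_size):
    def palette(x):
        # sqrt makes *area* (not radius) proportional to |x|;
        # min(..., 1.0) clips values whose magnitude exceeds 1.
        return min(math.sqrt(abs(x)), 1.0) * max_size
    return palette

palette = abs_area_scalar(5)
print(palette(0.25), palette(-0.25), palette(4))  # 2.5 2.5 5.0
```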
def wrap(access_pyxb, read_only=False): """Work with the AccessPolicy in a SystemMetadata PyXB object. Args: access_pyxb : AccessPolicy PyXB object The AccessPolicy to modify. read_only: bool Do not update the wrapped AccessPolicy. When only a single AccessPolicy operation is needed, there's no need to use this context manager. Instead, use the generated context manager wrappers. """ w = AccessPolicyWrapper(access_pyxb) yield w if not read_only: w.get_normalized_pyxb()
Work with the AccessPolicy in a SystemMetadata PyXB object. Args: access_pyxb : AccessPolicy PyXB object The AccessPolicy to modify. read_only: bool Do not update the wrapped AccessPolicy. When only a single AccessPolicy operation is needed, there's no need to use this context manager. Instead, use the generated context manager wrappers.
Below is the the instruction that describes the task: ### Input: Work with the AccessPolicy in a SystemMetadata PyXB object. Args: access_pyxb : AccessPolicy PyXB object The AccessPolicy to modify. read_only: bool Do not update the wrapped AccessPolicy. When only a single AccessPolicy operation is needed, there's no need to use this context manager. Instead, use the generated context manager wrappers. ### Response: def wrap(access_pyxb, read_only=False): """Work with the AccessPolicy in a SystemMetadata PyXB object. Args: access_pyxb : AccessPolicy PyXB object The AccessPolicy to modify. read_only: bool Do not update the wrapped AccessPolicy. When only a single AccessPolicy operation is needed, there's no need to use this context manager. Instead, use the generated context manager wrappers. """ w = AccessPolicyWrapper(access_pyxb) yield w if not read_only: w.get_normalized_pyxb()
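The shape of `wrap` above -- yield a wrapper, then run a finalization step unless `read_only` was requested -- is easy to mimic with `contextlib`; this stand-in uses a plain list instead of a PyXB AccessPolicy, so the names here are illustrative only:

```python
from contextlib import contextmanager

@contextmanager
def wrap(store, read_only=False):
    changes = []
    yield changes          # caller mutates the wrapper...
    if not read_only:
        store.extend(changes)  # ...and we commit afterwards

store = []
with wrap(store) as w:
    w.append('rule')
print(store)  # ['rule']

with wrap(store, read_only=True) as w:
    w.append('ignored')    # read_only: nothing is committed
print(store)  # ['rule']
```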
def durability(self, persist_to=-1, replicate_to=-1, timeout=0.0): """Returns a context manager which will apply the given persistence/replication settings to all mutation operations when active :param int persist_to: :param int replicate_to: See :meth:`endure` for the meaning of these two values Thus, something like:: with cb.durability(persist_to=3): cb.upsert("foo", "foo_value") cb.upsert("bar", "bar_value") cb.upsert("baz", "baz_value") is equivalent to:: cb.upsert("foo", "foo_value", persist_to=3) cb.upsert("bar", "bar_value", persist_to=3) cb.upsert("baz", "baz_value", persist_to=3) .. versionadded:: 1.2.0 .. seealso:: :meth:`endure` """ return DurabilityContext(self, persist_to, replicate_to, timeout)
Returns a context manager which will apply the given persistence/replication settings to all mutation operations when active :param int persist_to: :param int replicate_to: See :meth:`endure` for the meaning of these two values Thus, something like:: with cb.durability(persist_to=3): cb.upsert("foo", "foo_value") cb.upsert("bar", "bar_value") cb.upsert("baz", "baz_value") is equivalent to:: cb.upsert("foo", "foo_value", persist_to=3) cb.upsert("bar", "bar_value", persist_to=3) cb.upsert("baz", "baz_value", persist_to=3) .. versionadded:: 1.2.0 .. seealso:: :meth:`endure`
Below is the the instruction that describes the task: ### Input: Returns a context manager which will apply the given persistence/replication settings to all mutation operations when active :param int persist_to: :param int replicate_to: See :meth:`endure` for the meaning of these two values Thus, something like:: with cb.durability(persist_to=3): cb.upsert("foo", "foo_value") cb.upsert("bar", "bar_value") cb.upsert("baz", "baz_value") is equivalent to:: cb.upsert("foo", "foo_value", persist_to=3) cb.upsert("bar", "bar_value", persist_to=3) cb.upsert("baz", "baz_value", persist_to=3) .. versionadded:: 1.2.0 .. seealso:: :meth:`endure` ### Response: def durability(self, persist_to=-1, replicate_to=-1, timeout=0.0): """Returns a context manager which will apply the given persistence/replication settings to all mutation operations when active :param int persist_to: :param int replicate_to: See :meth:`endure` for the meaning of these two values Thus, something like:: with cb.durability(persist_to=3): cb.upsert("foo", "foo_value") cb.upsert("bar", "bar_value") cb.upsert("baz", "baz_value") is equivalent to:: cb.upsert("foo", "foo_value", persist_to=3) cb.upsert("bar", "bar_value", persist_to=3) cb.upsert("baz", "baz_value", persist_to=3) .. versionadded:: 1.2.0 .. seealso:: :meth:`endure` """ return DurabilityContext(self, persist_to, replicate_to, timeout)
def get_instruments_vocabulary(self, analysis_brain): """Returns a vocabulary with the valid and active instruments available for the analysis passed in. If the option "Allow instrument entry of results" for the Analysis is disabled, the function returns an empty vocabulary. If the analysis passed in is a Reference Analysis (Blank or Control), the vocabulary, the invalid instruments will be included in the vocabulary too. The vocabulary is a list of dictionaries. Each dictionary has the following structure: {'ResultValue': <instrument_UID>, 'ResultText': <instrument_Title>} :param analysis_brain: A single Analysis or ReferenceAnalysis :type analysis_brain: Analysis or.ReferenceAnalysis :return: A vocabulary with the instruments for the analysis :rtype: A list of dicts: [{'ResultValue':UID, 'ResultText':Title}] """ if not analysis_brain.getInstrumentEntryOfResults: # Instrument entry of results for this analysis is not allowed return list() # If the analysis is a QC analysis, display all instruments, including # those uncalibrated or for which the last QC test failed. 
meta_type = analysis_brain.meta_type uncalibrated = meta_type == 'ReferenceAnalysis' if meta_type == 'DuplicateAnalysis': base_analysis_type = analysis_brain.getAnalysisPortalType uncalibrated = base_analysis_type == 'ReferenceAnalysis' uids = analysis_brain.getAllowedInstrumentUIDs query = {'portal_type': 'Instrument', 'is_active': True, 'UID': uids} brains = api.search(query, 'bika_setup_catalog') vocab = [{'ResultValue': '', 'ResultText': _('None')}] for brain in brains: instrument = self.get_object(brain) if uncalibrated and not instrument.isOutOfDate(): # Is a QC analysis, include instrument also if is not valid vocab.append({'ResultValue': instrument.UID(), 'ResultText': instrument.Title()}) if instrument.isValid(): # Only add the 'valid' instruments: certificate # on-date and valid internal calibration tests vocab.append({'ResultValue': instrument.UID(), 'ResultText': instrument.Title()}) return vocab
Returns a vocabulary with the valid and active instruments available for the analysis passed in. If the option "Allow instrument entry of results" for the Analysis is disabled, the function returns an empty vocabulary. If the analysis passed in is a Reference Analysis (Blank or Control), the invalid instruments will be included in the vocabulary too. The vocabulary is a list of dictionaries. Each dictionary has the following structure: {'ResultValue': <instrument_UID>, 'ResultText': <instrument_Title>} :param analysis_brain: A single Analysis or ReferenceAnalysis :type analysis_brain: Analysis or ReferenceAnalysis :return: A vocabulary with the instruments for the analysis :rtype: A list of dicts: [{'ResultValue':UID, 'ResultText':Title}]
Below is the the instruction that describes the task: ### Input: Returns a vocabulary with the valid and active instruments available for the analysis passed in. If the option "Allow instrument entry of results" for the Analysis is disabled, the function returns an empty vocabulary. If the analysis passed in is a Reference Analysis (Blank or Control), the vocabulary, the invalid instruments will be included in the vocabulary too. The vocabulary is a list of dictionaries. Each dictionary has the following structure: {'ResultValue': <instrument_UID>, 'ResultText': <instrument_Title>} :param analysis_brain: A single Analysis or ReferenceAnalysis :type analysis_brain: Analysis or.ReferenceAnalysis :return: A vocabulary with the instruments for the analysis :rtype: A list of dicts: [{'ResultValue':UID, 'ResultText':Title}] ### Response: def get_instruments_vocabulary(self, analysis_brain): """Returns a vocabulary with the valid and active instruments available for the analysis passed in. If the option "Allow instrument entry of results" for the Analysis is disabled, the function returns an empty vocabulary. If the analysis passed in is a Reference Analysis (Blank or Control), the vocabulary, the invalid instruments will be included in the vocabulary too. The vocabulary is a list of dictionaries. Each dictionary has the following structure: {'ResultValue': <instrument_UID>, 'ResultText': <instrument_Title>} :param analysis_brain: A single Analysis or ReferenceAnalysis :type analysis_brain: Analysis or.ReferenceAnalysis :return: A vocabulary with the instruments for the analysis :rtype: A list of dicts: [{'ResultValue':UID, 'ResultText':Title}] """ if not analysis_brain.getInstrumentEntryOfResults: # Instrument entry of results for this analysis is not allowed return list() # If the analysis is a QC analysis, display all instruments, including # those uncalibrated or for which the last QC test failed. 
meta_type = analysis_brain.meta_type uncalibrated = meta_type == 'ReferenceAnalysis' if meta_type == 'DuplicateAnalysis': base_analysis_type = analysis_brain.getAnalysisPortalType uncalibrated = base_analysis_type == 'ReferenceAnalysis' uids = analysis_brain.getAllowedInstrumentUIDs query = {'portal_type': 'Instrument', 'is_active': True, 'UID': uids} brains = api.search(query, 'bika_setup_catalog') vocab = [{'ResultValue': '', 'ResultText': _('None')}] for brain in brains: instrument = self.get_object(brain) if uncalibrated and not instrument.isOutOfDate(): # Is a QC analysis, include instrument also if is not valid vocab.append({'ResultValue': instrument.UID(), 'ResultText': instrument.Title()}) if instrument.isValid(): # Only add the 'valid' instruments: certificate # on-date and valid internal calibration tests vocab.append({'ResultValue': instrument.UID(), 'ResultText': instrument.Title()}) return vocab
def removeDataFrameColumns(self, columns): """ Removes columns from the dataframe. :param columns: [(int, str)] :return: (bool) True on success, False on failure. """ if not self.editable: return False if columns: deleted = 0 errored = False for (position, name) in columns: position = position - deleted if position < 0: position = 0 self.beginRemoveColumns(QtCore.QModelIndex(), position, position) try: self._dataFrame.drop(name, axis=1, inplace=True) except ValueError as e: errored = True continue self.endRemoveColumns() deleted += 1 self.dataChanged.emit() if errored: return False else: return True return False
Removes columns from the dataframe. :param columns: [(int, str)] :return: (bool) True on success, False on failure.
Below is the instruction that describes the task: ### Input: Removes columns from the dataframe. :param columns: [(int, str)] :return: (bool) True on success, False on failure. ### Response: def removeDataFrameColumns(self, columns): """ Removes columns from the dataframe. :param columns: [(int, str)] :return: (bool) True on success, False on failure. """ if not self.editable: return False if columns: deleted = 0 errored = False for (position, name) in columns: position = position - deleted if position < 0: position = 0 self.beginRemoveColumns(QtCore.QModelIndex(), position, position) try: self._dataFrame.drop(name, axis=1, inplace=True) except ValueError as e: errored = True continue self.endRemoveColumns() deleted += 1 self.dataChanged.emit() if errored: return False else: return True return False
def url_quote_part (s, safechars='/', encoding=None): """Wrap urllib.quote() to support unicode strings. A unicode string is first converted to UTF-8. After that urllib.quote() is called.""" if isinstance(s, unicode): if encoding is None: encoding = url_encoding s = s.encode(encoding, 'ignore') return urllib.quote(s, safechars)
Wrap urllib.quote() to support unicode strings. A unicode string is first converted to UTF-8. After that urllib.quote() is called.
Below is the instruction that describes the task: ### Input: Wrap urllib.quote() to support unicode strings. A unicode string is first converted to UTF-8. After that urllib.quote() is called. ### Response: def url_quote_part (s, safechars='/', encoding=None): """Wrap urllib.quote() to support unicode strings. A unicode string is first converted to UTF-8. After that urllib.quote() is called.""" if isinstance(s, unicode): if encoding is None: encoding = url_encoding s = s.encode(encoding, 'ignore') return urllib.quote(s, safechars)
def _ast_op_exclude_to_code(self, opr, **kwargs): """Convert an AST exclude op to python source code.""" opl, opr = opr.operands lines = ["exclusion("] lines.extend(self._indent(self._ast_to_code(opl))) lines[-1] += "," lines.extend(self._indent(self._ast_to_code(opr))) lines.append(")") return lines
Convert an AST exclude op to python source code.
Below is the instruction that describes the task: ### Input: Convert an AST exclude op to python source code. ### Response: def _ast_op_exclude_to_code(self, opr, **kwargs): """Convert an AST exclude op to python source code.""" opl, opr = opr.operands lines = ["exclusion("] lines.extend(self._indent(self._ast_to_code(opl))) lines[-1] += "," lines.extend(self._indent(self._ast_to_code(opr))) lines.append(")") return lines
def update(self, data): """ Update this dictionary with the key-value pairs from a given dictionary """ if not isinstance(data, dict): raise TypeError('Data to update must be in a dictionary.') for k, v in data.items(): arr = np.array(v) try: self[k] = arr except TypeError: logging.warning("Values under key ({}) not supported by VTK".format(k)) return
Update this dictionary with the key-value pairs from a given dictionary
Below is the instruction that describes the task: ### Input: Update this dictionary with the key-value pairs from a given dictionary ### Response: def update(self, data): """ Update this dictionary with the key-value pairs from a given dictionary """ if not isinstance(data, dict): raise TypeError('Data to update must be in a dictionary.') for k, v in data.items(): arr = np.array(v) try: self[k] = arr except TypeError: logging.warning("Values under key ({}) not supported by VTK".format(k)) return
def compile(self, optimizer, loss, metrics=None): """ Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings """ if isinstance(loss, six.string_types): loss = [loss] if metrics is None: metrics = [] if isinstance(metrics, six.string_types): metrics = [metrics] self._stats_to_inference = loss + metrics + [TOTAL_LOSS_NAME] setup_keras_trainer( self.trainer, get_model=self.get_model, input_signature=self.input_signature, target_signature=self.target_signature, input=self.input, optimizer=optimizer, loss=loss, metrics=metrics)
Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings
Below is the instruction that describes the task: ### Input: Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings ### Response: def compile(self, optimizer, loss, metrics=None): """ Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings """ if isinstance(loss, six.string_types): loss = [loss] if metrics is None: metrics = [] if isinstance(metrics, six.string_types): metrics = [metrics] self._stats_to_inference = loss + metrics + [TOTAL_LOSS_NAME] setup_keras_trainer( self.trainer, get_model=self.get_model, input_signature=self.input_signature, target_signature=self.target_signature, input=self.input, optimizer=optimizer, loss=loss, metrics=metrics)
def num_rings(self): """The number of rings a device with the :attr:`~libinput.constant.DeviceCapability.TABLET_PAD` capability provides. Returns: int: The number of rings or 0 if the device has no rings. Raises: AttributeError """ num = self._libinput.libinput_device_tablet_pad_get_num_rings( self._handle) if num < 0: raise AttributeError('This device is not a tablet pad device') return num
The number of rings a device with the :attr:`~libinput.constant.DeviceCapability.TABLET_PAD` capability provides. Returns: int: The number of rings or 0 if the device has no rings. Raises: AttributeError
Below is the instruction that describes the task: ### Input: The number of rings a device with the :attr:`~libinput.constant.DeviceCapability.TABLET_PAD` capability provides. Returns: int: The number of rings or 0 if the device has no rings. Raises: AttributeError ### Response: def num_rings(self): """The number of rings a device with the :attr:`~libinput.constant.DeviceCapability.TABLET_PAD` capability provides. Returns: int: The number of rings or 0 if the device has no rings. Raises: AttributeError """ num = self._libinput.libinput_device_tablet_pad_get_num_rings( self._handle) if num < 0: raise AttributeError('This device is not a tablet pad device') return num
def _script_load(script): ''' Borrowed/modified from my book, Redis in Action: https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch11_listing_source.py Used for Lua scripting support when writing against Redis 2.6+ to allow for multiple unique columns per model. ''' script = script.encode('utf-8') if isinstance(script, six.text_type) else script sha = [None, sha1(script).hexdigest()] def call(conn, keys=[], args=[], force_eval=False): keys = tuple(keys) args = tuple(args) if not force_eval: if not sha[0]: try: # executing the script implicitly loads it return conn.execute_command( 'EVAL', script, len(keys), *(keys + args)) finally: # thread safe by re-using the GIL ;) del sha[:-1] try: return conn.execute_command( "EVALSHA", sha[0], len(keys), *(keys+args)) except redis.exceptions.ResponseError as msg: if not any(msg.args[0].startswith(nsm) for nsm in NO_SCRIPT_MESSAGES): raise return conn.execute_command( "EVAL", script, len(keys), *(keys+args)) return call
Borrowed/modified from my book, Redis in Action: https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch11_listing_source.py Used for Lua scripting support when writing against Redis 2.6+ to allow for multiple unique columns per model.
Below is the instruction that describes the task: ### Input: Borrowed/modified from my book, Redis in Action: https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch11_listing_source.py Used for Lua scripting support when writing against Redis 2.6+ to allow for multiple unique columns per model. ### Response: def _script_load(script): ''' Borrowed/modified from my book, Redis in Action: https://github.com/josiahcarlson/redis-in-action/blob/master/python/ch11_listing_source.py Used for Lua scripting support when writing against Redis 2.6+ to allow for multiple unique columns per model. ''' script = script.encode('utf-8') if isinstance(script, six.text_type) else script sha = [None, sha1(script).hexdigest()] def call(conn, keys=[], args=[], force_eval=False): keys = tuple(keys) args = tuple(args) if not force_eval: if not sha[0]: try: # executing the script implicitly loads it return conn.execute_command( 'EVAL', script, len(keys), *(keys + args)) finally: # thread safe by re-using the GIL ;) del sha[:-1] try: return conn.execute_command( "EVALSHA", sha[0], len(keys), *(keys+args)) except redis.exceptions.ResponseError as msg: if not any(msg.args[0].startswith(nsm) for nsm in NO_SCRIPT_MESSAGES): raise return conn.execute_command( "EVAL", script, len(keys), *(keys+args)) return call
def Preprocess(self, data): """Preprocess the given data, ready for parsing.""" # Add whitespace to line continuations. data = data.replace(":\\", ": \\") # Strip comments manually because sudoers has multiple meanings for '#'. data = SudoersFieldParser.COMMENTS_RE.sub("", data) return data
Preprocess the given data, ready for parsing.
Below is the instruction that describes the task: ### Input: Preprocess the given data, ready for parsing. ### Response: def Preprocess(self, data): """Preprocess the given data, ready for parsing.""" # Add whitespace to line continuations. data = data.replace(":\\", ": \\") # Strip comments manually because sudoers has multiple meanings for '#'. data = SudoersFieldParser.COMMENTS_RE.sub("", data) return data
def _pre_index_check(handler, host=None, core_name=None): ''' PRIVATE METHOD - MASTER CALL Does a pre-check to make sure that all the options are set and that we can talk to solr before trying to send a command to solr. This Command should only be issued to masters. handler : str The import handler to check the state of host : str (None): The solr host to query. __opts__['host'] is default core_name (None): The name of the solr core if using cores. Leave this blank if you are not using cores or if you want to check all cores. REQUIRED if you are using cores. Return: dict<str,obj>:: {'success':boolean, 'data':dict, 'errors':list, 'warnings':list} ''' # make sure that it's a master minion if _get_none_or_value(host) is None and not _is_master(): err = [ 'solr.pre_indexing_check can only be called by "master" minions'] return _get_return_dict(False, err) # solr can run out of memory quickly if the dih is processing multiple # handlers at the same time, so if it's a multicore setup require a # core_name param. if _get_none_or_value(core_name) is None and _check_for_cores(): errors = ['solr.full_import is not safe to multiple handlers at once'] return _get_return_dict(False, errors=errors) # check to make sure that we're not already indexing resp = import_status(handler, host, core_name) if resp['success']: status = resp['data']['status'] if status == 'busy': warn = ['An indexing process is already running.'] return _get_return_dict(True, warnings=warn) if status != 'idle': errors = ['Unknown status: "{0}"'.format(status)] return _get_return_dict(False, data=resp['data'], errors=errors) else: errors = ['Status check failed. Response details: {0}'.format(resp)] return _get_return_dict(False, data=resp['data'], errors=errors) return resp
PRIVATE METHOD - MASTER CALL Does a pre-check to make sure that all the options are set and that we can talk to solr before trying to send a command to solr. This Command should only be issued to masters. handler : str The import handler to check the state of host : str (None): The solr host to query. __opts__['host'] is default core_name (None): The name of the solr core if using cores. Leave this blank if you are not using cores or if you want to check all cores. REQUIRED if you are using cores. Return: dict<str,obj>:: {'success':boolean, 'data':dict, 'errors':list, 'warnings':list}
Below is the instruction that describes the task: ### Input: PRIVATE METHOD - MASTER CALL Does a pre-check to make sure that all the options are set and that we can talk to solr before trying to send a command to solr. This Command should only be issued to masters. handler : str The import handler to check the state of host : str (None): The solr host to query. __opts__['host'] is default core_name (None): The name of the solr core if using cores. Leave this blank if you are not using cores or if you want to check all cores. REQUIRED if you are using cores. Return: dict<str,obj>:: {'success':boolean, 'data':dict, 'errors':list, 'warnings':list} ### Response: def _pre_index_check(handler, host=None, core_name=None): ''' PRIVATE METHOD - MASTER CALL Does a pre-check to make sure that all the options are set and that we can talk to solr before trying to send a command to solr. This Command should only be issued to masters. handler : str The import handler to check the state of host : str (None): The solr host to query. __opts__['host'] is default core_name (None): The name of the solr core if using cores. Leave this blank if you are not using cores or if you want to check all cores. REQUIRED if you are using cores. Return: dict<str,obj>:: {'success':boolean, 'data':dict, 'errors':list, 'warnings':list} ''' # make sure that it's a master minion if _get_none_or_value(host) is None and not _is_master(): err = [ 'solr.pre_indexing_check can only be called by "master" minions'] return _get_return_dict(False, err) # solr can run out of memory quickly if the dih is processing multiple # handlers at the same time, so if it's a multicore setup require a # core_name param. if _get_none_or_value(core_name) is None and _check_for_cores(): errors = ['solr.full_import is not safe to multiple handlers at once'] return _get_return_dict(False, errors=errors) # check to make sure that we're not already indexing resp = import_status(handler, host, core_name) if resp['success']: status = resp['data']['status'] if status == 'busy': warn = ['An indexing process is already running.'] return _get_return_dict(True, warnings=warn) if status != 'idle': errors = ['Unknown status: "{0}"'.format(status)] return _get_return_dict(False, data=resp['data'], errors=errors) else: errors = ['Status check failed. Response details: {0}'.format(resp)] return _get_return_dict(False, data=resp['data'], errors=errors) return resp
def buildPath(self, parent, path): """ Build the specified path as a/b/c where missing intermediate nodes are built automatically. @param parent: A parent element on which the path is built. @type parent: I{Element} @param path: A simple path separated by (/). @type path: basestring @return: The leaf node of I{path}. @rtype: L{Element} """ for tag in path.split('/'): child = parent.getChild(tag) if child is None: child = Element(tag, parent) parent = child return child
Build the specified path as a/b/c where missing intermediate nodes are built automatically. @param parent: A parent element on which the path is built. @type parent: I{Element} @param path: A simple path separated by (/). @type path: basestring @return: The leaf node of I{path}. @rtype: L{Element}
Below is the instruction that describes the task: ### Input: Build the specified path as a/b/c where missing intermediate nodes are built automatically. @param parent: A parent element on which the path is built. @type parent: I{Element} @param path: A simple path separated by (/). @type path: basestring @return: The leaf node of I{path}. @rtype: L{Element} ### Response: def buildPath(self, parent, path): """ Build the specified path as a/b/c where missing intermediate nodes are built automatically. @param parent: A parent element on which the path is built. @type parent: I{Element} @param path: A simple path separated by (/). @type path: basestring @return: The leaf node of I{path}. @rtype: L{Element} """ for tag in path.split('/'): child = parent.getChild(tag) if child is None: child = Element(tag, parent) parent = child return child
def template_to_text(tmpl, debug=0): """ convert parse tree template to text """ tarr = [] for item in tmpl.itertext(): tarr.append(item) text = "{{%s}}" % "|".join(tarr).strip() if debug > 1: print("+ template_to_text:") print(" %s" % text) return text
convert parse tree template to text
Below is the instruction that describes the task: ### Input: convert parse tree template to text ### Response: def template_to_text(tmpl, debug=0): """ convert parse tree template to text """ tarr = [] for item in tmpl.itertext(): tarr.append(item) text = "{{%s}}" % "|".join(tarr).strip() if debug > 1: print("+ template_to_text:") print(" %s" % text) return text
def import_ecdsakey_from_pem(pem, scheme='ecdsa-sha2-nistp256'): """ <Purpose> Import either a public or private ECDSA PEM. In contrast to the other explicit import functions (import_ecdsakey_from_public_pem and import_ecdsakey_from_private_pem), this function is useful for when it is not known whether 'pem' is private or public. <Arguments> pem: A string in PEM format. scheme: The signature scheme used by the imported key. <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> None. <Returns> A dictionary containing the ECDSA keys and other identifying information. Conforms to 'securesystemslib.formats.ECDSAKEY_SCHEMA'. """ # Does 'pem' have the correct format? # This check will ensure arguments has the appropriate number # of objects and object types, and that all dict keys are properly named. # Raise 'securesystemslib.exceptions.FormatError' if the check fails. securesystemslib.formats.PEMECDSA_SCHEMA.check_match(pem) # Is 'scheme' properly formatted? securesystemslib.formats.ECDSA_SCHEME_SCHEMA.check_match(scheme) public_pem = '' private_pem = '' # Ensure the PEM string has a public or private header and footer. Although # a simple validation of 'pem' is performed here, a fully valid PEM string is # needed later to successfully verify signatures. Performing stricter # validation of PEMs are left to the external libraries that use 'pem'. if is_pem_public(pem): public_pem = extract_pem(pem, private_pem=False) elif is_pem_private(pem, 'ec'): # Return an ecdsakey object (ECDSAKEY_SCHEMA) with the private key included. return import_ecdsakey_from_private_pem(pem, password=None) else: raise securesystemslib.exceptions.FormatError('PEM contains neither a public' ' nor private key: ' + repr(pem)) # Begin building the ECDSA key dictionary. ecdsakey_dict = {} keytype = 'ecdsa-sha2-nistp256' # Generate the keyid of the ECDSA key. 'key_value' corresponds to the # 'keyval' entry of the 'ECDSAKEY_SCHEMA' dictionary. The private key # information is not included in the generation of the 'keyid' identifier. # If a PEM is found to contain a private key, the generated rsakey object # should be returned above. The following key object is for the case of a # PEM with only a public key. Convert any '\r\n' (e.g., Windows) newline # characters to '\n' so that a consistent keyid is generated. key_value = {'public': public_pem.replace('\r\n', '\n'), 'private': ''} keyid = _get_keyid(keytype, scheme, key_value) ecdsakey_dict['keytype'] = keytype ecdsakey_dict['scheme'] = scheme ecdsakey_dict['keyid'] = keyid ecdsakey_dict['keyval'] = key_value return ecdsakey_dict
<Purpose> Import either a public or private ECDSA PEM. In contrast to the other explicit import functions (import_ecdsakey_from_public_pem and import_ecdsakey_from_private_pem), this function is useful for when it is not known whether 'pem' is private or public. <Arguments> pem: A string in PEM format. scheme: The signature scheme used by the imported key. <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> None. <Returns> A dictionary containing the ECDSA keys and other identifying information. Conforms to 'securesystemslib.formats.ECDSAKEY_SCHEMA'.
Below is the instruction that describes the task: ### Input: <Purpose> Import either a public or private ECDSA PEM. In contrast to the other explicit import functions (import_ecdsakey_from_public_pem and import_ecdsakey_from_private_pem), this function is useful for when it is not known whether 'pem' is private or public. <Arguments> pem: A string in PEM format. scheme: The signature scheme used by the imported key. <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> None. <Returns> A dictionary containing the ECDSA keys and other identifying information. Conforms to 'securesystemslib.formats.ECDSAKEY_SCHEMA'. ### Response: def import_ecdsakey_from_pem(pem, scheme='ecdsa-sha2-nistp256'): """ <Purpose> Import either a public or private ECDSA PEM. In contrast to the other explicit import functions (import_ecdsakey_from_public_pem and import_ecdsakey_from_private_pem), this function is useful for when it is not known whether 'pem' is private or public. <Arguments> pem: A string in PEM format. scheme: The signature scheme used by the imported key. <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> None. <Returns> A dictionary containing the ECDSA keys and other identifying information. Conforms to 'securesystemslib.formats.ECDSAKEY_SCHEMA'. """ # Does 'pem' have the correct format? # This check will ensure arguments has the appropriate number # of objects and object types, and that all dict keys are properly named. # Raise 'securesystemslib.exceptions.FormatError' if the check fails. securesystemslib.formats.PEMECDSA_SCHEMA.check_match(pem) # Is 'scheme' properly formatted? securesystemslib.formats.ECDSA_SCHEME_SCHEMA.check_match(scheme) public_pem = '' private_pem = '' # Ensure the PEM string has a public or private header and footer. Although # a simple validation of 'pem' is performed here, a fully valid PEM string is # needed later to successfully verify signatures. Performing stricter # validation of PEMs are left to the external libraries that use 'pem'. if is_pem_public(pem): public_pem = extract_pem(pem, private_pem=False) elif is_pem_private(pem, 'ec'): # Return an ecdsakey object (ECDSAKEY_SCHEMA) with the private key included. return import_ecdsakey_from_private_pem(pem, password=None) else: raise securesystemslib.exceptions.FormatError('PEM contains neither a public' ' nor private key: ' + repr(pem)) # Begin building the ECDSA key dictionary. ecdsakey_dict = {} keytype = 'ecdsa-sha2-nistp256' # Generate the keyid of the ECDSA key. 'key_value' corresponds to the # 'keyval' entry of the 'ECDSAKEY_SCHEMA' dictionary. The private key # information is not included in the generation of the 'keyid' identifier. # If a PEM is found to contain a private key, the generated rsakey object # should be returned above. The following key object is for the case of a # PEM with only a public key. Convert any '\r\n' (e.g., Windows) newline # characters to '\n' so that a consistent keyid is generated. key_value = {'public': public_pem.replace('\r\n', '\n'), 'private': ''} keyid = _get_keyid(keytype, scheme, key_value) ecdsakey_dict['keytype'] = keytype ecdsakey_dict['scheme'] = scheme ecdsakey_dict['keyid'] = keyid ecdsakey_dict['keyval'] = key_value return ecdsakey_dict
def get_instance(page_to_consume): """Return an instance of ConsumeModel.""" global _instances if isinstance(page_to_consume, basestring): uri = page_to_consume page_to_consume = consumepage.get_instance(uri) elif isinstance(page_to_consume, consumepage.ConsumePage): uri = page_to_consume.uri else: raise TypeError( "get_instance() expects a parker.ConsumePage " "or basestring derivative." ) try: instance = _instances[uri] except KeyError: instance = ConsumeModel(page_to_consume) _instances[uri] = instance return instance
Return an instance of ConsumeModel.
Below is the instruction that describes the task: ### Input: Return an instance of ConsumeModel. ### Response: def get_instance(page_to_consume): """Return an instance of ConsumeModel.""" global _instances if isinstance(page_to_consume, basestring): uri = page_to_consume page_to_consume = consumepage.get_instance(uri) elif isinstance(page_to_consume, consumepage.ConsumePage): uri = page_to_consume.uri else: raise TypeError( "get_instance() expects a parker.ConsumePage " "or basestring derivative." ) try: instance = _instances[uri] except KeyError: instance = ConsumeModel(page_to_consume) _instances[uri] = instance return instance
def new_remove_attribute_transaction(self, ont_id: str, pub_key: str or bytes, attrib_key: str, b58_payer_address: str, gas_limit: int, gas_price: int): """ This interface is used to generate a Transaction object which is used to remove attribute. :param ont_id: OntId. :param pub_key: the hexadecimal public key in the form of string. :param attrib_key: a string which is used to indicate which attribute we want to remove. :param b58_payer_address: a base58 encoded address which indicates who will pay for the transaction. :param gas_limit: an int value that indicates the gas limit. :param gas_price: an int value that indicates the gas price. :return: a Transaction object which is used to remove attribute. """ if isinstance(pub_key, str): bytes_pub_key = bytes.fromhex(pub_key) elif isinstance(pub_key, bytes): bytes_pub_key = pub_key else: raise SDKException(ErrorCode.params_type_error('a bytes or str type of public key is required.')) args = dict(ontid=ont_id.encode('utf-8'), attrib_key=attrib_key.encode('utf-8'), pk=bytes_pub_key) tx = self.__generate_transaction('removeAttribute', args, b58_payer_address, gas_limit, gas_price) return tx
This interface is used to generate a Transaction object which is used to remove attribute. :param ont_id: OntId. :param pub_key: the hexadecimal public key in the form of string. :param attrib_key: a string which is used to indicate which attribute we want to remove. :param b58_payer_address: a base58 encoded address which indicates who will pay for the transaction. :param gas_limit: an int value that indicates the gas limit. :param gas_price: an int value that indicates the gas price. :return: a Transaction object which is used to remove attribute.
Below is the instruction that describes the task: ### Input: This interface is used to generate a Transaction object which is used to remove attribute. :param ont_id: OntId. :param pub_key: the hexadecimal public key in the form of string. :param attrib_key: a string which is used to indicate which attribute we want to remove. :param b58_payer_address: a base58 encoded address which indicates who will pay for the transaction. :param gas_limit: an int value that indicates the gas limit. :param gas_price: an int value that indicates the gas price. :return: a Transaction object which is used to remove attribute. ### Response: def new_remove_attribute_transaction(self, ont_id: str, pub_key: str or bytes, attrib_key: str, b58_payer_address: str, gas_limit: int, gas_price: int): """ This interface is used to generate a Transaction object which is used to remove attribute. :param ont_id: OntId. :param pub_key: the hexadecimal public key in the form of string. :param attrib_key: a string which is used to indicate which attribute we want to remove. :param b58_payer_address: a base58 encoded address which indicates who will pay for the transaction. :param gas_limit: an int value that indicates the gas limit. :param gas_price: an int value that indicates the gas price. :return: a Transaction object which is used to remove attribute. """ if isinstance(pub_key, str): bytes_pub_key = bytes.fromhex(pub_key) elif isinstance(pub_key, bytes): bytes_pub_key = pub_key else: raise SDKException(ErrorCode.params_type_error('a bytes or str type of public key is required.')) args = dict(ontid=ont_id.encode('utf-8'), attrib_key=attrib_key.encode('utf-8'), pk=bytes_pub_key) tx = self.__generate_transaction('removeAttribute', args, b58_payer_address, gas_limit, gas_price) return tx
def check_filemode(filepath, mode): """Return True if 'file' matches ('permission') which should be entered in octal. """ filemode = stat.S_IMODE(os.stat(filepath).st_mode) return (oct(filemode) == mode)
Return True if 'file' matches ('permission') which should be entered in octal.
Below is the instruction that describes the task: ### Input: Return True if 'file' matches ('permission') which should be entered in octal. ### Response: def check_filemode(filepath, mode): """Return True if 'file' matches ('permission') which should be entered in octal. """ filemode = stat.S_IMODE(os.stat(filepath).st_mode) return (oct(filemode) == mode)
def load_vars(opt): """Loads variables from cli and var files, passing in cli options as a seed (although they can be overwritten!). Note, turn this into an object so it's a nicer "cache".""" if not hasattr(opt, '_vars_cache'): cli_opts = cli_hash(opt.extra_vars) setattr(opt, '_vars_cache', merge_dicts(load_var_files(opt, cli_opts), cli_opts)) return getattr(opt, '_vars_cache')
Loads variables from cli and var files, passing in cli options as a seed (although they can be overwritten!). Note, turn this into an object so it's a nicer "cache".
Below is the instruction that describes the task: ### Input: Loads variables from cli and var files, passing in cli options as a seed (although they can be overwritten!). Note, turn this into an object so it's a nicer "cache". ### Response: def load_vars(opt): """Loads variables from cli and var files, passing in cli options as a seed (although they can be overwritten!). Note, turn this into an object so it's a nicer "cache".""" if not hasattr(opt, '_vars_cache'): cli_opts = cli_hash(opt.extra_vars) setattr(opt, '_vars_cache', merge_dicts(load_var_files(opt, cli_opts), cli_opts)) return getattr(opt, '_vars_cache')
def delete(self, batch_id): """ Stops a batch request from running. Since only one batch request is run at a time, this can be used to cancel a long running request. The results of any completed operations will not be available after this call. :param batch_id: The unique id for the batch operation. :type batch_id: :py:class:`str` """ self.batch_id = batch_id self.operation_status = None return self._mc_client._delete(url=self._build_path(batch_id))
Stops a batch request from running. Since only one batch request is run at a time, this can be used to cancel a long running request. The results of any completed operations will not be available after this call. :param batch_id: The unique id for the batch operation. :type batch_id: :py:class:`str`
Below is the instruction that describes the task: ### Input: Stops a batch request from running. Since only one batch request is run at a time, this can be used to cancel a long running request. The results of any completed operations will not be available after this call. :param batch_id: The unique id for the batch operation. :type batch_id: :py:class:`str` ### Response: def delete(self, batch_id): """ Stops a batch request from running. Since only one batch request is run at a time, this can be used to cancel a long running request. The results of any completed operations will not be available after this call. :param batch_id: The unique id for the batch operation. :type batch_id: :py:class:`str` """ self.batch_id = batch_id self.operation_status = None return self._mc_client._delete(url=self._build_path(batch_id))
def is_feeder(self, team_id=None): """Ensure the resource has the role FEEDER.""" if team_id is None: return self._is_feeder team_id = uuid.UUID(str(team_id)) if team_id not in self.teams_ids: return False return self.teams[team_id]['role'] == 'FEEDER'
Ensure the resource has the role FEEDER.
Below is the instruction that describes the task: ### Input: Ensure the resource has the role FEEDER. ### Response: def is_feeder(self, team_id=None): """Ensure the resource has the role FEEDER.""" if team_id is None: return self._is_feeder team_id = uuid.UUID(str(team_id)) if team_id not in self.teams_ids: return False return self.teams[team_id]['role'] == 'FEEDER'
def patch_worker_run_task(): """ Patches the ``luigi.worker.Worker._run_task`` method to store the worker id and the id of its first task in the task. This information is required by the sandboxing mechanism """ _run_task = luigi.worker.Worker._run_task def run_task(self, task_id): task = self._scheduled_tasks[task_id] task._worker_id = self._id task._worker_task = self._first_task try: _run_task(self, task_id) finally: task._worker_id = None task._worker_task = None # make worker disposable when sandboxed if os.getenv("LAW_SANDBOX_SWITCHED") == "1": self._start_phasing_out() luigi.worker.Worker._run_task = run_task
Patches the ``luigi.worker.Worker._run_task`` method to store the worker id and the id of its first task in the task. This information is required by the sandboxing mechanism
Below is the instruction that describes the task: ### Input: Patches the ``luigi.worker.Worker._run_task`` method to store the worker id and the id of its first task in the task. This information is required by the sandboxing mechanism ### Response: def patch_worker_run_task(): """ Patches the ``luigi.worker.Worker._run_task`` method to store the worker id and the id of its first task in the task. This information is required by the sandboxing mechanism """ _run_task = luigi.worker.Worker._run_task def run_task(self, task_id): task = self._scheduled_tasks[task_id] task._worker_id = self._id task._worker_task = self._first_task try: _run_task(self, task_id) finally: task._worker_id = None task._worker_task = None # make worker disposable when sandboxed if os.getenv("LAW_SANDBOX_SWITCHED") == "1": self._start_phasing_out() luigi.worker.Worker._run_task = run_task
def reset(self): """ Removes all accounts. """ with self.unlock_cond: for owner in self.owner2account: self.release_accounts(owner) self._remove_account(self.accounts.copy()) self.unlock_cond.notify_all()
Removes all accounts.
Below is the instruction that describes the task: ### Input: Removes all accounts. ### Response: def reset(self): """ Removes all accounts. """ with self.unlock_cond: for owner in self.owner2account: self.release_accounts(owner) self._remove_account(self.accounts.copy()) self.unlock_cond.notify_all()
def change_view(self, request, object_id, **kwargs): """ For the concrete model, check ``get_content_model()`` for a subclass and redirect to its admin change view. """ instance = get_object_or_404(self.concrete_model, pk=object_id) content_model = instance.get_content_model() self.check_permission(request, content_model, "change") if content_model.__class__ != self.model: change_url = admin_url(content_model.__class__, "change", content_model.id) return HttpResponseRedirect(change_url) return super(ContentTypedAdmin, self).change_view( request, object_id, **kwargs)
For the concrete model, check ``get_content_model()`` for a subclass and redirect to its admin change view.
Below is the instruction that describes the task: ### Input: For the concrete model, check ``get_content_model()`` for a subclass and redirect to its admin change view. ### Response: def change_view(self, request, object_id, **kwargs): """ For the concrete model, check ``get_content_model()`` for a subclass and redirect to its admin change view. """ instance = get_object_or_404(self.concrete_model, pk=object_id) content_model = instance.get_content_model() self.check_permission(request, content_model, "change") if content_model.__class__ != self.model: change_url = admin_url(content_model.__class__, "change", content_model.id) return HttpResponseRedirect(change_url) return super(ContentTypedAdmin, self).change_view( request, object_id, **kwargs)
def from_config(cls, name, config): """ Override of the base `from_config()` method that returns `None` if the name of the config file isn't "logging". We do this in case this `Configurable` subclass winds up sharing the root of the config directory with other subclasses. """ if name != cls.name: return return super(Logging, cls).from_config(name, config)
Override of the base `from_config()` method that returns `None` if the name of the config file isn't "logging". We do this in case this `Configurable` subclass winds up sharing the root of the config directory with other subclasses.
Below is the instruction that describes the task: ### Input: Override of the base `from_config()` method that returns `None` if the name of the config file isn't "logging". We do this in case this `Configurable` subclass winds up sharing the root of the config directory with other subclasses. ### Response: def from_config(cls, name, config): """ Override of the base `from_config()` method that returns `None` if the name of the config file isn't "logging". We do this in case this `Configurable` subclass winds up sharing the root of the config directory with other subclasses. """ if name != cls.name: return return super(Logging, cls).from_config(name, config)
def sheets(self): """ Collection of sheets in this document. """ # http://www.openoffice.org/api/docs/common/ref/com/sun/star/sheet/XSpreadsheetDocument.html#getSheets try: return self._sheets except AttributeError: target = self._target.getSheets() self._sheets = SpreadsheetCollection(self, target) return self._sheets
Collection of sheets in this document.
Below is the instruction that describes the task: ### Input: Collection of sheets in this document. ### Response: def sheets(self): """ Collection of sheets in this document. """ # http://www.openoffice.org/api/docs/common/ref/com/sun/star/sheet/XSpreadsheetDocument.html#getSheets try: return self._sheets except AttributeError: target = self._target.getSheets() self._sheets = SpreadsheetCollection(self, target) return self._sheets
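The try/except AttributeError idiom in sheets() is a general lazy-caching pattern worth isolating; here is a minimal standalone sketch (the class, `_load_sheets`, and the sheet names are illustrative, not from the library above):

```python
class Document:
    """Caches an expensive attribute on first access, as sheets() does above."""

    def _load_sheets(self):
        # Stand-in for the expensive UNO call self._target.getSheets().
        return ["Sheet1", "Sheet2"]

    @property
    def sheets(self):
        try:
            return self._sheets                  # cache hit: attribute already set
        except AttributeError:
            self._sheets = self._load_sheets()   # first access: compute and store
            return self._sheets

doc = Document()
first = doc.sheets    # triggers _load_sheets()
second = doc.sheets   # served from the cached attribute
```

Because the cached value is stored as an instance attribute, every later access returns the very same object.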
def log_with_message(message): """ Decorator that logs when the wrapped function runs; parameters can also be passed to the decorator itself. :param message: :return: """ def decorator(func): @wraps(func) def wrapper(*args, **kwargs): print('decorator log_with_message is running, %s' % message) ret = func(*args, **kwargs) return ret return wrapper return decorator
Decorator that logs when the wrapped function runs; parameters can also be passed to the decorator itself. :param message: :return:
Below is the instruction that describes the task: ### Input: Decorator that logs when the wrapped function runs; parameters can also be passed to the decorator itself. :param message: :return: ### Response: def log_with_message(message): """ Decorator that logs when the wrapped function runs; parameters can also be passed to the decorator itself. :param message: :return: """ def decorator(func): @wraps(func) def wrapper(*args, **kwargs): print('decorator log_with_message is running, %s' % message) ret = func(*args, **kwargs) return ret return wrapper return decorator
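A quick usage sketch of log_with_message, showing how the argument-taking decorator is applied (the decorated function `add` is invented for illustration):

```python
from functools import wraps

def log_with_message(message):
    """Decorator factory: logs a fixed message every time the wrapped function runs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print('decorator log_with_message is running, %s' % message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@log_with_message("adding two numbers")
def add(a, b):
    return a + b

result = add(2, 3)  # prints the log line, then returns 5
```

Note that `@wraps(func)` keeps the wrapped function's metadata intact, so `add.__name__` is still `'add'` after decoration.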
async def read(self): """|coro| Retrieves the content of this asset as a :class:`bytes` object. .. warning:: :class:`PartialEmoji` won't have a connection state if user created, and a URL won't be present if a custom image isn't associated with the asset, e.g. a guild with no custom icon. .. versionadded:: 1.1.0 Raises ------ DiscordException There was no valid URL or internal connection state. HTTPException Downloading the asset failed. NotFound The asset was deleted. Returns ------- :class:`bytes` The content of the asset. """ if not self._url: raise DiscordException('Invalid asset (no URL provided)') if self._state is None: raise DiscordException('Invalid state (no ConnectionState provided)') return await self._state.http.get_from_cdn(self._url)
|coro| Retrieves the content of this asset as a :class:`bytes` object. .. warning:: :class:`PartialEmoji` won't have a connection state if user created, and a URL won't be present if a custom image isn't associated with the asset, e.g. a guild with no custom icon. .. versionadded:: 1.1.0 Raises ------ DiscordException There was no valid URL or internal connection state. HTTPException Downloading the asset failed. NotFound The asset was deleted. Returns ------- :class:`bytes` The content of the asset.
Below is the instruction that describes the task: ### Input: |coro| Retrieves the content of this asset as a :class:`bytes` object. .. warning:: :class:`PartialEmoji` won't have a connection state if user created, and a URL won't be present if a custom image isn't associated with the asset, e.g. a guild with no custom icon. .. versionadded:: 1.1.0 Raises ------ DiscordException There was no valid URL or internal connection state. HTTPException Downloading the asset failed. NotFound The asset was deleted. Returns ------- :class:`bytes` The content of the asset. ### Response: async def read(self): """|coro| Retrieves the content of this asset as a :class:`bytes` object. .. warning:: :class:`PartialEmoji` won't have a connection state if user created, and a URL won't be present if a custom image isn't associated with the asset, e.g. a guild with no custom icon. .. versionadded:: 1.1.0 Raises ------ DiscordException There was no valid URL or internal connection state. HTTPException Downloading the asset failed. NotFound The asset was deleted. Returns ------- :class:`bytes` The content of the asset. """ if not self._url: raise DiscordException('Invalid asset (no URL provided)') if self._state is None: raise DiscordException('Invalid state (no ConnectionState provided)') return await self._state.http.get_from_cdn(self._url)
def EnumerateConfig(self, service, path, cache, filter_type=None): """Return PamConfigEntries it finds as it recursively follows PAM configs. Args: service: A string containing the service name we are processing. path: A string containing the file path name we want. cache: A dictionary keyed on path, with the file contents (list of str). filter_type: A string containing type name of the results we want. Returns: A tuple of a list of RDFValue PamConfigEntries found & a list of strings which are the external config references found. """ result = [] external = [] path = self._FixPath(path) # Make sure we only look at files under PAMDIR. # Check we have the file in our artifact/cache. If not, our artifact # didn't give it to us, and that's a problem. # Note: This should only ever happen if it was referenced # from /etc/pam.conf so we can assume that was the file. if path not in cache: external.append("%s -> %s", self.OLD_PAMCONF_FILENAME, path) return result, external for tokens in self.ParseEntries(cache[path]): if path == self.OLD_PAMCONF_FILENAME: # We are processing the old style PAM conf file. It's a special case. # It's format is "service type control module-path module-arguments" # i.e. the 'service' is the first arg, the rest is line # is like everything else except for that addition. try: service = tokens[0] # Grab the service from the start line. tokens = tokens[1:] # Make the rest of the line look like "normal". except IndexError: continue # It's a blank line, skip it. # Process any inclusions in the line. new_path = None filter_request = None try: # If a line starts with @include, then include the entire referenced # file. # e.g. "@include common-auth" if tokens[0] == "@include": new_path = tokens[1] # If a line's second arg is an include/substack, then filter the # referenced file only including entries that match the 'type' # requested. # e.g. "auth include common-auth-screensaver" elif tokens[1] in ["include", "substack"]: new_path = tokens[2] filter_request = tokens[0] except IndexError: # It's not a valid include line, so keep processing as normal. pass # If we found an include file, enumerate that file now, and # included it where we are in this config file. if new_path: # Preemptively check to see if we have a problem where the config # is referencing a file outside of the expected/defined artifact. # Doing it here allows us to produce a better context for the # problem. Hence the slight duplication of code. new_path = self._FixPath(new_path) if new_path not in cache: external.append("%s -> %s" % (path, new_path)) continue # Skip to the next line of the file. r, e = self.EnumerateConfig(service, new_path, cache, filter_request) result.extend(r) external.extend(e) else: # If we have been asked to filter on types, skip over any types # we are not interested in. if filter_type and tokens[0] != filter_type: continue # We can skip this line. # If we got here, then we want to include this line in this service's # config. # Reform the line and break into the correct fields as best we can. # Note: ParseEntries doesn't cope with what we need to do. match = self.PAMCONF_RE.match(" ".join(tokens)) if match: p_type, control, module_path, module_args = match.group(1, 2, 3, 4) # Trim a leading "-" from the type field if present. if p_type.startswith("-"): p_type = p_type[1:] result.append( rdf_config_file.PamConfigEntry( service=service, type=p_type, control=control, module_path=module_path, module_args=module_args)) return result, external
Return PamConfigEntries it finds as it recursively follows PAM configs. Args: service: A string containing the service name we are processing. path: A string containing the file path name we want. cache: A dictionary keyed on path, with the file contents (list of str). filter_type: A string containing type name of the results we want. Returns: A tuple of a list of RDFValue PamConfigEntries found & a list of strings which are the external config references found.
Below is the instruction that describes the task: ### Input: Return PamConfigEntries it finds as it recursively follows PAM configs. Args: service: A string containing the service name we are processing. path: A string containing the file path name we want. cache: A dictionary keyed on path, with the file contents (list of str). filter_type: A string containing type name of the results we want. Returns: A tuple of a list of RDFValue PamConfigEntries found & a list of strings which are the external config references found. ### Response: def EnumerateConfig(self, service, path, cache, filter_type=None): """Return PamConfigEntries it finds as it recursively follows PAM configs. Args: service: A string containing the service name we are processing. path: A string containing the file path name we want. cache: A dictionary keyed on path, with the file contents (list of str). filter_type: A string containing type name of the results we want. Returns: A tuple of a list of RDFValue PamConfigEntries found & a list of strings which are the external config references found. """ result = [] external = [] path = self._FixPath(path) # Make sure we only look at files under PAMDIR. # Check we have the file in our artifact/cache. If not, our artifact # didn't give it to us, and that's a problem. # Note: This should only ever happen if it was referenced # from /etc/pam.conf so we can assume that was the file. if path not in cache: external.append("%s -> %s", self.OLD_PAMCONF_FILENAME, path) return result, external for tokens in self.ParseEntries(cache[path]): if path == self.OLD_PAMCONF_FILENAME: # We are processing the old style PAM conf file. It's a special case. # It's format is "service type control module-path module-arguments" # i.e. the 'service' is the first arg, the rest is line # is like everything else except for that addition. try: service = tokens[0] # Grab the service from the start line. tokens = tokens[1:] # Make the rest of the line look like "normal". except IndexError: continue # It's a blank line, skip it. # Process any inclusions in the line. new_path = None filter_request = None try: # If a line starts with @include, then include the entire referenced # file. # e.g. "@include common-auth" if tokens[0] == "@include": new_path = tokens[1] # If a line's second arg is an include/substack, then filter the # referenced file only including entries that match the 'type' # requested. # e.g. "auth include common-auth-screensaver" elif tokens[1] in ["include", "substack"]: new_path = tokens[2] filter_request = tokens[0] except IndexError: # It's not a valid include line, so keep processing as normal. pass # If we found an include file, enumerate that file now, and # included it where we are in this config file. if new_path: # Preemptively check to see if we have a problem where the config # is referencing a file outside of the expected/defined artifact. # Doing it here allows us to produce a better context for the # problem. Hence the slight duplication of code. new_path = self._FixPath(new_path) if new_path not in cache: external.append("%s -> %s" % (path, new_path)) continue # Skip to the next line of the file. r, e = self.EnumerateConfig(service, new_path, cache, filter_request) result.extend(r) external.extend(e) else: # If we have been asked to filter on types, skip over any types # we are not interested in. if filter_type and tokens[0] != filter_type: continue # We can skip this line. # If we got here, then we want to include this line in this service's # config. # Reform the line and break into the correct fields as best we can. # Note: ParseEntries doesn't cope with what we need to do. match = self.PAMCONF_RE.match(" ".join(tokens)) if match: p_type, control, module_path, module_args = match.group(1, 2, 3, 4) # Trim a leading "-" from the type field if present. if p_type.startswith("-"): p_type = p_type[1:] result.append( rdf_config_file.PamConfigEntry( service=service, type=p_type, control=control, module_path=module_path, module_args=module_args)) return result, external
def authenticated(func): """ Decorator to check if Smappee's access token has expired. If it has, use the refresh token to request a new access token """ @wraps(func) def wrapper(*args, **kwargs): self = args[0] if self.refresh_token is not None and \ self.token_expiration_time <= dt.datetime.utcnow(): self.re_authenticate() return func(*args, **kwargs) return wrapper
Decorator to check if Smappee's access token has expired. If it has, use the refresh token to request a new access token
Below is the instruction that describes the task: ### Input: Decorator to check if Smappee's access token has expired. If it has, use the refresh token to request a new access token ### Response: def authenticated(func): """ Decorator to check if Smappee's access token has expired. If it has, use the refresh token to request a new access token """ @wraps(func) def wrapper(*args, **kwargs): self = args[0] if self.refresh_token is not None and \ self.token_expiration_time <= dt.datetime.utcnow(): self.re_authenticate() return func(*args, **kwargs) return wrapper
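The authenticated decorator above can be exercised with a tiny stand-in client; the `FakeSmappee` class, its attributes, and the `consumption` method are hypothetical scaffolding for the demo, not the real Smappee API:

```python
import datetime as dt
from functools import wraps

def authenticated(func):
    """Re-authenticate first when the stored token has expired (as in the source)."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        self = args[0]
        if self.refresh_token is not None and \
                self.token_expiration_time <= dt.datetime.utcnow():
            self.re_authenticate()
        return func(*args, **kwargs)
    return wrapper

class FakeSmappee:
    """Hypothetical stand-in client, only to exercise the decorator."""
    def __init__(self):
        self.refresh_token = "refresh-123"
        self.token_expiration_time = dt.datetime.utcnow()  # already expired
        self.re_authenticated = False

    def re_authenticate(self):
        self.re_authenticated = True
        self.token_expiration_time = dt.datetime.utcnow() + dt.timedelta(hours=1)

    @authenticated
    def consumption(self):
        return {"power": 42}

client = FakeSmappee()
data = client.consumption()  # expired token -> re_authenticate() runs first
```

After the first call the expiry is an hour in the future, so subsequent calls skip the refresh until it lapses again.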
def _get_tab_description(self): """Returns "description" of current tab (tab text without shortcut info).""" text = self._get_page().text_tab if "(" in text: text = text[:text.index("(") - 1] text = text[0].lower() + text[1:] return text
Returns "description" of current tab (tab text without shortcut info).
Below is the instruction that describes the task: ### Input: Returns "description" of current tab (tab text without shortcut info). ### Response: def _get_tab_description(self): """Returns "description" of current tab (tab text without shortcut info).""" text = self._get_page().text_tab if "(" in text: text = text[:text.index("(") - 1] text = text[0].lower() + text[1:] return text
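The string handling in _get_tab_description stands on its own once the page lookup is removed; this sketch takes the tab text directly (the example tab titles are made up):

```python
def tab_description(text):
    """Strip a trailing ' (shortcut)' suffix and lower-case the first letter."""
    if "(" in text:
        text = text[:text.index("(") - 1]   # also drops the space before '('
    return text[0].lower() + text[1:]

desc = tab_description("Formula (F5)")  # -> "formula"
```

Note the `- 1` assumes a single space precedes the opening parenthesis; a title without a shortcut, such as `"Help"`, passes through with only the first letter lower-cased.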
def sor(A, x, b, omega, iterations=1, sweep='forward'): """Perform SOR iteration on the linear system Ax=b. Parameters ---------- A : csr_matrix, bsr_matrix Sparse NxN matrix x : ndarray Approximate solution (length N) b : ndarray Right-hand side (length N) omega : scalar Damping parameter iterations : int Number of iterations to perform sweep : {'forward','backward','symmetric'} Direction of sweep Returns ------- Nothing, x will be modified in place. Notes ----- When omega=1.0, SOR is equivalent to Gauss-Seidel. Examples -------- >>> # Use SOR as stand-alone solver >>> from pyamg.relaxation.relaxation import sor >>> from pyamg.gallery import poisson >>> from pyamg.util.linalg import norm >>> import numpy as np >>> A = poisson((10,10), format='csr') >>> x0 = np.zeros((A.shape[0],1)) >>> b = np.ones((A.shape[0],1)) >>> sor(A, x0, b, 1.33, iterations=10) >>> print norm(b-A*x0) 3.03888724811 >>> # >>> # Use SOR as the multigrid smoother >>> from pyamg import smoothed_aggregation_solver >>> sa = smoothed_aggregation_solver(A, B=np.ones((A.shape[0],1)), ... coarse_solver='pinv2', max_coarse=50, ... presmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33}), ... postsmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33})) >>> x0 = np.zeros((A.shape[0],1)) >>> residuals=[] >>> x = sa.solve(b, x0=x0, tol=1e-8, residuals=residuals) """ A, x, b = make_system(A, x, b, formats=['csr', 'bsr']) x_old = np.empty_like(x) for i in range(iterations): x_old[:] = x gauss_seidel(A, x, b, iterations=1, sweep=sweep) x *= omega x_old *= (1-omega) x += x_old
Perform SOR iteration on the linear system Ax=b. Parameters ---------- A : csr_matrix, bsr_matrix Sparse NxN matrix x : ndarray Approximate solution (length N) b : ndarray Right-hand side (length N) omega : scalar Damping parameter iterations : int Number of iterations to perform sweep : {'forward','backward','symmetric'} Direction of sweep Returns ------- Nothing, x will be modified in place. Notes ----- When omega=1.0, SOR is equivalent to Gauss-Seidel. Examples -------- >>> # Use SOR as stand-alone solver >>> from pyamg.relaxation.relaxation import sor >>> from pyamg.gallery import poisson >>> from pyamg.util.linalg import norm >>> import numpy as np >>> A = poisson((10,10), format='csr') >>> x0 = np.zeros((A.shape[0],1)) >>> b = np.ones((A.shape[0],1)) >>> sor(A, x0, b, 1.33, iterations=10) >>> print norm(b-A*x0) 3.03888724811 >>> # >>> # Use SOR as the multigrid smoother >>> from pyamg import smoothed_aggregation_solver >>> sa = smoothed_aggregation_solver(A, B=np.ones((A.shape[0],1)), ... coarse_solver='pinv2', max_coarse=50, ... presmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33}), ... postsmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33})) >>> x0 = np.zeros((A.shape[0],1)) >>> residuals=[] >>> x = sa.solve(b, x0=x0, tol=1e-8, residuals=residuals)
Below is the instruction that describes the task: ### Input: Perform SOR iteration on the linear system Ax=b. Parameters ---------- A : csr_matrix, bsr_matrix Sparse NxN matrix x : ndarray Approximate solution (length N) b : ndarray Right-hand side (length N) omega : scalar Damping parameter iterations : int Number of iterations to perform sweep : {'forward','backward','symmetric'} Direction of sweep Returns ------- Nothing, x will be modified in place. Notes ----- When omega=1.0, SOR is equivalent to Gauss-Seidel. Examples -------- >>> # Use SOR as stand-alone solver >>> from pyamg.relaxation.relaxation import sor >>> from pyamg.gallery import poisson >>> from pyamg.util.linalg import norm >>> import numpy as np >>> A = poisson((10,10), format='csr') >>> x0 = np.zeros((A.shape[0],1)) >>> b = np.ones((A.shape[0],1)) >>> sor(A, x0, b, 1.33, iterations=10) >>> print norm(b-A*x0) 3.03888724811 >>> # >>> # Use SOR as the multigrid smoother >>> from pyamg import smoothed_aggregation_solver >>> sa = smoothed_aggregation_solver(A, B=np.ones((A.shape[0],1)), ... coarse_solver='pinv2', max_coarse=50, ... presmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33}), ... postsmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33})) >>> x0 = np.zeros((A.shape[0],1)) >>> residuals=[] >>> x = sa.solve(b, x0=x0, tol=1e-8, residuals=residuals) ### Response: def sor(A, x, b, omega, iterations=1, sweep='forward'): """Perform SOR iteration on the linear system Ax=b. Parameters ---------- A : csr_matrix, bsr_matrix Sparse NxN matrix x : ndarray Approximate solution (length N) b : ndarray Right-hand side (length N) omega : scalar Damping parameter iterations : int Number of iterations to perform sweep : {'forward','backward','symmetric'} Direction of sweep Returns ------- Nothing, x will be modified in place. Notes ----- When omega=1.0, SOR is equivalent to Gauss-Seidel. Examples -------- >>> # Use SOR as stand-alone solver >>> from pyamg.relaxation.relaxation import sor >>> from pyamg.gallery import poisson >>> from pyamg.util.linalg import norm >>> import numpy as np >>> A = poisson((10,10), format='csr') >>> x0 = np.zeros((A.shape[0],1)) >>> b = np.ones((A.shape[0],1)) >>> sor(A, x0, b, 1.33, iterations=10) >>> print norm(b-A*x0) 3.03888724811 >>> # >>> # Use SOR as the multigrid smoother >>> from pyamg import smoothed_aggregation_solver >>> sa = smoothed_aggregation_solver(A, B=np.ones((A.shape[0],1)), ... coarse_solver='pinv2', max_coarse=50, ... presmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33}), ... postsmoother=('sor', {'sweep':'symmetric', 'omega' : 1.33})) >>> x0 = np.zeros((A.shape[0],1)) >>> residuals=[] >>> x = sa.solve(b, x0=x0, tol=1e-8, residuals=residuals) """ A, x, b = make_system(A, x, b, formats=['csr', 'bsr']) x_old = np.empty_like(x) for i in range(iterations): x_old[:] = x gauss_seidel(A, x, b, iterations=1, sweep=sweep) x *= omega x_old *= (1-omega) x += x_old
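The core of sor() is a full Gauss-Seidel sweep blended with the previous iterate by the damping parameter. That step can be mimicked without pyamg on a small dense system; this is an illustrative sketch (forward sweep only, not the library routine):

```python
import numpy as np

def sor_dense(A, x, b, omega, iterations=1):
    """Dense-matrix sketch of the sweep above: one Gauss-Seidel pass,
    then blend with the previous iterate using omega."""
    n = len(b)
    for _ in range(iterations):
        x_old = x.copy()
        for i in range(n):  # forward Gauss-Seidel sweep
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        x[:] = omega * x + (1.0 - omega) * x_old  # damping, as in sor() above
    return x

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([3.0, 3.0])        # exact solution is x = [1, 1]
x = np.zeros(2)
sor_dense(A, x, b, omega=1.1, iterations=50)
```

With omega=1.0 this reduces to plain Gauss-Seidel, matching the note in the docstring.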
def _generate_matrix(self, hash_bytes): """ Generates matrix that describes which blocks should be coloured. Arguments: hash_bytes - List of hash byte values for which the identicon is being generated. Each element of the list should be an integer from 0 to 255. Returns: List of rows, where each element in a row is boolean. True means the foreground colour should be used, False means a background colour should be used. """ # Since the identicon needs to be symmetric, we'll need to work on half # the columns (rounded-up), and reflect where necessary. half_columns = self.columns // 2 + self.columns % 2 cells = self.rows * half_columns # Initialise the matrix (list of rows) that will be returned. matrix = [[False] * self.columns for _ in range(self.rows)] # Process the cells one by one. for cell in range(cells): # If the bit from hash correpsonding to this cell is 1, mark the # cell as foreground one. Do not use first byte (since that one is # used for determining the foreground colour. if self._get_bit(cell, hash_bytes[1:]): # Determine the cell coordinates in matrix. column = cell // self.columns row = cell % self.rows # Mark the cell and its reflection. Central column may get # marked twice, but we don't care. matrix[row][column] = True matrix[row][self.columns - column - 1] = True return matrix
Generates matrix that describes which blocks should be coloured. Arguments: hash_bytes - List of hash byte values for which the identicon is being generated. Each element of the list should be an integer from 0 to 255. Returns: List of rows, where each element in a row is boolean. True means the foreground colour should be used, False means a background colour should be used.
Below is the instruction that describes the task: ### Input: Generates matrix that describes which blocks should be coloured. Arguments: hash_bytes - List of hash byte values for which the identicon is being generated. Each element of the list should be an integer from 0 to 255. Returns: List of rows, where each element in a row is boolean. True means the foreground colour should be used, False means a background colour should be used. ### Response: def _generate_matrix(self, hash_bytes): """ Generates matrix that describes which blocks should be coloured. Arguments: hash_bytes - List of hash byte values for which the identicon is being generated. Each element of the list should be an integer from 0 to 255. Returns: List of rows, where each element in a row is boolean. True means the foreground colour should be used, False means a background colour should be used. """ # Since the identicon needs to be symmetric, we'll need to work on half # the columns (rounded-up), and reflect where necessary. half_columns = self.columns // 2 + self.columns % 2 cells = self.rows * half_columns # Initialise the matrix (list of rows) that will be returned. matrix = [[False] * self.columns for _ in range(self.rows)] # Process the cells one by one. for cell in range(cells): # If the bit from hash correpsonding to this cell is 1, mark the # cell as foreground one. Do not use first byte (since that one is # used for determining the foreground colour. if self._get_bit(cell, hash_bytes[1:]): # Determine the cell coordinates in matrix. column = cell // self.columns row = cell % self.rows # Mark the cell and its reflection. Central column may get # marked twice, but we don't care. matrix[row][column] = True matrix[row][self.columns - column - 1] = True return matrix
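The mirroring logic of _generate_matrix can be reproduced standalone. The `_get_bit` helper is not shown in the source, so the bit extraction below (bit n taken MSB-first across the byte list) is an assumption; the rest mirrors the original indexing:

```python
def get_bit(n, byte_list):
    """Assumed helper: n-th bit of the byte sequence, MSB-first within each byte."""
    return bool((byte_list[n // 8] >> (7 - n % 8)) & 1)

def generate_matrix(hash_bytes, rows=5, columns=5):
    half_columns = columns // 2 + columns % 2       # work on half, rounded up
    matrix = [[False] * columns for _ in range(rows)]
    for cell in range(rows * half_columns):
        if get_bit(cell, hash_bytes[1:]):           # byte 0 is reserved for colour
            column = cell // columns                # same indexing as the original
            row = cell % rows
            matrix[row][column] = True
            matrix[row][columns - column - 1] = True  # mirror across the centre
    return matrix

m = generate_matrix([0x12, 0xAB, 0xCD, 0xEF, 0x01])
```

Because every marked cell is mirrored, each row of the result is left-right symmetric regardless of the hash bytes.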
def _collect_attrs(self, name, obj): """Collect all the attributes for the provided file object. """ for key in obj.ncattrs(): value = getattr(obj, key) fc_key = "{}/attr/{}".format(name, key) try: self.file_content[fc_key] = np2str(value) except ValueError: self.file_content[fc_key] = value
Collect all the attributes for the provided file object.
Below is the instruction that describes the task: ### Input: Collect all the attributes for the provided file object. ### Response: def _collect_attrs(self, name, obj): """Collect all the attributes for the provided file object. """ for key in obj.ncattrs(): value = getattr(obj, key) fc_key = "{}/attr/{}".format(name, key) try: self.file_content[fc_key] = np2str(value) except ValueError: self.file_content[fc_key] = value
def createNetwork(dataSource): """Create and initialize a network.""" with open(_PARAMS_PATH, "r") as f: modelParams = yaml.safe_load(f)["modelParams"] # Create a network that will hold the regions. network = Network() # Add a sensor region. network.addRegion("sensor", "py.RecordSensor", '{}') # Set the encoder and data source of the sensor region. sensorRegion = network.regions["sensor"].getSelf() sensorRegion.encoder = createEncoder(modelParams["sensorParams"]["encoders"]) sensorRegion.dataSource = dataSource # Make sure the SP input width matches the sensor region output width. modelParams["spParams"]["inputWidth"] = sensorRegion.encoder.getWidth() # Add SP and TM regions. network.addRegion("SP", "py.SPRegion", json.dumps(modelParams["spParams"])) network.addRegion("TM", "py.TMRegion", json.dumps(modelParams["tmParams"])) # Add a classifier region. clName = "py.%s" % modelParams["clParams"].pop("regionName") network.addRegion("classifier", clName, json.dumps(modelParams["clParams"])) # Add all links createSensorToClassifierLinks(network, "sensor", "classifier") createDataOutLink(network, "sensor", "SP") createFeedForwardLink(network, "SP", "TM") createFeedForwardLink(network, "TM", "classifier") # Reset links are optional, since the sensor region does not send resets. createResetLink(network, "sensor", "SP") createResetLink(network, "sensor", "TM") # Make sure all objects are initialized. network.initialize() return network
Create and initialize a network.
Below is the instruction that describes the task: ### Input: Create and initialize a network. ### Response: def createNetwork(dataSource): """Create and initialize a network.""" with open(_PARAMS_PATH, "r") as f: modelParams = yaml.safe_load(f)["modelParams"] # Create a network that will hold the regions. network = Network() # Add a sensor region. network.addRegion("sensor", "py.RecordSensor", '{}') # Set the encoder and data source of the sensor region. sensorRegion = network.regions["sensor"].getSelf() sensorRegion.encoder = createEncoder(modelParams["sensorParams"]["encoders"]) sensorRegion.dataSource = dataSource # Make sure the SP input width matches the sensor region output width. modelParams["spParams"]["inputWidth"] = sensorRegion.encoder.getWidth() # Add SP and TM regions. network.addRegion("SP", "py.SPRegion", json.dumps(modelParams["spParams"])) network.addRegion("TM", "py.TMRegion", json.dumps(modelParams["tmParams"])) # Add a classifier region. clName = "py.%s" % modelParams["clParams"].pop("regionName") network.addRegion("classifier", clName, json.dumps(modelParams["clParams"])) # Add all links createSensorToClassifierLinks(network, "sensor", "classifier") createDataOutLink(network, "sensor", "SP") createFeedForwardLink(network, "SP", "TM") createFeedForwardLink(network, "TM", "classifier") # Reset links are optional, since the sensor region does not send resets. createResetLink(network, "sensor", "SP") createResetLink(network, "sensor", "TM") # Make sure all objects are initialized. network.initialize() return network
def _generate_atom_feed(self, feed): """ A function returning a feed like `feedgen.feed.FeedGenerator`. The function can be overwritten when used in other applications. :param feed: a feed object :return: an atom feed `feedgen.feed.FeedGenerator` """ atom_feed = self.init_atom_feed(feed) atom_feed.title("Feed") return atom_feed
A function returning a feed like `feedgen.feed.FeedGenerator`. The function can be overwritten when used in other applications. :param feed: a feed object :return: an atom feed `feedgen.feed.FeedGenerator`
Below is the instruction that describes the task: ### Input: A function returning a feed like `feedgen.feed.FeedGenerator`. The function can be overwritten when used in other applications. :param feed: a feed object :return: an atom feed `feedgen.feed.FeedGenerator` ### Response: def _generate_atom_feed(self, feed): """ A function returning a feed like `feedgen.feed.FeedGenerator`. The function can be overwritten when used in other applications. :param feed: a feed object :return: an atom feed `feedgen.feed.FeedGenerator` """ atom_feed = self.init_atom_feed(feed) atom_feed.title("Feed") return atom_feed
async def _connect(self):
    """Start asynchronous reconnect loop."""
    self.waiting = True
    await self.client.start(self.ip)
    self.waiting = False
    if self.client.protocol is None:
        raise IOError("Could not connect to '{}'.".format(self.ip))
    self.open = True
Start asynchronous reconnect loop.
Below is the instruction that describes the task:
### Input:
Start asynchronous reconnect loop.
### Response:
async def _connect(self):
    """Start asynchronous reconnect loop."""
    self.waiting = True
    await self.client.start(self.ip)
    self.waiting = False
    if self.client.protocol is None:
        raise IOError("Could not connect to '{}'.".format(self.ip))
    self.open = True
def unregisterAddon(cls, name):
    """
    Unregisters the addon defined by the given name from the class.

    :param name | <str>
    """
    prop = '_{0}__addons'.format(cls.__name__)
    cmds = getattr(cls, prop, {})
    cmds.pop(name, None)
Unregisters the addon defined by the given name from the class.

:param name | <str>
Below is the instruction that describes the task:
### Input:
Unregisters the addon defined by the given name from the class.

:param name | <str>
### Response:
def unregisterAddon(cls, name):
    """
    Unregisters the addon defined by the given name from the class.

    :param name | <str>
    """
    prop = '_{0}__addons'.format(cls.__name__)
    cmds = getattr(cls, prop, {})
    cmds.pop(name, None)
def Spitzglass_low(SG, Tavg, L=None, D=None, P1=None, P2=None, Q=None, Ts=288.7, Ps=101325., Zavg=1, E=1.):
    r'''Calculation function for dealing with flow of a compressible gas in a pipeline with the Spitzglass (low pressure drop) formula. Can calculate any of the following, given all other inputs:

    * Flow rate
    * Upstream pressure
    * Downstream pressure
    * Diameter of pipe (numerical solution)
    * Length of pipe

    A variety of different constants and expressions have been presented for the Spitzglass (low pressure drop) formula. Here, the form as in [1]_ is used but with a more precise metric conversion from inches to m.

    .. math::
        Q = 125.1060 E \left(\frac{T_s}{P_s}\right)\left[\frac{2(P_1 -P_2)(P_s+1210)}{L \cdot {SG} \cdot T_{avg}Z_{avg} (1 + 0.09144/D + \frac{150}{127}D)}\right]^{0.5}D^{2.5}

    Parameters
    ----------
    SG : float
        Specific gravity of fluid with respect to air at the reference temperature and pressure `Ts` and `Ps`, [-]
    Tavg : float
        Average temperature of the fluid in the pipeline, [K]
    L : float, optional
        Length of pipe, [m]
    D : float, optional
        Diameter of pipe, [m]
    P1 : float, optional
        Inlet pressure to pipe, [Pa]
    P2 : float, optional
        Outlet pressure from pipe, [Pa]
    Q : float, optional
        Flow rate of gas through pipe, [m^3/s]
    Ts : float, optional
        Reference temperature for the specific gravity of the gas, [K]
    Ps : float, optional
        Reference pressure for the specific gravity of the gas, [Pa]
    Zavg : float, optional
        Average compressibility factor for gas, [-]
    E : float, optional
        Pipeline efficiency, a correction factor between 0 and 1

    Returns
    -------
    Q, P1, P2, D, or L : float
        The missing input which was solved for [base SI]

    Notes
    -----
    This equation is often presented without any correction for reference conditions for specific gravity.

    This model is also presented in [2]_ with a leading constant of 5.69E-2, the same exponents as used here, units of mm (diameter), kPa, km (length), and flow in m^3/hour. However, it is believed to contain a typo, and gives results <1/3 of the correct values. It is also present in [2]_ in imperial form; this is believed correct, but makes a slight assumption not done in [1]_.

    This model is present in [3]_ without reference corrections. The 1210 constant in [1]_ is an approximation necessary for the reference correction to function without a square of the pressure difference. The GPSA version is as follows, and matches this formulation very closely:

    .. math::
        Q = 0.821 \left[\frac{(P_1-P_2)D^5}{L \cdot {SG} (1 + 91.44/D + 0.0018D)}\right]^{0.5}

    The model is also shown in [4]_, with diameter in inches, length in feet, flow in MMSCFD, pressure drop in inH2O, and a rounded leading constant of 0.09; this makes its predictions several percent higher than the model here.

    Examples
    --------
    >>> Spitzglass_low(D=0.154051, P1=6720.3199, P2=0, L=54.864, SG=0.6, Tavg=288.7)
    0.9488775242530617

    References
    ----------
    .. [1] Coelho, Paulo M., and Carlos Pinho. "Considerations about Equations for Steady State Flow in Natural Gas Pipelines." Journal of the Brazilian Society of Mechanical Sciences and Engineering 29, no. 3 (September 2007): 262-73. doi:10.1590/S1678-58782007000300005.
    .. [2] Menon, E. Shashi. Gas Pipeline Hydraulics. 1st edition. Boca Raton, FL: CRC Press, 2005.
    .. [3] GPSA. GPSA Engineering Data Book. 13th edition. Gas Processors Suppliers Association, Tulsa, OK, 2012.
    .. [4] PetroWiki. "Pressure Drop Evaluation along Pipelines" Accessed September 11, 2016. http://petrowiki.org/Pressure_drop_evaluation_along_pipelines#Spitzglass_equation_2.
    '''
    c3 = 1.181102362204724409448818897637795275591 # 0.03/inch or 150/127
    c4 = 0.09144
    c5 = 125.1060
    if Q is None and (None not in [L, D, P1, P2]):
        return c5*Ts/Ps*D**2.5*E*(((P1-P2)*2*(Ps+1210.))/(L*SG*Tavg*Zavg*(1 + c4/D + c3*D)))**0.5
    elif D is None and (None not in [L, Q, P1, P2]):
        to_solve = lambda D : Q - Spitzglass_low(SG=SG, Tavg=Tavg, L=L, D=D, P1=P1, P2=P2, Ts=Ts, Ps=Ps, Zavg=Zavg, E=E)
        return newton(to_solve, 0.5)
    elif P1 is None and (None not in [L, Q, D, P2]):
        return 0.5*(2.0*D**6*E**2*P2*Ts**2*c5**2*(Ps + 1210.0) + D**2*L*Ps**2*Q**2*SG*Tavg*Zavg*c3 + D*L*Ps**2*Q**2*SG*Tavg*Zavg + L*Ps**2*Q**2*SG*Tavg*Zavg*c4)/(D**6*E**2*Ts**2*c5**2*(Ps + 1210.0))
    elif P2 is None and (None not in [L, Q, D, P1]):
        return 0.5*(2.0*D**6*E**2*P1*Ts**2*c5**2*(Ps + 1210.0) - D**2*L*Ps**2*Q**2*SG*Tavg*Zavg*c3 - D*L*Ps**2*Q**2*SG*Tavg*Zavg - L*Ps**2*Q**2*SG*Tavg*Zavg*c4)/(D**6*E**2*Ts**2*c5**2*(Ps + 1210.0))
    elif L is None and (None not in [P2, Q, D, P1]):
        return 2.0*D**6*E**2*Ts**2*c5**2*(P1*Ps + 1210.0*P1 - P2*Ps - 1210.0*P2)/(Ps**2*Q**2*SG*Tavg*Zavg*(D**2*c3 + D + c4))
    else:
        raise Exception('This function solves for either flow, upstream '
                        'pressure, downstream pressure, diameter, or length; '
                        'all other inputs must be provided.')
r'''Calculation function for dealing with flow of a compressible gas in a pipeline with the Spitzglass (low pressure drop) formula. Can calculate any of the following, given all other inputs:

* Flow rate
* Upstream pressure
* Downstream pressure
* Diameter of pipe (numerical solution)
* Length of pipe

A variety of different constants and expressions have been presented for the Spitzglass (low pressure drop) formula. Here, the form as in [1]_ is used but with a more precise metric conversion from inches to m.

.. math::
    Q = 125.1060 E \left(\frac{T_s}{P_s}\right)\left[\frac{2(P_1 -P_2)(P_s+1210)}{L \cdot {SG} \cdot T_{avg}Z_{avg} (1 + 0.09144/D + \frac{150}{127}D)}\right]^{0.5}D^{2.5}

Parameters
----------
SG : float
    Specific gravity of fluid with respect to air at the reference temperature and pressure `Ts` and `Ps`, [-]
Tavg : float
    Average temperature of the fluid in the pipeline, [K]
L : float, optional
    Length of pipe, [m]
D : float, optional
    Diameter of pipe, [m]
P1 : float, optional
    Inlet pressure to pipe, [Pa]
P2 : float, optional
    Outlet pressure from pipe, [Pa]
Q : float, optional
    Flow rate of gas through pipe, [m^3/s]
Ts : float, optional
    Reference temperature for the specific gravity of the gas, [K]
Ps : float, optional
    Reference pressure for the specific gravity of the gas, [Pa]
Zavg : float, optional
    Average compressibility factor for gas, [-]
E : float, optional
    Pipeline efficiency, a correction factor between 0 and 1

Returns
-------
Q, P1, P2, D, or L : float
    The missing input which was solved for [base SI]

Notes
-----
This equation is often presented without any correction for reference conditions for specific gravity.

This model is also presented in [2]_ with a leading constant of 5.69E-2, the same exponents as used here, units of mm (diameter), kPa, km (length), and flow in m^3/hour. However, it is believed to contain a typo, and gives results <1/3 of the correct values. It is also present in [2]_ in imperial form; this is believed correct, but makes a slight assumption not done in [1]_.

This model is present in [3]_ without reference corrections. The 1210 constant in [1]_ is an approximation necessary for the reference correction to function without a square of the pressure difference. The GPSA version is as follows, and matches this formulation very closely:

.. math::
    Q = 0.821 \left[\frac{(P_1-P_2)D^5}{L \cdot {SG} (1 + 91.44/D + 0.0018D)}\right]^{0.5}

The model is also shown in [4]_, with diameter in inches, length in feet, flow in MMSCFD, pressure drop in inH2O, and a rounded leading constant of 0.09; this makes its predictions several percent higher than the model here.

Examples
--------
>>> Spitzglass_low(D=0.154051, P1=6720.3199, P2=0, L=54.864, SG=0.6, Tavg=288.7)
0.9488775242530617

References
----------
.. [1] Coelho, Paulo M., and Carlos Pinho. "Considerations about Equations for Steady State Flow in Natural Gas Pipelines." Journal of the Brazilian Society of Mechanical Sciences and Engineering 29, no. 3 (September 2007): 262-73. doi:10.1590/S1678-58782007000300005.
.. [2] Menon, E. Shashi. Gas Pipeline Hydraulics. 1st edition. Boca Raton, FL: CRC Press, 2005.
.. [3] GPSA. GPSA Engineering Data Book. 13th edition. Gas Processors Suppliers Association, Tulsa, OK, 2012.
.. [4] PetroWiki. "Pressure Drop Evaluation along Pipelines" Accessed September 11, 2016. http://petrowiki.org/Pressure_drop_evaluation_along_pipelines#Spitzglass_equation_2.
Below is the instruction that describes the task:
### Input:
r'''Calculation function for dealing with flow of a compressible gas in a pipeline with the Spitzglass (low pressure drop) formula. Can calculate any of the following, given all other inputs:

* Flow rate
* Upstream pressure
* Downstream pressure
* Diameter of pipe (numerical solution)
* Length of pipe

A variety of different constants and expressions have been presented for the Spitzglass (low pressure drop) formula. Here, the form as in [1]_ is used but with a more precise metric conversion from inches to m.

.. math::
    Q = 125.1060 E \left(\frac{T_s}{P_s}\right)\left[\frac{2(P_1 -P_2)(P_s+1210)}{L \cdot {SG} \cdot T_{avg}Z_{avg} (1 + 0.09144/D + \frac{150}{127}D)}\right]^{0.5}D^{2.5}

Parameters
----------
SG : float
    Specific gravity of fluid with respect to air at the reference temperature and pressure `Ts` and `Ps`, [-]
Tavg : float
    Average temperature of the fluid in the pipeline, [K]
L : float, optional
    Length of pipe, [m]
D : float, optional
    Diameter of pipe, [m]
P1 : float, optional
    Inlet pressure to pipe, [Pa]
P2 : float, optional
    Outlet pressure from pipe, [Pa]
Q : float, optional
    Flow rate of gas through pipe, [m^3/s]
Ts : float, optional
    Reference temperature for the specific gravity of the gas, [K]
Ps : float, optional
    Reference pressure for the specific gravity of the gas, [Pa]
Zavg : float, optional
    Average compressibility factor for gas, [-]
E : float, optional
    Pipeline efficiency, a correction factor between 0 and 1

Returns
-------
Q, P1, P2, D, or L : float
    The missing input which was solved for [base SI]

Notes
-----
This equation is often presented without any correction for reference conditions for specific gravity.

This model is also presented in [2]_ with a leading constant of 5.69E-2, the same exponents as used here, units of mm (diameter), kPa, km (length), and flow in m^3/hour. However, it is believed to contain a typo, and gives results <1/3 of the correct values. It is also present in [2]_ in imperial form; this is believed correct, but makes a slight assumption not done in [1]_.

This model is present in [3]_ without reference corrections. The 1210 constant in [1]_ is an approximation necessary for the reference correction to function without a square of the pressure difference. The GPSA version is as follows, and matches this formulation very closely:

.. math::
    Q = 0.821 \left[\frac{(P_1-P_2)D^5}{L \cdot {SG} (1 + 91.44/D + 0.0018D)}\right]^{0.5}

The model is also shown in [4]_, with diameter in inches, length in feet, flow in MMSCFD, pressure drop in inH2O, and a rounded leading constant of 0.09; this makes its predictions several percent higher than the model here.

Examples
--------
>>> Spitzglass_low(D=0.154051, P1=6720.3199, P2=0, L=54.864, SG=0.6, Tavg=288.7)
0.9488775242530617

References
----------
.. [1] Coelho, Paulo M., and Carlos Pinho. "Considerations about Equations for Steady State Flow in Natural Gas Pipelines." Journal of the Brazilian Society of Mechanical Sciences and Engineering 29, no. 3 (September 2007): 262-73. doi:10.1590/S1678-58782007000300005.
.. [2] Menon, E. Shashi. Gas Pipeline Hydraulics. 1st edition. Boca Raton, FL: CRC Press, 2005.
.. [3] GPSA. GPSA Engineering Data Book. 13th edition. Gas Processors Suppliers Association, Tulsa, OK, 2012.
.. [4] PetroWiki. "Pressure Drop Evaluation along Pipelines" Accessed September 11, 2016. http://petrowiki.org/Pressure_drop_evaluation_along_pipelines#Spitzglass_equation_2.
### Response:
def Spitzglass_low(SG, Tavg, L=None, D=None, P1=None, P2=None, Q=None, Ts=288.7, Ps=101325., Zavg=1, E=1.):
    r'''Calculation function for dealing with flow of a compressible gas in a pipeline with the Spitzglass (low pressure drop) formula. Can calculate any of the following, given all other inputs:

    * Flow rate
    * Upstream pressure
    * Downstream pressure
    * Diameter of pipe (numerical solution)
    * Length of pipe

    A variety of different constants and expressions have been presented for the Spitzglass (low pressure drop) formula. Here, the form as in [1]_ is used but with a more precise metric conversion from inches to m.

    .. math::
        Q = 125.1060 E \left(\frac{T_s}{P_s}\right)\left[\frac{2(P_1 -P_2)(P_s+1210)}{L \cdot {SG} \cdot T_{avg}Z_{avg} (1 + 0.09144/D + \frac{150}{127}D)}\right]^{0.5}D^{2.5}

    Parameters
    ----------
    SG : float
        Specific gravity of fluid with respect to air at the reference temperature and pressure `Ts` and `Ps`, [-]
    Tavg : float
        Average temperature of the fluid in the pipeline, [K]
    L : float, optional
        Length of pipe, [m]
    D : float, optional
        Diameter of pipe, [m]
    P1 : float, optional
        Inlet pressure to pipe, [Pa]
    P2 : float, optional
        Outlet pressure from pipe, [Pa]
    Q : float, optional
        Flow rate of gas through pipe, [m^3/s]
    Ts : float, optional
        Reference temperature for the specific gravity of the gas, [K]
    Ps : float, optional
        Reference pressure for the specific gravity of the gas, [Pa]
    Zavg : float, optional
        Average compressibility factor for gas, [-]
    E : float, optional
        Pipeline efficiency, a correction factor between 0 and 1

    Returns
    -------
    Q, P1, P2, D, or L : float
        The missing input which was solved for [base SI]

    Notes
    -----
    This equation is often presented without any correction for reference conditions for specific gravity.

    This model is also presented in [2]_ with a leading constant of 5.69E-2, the same exponents as used here, units of mm (diameter), kPa, km (length), and flow in m^3/hour. However, it is believed to contain a typo, and gives results <1/3 of the correct values. It is also present in [2]_ in imperial form; this is believed correct, but makes a slight assumption not done in [1]_.

    This model is present in [3]_ without reference corrections. The 1210 constant in [1]_ is an approximation necessary for the reference correction to function without a square of the pressure difference. The GPSA version is as follows, and matches this formulation very closely:

    .. math::
        Q = 0.821 \left[\frac{(P_1-P_2)D^5}{L \cdot {SG} (1 + 91.44/D + 0.0018D)}\right]^{0.5}

    The model is also shown in [4]_, with diameter in inches, length in feet, flow in MMSCFD, pressure drop in inH2O, and a rounded leading constant of 0.09; this makes its predictions several percent higher than the model here.

    Examples
    --------
    >>> Spitzglass_low(D=0.154051, P1=6720.3199, P2=0, L=54.864, SG=0.6, Tavg=288.7)
    0.9488775242530617

    References
    ----------
    .. [1] Coelho, Paulo M., and Carlos Pinho. "Considerations about Equations for Steady State Flow in Natural Gas Pipelines." Journal of the Brazilian Society of Mechanical Sciences and Engineering 29, no. 3 (September 2007): 262-73. doi:10.1590/S1678-58782007000300005.
    .. [2] Menon, E. Shashi. Gas Pipeline Hydraulics. 1st edition. Boca Raton, FL: CRC Press, 2005.
    .. [3] GPSA. GPSA Engineering Data Book. 13th edition. Gas Processors Suppliers Association, Tulsa, OK, 2012.
    .. [4] PetroWiki. "Pressure Drop Evaluation along Pipelines" Accessed September 11, 2016. http://petrowiki.org/Pressure_drop_evaluation_along_pipelines#Spitzglass_equation_2.
    '''
    c3 = 1.181102362204724409448818897637795275591 # 0.03/inch or 150/127
    c4 = 0.09144
    c5 = 125.1060
    if Q is None and (None not in [L, D, P1, P2]):
        return c5*Ts/Ps*D**2.5*E*(((P1-P2)*2*(Ps+1210.))/(L*SG*Tavg*Zavg*(1 + c4/D + c3*D)))**0.5
    elif D is None and (None not in [L, Q, P1, P2]):
        to_solve = lambda D : Q - Spitzglass_low(SG=SG, Tavg=Tavg, L=L, D=D, P1=P1, P2=P2, Ts=Ts, Ps=Ps, Zavg=Zavg, E=E)
        return newton(to_solve, 0.5)
    elif P1 is None and (None not in [L, Q, D, P2]):
        return 0.5*(2.0*D**6*E**2*P2*Ts**2*c5**2*(Ps + 1210.0) + D**2*L*Ps**2*Q**2*SG*Tavg*Zavg*c3 + D*L*Ps**2*Q**2*SG*Tavg*Zavg + L*Ps**2*Q**2*SG*Tavg*Zavg*c4)/(D**6*E**2*Ts**2*c5**2*(Ps + 1210.0))
    elif P2 is None and (None not in [L, Q, D, P1]):
        return 0.5*(2.0*D**6*E**2*P1*Ts**2*c5**2*(Ps + 1210.0) - D**2*L*Ps**2*Q**2*SG*Tavg*Zavg*c3 - D*L*Ps**2*Q**2*SG*Tavg*Zavg - L*Ps**2*Q**2*SG*Tavg*Zavg*c4)/(D**6*E**2*Ts**2*c5**2*(Ps + 1210.0))
    elif L is None and (None not in [P2, Q, D, P1]):
        return 2.0*D**6*E**2*Ts**2*c5**2*(P1*Ps + 1210.0*P1 - P2*Ps - 1210.0*P2)/(Ps**2*Q**2*SG*Tavg*Zavg*(D**2*c3 + D + c4))
    else:
        raise Exception('This function solves for either flow, upstream '
                        'pressure, downstream pressure, diameter, or length; '
                        'all other inputs must be provided.')
def page_should_not_contain_element(self, locator, loglevel='INFO'):
    """Verifies that current page not contains `locator` element.

    If this keyword fails, it automatically logs the page source
    using the log level specified with the optional `loglevel` argument.
    Giving `NONE` as level disables logging.
    """
    if self._is_element_present(locator):
        self.log_source(loglevel)
        raise AssertionError("Page should not have contained element '%s'" % locator)
    self._info("Current page not contains element '%s'." % locator)
Verifies that current page not contains `locator` element. If this keyword fails, it automatically logs the page source using the log level specified with the optional `loglevel` argument. Giving `NONE` as level disables logging.
Below is the instruction that describes the task:
### Input:
Verifies that current page not contains `locator` element.

If this keyword fails, it automatically logs the page source using the log level specified with the optional `loglevel` argument. Giving `NONE` as level disables logging.
### Response:
def page_should_not_contain_element(self, locator, loglevel='INFO'):
    """Verifies that current page not contains `locator` element.

    If this keyword fails, it automatically logs the page source
    using the log level specified with the optional `loglevel` argument.
    Giving `NONE` as level disables logging.
    """
    if self._is_element_present(locator):
        self.log_source(loglevel)
        raise AssertionError("Page should not have contained element '%s'" % locator)
    self._info("Current page not contains element '%s'." % locator)
def make_editions_dict(editions):
    """Take a reporter editions dict and flatten it, returning a dict for use
    in the DictWriter.
    """
    d = {}
    nums = ['1', '2', '3', '4', '5', '6']
    num_counter = 0
    for k, date_dict in editions.items():
        d['edition%s' % nums[num_counter]] = k
        if date_dict['start'] is not None:
            d['start_e%s' % nums[num_counter]] = date_dict['start'].isoformat()
        if date_dict['end'] is not None:
            d['end_e%s' % nums[num_counter]] = date_dict['end'].isoformat()
        num_counter += 1
    return d
Take a reporter editions dict and flatten it, returning a dict for use in the DictWriter.
Below is the instruction that describes the task:
### Input:
Take a reporter editions dict and flatten it, returning a dict for use in the DictWriter.
### Response:
def make_editions_dict(editions):
    """Take a reporter editions dict and flatten it, returning a dict for use
    in the DictWriter.
    """
    d = {}
    nums = ['1', '2', '3', '4', '5', '6']
    num_counter = 0
    for k, date_dict in editions.items():
        d['edition%s' % nums[num_counter]] = k
        if date_dict['start'] is not None:
            d['start_e%s' % nums[num_counter]] = date_dict['start'].isoformat()
        if date_dict['end'] is not None:
            d['end_e%s' % nums[num_counter]] = date_dict['end'].isoformat()
        num_counter += 1
    return d
def get_first(self, filter=None, order_by=None, group_by=[], query_parameters=None, commit=False, async=False, callback=None):
    """ Fetch object and directly return the first one

    Note:
        `get_first` won't put the fetched object in the parent's children list.
        You cannot override this behavior. If you want to commit it in the parent
        you can use :method:vsdk.NURESTFetcher.fetch or manually add it with
        :method:vsdk.NURESTObject.add_child

    Args:
        filter (string): string that represents a predicate filter
        order_by (string): string that represents an order by clause
        group_by (string): list of names for grouping
        page (int): number of the page to load
        page_size (int): number of results per page
        commit (bool): boolean to update current object
        callback (function): Callback that should be called in case of a async request

    Returns:
        vsdk.NURESTObject: the first object if any, or None

    Example:
        >>> print entity.children.get_first(filter="name == 'My Entity'")
        <NUChildren at xxx>
    """
    objects = self.get(filter=filter, order_by=order_by, group_by=group_by, page=0, page_size=1, query_parameters=query_parameters, commit=commit)
    return objects[0] if len(objects) else None
Fetch object and directly return the first one

Note:
    `get_first` won't put the fetched object in the parent's children list.
    You cannot override this behavior. If you want to commit it in the parent
    you can use :method:vsdk.NURESTFetcher.fetch or manually add it with
    :method:vsdk.NURESTObject.add_child

Args:
    filter (string): string that represents a predicate filter
    order_by (string): string that represents an order by clause
    group_by (string): list of names for grouping
    page (int): number of the page to load
    page_size (int): number of results per page
    commit (bool): boolean to update current object
    callback (function): Callback that should be called in case of a async request

Returns:
    vsdk.NURESTObject: the first object if any, or None

Example:
    >>> print entity.children.get_first(filter="name == 'My Entity'")
    <NUChildren at xxx>
Below is the instruction that describes the task:
### Input:
Fetch object and directly return the first one

Note:
    `get_first` won't put the fetched object in the parent's children list.
    You cannot override this behavior. If you want to commit it in the parent
    you can use :method:vsdk.NURESTFetcher.fetch or manually add it with
    :method:vsdk.NURESTObject.add_child

Args:
    filter (string): string that represents a predicate filter
    order_by (string): string that represents an order by clause
    group_by (string): list of names for grouping
    page (int): number of the page to load
    page_size (int): number of results per page
    commit (bool): boolean to update current object
    callback (function): Callback that should be called in case of a async request

Returns:
    vsdk.NURESTObject: the first object if any, or None

Example:
    >>> print entity.children.get_first(filter="name == 'My Entity'")
    <NUChildren at xxx>
### Response:
def get_first(self, filter=None, order_by=None, group_by=[], query_parameters=None, commit=False, async=False, callback=None):
    """ Fetch object and directly return the first one

    Note:
        `get_first` won't put the fetched object in the parent's children list.
        You cannot override this behavior. If you want to commit it in the parent
        you can use :method:vsdk.NURESTFetcher.fetch or manually add it with
        :method:vsdk.NURESTObject.add_child

    Args:
        filter (string): string that represents a predicate filter
        order_by (string): string that represents an order by clause
        group_by (string): list of names for grouping
        page (int): number of the page to load
        page_size (int): number of results per page
        commit (bool): boolean to update current object
        callback (function): Callback that should be called in case of a async request

    Returns:
        vsdk.NURESTObject: the first object if any, or None

    Example:
        >>> print entity.children.get_first(filter="name == 'My Entity'")
        <NUChildren at xxx>
    """
    objects = self.get(filter=filter, order_by=order_by, group_by=group_by, page=0, page_size=1, query_parameters=query_parameters, commit=commit)
    return objects[0] if len(objects) else None
def _calcDistance(self, inputPattern, distanceNorm=None):
    """Calculate the distances from inputPattern to all stored patterns. All
    distances are between 0.0 and 1.0

    :param inputPattern The pattern from which distances to all other patterns
        are calculated
    :param distanceNorm Degree of the distance norm
    """
    if distanceNorm is None:
        distanceNorm = self.distanceNorm

    # Sparse memory
    if self.useSparseMemory:
        if self._protoSizes is None:
            self._protoSizes = self._Memory.rowSums()
        overlapsWithProtos = self._Memory.rightVecSumAtNZ(inputPattern)
        inputPatternSum = inputPattern.sum()

        if self.distanceMethod == "rawOverlap":
            dist = inputPattern.sum() - overlapsWithProtos
        elif self.distanceMethod == "pctOverlapOfInput":
            dist = inputPatternSum - overlapsWithProtos
            if inputPatternSum > 0:
                dist /= inputPatternSum
        elif self.distanceMethod == "pctOverlapOfProto":
            overlapsWithProtos /= self._protoSizes
            dist = 1.0 - overlapsWithProtos
        elif self.distanceMethod == "pctOverlapOfLarger":
            maxVal = numpy.maximum(self._protoSizes, inputPatternSum)
            if maxVal.all() > 0:
                overlapsWithProtos /= maxVal
            dist = 1.0 - overlapsWithProtos
        elif self.distanceMethod == "norm":
            dist = self._Memory.vecLpDist(self.distanceNorm, inputPattern)
            distMax = dist.max()
            if distMax > 0:
                dist /= distMax
        else:
            raise RuntimeError("Unimplemented distance method %s" % self.distanceMethod)

    # Dense memory
    else:
        if self.distanceMethod == "norm":
            dist = numpy.power(numpy.abs(self._M - inputPattern), self.distanceNorm)
            dist = dist.sum(1)
            dist = numpy.power(dist, 1.0/self.distanceNorm)
            dist /= dist.max()
        else:
            raise RuntimeError ("Not implemented yet for dense storage....")

    return dist
Calculate the distances from inputPattern to all stored patterns. All distances are between 0.0 and 1.0

:param inputPattern The pattern from which distances to all other patterns are calculated
:param distanceNorm Degree of the distance norm
Below is the instruction that describes the task:
### Input:
Calculate the distances from inputPattern to all stored patterns. All distances are between 0.0 and 1.0

:param inputPattern The pattern from which distances to all other patterns are calculated
:param distanceNorm Degree of the distance norm
### Response:
def _calcDistance(self, inputPattern, distanceNorm=None):
    """Calculate the distances from inputPattern to all stored patterns. All
    distances are between 0.0 and 1.0

    :param inputPattern The pattern from which distances to all other patterns
        are calculated
    :param distanceNorm Degree of the distance norm
    """
    if distanceNorm is None:
        distanceNorm = self.distanceNorm

    # Sparse memory
    if self.useSparseMemory:
        if self._protoSizes is None:
            self._protoSizes = self._Memory.rowSums()
        overlapsWithProtos = self._Memory.rightVecSumAtNZ(inputPattern)
        inputPatternSum = inputPattern.sum()

        if self.distanceMethod == "rawOverlap":
            dist = inputPattern.sum() - overlapsWithProtos
        elif self.distanceMethod == "pctOverlapOfInput":
            dist = inputPatternSum - overlapsWithProtos
            if inputPatternSum > 0:
                dist /= inputPatternSum
        elif self.distanceMethod == "pctOverlapOfProto":
            overlapsWithProtos /= self._protoSizes
            dist = 1.0 - overlapsWithProtos
        elif self.distanceMethod == "pctOverlapOfLarger":
            maxVal = numpy.maximum(self._protoSizes, inputPatternSum)
            if maxVal.all() > 0:
                overlapsWithProtos /= maxVal
            dist = 1.0 - overlapsWithProtos
        elif self.distanceMethod == "norm":
            dist = self._Memory.vecLpDist(self.distanceNorm, inputPattern)
            distMax = dist.max()
            if distMax > 0:
                dist /= distMax
        else:
            raise RuntimeError("Unimplemented distance method %s" % self.distanceMethod)

    # Dense memory
    else:
        if self.distanceMethod == "norm":
            dist = numpy.power(numpy.abs(self._M - inputPattern), self.distanceNorm)
            dist = dist.sum(1)
            dist = numpy.power(dist, 1.0/self.distanceNorm)
            dist /= dist.max()
        else:
            raise RuntimeError ("Not implemented yet for dense storage....")

    return dist
def findattr(self, name):
    """Search the vgroup for a given attribute.

    Args::

        name    attribute name

    Returns::

        if found, VGAttr instance describing the attribute
        None otherwise

    C library equivalent : Vfindattr
    """
    try:
        att = self.attr(name)
        if att._index is None:
            att = None
    except HDF4Error:
        att = None
    return att
Search the vgroup for a given attribute.

Args::

    name    attribute name

Returns::

    if found, VGAttr instance describing the attribute
    None otherwise

C library equivalent : Vfindattr
Below is the instruction that describes the task:
### Input:
Search the vgroup for a given attribute.

Args::

    name    attribute name

Returns::

    if found, VGAttr instance describing the attribute
    None otherwise

C library equivalent : Vfindattr
### Response:
def findattr(self, name):
    """Search the vgroup for a given attribute.

    Args::

        name    attribute name

    Returns::

        if found, VGAttr instance describing the attribute
        None otherwise

    C library equivalent : Vfindattr
    """
    try:
        att = self.attr(name)
        if att._index is None:
            att = None
    except HDF4Error:
        att = None
    return att
def _create_file_if_needed(self):
    """Create an empty file if necessary.

    This method will not initialize the file. Instead it implements a
    simple version of "touch" to ensure the file has been created.
    """
    if not os.path.exists(self._filename):
        old_umask = os.umask(0o177)
        try:
            open(self._filename, 'a+b').close()
        finally:
            os.umask(old_umask)
Create an empty file if necessary. This method will not initialize the file. Instead it implements a simple version of "touch" to ensure the file has been created.
Below is the instruction that describes the task:
### Input:
Create an empty file if necessary.

This method will not initialize the file. Instead it implements a simple version of "touch" to ensure the file has been created.
### Response:
def _create_file_if_needed(self):
    """Create an empty file if necessary.

    This method will not initialize the file. Instead it implements a
    simple version of "touch" to ensure the file has been created.
    """
    if not os.path.exists(self._filename):
        old_umask = os.umask(0o177)
        try:
            open(self._filename, 'a+b').close()
        finally:
            os.umask(old_umask)
def create_storage(kwargs=None, conn=None, call=None):
    '''
    .. versionadded:: 2015.8.0

    Create a new storage account

    CLI Example:

    .. code-block:: bash

        salt-cloud -f create_storage my-azure name=my_storage label=my_storage location='West US'
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The show_storage function must be called with -f or --function.'
        )

    if kwargs is None:
        kwargs = {}

    if not conn:
        conn = get_conn()

    if 'name' not in kwargs:
        raise SaltCloudSystemExit('A name must be specified as "name"')

    if 'description' not in kwargs:
        raise SaltCloudSystemExit('A description must be specified as "description"')

    if 'label' not in kwargs:
        raise SaltCloudSystemExit('A label must be specified as "label"')

    if 'location' not in kwargs and 'affinity_group' not in kwargs:
        raise SaltCloudSystemExit('Either a location or an affinity_group '
                                  'must be specified (but not both)')

    try:
        data = conn.create_storage_account(
            service_name=kwargs['name'],
            label=kwargs['label'],
            description=kwargs.get('description', None),
            location=kwargs.get('location', None),
            affinity_group=kwargs.get('affinity_group', None),
            extended_properties=kwargs.get('extended_properties', None),
            geo_replication_enabled=kwargs.get('geo_replication_enabled', None),
            account_type=kwargs.get('account_type', 'Standard_GRS'),
        )
        return {'Success': 'The storage account was successfully created'}
    except AzureConflictHttpError:
        raise SaltCloudSystemExit('There was a conflict. This usually means that the storage account already exists.')
.. versionadded:: 2015.8.0 Create a new storage account CLI Example: .. code-block:: bash salt-cloud -f create_storage my-azure name=my_storage label=my_storage location='West US'
Below is the instruction that describes the task: ### Input: .. versionadded:: 2015.8.0 Create a new storage account CLI Example: .. code-block:: bash salt-cloud -f create_storage my-azure name=my_storage label=my_storage location='West US' ### Response: def create_storage(kwargs=None, conn=None, call=None): ''' .. versionadded:: 2015.8.0 Create a new storage account CLI Example: .. code-block:: bash salt-cloud -f create_storage my-azure name=my_storage label=my_storage location='West US' ''' if call != 'function': raise SaltCloudSystemExit( 'The show_storage function must be called with -f or --function.' ) if kwargs is None: kwargs = {} if not conn: conn = get_conn() if 'name' not in kwargs: raise SaltCloudSystemExit('A name must be specified as "name"') if 'description' not in kwargs: raise SaltCloudSystemExit('A description must be specified as "description"') if 'label' not in kwargs: raise SaltCloudSystemExit('A label must be specified as "label"') if 'location' not in kwargs and 'affinity_group' not in kwargs: raise SaltCloudSystemExit('Either a location or an affinity_group ' 'must be specified (but not both)') try: data = conn.create_storage_account( service_name=kwargs['name'], label=kwargs['label'], description=kwargs.get('description', None), location=kwargs.get('location', None), affinity_group=kwargs.get('affinity_group', None), extended_properties=kwargs.get('extended_properties', None), geo_replication_enabled=kwargs.get('geo_replication_enabled', None), account_type=kwargs.get('account_type', 'Standard_GRS'), ) return {'Success': 'The storage account was successfully created'} except AzureConflictHttpError: raise SaltCloudSystemExit('There was a conflict. This usually means that the storage account already exists.')
def getDevices(self, status=None): """ The devices in the given state, or all devices if the arg is None. :param status: the state to match against. :return: the devices """ return [d for d in self.devices.values() if status is None or d.payload.get('status') == status]
The devices in the given state, or all devices if the arg is None. :param status: the state to match against. :return: the devices
Below is the instruction that describes the task: ### Input: The devices in the given state, or all devices if the arg is None. :param status: the state to match against. :return: the devices ### Response: def getDevices(self, status=None): """ The devices in the given state, or all devices if the arg is None. :param status: the state to match against. :return: the devices """ return [d for d in self.devices.values() if status is None or d.payload.get('status') == status]
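The filter above can be sketched without the surrounding class by operating on plain dicts instead of objects with a `payload` attribute; the sample device data below is invented for the example.

```python
def get_devices(devices, status=None):
    """Return devices matching `status`, or all devices when status is None."""
    return [d for d in devices.values()
            if status is None or d.get('status') == status]

# hypothetical device table keyed by id
devices = {
    'a': {'status': 'online'},
    'b': {'status': 'offline'},
    'c': {'status': 'online'},
}
```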
def drop_all_tables(self): """Drop all document collections of the database. .. warning:: ALL DATA WILL BE LOST. Use only for automated testing. """ # Retrieve database name from application config app = self.db.app mongo_settings = app.config['MONGODB_SETTINGS'] database_name = mongo_settings['db'] # Flask-MongoEngine is built on MongoEngine, which is built on PyMongo. # To drop database collections, we need to access the PyMongo Database object, # which is stored in the PyMongo MongoClient object, # which is stored in app.extensions['mongoengine'][self]['conn'] py_mongo_mongo_client = app.extensions['mongoengine'][self.db]['conn'] py_mongo_database = py_mongo_mongo_client[database_name] # Use the PyMongo Database object for collection_name in py_mongo_database.collection_names(): py_mongo_database.drop_collection(collection_name)
Drop all document collections of the database. .. warning:: ALL DATA WILL BE LOST. Use only for automated testing.
Below is the instruction that describes the task: ### Input: Drop all document collections of the database. .. warning:: ALL DATA WILL BE LOST. Use only for automated testing. ### Response: def drop_all_tables(self): """Drop all document collections of the database. .. warning:: ALL DATA WILL BE LOST. Use only for automated testing. """ # Retrieve database name from application config app = self.db.app mongo_settings = app.config['MONGODB_SETTINGS'] database_name = mongo_settings['db'] # Flask-MongoEngine is built on MongoEngine, which is built on PyMongo. # To drop database collections, we need to access the PyMongo Database object, # which is stored in the PyMongo MongoClient object, # which is stored in app.extensions['mongoengine'][self]['conn'] py_mongo_mongo_client = app.extensions['mongoengine'][self.db]['conn'] py_mongo_database = py_mongo_mongo_client[database_name] # Use the PyMongo Database object for collection_name in py_mongo_database.collection_names(): py_mongo_database.drop_collection(collection_name)
def get_argument_from_call( call_node: astroid.Call, position: int = None, keyword: str = None ) -> astroid.Name: """Returns the specified argument from a function call. :param astroid.Call call_node: Node representing a function call to check. :param int position: position of the argument. :param str keyword: the keyword of the argument. :returns: The node representing the argument, None if the argument is not found. :rtype: astroid.Name :raises ValueError: if both position and keyword are None. :raises NoSuchArgumentError: if no argument at the provided position or with the provided keyword. """ if position is None and keyword is None: raise ValueError("Must specify at least one of: position or keyword.") if position is not None: try: return call_node.args[position] except IndexError: pass if keyword and call_node.keywords: for arg in call_node.keywords: if arg.arg == keyword: return arg.value raise NoSuchArgumentError
Returns the specified argument from a function call. :param astroid.Call call_node: Node representing a function call to check. :param int position: position of the argument. :param str keyword: the keyword of the argument. :returns: The node representing the argument, None if the argument is not found. :rtype: astroid.Name :raises ValueError: if both position and keyword are None. :raises NoSuchArgumentError: if no argument at the provided position or with the provided keyword.
Below is the instruction that describes the task: ### Input: Returns the specified argument from a function call. :param astroid.Call call_node: Node representing a function call to check. :param int position: position of the argument. :param str keyword: the keyword of the argument. :returns: The node representing the argument, None if the argument is not found. :rtype: astroid.Name :raises ValueError: if both position and keyword are None. :raises NoSuchArgumentError: if no argument at the provided position or with the provided keyword. ### Response: def get_argument_from_call( call_node: astroid.Call, position: int = None, keyword: str = None ) -> astroid.Name: """Returns the specified argument from a function call. :param astroid.Call call_node: Node representing a function call to check. :param int position: position of the argument. :param str keyword: the keyword of the argument. :returns: The node representing the argument, None if the argument is not found. :rtype: astroid.Name :raises ValueError: if both position and keyword are None. :raises NoSuchArgumentError: if no argument at the provided position or with the provided keyword. """ if position is None and keyword is None: raise ValueError("Must specify at least one of: position or keyword.") if position is not None: try: return call_node.args[position] except IndexError: pass if keyword and call_node.keywords: for arg in call_node.keywords: if arg.arg == keyword: return arg.value raise NoSuchArgumentError
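The same positional/keyword lookup can be demonstrated with the standard-library `ast` module in place of astroid, since both expose `args` and `keywords` on call nodes; `NoSuchArgumentError` is re-declared here so the snippet is self-contained.

```python
import ast

class NoSuchArgumentError(Exception):
    pass

def get_argument_from_call(call_node, position=None, keyword=None):
    """Return the ast node for an argument of a call, by position or keyword."""
    if position is None and keyword is None:
        raise ValueError("Must specify at least one of: position or keyword.")
    if position is not None:
        try:
            return call_node.args[position]
        except IndexError:
            pass
    if keyword and call_node.keywords:
        for arg in call_node.keywords:
            if arg.arg == keyword:
                return arg.value
    raise NoSuchArgumentError

# parse a call expression and look up its arguments
call = ast.parse("f(1, x=2)").body[0].value  # the ast.Call node
first = get_argument_from_call(call, position=0)
named = get_argument_from_call(call, keyword="x")
```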
def _sanity_check_args(args): """Ensure dependent arguments are correctly specified """ if "scheduler" in args and "queue" in args: if args.scheduler and not args.queue: if args.scheduler != "sge": return "IPython parallel scheduler (-s) specified. This also requires a queue (-q)." elif args.queue and not args.scheduler: return "IPython parallel queue (-q) supplied. This also requires a scheduler (-s)." elif args.paralleltype == "ipython" and (not args.queue or not args.scheduler): return "IPython parallel requires queue (-q) and scheduler (-s) arguments."
Ensure dependent arguments are correctly specified
Below is the instruction that describes the task: ### Input: Ensure dependent arguments are correctly specified ### Response: def _sanity_check_args(args): """Ensure dependent arguments are correctly specified """ if "scheduler" in args and "queue" in args: if args.scheduler and not args.queue: if args.scheduler != "sge": return "IPython parallel scheduler (-s) specified. This also requires a queue (-q)." elif args.queue and not args.scheduler: return "IPython parallel queue (-q) supplied. This also requires a scheduler (-s)." elif args.paralleltype == "ipython" and (not args.queue or not args.scheduler): return "IPython parallel requires queue (-q) and scheduler (-s) arguments."
def export(path=None, user_content=False, context=None, username=None, password=None, render_offline=False, render_wide=False, render_inline=True, out_filename=None, api_url=None, title=None, quiet=False, grip_class=None): """ Exports the rendered HTML to a file. """ export_to_stdout = out_filename == '-' if out_filename is None: if path == '-': export_to_stdout = True else: filetitle, _ = os.path.splitext( os.path.relpath(DirectoryReader(path).root_filename)) out_filename = '{0}.html'.format(filetitle) if not export_to_stdout and not quiet: print('Exporting to', out_filename, file=sys.stderr) page = render_page(path, user_content, context, username, password, render_offline, render_wide, render_inline, api_url, title, None, quiet, grip_class) if export_to_stdout: try: print(page) except IOError as ex: if ex.errno != 0 and ex.errno != errno.EPIPE: raise else: with io.open(out_filename, 'w', encoding='utf-8') as f: f.write(page)
Exports the rendered HTML to a file.
Below is the instruction that describes the task: ### Input: Exports the rendered HTML to a file. ### Response: def export(path=None, user_content=False, context=None, username=None, password=None, render_offline=False, render_wide=False, render_inline=True, out_filename=None, api_url=None, title=None, quiet=False, grip_class=None): """ Exports the rendered HTML to a file. """ export_to_stdout = out_filename == '-' if out_filename is None: if path == '-': export_to_stdout = True else: filetitle, _ = os.path.splitext( os.path.relpath(DirectoryReader(path).root_filename)) out_filename = '{0}.html'.format(filetitle) if not export_to_stdout and not quiet: print('Exporting to', out_filename, file=sys.stderr) page = render_page(path, user_content, context, username, password, render_offline, render_wide, render_inline, api_url, title, None, quiet, grip_class) if export_to_stdout: try: print(page) except IOError as ex: if ex.errno != 0 and ex.errno != errno.EPIPE: raise else: with io.open(out_filename, 'w', encoding='utf-8') as f: f.write(page)
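The default-output-name logic in `export` (strip the source extension, append `.html`) can be isolated; this simplified helper takes the source path directly rather than going through `DirectoryReader`.

```python
import os

def default_out_filename(root_filename):
    """Derive an output name such as 'README.html' from a source path."""
    filetitle, _ = os.path.splitext(os.path.relpath(root_filename))
    return '{0}.html'.format(filetitle)
```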
def median_min_distance(data, metric): """This function computes a graph of nearest-neighbors for each sample point in 'data' and returns the median of the distribution of distances between those nearest-neighbors, the distance metric being specified by 'metric'. Parameters ---------- data : array of shape (n_samples, n_features) The data-set, a fraction of whose sample points will be extracted by density sampling. metric : string The distance metric used to determine the nearest-neighbor to each data-point. The DistanceMetric class defined in scikit-learn's library lists all available metrics. Returns ------- median_min_dist : float The median of the distribution of distances between nearest-neighbors. """ data = np.atleast_2d(data) nearest_distances = kneighbors_graph(data, 1, mode = 'distance', metric = metric, include_self = False).data median_min_dist = np.median(nearest_distances, overwrite_input = True) return round(median_min_dist, 4)
This function computes a graph of nearest-neighbors for each sample point in 'data' and returns the median of the distribution of distances between those nearest-neighbors, the distance metric being specified by 'metric'. Parameters ---------- data : array of shape (n_samples, n_features) The data-set, a fraction of whose sample points will be extracted by density sampling. metric : string The distance metric used to determine the nearest-neighbor to each data-point. The DistanceMetric class defined in scikit-learn's library lists all available metrics. Returns ------- median_min_dist : float The median of the distribution of distances between nearest-neighbors.
Below is the instruction that describes the task: ### Input: This function computes a graph of nearest-neighbors for each sample point in 'data' and returns the median of the distribution of distances between those nearest-neighbors, the distance metric being specified by 'metric'. Parameters ---------- data : array of shape (n_samples, n_features) The data-set, a fraction of whose sample points will be extracted by density sampling. metric : string The distance metric used to determine the nearest-neighbor to each data-point. The DistanceMetric class defined in scikit-learn's library lists all available metrics. Returns ------- median_min_dist : float The median of the distribution of distances between nearest-neighbors. ### Response: def median_min_distance(data, metric): """This function computes a graph of nearest-neighbors for each sample point in 'data' and returns the median of the distribution of distances between those nearest-neighbors, the distance metric being specified by 'metric'. Parameters ---------- data : array of shape (n_samples, n_features) The data-set, a fraction of whose sample points will be extracted by density sampling. metric : string The distance metric used to determine the nearest-neighbor to each data-point. The DistanceMetric class defined in scikit-learn's library lists all available metrics. Returns ------- median_min_dist : float The median of the distribution of distances between nearest-neighbors. """ data = np.atleast_2d(data) nearest_distances = kneighbors_graph(data, 1, mode = 'distance', metric = metric, include_self = False).data median_min_dist = np.median(nearest_distances, overwrite_input = True) return round(median_min_dist, 4)
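The statistic itself is easy to sketch without scikit-learn for the Euclidean metric: the original builds a sparse 1-nearest-neighbor graph, whereas this illustrative version uses a plain O(n²) loop, so it is only suitable for small inputs.

```python
import math
import statistics

def median_min_distance_euclidean(data):
    """Median of each point's Euclidean distance to its nearest neighbour."""
    nearest = []
    for i, p in enumerate(data):
        # distance to the closest *other* point
        nearest.append(min(math.dist(p, q)
                           for j, q in enumerate(data) if j != i))
    return round(statistics.median(nearest), 4)

points = [(0.0, 0.0), (0.0, 1.0), (0.0, 3.0)]
```

For `points`, the nearest-neighbour distances are 1.0, 1.0 and 2.0, so the median is 1.0.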
def register_name(self, username): """ register a name """ if self.is_username_used(username): raise UsernameInUseException('Username {username} already in use!'.format(username=username)) self.registered_names.append(username)
register a name
Below is the instruction that describes the task: ### Input: register a name ### Response: def register_name(self, username): """ register a name """ if self.is_username_used(username): raise UsernameInUseException('Username {username} already in use!'.format(username=username)) self.registered_names.append(username)
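A self-contained version of the registration guard; the surrounding class, its `registered_names` list, and `is_username_used` are reconstructed minimally here since the record only shows the one method.

```python
class UsernameInUseException(Exception):
    pass

class NameRegistry:
    def __init__(self):
        self.registered_names = []

    def is_username_used(self, username):
        return username in self.registered_names

    def register_name(self, username):
        """register a name"""
        if self.is_username_used(username):
            raise UsernameInUseException(
                'Username {username} already in use!'.format(username=username))
        self.registered_names.append(username)

registry = NameRegistry()
registry.register_name('alice')
```

A second `register_name('alice')` call would raise `UsernameInUseException`.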
def convert_softmax(builder, layer, input_names, output_names, keras_layer): """Convert a softmax layer from keras to coreml. Parameters ---------- keras_layer: layer A keras layer object. builder: NeuralNetworkBuilder A neural network builder object. """ input_name, output_name = (input_names[0], output_names[0]) builder.add_softmax(name = layer, input_name = input_name, output_name = output_name)
Convert a softmax layer from keras to coreml. Parameters ---------- keras_layer: layer A keras layer object. builder: NeuralNetworkBuilder A neural network builder object.
Below is the instruction that describes the task: ### Input: Convert a softmax layer from keras to coreml. Parameters ---------- keras_layer: layer A keras layer object. builder: NeuralNetworkBuilder A neural network builder object. ### Response: def convert_softmax(builder, layer, input_names, output_names, keras_layer): """Convert a softmax layer from keras to coreml. Parameters ---------- keras_layer: layer A keras layer object. builder: NeuralNetworkBuilder A neural network builder object. """ input_name, output_name = (input_names[0], output_names[0]) builder.add_softmax(name = layer, input_name = input_name, output_name = output_name)
def to_map_with_default(value, default_value): """ Converts value into map object or returns default when conversion is not possible :param value: the value to convert. :param default_value: the default value. :return: map object or empty map when conversion is not supported. """ result = RecursiveMapConverter.to_nullable_map(value) return result if result != None else default_value
Converts value into map object or returns default when conversion is not possible :param value: the value to convert. :param default_value: the default value. :return: map object or empty map when conversion is not supported.
Below is the instruction that describes the task: ### Input: Converts value into map object or returns default when conversion is not possible :param value: the value to convert. :param default_value: the default value. :return: map object or empty map when conversion is not supported. ### Response: def to_map_with_default(value, default_value): """ Converts value into map object or returns default when conversion is not possible :param value: the value to convert. :param default_value: the default value. :return: map object or empty map when conversion is not supported. """ result = RecursiveMapConverter.to_nullable_map(value) return result if result != None else default_value
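The convert-or-default pattern, sketched with `json` standing in for `RecursiveMapConverter` (the real converter accepts far more input shapes than JSON strings, so this is only an illustration of the control flow, not of its behavior).

```python
import json

def to_map_with_default(value, default_value):
    """Parse value into a dict, or return default_value when that fails."""
    try:
        result = json.loads(value)
    except (TypeError, ValueError):
        return default_value
    # a successfully parsed non-dict (e.g. a JSON list) also falls back
    return result if isinstance(result, dict) else default_value
```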
def diff_move(self,v,new_comm): """ Calculate the difference in the quality function if node ``v`` is moved to community ``new_comm``. Parameters ---------- v The node to move. new_comm The community to move to. Returns ------- float Difference in quality function. Notes ----- The difference returned by diff_move should be equivalent to first determining the quality of the partition, then calling move_node, and then determining again the quality of the partition and looking at the difference. In other words >>> partition = louvain.find_partition(ig.Graph.Famous('Zachary'), ... louvain.ModularityVertexPartition) >>> diff = partition.diff_move(v=0, new_comm=0) >>> q1 = partition.quality() >>> partition.move_node(v=0, new_comm=0) >>> q2 = partition.quality() >>> round(diff, 10) == round(q2 - q1, 10) True .. warning:: Only derived classes provide actual implementations, the base class provides no implementation for this function. """ return _c_louvain._MutableVertexPartition_diff_move(self._partition, v, new_comm)
Calculate the difference in the quality function if node ``v`` is moved to community ``new_comm``. Parameters ---------- v The node to move. new_comm The community to move to. Returns ------- float Difference in quality function. Notes ----- The difference returned by diff_move should be equivalent to first determining the quality of the partition, then calling move_node, and then determining again the quality of the partition and looking at the difference. In other words >>> partition = louvain.find_partition(ig.Graph.Famous('Zachary'), ... louvain.ModularityVertexPartition) >>> diff = partition.diff_move(v=0, new_comm=0) >>> q1 = partition.quality() >>> partition.move_node(v=0, new_comm=0) >>> q2 = partition.quality() >>> round(diff, 10) == round(q2 - q1, 10) True .. warning:: Only derived classes provide actual implementations, the base class provides no implementation for this function.
Below is the instruction that describes the task: ### Input: Calculate the difference in the quality function if node ``v`` is moved to community ``new_comm``. Parameters ---------- v The node to move. new_comm The community to move to. Returns ------- float Difference in quality function. Notes ----- The difference returned by diff_move should be equivalent to first determining the quality of the partition, then calling move_node, and then determining again the quality of the partition and looking at the difference. In other words >>> partition = louvain.find_partition(ig.Graph.Famous('Zachary'), ... louvain.ModularityVertexPartition) >>> diff = partition.diff_move(v=0, new_comm=0) >>> q1 = partition.quality() >>> partition.move_node(v=0, new_comm=0) >>> q2 = partition.quality() >>> round(diff, 10) == round(q2 - q1, 10) True .. warning:: Only derived classes provide actual implementations, the base class provides no implementation for this function. ### Response: def diff_move(self,v,new_comm): """ Calculate the difference in the quality function if node ``v`` is moved to community ``new_comm``. Parameters ---------- v The node to move. new_comm The community to move to. Returns ------- float Difference in quality function. Notes ----- The difference returned by diff_move should be equivalent to first determining the quality of the partition, then calling move_node, and then determining again the quality of the partition and looking at the difference. In other words >>> partition = louvain.find_partition(ig.Graph.Famous('Zachary'), ... louvain.ModularityVertexPartition) >>> diff = partition.diff_move(v=0, new_comm=0) >>> q1 = partition.quality() >>> partition.move_node(v=0, new_comm=0) >>> q2 = partition.quality() >>> round(diff, 10) == round(q2 - q1, 10) True .. warning:: Only derived classes provide actual implementations, the base class provides no implementation for this function. """ return _c_louvain._MutableVertexPartition_diff_move(self._partition, v, new_comm)
def new_address(self, prefix, type, callback=None, errback=None, **kwargs): """ Create a new address space in this Network :param str prefix: The CIDR prefix of the address to add :param str type: planned, assignment, host :return: The newly created Address object """ if not self.data: raise NetworkException('Network not loaded') return Address(self.config, prefix, type, self).create(**kwargs)
Create a new address space in this Network :param str prefix: The CIDR prefix of the address to add :param str type: planned, assignment, host :return: The newly created Address object
Below is the instruction that describes the task: ### Input: Create a new address space in this Network :param str prefix: The CIDR prefix of the address to add :param str type: planned, assignment, host :return: The newly created Address object ### Response: def new_address(self, prefix, type, callback=None, errback=None, **kwargs): """ Create a new address space in this Network :param str prefix: The CIDR prefix of the address to add :param str type: planned, assignment, host :return: The newly created Address object """ if not self.data: raise NetworkException('Network not loaded') return Address(self.config, prefix, type, self).create(**kwargs)
def apply_fd_time_shift(htilde, shifttime, kmin=0, fseries=None, copy=True): """Shifts a frequency domain waveform in time. The shift applied is shifttime - htilde.epoch. Parameters ---------- htilde : FrequencySeries The waveform frequency series. shifttime : float The time to shift the frequency series to. kmin : {0, int} The starting index of htilde to apply the time shift. Default is 0. fseries : {None, numpy array} The frequencies of each element in htilde. This is only needed if htilde is not sampled at equal frequency steps. copy : {True, bool} Make a copy of htilde before applying the time shift. If False, the time shift will be applied to htilde's data. Returns ------- FrequencySeries A frequency series with the waveform shifted to the new time. If copy is True, will be a new frequency series; if copy is False, will be the same as htilde. """ dt = float(shifttime - htilde.epoch) if dt == 0.: # no shift to apply, just copy if desired if copy: htilde = 1. * htilde elif isinstance(htilde, FrequencySeries): # FrequencySeries means equally sampled in frequency, use faster shifting htilde = apply_fseries_time_shift(htilde, dt, kmin=kmin, copy=copy) else: if fseries is None: fseries = htilde.sample_frequencies.numpy() shift = Array(numpy.exp(-2j*numpy.pi*dt*fseries), dtype=complex_same_precision_as(htilde)) if copy: htilde = 1. * htilde htilde *= shift return htilde
Shifts a frequency domain waveform in time. The shift applied is shifttime - htilde.epoch. Parameters ---------- htilde : FrequencySeries The waveform frequency series. shifttime : float The time to shift the frequency series to. kmin : {0, int} The starting index of htilde to apply the time shift. Default is 0. fseries : {None, numpy array} The frequencies of each element in htilde. This is only needed if htilde is not sampled at equal frequency steps. copy : {True, bool} Make a copy of htilde before applying the time shift. If False, the time shift will be applied to htilde's data. Returns ------- FrequencySeries A frequency series with the waveform shifted to the new time. If copy is True, will be a new frequency series; if copy is False, will be the same as htilde.
Below is the instruction that describes the task: ### Input: Shifts a frequency domain waveform in time. The shift applied is shifttime - htilde.epoch. Parameters ---------- htilde : FrequencySeries The waveform frequency series. shifttime : float The time to shift the frequency series to. kmin : {0, int} The starting index of htilde to apply the time shift. Default is 0. fseries : {None, numpy array} The frequencies of each element in htilde. This is only needed if htilde is not sampled at equal frequency steps. copy : {True, bool} Make a copy of htilde before applying the time shift. If False, the time shift will be applied to htilde's data. Returns ------- FrequencySeries A frequency series with the waveform shifted to the new time. If copy is True, will be a new frequency series; if copy is False, will be the same as htilde. ### Response: def apply_fd_time_shift(htilde, shifttime, kmin=0, fseries=None, copy=True): """Shifts a frequency domain waveform in time. The shift applied is shifttime - htilde.epoch. Parameters ---------- htilde : FrequencySeries The waveform frequency series. shifttime : float The time to shift the frequency series to. kmin : {0, int} The starting index of htilde to apply the time shift. Default is 0. fseries : {None, numpy array} The frequencies of each element in htilde. This is only needed if htilde is not sampled at equal frequency steps. copy : {True, bool} Make a copy of htilde before applying the time shift. If False, the time shift will be applied to htilde's data. Returns ------- FrequencySeries A frequency series with the waveform shifted to the new time. If copy is True, will be a new frequency series; if copy is False, will be the same as htilde. """ dt = float(shifttime - htilde.epoch) if dt == 0.: # no shift to apply, just copy if desired if copy: htilde = 1. * htilde elif isinstance(htilde, FrequencySeries): # FrequencySeries means equally sampled in frequency, use faster shifting htilde = apply_fseries_time_shift(htilde, dt, kmin=kmin, copy=copy) else: if fseries is None: fseries = htilde.sample_frequencies.numpy() shift = Array(numpy.exp(-2j*numpy.pi*dt*fseries), dtype=complex_same_precision_as(htilde)) if copy: htilde = 1. * htilde htilde *= shift return htilde
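The core identity used above — a shift of dt seconds in the time domain multiplies the frequency bin at f by exp(-2πi·f·dt) — can be sketched on plain lists, with no FrequencySeries/Array types; the sample bins below are invented.

```python
import cmath
import math

def fd_time_shift(htilde, freqs, dt):
    """Apply a time shift of dt seconds to frequency-domain samples."""
    # a pure phase rotation per bin: magnitudes are preserved
    return [h * cmath.exp(-2j * math.pi * f * dt)
            for h, f in zip(htilde, freqs)]

shifted = fd_time_shift([2 + 0j, 1 + 1j], [0.0, 10.0], 0.05)
```

The DC bin (f = 0) is unchanged by any shift, and every bin keeps its magnitude, which is a quick sanity check for this kind of transform.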
def render(self, template, filename, context={}, filters={}): """ Renders a Jinja2 template to text. """ filename = os.path.normpath(filename) path, file = os.path.split(filename) try: os.makedirs(path) except OSError as exception: if exception.errno != errno.EEXIST: raise path, file = os.path.split(template) loader = jinja2.FileSystemLoader(path) env = jinja2.Environment(loader=loader, trim_blocks=True, lstrip_blocks=True) env.filters.update(filters) template = env.get_template(file) text = template.render(context) with open(filename, 'wt') as f: f.write(text)
Renders a Jinja2 template to text.
Below is the instruction that describes the task: ### Input: Renders a Jinja2 template to text. ### Response: def render(self, template, filename, context={}, filters={}): """ Renders a Jinja2 template to text. """ filename = os.path.normpath(filename) path, file = os.path.split(filename) try: os.makedirs(path) except OSError as exception: if exception.errno != errno.EEXIST: raise path, file = os.path.split(template) loader = jinja2.FileSystemLoader(path) env = jinja2.Environment(loader=loader, trim_blocks=True, lstrip_blocks=True) env.filters.update(filters) template = env.get_template(file) text = template.render(context) with open(filename, 'wt') as f: f.write(text)
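The EEXIST-tolerant directory creation inside `render` is a common pre-3.2 idiom worth isolating; on modern Python it is equivalent to `os.makedirs(path, exist_ok=True)`.

```python
import errno
import os
import tempfile

def ensure_dir(path):
    """Create path (and parents); ignore the error if it already exists."""
    try:
        os.makedirs(path)
    except OSError as exception:
        if exception.errno != errno.EEXIST:
            raise

base = tempfile.mkdtemp()
target = os.path.join(base, 'a', 'b')
ensure_dir(target)
ensure_dir(target)  # second call is a no-op, not an error
```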
def handle_api_exception(error): """ Converts an API exception into an error response. """ _mp_track( type="exception", status_code=error.status_code, message=error.message, ) response = jsonify(dict( message=error.message )) response.status_code = error.status_code return response
Converts an API exception into an error response.
Below is the instruction that describes the task: ### Input: Converts an API exception into an error response. ### Response: def handle_api_exception(error): """ Converts an API exception into an error response. """ _mp_track( type="exception", status_code=error.status_code, message=error.message, ) response = jsonify(dict( message=error.message )) response.status_code = error.status_code return response
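The exception-to-response mapping can be shown framework-free: this sketch returns a `(body, status)` pair instead of a Flask `jsonify` response, and the `APIException` class here is invented to mirror the `message`/`status_code` attributes the handler reads.

```python
class APIException(Exception):
    def __init__(self, message, status_code=400):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

def handle_api_exception(error):
    """Map an API exception to a (body, status) pair."""
    return {'message': error.message}, error.status_code

body, status = handle_api_exception(APIException('bad id', status_code=404))
```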
def setDefaultIREncoding(encoding): ''' setDefaultIREncoding - Sets the default encoding used by IndexedRedis. This will be the default encoding used for field data. You can override this on a per-field basis by using an IRField (such as IRUnicodeField or IRRawField) @param encoding - An encoding (like utf-8) ''' try: b''.decode(encoding) except: raise ValueError('setDefaultIREncoding was provided an invalid codec. Got (encoding="%s")' %(str(encoding), )) global defaultIREncoding defaultIREncoding = encoding
setDefaultIREncoding - Sets the default encoding used by IndexedRedis. This will be the default encoding used for field data. You can override this on a per-field basis by using an IRField (such as IRUnicodeField or IRRawField) @param encoding - An encoding (like utf-8)
Below is the instruction that describes the task: ### Input: setDefaultIREncoding - Sets the default encoding used by IndexedRedis. This will be the default encoding used for field data. You can override this on a per-field basis by using an IRField (such as IRUnicodeField or IRRawField) @param encoding - An encoding (like utf-8) ### Response: def setDefaultIREncoding(encoding): ''' setDefaultIREncoding - Sets the default encoding used by IndexedRedis. This will be the default encoding used for field data. You can override this on a per-field basis by using an IRField (such as IRUnicodeField or IRRawField) @param encoding - An encoding (like utf-8) ''' try: b''.decode(encoding) except: raise ValueError('setDefaultIREncoding was provided an invalid codec. Got (encoding="%s")' %(str(encoding), )) global defaultIREncoding defaultIREncoding = encoding
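The `b''.decode(encoding)` probe above is a cheap way to validate a codec name before storing it (an unknown codec raises `LookupError`); extracted here as a standalone helper without the module-level global.

```python
def validate_encoding(encoding):
    """Return encoding unchanged if it names a usable codec, else raise ValueError."""
    try:
        b''.decode(encoding)  # raises LookupError for unknown codec names
    except Exception:
        raise ValueError(
            'setDefaultIREncoding was provided an invalid codec. '
            'Got (encoding="%s")' % (str(encoding),))
    return encoding
```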