def any(self, array, role = None):
    """
    Return ``True`` if ``array`` is ``True`` for any members of the entity.

    ``array`` must have the dimension of the number of persons in the simulation.

    If ``role`` is provided, only the entity members with the given role are taken into account.

    Example:

    >>> salaries = household.members('salary', '2018-01')  # e.g. [2000, 1500, 0, 0, 0]
    >>> household.any(salaries >= 1800)
    array([ True])
    """
    sum_in_entity = self.sum(array, role = role)
    return (sum_in_entity > 0)
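The any-via-sum trick above can be sketched in pure Python, without the simulation objects: for a boolean mask, summing and testing `> 0` is equivalent to `any`. The salary values below are illustrative, taken from the docstring example.

```python
# Pure-Python sketch of the any-via-sum trick: for a boolean mask,
# sum(mask) > 0 is equivalent to any(mask), since True counts as 1.
salaries = [2000, 1500, 0, 0, 0]  # illustrative values from the docstring
mask = [s >= 1800 for s in salaries]
has_high_earner = sum(mask) > 0  # True: at least one salary >= 1800
```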
def skipgram_lookup(indices, subwordidxs, subwordidxsptr, offset=0):
    """Get a sparse COO array of words and subwords for SkipGram.

    Parameters
    ----------
    indices : numpy.ndarray
        Array containing numbers in [0, vocabulary_size). The element at
        position idx is taken to be the word that occurs at row idx in the
        SkipGram batch.
    subwordidxs : numpy.ndarray
        Array containing the concatenation of all subwords of all tokens in
        the vocabulary, in order of their occurrence in the vocabulary.
        For example np.concatenate(idx_to_subwordidxs)
    subwordidxsptr : numpy.ndarray
        Array containing pointers into the subwordidxs array such that
        subwordidxs[subwordidxsptr[i]:subwordidxsptr[i+1]] returns all
        subwords of token i. For example subwordidxsptr = np.cumsum([
        len(subwordidxs) for subwordidxs in idx_to_subwordidxs])
    offset : int, default 0
        Offset to add to each subword index.

    Returns
    -------
    numpy.ndarray of dtype float32
        Array containing weights such that for each row, all weights sum
        to 1. In particular, all elements in a row have weight
        1 / num_elements_in_the_row
    numpy.ndarray of dtype int64
        This array is the row array of a sparse array of COO format.
    numpy.ndarray of dtype int64
        This array is the col array of a sparse array of COO format.

    """
    row = []
    col = []
    data = []
    for i, idx in enumerate(indices):
        start = subwordidxsptr[idx]
        end = subwordidxsptr[idx + 1]

        row.append(i)
        col.append(idx)
        data.append(1 / (1 + end - start))

        for subword in subwordidxs[start:end]:
            row.append(i)
            col.append(subword + offset)
            data.append(1 / (1 + end - start))

    return (np.array(data, dtype=np.float32),
            np.array(row, dtype=np.int64),
            np.array(col, dtype=np.int64))
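The CSR-style pointer layout and per-row weighting that skipgram_lookup relies on can be exercised with a minimal pure-Python sketch (no NumPy). The `idx_to_subwordidxs` data below is hypothetical, and a leading 0 sentinel is prepended to the pointer array so the slicing works for token 0 — an assumption, not from the source.

```python
# Pure-Python sketch of the CSR-style pointer layout used by skipgram_lookup.
# idx_to_subwordidxs is hypothetical example data.
idx_to_subwordidxs = [[10, 11], [12], []]  # subwords per vocabulary token

# Concatenate all subwords and build pointers (with a leading 0 sentinel).
subwordidxs = [s for subs in idx_to_subwordidxs for s in subs]
subwordidxsptr = [0]
for subs in idx_to_subwordidxs:
    subwordidxsptr.append(subwordidxsptr[-1] + len(subs))

def lookup(indices, offset=0):
    row, col, data = [], [], []
    for i, idx in enumerate(indices):
        start, end = subwordidxsptr[idx], subwordidxsptr[idx + 1]
        weight = 1 / (1 + end - start)  # word plus its subwords share the row
        row.append(i); col.append(idx); data.append(weight)
        for subword in subwordidxs[start:end]:
            row.append(i); col.append(subword + offset); data.append(weight)
    return data, row, col

data, row, col = lookup([0, 2], offset=100)
```

Token 0 contributes itself plus two subwords (each weighted 1/3), while token 2 has no subwords and gets weight 1 on its own.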
def str2dt(time: datetime) -> np.ndarray:
    """
    Converts times in string or list of strings to datetime(s)

    Parameters
    ----------
    time : str or datetime.datetime or numpy.datetime64

    Returns
    -------
    t : datetime.datetime
    """
    if isinstance(time, datetime):
        return time
    elif isinstance(time, str):
        return parse(time)
    elif isinstance(time, np.datetime64):
        return time.astype(datetime)
    else:  # some sort of iterable
        try:
            if isinstance(time[0], datetime):
                return time
            elif isinstance(time[0], np.datetime64):
                return time.astype(datetime)
            elif isinstance(time[0], str):
                return [parse(t) for t in time]
        except (IndexError, TypeError):
            pass

    # last resort--assume pandas/xarray
    return time.values.astype('datetime64[us]').astype(datetime)
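The type-dispatch idea in str2dt can be sketched with the standard library alone. Here `datetime.fromisoformat` stands in for dateutil's `parse`, so only ISO-8601 strings are handled — a simplification, not the original behavior.

```python
# Stdlib-only sketch of the same type-dispatch pattern: pass through
# datetimes, parse strings, and recurse on iterables.
from datetime import datetime

def to_datetime(time):
    if isinstance(time, datetime):
        return time
    if isinstance(time, str):
        return datetime.fromisoformat(time)  # ISO-8601 only, unlike dateutil
    # assume some sort of iterable of strings/datetimes
    return [to_datetime(t) for t in time]

result = to_datetime(['2018-01-01T00:00:00', '2018-01-02T12:30:00'])
```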
def load_psf(psf_path, psf_hdu, pixel_scale, renormalize=False):
    """Factory for loading the psf from a .fits file.

    Parameters
    ----------
    psf_path : str
        The path to the psf .fits file containing the psf (e.g. '/path/to/psf.fits')
    psf_hdu : int
        The hdu the psf is contained in the .fits file specified by *psf_path*.
    pixel_scale : float
        The size of each pixel in arc seconds.
    renormalize : bool
        If True, the PSF is renormalized such that all elements sum to 1.0.
    """
    if renormalize:
        return PSF.from_fits_renormalized(file_path=psf_path, hdu=psf_hdu, pixel_scale=pixel_scale)
    return PSF.from_fits_with_scale(file_path=psf_path, hdu=psf_hdu, pixel_scale=pixel_scale)
def _parse_textgroup_wrapper(self, cts_file):
    """ Wraps with a Try/Except the textgroup parsing from a cts file

    :param cts_file: Path to the CTS File
    :type cts_file: str
    :return: CtsTextgroupMetadata
    """
    try:
        return self._parse_textgroup(cts_file)
    except Exception as E:
        self.logger.error("Error parsing %s ", cts_file)
        if self.RAISE_ON_GENERIC_PARSING_ERROR:
            raise E
def force_rerun(flag, outfile):
    """Check if we should force rerunning of a command if an output file exists.

    Args:
        flag (bool): Flag to force rerun.
        outfile (str): Path to output file which may already exist.

    Returns:
        bool: If we should force rerunning of a command

    Examples:
        >>> force_rerun(flag=True, outfile='/not/existing/file.txt')
        True
        >>> force_rerun(flag=False, outfile='/not/existing/file.txt')
        True
        >>> force_rerun(flag=True, outfile='./utils.py')
        True
        >>> force_rerun(flag=False, outfile='./utils.py')
        False

    """
    # If flag is True, always run
    if flag:
        return True
    # If flag is False but file doesn't exist, also run
    elif not flag and not op.exists(outfile):
        return True
    # If flag is False but filesize of output is 0, also run
    elif not flag and not is_non_zero_file(outfile):
        return True
    # Otherwise, do not run
    else:
        return False
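The rerun decision above can be sketched self-contained with the standard library: run when forced, when the output is missing, or when it is empty. Here `os.path.getsize` approximates the source's `is_non_zero_file` helper, whose exact definition is not shown.

```python
# Minimal stdlib sketch of the force_rerun decision logic.
import os
import tempfile

def should_run(force, outfile):
    if force:
        return True                          # forced: always run
    if not os.path.exists(outfile):
        return True                          # no output yet: run
    if os.path.getsize(outfile) == 0:
        return True                          # empty output: rerun
    return False                             # valid output exists: skip

# Create a non-empty output file to exercise the three branches.
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b'content')
    path = fh.name
```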
def expand_iota_subscript(input_str, lowercase=False):
    """Find characters with iota subscript and replace w/ char + iota added."""
    new_list = []
    for char in input_str:
        new_char = MAP_SUBSCRIPT_NO_SUB.get(char)
        if not new_char:
            new_char = char
        new_list.append(new_char)
    new_str = ''.join(new_list)
    if lowercase:
        new_str = new_str.lower()
    return new_str
def lpc(blk, order=None):
    """
    Find the Linear Predictive Coding (LPC) coefficients as a ZFilter object,
    the analysis whitening filter. This implementation uses the covariance
    method, assuming a zero-mean stochastic process, using numpy.linalg.pinv
    as a linear system solver.
    """
    from numpy import matrix
    from numpy.linalg import pinv

    lagm = lag_matrix(blk, order)
    phi = matrix(lagm)
    psi = phi[1:, 0]
    coeffs = pinv(phi[1:, 1:]) * -psi
    coeffs = coeffs.T.tolist()[0]
    filt = 1 + sum(ai * z ** -i for i, ai in enumerate(coeffs, 1))
    filt.error = phi[0, 0] + sum(a * c for a, c in xzip(lagm[0][1:], coeffs))
    return filt
def register_clean(self, entity_class, entity):
    """
    Registers the given entity for the given class as CLEAN.

    :returns: Cloned entity.
    """
    EntityState.manage(entity, self)
    EntityState.get_state(entity).status = ENTITY_STATUS.CLEAN
    self.__entity_set_map[entity_class].add(entity)
def get(self, index: Union[int, str]) -> HistoryItem:
    """Get item from the History list using 1-based indexing.

    :param index: optional item to get (index as either integer or string)
    :return: a single HistoryItem
    """
    index = int(index)
    if index == 0:
        raise IndexError('The first command in history is command 1.')
    elif index < 0:
        return self[index]
    else:
        return self[index - 1]
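The 1-based lookup rule above stands alone as a small function on any list: index 0 is an error, negative indices keep Python's from-the-end semantics, and positive indices are shifted down by one.

```python
# Self-contained sketch of 1-based indexing over a plain list, mirroring
# the History.get rule: reject 0, pass negatives through, shift positives.
def get_one_based(items, index):
    index = int(index)  # accept both integers and numeric strings
    if index == 0:
        raise IndexError('The first command in history is command 1.')
    if index < 0:
        return items[index]
    return items[index - 1]

history = ['first', 'second', 'third']
```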
def density_2d(self, r, kwargs_profile):
    """
    computes the projected density along the line-of-sight

    :param r: radius (arcsec)
    :param kwargs_profile: keyword argument list with lens model parameters
    :return: 2d projected density at projected radius r
    """
    kwargs = copy.deepcopy(kwargs_profile)
    try:
        del kwargs['center_x']
        del kwargs['center_y']
    except KeyError:
        pass
    # integral of self._profile.density(np.sqrt(x^2+r^2)) * dx from 0 to
    # infinity, truncated at an upper bound of 100
    out = integrate.quad(lambda x: 2*self._profile.density(np.sqrt(x**2+r**2), **kwargs), 0, 100)
    return out[0]
def _expand_vrf_spec(self, spec):
    """ Expand VRF specification to SQL.

        id [integer]
            internal database id of VRF
        name [string]
            name of VRF

        A VRF is referenced either by its internal database id or by its
        name. Both are used for exact matching and so no wildcard or
        regular expressions are allowed. Only one key may be used and an
        error will be thrown if both id and name is specified.
    """
    if type(spec) is not dict:
        raise NipapInputError("vrf specification must be a dict")

    allowed_values = ['id', 'name', 'rt']
    for a in spec:
        if a not in allowed_values:
            raise NipapExtraneousInputError("extraneous specification key %s" % a)

    if 'id' in spec:
        if type(spec['id']) not in (int, long):
            raise NipapValueError("VRF specification key 'id' must be an integer.")
    elif 'rt' in spec:
        if type(spec['rt']) != type(''):
            raise NipapValueError("VRF specification key 'rt' must be a string.")
    elif 'name' in spec:
        if type(spec['name']) != type(''):
            raise NipapValueError("VRF specification key 'name' must be a string.")
    if len(spec) > 1:
        raise NipapExtraneousInputError("VRF specification contains too many keys, specify VRF id, vrf or name.")

    where, params = self._sql_expand_where(spec, 'spec_')
    return where, params
def qsat(self, temp, pres, parameter):
    """
    Calculate (qsat_lst) vector of saturation humidity from:
    temp = vector of element layer temperatures
    pres = pressure (at current timestep).
    """
    gamw = (parameter.cl - parameter.cpv) / parameter.rv
    betaw = (parameter.lvtt/parameter.rv) + (gamw * parameter.tt)
    alpw = math.log(parameter.estt) + (betaw/parameter.tt) + (gamw * math.log(parameter.tt))
    work2 = parameter.r/parameter.rv

    foes_lst = [0 for i in range(len(temp))]
    work1_lst = [0 for i in range(len(temp))]
    qsat_lst = [0 for i in range(len(temp))]
    for i in range(len(temp)):
        # saturation vapor pressure
        foes_lst[i] = math.exp(alpw - betaw/temp[i] - gamw*math.log(temp[i]))
        work1_lst[i] = foes_lst[i]/pres[i]
        # saturation humidity
        qsat_lst[i] = work2*work1_lst[i] / (1. + (work2-1.) * work1_lst[i])

    return qsat_lst
def _assemble_gap(stmt):
    """Assemble Gap statements into text."""
    subj_str = _assemble_agent_str(stmt.gap)
    obj_str = _assemble_agent_str(stmt.ras)
    stmt_str = subj_str + ' is a GAP for ' + obj_str
    return _make_sentence(stmt_str)
def check_address_has_code(
        client: 'JSONRPCClient',
        address: Address,
        contract_name: str = '',
):
    """ Checks that the given address contains code. """
    result = client.web3.eth.getCode(to_checksum_address(address), 'latest')

    if not result:
        if contract_name:
            formatted_contract_name = '[{}]: '.format(contract_name)
        else:
            formatted_contract_name = ''
        raise AddressWithoutCode(
            '{}Address {} does not contain code'.format(
                formatted_contract_name,
                to_checksum_address(address),
            ),
        )
def axis_overlap(ax1, ax2):
    """ Tests whether two axes overlap vertically """
    b1, t1 = ax1.get_position().intervaly
    b2, t2 = ax2.get_position().intervaly
    return t1 > b2 and b1 < t2
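The vertical-overlap test above reduces to plain 1-D interval intersection, which can be checked without matplotlib: two intervals (b1, t1) and (b2, t2) overlap iff each starts strictly below the other's top (so merely touching intervals do not count).

```python
# 1-D interval intersection, the core of axis_overlap, with strict
# inequalities so that intervals sharing only an endpoint do not overlap.
def intervals_overlap(b1, t1, b2, t2):
    return t1 > b2 and b1 < t2
```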
def runSearchCallSets(self, request):
    """
    Runs the specified SearchCallSetsRequest.
    """
    return self.runSearchRequest(
        request, protocol.SearchCallSetsRequest,
        protocol.SearchCallSetsResponse,
        self.callSetsGenerator)
def delete(context, force, yes, analysis_id):
    """Delete an analysis log from the database."""
    analysis_obj = context.obj['store'].analysis(analysis_id)
    if analysis_obj is None:
        print(click.style('analysis log not found', fg='red'))
        context.abort()

    print(click.style(f"{analysis_obj.family}: {analysis_obj.status}"))

    if analysis_obj.is_temp:
        if yes or click.confirm("remove analysis log?"):
            analysis_obj.delete()
            context.obj['store'].commit()
            print(click.style(f"analysis deleted: {analysis_obj.family}", fg='blue'))
    else:
        if analysis_obj.is_deleted:
            print(click.style(f"{analysis_obj.family}: already deleted", fg='red'))
            context.abort()

        if Path(analysis_obj.out_dir).exists():
            root_dir = context.obj['store'].families_dir
            family_dir = analysis_obj.out_dir
            if not force and (len(family_dir) <= len(root_dir) or root_dir not in family_dir):
                print(click.style(f"unknown analysis output dir: {analysis_obj.out_dir}", fg='red'))
                print(click.style("use '--force' to override"))
                context.abort()

            if yes or click.confirm(f"remove analysis output: {analysis_obj.out_dir}?"):
                shutil.rmtree(analysis_obj.out_dir, ignore_errors=True)
                analysis_obj.is_deleted = True
                context.obj['store'].commit()
                print(click.style(f"analysis deleted: {analysis_obj.family}", fg='blue'))
        else:
            print(click.style(f"analysis output doesn't exist: {analysis_obj.out_dir}", fg='red'))
            context.abort()
def convert_docx_to_text( filename: str = None, blob: bytes = None, config: TextProcessingConfig = _DEFAULT_CONFIG) -> str: """ Converts a DOCX file to text. Pass either a filename or a binary object. Args: filename: filename to process blob: binary ``bytes`` object to process config: :class:`TextProcessingConfig` control object Returns: text contents Notes: - Old ``docx`` (https://pypi.python.org/pypi/python-docx) has been superseded (see https://github.com/mikemaccana/python-docx). - ``docx.opendocx(file)`` uses :class:`zipfile.ZipFile`, which can take either a filename or a file-like object (https://docs.python.org/2/library/zipfile.html). - Method was: .. code-block:: python with get_filelikeobject(filename, blob) as fp: document = docx.opendocx(fp) paratextlist = docx.getdocumenttext(document) return '\n\n'.join(paratextlist) - Newer ``docx`` is python-docx - https://pypi.python.org/pypi/python-docx - https://python-docx.readthedocs.org/en/latest/ - http://stackoverflow.com/questions/25228106 However, it uses ``lxml``, which has C dependencies, so it doesn't always install properly on e.g. bare Windows machines. PERFORMANCE of my method: - nice table formatting - but tables grouped at end, not in sensible places - can iterate via ``doc.paragraphs`` and ``doc.tables`` but not in true document order, it seems - others have noted this too: - https://github.com/python-openxml/python-docx/issues/40 - https://github.com/deanmalmgren/textract/pull/92 - ``docx2txt`` is at https://pypi.python.org/pypi/docx2txt/0.6; this is pure Python. Its command-line function appears to be for Python 2 only (2016-04-21: crashes under Python 3; is due to an encoding bug). However, it seems fine as a library. It doesn't handle in-memory blobs properly, though, so we need to extend it. PERFORMANCE OF ITS ``process()`` function: - all text comes out - table text is in a sensible place - table formatting is lost. 
- Other manual methods (not yet implemented): http://etienned.github.io/posts/extract-text-from-word-docx-simply/. Looks like it won't deal with header stuff (etc.) that ``docx2txt`` handles. - Upshot: we need a DIY version. - See also this "compile lots of techniques" libraries, which has C dependencies: http://textract.readthedocs.org/en/latest/ """ if True: text = '' with get_filelikeobject(filename, blob) as fp: for xml in gen_xml_files_from_docx(fp): text += docx_text_from_xml(xml, config) return text
Converts a DOCX file to text. Pass either a filename or a binary object. Args: filename: filename to process blob: binary ``bytes`` object to process config: :class:`TextProcessingConfig` control object Returns: text contents Notes: - Old ``docx`` (https://pypi.python.org/pypi/python-docx) has been superseded (see https://github.com/mikemaccana/python-docx). - ``docx.opendocx(file)`` uses :class:`zipfile.ZipFile`, which can take either a filename or a file-like object (https://docs.python.org/2/library/zipfile.html). - Method was: .. code-block:: python with get_filelikeobject(filename, blob) as fp: document = docx.opendocx(fp) paratextlist = docx.getdocumenttext(document) return '\n\n'.join(paratextlist) - Newer ``docx`` is python-docx - https://pypi.python.org/pypi/python-docx - https://python-docx.readthedocs.org/en/latest/ - http://stackoverflow.com/questions/25228106 However, it uses ``lxml``, which has C dependencies, so it doesn't always install properly on e.g. bare Windows machines. PERFORMANCE of my method: - nice table formatting - but tables grouped at end, not in sensible places - can iterate via ``doc.paragraphs`` and ``doc.tables`` but not in true document order, it seems - others have noted this too: - https://github.com/python-openxml/python-docx/issues/40 - https://github.com/deanmalmgren/textract/pull/92 - ``docx2txt`` is at https://pypi.python.org/pypi/docx2txt/0.6; this is pure Python. Its command-line function appears to be for Python 2 only (2016-04-21: crashes under Python 3; is due to an encoding bug). However, it seems fine as a library. It doesn't handle in-memory blobs properly, though, so we need to extend it. PERFORMANCE OF ITS ``process()`` function: - all text comes out - table text is in a sensible place - table formatting is lost. - Other manual methods (not yet implemented): http://etienned.github.io/posts/extract-text-from-word-docx-simply/. Looks like it won't deal with header stuff (etc.) that ``docx2txt`` handles. 
- Upshot: we need a DIY version. - See also this "compile lots of techniques" libraries, which has C dependencies: http://textract.readthedocs.org/en/latest/
def create_unique_autosave_filename(self, filename, autosave_dir): """ Create unique autosave file name for specified file name. Args: filename (str): original file name autosave_dir (str): directory in which autosave files are stored """ basename = osp.basename(filename) autosave_filename = osp.join(autosave_dir, basename) if autosave_filename in self.name_mapping.values(): counter = 0 root, ext = osp.splitext(basename) while autosave_filename in self.name_mapping.values(): counter += 1 autosave_basename = '{}-{}{}'.format(root, counter, ext) autosave_filename = osp.join(autosave_dir, autosave_basename) return autosave_filename
Create unique autosave file name for specified file name. Args: filename (str): original file name autosave_dir (str): directory in which autosave files are stored
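The counter-suffix pattern used by `create_unique_autosave_filename` can be sketched as a standalone function. This is a simplified re-implementation for illustration only: the set `taken` stands in for `self.name_mapping.values()`, and the paths are hypothetical.

```python
import os.path as osp

def unique_autosave_name(filename, autosave_dir, taken):
    """Return a path in autosave_dir not present in `taken`,
    inserting -1, -2, ... before the extension until unique."""
    basename = osp.basename(filename)
    candidate = osp.join(autosave_dir, basename)
    root, ext = osp.splitext(basename)
    counter = 0
    while candidate in taken:
        counter += 1
        candidate = osp.join(autosave_dir, '{}-{}{}'.format(root, counter, ext))
    return candidate

taken = {osp.join('/tmp/autosave', 'script.py')}
# 'script.py' is taken, so the first free name is 'script-1.py'
print(unique_autosave_name('/home/me/script.py', '/tmp/autosave', taken))
```

Note that the counter goes before the extension, so the result is still recognized as a Python file.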
def array(description, **kwargs) -> typing.Type: """Create a :class:`~doctor.types.Array` type. :param description: A description of the type. :param kwargs: Can include any attribute defined in :class:`~doctor.types.Array` """ kwargs['description'] = description return type('Array', (Array,), kwargs)
Create a :class:`~doctor.types.Array` type. :param description: A description of the type. :param kwargs: Can include any attribute defined in :class:`~doctor.types.Array`
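The `array` factory relies on the three-argument form of `type()`, which builds a new class at runtime. A self-contained sketch of the same idea follows; the base `Array` class and its attributes here are stand-ins, not the real `doctor.types.Array`.

```python
class Array:
    """Stand-in for doctor.types.Array."""
    description = None
    min_items = 0

def array(description, **kwargs):
    # type(name, bases, namespace) creates a subclass whose class
    # attributes come from the keyword arguments.
    kwargs['description'] = description
    return type('Array', (Array,), kwargs)

Colors = array('A list of colors', min_items=1)
print(Colors.description, Colors.min_items)  # A list of colors 1
```

Each call returns a distinct subclass, so two types built from the same factory do not share attribute overrides.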
def update_short(self, **kwargs): """ Update the short optional arguments (those with one leading '-') This method updates the short argument name for the specified function arguments as stored in :attr:`unfinished_arguments` Parameters ---------- ``**kwargs`` Keywords must be keys in the :attr:`unfinished_arguments` dictionary (i.e. keywords of the root functions), values the short argument names Examples -------- Setting:: >>> parser.update_short(something='s', something_else='se') is basically the same as:: >>> parser.update_arg('something', short='s') >>> parser.update_arg('something_else', short='se') which in turn is basically comparable to:: >>> parser.add_argument('-s', '--something', ...) >>> parser.add_argument('-se', '--something_else', ...) See Also -------- update_shortf, update_long""" for key, val in six.iteritems(kwargs): self.update_arg(key, short=val)
Update the short optional arguments (those with one leading '-') This method updates the short argument name for the specified function arguments as stored in :attr:`unfinished_arguments` Parameters ---------- ``**kwargs`` Keywords must be keys in the :attr:`unfinished_arguments` dictionary (i.e. keywords of the root functions), values the short argument names Examples -------- Setting:: >>> parser.update_short(something='s', something_else='se') is basically the same as:: >>> parser.update_arg('something', short='s') >>> parser.update_arg('something_else', short='se') which in turn is basically comparable to:: >>> parser.add_argument('-s', '--something', ...) >>> parser.add_argument('-se', '--something_else', ...) See Also -------- update_shortf, update_long
def get_one_optional(self, name): """ Gets one optional dependency by its name. :param name: the dependency name to locate. :return: a dependency reference or null if the dependency was not found """ locator = self._locate(name) # Compare against None with 'is not', not '!=' return self._references.get_one_optional(locator) if locator is not None else None
Gets one optional dependency by its name. :param name: the dependency name to locate. :return: a dependency reference or null if the dependency was not found
def parse_struct(s): """ Returns a docco section for the given struct. :Parameters: s Parsed IDL struct dict. Keys: 'comment', 'name', 'extends', 'fields' """ docs = s['comment'] code = '<span class="k">struct</span> <span class="gs">%s</span>' % s['name'] if s['extends']: code += ' extends <span class="gs">%s</span>' % s['extends'] code += ' {\n' namelen = 0 typelen = 0 for v in s["fields"]: tlen = len(format_type(v, includeOptional=False)) if len(v['name']) > namelen: namelen = len(v['name']) if tlen > typelen: typelen = tlen namelen += 1 typelen += 1 formatstr = ' <span class="nv">%s</span><span class="kt">%s %s</span>\n' i = 0 for v in s["fields"]: # dict.has_key() and the string module's ljust() are Python 2 only; # use the 'in' operator / dict.get() and str.ljust() instead. if v.get('comment'): if i > 0: code += "\n" for line in v['comment'].split("\n"): code += ' <span class="c1">// %s</span>\n' % line opt = "" if v.get('optional') is True: opt = " [optional]" code += formatstr % (v['name'].ljust(namelen), format_type(v, includeOptional=False).ljust(typelen), opt) i += 1 code += "}" return to_section(docs, code)
Returns a docco section for the given struct. :Parameters: s Parsed IDL struct dict. Keys: 'comment', 'name', 'extends', 'fields'
def open_user_directory_path(self): """Open File dialog to choose the user directory path.""" # noinspection PyCallByClass,PyTypeChecker directory_name = QFileDialog.getExistingDirectory( self, self.tr('Results directory'), self.leUserDirectoryPath.text(), QFileDialog.ShowDirsOnly) if directory_name: self.leUserDirectoryPath.setText(directory_name)
Open File dialog to choose the user directory path.
def check(self, dsm, **kwargs): """ Check layered architecture. Args: dsm (:class:`DesignStructureMatrix`): the DSM to check. Returns: bool, str: True if layered architecture else False, messages """ layered_architecture = True messages = [] categories = dsm.categories dsm_size = dsm.size[0] if not categories: categories = ['appmodule'] * dsm_size for i in range(0, dsm_size - 1): for j in range(i + 1, dsm_size): if (categories[i] != 'broker' and categories[j] != 'broker' and dsm.entities[i].split('.')[0] != dsm.entities[j].split('.')[0]): # noqa if dsm.data[i][j] > 0: layered_architecture = False messages.append( 'Dependency from %s to %s breaks the ' 'layered architecture.' % ( dsm.entities[i], dsm.entities[j])) return layered_architecture, '\n'.join(messages)
Check layered architecture. Args: dsm (:class:`DesignStructureMatrix`): the DSM to check. Returns: bool, str: True if layered architecture else False, messages
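The layering rule in `check` can be isolated from the DSM object: a dependency from entity *i* to a later entity *j* in a different top-level package breaks layering, unless either side is a `'broker'`. The sketch below is a simplified re-implementation over plain lists, not the original checker's API.

```python
def breaks_layering(entities, categories, data):
    """Return messages for dependencies that break the layered
    architecture (upper-triangle cells across package boundaries)."""
    messages = []
    n = len(entities)
    for i in range(n - 1):
        for j in range(i + 1, n):
            # brokers are exempt on either end of the dependency
            if categories[i] == 'broker' or categories[j] == 'broker':
                continue
            # same top-level package: allowed
            if entities[i].split('.')[0] == entities[j].split('.')[0]:
                continue
            if data[i][j] > 0:
                messages.append('Dependency from %s to %s breaks the '
                                'layered architecture.'
                                % (entities[i], entities[j]))
    return messages

# app.core depends on lib.utils -> one violation
print(breaks_layering(['app.core', 'lib.utils'],
                      ['appmodule', 'appmodule'],
                      [[0, 1], [0, 0]]))
```

Only the upper triangle of the matrix is inspected, so dependencies pointing "down" the layer order are never flagged.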
def empty(shape, ctx=None, dtype=None): """Returns a new array of given shape and type, without initializing entries. Parameters ---------- shape : int or tuple of int The shape of the empty array. ctx : Context, optional An optional device context (default is the current default context). dtype : str or numpy.dtype, optional An optional value type (default is `float32`). Returns ------- NDArray A created array. """ if isinstance(shape, int): shape = (shape, ) if ctx is None: ctx = current_context() if dtype is None: dtype = mx_real_t return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
Returns a new array of given shape and type, without initializing entries. Parameters ---------- shape : int or tuple of int The shape of the empty array. ctx : Context, optional An optional device context (default is the current default context). dtype : str or numpy.dtype, optional An optional value type (default is `float32`). Returns ------- NDArray A created array.
def vblk_erase(self, address): """nvm_vblk erase""" cmd = ["nvm_vblk erase", self.envs, "0x%x" % address] status, _, _ = cij.ssh.command(cmd, shell=True) return status
nvm_vblk erase
def shift_com(elements, coordinates, com_adjust=np.zeros(3)): """ Return coordinates translated by some vector. Parameters ---------- elements : numpy.ndarray An array of all elements (type: str) in a molecule. coordinates : numpy.ndarray An array containing molecule's coordinates. com_adjust : numpy.ndarray (default = [0, 0, 0]) Returns ------- numpy.ndarray Translated array of molecule's coordinates. """ com = center_of_mass(elements, coordinates) com = np.array([com - com_adjust] * coordinates.shape[0]) return coordinates - com
Return coordinates translated by some vector. Parameters ---------- elements : numpy.ndarray An array of all elements (type: str) in a molecule. coordinates : numpy.ndarray An array containing molecule's coordinates. com_adjust : numpy.ndarray (default = [0, 0, 0]) Returns ------- numpy.ndarray Translated array of molecule's coordinates.
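The effect of `shift_com` is to translate coordinates so the mass-weighted centroid (minus the optional adjustment) lands at the origin. A dependency-free sketch of the same idea, using plain lists and an illustrative mass table instead of NumPy arrays:

```python
MASS = {'H': 1.008, 'O': 15.999}  # illustrative masses only

def center_of_mass(elements, coords):
    total = sum(MASS[e] for e in elements)
    return [sum(MASS[e] * c[k] for e, c in zip(elements, coords)) / total
            for k in range(3)]

def shift_com(elements, coords, com_adjust=(0.0, 0.0, 0.0)):
    com = center_of_mass(elements, coords)
    shift = [com[k] - com_adjust[k] for k in range(3)]
    # subtract the (adjusted) centre of mass from every atom
    return [[c[k] - shift[k] for k in range(3)] for c in coords]

coords = [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
shifted = shift_com(['O', 'O'], coords)
print(shifted)  # z values become -0.5 and +0.5: COM now at the origin
```

With equal masses the centroid is the plain average, so the two oxygens end up symmetric about the origin.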
def add_job(self, idx): """Called after self.targets[idx] just got the job with header. Override with subclasses. The default ordering is simple LRU. The default loads are the number of outstanding jobs.""" self.loads[idx] += 1 for lis in (self.targets, self.loads): lis.append(lis.pop(idx))
Called after self.targets[idx] just got the job with header. Override with subclasses. The default ordering is simple LRU. The default loads are the number of outstanding jobs.
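The rotation trick in `add_job` keeps the two parallel lists aligned while moving the just-used target to the back, which is what makes the default ordering LRU. A minimal self-contained sketch of that state (the class name and setup are illustrative):

```python
class LRUScheduler:
    """Parallel lists of targets and their outstanding-job counts,
    rotated so the most recently used target moves to the back."""
    def __init__(self, targets):
        self.targets = list(targets)
        self.loads = [0] * len(self.targets)

    def add_job(self, idx):
        self.loads[idx] += 1
        # pop/append both lists at the same index to keep them aligned
        for lis in (self.targets, self.loads):
            lis.append(lis.pop(idx))

s = LRUScheduler(['a', 'b', 'c'])
s.add_job(0)               # 'a' just received a job
print(s.targets, s.loads)  # ['b', 'c', 'a'] [0, 0, 1]
```

Because the least recently used target is always at index 0, a scheduler that assigns to the front of `self.targets` gets LRU behaviour for free.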
def prepare_for_authenticate( self, entityid=None, relay_state="", binding=saml2.BINDING_HTTP_REDIRECT, vorg="", nameid_format=None, scoping=None, consent=None, extensions=None, sign=None, response_binding=saml2.BINDING_HTTP_POST, **kwargs): """ Makes all necessary preparations for an authentication request. :param entityid: The entity ID of the IdP to send the request to :param relay_state: To where the user should be returned after successfull log in. :param binding: Which binding to use for sending the request :param vorg: The entity_id of the virtual organization I'm a member of :param nameid_format: :param scoping: For which IdPs this query are aimed. :param consent: Whether the principal have given her consent :param extensions: Possible extensions :param sign: Whether the request should be signed or not. :param response_binding: Which binding to use for receiving the response :param kwargs: Extra key word arguments :return: session id and AuthnRequest info """ reqid, negotiated_binding, info = \ self.prepare_for_negotiated_authenticate( entityid=entityid, relay_state=relay_state, binding=binding, vorg=vorg, nameid_format=nameid_format, scoping=scoping, consent=consent, extensions=extensions, sign=sign, response_binding=response_binding, **kwargs) assert negotiated_binding == binding return reqid, info
Makes all necessary preparations for an authentication request. :param entityid: The entity ID of the IdP to send the request to :param relay_state: To where the user should be returned after successful log in. :param binding: Which binding to use for sending the request :param vorg: The entity_id of the virtual organization I'm a member of :param nameid_format: :param scoping: For which IdPs this query is aimed. :param consent: Whether the principal has given her consent :param extensions: Possible extensions :param sign: Whether the request should be signed or not. :param response_binding: Which binding to use for receiving the response :param kwargs: Extra key word arguments :return: session id and AuthnRequest info
def open_url_seekable(path_url, mode='rt'): """Open a URL and ensure that the result is seekable, copying it into a buffer if necessary.""" logging.debug("Making request to '%s'", path_url) response = urllib.request.urlopen(path_url) logging.debug("Got response %s", response.info()) try: response.seek(0) except (IOError, AttributeError): # Copy into buffer to allow seeking. response = io.BytesIO(response.read()) if "b" in mode: return response else: return io.TextIOWrapper(response)
Open a URL and ensure that the result is seekable, copying it into a buffer if necessary.
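The seekability fallback in `open_url_seekable` works with any stream, not just URL responses. The sketch below isolates it and exercises it with a fake non-seekable reader instead of a network request; `ensure_seekable` and `NonSeekable` are illustrative names, not part of the original module.

```python
import io

def ensure_seekable(response, mode='rt'):
    """Try seek(0); if the stream can't seek, copy it into an
    in-memory buffer so callers can rewind it."""
    try:
        response.seek(0)
    except (IOError, AttributeError, io.UnsupportedOperation):
        response = io.BytesIO(response.read())
    if 'b' in mode:
        return response
    return io.TextIOWrapper(response)

class NonSeekable:
    """Fake stream that, like many HTTP responses, cannot seek."""
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data
    def seek(self, pos):
        raise io.UnsupportedOperation('seek')

fp = ensure_seekable(NonSeekable(b'hello world'), mode='rb')
fp.seek(0)  # now legal: the payload lives in a BytesIO buffer
print(fp.read())  # b'hello world'
```

Note the cost: the whole payload is read into memory up front, which is the trade-off the original function accepts for seekability.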
def join (self, timeout=None): """Blocks until all items in the Queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks. """ with self.all_tasks_done: if timeout is None: while self.unfinished_tasks: self.all_tasks_done.wait() else: if timeout < 0: raise ValueError("'timeout' must be a positive number") endtime = _time() + timeout while self.unfinished_tasks: remaining = endtime - _time() if remaining <= 0.0: raise Timeout() self.all_tasks_done.wait(remaining)
Blocks until all items in the Queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
Below is the instruction that describes the task: ### Input: Blocks until all items in the Queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks. ### Response: def join (self, timeout=None): """Blocks until all items in the Queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks. """ with self.all_tasks_done: if timeout is None: while self.unfinished_tasks: self.all_tasks_done.wait() else: if timeout < 0: raise ValueError("'timeout' must be a positive number") endtime = _time() + timeout while self.unfinished_tasks: remaining = endtime - _time() if remaining <= 0.0: raise Timeout() self.all_tasks_done.wait(remaining)
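The `join(timeout=...)` row above only depends on `unfinished_tasks` and the `all_tasks_done` condition variable. A self-contained sketch of that mechanism follows; `MiniQueue` and `Timeout` are illustrative names, not part of the source library:

```python
import threading
import time

class Timeout(Exception):
    pass

class MiniQueue:
    """Just enough state to demonstrate join(timeout=...)."""
    def __init__(self):
        self.unfinished_tasks = 0
        self.all_tasks_done = threading.Condition()

    def put(self, item=None):
        with self.all_tasks_done:
            self.unfinished_tasks += 1

    def task_done(self):
        with self.all_tasks_done:
            self.unfinished_tasks -= 1
            if self.unfinished_tasks == 0:
                self.all_tasks_done.notify_all()

    def join(self, timeout=None):
        with self.all_tasks_done:
            if timeout is None:
                while self.unfinished_tasks:
                    self.all_tasks_done.wait()
            else:
                if timeout < 0:
                    raise ValueError("'timeout' must be a positive number")
                endtime = time.monotonic() + timeout
                while self.unfinished_tasks:
                    remaining = endtime - time.monotonic()
                    if remaining <= 0.0:
                        raise Timeout()
                    self.all_tasks_done.wait(remaining)

q = MiniQueue()
q.put()
threading.Timer(0.05, q.task_done).start()  # consumer finishes shortly
q.join(timeout=1.0)                         # returns once task_done() fires
```

The recomputed `remaining` on each loop iteration guards against spurious wakeups eating into the deadline.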
def live_processes(self): """Return a list of the live processes. Returns: A list of the live processes. """ result = [] for process_type, process_infos in self.all_processes.items(): for process_info in process_infos: if process_info.process.poll() is None: result.append((process_type, process_info.process)) return result
Return a list of the live processes. Returns: A list of the live processes.
Below is the instruction that describes the task: ### Input: Return a list of the live processes. Returns: A list of the live processes. ### Response: def live_processes(self): """Return a list of the live processes. Returns: A list of the live processes. """ result = [] for process_type, process_infos in self.all_processes.items(): for process_info in process_infos: if process_info.process.poll() is None: result.append((process_type, process_info.process)) return result
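The filter in `live_processes` relies only on `Popen.poll()` returning `None` while a process is running. A sketch with stub objects (the `StubProcess`/`StubInfo` names are invented for this illustration) shows the same selection logic:

```python
class StubProcess:
    """Mimics subprocess.Popen.poll(): None while running, exit code after."""
    def __init__(self, returncode):
        self._rc = returncode
    def poll(self):
        return self._rc

class StubInfo:
    def __init__(self, process):
        self.process = process

def live_processes(all_processes):
    result = []
    for process_type, process_infos in all_processes.items():
        for info in process_infos:
            if info.process.poll() is None:  # still running
                result.append((process_type, info.process))
    return result

procs = {"worker": [StubInfo(StubProcess(None)), StubInfo(StubProcess(0))]}
print(live_processes(procs))  # keeps only the still-running process
```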
def get_auth_params_from_request(request): """Extracts properties needed by novaclient call from the request object. These will be used to memoize the calls to novaclient. """ return ( request.user.username, request.user.token.id, request.user.tenant_id, request.user.token.project.get('domain_id'), base.url_for(request, 'compute'), base.url_for(request, 'identity') )
Extracts properties needed by novaclient call from the request object. These will be used to memoize the calls to novaclient.
Below is the instruction that describes the task: ### Input: Extracts properties needed by novaclient call from the request object. These will be used to memoize the calls to novaclient. ### Response: def get_auth_params_from_request(request): """Extracts properties needed by novaclient call from the request object. These will be used to memoize the calls to novaclient. """ return ( request.user.username, request.user.token.id, request.user.tenant_id, request.user.token.project.get('domain_id'), base.url_for(request, 'compute'), base.url_for(request, 'identity') )
def _guess_vc(self): """ Locate Visual C for 2017 """ if self.vc_ver <= 14.0: return default = r'VC\Tools\MSVC' guess_vc = os.path.join(self.VSInstallDir, default) # Subdir with VC exact version as name try: vc_exact_ver = os.listdir(guess_vc)[-1] return os.path.join(guess_vc, vc_exact_ver) except (OSError, IOError, IndexError): pass
Locate Visual C for 2017
Below is the instruction that describes the task: ### Input: Locate Visual C for 2017 ### Response: def _guess_vc(self): """ Locate Visual C for 2017 """ if self.vc_ver <= 14.0: return default = r'VC\Tools\MSVC' guess_vc = os.path.join(self.VSInstallDir, default) # Subdir with VC exact version as name try: vc_exact_ver = os.listdir(guess_vc)[-1] return os.path.join(guess_vc, vc_exact_ver) except (OSError, IOError, IndexError): pass
def commit_input_confirmed(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") commit = ET.Element("commit") config = commit input = ET.SubElement(commit, "input") confirmed = ET.SubElement(input, "confirmed") callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def commit_input_confirmed(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") commit = ET.Element("commit") config = commit input = ET.SubElement(commit, "input") confirmed = ET.SubElement(input, "confirmed") callback = kwargs.pop('callback', self._callback) return callback(config)
def read_ssh_config(path): """ Read ssh config file and return parsed SshConfig """ with open(path, "r") as fh_: lines = fh_.read().splitlines() return SshConfig(lines)
Read ssh config file and return parsed SshConfig
Below is the instruction that describes the task: ### Input: Read ssh config file and return parsed SshConfig ### Response: def read_ssh_config(path): """ Read ssh config file and return parsed SshConfig """ with open(path, "r") as fh_: lines = fh_.read().splitlines() return SshConfig(lines)
def get_info(brain_or_object, endpoint=None, complete=False): """Extract the data from the catalog brain or object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :param complete: Flag to wake up the object and fetch all data :type complete: bool :returns: Data mapping for the object/catalog brain :rtype: dict """ # also extract the brain data for objects if not is_brain(brain_or_object): brain_or_object = get_brain(brain_or_object) if brain_or_object is None: logger.warn("Couldn't find/fetch brain of {}".format(brain_or_object)) return {} complete = True # When querying uid catalog we have to be sure that we skip the objects # used to relate two or more objects if is_relationship_object(brain_or_object): logger.warn("Skipping relationship object {}".format(repr(brain_or_object))) return {} # extract the data from the initial object with the proper adapter info = IInfo(brain_or_object).to_dict() # update with url info (always included) url_info = get_url_info(brain_or_object, endpoint) info.update(url_info) # include the parent url info parent = get_parent_info(brain_or_object) info.update(parent) # add the complete data of the object if requested # -> requires to wake up the object if it is a catalog brain if complete: # ensure we have a full content object obj = api.get_object(brain_or_object) # get the compatible adapter adapter = IInfo(obj) # update the data set with the complete information info.update(adapter.to_dict()) # update the data set with the workflow information # -> only possible if `?complete=yes&workflow=yes` if req.get_workflow(False): info.update(get_workflow_info(obj)) # # add sharing data if the user requested it # # -> only possible if `?complete=yes` # if req.get_sharing(False): # sharing = get_sharing_info(obj) # info.update({"sharing": sharing}) return info
Extract the data from the catalog brain or object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :param complete: Flag to wake up the object and fetch all data :type complete: bool :returns: Data mapping for the object/catalog brain :rtype: dict
Below is the instruction that describes the task: ### Input: Extract the data from the catalog brain or object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :param complete: Flag to wake up the object and fetch all data :type complete: bool :returns: Data mapping for the object/catalog brain :rtype: dict ### Response: def get_info(brain_or_object, endpoint=None, complete=False): """Extract the data from the catalog brain or object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :param complete: Flag to wake up the object and fetch all data :type complete: bool :returns: Data mapping for the object/catalog brain :rtype: dict """ # also extract the brain data for objects if not is_brain(brain_or_object): brain_or_object = get_brain(brain_or_object) if brain_or_object is None: logger.warn("Couldn't find/fetch brain of {}".format(brain_or_object)) return {} complete = True # When querying uid catalog we have to be sure that we skip the objects # used to relate two or more objects if is_relationship_object(brain_or_object): logger.warn("Skipping relationship object {}".format(repr(brain_or_object))) return {} # extract the data from the initial object with the proper adapter info = IInfo(brain_or_object).to_dict() # update with url info (always included) url_info = get_url_info(brain_or_object, endpoint) info.update(url_info) # include the parent url info parent = get_parent_info(brain_or_object) info.update(parent) # add the complete data of the object if requested # -> requires to wake up the object if it is a catalog brain if complete: # ensure we have a full content object obj = api.get_object(brain_or_object) # get the compatible adapter adapter = IInfo(obj) # update the data set with the complete information info.update(adapter.to_dict()) # update the data set with the workflow information # -> only possible if `?complete=yes&workflow=yes` if req.get_workflow(False): info.update(get_workflow_info(obj)) # # add sharing data if the user requested it # # -> only possible if `?complete=yes` # if req.get_sharing(False): # sharing = get_sharing_info(obj) # info.update({"sharing": sharing}) return info
def make_spatialmap_source(name, Spatial_Filename, spectrum): """Construct and return a `fermipy.roi_model.Source` object """ data = dict(Spatial_Filename=Spatial_Filename, ra=0.0, dec=0.0, SpatialType='SpatialMap', Source_Name=name) if spectrum is not None: data.update(spectrum) return roi_model.Source(name, data)
Construct and return a `fermipy.roi_model.Source` object
Below is the instruction that describes the task: ### Input: Construct and return a `fermipy.roi_model.Source` object ### Response: def make_spatialmap_source(name, Spatial_Filename, spectrum): """Construct and return a `fermipy.roi_model.Source` object """ data = dict(Spatial_Filename=Spatial_Filename, ra=0.0, dec=0.0, SpatialType='SpatialMap', Source_Name=name) if spectrum is not None: data.update(spectrum) return roi_model.Source(name, data)
def injectable( name=None, autocall=True, cache=False, cache_scope=_CS_FOREVER, memoize=False): """ Decorates functions that will be injected into other functions. Decorator version of `add_injectable`. Name defaults to name of function. The function's argument names and keyword argument values will be matched to registered variables when the function needs to be evaluated by Orca. The argument name "iter_var" may be used to have the current iteration variable injected. """ def decorator(func): if name: n = name else: n = func.__name__ add_injectable( n, func, autocall=autocall, cache=cache, cache_scope=cache_scope, memoize=memoize) return func return decorator
Decorates functions that will be injected into other functions. Decorator version of `add_injectable`. Name defaults to name of function. The function's argument names and keyword argument values will be matched to registered variables when the function needs to be evaluated by Orca. The argument name "iter_var" may be used to have the current iteration variable injected.
Below is the instruction that describes the task: ### Input: Decorates functions that will be injected into other functions. Decorator version of `add_injectable`. Name defaults to name of function. The function's argument names and keyword argument values will be matched to registered variables when the function needs to be evaluated by Orca. The argument name "iter_var" may be used to have the current iteration variable injected. ### Response: def injectable( name=None, autocall=True, cache=False, cache_scope=_CS_FOREVER, memoize=False): """ Decorates functions that will be injected into other functions. Decorator version of `add_injectable`. Name defaults to name of function. The function's argument names and keyword argument values will be matched to registered variables when the function needs to be evaluated by Orca. The argument name "iter_var" may be used to have the current iteration variable injected. """ def decorator(func): if name: n = name else: n = func.__name__ add_injectable( n, func, autocall=autocall, cache=cache, cache_scope=cache_scope, memoize=memoize) return func return decorator
def merge_upwards_if_smaller_than(self, small_size, a_or_u): """After prune_if_smaller_than is run, we may still have excess nodes. For example, with a small_size of 609710690: 7 /* 28815419 /data/* 32 /data/srv/* 925746 /data/srv/docker.bak/* 12 /data/srv/docker.bak/shared/* 682860348 /data/srv/docker.bak/shared/standalone/* This is reduced to: 31147487 /* 682860355 /data/srv/docker.bak/shared/standalone/* Run this only when done with the scanning.""" # Assert that we're not messing things up. prev_app_size = self.app_size() prev_use_size = self.use_size() small_nodes = self._find_small_nodes(small_size, (), a_or_u) for node, parents in small_nodes: # Check immediate grandparent for isdir=None and if it # exists, move this there. The isdir=None node is always # last. if len(parents) >= 2: tail = parents[-2]._nodes[-1] if tail._isdir is None: assert tail._app_size is not None, tail tail._add_size(node.app_size(), node.use_size()) parents[-1]._nodes.remove(node) assert len(parents[-1]._nodes) # The actual assertion. assert prev_app_size == self.app_size(), ( prev_app_size, self.app_size()) assert prev_use_size == self.use_size(), ( prev_use_size, self.use_size())
After prune_if_smaller_than is run, we may still have excess nodes. For example, with a small_size of 609710690: 7 /* 28815419 /data/* 32 /data/srv/* 925746 /data/srv/docker.bak/* 12 /data/srv/docker.bak/shared/* 682860348 /data/srv/docker.bak/shared/standalone/* This is reduced to: 31147487 /* 682860355 /data/srv/docker.bak/shared/standalone/* Run this only when done with the scanning.
Below is the instruction that describes the task: ### Input: After prune_if_smaller_than is run, we may still have excess nodes. For example, with a small_size of 609710690: 7 /* 28815419 /data/* 32 /data/srv/* 925746 /data/srv/docker.bak/* 12 /data/srv/docker.bak/shared/* 682860348 /data/srv/docker.bak/shared/standalone/* This is reduced to: 31147487 /* 682860355 /data/srv/docker.bak/shared/standalone/* Run this only when done with the scanning. ### Response: def merge_upwards_if_smaller_than(self, small_size, a_or_u): """After prune_if_smaller_than is run, we may still have excess nodes. For example, with a small_size of 609710690: 7 /* 28815419 /data/* 32 /data/srv/* 925746 /data/srv/docker.bak/* 12 /data/srv/docker.bak/shared/* 682860348 /data/srv/docker.bak/shared/standalone/* This is reduced to: 31147487 /* 682860355 /data/srv/docker.bak/shared/standalone/* Run this only when done with the scanning.""" # Assert that we're not messing things up. prev_app_size = self.app_size() prev_use_size = self.use_size() small_nodes = self._find_small_nodes(small_size, (), a_or_u) for node, parents in small_nodes: # Check immediate grandparent for isdir=None and if it # exists, move this there. The isdir=None node is always # last. if len(parents) >= 2: tail = parents[-2]._nodes[-1] if tail._isdir is None: assert tail._app_size is not None, tail tail._add_size(node.app_size(), node.use_size()) parents[-1]._nodes.remove(node) assert len(parents[-1]._nodes) # The actual assertion. assert prev_app_size == self.app_size(), ( prev_app_size, self.app_size()) assert prev_use_size == self.use_size(), ( prev_use_size, self.use_size())
def providers_count(df): """ Returns total occurrences of each provider in the database. """ providers_count = {} cnpj_array = df.values for a in cnpj_array: cnpj = a[0] occurrences = providers_count.get(cnpj, 0) providers_count[cnpj] = occurrences + 1 return pd.DataFrame.from_dict(providers_count, orient='index')
Returns total occurrences of each provider in the database.
Below is the instruction that describes the task: ### Input: Returns total occurrences of each provider in the database. ### Response: def providers_count(df): """ Returns total occurrences of each provider in the database. """ providers_count = {} cnpj_array = df.values for a in cnpj_array: cnpj = a[0] occurrences = providers_count.get(cnpj, 0) providers_count[cnpj] = occurrences + 1 return pd.DataFrame.from_dict(providers_count, orient='index')
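The loop in `providers_count` is a plain occurrence tally over the first field of each row; the pandas wrapping is incidental. A pandas-free sketch of the same counting logic, using invented sample rows:

```python
def providers_count(rows):
    """Count occurrences of each provider (first field of each row)."""
    counts = {}
    for row in rows:
        cnpj = row[0]
        counts[cnpj] = counts.get(cnpj, 0) + 1
    return counts

# Hypothetical sample data: (cnpj, payload) tuples.
rows = [("11.111", "a"), ("22.222", "b"), ("11.111", "c")]
print(providers_count(rows))  # -> {'11.111': 2, '22.222': 1}
```

The original then hands this dict to `pd.DataFrame.from_dict(..., orient='index')` so each provider becomes a row indexed by its CNPJ.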
def load_figure(d, new_fig=True): """Create a figure from what is returned by :meth:`inspect_figure`""" import matplotlib.pyplot as plt subplotpars = d.pop('subplotpars', None) if subplotpars is not None: subplotpars.pop('validate', None) subplotpars = mfig.SubplotParams(**subplotpars) if new_fig: nums = plt.get_fignums() if d.get('num') in nums: d['num'] = next( i for i in range(max(plt.get_fignums()) + 1, 0, -1) if i not in nums) return plt.figure(subplotpars=subplotpars, **d)
Create a figure from what is returned by :meth:`inspect_figure`
Below is the instruction that describes the task: ### Input: Create a figure from what is returned by :meth:`inspect_figure` ### Response: def load_figure(d, new_fig=True): """Create a figure from what is returned by :meth:`inspect_figure`""" import matplotlib.pyplot as plt subplotpars = d.pop('subplotpars', None) if subplotpars is not None: subplotpars.pop('validate', None) subplotpars = mfig.SubplotParams(**subplotpars) if new_fig: nums = plt.get_fignums() if d.get('num') in nums: d['num'] = next( i for i in range(max(plt.get_fignums()) + 1, 0, -1) if i not in nums) return plt.figure(subplotpars=subplotpars, **d)
def check_ups_estimated_minutes_remaining(the_session, the_helper, the_snmp_value): """ OID .1.3.6.1.2.1.33.1.2.3.0 MIB excerpt An estimate of the time to battery charge depletion under the present load conditions if the utility power is off and remains off, or if it were to be lost and remain off. """ the_helper.add_metric( label=the_helper.options.type, value=the_snmp_value, uom="minutes") the_helper.set_summary("Remaining runtime on battery is {} minutes".format(the_snmp_value))
OID .1.3.6.1.2.1.33.1.2.3.0 MIB excerpt An estimate of the time to battery charge depletion under the present load conditions if the utility power is off and remains off, or if it were to be lost and remain off.
Below is the instruction that describes the task: ### Input: OID .1.3.6.1.2.1.33.1.2.3.0 MIB excerpt An estimate of the time to battery charge depletion under the present load conditions if the utility power is off and remains off, or if it were to be lost and remain off. ### Response: def check_ups_estimated_minutes_remaining(the_session, the_helper, the_snmp_value): """ OID .1.3.6.1.2.1.33.1.2.3.0 MIB excerpt An estimate of the time to battery charge depletion under the present load conditions if the utility power is off and remains off, or if it were to be lost and remain off. """ the_helper.add_metric( label=the_helper.options.type, value=the_snmp_value, uom="minutes") the_helper.set_summary("Remaining runtime on battery is {} minutes".format(the_snmp_value))
def package_releases(self, package, url_fmt=lambda u: u): """List all versions of a package Along with the version, the caller also receives the file list with all the available formats. """ return [{ 'name': package, 'version': version, 'urls': [self.get_urlhash(f, url_fmt) for f in files] } for version, files in self.storage.get(package, {}).items()]
List all versions of a package Along with the version, the caller also receives the file list with all the available formats.
Below is the instruction that describes the task: ### Input: List all versions of a package Along with the version, the caller also receives the file list with all the available formats. ### Response: def package_releases(self, package, url_fmt=lambda u: u): """List all versions of a package Along with the version, the caller also receives the file list with all the available formats. """ return [{ 'name': package, 'version': version, 'urls': [self.get_urlhash(f, url_fmt) for f in files] } for version, files in self.storage.get(package, {}).items()]
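The `package_releases` row assumes a storage layout of `{package: {version: [files]}}`. A standalone sketch makes that shape explicit; the inner `urlhash` helper is a hypothetical stand-in for the class's `get_urlhash`:

```python
def package_releases(storage, package, url_fmt=lambda u: u):
    """List every version of a package with its file entries."""
    def urlhash(f, fmt):  # stand-in for self.get_urlhash
        return {"url": fmt(f)}
    return [
        {"name": package, "version": version,
         "urls": [urlhash(f, url_fmt) for f in files]}
        for version, files in storage.get(package, {}).items()
    ]

# Invented sample storage for illustration.
storage = {"mypkg": {"1.0": ["mypkg-1.0.tar.gz", "mypkg-1.0-py3-none-any.whl"]}}
releases = package_releases(storage, "mypkg")
print(releases[0]["version"])  # -> 1.0
```

`storage.get(package, {})` means an unknown package yields an empty list rather than raising.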
def from_buffer(string, config_path=None): ''' Detects MIME type of the buffered content :param string: buffered content whose type needs to be detected :return: ''' status, response = callServer('put', ServerEndpoint, '/detect/stream', string, {'Accept': 'text/plain'}, False, config_path=config_path) return response
Detects MIME type of the buffered content :param string: buffered content whose type needs to be detected :return:
Below is the instruction that describes the task: ### Input: Detects MIME type of the buffered content :param string: buffered content whose type needs to be detected :return: ### Response: def from_buffer(string, config_path=None): ''' Detects MIME type of the buffered content :param string: buffered content whose type needs to be detected :return: ''' status, response = callServer('put', ServerEndpoint, '/detect/stream', string, {'Accept': 'text/plain'}, False, config_path=config_path) return response
def _dusty_vm_exists(): """We use VBox directly instead of Docker Machine because it shaves about 0.5 seconds off the runtime of this check.""" existing_vms = check_output_demoted(['VBoxManage', 'list', 'vms']) for line in existing_vms.splitlines(): if '"{}"'.format(constants.VM_MACHINE_NAME) in line: return True return False
We use VBox directly instead of Docker Machine because it shaves about 0.5 seconds off the runtime of this check.
Below is the instruction that describes the task: ### Input: We use VBox directly instead of Docker Machine because it shaves about 0.5 seconds off the runtime of this check. ### Response: def _dusty_vm_exists(): """We use VBox directly instead of Docker Machine because it shaves about 0.5 seconds off the runtime of this check.""" existing_vms = check_output_demoted(['VBoxManage', 'list', 'vms']) for line in existing_vms.splitlines(): if '"{}"'.format(constants.VM_MACHINE_NAME) in line: return True return False
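The `_dusty_vm_exists` check boils down to scanning `VBoxManage list vms` output for the quoted VM name. The same matching logic can be tested against canned output; the sample output lines and the `VM_MACHINE_NAME` value below are invented for illustration:

```python
VM_MACHINE_NAME = "dusty"  # assumed value of constants.VM_MACHINE_NAME

def vm_exists(vbox_output):
    """Return True if the named VM appears in `VBoxManage list vms` output."""
    for line in vbox_output.splitlines():
        if '"{}"'.format(VM_MACHINE_NAME) in line:
            return True
    return False

sample = '"default" {1111}\n"dusty" {2222}'
print(vm_exists(sample))  # -> True
```

Matching the name with its surrounding quotes avoids a false positive on VMs whose names merely start with the target string.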
def metrics(self): """ Set of metrics for this model """ from vel.metrics.loss_metric import Loss from vel.metrics.accuracy import Accuracy return [Loss(), Accuracy()]
Set of metrics for this model
Below is the instruction that describes the task: ### Input: Set of metrics for this model ### Response: def metrics(self): """ Set of metrics for this model """ from vel.metrics.loss_metric import Loss from vel.metrics.accuracy import Accuracy return [Loss(), Accuracy()]
def add_modifier_on_derived_quantities(self, new_quantity, func, *quantities): """ Deprecated. Use `add_derived_quantity` instead. """ warnings.warn("Use `add_derived_quantity` instead.", DeprecationWarning) self.add_derived_quantity(new_quantity, func, *quantities)
Deprecated. Use `add_derived_quantity` instead.
Below is the instruction that describes the task: ### Input: Deprecated. Use `add_derived_quantity` instead. ### Response: def add_modifier_on_derived_quantities(self, new_quantity, func, *quantities): """ Deprecated. Use `add_derived_quantity` instead. """ warnings.warn("Use `add_derived_quantity` instead.", DeprecationWarning) self.add_derived_quantity(new_quantity, func, *quantities)
def rename_columns(self, rename_dict): """ Renames the columns :param rename_dict: dict where the keys are the current column names and the values are the new names :return: nothing """ if not all([x in self._columns for x in rename_dict.keys()]): raise ValueError('all dictionary keys must be in current columns') for current in rename_dict.keys(): self._columns[self._columns.index(current)] = rename_dict[current]
Renames the columns :param rename_dict: dict where the keys are the current column names and the values are the new names :return: nothing
Below is the instruction that describes the task: ### Input: Renames the columns :param rename_dict: dict where the keys are the current column names and the values are the new names :return: nothing ### Response: def rename_columns(self, rename_dict): """ Renames the columns :param rename_dict: dict where the keys are the current column names and the values are the new names :return: nothing """ if not all([x in self._columns for x in rename_dict.keys()]): raise ValueError('all dictionary keys must be in current columns') for current in rename_dict.keys(): self._columns[self._columns.index(current)] = rename_dict[current]
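The `rename_columns` row mutates a `_columns` list in place after validating every key. A minimal sketch with a hypothetical `Frame` holder demonstrates both the rename and the validation error:

```python
class Frame:
    """Minimal holder mirroring the _columns attribute used above."""
    def __init__(self, columns):
        self._columns = list(columns)

    def rename_columns(self, rename_dict):
        # Validate all keys up front so a bad dict leaves columns untouched.
        if not all(x in self._columns for x in rename_dict):
            raise ValueError('all dictionary keys must be in current columns')
        for current, new in rename_dict.items():
            self._columns[self._columns.index(current)] = new

f = Frame(["a", "b", "c"])
f.rename_columns({"b": "beta"})
print(f._columns)  # -> ['a', 'beta', 'c']
```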
def request(self, filter=None, with_defaults=None): """Retrieve running configuration and device state information. *filter* specifies the portion of the configuration to retrieve (by default entire configuration is retrieved) *with_defaults* defines an explicit method of retrieving default values from the configuration (see RFC 6243) :seealso: :ref:`filter_params` """ node = new_ele("get") if filter is not None: node.append(util.build_filter(filter)) if with_defaults is not None: self._assert(":with-defaults") _append_with_defaults_mode( node, with_defaults, self._session.server_capabilities, ) return self._request(node)
Retrieve running configuration and device state information. *filter* specifies the portion of the configuration to retrieve (by default entire configuration is retrieved) *with_defaults* defines an explicit method of retrieving default values from the configuration (see RFC 6243) :seealso: :ref:`filter_params`
Below is the instruction that describes the task: ### Input: Retrieve running configuration and device state information. *filter* specifies the portion of the configuration to retrieve (by default entire configuration is retrieved) *with_defaults* defines an explicit method of retrieving default values from the configuration (see RFC 6243) :seealso: :ref:`filter_params` ### Response: def request(self, filter=None, with_defaults=None): """Retrieve running configuration and device state information. *filter* specifies the portion of the configuration to retrieve (by default entire configuration is retrieved) *with_defaults* defines an explicit method of retrieving default values from the configuration (see RFC 6243) :seealso: :ref:`filter_params` """ node = new_ele("get") if filter is not None: node.append(util.build_filter(filter)) if with_defaults is not None: self._assert(":with-defaults") _append_with_defaults_mode( node, with_defaults, self._session.server_capabilities, ) return self._request(node)
def get_data( dataset, query=None, crs="epsg:4326", bounds=None, sortby=None, pagesize=10000, max_workers=5, ): """Get GeoJSON featurecollection from DataBC WFS """ param_dicts = define_request(dataset, query, crs, bounds, sortby, pagesize) with ThreadPoolExecutor(max_workers=max_workers) as executor: results = executor.map(make_request, param_dicts) outjson = dict(type="FeatureCollection", features=[]) for result in results: outjson["features"] += result return outjson
Get GeoJSON featurecollection from DataBC WFS
Below is the instruction that describes the task: ### Input: Get GeoJSON featurecollection from DataBC WFS ### Response: def get_data( dataset, query=None, crs="epsg:4326", bounds=None, sortby=None, pagesize=10000, max_workers=5, ): """Get GeoJSON featurecollection from DataBC WFS """ param_dicts = define_request(dataset, query, crs, bounds, sortby, pagesize) with ThreadPoolExecutor(max_workers=max_workers) as executor: results = executor.map(make_request, param_dicts) outjson = dict(type="FeatureCollection", features=[]) for result in results: outjson["features"] += result return outjson
def images(self): """ This method returns the listing image. :return: """ try: uls = self._ad_page_content.find( "ul", {"class": "smi-gallery-list"}) except Exception as e: if self._debug: logging.error( "Error getting images. Error message: " + e.args[0]) return images = [] if uls is None: return for li in uls.find_all('li'): if li.find('img')['src']: images.append(li.find('img')['src']) return images
This method returns the listing image. :return:
Below is the instruction that describes the task: ### Input: This method returns the listing image. :return: ### Response: def images(self): """ This method returns the listing image. :return: """ try: uls = self._ad_page_content.find( "ul", {"class": "smi-gallery-list"}) except Exception as e: if self._debug: logging.error( "Error getting images. Error message: " + e.args[0]) return images = [] if uls is None: return for li in uls.find_all('li'): if li.find('img')['src']: images.append(li.find('img')['src']) return images
def array(self, directories): """Return multiline string for simple array jobs over *directories*. .. Warning:: The string is in ``bash`` and hence the template must also be ``bash`` (and *not* ``csh`` or ``sh``). """ if not self.has_arrays(): raise NotImplementedError('Not known how make array jobs for ' 'queuing system %(name)s' % vars(self)) hrule = '#'+60*'-' lines = [ '', hrule, '# job array:', self.array_flag(directories), hrule, '# directories for job tasks', 'declare -a jobdirs'] for i,dirname in enumerate(asiterable(directories)): idx = i+1 # job array indices are 1-based lines.append('jobdirs[{idx:d}]={dirname!r}'.format(**vars())) lines.extend([ '# Switch to the current tasks directory:', 'wdir="${{jobdirs[${{{array_variable!s}}}]}}"'.format(**vars(self)), 'cd "$wdir" || { echo "ERROR: failed to enter $wdir."; exit 1; }', hrule, '' ]) return "\n".join(lines)
Return multiline string for simple array jobs over *directories*. .. Warning:: The string is in ``bash`` and hence the template must also be ``bash`` (and *not* ``csh`` or ``sh``).
Below is the instruction that describes the task: ### Input: Return multiline string for simple array jobs over *directories*. .. Warning:: The string is in ``bash`` and hence the template must also be ``bash`` (and *not* ``csh`` or ``sh``). ### Response: def array(self, directories): """Return multiline string for simple array jobs over *directories*. .. Warning:: The string is in ``bash`` and hence the template must also be ``bash`` (and *not* ``csh`` or ``sh``). """ if not self.has_arrays(): raise NotImplementedError('Not known how make array jobs for ' 'queuing system %(name)s' % vars(self)) hrule = '#'+60*'-' lines = [ '', hrule, '# job array:', self.array_flag(directories), hrule, '# directories for job tasks', 'declare -a jobdirs'] for i,dirname in enumerate(asiterable(directories)): idx = i+1 # job array indices are 1-based lines.append('jobdirs[{idx:d}]={dirname!r}'.format(**vars())) lines.extend([ '# Switch to the current tasks directory:', 'wdir="${{jobdirs[${{{array_variable!s}}}]}}"'.format(**vars(self)), 'cd "$wdir" || { echo "ERROR: failed to enter $wdir."; exit 1; }', hrule, '' ]) return "\n".join(lines)
def selected(self): """Action to be executed when a valid item has been selected""" EditableComboBox.selected(self) self.open_dir.emit(self.currentText())
Action to be executed when a valid item has been selected
Below is the instruction that describes the task: ### Input: Action to be executed when a valid item has been selected ### Response: def selected(self): """Action to be executed when a valid item has been selected""" EditableComboBox.selected(self) self.open_dir.emit(self.currentText())
def open_gwf(filename, mode='r'): """Open a filename for reading or writing GWF format data Parameters ---------- filename : `str` the path to read from, or write to mode : `str`, optional either ``'r'`` (read) or ``'w'`` (write) Returns ------- `LDAStools.frameCPP.IFrameFStream` the input frame stream (if `mode='r'`), or `LDAStools.frameCPP.IFrameFStream` the output frame stream (if `mode='w'`) """ if mode not in ('r', 'w'): raise ValueError("mode must be either 'r' or 'w'") from LDAStools import frameCPP filename = urlparse(filename).path # strip file://localhost or similar if mode == 'r': return frameCPP.IFrameFStream(str(filename)) return frameCPP.OFrameFStream(str(filename))
Open a filename for reading or writing GWF format data Parameters ---------- filename : `str` the path to read from, or write to mode : `str`, optional either ``'r'`` (read) or ``'w'`` (write) Returns ------- `LDAStools.frameCPP.IFrameFStream` the input frame stream (if `mode='r'`), or `LDAStools.frameCPP.IFrameFStream` the output frame stream (if `mode='w'`)
Below is the instruction that describes the task: ### Input: Open a filename for reading or writing GWF format data Parameters ---------- filename : `str` the path to read from, or write to mode : `str`, optional either ``'r'`` (read) or ``'w'`` (write) Returns ------- `LDAStools.frameCPP.IFrameFStream` the input frame stream (if `mode='r'`), or `LDAStools.frameCPP.IFrameFStream` the output frame stream (if `mode='w'`) ### Response: def open_gwf(filename, mode='r'): """Open a filename for reading or writing GWF format data Parameters ---------- filename : `str` the path to read from, or write to mode : `str`, optional either ``'r'`` (read) or ``'w'`` (write) Returns ------- `LDAStools.frameCPP.IFrameFStream` the input frame stream (if `mode='r'`), or `LDAStools.frameCPP.IFrameFStream` the output frame stream (if `mode='w'`) """ if mode not in ('r', 'w'): raise ValueError("mode must be either 'r' or 'w'") from LDAStools import frameCPP filename = urlparse(filename).path # strip file://localhost or similar if mode == 'r': return frameCPP.IFrameFStream(str(filename)) return frameCPP.OFrameFStream(str(filename))
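An illustrative aside (not from the source library): the `urlparse(filename).path` step in the record above strips a `file://localhost` prefix while leaving plain paths untouched, which is what lets `open_gwf` accept either form. The filename below is made up:

```python
from urllib.parse import urlparse  # Python 3 location of urlparse

# A file:// URL loses its scheme and host; a bare path passes through unchanged.
print(urlparse("file://localhost/data/example.gwf").path)  # /data/example.gwf
print(urlparse("/data/example.gwf").path)                  # /data/example.gwf
```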
def runExperimentPool(numSequences, numFeatures, numLocations, numObjects, numWorkers=7, nTrials=1, seqLength=10, figure="", numRepetitions=1, synPermProximalDecL2=[0.001], minThresholdProximalL2=[10], sampleSizeProximalL2=[15], inputSize=[1024], basalPredictedSegmentDecrement=[0.0006], resultsName="convergence_results.pkl"): """ Run a bunch of experiments using a pool of numWorkers multiple processes. For numSequences, numFeatures, and numLocations pass in a list containing valid values for that parameter. The cross product of everything is run, and each combination is run nTrials times. Returns a list of dict containing detailed results from each experiment. Also pickles and saves all the results in resultsName for later analysis. If numWorkers == 1, the experiments will be run in a single thread. This makes it easier to debug. Example: results = runExperimentPool( numSequences=[10, 20], numFeatures=[5, 13], numWorkers=8, nTrials=5) """ # Create function arguments for every possibility args = [] for bd in basalPredictedSegmentDecrement: for i in inputSize: for thresh in minThresholdProximalL2: for dec in synPermProximalDecL2: for s in sampleSizeProximalL2: for o in reversed(numSequences): for l in numLocations: for f in numFeatures: for no in numObjects: for t in range(nTrials): args.append( {"numSequences": o, "numFeatures": f, "numObjects": no, "trialNum": t, "seqLength": seqLength, "numLocations": l, "sampleSizeProximalL2": s, "synPermProximalDecL2": dec, "minThresholdProximalL2": thresh, "numRepetitions": numRepetitions, "figure": figure, "inputSize": i, "basalPredictedSegmentDecrement": bd, } ) print "{} experiments to run, {} workers".format(len(args), numWorkers) # Run the pool if numWorkers > 1: pool = Pool(processes=numWorkers) result = pool.map(runExperiment, args) else: result = [] for arg in args: result.append(runExperiment(arg)) # Pickle results for later use with open(resultsName,"wb") as f: cPickle.dump(result,f) return result
Run a bunch of experiments using a pool of numWorkers multiple processes. For numSequences, numFeatures, and numLocations pass in a list containing valid values for that parameter. The cross product of everything is run, and each combination is run nTrials times. Returns a list of dict containing detailed results from each experiment. Also pickles and saves all the results in resultsName for later analysis. If numWorkers == 1, the experiments will be run in a single thread. This makes it easier to debug. Example: results = runExperimentPool( numSequences=[10, 20], numFeatures=[5, 13], numWorkers=8, nTrials=5)
Below is the instruction that describes the task: ### Input: Run a bunch of experiments using a pool of numWorkers multiple processes. For numSequences, numFeatures, and numLocations pass in a list containing valid values for that parameter. The cross product of everything is run, and each combination is run nTrials times. Returns a list of dict containing detailed results from each experiment. Also pickles and saves all the results in resultsName for later analysis. If numWorkers == 1, the experiments will be run in a single thread. This makes it easier to debug. Example: results = runExperimentPool( numSequences=[10, 20], numFeatures=[5, 13], numWorkers=8, nTrials=5) ### Response: def runExperimentPool(numSequences, numFeatures, numLocations, numObjects, numWorkers=7, nTrials=1, seqLength=10, figure="", numRepetitions=1, synPermProximalDecL2=[0.001], minThresholdProximalL2=[10], sampleSizeProximalL2=[15], inputSize=[1024], basalPredictedSegmentDecrement=[0.0006], resultsName="convergence_results.pkl"): """ Run a bunch of experiments using a pool of numWorkers multiple processes. For numSequences, numFeatures, and numLocations pass in a list containing valid values for that parameter. The cross product of everything is run, and each combination is run nTrials times. Returns a list of dict containing detailed results from each experiment. Also pickles and saves all the results in resultsName for later analysis. If numWorkers == 1, the experiments will be run in a single thread. This makes it easier to debug. Example: results = runExperimentPool( numSequences=[10, 20], numFeatures=[5, 13], numWorkers=8, nTrials=5) """ # Create function arguments for every possibility args = [] for bd in basalPredictedSegmentDecrement: for i in inputSize: for thresh in minThresholdProximalL2: for dec in synPermProximalDecL2: for s in sampleSizeProximalL2: for o in reversed(numSequences): for l in numLocations: for f in numFeatures: for no in numObjects: for t in range(nTrials): args.append( {"numSequences": o, "numFeatures": f, "numObjects": no, "trialNum": t, "seqLength": seqLength, "numLocations": l, "sampleSizeProximalL2": s, "synPermProximalDecL2": dec, "minThresholdProximalL2": thresh, "numRepetitions": numRepetitions, "figure": figure, "inputSize": i, "basalPredictedSegmentDecrement": bd, } ) print "{} experiments to run, {} workers".format(len(args), numWorkers) # Run the pool if numWorkers > 1: pool = Pool(processes=numWorkers) result = pool.map(runExperiment, args) else: result = [] for arg in args: result.append(runExperiment(arg)) # Pickle results for later use with open(resultsName,"wb") as f: cPickle.dump(result,f) return result
def set_country_map(cfrom, cto, name=None, replace=True): ''' Set a mapping between a country code to another code ''' global _country_maps cdb = countries() cfrom = str(cfrom).upper() c = cdb.get(cfrom) if c: if name: c = name cto = str(cto).upper() if cto in cdb: raise CountryError('Country %s already in database' % cto) cdb[cto] = c _country_maps[cfrom] = cto ccys = currencydb() cccys = countryccys() ccy = cccys[cfrom] cccys[cto] = ccy # If set, remove cfrom from database if replace: ccy = ccys.get(ccy) ccy.default_country = cto cdb.pop(cfrom) cccys.pop(cfrom) else: raise CountryError('Country %s not in database' % c)
Set a mapping between a country code to another code
Below is the instruction that describes the task: ### Input: Set a mapping between a country code to another code ### Response: def set_country_map(cfrom, cto, name=None, replace=True): ''' Set a mapping between a country code to another code ''' global _country_maps cdb = countries() cfrom = str(cfrom).upper() c = cdb.get(cfrom) if c: if name: c = name cto = str(cto).upper() if cto in cdb: raise CountryError('Country %s already in database' % cto) cdb[cto] = c _country_maps[cfrom] = cto ccys = currencydb() cccys = countryccys() ccy = cccys[cfrom] cccys[cto] = ccy # If set, remove cfrom from database if replace: ccy = ccys.get(ccy) ccy.default_country = cto cdb.pop(cfrom) cccys.pop(cfrom) else: raise CountryError('Country %s not in database' % c)
def _reassign_misplaced_members(binding): """Apply misplaced members from `binding` to Qt.py Arguments: binding (dict): Misplaced members """ for src, dst in _misplaced_members[binding].items(): src_module, src_member = src.split(".") dst_module, dst_member = dst.split(".") try: src_object = getattr(Qt, dst_module) except AttributeError: # Skip reassignment of non-existing members. # This can happen if a request was made to # rename a member that didn't exist, for example # if QtWidgets isn't available on the target platform. continue dst_value = getattr(getattr(Qt, "_" + src_module), src_member) setattr( src_object, dst_member, dst_value )
Apply misplaced members from `binding` to Qt.py Arguments: binding (dict): Misplaced members
Below is the instruction that describes the task: ### Input: Apply misplaced members from `binding` to Qt.py Arguments: binding (dict): Misplaced members ### Response: def _reassign_misplaced_members(binding): """Apply misplaced members from `binding` to Qt.py Arguments: binding (dict): Misplaced members """ for src, dst in _misplaced_members[binding].items(): src_module, src_member = src.split(".") dst_module, dst_member = dst.split(".") try: src_object = getattr(Qt, dst_module) except AttributeError: # Skip reassignment of non-existing members. # This can happen if a request was made to # rename a member that didn't exist, for example # if QtWidgets isn't available on the target platform. continue dst_value = getattr(getattr(Qt, "_" + src_module), src_member) setattr( src_object, dst_member, dst_value )
def _get_multiparts(response): """ From this 'multipart/parallel; boundary="874e43d27ec6d83f30f37841bdaf90c7"; charset=utf-8' get this --874e43d27ec6d83f30f37841bdaf90c7 """ boundary = None for part in response.headers.get('Content-Type', '').split(';'): if 'boundary=' in part: boundary = '--{}'.format(part.split('=', 1)[1].strip('\"')) break if not boundary: raise ParseError("Was not able to find the boundary between objects in a multipart response") if response.content is None: return [] response_string = response.content if six.PY3: # Python3 returns bytes, decode for string operations response_string = response_string.decode('latin-1') # help bad responses be more multipart compliant whole_body = response_string.strip('\r\n') no_front_boundary = whole_body.strip(boundary) # The boundary comes with some characters multi_parts = [] for part in no_front_boundary.split(boundary): multi_parts.append(part.strip('\r\n')) return multi_parts
From this 'multipart/parallel; boundary="874e43d27ec6d83f30f37841bdaf90c7"; charset=utf-8' get this --874e43d27ec6d83f30f37841bdaf90c7
Below is the instruction that describes the task: ### Input: From this 'multipart/parallel; boundary="874e43d27ec6d83f30f37841bdaf90c7"; charset=utf-8' get this --874e43d27ec6d83f30f37841bdaf90c7 ### Response: def _get_multiparts(response): """ From this 'multipart/parallel; boundary="874e43d27ec6d83f30f37841bdaf90c7"; charset=utf-8' get this --874e43d27ec6d83f30f37841bdaf90c7 """ boundary = None for part in response.headers.get('Content-Type', '').split(';'): if 'boundary=' in part: boundary = '--{}'.format(part.split('=', 1)[1].strip('\"')) break if not boundary: raise ParseError("Was not able to find the boundary between objects in a multipart response") if response.content is None: return [] response_string = response.content if six.PY3: # Python3 returns bytes, decode for string operations response_string = response_string.decode('latin-1') # help bad responses be more multipart compliant whole_body = response_string.strip('\r\n') no_front_boundary = whole_body.strip(boundary) # The boundary comes with some characters multi_parts = [] for part in no_front_boundary.split(boundary): multi_parts.append(part.strip('\r\n')) return multi_parts
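A self-contained sketch (illustrative names, plain strings instead of a response object) of the boundary parsing and splitting done by `_get_multiparts` above. Note that `str.strip(boundary)` treats the boundary as a *set of characters*, not a substring; the sketch keeps that quirk from the original:

```python
def split_multipart(content_type, body):
    """Re-create the boundary handling from _get_multiparts on plain strings."""
    boundary = None
    for part in content_type.split(";"):
        if "boundary=" in part:
            boundary = "--{}".format(part.split("=", 1)[1].strip('"'))
            break
    if boundary is None:
        raise ValueError("no boundary in Content-Type")
    whole_body = body.strip("\r\n")
    # str.strip strips any leading/trailing chars drawn from the boundary's
    # character set, which happens to remove the framing boundaries here
    no_front_boundary = whole_body.strip(boundary)
    return [p.strip("\r\n") for p in no_front_boundary.split(boundary)]

ct = 'multipart/parallel; boundary="abc"; charset=utf-8'
body = "--abc\r\nfirst\r\n--abc\r\nsecond\r\n--abc--"
print(split_multipart(ct, body))  # ['first', 'second']
```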
def get_followers(self): """ :calls: `GET /user/followers <http://developer.github.com/v3/users/followers>`_ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser` """ return github.PaginatedList.PaginatedList( github.NamedUser.NamedUser, self._requester, "/user/followers", None )
:calls: `GET /user/followers <http://developer.github.com/v3/users/followers>`_ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`
Below is the instruction that describes the task: ### Input: :calls: `GET /user/followers <http://developer.github.com/v3/users/followers>`_ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser` ### Response: def get_followers(self): """ :calls: `GET /user/followers <http://developer.github.com/v3/users/followers>`_ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser` """ return github.PaginatedList.PaginatedList( github.NamedUser.NamedUser, self._requester, "/user/followers", None )
def getSQLQuery(self, count = False) : "Returns the query without performing it. If count, the query returned will be a SELECT COUNT() instead of a SELECT" sqlFilters = [] sqlValues = [] # print self.filters for f in self.filters : filt = [] for k, vv in f.iteritems() : if type(vv) is types.ListType or type(vv) is types.TupleType : sqlValues.extend(vv) kk = 'OR %s ? '%k * len(vv) kk = "(%s)" % kk[3:] else : kk = k sqlValues.append(vv) filt.append(kk) sqlFilters.append('(%s ?)' % ' ? AND '.join(filt)) if len(sqlValues) > stp.SQLITE_LIMIT_VARIABLE_NUMBER : raise ValueError("""The limit number of parameters imposed by sqlite is %s. You will have to break your query into several smaller one. Sorry about that. (actual number of parameters is: %s)""" % (stp.SQLITE_LIMIT_VARIABLE_NUMBER, len(sqlValues))) sqlFilters =' OR '.join(sqlFilters) if len(self.tables) < 2 : tablesStr = self.rabaClass.__name__ else : tablesStr = ', '.join(self.tables) if len(sqlFilters) == 0 : sqlFilters = '1' if count : sql = 'SELECT COUNT(*) FROM %s WHERE %s' % (tablesStr, sqlFilters) else : sql = 'SELECT %s.raba_id FROM %s WHERE %s' % (self.rabaClass.__name__, tablesStr, sqlFilters) return (sql, sqlValues)
Returns the query without performing it. If count, the query returned will be a SELECT COUNT() instead of a SELECT
Below is the instruction that describes the task: ### Input: Returns the query without performing it. If count, the query returned will be a SELECT COUNT() instead of a SELECT ### Response: def getSQLQuery(self, count = False) : "Returns the query without performing it. If count, the query returned will be a SELECT COUNT() instead of a SELECT" sqlFilters = [] sqlValues = [] # print self.filters for f in self.filters : filt = [] for k, vv in f.iteritems() : if type(vv) is types.ListType or type(vv) is types.TupleType : sqlValues.extend(vv) kk = 'OR %s ? '%k * len(vv) kk = "(%s)" % kk[3:] else : kk = k sqlValues.append(vv) filt.append(kk) sqlFilters.append('(%s ?)' % ' ? AND '.join(filt)) if len(sqlValues) > stp.SQLITE_LIMIT_VARIABLE_NUMBER : raise ValueError("""The limit number of parameters imposed by sqlite is %s. You will have to break your query into several smaller one. Sorry about that. (actual number of parameters is: %s)""" % (stp.SQLITE_LIMIT_VARIABLE_NUMBER, len(sqlValues))) sqlFilters =' OR '.join(sqlFilters) if len(self.tables) < 2 : tablesStr = self.rabaClass.__name__ else : tablesStr = ', '.join(self.tables) if len(sqlFilters) == 0 : sqlFilters = '1' if count : sql = 'SELECT COUNT(*) FROM %s WHERE %s' % (tablesStr, sqlFilters) else : sql = 'SELECT %s.raba_id FROM %s WHERE %s' % (self.rabaClass.__name__, tablesStr, sqlFilters) return (sql, sqlValues)
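The `kk = 'OR %s ? '%k * len(vv)` line in the record above leans on operator precedence: `%` formatting and `*` repetition bind at the same level, left to right, so the string is formatted first and then repeated; `kk[3:]` then drops the leading `'OR '`. A stand-alone sketch of that placeholder trick (function name is illustrative):

```python
def in_clause(column, n):
    """Build '(col ? OR col ? ... )' with one placeholder per value."""
    kk = ("OR %s ? " % column) * n  # format first, then repeat n times
    return "(%s)" % kk[3:]         # kk[3:] drops the leading 'OR '

print(in_clause("name", 2))  # (name ? OR name ? )
```

The trailing space before the closing parenthesis is also present in the original's output.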
def compute_convex_hull(feed: "Feed") -> Polygon: """ Return a Shapely Polygon representing the convex hull formed by the stops of the given Feed. """ m = sg.MultiPoint(feed.stops[["stop_lon", "stop_lat"]].values) return m.convex_hull
Return a Shapely Polygon representing the convex hull formed by the stops of the given Feed.
Below is the instruction that describes the task: ### Input: Return a Shapely Polygon representing the convex hull formed by the stops of the given Feed. ### Response: def compute_convex_hull(feed: "Feed") -> Polygon: """ Return a Shapely Polygon representing the convex hull formed by the stops of the given Feed. """ m = sg.MultiPoint(feed.stops[["stop_lon", "stop_lat"]].values) return m.convex_hull
def document_url(self): """ Constructs and returns the document URL. :returns: Document URL """ if '_id' not in self or self['_id'] is None: return None # handle design document url if self['_id'].startswith('_design/'): return '/'.join(( self._database_host, url_quote_plus(self._database_name), '_design', url_quote(self['_id'][8:], safe='') )) # handle document url return '/'.join(( self._database_host, url_quote_plus(self._database_name), url_quote(self['_id'], safe='') ))
Constructs and returns the document URL. :returns: Document URL
Below is the instruction that describes the task: ### Input: Constructs and returns the document URL. :returns: Document URL ### Response: def document_url(self): """ Constructs and returns the document URL. :returns: Document URL """ if '_id' not in self or self['_id'] is None: return None # handle design document url if self['_id'].startswith('_design/'): return '/'.join(( self._database_host, url_quote_plus(self._database_name), '_design', url_quote(self['_id'][8:], safe='') )) # handle document url return '/'.join(( self._database_host, url_quote_plus(self._database_name), url_quote(self['_id'], safe='') ))
def solid(name=None, inputs=None, outputs=None, config_field=None, description=None): '''(decorator) Create a solid with specified parameters. This shortcut simplifies the core solid API by exploding arguments into kwargs of the transform function and omitting additional parameters when they are not needed. Parameters are otherwise as in the core API, :py:class:`SolidDefinition`. The decorated function will be used as the solid's transform function. Unlike in the core API, the transform function does not have to yield :py:class:`Result` object directly. Several simpler alternatives are available: 1. Return a value. This is returned as a :py:class:`Result` for a single output solid. 2. Return a :py:class:`Result`. Works like yielding result. 3. Return an instance of :py:class:`MultipleResults`. Works like yielding several results for multiple outputs. Useful for solids that have multiple outputs. 4. Yield :py:class:`Result`. Same as default transform behaviour. Args: name (str): Name of solid. inputs (list[InputDefinition]): List of inputs. outputs (list[OutputDefinition]): List of outputs. config_field (Field): The configuration for this solid. description (str): Description of this solid. Examples: .. code-block:: python @solid def hello_world(_context): print('hello') @solid() def hello_world(_context): print('hello') @solid(outputs=[OutputDefinition()]) def hello_world(_context): return {'foo': 'bar'} @solid(outputs=[OutputDefinition()]) def hello_world(_context): return Result(value={'foo': 'bar'}) @solid(outputs=[OutputDefinition()]) def hello_world(_context): yield Result(value={'foo': 'bar'}) @solid(outputs=[ OutputDefinition(name="left"), OutputDefinition(name="right"), ]) def hello_world(_context): return MultipleResults.from_dict({ 'left': {'foo': 'left'}, 'right': {'foo': 'right'}, }) @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()] ) def hello_world(_context, foo): return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], ) def hello_world(context, foo): context.log.info('log something') return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], config_field=Field(types.Dict({'str_value' : Field(types.String)})), ) def hello_world(context, foo): # context.solid_config is a dictionary with 'str_value' key return foo + context.solid_config['str_value'] ''' # This case is for when decorator is used bare, without arguments. e.g. @solid versus @solid() if callable(name): check.invariant(inputs is None) check.invariant(outputs is None) check.invariant(description is None) check.invariant(config_field is None) return _Solid()(name) return _Solid( name=name, inputs=inputs, outputs=outputs, config_field=config_field, description=description, )
(decorator) Create a solid with specified parameters. This shortcut simplifies the core solid API by exploding arguments into kwargs of the transform function and omitting additional parameters when they are not needed. Parameters are otherwise as in the core API, :py:class:`SolidDefinition`. The decorated function will be used as the solid's transform function. Unlike in the core API, the transform function does not have to yield :py:class:`Result` object directly. Several simpler alternatives are available: 1. Return a value. This is returned as a :py:class:`Result` for a single output solid. 2. Return a :py:class:`Result`. Works like yielding result. 3. Return an instance of :py:class:`MultipleResults`. Works like yielding several results for multiple outputs. Useful for solids that have multiple outputs. 4. Yield :py:class:`Result`. Same as default transform behaviour. Args: name (str): Name of solid. inputs (list[InputDefinition]): List of inputs. outputs (list[OutputDefinition]): List of outputs. config_field (Field): The configuration for this solid. description (str): Description of this solid. Examples: .. code-block:: python @solid def hello_world(_context): print('hello') @solid() def hello_world(_context): print('hello') @solid(outputs=[OutputDefinition()]) def hello_world(_context): return {'foo': 'bar'} @solid(outputs=[OutputDefinition()]) def hello_world(_context): return Result(value={'foo': 'bar'}) @solid(outputs=[OutputDefinition()]) def hello_world(_context): yield Result(value={'foo': 'bar'}) @solid(outputs=[ OutputDefinition(name="left"), OutputDefinition(name="right"), ]) def hello_world(_context): return MultipleResults.from_dict({ 'left': {'foo': 'left'}, 'right': {'foo': 'right'}, }) @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()] ) def hello_world(_context, foo): return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], ) def hello_world(context, foo): context.log.info('log something') return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], config_field=Field(types.Dict({'str_value' : Field(types.String)})), ) def hello_world(context, foo): # context.solid_config is a dictionary with 'str_value' key return foo + context.solid_config['str_value']
Below is the instruction that describes the task: ### Input: (decorator) Create a solid with specified parameters. This shortcut simplifies the core solid API by exploding arguments into kwargs of the transform function and omitting additional parameters when they are not needed. Parameters are otherwise as in the core API, :py:class:`SolidDefinition`. The decorated function will be used as the solid's transform function. Unlike in the core API, the transform function does not have to yield :py:class:`Result` object directly. Several simpler alternatives are available: 1. Return a value. This is returned as a :py:class:`Result` for a single output solid. 2. Return a :py:class:`Result`. Works like yielding result. 3. Return an instance of :py:class:`MultipleResults`. Works like yielding several results for multiple outputs. Useful for solids that have multiple outputs. 4. Yield :py:class:`Result`. Same as default transform behaviour. Args: name (str): Name of solid. inputs (list[InputDefinition]): List of inputs. outputs (list[OutputDefinition]): List of outputs. config_field (Field): The configuration for this solid. description (str): Description of this solid. Examples: .. code-block:: python @solid def hello_world(_context): print('hello') @solid() def hello_world(_context): print('hello') @solid(outputs=[OutputDefinition()]) def hello_world(_context): return {'foo': 'bar'} @solid(outputs=[OutputDefinition()]) def hello_world(_context): return Result(value={'foo': 'bar'}) @solid(outputs=[OutputDefinition()]) def hello_world(_context): yield Result(value={'foo': 'bar'}) @solid(outputs=[ OutputDefinition(name="left"), OutputDefinition(name="right"), ]) def hello_world(_context): return MultipleResults.from_dict({ 'left': {'foo': 'left'}, 'right': {'foo': 'right'}, }) @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()] ) def hello_world(_context, foo): return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], ) def hello_world(context, foo): context.log.info('log something') return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], config_field=Field(types.Dict({'str_value' : Field(types.String)})), ) def hello_world(context, foo): # context.solid_config is a dictionary with 'str_value' key return foo + context.solid_config['str_value'] ### Response: def solid(name=None, inputs=None, outputs=None, config_field=None, description=None): '''(decorator) Create a solid with specified parameters. This shortcut simplifies the core solid API by exploding arguments into kwargs of the transform function and omitting additional parameters when they are not needed. Parameters are otherwise as in the core API, :py:class:`SolidDefinition`. The decorated function will be used as the solid's transform function. Unlike in the core API, the transform function does not have to yield :py:class:`Result` object directly. Several simpler alternatives are available: 1. Return a value. This is returned as a :py:class:`Result` for a single output solid. 2. Return a :py:class:`Result`. Works like yielding result. 3. Return an instance of :py:class:`MultipleResults`. Works like yielding several results for multiple outputs. Useful for solids that have multiple outputs. 4. Yield :py:class:`Result`. Same as default transform behaviour. Args: name (str): Name of solid. inputs (list[InputDefinition]): List of inputs. outputs (list[OutputDefinition]): List of outputs. config_field (Field): The configuration for this solid. description (str): Description of this solid. Examples: .. code-block:: python @solid def hello_world(_context): print('hello') @solid() def hello_world(_context): print('hello') @solid(outputs=[OutputDefinition()]) def hello_world(_context): return {'foo': 'bar'} @solid(outputs=[OutputDefinition()]) def hello_world(_context): return Result(value={'foo': 'bar'}) @solid(outputs=[OutputDefinition()]) def hello_world(_context): yield Result(value={'foo': 'bar'}) @solid(outputs=[ OutputDefinition(name="left"), OutputDefinition(name="right"), ]) def hello_world(_context): return MultipleResults.from_dict({ 'left': {'foo': 'left'}, 'right': {'foo': 'right'}, }) @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()] ) def hello_world(_context, foo): return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], ) def hello_world(context, foo): context.log.info('log something') return foo @solid( inputs=[InputDefinition(name="foo")], outputs=[OutputDefinition()], config_field=Field(types.Dict({'str_value' : Field(types.String)})), ) def hello_world(context, foo): # context.solid_config is a dictionary with 'str_value' key return foo + context.solid_config['str_value'] ''' # This case is for when decorator is used bare, without arguments. e.g. @solid versus @solid() if callable(name): check.invariant(inputs is None) check.invariant(outputs is None) check.invariant(description is None) check.invariant(config_field is None) return _Solid()(name) return _Solid( name=name, inputs=inputs, outputs=outputs, config_field=config_field, description=description, )
def sync_close(self): """ Close synchronously. """ if self._closed: return while self._free: conn = self._free.popleft() if not conn.closed: # pragma: no cover conn.sync_close() for conn in self._used: if not conn.closed: # pragma: no cover conn.sync_close() self._terminated.add(conn) self._used.clear() self._closed = True
Close synchronously.
Below is the instruction that describes the task: ### Input: Close synchronously. ### Response: def sync_close(self): """ Close synchronously. """ if self._closed: return while self._free: conn = self._free.popleft() if not conn.closed: # pragma: no cover conn.sync_close() for conn in self._used: if not conn.closed: # pragma: no cover conn.sync_close() self._terminated.add(conn) self._used.clear() self._closed = True
def first(self, symbols): """Computes the intermediate FIRST set using symbols.""" ret = set() if EPSILON in symbols: return set([EPSILON]) for symbol in symbols: ret |= self._first[symbol] - set([EPSILON]) if EPSILON not in self._first[symbol]: break else: ret.add(EPSILON) return ret
Computes the intermediate FIRST set using symbols.
Below is the instruction that describes the task: ### Input: Computes the intermediate FIRST set using symbols. ### Response: def first(self, symbols): """Computes the intermediate FIRST set using symbols.""" ret = set() if EPSILON in symbols: return set([EPSILON]) for symbol in symbols: ret |= self._first[symbol] - set([EPSILON]) if EPSILON not in self._first[symbol]: break else: ret.add(EPSILON) return ret
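A stand-alone version of the same FIRST-set computation from the record above, including its `for`/`else` idiom: the `else` branch runs only when no symbol blocked epsilon, i.e. the whole sequence can vanish. The grammar and the `EPSILON` encoding below are illustrative, not from the source project:

```python
EPSILON = ""  # illustrative sentinel for the empty string

def first_of(first_sets, symbols):
    """FIRST of a symbol sequence, given each symbol's FIRST set."""
    if EPSILON in symbols:
        return {EPSILON}
    ret = set()
    for symbol in symbols:
        ret |= first_sets[symbol] - {EPSILON}
        if EPSILON not in first_sets[symbol]:
            break  # this symbol cannot vanish, so later symbols don't matter
    else:
        ret.add(EPSILON)  # every symbol can derive epsilon

    return ret

# Hypothetical grammar fragment: FIRST(A) = {a, eps}, FIRST(B) = {b}
first_sets = {"A": {"a", EPSILON}, "B": {"b"}}
print(first_of(first_sets, ["A", "B"]) == {"a", "b"})  # True
```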
def AddTableColumn(self, table, column): """Add column to table if it is not already there.""" if column not in self._table_columns[table]: self._table_columns[table].append(column)
Add column to table if it is not already there.
Below is the instruction that describes the task: ### Input: Add column to table if it is not already there. ### Response: def AddTableColumn(self, table, column): """Add column to table if it is not already there.""" if column not in self._table_columns[table]: self._table_columns[table].append(column)
def widgets(self): """Get the items.""" widgets = [] for i, chart in enumerate(most_visited_pages_charts()): widgets.append(Widget(html_id='most_visited_chart_%d' % i, content=json.dumps(chart), template='meerkat/widgets/highcharts.html', js_code=['plotOptions.tooltip.pointFormatter'])) return widgets
Get the items.
Below is the instruction that describes the task: ### Input: Get the items. ### Response: def widgets(self): """Get the items.""" widgets = [] for i, chart in enumerate(most_visited_pages_charts()): widgets.append(Widget(html_id='most_visited_chart_%d' % i, content=json.dumps(chart), template='meerkat/widgets/highcharts.html', js_code=['plotOptions.tooltip.pointFormatter'])) return widgets
def admin_authenticate(self, password): """ Authenticate the user using admin super privileges :param password: User's password :return: """ auth_params = { 'USERNAME': self.username, 'PASSWORD': password } self._add_secret_hash(auth_params, 'SECRET_HASH') tokens = self.client.admin_initiate_auth( UserPoolId=self.user_pool_id, ClientId=self.client_id, # AuthFlow='USER_SRP_AUTH'|'REFRESH_TOKEN_AUTH'|'REFRESH_TOKEN'|'CUSTOM_AUTH'|'ADMIN_NO_SRP_AUTH', AuthFlow='ADMIN_NO_SRP_AUTH', AuthParameters=auth_params, ) self.verify_token(tokens['AuthenticationResult']['IdToken'], 'id_token','id') self.refresh_token = tokens['AuthenticationResult']['RefreshToken'] self.verify_token(tokens['AuthenticationResult']['AccessToken'], 'access_token','access') self.token_type = tokens['AuthenticationResult']['TokenType']
Authenticate the user using admin super privileges :param password: User's password :return:
Below is the instruction that describes the task: ### Input: Authenticate the user using admin super privileges :param password: User's password :return: ### Response: def admin_authenticate(self, password): """ Authenticate the user using admin super privileges :param password: User's password :return: """ auth_params = { 'USERNAME': self.username, 'PASSWORD': password } self._add_secret_hash(auth_params, 'SECRET_HASH') tokens = self.client.admin_initiate_auth( UserPoolId=self.user_pool_id, ClientId=self.client_id, # AuthFlow='USER_SRP_AUTH'|'REFRESH_TOKEN_AUTH'|'REFRESH_TOKEN'|'CUSTOM_AUTH'|'ADMIN_NO_SRP_AUTH', AuthFlow='ADMIN_NO_SRP_AUTH', AuthParameters=auth_params, ) self.verify_token(tokens['AuthenticationResult']['IdToken'], 'id_token','id') self.refresh_token = tokens['AuthenticationResult']['RefreshToken'] self.verify_token(tokens['AuthenticationResult']['AccessToken'], 'access_token','access') self.token_type = tokens['AuthenticationResult']['TokenType']
def threshold_monitor_hidden_threshold_monitor_Cpu_actions(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") threshold_monitor_hidden = ET.SubElement(config, "threshold-monitor-hidden", xmlns="urn:brocade.com:mgmt:brocade-threshold-monitor") threshold_monitor = ET.SubElement(threshold_monitor_hidden, "threshold-monitor") Cpu = ET.SubElement(threshold_monitor, "Cpu") actions = ET.SubElement(Cpu, "actions") actions.text = kwargs.pop('actions') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def threshold_monitor_hidden_threshold_monitor_Cpu_actions(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") threshold_monitor_hidden = ET.SubElement(config, "threshold-monitor-hidden", xmlns="urn:brocade.com:mgmt:brocade-threshold-monitor") threshold_monitor = ET.SubElement(threshold_monitor_hidden, "threshold-monitor") Cpu = ET.SubElement(threshold_monitor, "Cpu") actions = ET.SubElement(Cpu, "actions") actions.text = kwargs.pop('actions') callback = kwargs.pop('callback', self._callback) return callback(config)
def simulate_experiment(self, modelparams, expparams, repeat=1): """ Produces data according to the given model parameters and experimental parameters, structured as a NumPy array. :param np.ndarray modelparams: A shape ``(n_models, n_modelparams)`` array of model parameter vectors describing the hypotheses under which data should be simulated. :param np.ndarray expparams: A shape ``(n_experiments, )`` array of experimental control settings, with ``dtype`` given by :attr:`~qinfer.Model.expparams_dtype`, describing the experiments whose outcomes should be simulated. :param int repeat: How many times the specified experiment should be repeated. :rtype: np.ndarray :return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition, ``j`` indexes which vector of model parameters was used, and where ``k`` indexes which experimental parameters were used. If ``repeat == 1``, ``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar datum is returned instead. """ self._sim_count += modelparams.shape[0] * expparams.shape[0] * repeat assert(self.are_expparam_dtypes_consistent(expparams))
Produces data according to the given model parameters and experimental parameters, structured as a NumPy array. :param np.ndarray modelparams: A shape ``(n_models, n_modelparams)`` array of model parameter vectors describing the hypotheses under which data should be simulated. :param np.ndarray expparams: A shape ``(n_experiments, )`` array of experimental control settings, with ``dtype`` given by :attr:`~qinfer.Model.expparams_dtype`, describing the experiments whose outcomes should be simulated. :param int repeat: How many times the specified experiment should be repeated. :rtype: np.ndarray :return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition, ``j`` indexes which vector of model parameters was used, and where ``k`` indexes which experimental parameters were used. If ``repeat == 1``, ``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar datum is returned instead.
Below is the instruction that describes the task: ### Input: Produces data according to the given model parameters and experimental parameters, structured as a NumPy array. :param np.ndarray modelparams: A shape ``(n_models, n_modelparams)`` array of model parameter vectors describing the hypotheses under which data should be simulated. :param np.ndarray expparams: A shape ``(n_experiments, )`` array of experimental control settings, with ``dtype`` given by :attr:`~qinfer.Model.expparams_dtype`, describing the experiments whose outcomes should be simulated. :param int repeat: How many times the specified experiment should be repeated. :rtype: np.ndarray :return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition, ``j`` indexes which vector of model parameters was used, and where ``k`` indexes which experimental parameters were used. If ``repeat == 1``, ``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar datum is returned instead. ### Response: def simulate_experiment(self, modelparams, expparams, repeat=1): """ Produces data according to the given model parameters and experimental parameters, structured as a NumPy array. :param np.ndarray modelparams: A shape ``(n_models, n_modelparams)`` array of model parameter vectors describing the hypotheses under which data should be simulated. :param np.ndarray expparams: A shape ``(n_experiments, )`` array of experimental control settings, with ``dtype`` given by :attr:`~qinfer.Model.expparams_dtype`, describing the experiments whose outcomes should be simulated. :param int repeat: How many times the specified experiment should be repeated. :rtype: np.ndarray :return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition, ``j`` indexes which vector of model parameters was used, and where ``k`` indexes which experimental parameters were used. If ``repeat == 1``, ``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar datum is returned instead. """ self._sim_count += modelparams.shape[0] * expparams.shape[0] * repeat assert(self.are_expparam_dtypes_consistent(expparams))
def get_image_file_hash(image_path): '''get_image_hash will return an md5 hash of the file based on a criteria level. :param level: one of LOW, MEDIUM, HIGH :param image_path: full path to the singularity image ''' hasher = hashlib.md5() with open(image_path, "rb") as f: for chunk in iter(lambda: f.read(4096), b""): hasher.update(chunk) return hasher.hexdigest()
get_image_hash will return an md5 hash of the file based on a criteria level. :param level: one of LOW, MEDIUM, HIGH :param image_path: full path to the singularity image
Below is the instruction that describes the task: ### Input: get_image_hash will return an md5 hash of the file based on a criteria level. :param level: one of LOW, MEDIUM, HIGH :param image_path: full path to the singularity image ### Response: def get_image_file_hash(image_path): '''get_image_hash will return an md5 hash of the file based on a criteria level. :param level: one of LOW, MEDIUM, HIGH :param image_path: full path to the singularity image ''' hasher = hashlib.md5() with open(image_path, "rb") as f: for chunk in iter(lambda: f.read(4096), b""): hasher.update(chunk) return hasher.hexdigest()
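The chunked md5 pattern in this entry works on any file; a self-contained sketch (a temporary file stands in for the Singularity image, and `file_md5` is a hypothetical name, not the original helper):

```python
import hashlib
import tempfile

def file_md5(path, chunk_size=4096):
    """Hash a file in fixed-size chunks so large images never load fully into memory."""
    hasher = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

# Usage: hash a small temporary file and cross-check against a one-shot digest.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello world" * 1000)
    path = tmp.name

assert file_md5(path) == hashlib.md5(b"hello world" * 1000).hexdigest()
```

The `iter(callable, sentinel)` form keeps reading until `f.read` returns the empty bytes sentinel, which is what bounds memory use.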
def _delete_extraneous_files(self): # type: (Downloader) -> None """Delete extraneous files cataloged :param Downloader self: this """ logger.info('attempting to delete {} extraneous files'.format( len(self._delete_after))) for file in self._delete_after: if self._general_options.dry_run: logger.info('[DRY RUN] deleting local file: {}'.format( file)) else: if self._general_options.verbose: logger.debug('deleting local file: {}'.format(file)) try: file.unlink() except OSError as e: logger.error('error deleting local file: {}'.format( str(e)))
Delete extraneous files cataloged :param Downloader self: this
Below is the instruction that describes the task: ### Input: Delete extraneous files cataloged :param Downloader self: this ### Response: def _delete_extraneous_files(self): # type: (Downloader) -> None """Delete extraneous files cataloged :param Downloader self: this """ logger.info('attempting to delete {} extraneous files'.format( len(self._delete_after))) for file in self._delete_after: if self._general_options.dry_run: logger.info('[DRY RUN] deleting local file: {}'.format( file)) else: if self._general_options.verbose: logger.debug('deleting local file: {}'.format(file)) try: file.unlink() except OSError as e: logger.error('error deleting local file: {}'.format( str(e)))
def commit(self, txnCount, stateRoot, txnRoot, ppTime) -> List: """ :param txnCount: The number of requests to commit (The actual requests are picked up from the uncommitted list from the ledger) :param stateRoot: The state trie root after the txns are committed :param txnRoot: The txn merkle root after the txns are committed :return: list of committed transactions """ return self._commit(self.ledger, self.state, txnCount, stateRoot, txnRoot, ppTime, ts_store=self.ts_store)
:param txnCount: The number of requests to commit (The actual requests are picked up from the uncommitted list from the ledger) :param stateRoot: The state trie root after the txns are committed :param txnRoot: The txn merkle root after the txns are committed :return: list of committed transactions
Below is the instruction that describes the task: ### Input: :param txnCount: The number of requests to commit (The actual requests are picked up from the uncommitted list from the ledger) :param stateRoot: The state trie root after the txns are committed :param txnRoot: The txn merkle root after the txns are committed :return: list of committed transactions ### Response: def commit(self, txnCount, stateRoot, txnRoot, ppTime) -> List: """ :param txnCount: The number of requests to commit (The actual requests are picked up from the uncommitted list from the ledger) :param stateRoot: The state trie root after the txns are committed :param txnRoot: The txn merkle root after the txns are committed :return: list of committed transactions """ return self._commit(self.ledger, self.state, txnCount, stateRoot, txnRoot, ppTime, ts_store=self.ts_store)
def refactor_create_inline(self, offset, only_this): """Inline the function call at point.""" refactor = create_inline(self.project, self.resource, offset) if only_this: return self._get_changes(refactor, remove=False, only_current=True) else: return self._get_changes(refactor, remove=True, only_current=False)
Inline the function call at point.
Below is the instruction that describes the task: ### Input: Inline the function call at point. ### Response: def refactor_create_inline(self, offset, only_this): """Inline the function call at point.""" refactor = create_inline(self.project, self.resource, offset) if only_this: return self._get_changes(refactor, remove=False, only_current=True) else: return self._get_changes(refactor, remove=True, only_current=False)
def solveAndNotify(proto, exercise): """The user at the given AMP protocol has solved the given exercise. This will log the solution and notify the user. """ exercise.solvedBy(proto.user) proto.callRemote(ce.NotifySolved, identifier=exercise.identifier, title=exercise.title)
The user at the given AMP protocol has solved the given exercise. This will log the solution and notify the user.
Below is the instruction that describes the task: ### Input: The user at the given AMP protocol has solved the given exercise. This will log the solution and notify the user. ### Response: def solveAndNotify(proto, exercise): """The user at the given AMP protocol has solved the given exercise. This will log the solution and notify the user. """ exercise.solvedBy(proto.user) proto.callRemote(ce.NotifySolved, identifier=exercise.identifier, title=exercise.title)
def schema(self): """ Returns the database schema as a string. """ c = self.conn.cursor() c.execute( ''' SELECT sql FROM sqlite_master ''') results = [] for i, in c: if i is not None: results.append(i) return '\n'.join(results)
Returns the database schema as a string.
Below is the instruction that describes the task: ### Input: Returns the database schema as a string. ### Response: def schema(self): """ Returns the database schema as a string. """ c = self.conn.cursor() c.execute( ''' SELECT sql FROM sqlite_master ''') results = [] for i, in c: if i is not None: results.append(i) return '\n'.join(results)
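The `sqlite_master` query above can be reproduced with the stdlib `sqlite3` module; a minimal sketch against an in-memory database (the `is not None` check mirrors the original, since some internal rows store `NULL` sql):

```python
import sqlite3

def schema(conn):
    """Return the database schema as a string, skipping rows whose sql is NULL."""
    cur = conn.execute("SELECT sql FROM sqlite_master")
    return "\n".join(sql for (sql,) in cur if sql is not None)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stops (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON stops (name)")
print(schema(conn))
```

`sqlite_master` stores each object's original `CREATE ...` statement verbatim, which is why joining the `sql` column yields a usable schema dump.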
def UCRTIncludes(self): """ Microsoft Universal C Runtime SDK Include """ if self.vc_ver < 14.0: return [] include = os.path.join(self.si.UniversalCRTSdkDir, 'include') return [os.path.join(include, '%sucrt' % self._ucrt_subdir)]
Microsoft Universal C Runtime SDK Include
Below is the instruction that describes the task: ### Input: Microsoft Universal C Runtime SDK Include ### Response: def UCRTIncludes(self): """ Microsoft Universal C Runtime SDK Include """ if self.vc_ver < 14.0: return [] include = os.path.join(self.si.UniversalCRTSdkDir, 'include') return [os.path.join(include, '%sucrt' % self._ucrt_subdir)]
def tag_path(cls, project, incident, tag): """Return a fully-qualified tag string.""" return google.api_core.path_template.expand( "projects/{project}/incidents/{incident}/tags/{tag}", project=project, incident=incident, tag=tag, )
Return a fully-qualified tag string.
Below is the instruction that describes the task: ### Input: Return a fully-qualified tag string. ### Response: def tag_path(cls, project, incident, tag): """Return a fully-qualified tag string.""" return google.api_core.path_template.expand( "projects/{project}/incidents/{incident}/tags/{tag}", project=project, incident=incident, tag=tag, )
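For a fixed template like this one, `google.api_core.path_template.expand` reduces to plain placeholder substitution; a dependency-free stand-in (not the library call itself, just the same resource-name shape):

```python
def tag_path(project, incident, tag):
    # Same resource-name shape as the API helper, built with plain str.format.
    return "projects/{project}/incidents/{incident}/tags/{tag}".format(
        project=project, incident=incident, tag=tag)

assert tag_path("p1", "inc-7", "sev1") == "projects/p1/incidents/inc-7/tags/sev1"
```

The library version additionally validates wildcard segments, which a bare `str.format` does not.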
def get_local_aws_session(): """Returns a session for the local instance, not for a remote account Returns: :obj:`boto3:boto3.session.Session` """ if not all((app_config.aws_api.access_key, app_config.aws_api.secret_key)): return boto3.session.Session() else: # If we are not running on an EC2 instance, assume the instance role # first, then assume the remote role session_args = [app_config.aws_api.access_key, app_config.aws_api.secret_key] if app_config.aws_api.session_token: session_args.append(app_config.aws_api.session_token) return boto3.session.Session(*session_args)
Returns a session for the local instance, not for a remote account Returns: :obj:`boto3:boto3.session.Session`
Below is the instruction that describes the task: ### Input: Returns a session for the local instance, not for a remote account Returns: :obj:`boto3:boto3.session.Session` ### Response: def get_local_aws_session(): """Returns a session for the local instance, not for a remote account Returns: :obj:`boto3:boto3.session.Session` """ if not all((app_config.aws_api.access_key, app_config.aws_api.secret_key)): return boto3.session.Session() else: # If we are not running on an EC2 instance, assume the instance role # first, then assume the remote role session_args = [app_config.aws_api.access_key, app_config.aws_api.secret_key] if app_config.aws_api.session_token: session_args.append(app_config.aws_api.session_token) return boto3.session.Session(*session_args)
def prepare_to_run(self, clock, period_count): """ Prepare the component for execution. :param clock: The clock containing the execution start time and execution period information. :param period_count: The total amount of periods this activity will be requested to be run for. """ for c in self.components: c.prepare_to_run(clock, period_count) for a in self.activities: a.prepare_to_run(clock, period_count)
Prepare the component for execution. :param clock: The clock containing the execution start time and execution period information. :param period_count: The total amount of periods this activity will be requested to be run for.
Below is the instruction that describes the task: ### Input: Prepare the component for execution. :param clock: The clock containing the execution start time and execution period information. :param period_count: The total amount of periods this activity will be requested to be run for. ### Response: def prepare_to_run(self, clock, period_count): """ Prepare the component for execution. :param clock: The clock containing the execution start time and execution period information. :param period_count: The total amount of periods this activity will be requested to be run for. """ for c in self.components: c.prepare_to_run(clock, period_count) for a in self.activities: a.prepare_to_run(clock, period_count)
def _generate_ids(self, document, content): """Generate unique ids for html elements in page content so that it's possible to link to them. """ existing_ids = content.xpath('//*/@id') elements = [ 'p', 'dl', 'dt', 'dd', 'table', 'div', 'section', 'figure', 'blockquote', 'q', 'code', 'pre', 'object', 'img', 'audio', 'video', ] elements_xpath = '|'.join(['.//{}|.//xhtml:{}'.format(elem, elem) for elem in elements]) data_types = [ 'equation', 'list', 'exercise', 'rule', 'example', 'note', 'footnote-number', 'footnote-ref', 'problem', 'solution', 'media', 'proof', 'statement', 'commentary' ] data_types_xpath = '|'.join(['.//*[@data-type="{}"]'.format(data_type) for data_type in data_types]) xpath = '|'.join([elements_xpath, data_types_xpath]) mapping = {} # old id -> new id for node in content.xpath(xpath, namespaces=HTML_DOCUMENT_NAMESPACES): old_id = node.attrib.get('id') document_id = document.id.replace('_', '') if old_id: new_id = 'auto_{}_{}'.format(document_id, old_id) else: random_number = random.randint(0, 100000) new_id = 'auto_{}_{}'.format(document_id, random_number) while new_id in existing_ids: random_number = random.randint(0, 100000) new_id = 'auto_{}_{}'.format(document_id, random_number) node.attrib['id'] = new_id if old_id: mapping[old_id] = new_id existing_ids.append(new_id) for a in content.xpath('//a[@href]|//xhtml:a[@href]', namespaces=HTML_DOCUMENT_NAMESPACES): href = a.attrib['href'] if href.startswith('#') and href[1:] in mapping: a.attrib['href'] = '#{}'.format(mapping[href[1:]])
Generate unique ids for html elements in page content so that it's possible to link to them.
Below is the instruction that describes the task: ### Input: Generate unique ids for html elements in page content so that it's possible to link to them. ### Response: def _generate_ids(self, document, content): """Generate unique ids for html elements in page content so that it's possible to link to them. """ existing_ids = content.xpath('//*/@id') elements = [ 'p', 'dl', 'dt', 'dd', 'table', 'div', 'section', 'figure', 'blockquote', 'q', 'code', 'pre', 'object', 'img', 'audio', 'video', ] elements_xpath = '|'.join(['.//{}|.//xhtml:{}'.format(elem, elem) for elem in elements]) data_types = [ 'equation', 'list', 'exercise', 'rule', 'example', 'note', 'footnote-number', 'footnote-ref', 'problem', 'solution', 'media', 'proof', 'statement', 'commentary' ] data_types_xpath = '|'.join(['.//*[@data-type="{}"]'.format(data_type) for data_type in data_types]) xpath = '|'.join([elements_xpath, data_types_xpath]) mapping = {} # old id -> new id for node in content.xpath(xpath, namespaces=HTML_DOCUMENT_NAMESPACES): old_id = node.attrib.get('id') document_id = document.id.replace('_', '') if old_id: new_id = 'auto_{}_{}'.format(document_id, old_id) else: random_number = random.randint(0, 100000) new_id = 'auto_{}_{}'.format(document_id, random_number) while new_id in existing_ids: random_number = random.randint(0, 100000) new_id = 'auto_{}_{}'.format(document_id, random_number) node.attrib['id'] = new_id if old_id: mapping[old_id] = new_id existing_ids.append(new_id) for a in content.xpath('//a[@href]|//xhtml:a[@href]', namespaces=HTML_DOCUMENT_NAMESPACES): href = a.attrib['href'] if href.startswith('#') and href[1:] in mapping: a.attrib['href'] = '#{}'.format(mapping[href[1:]])
def rolling_fltr(dem, f=np.nanmedian, size=3, circular=True, origmask=False): """General rolling filter (default operator is median filter) Can input any function f Efficient for smaller arrays, correctly handles NaN, fills gaps """ print("Applying rolling filter: %s with size %s" % (f.__name__, size)) dem = malib.checkma(dem) #Convert to float32 so we can fill with nan dem = dem.astype(np.float32) newshp = (dem.size, size*size) #Force a step size of 1 t = malib.sliding_window_padded(dem.filled(np.nan), (size, size), (1, 1)) if circular: if size > 3: mask = circular_mask(size) t[:,mask] = np.nan t = t.reshape(newshp) out = f(t, axis=1).reshape(dem.shape) out = np.ma.fix_invalid(out).astype(dem.dtype) out.set_fill_value(dem.fill_value) if origmask: out = np.ma.array(out, mask=np.ma.getmaskarray(dem)) return out
General rolling filter (default operator is median filter) Can input any function f Efficient for smaller arrays, correctly handles NaN, fills gaps
Below is the instruction that describes the task: ### Input: General rolling filter (default operator is median filter) Can input any function f Efficient for smaller arrays, correctly handles NaN, fills gaps ### Response: def rolling_fltr(dem, f=np.nanmedian, size=3, circular=True, origmask=False): """General rolling filter (default operator is median filter) Can input any function f Efficient for smaller arrays, correctly handles NaN, fills gaps """ print("Applying rolling filter: %s with size %s" % (f.__name__, size)) dem = malib.checkma(dem) #Convert to float32 so we can fill with nan dem = dem.astype(np.float32) newshp = (dem.size, size*size) #Force a step size of 1 t = malib.sliding_window_padded(dem.filled(np.nan), (size, size), (1, 1)) if circular: if size > 3: mask = circular_mask(size) t[:,mask] = np.nan t = t.reshape(newshp) out = f(t, axis=1).reshape(dem.shape) out = np.ma.fix_invalid(out).astype(dem.dtype) out.set_fill_value(dem.fill_value) if origmask: out = np.ma.array(out, mask=np.ma.getmaskarray(dem)) return out
def get_p_vals(self, X): ''' Imputes p-values from the Z-scores of `ScaledFScore` scores. Assuming incorrectly that the scaled f-scores are normally distributed. Parameters ---------- X : np.array Array of word counts, shape (N, 2) where N is the vocab size. X[:,0] is the positive class, while X[:,1] is the negative class. Returns ------- np.array of p-values ''' z_scores = ScaledFZScore(self.scaler_algo, self.beta).get_scores(X[:,0], X[:,1]) return norm.cdf(z_scores)
Imputes p-values from the Z-scores of `ScaledFScore` scores. Assuming incorrectly that the scaled f-scores are normally distributed. Parameters ---------- X : np.array Array of word counts, shape (N, 2) where N is the vocab size. X[:,0] is the positive class, while X[:,1] is the negative class. Returns ------- np.array of p-values
Below is the instruction that describes the task: ### Input: Imputes p-values from the Z-scores of `ScaledFScore` scores. Assuming incorrectly that the scaled f-scores are normally distributed. Parameters ---------- X : np.array Array of word counts, shape (N, 2) where N is the vocab size. X[:,0] is the positive class, while X[:,1] is the negative class. Returns ------- np.array of p-values ### Response: def get_p_vals(self, X): ''' Imputes p-values from the Z-scores of `ScaledFScore` scores. Assuming incorrectly that the scaled f-scores are normally distributed. Parameters ---------- X : np.array Array of word counts, shape (N, 2) where N is the vocab size. X[:,0] is the positive class, while X[:,1] is the negative class. Returns ------- np.array of p-values ''' z_scores = ScaledFZScore(self.scaler_algo, self.beta).get_scores(X[:,0], X[:,1]) return norm.cdf(z_scores)
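The final `norm.cdf` step maps each z-score to a cumulative probability; with only the stdlib, the standard normal CDF can be written via `math.erf` (a sketch of that last step only — the `ScaledFZScore` part is library-specific):

```python
import math

def norm_cdf(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# z = 0 sits at the median of the standard normal, so its CDF is exactly 0.5.
assert abs(norm_cdf(0.0) - 0.5) < 1e-12
# Large positive z-scores approach 1; large negative approach 0.
assert norm_cdf(3.0) > 0.99 and norm_cdf(-3.0) < 0.01
```

Applied elementwise over an array of z-scores, this reproduces `scipy.stats.norm.cdf` to floating-point accuracy.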
def get_random_available(self, max_iter=10000): ''' get a random key out of the first max_iter rows ''' c = 1 keeper = None ## note the ConsistencyLevel here. If we do not do this, and ## get all slick with things like column_count=0 and filter ## empty False, then we can get keys that were recently ## deleted... EVEN if the default consistency would seem to ## rule that out! ## note the random start key, so that we do not always hit the ## same place in the key range with all workers #random_key = hashlib.md5(str(random.random())).hexdigest() #random_key = '0' * 32 #logger.debug('available.get_range(%r)' % random_key) ## scratch that idea: turns out that using a random start key ## OR using row_count=1 can cause get_range to hang for hours ## why we need ConsistencyLevel.ALL on a single node is not ## clear, but experience indicates it is needed. ## note that putting a finite row_count is problematic in two ## ways: # 1) if there are more workers than max_iter, some will not # get tasks # # 2) if there are more than max_iter records, then all workers # have to wade through all of these just to get a task! What # we really want is a "pick random row" function, and that is # probably best implemented using CQL3 token function via the # cql python module instead of pycassa... for row in self._available.get_range(row_count=max_iter, read_consistency_level=pycassa.ConsistencyLevel.ALL): #for row in self._available.get_range(row_count=100): logger.debug('considering %r' % (row,)) if random.random() < 1 / c: keeper = row[0] if c == max_iter: break c += 1 return keeper
get a random key out of the first max_iter rows
Below is the instruction that describes the task: ### Input: get a random key out of the first max_iter rows ### Response: def get_random_available(self, max_iter=10000): ''' get a random key out of the first max_iter rows ''' c = 1 keeper = None ## note the ConsistencyLevel here. If we do not do this, and ## get all slick with things like column_count=0 and filter ## empty False, then we can get keys that were recently ## deleted... EVEN if the default consistency would seem to ## rule that out! ## note the random start key, so that we do not always hit the ## same place in the key range with all workers #random_key = hashlib.md5(str(random.random())).hexdigest() #random_key = '0' * 32 #logger.debug('available.get_range(%r)' % random_key) ## scratch that idea: turns out that using a random start key ## OR using row_count=1 can cause get_range to hang for hours ## why we need ConsistencyLevel.ALL on a single node is not ## clear, but experience indicates it is needed. ## note that putting a finite row_count is problematic in two ## ways: # 1) if there are more workers than max_iter, some will not # get tasks # # 2) if there are more than max_iter records, then all workers # have to wade through all of these just to get a task! What # we really want is a "pick random row" function, and that is # probably best implemented using CQL3 token function via the # cql python module instead of pycassa... for row in self._available.get_range(row_count=max_iter, read_consistency_level=pycassa.ConsistencyLevel.ALL): #for row in self._available.get_range(row_count=100): logger.debug('considering %r' % (row,)) if random.random() < 1 / c: keeper = row[0] if c == max_iter: break c += 1 return keeper
def MAE(x1, x2=-1): """ Mean absolute error - this function accepts two series of data or directly one series with error. **Args:** * `x1` - first data series or error (1d array) **Kwargs:** * `x2` - second series (1d array) if first series was not error directly,\\ then this should be the second series **Returns:** * `e` - MAE of error (float) obtained directly from `x1`, \\ or as a difference of `x1` and `x2` """ e = get_valid_error(x1, x2) return np.sum(np.abs(e)) / float(len(e))
Mean absolute error - this function accepts two series of data or directly one series with error. **Args:** * `x1` - first data series or error (1d array) **Kwargs:** * `x2` - second series (1d array) if first series was not error directly,\\ then this should be the second series **Returns:** * `e` - MAE of error (float) obtained directly from `x1`, \\ or as a difference of `x1` and `x2`
Below is the instruction that describes the task: ### Input: Mean absolute error - this function accepts two series of data or directly one series with error. **Args:** * `x1` - first data series or error (1d array) **Kwargs:** * `x2` - second series (1d array) if first series was not error directly,\\ then this should be the second series **Returns:** * `e` - MAE of error (float) obtained directly from `x1`, \\ or as a difference of `x1` and `x2` ### Response: def MAE(x1, x2=-1): """ Mean absolute error - this function accepts two series of data or directly one series with error. **Args:** * `x1` - first data series or error (1d array) **Kwargs:** * `x2` - second series (1d array) if first series was not error directly,\\ then this should be the second series **Returns:** * `e` - MAE of error (float) obtained directly from `x1`, \\ or as a difference of `x1` and `x2` """ e = get_valid_error(x1, x2) return np.sum(np.abs(e)) / float(len(e))
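When both series are given, the formula reduces to `mean(|x1 - x2|)`; a NumPy-free sketch (`get_valid_error` is not defined in this entry, so the error is formed directly from the two inputs):

```python
def mae(x1, x2):
    """Mean absolute error between two equal-length sequences."""
    if len(x1) != len(x2):
        raise ValueError("series must have the same length")
    return sum(abs(a - b) for a, b in zip(x1, x2)) / len(x1)

# Identical series give zero error; errors of 1 and 3 average to 2.
assert mae([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
assert abs(mae([0.0, 0.0], [1.0, -3.0]) - 2.0) < 1e-12
```

The original's single-argument mode (passing a precomputed error series as `x1`) would just skip the subtraction and average `|x1|`.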
def eol_distance_last(self, offset=0): """Return the amount of characters until the last newline.""" distance = 0 for char in reversed(self.string[:self.pos + offset]): if char == '\n': break else: distance += 1 return distance
Return the amount of characters until the last newline.
Below is the instruction that describes the task: ### Input: Return the amount of characters until the last newline. ### Response: def eol_distance_last(self, offset=0): """Return the amount of characters until the last newline.""" distance = 0 for char in reversed(self.string[:self.pos + offset]): if char == '\n': break else: distance += 1 return distance
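A standalone version of the backward scan above, with the string and position passed in explicitly rather than read from `self` (the result matches `str.rfind` on the same slice):

```python
def eol_distance_last(string, pos, offset=0):
    """Count characters between pos+offset and the last preceding newline."""
    distance = 0
    for char in reversed(string[:pos + offset]):
        if char == '\n':
            break
        distance += 1
    return distance

s = "abc\ndef"
print(eol_distance_last(s, len(s)))  # 'def' sits 3 characters after the last newline
```

When no newline precedes the position, the loop runs off the front of the slice and the distance equals the slice length.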
def correlation(args): """ %prog correlation postgenomic-s.tsv Plot correlation of age vs. postgenomic features. """ p = OptionParser(correlation.__doc__) opts, args, iopts = p.set_image_options(args, figsize="12x8") if len(args) != 1: sys.exit(not p.print_help()) tsvfile, = args df = pd.read_csv(tsvfile, sep="\t") composite_correlation(df, size=(iopts.w, iopts.h)) outfile = tsvfile.rsplit(".", 1)[0] + ".correlation.pdf" savefig(outfile)
%prog correlation postgenomic-s.tsv Plot correlation of age vs. postgenomic features.
Below is the instruction that describes the task: ### Input: %prog correlation postgenomic-s.tsv Plot correlation of age vs. postgenomic features. ### Response: def correlation(args): """ %prog correlation postgenomic-s.tsv Plot correlation of age vs. postgenomic features. """ p = OptionParser(correlation.__doc__) opts, args, iopts = p.set_image_options(args, figsize="12x8") if len(args) != 1: sys.exit(not p.print_help()) tsvfile, = args df = pd.read_csv(tsvfile, sep="\t") composite_correlation(df, size=(iopts.w, iopts.h)) outfile = tsvfile.rsplit(".", 1)[0] + ".correlation.pdf" savefig(outfile)
def get_html_desc(self, markdown_inst=None): """ Translates the enum's 'desc' property into HTML. Any RDLFormatCode tags used in the description are converted to HTML. The text is also fed through a Markdown processor. The additional Markdown processing allows designers the choice to use a more modern lightweight markup language as an alternative to SystemRDL's "RDLFormatCode". Parameters ---------- markdown_inst: ``markdown.Markdown`` Override the class instance of the Markdown processor. See the `Markdown module <https://python-markdown.github.io/reference/#Markdown>`_ for more details. Returns ------- str or None HTML formatted string. If node does not have a description, returns ``None`` """ desc_str = self._rdl_desc_ if desc_str is None: return None return rdlformatcode.rdlfc_to_html(desc_str, md=markdown_inst)
Translates the enum's 'desc' property into HTML. Any RDLFormatCode tags used in the description are converted to HTML. The text is also fed through a Markdown processor. The additional Markdown processing allows designers the choice to use a more modern lightweight markup language as an alternative to SystemRDL's "RDLFormatCode". Parameters ---------- markdown_inst: ``markdown.Markdown`` Override the class instance of the Markdown processor. See the `Markdown module <https://python-markdown.github.io/reference/#Markdown>`_ for more details. Returns ------- str or None HTML formatted string. If node does not have a description, returns ``None``
Below is the instruction that describes the task: ### Input: Translates the enum's 'desc' property into HTML. Any RDLFormatCode tags used in the description are converted to HTML. The text is also fed through a Markdown processor. The additional Markdown processing allows designers the choice to use a more modern lightweight markup language as an alternative to SystemRDL's "RDLFormatCode". Parameters ---------- markdown_inst: ``markdown.Markdown`` Override the class instance of the Markdown processor. See the `Markdown module <https://python-markdown.github.io/reference/#Markdown>`_ for more details. Returns ------- str or None HTML formatted string. If node does not have a description, returns ``None`` ### Response: def get_html_desc(self, markdown_inst=None): """ Translates the enum's 'desc' property into HTML. Any RDLFormatCode tags used in the description are converted to HTML. The text is also fed through a Markdown processor. The additional Markdown processing allows designers the choice to use a more modern lightweight markup language as an alternative to SystemRDL's "RDLFormatCode". Parameters ---------- markdown_inst: ``markdown.Markdown`` Override the class instance of the Markdown processor. See the `Markdown module <https://python-markdown.github.io/reference/#Markdown>`_ for more details. Returns ------- str or None HTML formatted string. If node does not have a description, returns ``None`` """ desc_str = self._rdl_desc_ if desc_str is None: return None return rdlformatcode.rdlfc_to_html(desc_str, md=markdown_inst)
def _parse_href(self, href): """ Extract "real" URL from Google redirected url by getting `q` querystring parameter. """ params = parse_qs(urlsplit(href).query) return params.get('q')
Extract "real" URL from Google redirected url by getting `q` querystring parameter.
Below is the instruction that describes the task: ### Input: Extract "real" URL from Google redirected url by getting `q` querystring parameter. ### Response: def _parse_href(self, href): """ Extract "real" URL from Google redirected url by getting `q` querystring parameter. """ params = parse_qs(urlsplit(href).query) return params.get('q')
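A standalone sketch of the same extraction using only the stdlib (`parse_google_href` is an illustrative name; note that `parse_qs` returns a list of values per key, so the helper yields a list or `None`, matching the entry above):

```python
from urllib.parse import parse_qs, urlsplit

def parse_google_href(href):
    """Return the list of `q` values from a Google redirect URL, or None."""
    params = parse_qs(urlsplit(href).query)
    return params.get('q')
```

A caller that wants a single string would take element `[0]` of the result after checking for `None`.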
def create_default_config(): """ Create a default configuration object, with all parameters filled """ config = configparser.RawConfigParser() config.add_section('global') config.set('global', 'env_source_rc', False) config.add_section('shell') config.set('shell', 'bash', "true") config.set('shell', 'zsh', "true") config.set('shell', 'gui', "true") return config
Create a default configuration object, with all parameters filled
Below is the instruction that describes the task: ### Input: Create a default configuration object, with all parameters filled ### Response: def create_default_config(): """ Create a default configuration object, with all parameters filled """ config = configparser.RawConfigParser() config.add_section('global') config.set('global', 'env_source_rc', False) config.add_section('shell') config.set('shell', 'bash', "true") config.set('shell', 'zsh', "true") config.set('shell', 'gui', "true") return config
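A runnable sketch of the same defaults (`build_default_config` is an illustrative name). One caveat: on Python 3, `RawConfigParser.set()` requires string values, so the boolean `False` passed in the entry above would raise `TypeError`; the sketch writes the string `'false'` instead and reads values back with `getboolean()`:

```python
import configparser

def build_default_config():
    """Build the same default sections/options with string values only."""
    config = configparser.RawConfigParser()
    config.add_section('global')
    config.set('global', 'env_source_rc', 'false')
    config.add_section('shell')
    for option in ('bash', 'zsh', 'gui'):
        config.set('shell', option, 'true')
    return config

cfg = build_default_config()
```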
def evaluate_generative_model(A, Atgt, D, eta, gamma=None, model_type='matching', model_var='powerlaw', epsilon=1e-6, seed=None): ''' Generates synthetic networks with parameters provided and evaluates their energy function. The energy function is defined as in Betzel et al. 2016. Basically it takes the Kolmogorov-Smirnov statistics of 4 network measures; comparing the degree distributions, clustering coefficients, betweenness centrality, and Euclidean distances between connected regions. The energy is globally low if the synthetic network matches the target. Energy is defined as the maximum difference across the four statistics. ''' m = np.size(np.where(Atgt.flat))//2 n = len(Atgt) xk = np.sum(Atgt, axis=1) xc = clustering_coef_bu(Atgt) xb = betweenness_bin(Atgt) xe = D[np.triu(Atgt, 1) > 0] B = generative_model(A, D, m, eta, gamma, model_type=model_type, model_var=model_var, epsilon=epsilon, copy=True, seed=seed) #if eta != gamma then an error is thrown within generative model nB = len(eta) if nB == 1: B = np.reshape(B, np.append(np.shape(B), 1)) K = np.zeros((nB, 4)) def kstats(x, y): bin_edges = np.concatenate([[-np.inf], np.sort(np.concatenate((x, y))), [np.inf]]) bin_x,_ = np.histogram(x, bin_edges) bin_y,_ = np.histogram(y, bin_edges) #print(np.shape(bin_x)) sum_x = np.cumsum(bin_x) / np.sum(bin_x) sum_y = np.cumsum(bin_y) / np.sum(bin_y) cdfsamp_x = sum_x[:-1] cdfsamp_y = sum_y[:-1] delta_cdf = np.abs(cdfsamp_x - cdfsamp_y) print(np.shape(delta_cdf)) #print(delta_cdf) print(np.argmax(delta_cdf), np.max(delta_cdf)) return np.max(delta_cdf) for ib in range(nB): Bc = B[:,:,ib] yk = np.sum(Bc, axis=1) yc = clustering_coef_bu(Bc) yb = betweenness_bin(Bc) ye = D[np.triu(Bc, 1) > 0] K[ib, 0] = kstats(xk, yk) K[ib, 1] = kstats(xc, yc) K[ib, 2] = kstats(xb, yb) K[ib, 3] = kstats(xe, ye) return np.max(K, axis=1)
Generates synthetic networks with parameters provided and evaluates their energy function. The energy function is defined as in Betzel et al. 2016. Basically it takes the Kolmogorov-Smirnov statistics of 4 network measures; comparing the degree distributions, clustering coefficients, betweenness centrality, and Euclidean distances between connected regions. The energy is globally low if the synthetic network matches the target. Energy is defined as the maximum difference across the four statistics.
Below is the instruction that describes the task: ### Input: Generates synthetic networks with parameters provided and evaluates their energy function. The energy function is defined as in Betzel et al. 2016. Basically it takes the Kolmogorov-Smirnov statistics of 4 network measures; comparing the degree distributions, clustering coefficients, betweenness centrality, and Euclidean distances between connected regions. The energy is globally low if the synthetic network matches the target. Energy is defined as the maximum difference across the four statistics. ### Response: def evaluate_generative_model(A, Atgt, D, eta, gamma=None, model_type='matching', model_var='powerlaw', epsilon=1e-6, seed=None): ''' Generates synthetic networks with parameters provided and evaluates their energy function. The energy function is defined as in Betzel et al. 2016. Basically it takes the Kolmogorov-Smirnov statistics of 4 network measures; comparing the degree distributions, clustering coefficients, betweenness centrality, and Euclidean distances between connected regions. The energy is globally low if the synthetic network matches the target. Energy is defined as the maximum difference across the four statistics. 
''' m = np.size(np.where(Atgt.flat))//2 n = len(Atgt) xk = np.sum(Atgt, axis=1) xc = clustering_coef_bu(Atgt) xb = betweenness_bin(Atgt) xe = D[np.triu(Atgt, 1) > 0] B = generative_model(A, D, m, eta, gamma, model_type=model_type, model_var=model_var, epsilon=epsilon, copy=True, seed=seed) #if eta != gamma then an error is thrown within generative model nB = len(eta) if nB == 1: B = np.reshape(B, np.append(np.shape(B), 1)) K = np.zeros((nB, 4)) def kstats(x, y): bin_edges = np.concatenate([[-np.inf], np.sort(np.concatenate((x, y))), [np.inf]]) bin_x,_ = np.histogram(x, bin_edges) bin_y,_ = np.histogram(y, bin_edges) #print(np.shape(bin_x)) sum_x = np.cumsum(bin_x) / np.sum(bin_x) sum_y = np.cumsum(bin_y) / np.sum(bin_y) cdfsamp_x = sum_x[:-1] cdfsamp_y = sum_y[:-1] delta_cdf = np.abs(cdfsamp_x - cdfsamp_y) print(np.shape(delta_cdf)) #print(delta_cdf) print(np.argmax(delta_cdf), np.max(delta_cdf)) return np.max(delta_cdf) for ib in range(nB): Bc = B[:,:,ib] yk = np.sum(Bc, axis=1) yc = clustering_coef_bu(Bc) yb = betweenness_bin(Bc) ye = D[np.triu(Bc, 1) > 0] K[ib, 0] = kstats(xk, yk) K[ib, 1] = kstats(xc, yc) K[ib, 2] = kstats(xb, yb) K[ib, 3] = kstats(xe, ye) return np.max(K, axis=1)
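The inner `kstats` helper in the entry above computes a two-sample Kolmogorov-Smirnov statistic: the maximum absolute difference between the empirical CDFs of the two samples, evaluated over the pooled values. A pure-Python equivalent (illustrative, without NumPy histograms) can be sketched as:

```python
def ks_statistic(x, y):
    """Two-sample KS statistic: max absolute difference between the
    empirical CDFs of x and y over the pooled sample values."""
    xs, ys = sorted(x), sorted(y)
    grid = sorted(set(xs) | set(ys))

    def ecdf(sample, t):
        # Fraction of the sample at or below t.
        return sum(1 for v in sample if v <= t) / len(sample)

    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)
```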
def convert_to_int(value, from_base): """ Convert value to an int. :param value: the value to convert :type value: sequence of int :param int from_base: base of value :returns: the conversion result :rtype: int :raises ConvertError: if from_base is less than 2 :raises ConvertError: if elements in value outside bounds Preconditions: * all integers in value must be at least 0 * all integers in value must be less than from_base * from_base must be at least 2 Complexity: O(len(value)) """ if from_base < 2: raise BasesValueError( from_base, "from_base", "must be at least 2" ) if any(x < 0 or x >= from_base for x in value): raise BasesValueError( value, "value", "elements must be at least 0 and less than %s" % from_base ) return reduce(lambda x, y: x * from_base + y, value, 0)
Convert value to an int. :param value: the value to convert :type value: sequence of int :param int from_base: base of value :returns: the conversion result :rtype: int :raises ConvertError: if from_base is less than 2 :raises ConvertError: if elements in value outside bounds Preconditions: * all integers in value must be at least 0 * all integers in value must be less than from_base * from_base must be at least 2 Complexity: O(len(value))
Below is the instruction that describes the task: ### Input: Convert value to an int. :param value: the value to convert :type value: sequence of int :param int from_base: base of value :returns: the conversion result :rtype: int :raises ConvertError: if from_base is less than 2 :raises ConvertError: if elements in value outside bounds Preconditions: * all integers in value must be at least 0 * all integers in value must be less than from_base * from_base must be at least 2 Complexity: O(len(value)) ### Response: def convert_to_int(value, from_base): """ Convert value to an int. :param value: the value to convert :type value: sequence of int :param int from_base: base of value :returns: the conversion result :rtype: int :raises ConvertError: if from_base is less than 2 :raises ConvertError: if elements in value outside bounds Preconditions: * all integers in value must be at least 0 * all integers in value must be less than from_base * from_base must be at least 2 Complexity: O(len(value)) """ if from_base < 2: raise BasesValueError( from_base, "from_base", "must be at least 2" ) if any(x < 0 or x >= from_base for x in value): raise BasesValueError( value, "value", "elements must be at least 0 and less than %s" % from_base ) return reduce(lambda x, y: x * from_base + y, value, 0)
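The `reduce` fold in the entry above is Horner's method for positional digits. A minimal standalone sketch (`digits_to_int` is an illustrative name, with plain `ValueError` in place of the library's `BasesValueError`):

```python
from functools import reduce

def digits_to_int(digits, base):
    """Fold a sequence of base-`base` digits (most significant first)
    into an int via Horner's method."""
    if base < 2:
        raise ValueError("base must be at least 2")
    if any(d < 0 or d >= base for d in digits):
        raise ValueError("digits out of range for base %s" % base)
    return reduce(lambda acc, d: acc * base + d, digits, 0)
```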
def index_row_dict_from_csv(path, index_col=None, iterator=False, chunksize=None, skiprows=None, nrows=None, use_ordered_dict=True, **kwargs): """Read the csv into a dictionary. The key is its index, the value is the dictionary form of the row. :param path: csv file path. :param index_col: None or str, the column that is used as the index. :param iterator: :param chunksize: :param skiprows: :param nrows: :param use_ordered_dict: :returns: {index1: row1, index2: row2, ...} **Chinese documentation** Read the csv, choose a column whose values are completely unique to use as the index, and build a dictionary data structure so that each row can be accessed directly by its index. """ _kwargs = dict(list(kwargs.items())) _kwargs["iterator"] = None _kwargs["chunksize"] = None _kwargs["skiprows"] = 0 _kwargs["nrows"] = 1 df = pd.read_csv(path, index_col=index_col, **_kwargs) columns = df.columns if index_col is None: raise Exception("please give index_col!") if use_ordered_dict: table = OrderedDict() else: table = dict() kwargs["iterator"] = iterator kwargs["chunksize"] = chunksize kwargs["skiprows"] = skiprows kwargs["nrows"] = nrows if iterator is True: for df in pd.read_csv(path, index_col=index_col, **kwargs): for ind, tp in zip(df.index, itertuple(df)): table[ind] = dict(zip(columns, tp)) else: df = pd.read_csv(path, index_col=index_col, **kwargs) for ind, tp in zip(df.index, itertuple(df)): table[ind] = dict(zip(columns, tp)) return table
Read the csv into a dictionary. The key is its index, the value is the dictionary form of the row. :param path: csv file path. :param index_col: None or str, the column that is used as the index. :param iterator: :param chunksize: :param skiprows: :param nrows: :param use_ordered_dict: :returns: {index1: row1, index2: row2, ...} **Chinese documentation** Read the csv, choose a column whose values are completely unique to use as the index, and build a dictionary data structure so that each row can be accessed directly by its index.
Below is the instruction that describes the task: ### Input: Read the csv into a dictionary. The key is its index, the value is the dictionary form of the row. :param path: csv file path. :param index_col: None or str, the column that is used as the index. :param iterator: :param chunksize: :param skiprows: :param nrows: :param use_ordered_dict: :returns: {index1: row1, index2: row2, ...} **Chinese documentation** Read the csv, choose a column whose values are completely unique to use as the index, and build a dictionary data structure so that each row can be accessed directly by its index. ### Response: def index_row_dict_from_csv(path, index_col=None, iterator=False, chunksize=None, skiprows=None, nrows=None, use_ordered_dict=True, **kwargs): """Read the csv into a dictionary. The key is its index, the value is the dictionary form of the row. :param path: csv file path. :param index_col: None or str, the column that is used as the index. :param iterator: :param chunksize: :param skiprows: :param nrows: :param use_ordered_dict: :returns: {index1: row1, index2: row2, ...} **Chinese documentation** Read the csv, choose a column whose values are completely unique to use as the index, and build a dictionary data structure so that each row can be accessed directly by its index. """ _kwargs = dict(list(kwargs.items())) _kwargs["iterator"] = None _kwargs["chunksize"] = None _kwargs["skiprows"] = 0 _kwargs["nrows"] = 1 df = pd.read_csv(path, index_col=index_col, **_kwargs) columns = df.columns if index_col is None: raise Exception("please give index_col!") if use_ordered_dict: table = OrderedDict() else: table = dict() kwargs["iterator"] = iterator kwargs["chunksize"] = chunksize kwargs["skiprows"] = skiprows kwargs["nrows"] = nrows if iterator is True: for df in pd.read_csv(path, index_col=index_col, **kwargs): for ind, tp in zip(df.index, itertuple(df)): table[ind] = dict(zip(columns, tp)) else: df = pd.read_csv(path, index_col=index_col, **kwargs) for ind, tp in zip(df.index, itertuple(df)): table[ind] = dict(zip(columns, tp)) return table
def basic_event_table(event): """Formats a basic event table""" table = formatting.Table(["Id", "Status", "Type", "Start", "End"], title=utils.clean_splitlines(event.get('subject'))) table.add_row([ event.get('id'), utils.lookup(event, 'statusCode', 'name'), utils.lookup(event, 'notificationOccurrenceEventType', 'keyName'), utils.clean_time(event.get('startDate')), utils.clean_time(event.get('endDate')) ]) return table
Formats a basic event table
Below is the instruction that describes the task: ### Input: Formats a basic event table ### Response: def basic_event_table(event): """Formats a basic event table""" table = formatting.Table(["Id", "Status", "Type", "Start", "End"], title=utils.clean_splitlines(event.get('subject'))) table.add_row([ event.get('id'), utils.lookup(event, 'statusCode', 'name'), utils.lookup(event, 'notificationOccurrenceEventType', 'keyName'), utils.clean_time(event.get('startDate')), utils.clean_time(event.get('endDate')) ]) return table
def update_timestampable_model(sender, instance, *args, **kwargs): ''' Using signals guarantees that timestamps are set no matter what: loading fixtures, bulk inserts, bulk updates, etc. Indeed, the `save()` method is *not* called when using fixtures. ''' if not isinstance(instance, TimestampableModel): return if not instance.pk: instance.created_at = now() instance.updated_at = now()
Using signals guarantees that timestamps are set no matter what: loading fixtures, bulk inserts, bulk updates, etc. Indeed, the `save()` method is *not* called when using fixtures.
Below is the instruction that describes the task: ### Input: Using signals guarantees that timestamps are set no matter what: loading fixtures, bulk inserts, bulk updates, etc. Indeed, the `save()` method is *not* called when using fixtures. ### Response: def update_timestampable_model(sender, instance, *args, **kwargs): ''' Using signals guarantees that timestamps are set no matter what: loading fixtures, bulk inserts, bulk updates, etc. Indeed, the `save()` method is *not* called when using fixtures. ''' if not isinstance(instance, TimestampableModel): return if not instance.pk: instance.created_at = now() instance.updated_at = now()
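A minimal sketch of the handler's behaviour outside Django (`Timestamped` and `pre_save` are illustrative stand-ins for the model and the signal dispatch). Like the entry above, it stamps only instances that have no primary key yet, i.e. unsaved ones:

```python
from datetime import datetime, timezone

def _now():
    return datetime.now(timezone.utc)

class Timestamped:
    """Stand-in for TimestampableModel: just the fields the handler touches."""
    def __init__(self, pk=None):
        self.pk = pk
        self.created_at = None
        self.updated_at = None

def pre_save(instance):
    # Mirrors the signal handler: stamp only not-yet-saved instances.
    if not instance.pk:
        instance.created_at = _now()
        instance.updated_at = _now()

fresh, saved = Timestamped(), Timestamped(pk=1)
pre_save(fresh)
pre_save(saved)
```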
def is_ignored(self, options): """ If we have changed children and all the children which are changes are ignored, then we are ignored. Otherwise, we are not ignored """ if not self.is_change(): return False changes = self.collect() if not changes: return False for change in changes: if change.is_change() and not change.is_ignored(options): return False return True
If we have changed children and all the children which are changes are ignored, then we are ignored. Otherwise, we are not ignored
Below is the instruction that describes the task: ### Input: If we have changed children and all the children which are changes are ignored, then we are ignored. Otherwise, we are not ignored ### Response: def is_ignored(self, options): """ If we have changed children and all the children which are changes are ignored, then we are ignored. Otherwise, we are not ignored """ if not self.is_change(): return False changes = self.collect() if not changes: return False for change in changes: if change.is_change() and not change.is_ignored(options): return False return True
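The rule above (a node is ignored only if every changed child is ignored) can be sketched with illustrative `Leaf` and `Group` stand-ins; the `Group.is_ignored` body copies the entry's logic verbatim:

```python
class Leaf:
    """A terminal change with fixed changed/ignored flags."""
    def __init__(self, changed, ignored):
        self.changed, self.ignored = changed, ignored

    def is_change(self):
        return self.changed

    def is_ignored(self, options=None):
        return self.ignored

class Group:
    """Aggregate node applying the same rule as is_ignored above."""
    def __init__(self, children):
        self.children = list(children)

    def is_change(self):
        return any(c.is_change() for c in self.children)

    def collect(self):
        return self.children

    def is_ignored(self, options=None):
        if not self.is_change():
            return False
        changes = self.collect()
        if not changes:
            return False
        for change in changes:
            if change.is_change() and not change.is_ignored(options):
                return False
        return True

both_ignored = Group([Leaf(True, True), Leaf(True, True), Leaf(False, False)])
one_live = Group([Leaf(True, True), Leaf(True, False)])
```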
def set_categories(self, new_categories, ordered=None, rename=False, inplace=False): """ Set the categories to the specified new_categories. `new_categories` can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If `rename==True`, the categories will simply be renamed (less or more items than in old categories will result in values set to NaN or in unused categories respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which does not consider an S1 string equal to a single char python string. Parameters ---------- new_categories : Index-like The categories in new order. ordered : bool, default False Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. rename : bool, default False Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories. inplace : bool, default False Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Returns ------- Categorical with reordered categories or None if inplace. 
Raises ------ ValueError If new_categories does not validate as categories See Also -------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories """ inplace = validate_bool_kwarg(inplace, 'inplace') if ordered is None: ordered = self.dtype.ordered new_dtype = CategoricalDtype(new_categories, ordered=ordered) cat = self if inplace else self.copy() if rename: if (cat.dtype.categories is not None and len(new_dtype.categories) < len(cat.dtype.categories)): # remove all _codes which are larger and set to -1/NaN cat._codes[cat._codes >= len(new_dtype.categories)] = -1 else: codes = _recode_for_categories(cat.codes, cat.categories, new_dtype.categories) cat._codes = codes cat._dtype = new_dtype if not inplace: return cat
Set the categories to the specified new_categories. `new_categories` can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If `rename==True`, the categories will simply be renamed (less or more items than in old categories will result in values set to NaN or in unused categories respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which does not consider an S1 string equal to a single char python string. Parameters ---------- new_categories : Index-like The categories in new order. ordered : bool, default False Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. rename : bool, default False Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories. inplace : bool, default False Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Returns ------- Categorical with reordered categories or None if inplace. Raises ------ ValueError If new_categories does not validate as categories See Also -------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories
Below is the instruction that describes the task: ### Input: Set the categories to the specified new_categories. `new_categories` can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If `rename==True`, the categories will simply be renamed (less or more items than in old categories will result in values set to NaN or in unused categories respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which does not consider an S1 string equal to a single char python string. Parameters ---------- new_categories : Index-like The categories in new order. ordered : bool, default False Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. rename : bool, default False Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories. inplace : bool, default False Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Returns ------- Categorical with reordered categories or None if inplace. Raises ------ ValueError If new_categories does not validate as categories See Also -------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories ### Response: def set_categories(self, new_categories, ordered=None, rename=False, inplace=False): """ Set the categories to the specified new_categories. 
`new_categories` can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If `rename==True`, the categories will simply be renamed (less or more items than in old categories will result in values set to NaN or in unused categories respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which does not consider an S1 string equal to a single char python string. Parameters ---------- new_categories : Index-like The categories in new order. ordered : bool, default False Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. rename : bool, default False Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories. inplace : bool, default False Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Returns ------- Categorical with reordered categories or None if inplace. 
Raises ------ ValueError If new_categories does not validate as categories See Also -------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories """ inplace = validate_bool_kwarg(inplace, 'inplace') if ordered is None: ordered = self.dtype.ordered new_dtype = CategoricalDtype(new_categories, ordered=ordered) cat = self if inplace else self.copy() if rename: if (cat.dtype.categories is not None and len(new_dtype.categories) < len(cat.dtype.categories)): # remove all _codes which are larger and set to -1/NaN cat._codes[cat._codes >= len(new_dtype.categories)] = -1 else: codes = _recode_for_categories(cat.codes, cat.categories, new_dtype.categories) cat._codes = codes cat._dtype = new_dtype if not inplace: return cat
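The non-rename path above goes through `_recode_for_categories`, which maps each old code onto the position of its category in the new category list. A pure-Python sketch of that remapping (`recode_for_categories` is illustrative; pandas' real helper is vectorised over NumPy arrays):

```python
def recode_for_categories(codes, old_categories, new_categories):
    """Map each old code to its category's position in new_categories;
    categories that disappeared (and the NaN code -1) map to -1."""
    new_pos = {cat: i for i, cat in enumerate(new_categories)}
    return [-1 if code == -1 else new_pos.get(old_categories[code], -1)
            for code in codes]

# 'a','b','c' reordered to 'c','b','a': code 0 ('a') becomes 2, etc.
codes = recode_for_categories([0, 1, 2, -1], ['a', 'b', 'c'], ['c', 'b', 'a'])
```

Dropping a category sets its codes to -1, which is why removed categories surface as NaN values in the Categorical.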